Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers

Overview

This repository contains the implementation of "Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers" (arXiv 2021) by the paper's authors.



Results

Results on COCO val

Backbone Method Lr Schd PQ Config Download
R-50 Panoptic-SegFormer 1x 48.0 config model
R-50 Panoptic-SegFormer 2x 49.6 config model
R-101 Panoptic-SegFormer 2x 50.6 config model
PVTv2-B5 (much lighter) Panoptic-SegFormer 2x 55.6 config model
Swin-L (window size 7) Panoptic-SegFormer 2x 55.8 config model

Install

Prerequisites

  • Linux
  • Python 3.6+
  • PyTorch 1.5+
  • torchvision
  • CUDA 9.2+ (If you build PyTorch from source, CUDA 9.0 is also compatible)
  • GCC 5+
  • mmcv-full==1.3.4
  • mmdet==2.12.0 # higher versions may not work
  • timm==0.4.5
  • einops==0.3.0
  • Pillow==8.0.1
  • opencv-python==4.5.2
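
For reference, the pinned Python packages above can be installed with pip in one go; mmcv-full usually requires a prebuilt wheel matching your local PyTorch/CUDA build, so consult the mmcv installation guide if a plain pip install fails:

pip install mmcv-full==1.3.4 mmdet==2.12.0 timm==0.4.5 einops==0.3.0 Pillow==8.0.1 opencv-python==4.5.2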

Note: PyTorch 1.8 has a bug in its adamw.py that is fixed in PyTorch 1.9; you can easily patch it by applying the difference between the two versions.

Install Panoptic SegFormer

python setup.py install 
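
Optionally, you can run a quick sanity check to confirm the pinned versions and that the easymd package installed above is importable (a sketch; adjust as needed):

# Optional environment sanity check (sketch).
import torch, mmcv, mmdet, timm, einops
import easymd  # installed by `python setup.py install` above

print('torch :', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('mmcv  :', mmcv.__version__)    # expected 1.3.4
print('mmdet :', mmdet.__version__)   # expected 2.12.0
print('timm  :', timm.__version__)    # expected 0.4.5
print('einops:', einops.__version__)  # expected 0.3.0
assert mmdet.__version__.startswith('2.12'), 'higher mmdet versions may not work'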

Datasets

When this project began, mmdet did not officially support panoptic segmentation, so the dataset is converted from panoptic segmentation format to instance segmentation format for convenience.

1. Prepare data (COCO)

cd Panoptic-SegFormer
mkdir datasets
cd datasets
ln -s path_to_coco coco
mkdir annotations/
cd annotations
wget http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip
unzip panoptic_annotations_trainval2017.zip

Then the directory structure should be the following:

Panoptic-SegFormer
├── datasets
│   ├── annotations/
│   │   ├── panoptic_train2017/
│   │   ├── panoptic_train2017.json
│   │   ├── panoptic_val2017/
│   │   └── panoptic_val2017.json
│   └── coco/ 
│
├── config
├── checkpoints
├── easymd
...

2. Convert panoptic format to detection format

cd Panoptic-SegFormer
./tools/convert_panoptic_coco.sh coco
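
The shell script above generates the *_detection_format.json files used for training and evaluation. As a rough illustration of what such a conversion does (this is only a sketch of the idea, not the repository's script; it assumes panopticapi and pycocotools are installed and glosses over details such as category-id remapping), one could write:

# Sketch of panoptic -> detection-format conversion (illustrative only).
import json
import os

import numpy as np
from PIL import Image
from panopticapi.utils import rgb2id        # decodes panoptic PNG colors into segment ids
from pycocotools import mask as mask_utils  # RLE encoding for instance masks

ann_dir = './datasets/annotations'
pan_json = os.path.join(ann_dir, 'panoptic_val2017.json')
png_dir = os.path.join(ann_dir, 'panoptic_val2017')
out_json = os.path.join(ann_dir, 'panoptic_val2017_detection_format.json')

with open(pan_json) as f:
    pan = json.load(f)

det_anns = []
next_id = 1
for ann in pan['annotations']:
    # Each panoptic annotation references one PNG whose pixel colors encode segment ids.
    color_map = np.array(Image.open(os.path.join(png_dir, ann['file_name'])))
    seg_ids = rgb2id(color_map)
    for seg in ann['segments_info']:
        binary = np.asfortranarray((seg_ids == seg['id']).astype(np.uint8))
        rle = mask_utils.encode(binary)
        rle['counts'] = rle['counts'].decode('utf-8')  # make the RLE JSON-serializable
        det_anns.append({
            'id': next_id,
            'image_id': ann['image_id'],
            'category_id': seg['category_id'],
            'segmentation': rle,
            'bbox': seg['bbox'],
            'area': seg['area'],
            'iscrowd': seg['iscrowd'],
        })
        next_id += 1

with open(out_json, 'w') as f:
    json.dump({'images': pan['images'],
               'categories': pan['categories'],
               'annotations': det_anns}, f)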

Then the directory structure should be the following:

Panoptic-SegFormer
├── datasets
│   ├── annotations/
│   │   ├── panoptic_train2017/
│   │   ├── panoptic_train2017_detection_format.json
│   │   ├── panoptic_train2017.json
│   │   ├── panoptic_val2017/
│   │   ├── panoptic_val2017_detection_format.json
│   │   └── panoptic_val2017.json
│   └── coco/ 
│
├── config
├── checkpoints
├── easymd
...
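
Before training, it can help to verify that the converted annotations are actually in place; a small sketch:

# Quick check (sketch) that the expected annotation files exist.
from pathlib import Path

ann_dir = Path('./datasets/annotations')
for name in ('panoptic_train2017.json',
             'panoptic_train2017_detection_format.json',
             'panoptic_val2017.json',
             'panoptic_val2017_detection_format.json'):
    print(name, '->', 'found' if (ann_dir / name).exists() else 'MISSING')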

Run (panoptic segmentation)

Train

Single machine with 8 GPUs:

./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 8
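
For a single GPU (for example, while debugging), the same launcher can be invoked with a GPU count of 1, as in the issue logs further below; note that some single-GPU code paths are reported as fragile in the Comments section:

./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 1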

Test

./tools/dist_test.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py path/to/model.pth 8
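
For quick single-image inference without preparing COCO (a recurring question in the Comments below), a minimal sketch using mmdet's high-level API is given here; the checkpoint and image paths are placeholders, and the exact structure of the returned result depends on the config:

# Minimal single-image inference sketch; the paths below are placeholders.
from mmdet.apis import init_detector, inference_detector
import easymd  # imported so this repository's custom modules are registered with mmdet

config = './configs/panformer/panformer_r50_24e_coco_panoptic.py'
checkpoint = 'path/to/model.pth'
image = 'path/to/image.jpg'

model = init_detector(config, checkpoint=checkpoint, device='cuda:0')
result = inference_detector(model, image)

# Inspect the returned structure interactively; the visualization issue below
# suggests it contains a 'segm' entry with per-class masks.
print(type(result))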

Citing

If you use Panoptic SegFormer in your research, please use the following BibTeX entry.

@article{li2021panoptic,
  title={Panoptic SegFormer},
  author={Li, Zhiqi and Wang, Wenhai and Xie, Enze and Yu, Zhiding and Anandkumar, Anima and Alvarez, Jose M and Lu, Tong and Luo, Ping},
  journal={arXiv},
  year={2021}
}

Acknowledgement

Mainly based on Deformable DETR from MMDetection.

Many thanks to other open-source works: timm, Panoptic FCN, MaskFormer, QueryInst.

Comments
  • How to demo a single picture result?

    Dear friend, thank you for your good work. We do not want to download the COCO datasets; we just want to give one picture, segment it, and show its result. How can we do that? Best regards,

    opened by delldu 3
  • What's pvt_v2_ap in the code?

    I found that many names are hard to understand. For example, what does pvt_v2_ap stand for? And what does single_stage_w_mask stand for?

    [image]

    And what are the differences between those files?

    opened by jinfagang 2
  • How to visualize a demo image?

    Dear friend, how can I visualize the segmentation result on custom images? I ran inference.py and did not get a good result, like this: [image]

    I think there are some faults in my code.

    Here is my code:

    from mmcv.runner import checkpoint
    from mmdet.apis.inference import init_detector,LoadImage, inference_detector
    import easymd
    import cv2
    import random
    import colorsys
    import numpy as np
    
    def random_colors(N, bright=True):
        brightness = 1.0 if bright else 0.7
        hsv = [(i / float(N), 1, brightness) for i in range(N)]
        colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv))
        random.shuffle(colors)
        return colors
    
    def apply_mask(image, mask, color, alpha=0.5):
        for c in range(3):
            image[:, :, c] = np.where(mask == 0,
                                      image[:, :, c],
                                      image[:, :, c] *
                                      (1 - alpha) + alpha * color[c] * 255)
        return image
    
    config = './configs/panformer/panformer_pvtb5_24e_coco_panoptic.py'
    #checkpoints = './checkpoints/pseg_r101_r50_latest.pth'
    checkpoints = "./checkpoints/panoptic_segformer_pvtv2b5_2x.pth"
    img_path = "img_path "
    mask_save_path = "save_path"
    
    colors = random_colors(80)
    
    model = init_detector(config,checkpoint=checkpoints)
    
    results = inference_detector(model, [img_path])
    
    img = cv2.imread(img_path)
    
    seg = results['segm'][0]
    N = len(seg)
    
    masked_image = img.copy()
    for i in range(N):
        color = colors[i]
        masks = np.sum(seg[i], axis=0)
        masked_image = apply_mask(masked_image, masks, color)
        # for mask in seg[i]:
        #     masked_image = apply_mask(masked_image, mask, color)
    
    # cv2.imshow("a", masked_image)
    
    opened by garriton 0
  • Location Decoder loss

    https://github.com/zhiqi-li/Panoptic-SegFormer/blob/e604ef810eaf5101106d221db4b6970c2daca5c9/easymd/models/panformer/panformer_head.py#L360-L364

    Why does the location decoder only compute the losses of the first L-1 layers and not all L layers?

    opened by hust-nj 0
  • Instruction for single GPU run

    Hi, thanks for sharing your work. I was trying to run it on a single GPU. Would you please add some instructions or scripts for running on a single GPU? That would be a great help.

    Kind regards, Abdullah

    opened by nazib 1
  • Impossible to debug, single_gpu code paths are broken

    It seems that multi-GPU training and evaluation work great; however, when debugging you opt for a single GPU.
    In that case the code breaks in several places during evaluation of the validation set.
    Any chance for a hotfix? :)

    To reproduce, try to run the code from PyCharm in debug mode while there's only one GPU available.

    opened by aviadmx 1
  • Why are instance annotations required alongside the panoptic ones?

    The model solves the panoptic segmentation task, so why does the validation dataset use the instance segmentation annotations?

    data = dict(
        samples_per_gpu=2,
        workers_per_gpu=2,
        train=dict(
            type=dataset_type,
            ann_file= './datasets/annotations/panoptic_train2017_detection_format.json',
            img_prefix=data_root + 'train2017/',
            pipeline=train_pipeline),
        val=dict( 
          
            segmentations_folder='./seg',
            gt_json = './datasets/annotations/panoptic_val2017.json',
            gt_folder = './datasets/annotations/panoptic_val2017',
            type=dataset_type,
            ann_file=data_root + 'annotations/instances_val2017.json', # Why?
            img_prefix=data_root + 'val2017/',
            pipeline=test_pipeline),
        test=dict(
            segmentations_folder='./seg',
            gt_json = './datasets/annotations/panoptic_val2017.json',
            gt_folder = './datasets/annotations/panoptic_val2017',
            type=dataset_type,
            #ann_file= './datasets/coco/annotations/image_info_test-dev2017.json',
            ann_file=data_root + 'annotations/instances_val2017.json', # Why?
            #img_prefix=data_root + '/test2017/',
            img_prefix=data_root + 'val2017/',
            pipeline=test_pipeline)
            )
    

    We eventually use the instances_val2017.json file instead of panoptic_val2017.json

    opened by aviadmx 3
  • Loading checkpoint

    When loading the Swin-L checkpoint by adding a load_from line to the config configs/panformer/panformer_swinl_24e_coco_panoptic.py as follows:

    load_from='./pretrained/panoptic_segformer_swinl_2x.pth'
    

    The loading fails with an error about a key mismatch:

    unexpected key in source state_dict: bbox_head.cls_branches2.0.weight, bbox_head.cls_branches2.0.bias, bbox_head.cls_branches2.1.weight, bbox_head.cls_branches2.1.bias, bbox_head.cls_branches2.2.weight, bbox_head.cls_branches2.2.bias, bbox_head.cls_branches2.3.weight, bbox_head.cls_branches2.3.bias, bbox_head.mask_head.blocks.0.head_norm1.weight, bbox_head.mask_head.blocks.0.head_norm1.bias, bbox_head.mask_head.blocks.0.attn.q.weight, bbox_head.mask_head.blocks.0.attn.q.bias, bbox_head.mask_head.blocks.0.attn.k.weight, bbox_head.mask_head.blocks.0.attn.k.bias, bbox_head.mask_head.blocks.0.attn.v.weight, bbox_head.mask_head.blocks.0.attn.v.bias, bbox_head.mask_head.blocks.0.attn.proj.weight, bbox_head.mask_head.blocks.0.attn.proj.bias, bbox_head.mask_head.blocks.0.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.0.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.0.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.0.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.0.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.0.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.0.attn.linear.0.weight, bbox_head.mask_head.blocks.0.attn.linear.0.bias, bbox_head.mask_head.blocks.0.head_norm2.weight, bbox_head.mask_head.blocks.0.head_norm2.bias, bbox_head.mask_head.blocks.0.mlp.fc1.weight, bbox_head.mask_head.blocks.0.mlp.fc1.bias, bbox_head.mask_head.blocks.0.mlp.fc2.weight, bbox_head.mask_head.blocks.0.mlp.fc2.bias, bbox_head.mask_head.blocks.1.head_norm1.weight, bbox_head.mask_head.blocks.1.head_norm1.bias, bbox_head.mask_head.blocks.1.attn.q.weight, bbox_head.mask_head.blocks.1.attn.q.bias, bbox_head.mask_head.blocks.1.attn.k.weight, bbox_head.mask_head.blocks.1.attn.k.bias, bbox_head.mask_head.blocks.1.attn.v.weight, bbox_head.mask_head.blocks.1.attn.v.bias, bbox_head.mask_head.blocks.1.attn.proj.weight, bbox_head.mask_head.blocks.1.attn.proj.bias, bbox_head.mask_head.blocks.1.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.1.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.1.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.1.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.1.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.1.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.1.attn.linear.0.weight, bbox_head.mask_head.blocks.1.attn.linear.0.bias, bbox_head.mask_head.blocks.1.head_norm2.weight, bbox_head.mask_head.blocks.1.head_norm2.bias, bbox_head.mask_head.blocks.1.mlp.fc1.weight, bbox_head.mask_head.blocks.1.mlp.fc1.bias, bbox_head.mask_head.blocks.1.mlp.fc2.weight, bbox_head.mask_head.blocks.1.mlp.fc2.bias, bbox_head.mask_head.blocks.2.head_norm1.weight, bbox_head.mask_head.blocks.2.head_norm1.bias, bbox_head.mask_head.blocks.2.attn.q.weight, bbox_head.mask_head.blocks.2.attn.q.bias, bbox_head.mask_head.blocks.2.attn.k.weight, bbox_head.mask_head.blocks.2.attn.k.bias, bbox_head.mask_head.blocks.2.attn.v.weight, bbox_head.mask_head.blocks.2.attn.v.bias, bbox_head.mask_head.blocks.2.attn.proj.weight, bbox_head.mask_head.blocks.2.attn.proj.bias, bbox_head.mask_head.blocks.2.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.2.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.2.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.2.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.2.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.2.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.2.attn.linear.0.weight, bbox_head.mask_head.blocks.2.attn.linear.0.bias, bbox_head.mask_head.blocks.2.head_norm2.weight, bbox_head.mask_head.blocks.2.head_norm2.bias, 
bbox_head.mask_head.blocks.2.mlp.fc1.weight, bbox_head.mask_head.blocks.2.mlp.fc1.bias, bbox_head.mask_head.blocks.2.mlp.fc2.weight, bbox_head.mask_head.blocks.2.mlp.fc2.bias, bbox_head.mask_head.blocks.3.head_norm1.weight, bbox_head.mask_head.blocks.3.head_norm1.bias, bbox_head.mask_head.blocks.3.attn.q.weight, bbox_head.mask_head.blocks.3.attn.q.bias, bbox_head.mask_head.blocks.3.attn.k.weight, bbox_head.mask_head.blocks.3.attn.k.bias, bbox_head.mask_head.blocks.3.attn.v.weight, bbox_head.mask_head.blocks.3.attn.v.bias, bbox_head.mask_head.blocks.3.attn.proj.weight, bbox_head.mask_head.blocks.3.attn.proj.bias, bbox_head.mask_head.blocks.3.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.3.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.3.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.3.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.3.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.3.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.3.attn.linear.0.weight, bbox_head.mask_head.blocks.3.attn.linear.0.bias, bbox_head.mask_head.blocks.3.head_norm2.weight, bbox_head.mask_head.blocks.3.head_norm2.bias, bbox_head.mask_head.blocks.3.mlp.fc1.weight, bbox_head.mask_head.blocks.3.mlp.fc1.bias, bbox_head.mask_head.blocks.3.mlp.fc2.weight, bbox_head.mask_head.blocks.3.mlp.fc2.bias, bbox_head.mask_head.attnen.q.weight, bbox_head.mask_head.attnen.q.bias, bbox_head.mask_head.attnen.k.weight, bbox_head.mask_head.attnen.k.bias, bbox_head.mask_head.attnen.linear_l1.0.weight, bbox_head.mask_head.attnen.linear_l1.0.bias, bbox_head.mask_head.attnen.linear_l2.0.weight, bbox_head.mask_head.attnen.linear_l2.0.bias, bbox_head.mask_head.attnen.linear_l3.0.weight, bbox_head.mask_head.attnen.linear_l3.0.bias, bbox_head.mask_head.attnen.linear.0.weight, bbox_head.mask_head.attnen.linear.0.bias, bbox_head.mask_head2.blocks.0.head_norm1.weight, bbox_head.mask_head2.blocks.0.head_norm1.bias, bbox_head.mask_head2.blocks.0.attn.q.weight, bbox_head.mask_head2.blocks.0.attn.q.bias, bbox_head.mask_head2.blocks.0.attn.k.weight, bbox_head.mask_head2.blocks.0.attn.k.bias, bbox_head.mask_head2.blocks.0.attn.v.weight, bbox_head.mask_head2.blocks.0.attn.v.bias, bbox_head.mask_head2.blocks.0.attn.proj.weight, bbox_head.mask_head2.blocks.0.attn.proj.bias, bbox_head.mask_head2.blocks.0.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.0.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.0.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.0.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.0.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.0.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.0.attn.linear.0.weight, bbox_head.mask_head2.blocks.0.attn.linear.0.bias, bbox_head.mask_head2.blocks.0.head_norm2.weight, bbox_head.mask_head2.blocks.0.head_norm2.bias, bbox_head.mask_head2.blocks.0.mlp.fc1.weight, bbox_head.mask_head2.blocks.0.mlp.fc1.bias, bbox_head.mask_head2.blocks.0.mlp.fc2.weight, bbox_head.mask_head2.blocks.0.mlp.fc2.bias, bbox_head.mask_head2.blocks.0.self_attention.qkv.weight, bbox_head.mask_head2.blocks.0.self_attention.qkv.bias, bbox_head.mask_head2.blocks.0.self_attention.proj.weight, bbox_head.mask_head2.blocks.0.self_attention.proj.bias, bbox_head.mask_head2.blocks.0.norm3.weight, bbox_head.mask_head2.blocks.0.norm3.bias, bbox_head.mask_head2.blocks.1.head_norm1.weight, bbox_head.mask_head2.blocks.1.head_norm1.bias, bbox_head.mask_head2.blocks.1.attn.q.weight, bbox_head.mask_head2.blocks.1.attn.q.bias, bbox_head.mask_head2.blocks.1.attn.k.weight, bbox_head.mask_head2.blocks.1.attn.k.bias, 
bbox_head.mask_head2.blocks.1.attn.v.weight, bbox_head.mask_head2.blocks.1.attn.v.bias, bbox_head.mask_head2.blocks.1.attn.proj.weight, bbox_head.mask_head2.blocks.1.attn.proj.bias, bbox_head.mask_head2.blocks.1.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.1.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.1.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.1.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.1.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.1.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.1.attn.linear.0.weight, bbox_head.mask_head2.blocks.1.attn.linear.0.bias, bbox_head.mask_head2.blocks.1.head_norm2.weight, bbox_head.mask_head2.blocks.1.head_norm2.bias, bbox_head.mask_head2.blocks.1.mlp.fc1.weight, bbox_head.mask_head2.blocks.1.mlp.fc1.bias, bbox_head.mask_head2.blocks.1.mlp.fc2.weight, bbox_head.mask_head2.blocks.1.mlp.fc2.bias, bbox_head.mask_head2.blocks.1.self_attention.qkv.weight, bbox_head.mask_head2.blocks.1.self_attention.qkv.bias, bbox_head.mask_head2.blocks.1.self_attention.proj.weight, bbox_head.mask_head2.blocks.1.self_attention.proj.bias, bbox_head.mask_head2.blocks.1.norm3.weight, bbox_head.mask_head2.blocks.1.norm3.bias, bbox_head.mask_head2.blocks.2.head_norm1.weight, bbox_head.mask_head2.blocks.2.head_norm1.bias, bbox_head.mask_head2.blocks.2.attn.q.weight, bbox_head.mask_head2.blocks.2.attn.q.bias, bbox_head.mask_head2.blocks.2.attn.k.weight, bbox_head.mask_head2.blocks.2.attn.k.bias, bbox_head.mask_head2.blocks.2.attn.v.weight, bbox_head.mask_head2.blocks.2.attn.v.bias, bbox_head.mask_head2.blocks.2.attn.proj.weight, bbox_head.mask_head2.blocks.2.attn.proj.bias, bbox_head.mask_head2.blocks.2.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.2.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.2.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.2.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.2.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.2.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.2.attn.linear.0.weight, bbox_head.mask_head2.blocks.2.attn.linear.0.bias, bbox_head.mask_head2.blocks.2.head_norm2.weight, bbox_head.mask_head2.blocks.2.head_norm2.bias, bbox_head.mask_head2.blocks.2.mlp.fc1.weight, bbox_head.mask_head2.blocks.2.mlp.fc1.bias, bbox_head.mask_head2.blocks.2.mlp.fc2.weight, bbox_head.mask_head2.blocks.2.mlp.fc2.bias, bbox_head.mask_head2.blocks.2.self_attention.qkv.weight, bbox_head.mask_head2.blocks.2.self_attention.qkv.bias, bbox_head.mask_head2.blocks.2.self_attention.proj.weight, bbox_head.mask_head2.blocks.2.self_attention.proj.bias, bbox_head.mask_head2.blocks.2.norm3.weight, bbox_head.mask_head2.blocks.2.norm3.bias, bbox_head.mask_head2.blocks.3.head_norm1.weight, bbox_head.mask_head2.blocks.3.head_norm1.bias, bbox_head.mask_head2.blocks.3.attn.q.weight, bbox_head.mask_head2.blocks.3.attn.q.bias, bbox_head.mask_head2.blocks.3.attn.k.weight, bbox_head.mask_head2.blocks.3.attn.k.bias, bbox_head.mask_head2.blocks.3.attn.v.weight, bbox_head.mask_head2.blocks.3.attn.v.bias, bbox_head.mask_head2.blocks.3.attn.proj.weight, bbox_head.mask_head2.blocks.3.attn.proj.bias, bbox_head.mask_head2.blocks.3.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.3.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.3.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.3.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.3.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.3.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.3.attn.linear.0.weight, bbox_head.mask_head2.blocks.3.attn.linear.0.bias, 
bbox_head.mask_head2.blocks.3.head_norm2.weight, bbox_head.mask_head2.blocks.3.head_norm2.bias, bbox_head.mask_head2.blocks.3.mlp.fc1.weight, bbox_head.mask_head2.blocks.3.mlp.fc1.bias, bbox_head.mask_head2.blocks.3.mlp.fc2.weight, bbox_head.mask_head2.blocks.3.mlp.fc2.bias, bbox_head.mask_head2.blocks.3.self_attention.qkv.weight, bbox_head.mask_head2.blocks.3.self_attention.qkv.bias, bbox_head.mask_head2.blocks.3.self_attention.proj.weight, bbox_head.mask_head2.blocks.3.self_attention.proj.bias, bbox_head.mask_head2.blocks.3.norm3.weight, bbox_head.mask_head2.blocks.3.norm3.bias, bbox_head.mask_head2.blocks.4.head_norm1.weight, bbox_head.mask_head2.blocks.4.head_norm1.bias, bbox_head.mask_head2.blocks.4.attn.q.weight, bbox_head.mask_head2.blocks.4.attn.q.bias, bbox_head.mask_head2.blocks.4.attn.k.weight, bbox_head.mask_head2.blocks.4.attn.k.bias, bbox_head.mask_head2.blocks.4.attn.v.weight, bbox_head.mask_head2.blocks.4.attn.v.bias, bbox_head.mask_head2.blocks.4.attn.proj.weight, bbox_head.mask_head2.blocks.4.attn.proj.bias, bbox_head.mask_head2.blocks.4.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.4.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.4.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.4.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.4.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.4.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.4.attn.linear.0.weight, bbox_head.mask_head2.blocks.4.attn.linear.0.bias, bbox_head.mask_head2.blocks.4.head_norm2.weight, bbox_head.mask_head2.blocks.4.head_norm2.bias, bbox_head.mask_head2.blocks.4.mlp.fc1.weight, bbox_head.mask_head2.blocks.4.mlp.fc1.bias, bbox_head.mask_head2.blocks.4.mlp.fc2.weight, bbox_head.mask_head2.blocks.4.mlp.fc2.bias, bbox_head.mask_head2.blocks.4.self_attention.qkv.weight, bbox_head.mask_head2.blocks.4.self_attention.qkv.bias, bbox_head.mask_head2.blocks.4.self_attention.proj.weight, bbox_head.mask_head2.blocks.4.self_attention.proj.bias, bbox_head.mask_head2.blocks.4.norm3.weight, bbox_head.mask_head2.blocks.4.norm3.bias, bbox_head.mask_head2.blocks.5.head_norm1.weight, bbox_head.mask_head2.blocks.5.head_norm1.bias, bbox_head.mask_head2.blocks.5.attn.q.weight, bbox_head.mask_head2.blocks.5.attn.q.bias, bbox_head.mask_head2.blocks.5.attn.k.weight, bbox_head.mask_head2.blocks.5.attn.k.bias, bbox_head.mask_head2.blocks.5.attn.v.weight, bbox_head.mask_head2.blocks.5.attn.v.bias, bbox_head.mask_head2.blocks.5.attn.proj.weight, bbox_head.mask_head2.blocks.5.attn.proj.bias, bbox_head.mask_head2.blocks.5.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.5.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.5.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.5.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.5.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.5.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.5.attn.linear.0.weight, bbox_head.mask_head2.blocks.5.attn.linear.0.bias, bbox_head.mask_head2.blocks.5.head_norm2.weight, bbox_head.mask_head2.blocks.5.head_norm2.bias, bbox_head.mask_head2.blocks.5.mlp.fc1.weight, bbox_head.mask_head2.blocks.5.mlp.fc1.bias, bbox_head.mask_head2.blocks.5.mlp.fc2.weight, bbox_head.mask_head2.blocks.5.mlp.fc2.bias, bbox_head.mask_head2.blocks.5.self_attention.qkv.weight, bbox_head.mask_head2.blocks.5.self_attention.qkv.bias, bbox_head.mask_head2.blocks.5.self_attention.proj.weight, bbox_head.mask_head2.blocks.5.self_attention.proj.bias, bbox_head.mask_head2.blocks.5.norm3.weight, bbox_head.mask_head2.blocks.5.norm3.bias, 
bbox_head.mask_head2.attnen.q.weight, bbox_head.mask_head2.attnen.q.bias, bbox_head.mask_head2.attnen.k.weight, bbox_head.mask_head2.attnen.k.bias, bbox_head.mask_head2.attnen.linear_l1.0.weight, bbox_head.mask_head2.attnen.linear_l1.0.bias, bbox_head.mask_head2.attnen.linear_l2.0.weight, bbox_head.mask_head2.attnen.linear_l2.0.bias, bbox_head.mask_head2.attnen.linear_l3.0.weight, bbox_head.mask_head2.attnen.linear_l3.0.bias, bbox_head.mask_head2.attnen.linear.0.weight, bbox_head.mask_head2.attnen.linear.0.bias
    
    missing keys in source state_dict: bbox_head.cls_thing_branches.0.weight, bbox_head.cls_thing_branches.0.bias, bbox_head.cls_thing_branches.1.weight, bbox_head.cls_thing_branches.1.bias, bbox_head.cls_thing_branches.2.weight, bbox_head.cls_thing_branches.2.bias, bbox_head.cls_thing_branches.3.weight, bbox_head.cls_thing_branches.3.bias, bbox_head.things_mask_head.blocks.0.head_norm1.weight, bbox_head.things_mask_head.blocks.0.head_norm1.bias, bbox_head.things_mask_head.blocks.0.attn.q.weight, bbox_head.things_mask_head.blocks.0.attn.q.bias, bbox_head.things_mask_head.blocks.0.attn.k.weight, bbox_head.things_mask_head.blocks.0.attn.k.bias, bbox_head.things_mask_head.blocks.0.attn.v.weight, bbox_head.things_mask_head.blocks.0.attn.v.bias, bbox_head.things_mask_head.blocks.0.attn.proj.weight, bbox_head.things_mask_head.blocks.0.attn.proj.bias, bbox_head.things_mask_head.blocks.0.attn.linear_l1.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.0.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.0.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.0.attn.linear.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear.0.bias, bbox_head.things_mask_head.blocks.0.head_norm2.weight, bbox_head.things_mask_head.blocks.0.head_norm2.bias, bbox_head.things_mask_head.blocks.0.mlp.fc1.weight, bbox_head.things_mask_head.blocks.0.mlp.fc1.bias, bbox_head.things_mask_head.blocks.0.mlp.fc2.weight, bbox_head.things_mask_head.blocks.0.mlp.fc2.bias, bbox_head.things_mask_head.blocks.1.head_norm1.weight, bbox_head.things_mask_head.blocks.1.head_norm1.bias, bbox_head.things_mask_head.blocks.1.attn.q.weight, bbox_head.things_mask_head.blocks.1.attn.q.bias, bbox_head.things_mask_head.blocks.1.attn.k.weight, bbox_head.things_mask_head.blocks.1.attn.k.bias, bbox_head.things_mask_head.blocks.1.attn.v.weight, bbox_head.things_mask_head.blocks.1.attn.v.bias, bbox_head.things_mask_head.blocks.1.attn.proj.weight, bbox_head.things_mask_head.blocks.1.attn.proj.bias, bbox_head.things_mask_head.blocks.1.attn.linear_l1.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.1.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.1.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.1.attn.linear.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear.0.bias, bbox_head.things_mask_head.blocks.1.head_norm2.weight, bbox_head.things_mask_head.blocks.1.head_norm2.bias, bbox_head.things_mask_head.blocks.1.mlp.fc1.weight, bbox_head.things_mask_head.blocks.1.mlp.fc1.bias, bbox_head.things_mask_head.blocks.1.mlp.fc2.weight, bbox_head.things_mask_head.blocks.1.mlp.fc2.bias, bbox_head.things_mask_head.blocks.2.head_norm1.weight, bbox_head.things_mask_head.blocks.2.head_norm1.bias, bbox_head.things_mask_head.blocks.2.attn.q.weight, bbox_head.things_mask_head.blocks.2.attn.q.bias, bbox_head.things_mask_head.blocks.2.attn.k.weight, bbox_head.things_mask_head.blocks.2.attn.k.bias, bbox_head.things_mask_head.blocks.2.attn.v.weight, bbox_head.things_mask_head.blocks.2.attn.v.bias, bbox_head.things_mask_head.blocks.2.attn.proj.weight, bbox_head.things_mask_head.blocks.2.attn.proj.bias, bbox_head.things_mask_head.blocks.2.attn.linear_l1.0.weight, 
bbox_head.things_mask_head.blocks.2.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.2.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.2.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.2.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.2.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.2.attn.linear.0.weight, bbox_head.things_mask_head.blocks.2.attn.linear.0.bias, bbox_head.things_mask_head.blocks.2.head_norm2.weight, bbox_head.things_mask_head.blocks.2.head_norm2.bias, bbox_head.things_mask_head.blocks.2.mlp.fc1.weight, bbox_head.things_mask_head.blocks.2.mlp.fc1.bias, bbox_head.things_mask_head.blocks.2.mlp.fc2.weight, bbox_head.things_mask_head.blocks.2.mlp.fc2.bias, bbox_head.things_mask_head.blocks.3.head_norm1.weight, bbox_head.things_mask_head.blocks.3.head_norm1.bias, bbox_head.things_mask_head.blocks.3.attn.q.weight, bbox_head.things_mask_head.blocks.3.attn.q.bias, bbox_head.things_mask_head.blocks.3.attn.k.weight, bbox_head.things_mask_head.blocks.3.attn.k.bias, bbox_head.things_mask_head.blocks.3.attn.v.weight, bbox_head.things_mask_head.blocks.3.attn.v.bias, bbox_head.things_mask_head.blocks.3.attn.proj.weight, bbox_head.things_mask_head.blocks.3.attn.proj.bias, bbox_head.things_mask_head.blocks.3.attn.linear_l1.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.3.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.3.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.3.attn.linear.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear.0.bias, bbox_head.things_mask_head.blocks.3.head_norm2.weight, bbox_head.things_mask_head.blocks.3.head_norm2.bias, bbox_head.things_mask_head.blocks.3.mlp.fc1.weight, bbox_head.things_mask_head.blocks.3.mlp.fc1.bias, bbox_head.things_mask_head.blocks.3.mlp.fc2.weight, bbox_head.things_mask_head.blocks.3.mlp.fc2.bias, bbox_head.things_mask_head.attnen.q.weight, bbox_head.things_mask_head.attnen.q.bias, bbox_head.things_mask_head.attnen.k.weight, bbox_head.things_mask_head.attnen.k.bias, bbox_head.things_mask_head.attnen.linear_l1.0.weight, bbox_head.things_mask_head.attnen.linear_l1.0.bias, bbox_head.things_mask_head.attnen.linear_l2.0.weight, bbox_head.things_mask_head.attnen.linear_l2.0.bias, bbox_head.things_mask_head.attnen.linear_l3.0.weight, bbox_head.things_mask_head.attnen.linear_l3.0.bias, bbox_head.things_mask_head.attnen.linear.0.weight, bbox_head.things_mask_head.attnen.linear.0.bias, bbox_head.stuff_mask_head.blocks.0.head_norm1.weight, bbox_head.stuff_mask_head.blocks.0.head_norm1.bias, bbox_head.stuff_mask_head.blocks.0.attn.q.weight, bbox_head.stuff_mask_head.blocks.0.attn.q.bias, bbox_head.stuff_mask_head.blocks.0.attn.k.weight, bbox_head.stuff_mask_head.blocks.0.attn.k.bias, bbox_head.stuff_mask_head.blocks.0.attn.v.weight, bbox_head.stuff_mask_head.blocks.0.attn.v.bias, bbox_head.stuff_mask_head.blocks.0.attn.proj.weight, bbox_head.stuff_mask_head.blocks.0.attn.proj.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.0.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.0.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.0.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear.0.weight, 
bbox_head.stuff_mask_head.blocks.0.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.0.head_norm2.weight, bbox_head.stuff_mask_head.blocks.0.head_norm2.bias, bbox_head.stuff_mask_head.blocks.0.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.0.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.0.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.0.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.0.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.0.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.0.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.0.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.0.norm3.weight, bbox_head.stuff_mask_head.blocks.0.norm3.bias, bbox_head.stuff_mask_head.blocks.1.head_norm1.weight, bbox_head.stuff_mask_head.blocks.1.head_norm1.bias, bbox_head.stuff_mask_head.blocks.1.attn.q.weight, bbox_head.stuff_mask_head.blocks.1.attn.q.bias, bbox_head.stuff_mask_head.blocks.1.attn.k.weight, bbox_head.stuff_mask_head.blocks.1.attn.k.bias, bbox_head.stuff_mask_head.blocks.1.attn.v.weight, bbox_head.stuff_mask_head.blocks.1.attn.v.bias, bbox_head.stuff_mask_head.blocks.1.attn.proj.weight, bbox_head.stuff_mask_head.blocks.1.attn.proj.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.1.head_norm2.weight, bbox_head.stuff_mask_head.blocks.1.head_norm2.bias, bbox_head.stuff_mask_head.blocks.1.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.1.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.1.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.1.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.1.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.1.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.1.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.1.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.1.norm3.weight, bbox_head.stuff_mask_head.blocks.1.norm3.bias, bbox_head.stuff_mask_head.blocks.2.head_norm1.weight, bbox_head.stuff_mask_head.blocks.2.head_norm1.bias, bbox_head.stuff_mask_head.blocks.2.attn.q.weight, bbox_head.stuff_mask_head.blocks.2.attn.q.bias, bbox_head.stuff_mask_head.blocks.2.attn.k.weight, bbox_head.stuff_mask_head.blocks.2.attn.k.bias, bbox_head.stuff_mask_head.blocks.2.attn.v.weight, bbox_head.stuff_mask_head.blocks.2.attn.v.bias, bbox_head.stuff_mask_head.blocks.2.attn.proj.weight, bbox_head.stuff_mask_head.blocks.2.attn.proj.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.2.head_norm2.weight, bbox_head.stuff_mask_head.blocks.2.head_norm2.bias, bbox_head.stuff_mask_head.blocks.2.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.2.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.2.mlp.fc2.weight, 
bbox_head.stuff_mask_head.blocks.2.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.2.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.2.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.2.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.2.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.2.norm3.weight, bbox_head.stuff_mask_head.blocks.2.norm3.bias, bbox_head.stuff_mask_head.blocks.3.head_norm1.weight, bbox_head.stuff_mask_head.blocks.3.head_norm1.bias, bbox_head.stuff_mask_head.blocks.3.attn.q.weight, bbox_head.stuff_mask_head.blocks.3.attn.q.bias, bbox_head.stuff_mask_head.blocks.3.attn.k.weight, bbox_head.stuff_mask_head.blocks.3.attn.k.bias, bbox_head.stuff_mask_head.blocks.3.attn.v.weight, bbox_head.stuff_mask_head.blocks.3.attn.v.bias, bbox_head.stuff_mask_head.blocks.3.attn.proj.weight, bbox_head.stuff_mask_head.blocks.3.attn.proj.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.3.head_norm2.weight, bbox_head.stuff_mask_head.blocks.3.head_norm2.bias, bbox_head.stuff_mask_head.blocks.3.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.3.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.3.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.3.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.3.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.3.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.3.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.3.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.3.norm3.weight, bbox_head.stuff_mask_head.blocks.3.norm3.bias, bbox_head.stuff_mask_head.blocks.4.head_norm1.weight, bbox_head.stuff_mask_head.blocks.4.head_norm1.bias, bbox_head.stuff_mask_head.blocks.4.attn.q.weight, bbox_head.stuff_mask_head.blocks.4.attn.q.bias, bbox_head.stuff_mask_head.blocks.4.attn.k.weight, bbox_head.stuff_mask_head.blocks.4.attn.k.bias, bbox_head.stuff_mask_head.blocks.4.attn.v.weight, bbox_head.stuff_mask_head.blocks.4.attn.v.bias, bbox_head.stuff_mask_head.blocks.4.attn.proj.weight, bbox_head.stuff_mask_head.blocks.4.attn.proj.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.4.head_norm2.weight, bbox_head.stuff_mask_head.blocks.4.head_norm2.bias, bbox_head.stuff_mask_head.blocks.4.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.4.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.4.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.4.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.4.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.4.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.4.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.4.self_attention.proj.bias, 
bbox_head.stuff_mask_head.blocks.4.norm3.weight, bbox_head.stuff_mask_head.blocks.4.norm3.bias, bbox_head.stuff_mask_head.blocks.5.head_norm1.weight, bbox_head.stuff_mask_head.blocks.5.head_norm1.bias, bbox_head.stuff_mask_head.blocks.5.attn.q.weight, bbox_head.stuff_mask_head.blocks.5.attn.q.bias, bbox_head.stuff_mask_head.blocks.5.attn.k.weight, bbox_head.stuff_mask_head.blocks.5.attn.k.bias, bbox_head.stuff_mask_head.blocks.5.attn.v.weight, bbox_head.stuff_mask_head.blocks.5.attn.v.bias, bbox_head.stuff_mask_head.blocks.5.attn.proj.weight, bbox_head.stuff_mask_head.blocks.5.attn.proj.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.5.head_norm2.weight, bbox_head.stuff_mask_head.blocks.5.head_norm2.bias, bbox_head.stuff_mask_head.blocks.5.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.5.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.5.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.5.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.5.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.5.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.5.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.5.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.5.norm3.weight, bbox_head.stuff_mask_head.blocks.5.norm3.bias, bbox_head.stuff_mask_head.attnen.q.weight, bbox_head.stuff_mask_head.attnen.q.bias, bbox_head.stuff_mask_head.attnen.k.weight, bbox_head.stuff_mask_head.attnen.k.bias, bbox_head.stuff_mask_head.attnen.linear_l1.0.weight, bbox_head.stuff_mask_head.attnen.linear_l1.0.bias, bbox_head.stuff_mask_head.attnen.linear_l2.0.weight, bbox_head.stuff_mask_head.attnen.linear_l2.0.bias, bbox_head.stuff_mask_head.attnen.linear_l3.0.weight, bbox_head.stuff_mask_head.attnen.linear_l3.0.bias, bbox_head.stuff_mask_head.attnen.linear.0.weight, bbox_head.stuff_mask_head.attnen.linear.0.bias
    
    opened by aviadmx 3
  • ImportError Libtorch_cpu.so: undefined symbol

    Thank you for this awesome work

    Unfortunately, I can't run the training because I get the following error:

    ./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 1
    + CONFIG=./configs/panformer/panformer_r50_24e_coco_panoptic.py
    + GPUS=1
    + PORT=29503
    ++ dirname ./tools/dist_train.sh
    ++ dirname ./tools/dist_train.sh
    + PYTHONPATH=./tools/..:
    + python -m torch.distributed.launch --nproc_per_node=1 --master_port=29503 ./tools/train.py ./configs/panformer/panformer_r50_24e_coco_panoptic.py --launcher pytorch --deterministic
    Traceback (most recent call last):
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/runpy.py", line 183, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/runpy.py", line 109, in _get_module_details
        __import__(pkg_name)
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/site-packages/torch/__init__.py", line 197, in <module>
        from torch._C import *  # noqa: F403
    ImportError: /home/vision/anaconda3/envs/psf/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: _ZNK3c1010TensorImpl23shallow_copy_and_detachERKNS_15VariableVersionEb
    

    This is my environment:

    [environment screenshots]

    opened by EnnioEvo 0