Code for the CVPR 2021 paper Zero-shot Instance Segmentation

Overview

Code for the CVPR 2021 paper

Zero-shot Instance Segmentation

Code requirements

  • Python 3.7
  • NVIDIA GPU
  • PyTorch 1.1.0
  • GCC >= 5.4
  • NCCL 2
  • the other Python libraries listed in requirements.txt
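
As a quick sanity check of these prerequisites (a minimal sketch, assuming a typical Linux setup with these tools on the PATH):

    python --version            # expect Python 3.7.x
    gcc --version | head -n 1   # expect GCC >= 5.4
    nvidia-smi                  # confirms the NVIDIA driver and GPU are visible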

Install

conda create -n zsi python=3.7 -y
conda activate zsi

conda install pytorch=1.1.0 torchvision=0.3.0 cudatoolkit=10.0 -c pytorch

pip install cython && pip --no-cache-dir install -r requirements.txt
   
python setup.py develop
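
After installation, the core packages can be verified with a one-line check (a minimal sketch based only on the versions installed above):

    python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"
    # expected output, roughly: 1.1.0 0.3.0 True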

Dataset preparation

  • Download the train and test annotation files for ZSI from annotations and put all JSON label files in

    data/coco/annotations/
    
  • Download the MSCOCO 2014 dataset and unzip the images to the paths below (a download sketch follows this list):

    data/coco/train2014/
    data/coco/val2014/
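
For reference, the images can be fetched from the official COCO download links (a sketch; run from the repository root, URLs assumed to be current):

    mkdir -p data/coco && cd data/coco
    wget http://images.cocodataset.org/zips/train2014.zip
    wget http://images.cocodataset.org/zips/val2014.zip
    unzip -q train2014.zip && unzip -q val2014.zip   # creates train2014/ and val2014/
    # data/coco/annotations/ should already contain the ZSI json label files,
    # e.g. instances_val2014_unseen_48_17.json and instances_val2014_gzsi_65_15.json
    # (file names taken from the evaluation commands below)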
    
Training

    • 48/17 split:

      chmod +x tools/dist_train.sh
      ./tools/dist_train.sh configs/zsi/train/zero-shot-mask-rcnn-BARPN-bbox_mask_sync_bg_decoder.py 4

    • 65/15 split:

      chmod +x tools/dist_train.sh
      ./tools/dist_train.sh configs/zsi/train/zero-shot-mask-rcnn-BARPN-bbox_mask_sync_bg_65_15_decoder_notanh.py 4
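
In both commands the trailing 4 is the number of GPUs used for distributed training (assuming dist_train.sh follows the usual CONFIG-then-GPU-count convention of mmdetection-style launchers); for example, to train the 48/17 split on 8 GPUs:

      ./tools/dist_train.sh configs/zsi/train/zero-shot-mask-rcnn-BARPN-bbox_mask_sync_bg_decoder.py 8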
      
  • Inference & Evaluate:

    • ZSI task:

      • 48/17 split ZSI task:
        • download the 48/17 ZSI model and put it at checkpoints/ZSI_48_17.pth

        • inference:

          chmod +x tools/dist_test.sh
          ./tools/dist_test.sh configs/zsi/48_17/test/zsi/zero-shot-mask-rcnn-BARPN-bbox_mask_sync_bg_decoder.py checkpoints/ZSI_48_17.pth 4 --json_out results/zsi_48_17.json
          
        • our results zsi_48_17.bbox.json and zsi_48_17.segm.json can also be downloaded from zsi_48_17_results.

        • evaluate:

          • for zsd performance
            python tools/zsi_coco_eval.py results/zsi_48_17.bbox.json --ann data/coco/annotations/instances_val2014_unseen_48_17.json
            
          • for zsi performance
            python tools/zsi_coco_eval.py results/zsi_48_17.segm.json --ann data/coco/annotations/instances_val2014_unseen_48_17.json --types segm
            
      • 65/15 split ZSI task:
        • download the 65/15 ZSI model and put it at checkpoints/ZSI_65_15.pth

        • inference:

          chmod +x tools/dist_test.sh
          ./tools/dist_test.sh configs/zsi/65_15/test/zsi/zero-shot-mask-rcnn-BARPN-bbox_mask_sync_bg_65_15_decoder_notanh.py checkpoints/ZSI_65_15.pth 4 --json_out results/zsi_65_15.json
          
        • our results zsi_65_15.bbox.json and zsi_65_15.segm.json can also be downloaded from zsi_65_15_results.

        • evaluate:

          • for zsd performance
            python tools/zsi_coco_eval.py results/zsi_65_15.bbox.json --ann data/coco/annotations/instances_val2014_unseen_65_15.json
            
          • for zsi performance
            python tools/zsi_coco_eval.py results/zsi_65_15.segm.json --ann data/coco/annotations/instances_val2014_unseen_65_15.json --types segm
            
    • GZSI task:

      • 48/17 split GZSI task:
        • use the same model file ZSI_48_17.pth as in the ZSI task
        • inference:
          chmod +x tools/dist_test.sh
          ./tools/dist_test.sh configs/zsi/48_17/test/gzsi/zero-shot-mask-rcnn-BARPN-bbox_mask_sync_bg_decoder_gzsi.py checkpoints/ZSI_48_17.pth 4 --json_out results/gzsi_48_17.json
          
        • our results gzsi_48_17.bbox.json and gzsi_48_17.segm.json can also be downloaded from gzsi_48_17_results.
        • evaluate:
          • for gzsd
            python tools/gzsi_coco_eval.py results/gzsi_48_17.bbox.json --ann data/coco/annotations/instances_val2014_gzsi_48_17.json --gzsd --num-seen-classes 48
            
          • for gzsi
            python tools/gzsi_coco_eval.py results/gzsi_48_17.segm.json --ann data/coco/annotations/instances_val2014_gzsi_48_17.json --gzsd --num-seen-classes 48 --types segm
            
      • 65/15 split GZSI task:
        • use the same model file ZSI_65_15.pth as in the ZSI task
        • inference:
          chmod +x tools/dist_test.sh
          ./tools/dist_test.sh configs/zsi/65_15/test/gzsi/zero-shot-mask-rcnn-BARPN-bbox_mask_sync_bg_65_15_decoder_notanh_gzsi.py checkpoints/ZSI_65_15.pth 4 --json_out results/gzsi_65_15.json
          
        • our results gzsi_65_15.bbox.json and gzsi_65_15.segm.json can also be downloaded from gzsi_65_15_results.
        • evaluate:
          • for gzsd
            python tools/gzsi_coco_eval.py results/gzsi_65_15.bbox.json --ann data/coco/annotations/instances_val2014_gzsi_65_15.json --gzsd --num-seen-classes 65
            
          • for gzsi
            python tools/gzsi_coco_eval.py results/gzsi_65_15.segm.json --ann data/coco/annotations/instances_val2014_gzsi_65_15.json --gzsd --num-seen-classes 65 --types segm
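
For convenience, the 48/17 ZSI inference and evaluation commands above can be chained into one script (a sketch that only re-uses commands from this list; it relies on --json_out results/zsi_48_17.json writing results/zsi_48_17.bbox.json and results/zsi_48_17.segm.json, which the evaluation steps then consume):

    set -e
    chmod +x tools/dist_test.sh
    # inference on 4 GPUs
    ./tools/dist_test.sh configs/zsi/48_17/test/zsi/zero-shot-mask-rcnn-BARPN-bbox_mask_sync_bg_decoder.py checkpoints/ZSI_48_17.pth 4 --json_out results/zsi_48_17.json
    # ZSD (detection) performance
    python tools/zsi_coco_eval.py results/zsi_48_17.bbox.json --ann data/coco/annotations/instances_val2014_unseen_48_17.json
    # ZSI (segmentation) performance
    python tools/zsi_coco_eval.py results/zsi_48_17.segm.json --ann data/coco/annotations/instances_val2014_unseen_48_17.json --types segm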
            

License

ZSI is released under the MIT License.

Citing

If you use ZSI in your research or wish to refer to the baseline results published here, please use the following BibTeX entry:

@InProceedings{zhengye2021zsi,
  author  =  {Ye, Zheng and Jiahong, Wu and Yongqiang, Qin and Faen, Zhang and Li, Cui},
  title   =  {Zero-shot Instance Segmentation},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2021}
}