Implementation of the ECCV 2020 paper: The Devil is in Classification: A Simple Framework for Long-tail Object Detection and Instance Segmentation

Overview

Implementation of our ECCV 2020 paper The Devil is in Classification: A Simple Framework for Long-tail Instance Segmentation

This repo contains the code of SimCal, which won the LVIS 2019 challenge. Note that SimCal can achieve much higher tail-class performance by simply changing the calibration head from a 2-layer FC head with random initialization (2fc_rand) to a 3-layer FC head initialized from the original, standardly trained model (3fc_ft); refer to the paper for details. We did not notice this during the challenge submission and used 2fc_rand, so a much higher result on tail classes on the test set is expected with SimCal 3fc_ft.

License

This project is released under the Apache 2.0 license.

TODO

  • remove and clean up redundant and commented-out code
  • update the installation script for PyTorch 1.1.0 to enable faster calibration training
  • merge the Mask R-CNN and HTC model test files, add HTC calibration code, add Props-GT experiment code

Pull requests that improve the codebase or fix bugs are welcome.

Installation

SimCal is based on mmdetection. Please refer to INSTALL.md for installation and dataset preparation.

Or run the following installation script:

#!/usr/bin/env bash
conda create -n simcal_mmdet python=3.7
source ~/anaconda3/etc/profile.d/conda.sh
conda init bash
conda activate simcal_mmdet
echo "python path"
which python
conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=9.2 -c pytorch
pip install cython==0.29.12 mmcv==0.2.16 matplotlib terminaltables
pip install "git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI"
pip install opencv-python-headless
pip install Pillow==6.1
pip install numpy==1.17.1 --no-deps
git clone https://github.com/twangnh/SimCal
cd SimCal
pip install -v -e .

To also get instance-centric AP results, please do not install the official LVIS API or cocoapi with pip: the repository ships locally modified copies that additionally compute instance-centric bin AP results (i.e., AP1, AP2, AP3, AP4). We may create a pull request to add this to the official APIs later.
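If an official LVIS package was installed earlier, it may shadow the modified local copy; removing it first is a reasonable precaution (a hedged step, assuming it was installed under the pip name lvis):

pip uninstall -y lvis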

Dataset preparation

For the LVIS dataset, please arrange the data as:

SimCal
├── configs
├── data
│   ├── LVIS
│   │   ├── lvis_v0.5_train.json.zip
│   │   ├── lvis_v0.5_val.json.zip
│   │   ├── images
│   │   │   ├── train2017
│   │   │   ├── val2017

Note: for the LVIS images, you can simply create a softlink so that val2017 points to the COCO val2017 images.
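For example, assuming the COCO val2017 images already sit at data/coco/val2017 (adjust the source path to your setup):

mkdir -p data/LVIS/images
# link the COCO val2017 images into the LVIS image folder
ln -s $(pwd)/data/coco/val2017 data/LVIS/images/val2017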

For COCO-LT (our sampled long-tail version of COCO; refer to the paper for details), please download the sampled annotation file train_coco2017_LT_sampled.json and put it at data/coco/annotations/.
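For instance, assuming the annotation file was downloaded to the repository root:

mkdir -p data/coco/annotations
mv train_coco2017_LT_sampled.json data/coco/annotations/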

Training (Calibration)

Calibration uses multi-GPU training to perform bi-level proposal sampling. To run calibration on a model, e.g.,

python tools/train.py configs/simcal/calibration/mask_rcnn_r50_fpn_1x_lvis_agnostic.py --use_model 3fc_ft --exp_prefix xxx --gpus 4/8

will calibrate with the 3fc_ft head described in the paper and save the calibrated head checkpoint under the given --exp_prefix; set --gpus to the number of GPUs to use (4 or 8).
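A concrete invocation might look like the following; the --exp_prefix value r50_ag_3fc_ft is only an illustrative name, not a required one:

python tools/train.py configs/simcal/calibration/mask_rcnn_r50_fpn_1x_lvis_agnostic.py \
    --use_model 3fc_ft --exp_prefix r50_ag_3fc_ft --gpus 8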

Pre-trained models and calibrated heads

All the calibrated models reported in the paper are released for reproduction and future research:

Model                                  Link
r50-ag epoch-12                        Googledrive
r50-ag calibrated cls head             Googledrive
r50 epoch-12                           Googledrive
r50 calibrated cls head                Googledrive
r50-ag-coco-lt epoch-12                Googledrive
r50-ag-coco-lt calibrated cls head     Googledrive
htc-x101 epoch-20                      Googledrive
htc-x101 calhead-stage0                Googledrive
htc-x101 calhead-stage1                Googledrive
htc-x101 calhead-stage2                Googledrive

To evaluate and reproduce the results reported in the paper, please first download the model checkpoints and arrange them as:

SimCal
├── configs
├── work_dirs
│   ├── htc
│   │   ├── 3fc_ft_stage0.pth
│   │   ├── 3fc_ft_stage1.pth
│   │   ├── 3fc_ft_stage2.pth
│   │   └── epoch_20.pth
│   ├── mask_rcnn_r50_fpn_1x_cocolt_agnostic
│   │   ├── 3fc_ft.pth
│   │   └── epoch_12.pth
│   ├── mask_rcnn_r50_fpn_1x_lvis_agnostic
│   │   ├── 3fc_ft.pth
│   │   └── epoch_12.pth
│   └── mask_rcnn_r50_fpn_1x_lvis_clswise
│       ├── 3fc_ft_epoch.pth
│       ├── 3fc_ft.pth
│       └── epoch_12.pth
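The folders can be created up front and the checkpoints downloaded from the Google Drive links moved into the matching locations, e.g. (a sketch assuming the .pth files sit in the current directory):

mkdir -p work_dirs/htc \
         work_dirs/mask_rcnn_r50_fpn_1x_cocolt_agnostic \
         work_dirs/mask_rcnn_r50_fpn_1x_lvis_agnostic \
         work_dirs/mask_rcnn_r50_fpn_1x_lvis_clswise
# example for the LVIS r50-ag model; repeat for the other checkpoints
mv epoch_12.pth 3fc_ft.pth work_dirs/mask_rcnn_r50_fpn_1x_lvis_agnostic/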

Test with pretrained models and calibrated heads

Mask R-CNN on LVIS, paper result:


Test LVIS r50-ag model (use --eval bbox for box result)

./tools/dist_test.sh configs/simcal/calibration/mask_rcnn_r50_fpn_1x_lvis_agnostic.py 8 --cal_head 3fc_ft --out ./temp.pkl --eval segm

bin 0_10 AP: 0.13286122428874017
bin 10_100 AP: 0.23243947868384135
bin 100_1000 AP: 0.20696891455408
bin 1000_* AP: 0.2615438157753328
bAP 0.20845335832549858
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=all] = 0.222
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=300 catIds=all] = 0.354
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=300 catIds=all] = 0.236
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=     s | maxDets=300 catIds=all] = 0.154
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=     m | maxDets=300 catIds=all] = 0.298
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=     l | maxDets=300 catIds=all] = 0.373
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=  r] = 0.182
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=  c] = 0.215
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=  f] = 0.247
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=all] = 0.315
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=     s | maxDets=300 catIds=all] = 0.216
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=     m | maxDets=300 catIds=all] = 0.382
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=     l | maxDets=300 catIds=all] = 0.453
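The corresponding box results for the same model are obtained by switching the evaluation metric:

./tools/dist_test.sh configs/simcal/calibration/mask_rcnn_r50_fpn_1x_lvis_agnostic.py 8 --cal_head 3fc_ft --out ./temp.pkl --eval bbox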

Test LVIS r50 model (use --eval bbox for box result)

./tools/dist_test.sh configs/simcal/calibration/mask_rcnn_r50_fpn_1x_lvis_clswise.py 8 --cal_head 3fc_ft --out ./temp.pkl --eval segm

bin 0_10 AP: 0.10187003036862649
bin 10_100 AP: 0.23907519508889202
bin 100_1000 AP: 0.22468457541750592
bin 1000_* AP: 0.28687985066050825
bAP 0.21312741288388318
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=all] = 0.234
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=300 catIds=all] = 0.375
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=300 catIds=all] = 0.245
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=     s | maxDets=300 catIds=all] = 0.167
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=     m | maxDets=300 catIds=all] = 0.316
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=     l | maxDets=300 catIds=all] = 0.405
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=  r] = 0.164
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=  c] = 0.225
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=  f] = 0.272
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=all] = 0.331
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=     s | maxDets=300 catIds=all] = 0.233
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=     m | maxDets=300 catIds=all] = 0.399
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=     l | maxDets=300 catIds=all] = 0.481

Mask R-CNN on COCO-LT, paper result:


Test COCO-LT r50-ag model (use --eval bbox for box result)

./tools/dist_test.sh configs/simcal/calibration/mask_rcnn_r50_fpn_1x_lvis_agnostic.py 8 --cal_head 3fc_ft --out ./temp.pkl --eval segm

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.246
bin ins nums: [4, 24, 32, 20]
bins ap: [0.1451797625472811, 0.1796142130031695, 0.27337165679657216, 0.3027201541441131]
eAP : 0.22522144662278398
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.412
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.257
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.133
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.278
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.334
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.239
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.424
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.450
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.269
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.481
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.586

HTC on LVIS, paper result:


Test HTC model (use --eval bbox for box result)

./tools/dist_test_htc.sh configs/simcal/calibration/htc_lvis_31d9.py 8 --out ./temp2.pkl --eval segm

bin 0_10 AP: 0.18796762487467375
bin 10_100 AP: 0.34907335159564473
bin 100_1000 AP: 0.3304618611020927
bin 1000_* AP: 0.3674197862439286
bAP 0.30873065595408494
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=all] = 0.334
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=300 catIds=all] = 0.490
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=300 catIds=all] = 0.357
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=     s | maxDets=300 catIds=all] = 0.228
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=     m | maxDets=300 catIds=all] = 0.422
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=     l | maxDets=300 catIds=all] = 0.565
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=  r] = 0.247
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=  c] = 0.337
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=  f] = 0.364
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300 catIds=all] = 0.428
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=     s | maxDets=300 catIds=all] = 0.300
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=     m | maxDets=300 catIds=all] = 0.506
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=     l | maxDets=300 catIds=all] = 0.631

Note: testing on LVIS can be significantly slower than on COCO, since max_dets is 300 and the detection confidence threshold is 0.0.

Props-GT experiment

With the Props-GT experiment, we would like to emphasize that there is still large room for improvement in the direction of better object proposal classification.

Balanced Group Softmax

We also encourage you to check out our follow-up work Balanced Group Softmax after the LVIS challenge (accepted as a CVPR 2020 oral). It employs a more specific calibration approach with a redesigned softmax function; the calibration is more effective, does not require dual-head inference, and only calibrates the last layer of the classification head. Code is available at https://github.com/FishYuLi/BalancedGroupSoftmax

Citation

Please consider citing our ECCV 2020 paper:

@article{wang2020devil,
  title={The Devil is in Classification: A Simple Framework for Long-tail Instance Segmentation},
  author={Wang, Tao and Li, Yu and Kang, Bingyi and Li, Junnan and Liew, Junhao and Tang, Sheng and Hoi, Steven and Feng, Jiashi},
  journal={arXiv preprint arXiv:2007.11978},
  year={2020}
}

Tech report for the LVIS Challenge 2019 at ICCV 2019 (Yu Li and Tao Wang contributed equally to the LVIS challenge):

@article{wang2019classification,
  title={Classification Calibration for Long-tail Instance Segmentation},
  author={Wang, Tao and Li, Yu and Kang, Bingyi and Li, Junnan and Liew, Jun Hao and Tang, Sheng and Hoi, Steven and Feng, Jiashi},
  journal={arXiv preprint arXiv:1910.13081},
  year={2019}
}

Our follow-up work Balanced Group Softmax at CVPR 2020 (oral):

@inproceedings{li2020overcoming,
  title={Overcoming Classifier Imbalance for Long-Tail Object Detection With Balanced Group Softmax},
  author={Li, Yu and Wang, Tao and Kang, Bingyi and Tang, Sheng and Wang, Chunfeng and Li, Jintao and Feng, Jiashi},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={10991--11000},
  year={2020}
}