
Overview

Test-Agnostic Long-Tailed Recognition

This repository is the official PyTorch implementation of Test-Agnostic Long-Tailed Recognition by Test-Time Aggregating Diverse Experts with Self-Supervision.

  • TADE (our method) innovates the expert training scheme by introducing diversity-promoting, expertise-guided losses, which train different experts to handle distinct class distributions. The learned experts are therefore more diverse than those of existing multi-expert methods, leading to better ensemble performance, and together they simulate a wide spectrum of possible class distributions (see the loss sketch after this list).
  • TADE develops a new self-supervised method, namely prediction stability maximization, that uses unlabeled test data to adaptively aggregate these experts and better handle unknown test class distributions.
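
For intuition, here is a minimal sketch of a diversity-promoting, logit-adjusted loss in the spirit of our expertise-guided objectives: each expert shifts its logits by a scaled log class prior, and different scales steer different experts toward head-biased, uniform, or tail-biased expertise. The function name and tau values are illustrative assumptions, not the exact losses used in this codebase.

```python
import torch
import torch.nn.functional as F

def expertise_guided_ce(logits, targets, class_counts, tau):
    """Logit-adjusted cross-entropy, scaled by tau (illustrative only).

    tau = 0 -> plain softmax CE, an expert skilled on head classes
    tau = 1 -> prior-compensated CE, an expert for the uniform distribution
    tau = 2 -> over-compensated CE, an expert skilled on tail classes
    """
    log_prior = torch.log(class_counts.float() / class_counts.sum())
    # Adding tau * log_prior to the logits penalizes frequent classes during
    # training, so a larger tau biases the expert toward the tail.
    return F.cross_entropy(logits + tau * log_prior, targets)

# Example with three experts sharing one backbone (tau values illustrative):
# loss = sum(expertise_guided_ce(l, y, counts, tau)
#            for l, tau in zip(expert_logits, (0.0, 1.0, 2.0)))
```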

Results

ImageNet-LT (ResNeXt-50)

Long-tailed recognition with uniform test class distribution:

| Methods | MACs (G) | Top-1 acc. | Model |
|---|---|---|---|
| Softmax | 4.26 | 48.0 | |
| RIDE | 6.08 | 56.3 | |
| TADE (ours) | 6.08 | 58.8 | Download |

Test-agnostic long-tailed recognition:

| Methods | MACs (G) | Forward-50 | Forward-10 | Uniform | Backward-10 | Backward-50 |
|---|---|---|---|---|---|---|
| Softmax | 4.26 | 66.1 | 60.3 | 48.0 | 34.9 | 27.6 |
| RIDE | 6.08 | 67.6 | 64.0 | 56.3 | 48.7 | 44.0 |
| TADE (ours) | 6.08 | 69.4 | 65.4 | 58.8 | 54.5 | 53.1 |

CIFAR100-Imbalance ratio 100 (ResNet-32)

Long-tailed recognition with uniform test class distribution:

| Methods | MACs (G) | Top-1 acc. |
|---|---|---|
| Softmax | 0.07 | 41.4 |
| RIDE | 0.11 | 48.0 |
| TADE (ours) | 0.11 | 49.8 |

Test-agnostic long-tailed recognition:

| Methods | MACs (G) | Forward-50 | Forward-10 | Uniform | Backward-10 | Backward-50 |
|---|---|---|---|---|---|---|
| Softmax | 0.07 | 62.3 | 56.2 | 41.4 | 25.8 | 17.5 |
| RIDE | 0.11 | 63.0 | 57.0 | 48.0 | 35.4 | 29.3 |
| TADE (ours) | 0.11 | 65.9 | 58.3 | 49.8 | 43.9 | 42.4 |

Places-LT (ResNet-152)

Long-tailed recognition with uniform test class distribution:

| Methods | MACs (G) | Top-1 acc. |
|---|---|---|
| Softmax | 11.56 | 31.4 |
| RIDE | 13.18 | 40.3 |
| TADE (ours) | 13.18 | 40.9 |

Test-agnostic long-tailed recognition:

| Methods | MACs (G) | Forward-50 | Forward-10 | Uniform | Backward-10 | Backward-50 |
|---|---|---|---|---|---|---|
| Softmax | 11.56 | 45.6 | 40.2 | 31.4 | 23.4 | 19.4 |
| RIDE | 13.18 | 43.1 | 41.6 | 40.3 | 38.2 | 36.9 |
| TADE (ours) | 13.18 | 46.4 | 43.3 | 40.9 | 41.4 | 41.6 |

iNaturalist 2018 (ResNet-50)

Long-tailed recognition with uniform test class distribution:

| Methods | MACs (G) | Top-1 acc. |
|---|---|---|
| Softmax | 4.14 | 64.7 |
| RIDE | 5.80 | 71.8 |
| TADE (ours) | 5.80 | 72.9 |

Test-agnostic long-tailed recognition:

| Methods | MACs (G) | Forward-3 | Forward-2 | Uniform | Backward-2 | Backward-3 |
|---|---|---|---|---|---|---|
| Softmax | 4.14 | 65.4 | 65.5 | 64.7 | 64.0 | 63.4 |
| RIDE | 5.80 | 71.5 | 71.9 | 71.8 | 71.9 | 71.8 |
| TADE (ours) | 5.80 | 72.3 | 72.5 | 72.9 | 73.5 | 73.3 |

Requirements

  • To install requirements:
pip install -r requirements.txt

Hardware requirements

8 GPUs with >= 11 GB of GPU RAM each are recommended. Otherwise, models with more experts may not fit in GPU memory, especially on datasets with more classes (the FC layers become large). We do not support CPU training, but CPU inference could be enabled with slight modifications.

Datasets

Four benchmark datasets

  • Please download these datasets and place them in the /data directory.
  • ImageNet-LT and Places-LT can be found here.
  • The iNaturalist data should be the 2018 version from here.
  • CIFAR-100 will be downloaded automatically with the dataloader.
data
├── ImageNet_LT
│   ├── test
│   ├── train
│   └── val
├── CIFAR100
│   └── cifar-100-python
├── Place365
│   ├── data_256
│   ├── test_256
│   └── val_256
└── iNaturalist 
    ├── test2018
    └── train_val2018

Txt files

  • We provide txt files for test-agnostic long-tailed recognition on ImageNet-LT, Places-LT, and iNaturalist 2018; the CIFAR-100 splits are generated automatically by the code. (The Forward/Backward naming is explained after the file tree below.)
  • For iNaturalist 2018, please unzip iNaturalist_train.zip first.
data_txt
├── ImageNet_LT
│   ├── ImageNet_LT_backward2.txt
│   ├── ImageNet_LT_backward5.txt
│   ├── ImageNet_LT_backward10.txt
│   ├── ImageNet_LT_backward25.txt
│   ├── ImageNet_LT_backward50.txt
│   ├── ImageNet_LT_forward2.txt
│   ├── ImageNet_LT_forward5.txt
│   ├── ImageNet_LT_forward10.txt
│   ├── ImageNet_LT_forward25.txt
│   ├── ImageNet_LT_forward50.txt
│   ├── ImageNet_LT_test.txt
│   ├── ImageNet_LT_train.txt
│   ├── ImageNet_LT_uniform.txt
│   └── ImageNet_LT_val.txt
├── Places_LT_v2
│   ├── Places_LT_backward2.txt
│   ├── Places_LT_backward5.txt
│   ├── Places_LT_backward10.txt
│   ├── Places_LT_backward25.txt
│   ├── Places_LT_backward50.txt
│   ├── Places_LT_forward2.txt
│   ├── Places_LT_forward5.txt
│   ├── Places_LT_forward10.txt
│   ├── Places_LT_forward25.txt
│   ├── Places_LT_forward50.txt
│   ├── Places_LT_test.txt
│   ├── Places_LT_train.txt
│   ├── Places_LT_uniform.txt
│   └── Places_LT_val.txt
└── iNaturalist18
    ├── iNaturalist18_backward2.txt
    ├── iNaturalist18_backward3.txt
    ├── iNaturalist18_forward2.txt
    ├── iNaturalist18_forward3.txt
    ├── iNaturalist18_train.txt
    ├── iNaturalist18_uniform.txt
    └── iNaturalist18_val.txt 
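
For reference, a Forward-k file describes a test set whose class priors follow the class-frequency order of the long-tailed training set with imbalance ratio k, while Backward-k reverses that order. The sketch below shows one plausible way such per-class priors could be computed; it is illustrative only, and the provided txt files (generated following LADE's protocol) remain the authoritative splits.

```python
import numpy as np

def shifted_test_prior(num_classes, imb_ratio, backward=False):
    """Per-class test prior decaying exponentially from 1 to 1/imb_ratio.

    Classes are indexed in the frequency order of the long-tailed training
    set; backward=True reverses the order, so training-set tail classes
    become the most frequent classes at test time.
    """
    idx = np.arange(num_classes)
    prior = imb_ratio ** (-idx / (num_classes - 1))
    if backward:
        prior = prior[::-1]
    return prior / prior.sum()

# e.g. shifted_test_prior(1000, 50) for the ImageNet-LT "Forward-50" setting.
```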

Pretrained models

  • For training on Places-LT, we follow previous methods and start from a pre-trained model.
  • Please download the checkpoint, then unzip and move the checkpoint files to /model/pretrained_model_places/.

Script

ImageNet-LT

Training

  • To train the expertise-diverse model, run this command:
python train.py -c configs/config_imagenet_lt_resnext50_tade.json

Evaluate

  • To evaluate the expertise-diverse model on the uniform test class distribution, run:
python test.py -r checkpoint_path
  • To evaluate the expertise-diverse model on agnostic test class distributions, run:
python test_all_imagenet.py -r checkpoint_path
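
  • Here checkpoint_path is the path to a checkpoint produced by train.py, e.g. (the directory name below is hypothetical and depends on your config's save settings):
python test.py -r saved/models/ImageNet_LT_ResNeXt50_TADE/model_best.pth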

Test-time training

  • To test-time train the expertise-diverse model for agnostic test class distributions, run:
python test_train_imagenet.py -c configs/test_time_imagenet_lt_resnext50_tade.json -r checkpoint_path
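
Conceptually, test-time training freezes the experts and learns only the aggregation weights, by maximizing prediction stability across two augmented views of each unlabeled test image. The sketch below is a simplified, hypothetical rendering of this idea; view generation, batching, and the exact aggregation details follow the config file above.

```python
import torch
import torch.nn.functional as F

def stability_loss(expert_logits_v1, expert_logits_v2, w):
    """Negative cosine similarity between the aggregated predictions
    of two augmented views of the same unlabeled test images.

    expert_logits_v1/v2: (num_experts, batch, num_classes), frozen experts.
    w: learnable aggregation weights, shape (num_experts,).
    """
    alpha = torch.softmax(w, dim=0).view(-1, 1, 1)  # normalized expert weights
    p1 = torch.softmax((alpha * expert_logits_v1).sum(dim=0), dim=-1)
    p2 = torch.softmax((alpha * expert_logits_v2).sum(dim=0), dim=-1)
    # Maximizing the similarity of the two views' predictions favors an
    # expert weighting that is stable under augmentation.
    return -F.cosine_similarity(p1, p2, dim=-1).mean()

# Hypothetical usage: only w is optimized, the experts stay frozen.
# w = torch.zeros(3, requires_grad=True)
# optimizer = torch.optim.SGD([w], lr=0.01)
# loss = stability_loss(logits_view1, logits_view2, w)
# loss.backward(); optimizer.step()
```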

CIFAR100-LT

Training

  • To train the expertise-diverse model, run this command:
python train.py -c configs/config_cifar100_ir100_tade.json
  • To use imbalance ratio 10 or 50 instead of 100, switch to the corresponding config file.

Evaluate

  • To evaluate the expertise-diverse model on the uniform test class distribution, run:
python test.py -r checkpoint_path
  • To evaluate the expertise-diverse model on agnostic test class distributions, run:
python test_all_cifar.py -r checkpoint_path

Test-time training

  • To test-time train the expertise-diverse model for agnostic test class distributions, run:
python test_train_cifar.py -c configs/test_time_cifar100_ir100_tade.json -r checkpoint_path
  • To use imbalance ratio 10 or 50 instead of 100, switch to the corresponding config file.

Places-LT

Training

  • To train the expertise-diverse model, run this command:
python train.py -c configs/config_places_lt_resnet152_tade.json

Evaluate

  • To evaluate the expertise-diverse model on the uniform test class distribution, run:
python test_places.py -r checkpoint_path
  • To evaluate the expertise-diverse model on agnostic test class distributions, run:
python test_all_places.py -r checkpoint_path

Test-time training

  • To test-time train the expertise-diverse model for agnostic test class distributions, run:
python test_train_places.py -c configs/test_time_places_lt_resnet152_tade.json -r checkpoint_path

iNaturalist 2018

Training

  • To train the expertise-diverse model, run this command:
python train.py -c configs/config_iNaturalist_resnet50_tade.json

Evaluate

  • To evaluate the expertise-diverse model on the uniform test class distribution, run:
python test.py -r checkpoint_path
  • To evaluate the expertise-diverse model on agnostic test class distributions, run:
python test_all_inat.py -r checkpoint_path

Test-time training

  • To test-time train the expertise-diverse model for agnostic test class distributions, run:
python test_train_inat.py -c configs/test_time_iNaturalist_resnet50_tade.json -r checkpoint_path

Citation

If you find our work inspiring or use our codebase in your research, please cite our work.

@article{zhang2021test,
  title={Test-Agnostic Long-Tailed Recognition by Test-Time Aggregating Diverse Experts with Self-Supervision},
  author={Zhang, Yifan and Hooi, Bryan and Hong, Lanqing and Feng, Jiashi},
  journal={arXiv},
  year={2021}
}

Acknowledgements

This project is based on this PyTorch template.

The multi-expert framework is based on RIDE. The generation of agnostic test class distributions follows LADE.
