Implementation of the paper "Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning"

Overview

Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning

This is the implementation of the paper "Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning" (accepted to CVPR 2021).

For more information, check out the paper on [arXiv].
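
The approach is prototype-based: each class is represented by a prototype in the embedding space, and new few-shot classes are added by estimating and then refining their prototypes. As a rough illustration only (this is the standard prototype classifier that the paper starts from, not the paper's refinement procedure, and all names are illustrative), a minimal sketch:

import torch

def class_prototypes(features, labels, num_classes):
    # features: (N, D) backbone embeddings; labels: (N,) integer class ids.
    # A prototype is the mean embedding over a class's support samples.
    protos = torch.stack([features[labels == c].mean(dim=0)
                          for c in range(num_classes)])
    return protos  # (num_classes, D)

def nearest_prototype(queries, protos):
    # Predict the class whose prototype is closest in Euclidean distance.
    return torch.cdist(queries, protos).argmin(dim=1)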

Requirements

  • Python 3.8
  • PyTorch 1.8.1 (any version > 1.1.0)
  • CUDA 11.2
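
For reference, a matching install might look like the line below (the exact wheel depends on your CUDA setup, so treat this as a sketch rather than a pinned spec):

pip install torch==1.8.1 torchvision==0.9.1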

Preparing Few-Shot Class-Incremental Learning Datasets

Download the following datasets:

1. CIFAR-100

Automatically downloaded via torchvision.
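
As a quick check, the dataset can be fetched directly; the root path below is an assumption chosen to match the directory layout shown later:

import torchvision

# First call downloads CIFAR-100 into ../datasets/CIFAR100 (path assumed).
train_set = torchvision.datasets.CIFAR100(root='../datasets/CIFAR100', train=True, download=True)
test_set = torchvision.datasets.CIFAR100(root='../datasets/CIFAR100', train=False, download=True)
print(len(train_set), len(test_set))  # 50000 10000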

2. MiniImageNet

(1) Download the MiniImageNet train/test images [github], and prepare the dataset according to [TOPIC].

(2) Or download the processed data from our Google Drive: [mini-imagenet.zip], and place the entire folder under the datasets/ directory.

3. CUB200

(1) Download the CUB200 train/test images and prepare the dataset according to [TOPIC] (an extraction command is sketched after step (2)):

wget http://www.vision.caltech.edu/visipedia-data/CUB-200-2011/CUB_200_2011.tgz

(2) Or download the processed data from our Google Drive: [cub.zip], and place the entire folder under the datasets/ directory.
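
If you take route (1), the archive unpacks with a standard tar invocation before being split into train/ and test/ per [TOPIC]:

tar -xzf CUB_200_2011.tgz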

Create a '../datasets' directory for the three datasets above and arrange each dataset to match the following directory structure:

../                                    # parent directory
├── ./                                 # current (project) directory
│   ├── log/                           # (dir.) running logs
│   ├── pre/                           # (dir.) trained models for testing
│   ├── utils/                         # (dir.) implementation of the paper
│   ├── README.md                      # instructions for reproduction
│   ├── test.sh                        # bash script for testing
│   ├── train.py                       # code for training the model
│   └── train.sh                       # bash script for training
└── datasets/
    ├── CIFAR100/                      # CIFAR-100 devkit
    ├── mini-imagenet/
    │   ├── train/                     # (dir.) training images (from Google Drive)
    │   ├── test/                      # (dir.) testing images (from Google Drive)
    │   └── ..some csv files..
    └── cub/                           # (dir.) contains 200 object classes
        ├── train/                     # (dir.) training images (from Google Drive)
        └── test/                      # (dir.) testing images (from Google Drive)
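
A small sanity check (paths assumed relative to the project directory, per the tree above) can confirm the layout before training:

import os

# Dataset roots expected by the layout above (run from the project directory).
expected = [
    '../datasets/CIFAR100',
    '../datasets/mini-imagenet/train',
    '../datasets/mini-imagenet/test',
    '../datasets/cub/train',
    '../datasets/cub/test',
]
missing = [p for p in expected if not os.path.isdir(p)]
print('missing:', missing if missing else 'none')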

Training

Choose the appropriate lines in the train.sh file.

sh train.sh
  • '--base_epochs' can be modified to control the initial accuracy ('Ours' vs 'Ours*' in the paper).
  • Training takes several hours to converge (trained on a single 2080 Ti or 3090 GPU).
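
For orientation, an uncommented line in train.sh might look like the one below; only --base_epochs is confirmed by this README, so treat the invocation as a hypothetical placeholder and defer to the lines actually shipped in train.sh:

python train.py --base_epochs 100    # hypothetical invocation; real flags live in train.sh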

Testing

1. Download the pretrained models into the 'pre' folder.

Pretrained models are available on our [Google Drive].

2. Test

Choose the appropriate lines in the test.sh file.

sh test.sh 
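
To inspect a downloaded checkpoint directly, a minimal sketch (the filename is a hypothetical placeholder; use whichever file test.sh points at inside pre/):

import torch

# Load a checkpoint from the 'pre' folder for inspection (filename hypothetical).
state = torch.load('pre/model.pth', map_location='cpu')
keys = list(state.keys()) if isinstance(state, dict) else []
print(type(state).__name__, keys[:5])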

Main Results

The experimental results obtained with test.sh on the three datasets are shown below; each column gives the accuracy (%) after the corresponding incremental session.

1. CIFAR-100

Model    1      2      3      4      5      6      7      8      9
iCaRL    64.10  53.28  41.69  34.13  27.93  25.06  20.41  15.48  13.73
TOPIC    64.10  56.03  47.89  42.99  38.02  34.60  31.67  28.35  25.86
Ours     63.97  65.86  61.31  57.60  53.39  50.93  48.27  45.36  43.32

2. MiniImageNet

Model    1      2      3      4      5      6      7      8      9
iCaRL    61.31  46.32  42.94  37.63  30.49  24.00  20.89  18.80  17.21
TOPIC    61.31  45.58  43.77  37.19  32.38  29.67  26.44  25.18  21.80
Ours     61.45  63.80  59.53  55.53  52.50  49.60  46.69  43.79  41.92

3. CUB200

Model    1      2      3      4      5      6      7      8      9      10     11
iCaRL    68.68  52.65  48.61  44.16  36.62  29.52  27.83  26.26  24.01  23.89  21.16
TOPIC    68.68  61.01  55.35  50.01  42.42  39.07  35.47  32.87  30.04  25.91  24.85
Ours     68.05  62.01  57.61  53.67  50.77  46.76  45.43  44.53  41.74  39.93  38.45

The results presented here differ slightly from those in the paper, which report the average over multiple runs.

BibTeX

If you use this code for your research, please consider citing:

@inproceedings{zhu2021self,
  title={Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning},
  author={Zhu, Kai and Cao, Yang and Zhai, Wei and Cheng, Jie and Zha, Zheng-Jun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6801--6810},
  year={2021}
}