Essentials for Class Incremental Learning

Official repository of the paper 'Essentials for Class Incremental Learning'

Overview

This PyTorch repository contains the code for our work Essentials for Class Incremental Learning.

This work presents a straightforward class-incremental learning system that focuses on the essential components and already exceeds the state of the art without integrating sophisticated modules.

Requirements

To install requirements:

pip install -r requirements.txt

Training and Evaluation (CIFAR-100, ImageNet-100, ImageNet-1k)

The following scripts contain both the training and evaluation code. The model is evaluated after each phase of class-IL.

With Knowledge Distillation (KD)

To train the base CCIL model:

bash ./scripts/run_cifar.sh
bash ./scripts/run_imagenet100.sh
bash ./scripts/run_imagenet1k.sh
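
For reference, the knowledge-distillation term switched on by --kd in these runs is, in spirit, a temperature-softened KL divergence between the previous-phase model's logits and the current model's logits on the old classes, added to the usual cross-entropy. The sketch below illustrates this generically; it is not the repository's exact implementation, and the temperature T, the helper names, and the batchmean reduction are assumptions.

import torch.nn.functional as F

def kd_loss(new_logits, old_logits, T=2.0):
    # Softened KL divergence between the previous-phase (teacher) logits and
    # the current-phase (student) logits. Generic class-IL KD sketch.
    log_p_student = F.log_softmax(new_logits / T, dim=1)
    p_teacher = F.softmax(old_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def total_loss(logits, labels, prev_logits, num_old_classes, w_kd=1.0):
    # Cross-entropy on the current classes plus a weighted KD term (--w-kd)
    # restricted to the logits of previously seen classes.
    ce = F.cross_entropy(logits, labels)
    kd = kd_loss(logits[:, :num_old_classes], prev_logits[:, :num_old_classes])
    return ce + w_kd * kd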

To train CCIL + Self-distillation:

bash ./scripts/run_cifar_w_sd.sh
bash ./scripts/run_imagenet100_w_sd.sh
bash ./scripts/run_imagenet1k_w_sd.sh
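
The self-distillation variant re-trains the first-task model for several generations, each generation distilling from the previous one (controlled by --num-sd and --epochs-sd). A minimal sketch of that loop, with hypothetical helper names standing in for the repository's training code:

def self_distill_first_task(make_model, train_fn, num_sd, epochs, epochs_sd):
    # Sketch: the first-task model is trained as usual, then re-trained from
    # scratch for --num-sd generations, each generation distilling from the
    # previous one for --epochs-sd epochs. make_model and train_fn are
    # hypothetical helpers, not functions from this repository.
    teacher = train_fn(make_model(), teacher=None, epochs=epochs)
    for _ in range(num_sd):
        student = train_fn(make_model(), teacher=teacher, epochs=epochs_sd)
        teacher = student      # the next generation distills from this snapshot
    return teacher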

Results (CIFAR-100)

Model name | Avg Acc (5 iTasks) | Avg Acc (10 iTasks)
CCIL       | 66.44              | 64.86
CCIL + SD  | 67.17              | 65.86

Results (ImageNet-100)

Model name | Avg Acc (5 iTasks) | Avg Acc (10 iTasks)
CCIL       | 77.99              | 75.99
CCIL + SD  | 79.44              | 76.77

Results (ImageNet-1k)

Model name | Avg Acc (5 iTasks) | Avg Acc (10 iTasks)
CCIL       | 67.53              | 65.61
CCIL + SD  | 68.04              | 66.25

List of Arguments

  • Distillation Methods

    • Knowledge Distillation (--kd, --w-kd X), where X is the weight for the KD loss, default=1.0
    • Representation Distillation (--rd, --w-rd X), where X is the weight for the cos-RD loss, default=0.05
    • Contrastive Representation Distillation (--nce, --w-nce X), only valid for CIFAR-100; X is the weight for the NCE loss
  • Regularization for the first task

    • Self-distillation (--num-sd X, --epochs-sd Y), where X is the number of generations and Y is the number of self-distillation epochs
    • Mixup (--mixup, --mixup-alpha X), where X is the mixup alpha value, default=0.1
    • Heavy Augmentation (--aug)
    • Label Smoothing (--label-smoothing, --smoothing-alpha X), where X is the smoothing alpha value, default=0.1
  • Incremental class setting

    • Number of base classes (--start-classes 50)
    • 5 phases (--new-classes 10)
    • 10 phases (--new-classes 5)
  • Cosine learning rate decay (--cosine)

  • Save and Load

    • Experiment Name (--exp-name X)
    • Save checkpoints (--save)
    • Resume checkpoints (--resume, --resume-path X), only to resume from the first snapshot
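
Putting these flags together, a CIFAR-100 run with 50 base classes and 5 incremental phases of 10 classes, trained with KD and cosine learning-rate decay, could be launched roughly as shown below. The entry-point name main.py is an assumption; see the scripts above for the exact invocations used in this repository.

python main.py --exp-name ccil_cifar_5phase \
    --start-classes 50 --new-classes 10 \
    --kd --w-kd 1.0 --cosine --save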

Citation

@article{ccil_mittal,
    author  = {Sudhanshu Mittal and Silvio Galesso and Thomas Brox},
    title   = {Essentials for Class Incremental Learning},
    journal = {arXiv preprint arXiv:2102.09517},
    year    = {2021},
}