PyTorch implementation of the REMIND method from our ECCV-2020 paper "REMIND Your Neural Network to Prevent Catastrophic Forgetting"

Overview

REMIND Your Neural Network to Prevent Catastrophic Forgetting

This is a PyTorch implementation of the REMIND algorithm from our ECCV-2020 paper. An arXiv pre-print of our paper is available.

REMIND (REplay using Memory INDexing) is a novel brain-inspired streaming learning model that uses tensor quantization to efficiently store hidden representations (e.g., CNN feature maps) for later replay. REMIND implements this compression using Product Quantization (PQ) and outperforms existing models on the ImageNet and CORe50 classification datasets. Further, we demonstrate REMIND's robustness by pioneering streaming Visual Question Answering (VQA), in which an agent must answer questions about images.

Formally, REMIND takes an input image and passes it through the frozen layers of a network to obtain tensor representations (feature maps). It then quantizes the tensors via PQ and stores the resulting indices in memory for replay. The decoder reconstructs a subset of previously stored tensors from their indices, which are replayed to train the plastic layers of the network before inference. We restrict the size of REMIND's replay buffer and use a uniform random storage policy.
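
To make the compression step concrete, here is a minimal sketch of PQ-encoding and decoding feature vectors with FAISS. It is illustrative only, not the repository's implementation; the array sizes and PQ settings (32 codebooks with 8 bits each over 512-dimensional features) are assumptions chosen to roughly mirror typical REMIND settings.

import faiss
import numpy as np

# Stand-in for flattened CNN feature-map rows (one row per spatial position of a 512-channel map).
d, n_codebooks, n_bits = 512, 32, 8
features = np.random.randn(10000, d).astype(np.float32)

pq = faiss.ProductQuantizer(d, n_codebooks, n_bits)
pq.train(features)                    # learn the codebooks (done once on base-initialization features)

codes = pq.compute_codes(features)    # uint8 indices: 32 bytes per vector instead of 2048 bytes of floats
reconstructed = pq.decode(codes)      # approximate features reconstructed for replay

Each stored example then costs only its code bytes plus a label, which is what allows REMIND to keep a large replay buffer in memory.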

[Figure: REMIND overview]

Dependencies

⚠️ ⚠️ For unknown reasons, our code does not reproduce results in PyTorch versions greater than PyTorch 1.3.1. Please follow our instructions below to ensure reproducibility.

We have tested the code with the following packages and versions:

  • Python 3.7.6
  • PyTorch (GPU) 1.3.1
  • torchvision 0.4.2
  • NumPy 1.18.5
  • FAISS (CPU) 1.5.2
  • CUDA 10.1 (also works with CUDA 10.0)
  • Scikit-Learn 0.23.1
  • Scipy 1.1.0
  • NVIDIA GPU

We recommend setting up a conda environment with these same package versions:

conda create -n remind_proj python=3.7
conda activate remind_proj
conda install numpy=1.18.5
conda install pytorch=1.3.1 torchvision=0.4.2 cudatoolkit=10.1 -c pytorch
conda install faiss-cpu=1.5.2 -c pytorch

Setup ImageNet-2012

The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset has 1000 categories and 1.2 million images. The images do not need to be preprocessed or packaged in any database, but the validation images need to be moved into class subfolders (handled by the script in step 3 below).

  1. Download the images from http://image-net.org/download-images

  2. Extract the training data:

    mkdir train && mv ILSVRC2012_img_train.tar train/ && cd train
    tar -xvf ILSVRC2012_img_train.tar && rm -f ILSVRC2012_img_train.tar
    find . -name "*.tar" | while read NAME ; do mkdir -p "${NAME%.tar}"; tar -xvf "${NAME}" -C "${NAME%.tar}"; rm -f "${NAME}"; done
    cd ..
  3. Extract the validation data and move images to subfolders:

    mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xvf ILSVRC2012_img_val.tar
    wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash

Repo Structure & Descriptions

Training REMIND on ImageNet (Classification)

We have provided the necessary files to train REMIND on the exact same ImageNet ordering used in our paper (provided in imagenet_class_order.txt). We also provide steps for running REMIND on an alternative ordering.

To train REMIND on the ImageNet ordering from our paper, follow the steps below:

  1. Run run_imagenet_experiment.sh to train REMIND on the ordering from our paper. Note, this will use our ordering and associated files provided in imagenet_files.

To train REMIND on a different ImageNet ordering, follow the steps below:

  1. Generate a text file containing one class name per line in the desired order (a minimal sketch for this is provided after this list).
  2. Run make_numpy_imagenet_label_files.py to generate the necessary numpy files for the desired ordering using the text file from step 1.
  3. Run train_base_init_network.sh to train an offline model using the desired ordering and label files generated in step 2 on the base init data.
  4. Run run_imagenet_experiment.sh using the label files from step 2 and the ckpt file from step 3 to train REMIND on the desired ordering.
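
For step 1 above, the class-order text file can be generated with a few lines of Python. This is a minimal sketch under assumed names (./train as the extracted ImageNet training folder and my_imagenet_class_order.txt as the output file); substitute any ordering scheme for the seeded shuffle.

import os
import random

train_dir = './train'                     # assumed: extracted ImageNet train folder, one subfolder per class
out_file = 'my_imagenet_class_order.txt'  # assumed output file name

classes = sorted(os.listdir(train_dir))   # WordNet IDs such as n01440764
random.seed(0)
random.shuffle(classes)                   # replace with whatever ordering you want to study

with open(out_file, 'w') as f:
    f.write('\n'.join(classes) + '\n')

The resulting text file is what make_numpy_imagenet_label_files.py consumes in step 2.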

Files generated from the streaming experiment:

  • *.json files containing incremental top-1 and top-5 accuracies
  • *.pth files containing incremental model predictions/probabilities
  • *.pth files containing incremental REMIND classifier (F) weights
  • *.pkl files containing PQ centroids and incremental buffer data (e.g., latent codes)

To continue training REMIND from a previous ckpt:

We save incremental weights and associated data for REMIND after each evaluation cycle, which allows training to resume from these saved files (e.g., after a machine crash). To resume from a checkpoint in run_imagenet_experiment.sh:

  1. Set the --resume_full_path argument to the path where the previous REMIND model was saved.
  2. Set the --streaming_min_class argument to the class REMIND left off on.
  3. Run run_imagenet_experiment.sh

Training REMIND on VQA Datasets

We use the gensen library for question features. Execute the following steps to set it up:

cd ${GENSENPATH} 
git clone git@github.com:erobic/gensen.git
cd ${GENSENPATH}/data/embedding
chmod +x glove2h5.sh && ./glove2h5.sh
cd ${GENSENPATH}/data/models
chmod +x download_models.sh && ./download_models.sh

Training REMIND on CLEVR

Note: For convenience, we pre-extract all the features including the PQ encoded features. This requires 140 GB of free space, assuming images are deleted after feature extraction.

  1. Download and extract CLEVR images+annotations:

    wget https://dl.fbaipublicfiles.com/clevr/CLEVR_v1.0.zip
    unzip CLEVR_v1.0.zip
  2. Extract question features

    • Clone the gensen repository and download glove features:
    cd ${GENSENPATH} 
    git clone git@github.com:erobic/gensen.git
    cd ${GENSENPATH}/data/embedding
    chmod +x glove2h5.sh && ./glove2h5.sh
    cd ${GENSENPATH}/data/models
    chmod +x download_models.sh && ./download_models.sh
    
    • Edit vqa_experiments/clevr/extract_question_features_clevr.py, changing the DATA_PATH variable to point to the CLEVR dataset and GENSEN_PATH to point to the gensen repository, then extract the features: python vqa_experiments/clevr/extract_question_features_clevr.py

    • Pre-process the CLEVR questions: edit the $PATH variable in the vqa_experiments/clevr/preprocess_clevr.py file, pointing it to the directory where CLEVR was extracted

  3. Extract image features, train PQ encoder and extract encoded features

    • Extract image features: python -u vqa_experiments/clevr/extract_image_features_clevr.py --path /path/to/CLEVR
    • In pq_encoding_clevr.py, change the values of PATH and streaming_type (set streaming_type to either 'iid' or 'qtype')
    • Train PQ encoder and extract features: python vqa_experiments/clevr/pq_encoding_clevr.py
  4. Train REMIND

    • Edit data_path in vqa_experiments/configs/config_CLEVR_streaming.py
    • Run ./vqa_experiments/run_clevr_experiment.sh (Set DATA_ORDER to either qtype or iid to define the data order)

Training REMIND on TDIUC

Note: For convenience, we pre-extract all the features including the PQ encoded features. This requires around 170 GB of free space, assuming images are deleted after feature extraction.

  1. Download TDIUC

    cd ${TDIUC_PATH}
    wget https://kushalkafle.com/data/TDIUC.zip && unzip TDIUC.zip
    cd TDIUC && python setup.py --download Y # You may need to change print '' statements to print('')
    
  2. Extract question features

    • Edit vqa_experiments/tdiuc/extract_question_features_tdiuc.py, changing the DATA_PATH variable to point to the TDIUC dataset and GENSEN_PATH to point to the gensen repository, then extract the features: python vqa_experiments/tdiuc/extract_question_features_tdiuc.py

    • Pre-process the TDIUC questions: edit the $PATH variable in the vqa_experiments/tdiuc/preprocess_tdiuc.py file, pointing it to the directory where TDIUC was extracted

  3. Extract image features, train PQ encoder and extract encoded features

    • Extract image features: python -u vqa_experiments/tdiuc/extract_image_features_tdiuc.py --path /path/to/TDIUC
    • In pq_encoding_tdiuc.py, change the values of PATH and streaming_type (set streaming_type to either 'iid' or 'qtype')
    • Train PQ encoder and extract features: python vqa_experiments/tdiuc/pq_encoding_tdiuc.py
  4. Train REMIND

    • Edit data_path in vqa_experiments/configs/config_TDIUC_streaming.py
    • Run ./vqa_experiments/run_tdiuc_experiment.sh (Set DATA_ORDER to either qtype or iid to define the data order)

Citation

If using this code, please cite our paper.

@inproceedings{hayes2020remind,
  title={REMIND Your Neural Network to Prevent Catastrophic Forgetting},
  author={Hayes, Tyler L and Kafle, Kushal and Shrestha, Robik and Acharya, Manoj and Kanan, Christopher},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2020}
}