
Continual Learning on Noisy Data Streams via Self-Purified Replay

This repository contains the official PyTorch implementation for our ICCV2021 paper.

  • Chris Dongjoo Kim*, Jinseo Jeong*, Sangwoo Moon, Gunhee Kim. Continual Learning on Noisy Data Streams via Self-Purified Replay. In ICCV, 2021 (* equal contribution).

[Paper Link][Slides][Poster]

System Dependencies

  • Python >= 3.6.1
  • GPU with CUDA >= 9.0 support

Installation

Using a virtual environment is recommended.

$ conda create --name SPR python=3.6
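
Then activate the environment before installing any packages:

$ conda activate SPR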

Install pytorch==1.7.0 and torchvision==0.8.1. Then, install the rest of the requirements.
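
For example, one way to install these versions is via pip (a sketch; if you need a specific CUDA build, pick the matching wheel from the PyTorch installation instructions instead):

$ pip install torch==1.7.0 torchvision==0.8.1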

$ pip install -r requirements.txt

Data and Log directory set-up

Create the checkpoints and data directories. We recommend using symbolic links, as shown below.

$ mkdir data
$ ln -s [MNIST Data Path] data/mnist
$ ln -s [CIFAR10 Data Path] data/cifar10
$ ln -s [CIFAR100 Data Path] data/cifar100
$ ln -s [Webvision Data Path] data/webvision

$ ln -s [log directory path] checkpoints
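
For instance, if your datasets live under /data and your logs under /experiments (hypothetical paths; substitute your own), the setup could look like:

$ ln -s /data/mnist data/mnist               # hypothetical dataset path
$ ln -s /experiments/spr_logs checkpoints    # hypothetical log path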

Run

Specify parameters in the config YAML and episode YAML files.

python main.py --log-dir [log directory path] --c [config file path] --e [episode file path] --override "[override1|override2|...]" --random_seed [seed]

# e.g. to run the MNIST symmetric noise 40% experiment,
python main.py --log-dir [log directory path] --c configs/mnist_spr.yaml --e episodes/mnist-split_epc1_a.yaml --override "corruption_percent=0.4";

# e.g. to run the CIFAR-10 asymmetric noise 40% experiment,
python main.py --log-dir [log directory path] --c configs/cifar10_spr.yaml --e episodes/cifar10-split_epc1_asym_a.yaml --override "asymmetric_nosie=True|corruption_percent=0.4";

# e.g. to run the CIFAR-100 superclass symmetric noise 40% experiment,
python main.py --log-dir [log directory path] --c configs/cifar100_spr.yaml --e episodes/cifar100sup-split_epc1_a.yaml --override "superclass_nosie=True|corruption_percent=0.4";

Expert Parallel Training

If you use a Slurm environment, you can train the expert models in advance.

# e.g. to run the MNIST symmetric noise 40% experiment,
python meta-main.py --log-dir [log directory path] -c configs/mnist_spr.yaml -e episodes/mnist-split_epc1_a.yaml --random_seed [seed] --override "corruption_percent=0.4" --njobs 10 --jobs_per_gpu 3

# You can also train only the experts for later use by adding the --expert_train_only option.
python meta-main.py --log-dir [log directory path] -c configs/mnist_spr.yaml -e episodes/mnist-split_epc1_a.yaml --random_seed [seed] --override "corruption_percent=0.4" --ngpu 10 --jobs_per_gpu 3 --expert_train_only

# To use the trained experts, set the same [log directory path] and [seed].
python main.py --log-dir [log directory path] --c configs/mnist_spr.yaml --e episodes/mnist-split_epc1_a.yaml --random_seed [seed] --override "corruption_percent=0.4";
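
Putting the two steps together, a complete flow could look like the following (the log directory name and seed below are arbitrary placeholders):

# 1) Train only the experts in advance.
python meta-main.py --log-dir checkpoints/mnist_sym40 -c configs/mnist_spr.yaml -e episodes/mnist-split_epc1_a.yaml --random_seed 0 --override "corruption_percent=0.4" --ngpu 10 --jobs_per_gpu 3 --expert_train_only

# 2) Reuse them by running main.py with the same log directory and seed.
python main.py --log-dir checkpoints/mnist_sym40 --c configs/mnist_spr.yaml --e episodes/mnist-split_epc1_a.yaml --random_seed 0 --override "corruption_percent=0.4"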

Citation

The code and dataset are free to use for academic purposes only. If you use any of the material in this repository as part of your work, we ask you to cite:

@inproceedings{kim-ICCV-2021,
    author    = {Chris Dongjoo Kim and Jinseo Jeong and Sangwoo Moon and Gunhee Kim},
    title     = "{Continual Learning on Noisy Data Streams via Self-Purified Replay}",
    booktitle = {ICCV},
    year      = 2021
}
