Random Erasing Data Augmentation. Experiments on CIFAR10, CIFAR100 and Fashion-MNIST

Overview

Random Erasing Data Augmentation
===============================================================

Examples

(Example images: the same inputs with regions erased using black, white, and random pixel values.)
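For reference, below is a minimal standalone sketch of the transform described in the paper: with probability p, a rectangle whose area and aspect ratio are drawn at random is erased with black, white, or random values. This is not the repository's exact implementation; the class name, default values, and the assumption of pixel values in [0, 1] are illustrative.

```python
import math
import random

import torch


class RandomErasingSketch:
    """Erase a random rectangle in a CHW image tensor (paper-style sketch).

    p      : probability of applying the erasing
    sl, sh : min/max fraction of the image area to erase
    r1     : min aspect ratio of the erased region (max is 1/r1)
    value  : 'black', 'white', or 'random' fill, as in the examples above
    Assumes pixel values are in [0, 1]; defaults are illustrative.
    """

    def __init__(self, p=0.5, sl=0.02, sh=0.4, r1=0.3, value='random'):
        self.p, self.sl, self.sh, self.r1, self.value = p, sl, sh, r1, value

    def __call__(self, img):
        if random.random() > self.p:
            return img
        c, h, w = img.shape
        for _ in range(100):  # retry until the sampled rectangle fits
            area = h * w * random.uniform(self.sl, self.sh)
            ratio = random.uniform(self.r1, 1.0 / self.r1)
            eh = int(round(math.sqrt(area * ratio)))
            ew = int(round(math.sqrt(area / ratio)))
            if 0 < eh < h and 0 < ew < w:
                y = random.randint(0, h - eh)
                x = random.randint(0, w - ew)
                if self.value == 'black':
                    img[:, y:y + eh, x:x + ew] = 0.0
                elif self.value == 'white':
                    img[:, y:y + eh, x:x + ew] = 1.0
                else:  # fill with random pixel values
                    img[:, y:y + eh, x:x + ew] = torch.rand(c, eh, ew)
                return img
        return img
```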

This repository contains the source code for the paper "Random Erasing Data Augmentation".

If you find this code useful in your research, please consider citing:

@inproceedings{zhong2020random,
title={Random Erasing Data Augmentation},
author={Zhong, Zhun and Zheng, Liang and Kang, Guoliang and Li, Shaozi and Yang, Yi},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)},
year={2020}
}

Other re-implementations

[Official Torchvision in Transform] (see the usage sketch after this list)

[Pytorch: Random Erasing for ImageNet]

[Python Augmentor]

[Person_reID CamStyle]

[Person_reID_baseline + Random Erasing + Re-ranking]

[Keras re-implementation]
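Since Random Erasing is also available as `transforms.RandomErasing` in recent torchvision releases (the official Torchvision entry above), it can be appended to a standard training pipeline after `ToTensor`. The scale/ratio/value choices and the normalization statistics below are the commonly used CIFAR-10 values, not necessarily the settings used by this repository:

```python
from torchvision import transforms

# Standard CIFAR-10 training augmentation with torchvision's built-in
# RandomErasing appended after ToTensor (RandomErasing operates on tensors).
# Parameter values are illustrative, not this repository's defaults.
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.4), ratio=(0.3, 3.3), value=0),
])
```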

Installation

Requires PyTorch (see the PyTorch installation instructions).

Examples:

CIFAR10

ResNet-20 baseline on CIFAR10: python cifar.py --dataset cifar10 --arch resnet --depth 20

ResNet-20 + Random Erasing on CIFAR10: python cifar.py --dataset cifar10 --arch resnet --depth 20 --p 0.5

CIFAR100

ResNet-20 baseline on CIFAR100: python cifar.py --dataset cifar100 --arch resnet --depth 20

ResNet-20 + Random Erasing on CIFAR100: python cifar.py --dataset cifar100 --arch resnet --depth 20 --p 0.5

Fashion-MNIST

ResNet-20 baseline on Fashion-MNIST: python fashionmnist.py --dataset fashionmnist --arch resnet --depth 20

ResNet-20 + Random Erasing on Fashion-MNIST: python fashionmnist.py --dataset fashionmnist --arch resnet --depth 20 --p 0.5

Other architectures

For ResNet: --arch resnet --depth (20, 32, 44, 56, 110)

For WRN: --arch wrn --depth 28 --widen-factor 10
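
Combining these flags with the examples above, a WRN-28-10 + Random Erasing run on CIFAR10 should look like: python cifar.py --dataset cifar10 --arch wrn --depth 28 --widen-factor 10 --p 0.5 (this command is composed from the flags listed above for illustration).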

Our results

You can reproduce the test error rates (%) reported in our paper:

| Models     | CIFAR10 Base. | CIFAR10 +RE | CIFAR100 Base. | CIFAR100 +RE | Fashion-MNIST Base. | Fashion-MNIST +RE |
|------------|---------------|-------------|----------------|--------------|---------------------|-------------------|
| ResNet-20  | 7.21          | 6.73        | 30.84          | 29.97        | 4.39                | 4.02              |
| ResNet-32  | 6.41          | 5.66        | 28.50          | 27.18        | 4.16                | 3.80              |
| ResNet-44  | 5.53          | 5.13        | 25.27          | 24.29        | 4.41                | 4.01              |
| ResNet-56  | 5.31          | 4.89        | 24.82          | 23.69        | 4.39                | 4.13              |
| ResNet-110 | 5.10          | 4.61        | 23.73          | 22.10        | 4.40                | 4.01              |
| WRN-28-10  | 3.80          | 3.08        | 18.49          | 17.73        | 4.01                | 3.65              |

Note that if you use the latest release of Fashion-MNIST, the performance of both the baseline and RE will be slightly lower than the results reported in our paper. Please refer to the issue.

If you have any questions about this code, please do not hesitate to contact us.

Zhun Zhong

Liang Zheng
