This is the PyTorch re-implementation of IterNorm.

Overview

IterNorm-pytorch

PyTorch re-implementation of the IterNorm method, which is described in the following paper:

Iterative Normalization: Beyond Standardization towards Efficient Whitening

Lei Huang, Yi Zhou, Fan Zhu, Li Liu, Ling Shao

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019 (accepted). arXiv:1904.03441

This project also provides a PyTorch implementation of Decorrelated Batch Normalization (CVPR 2018, arXiv:1804.08450); for more details, please refer to the Torch project.

Requirements and Dependencies

  • Install PyTorch with CUDA support (for GPU). (Experiments were validated on Python 3.6.8 and pytorch-nightly 1.0.0.)
  • (Optional, for visualization) install the visdom dependency:
pip install visdom

Experiments

1. VGG network on the CIFAR-10 dataset:

Run the scripts in ./cifar10/experiments/vgg. Note that the dataset root directory should be set via the '--dataset-root' parameter, and the expected dataset layout is:

-<dataset-root>
|-cifar-10-batches-py
||-data_batch_1
||-data_batch_2
||-data_batch_3
||-data_batch_4
||-data_batch_5
||-test_batch

If the dataset does not exist, the script will download it, provided that the dataset-root directory exists.
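
For reference, here is a minimal loading sketch matching this layout (assuming the scripts rely on torchvision.datasets.CIFAR10; the path below is a placeholder for the value passed to '--dataset-root'):

import torchvision
import torchvision.transforms as transforms

dataset_root = '/path/to/dataset-root'  # placeholder; pass the same path via --dataset-root
# download=True fetches CIFAR-10 into <dataset-root>/cifar-10-batches-py if it is missing
train_set = torchvision.datasets.CIFAR10(root=dataset_root, train=True, download=True,
                                         transform=transforms.ToTensor())
test_set = torchvision.datasets.CIFAR10(root=dataset_root, train=False, download=True,
                                        transform=transforms.ToTensor())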

2. Wide Residual Network on the CIFAR-10 dataset:

Run the scripts in ./cifar10/experiments/wrn.

3. ImageNet experiments:

Run the scripts in ./ImageNet/experiment. Note that the ResNet-18 experiments run on one GPU, while ResNet-50/101 run on 4 GPUs in the scripts.

Note that the dataset root directory should be set via the '--dataset-root' parameter, and the expected dataset layout is:

-<dataset-root>
|-train
||-class1
||-...
||-class1000  
|-var
||-class1
||-...
||-class1000  
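
The layout above matches torchvision's ImageFolder convention; a minimal loading sketch (assuming the scripts use torchvision.datasets.ImageFolder; the path and transform are placeholders) is:

import os
import torchvision
import torchvision.transforms as transforms

dataset_root = '/path/to/dataset-root'  # placeholder; pass the same path via --dataset-root
transform = transforms.Compose([transforms.RandomResizedCrop(224), transforms.ToTensor()])
# sub-directory names follow the layout shown above
train_set = torchvision.datasets.ImageFolder(os.path.join(dataset_root, 'train'), transform)
val_set = torchvision.datasets.ImageFolder(os.path.join(dataset_root, 'var'), transform)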

Using IterNorm in other projects/tasks

(1) Copy ./extension/normalization/iterative_normalization.py to the target directory.

(2) Import the IterNorm class from iterative_normalization.py.

(3) Generally speaking, replace a BatchNorm layer with IterNorm, or add it wherever you want the features/channels decorrelated. For efficiency (note that BatchNorm is integrated into cuDNN while IterNorm is implemented in plain PyTorch without optimization), we recommend: 1) replacing the first BatchNorm; 2) inserting an extra IterNorm before the first skip connection in a ResNet; 3) inserting it before the final linear classifier, as described in the paper.

(4) Some tips on the hyperparameters (group size G and iteration number T). We recommend G=64 (i.e., 64 channels per group) and T=5 by default. If you train with a large batch size (e.g., >1024), you can increase either G or T. For fine-tuning, fixing G=64 or G=32 and searching over T={3,4,5,6,7,8} may help. A minimal usage sketch is shown below.
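
Here is a minimal sketch of step (3), replacing the first BatchNorm of a small network with IterNorm. The constructor arguments (number of features, channels per group, and T) are assumptions based on the defaults recommended above; check the actual signature in iterative_normalization.py before use.

import torch
import torch.nn as nn
from iterative_normalization import IterNorm  # assumes the file was copied next to this script

class SmallNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        # replace the first BatchNorm with IterNorm; the keyword names below are assumptions,
        # verify them against the definition in iterative_normalization.py
        self.norm1 = IterNorm(64, num_channels=64, T=5)  # instead of nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.relu(self.norm1(self.conv1(x)))
        x = self.pool(x).flatten(1)
        return self.fc(x)

model = SmallNet()
out = model(torch.randn(8, 3, 32, 32))  # sanity check on a random CIFAR-sized batch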

Owner
Lei Huang
Ph.D. at Beihang University; research interests: deep learning, semi-supervised learning, active learning, and their applications to visual and textual data.