Automatic Augmentation Zoo

An integration of several popular automatic augmentation methods, including OHL (Online Hyper-parameter Learning for Auto-Augmentation Strategy) and AWS (Improving Auto-Augment via Augmentation-Wise Weight Sharing), by SenseTime Research.

We will post updates regularly; star 🌟 or watch 👓 this repository to stay up to date.

Introduction

This repository provides the official implementations of OHL and AWS, and will also integrate other popular auto-augmentation methods (such as AutoAugment, Fast AutoAugment, and Adversarial AutoAugment) in pure PyTorch. We use torch.distributed for distributed training. Model checkpoints will be uploaded to Google Drive or OneDrive soon.
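
For reference, a distributed run in PyTorch is usually initialized along the following lines. This is an illustrative sketch under the standard env:// convention, not this repository's actual launcher code.

# Illustrative sketch of torch.distributed initialization; not this repo's launcher.
import torch
import torch.distributed as dist

def init_distributed(backend='nccl'):
    # Assumes RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT are set in the environment.
    dist.init_process_group(backend=backend, init_method='env://')
    # Pin each process to a single GPU.
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())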

Dependencies

We recommend conducting experiments with:

  • python 3.6.3
  • pytorch 1.1.0, torchvision 0.2.1

All dependencies are listed in requirements.txt; you can install them with pip install -r requirements.txt.
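
For example, a clean setup might look like this (a minimal sketch; the virtual-environment step is our suggestion, not a repo requirement):

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt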

Running

  1. Create the directory for your experiment.

cd /path/to/this/repo
mkdir -p exp/aws_search1

  2. Copy the configurations into your workspace.

cp scripts/search.sh configs/aws.yaml exp/aws_search1
cd exp/aws_search1

  3. Start searching.

# sh ./search.sh [job_name] [num_gpus]
sh ./search.sh Test 8
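
After step 2, the experiment workspace should contain the copied script and config:

exp/aws_search1/
├── search.sh
└── aws.yaml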

An example of the yaml configuration:

version: 0.1.0

dist:
    type: torch
    kwargs:
        node0_addr: auto
        node0_port: auto
        mp_start_method: fork   # fork or spawn; spawn would be too slow for DataLoader

pipeline:
    type: aws
    common_kwargs:
        dist_training: &dist_training False
#        job_name:         [will be assigned in runtime]
#        exp_root:         [will be assigned in runtime]
#        meta_tb_lg_root:  [will be assigned in runtime]

        data:
            type: cifar100               # case-insensitive (will be converted to lower case in runtime)
#            dataset_root: /path/to/dataset/root   # default: ~/datasets/[type]
            train_set_size: 40000
            val_set_size: 10000
            batch_size: 256
            dist_training: *dist_training
            num_workers: 3
            cutout: True
            cutlen: 16

        model_grad_clip: 3.0
        model:
            type: WRN
            kwargs:
#                num_classes: [will be assigned in runtime]
                bn_mom: 0.5

        agent:
            type: ppo           # ppo or REINFORCE
            kwargs:
                initial_baseline_ratio: 0
                baseline_mom: 0.9
                clip_epsilon: 0.2
                max_training_times: 5
                early_stopping_kl: 0.002
                entropy_bonus: 0
                op_cfg:
                    type: Adam         # any type in torch.optim
                    kwargs:
#                        lr: [will be assigned in runtime] (=sc.kwargs.base_lr)
                        betas: !!python/tuple [0.5, 0.999]
                        weight_decay: 0
                sc_cfg:
                    type: Constant
                    kwargs:
                        base_lr_divisor: 8      # base_lr = warmup_lr / base_lr_divisor
                        warmup_lr: 0.1          # lr at the end of warming up
                        warmup_iters: 10        # number of warmup iterations
                        iters: &finetune_lp 350
        
        criterion:
            type: LSCE
            kwargs:
                smooth_ratio: 0.05


    special_kwargs:
        pretrained_ckpt_path: ~ # /path/to/pretrained_ckpt.pth.tar
        pretrain_ep: &pretrain_ep 200
        pretrain_op: &sgd
            type: SGD       # any type in torch.optim
            kwargs:
#                lr: [will be assigned in runtime] (=sc.kwargs.base_lr)
                nesterov: True
                momentum: 0.9
                weight_decay: 0.0001
        pretrain_sc:
            type: Cosine
            kwargs:
                base_lr_divisor: 4      # base_lr = warmup_lr / base_lr_divisor
                warmup_lr: 0.2          # lr at the end of warming up
                warmup_divisor: 200     # warmup_epochs = epochs / warmup_divisor
                epochs: *pretrain_ep
                min_lr: &finetune_lr 0.001

        finetuned_ckpt_path: ~  # /path/to/finetuned_ckpt.pth.tar
        finetune_lp: *finetune_lp
        finetune_ep: &finetune_ep 10
        rewarded_ep: 2
        finetune_op: *sgd
        finetune_sc:
            type: Constant
            kwargs:
                base_lr: *finetune_lr
                warmup_lr: *finetune_lr
                warmup_iters: 0
                epochs: *finetune_ep

        retrain_ep: &retrain_ep 300
        retrain_op: *sgd
        retrain_sc:
            type: Cosine
            kwargs:
                base_lr_divisor: 4      # base_lr = warmup_lr / base_lr_divisor
                warmup_lr: 0.4          # lr at the end of warming up
                warmup_divisor: 200     # warmup_epochs = epochs / warmup_divisor
                epochs: *retrain_ep
                min_lr: 0
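
As the scheduler comments indicate, base_lr is derived from warmup_lr: in retrain_sc above, for instance, base_lr = 0.4 / 4 = 0.1, and warmup lasts 300 / 200 = 1.5 epochs. Note also that the config relies on YAML anchors/aliases (e.g. &sgd / *sgd) and the !!python/tuple tag, which PyYAML only resolves with its unsafe loader. A minimal loading sketch, assuming PyYAML and a trusted config file named aws.yaml:

# Minimal sketch: load a config like the one above with PyYAML.
# !!python/tuple is only resolved by the unsafe loader, so load trusted files only.
import yaml

with open('aws.yaml') as f:
    cfg = yaml.load(f, Loader=yaml.UnsafeLoader)

print(cfg['pipeline']['type'])  # aws
print(cfg['pipeline']['common_kwargs']['agent']['kwargs']['op_cfg']['kwargs']['betas'])  # (0.5, 0.999)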

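The agent block above configures a PPO controller (clip_epsilon, entropy_bonus, max_training_times, early_stopping_kl). For orientation, a standard PPO clipped surrogate objective parameterized this way looks roughly as follows; this is an illustrative sketch with names of our own choosing, not this repo's agent code.

# Illustrative PPO clipped surrogate loss; names are ours, not the repo's.
import torch

def ppo_loss(logp_new, logp_old, advantage, clip_epsilon=0.2, entropy=None, entropy_bonus=0.0):
    ratio = torch.exp(logp_new - logp_old)          # importance ratio pi_new / pi_old
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_epsilon, 1.0 + clip_epsilon) * advantage
    loss = -torch.min(unclipped, clipped).mean()    # pessimistic (clipped) bound
    if entropy is not None:
        loss = loss - entropy_bonus * entropy.mean()  # optional exploration bonus
    return loss

early_stopping_kl then matches the common PPO heuristic of repeating the policy update (here up to max_training_times times) and stopping once the approximate KL divergence between the new and old policies exceeds the threshold.
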
Citation

If you use this code in your research, please consider citing our papers (OHL and AWS).

@inproceedings{lin2019online,
  title={Online Hyper-parameter Learning for Auto-Augmentation Strategy},
  author={Lin, Chen and Guo, Minghao and Li, Chuming and Yuan, Xin and Wu, Wei and Yan, Junjie and Lin, Dahua and Ouyang, Wanli},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={6579--6588},
  year={2019}
}

@article{tian2020improving,
  title={Improving Auto-Augment via Augmentation-Wise Weight Sharing},
  author={Tian, Keyu and Lin, Chen and Sun, Ming and Zhou, Luping and Yan, Junjie and Ouyang, Wanli},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

Contact for Issues

References & Open-source Projects
