Geometry-aware Instance-reweighted Adversarial Training

Overview

This repository provides codes for Geometry-aware Instance-reweighted Adversarial Training (https://openreview.net/forum?id=iAX0l6Cz8ub) (ICLR 2021 oral).
Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, and Mohan Kankanhalli

What is the nature of adversarial training?

Adversarial training employs adversarial data for updating the models. For more details on the nature of adversarial training, refer to the FAT GitHub repository for preliminaries.
In this repo, you will know:

FACT 1: Model Capacity is NOT enough for adversarial training.

We plot the standard training error (Natural) and the adversarial training error (PGD-10) over the training epochs of the standard AT (Madry's) on the CIFAR-10 dataset. *Left panel*: AT on networks of different sizes (blue lines) and standard training (ST, the red line) on ResNet-18. *Right panel*: AT on ResNet-18 under different perturbation bounds eps_train.

Refer to FAT's GitHub for the standard AT by setting

python FAT.py --epsilon 0.031 --net 'resnet18' --tau 10 --dynamictau False

or use the codes in this repo by setting

python GAIRAT.py --epsilon 0.031 --net 'resnet18' --Lambda 'inf'

to recover the standard AT (Madry's).

The over-parameterized models that fit natural data entirely in standard training (ST) are still far from enough for fitting adversarial data in adversarial training (AT). Compared with ST fitting the natural data points, AT smooths the neighborhoods of natural data, so that adversarial data consume significantly more model capacity than natural data.

The volume of this neighborhood is exponentially large w.r.t. the input dimension d, even if the perturbation bound eps is small.
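To make this concrete, here is a standard covering-number calculation (our illustration, not the paper's derivation): even halving the radius of an Linf ball in R^d already requires exponentially many smaller balls.

```latex
% Covering an l_inf ball of radius eps in R^d with l_inf balls of
% radius eps/2 needs exactly 2^d of them (split each coordinate in half):
N\bigl(B_\infty(x,\varepsilon),\, \varepsilon/2\bigr) = 2^d .
```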

Under the computational budget of 100 epochs, the networks hardly reach zero error on the adversarial training data.

FACT 2: Data points are inherently different.

More attackable data are closer to the decision boundary.

More guarded data are farther away from the decision boundary.

More attackable data (lighter red and blue) are closer to the decision boundary; more guarded data (darker red and blue) are farther away from the decision boundary. *Left panel*: Two toy examples. *Right panel*: The model’s output distribution of two randomly selected classes from the CIFAR-10 dataset. The degree of robustness (denoted by the color gradient) of a data point is calculated based on the least number of iterations κ that PGD needs to find its misclassified adversarial variant.

Therefore, given the limited model capacity, we should treat data differently for updating the model in adversarial training.

IDEA: Geometrically speaking, a natural data point closer to/farther from the class boundary is less/more robust, and the corresponding adversarial data point should be assigned a larger/smaller weight for updating the model.
To implement this idea, we propose geometry-aware instance-reweighted adversarial training (GAIRAT), where the weights are based on how difficult it is to attack a natural data point.
This difficulty is approximated by the least number of PGD steps that the PGD method requires to generate a misclassified adversarial variant, as sketched below.

The illustration of GAIRAT. GAIRAT explicitly gives larger weights to the losses of adversarial data (larger red) whose natural counterparts are closer to the decision boundary (lighter blue), and smaller weights to the losses of adversarial data (smaller red) whose natural counterparts are farther away from the decision boundary (darker blue).

GAIRAT's Implementation

For updating the model, GAIRAT assigns an instance-dependent weight (reweight) to the loss of each adversarial data point (see GAIR.py).
The instance-dependent weight depends on num_steps, the least number of PGD steps needed to generate the misclassified adversarial variant; a sketch of this weighting follows.
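For reference, a minimal sketch of the tanh-type weight assignment described in the paper, assuming kappa holds the per-example least PGD step counts. The function name gair_weights, the default Lambda, and the batch normalization are illustrative; GAIR.py is the authoritative implementation.

```python
import torch
import torch.nn.functional as F

def gair_weights(kappa, num_steps, Lambda=-1.0):
    """Tanh-type weight from the paper:
        w_i = (1 + tanh(Lambda + 5 * (1 - 2 * kappa_i / K))) / 2,
    normalized to sum to 1 over the batch. Small kappa (attackable data)
    yields a large weight; large kappa (guarded data) a small one."""
    kappa = kappa.float()
    w = (1.0 + torch.tanh(Lambda + 5.0 * (1.0 - 2.0 * kappa / num_steps))) / 2.0
    return w / w.sum()

# Hypothetical use inside the training loop:
# logits = model(x_adv)
# loss = (gair_weights(kappa, K) * F.cross_entropy(logits, y, reduction='none')).sum()
```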

Preferred Prerequisites

  • Python (3.6)
  • PyTorch (1.2.0)
  • CUDA
  • numpy
  • foolbox

Running GAIRAT, GAIR-FAT on benchmark datasets (CIFAR-10 and SVHN)

Here are examples:

  • Train GAIRAT and GAIR-FAT on the WRN-32-10 model on CIFAR-10 and compare our results with AT and FAT
CUDA_VISIBLE_DEVICES='0' python GAIRAT.py 
CUDA_VISIBLE_DEVICES='0' python GAIR_FAT.py 
  • How to recover the original FAT and AT using our code?
CUDA_VISIBLE_DEVICES='0' python GAIRAT.py --Lambda 'inf' --output_dir './AT_results' 
CUDA_VISIBLE_DEVICES='0' python GAIR_FAT.py --Lambda 'inf' --output_dir './FAT_results' 
  • Evaluations: after running, you can find ./GAIRAT_result/log_results.txt and ./GAIR_FAT_result/log_results.txt for checking natural accuracy and PGD-20 test accuracy.
    We also evaluate our models using PGD+. PGD+ is the same as PG_ours in the RST repo: PGD with 5 random starts, where each start has 40 steps with step size 0.01 (40 × 5 = 200 iterations for each test datum). Since PGD+ is computationally expensive, we only evaluate the best checkpoint bestpoint.pth.tar and the last checkpoint checkpoint.pth.tar in the folders GAIRAT_result and GAIR_FAT_result, respectively. A minimal sketch of this protocol follows the commands below.
CUDA_VISIBLE_DEVICES='0' python eval_PGD_plus.py --model_path './GAIRAT_result/bestpoint.pth.tar' --output_suffix='./GAIRAT_PGD_plus'
CUDA_VISIBLE_DEVICES='0' python eval_PGD_plus.py --model_path './GAIR_FAT_result/bestpoint.pth.tar' --output_suffix='./GAIR_FAT_PGD_plus'
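For reference, here is a hypothetical sketch of the PGD+ protocol just described (5 restarts × 40 steps, step size 0.01). eval_PGD_plus.py is the authoritative implementation; the function name pgd_plus_accuracy and its defaults are our own.

```python
import torch
import torch.nn.functional as F

def pgd_plus_accuracy(model, x, y, eps=0.031, alpha=0.01, steps=40, restarts=5):
    """PGD+ as described above: several random restarts; an example
    counts as robust only if every restart fails to fool the model."""
    robust = torch.ones(x.size(0), dtype=torch.bool, device=x.device)
    for _ in range(restarts):
        # Random start inside the eps-ball, clipped to the image range.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        with torch.no_grad():
            robust &= model(x_adv).argmax(1) == y
    return robust.float().mean().item()
```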

White-box evaluations on WRN-32-10

| Defense (best checkpoint) | Natural Acc. | PGD-20 Acc. | PGD+ Acc. |
| --- | --- | --- | --- |
| AT (Madry) | 86.92% ± 0.24% | 51.96% ± 0.21% | 51.28% ± 0.23% |
| FAT | 89.16% ± 0.15% | 51.24% ± 0.14% | 46.14% ± 0.19% |
| GAIRAT | 85.75% ± 0.23% | 57.81% ± 0.54% | 55.61% ± 0.61% |
| GAIR-FAT | 88.59% ± 0.12% | 56.21% ± 0.52% | 53.50% ± 0.60% |

| Defense (last checkpoint) | Natural Acc. | PGD-20 Acc. | PGD+ Acc. |
| --- | --- | --- | --- |
| AT (Madry) | 86.62% ± 0.22% | 46.73% ± 0.08% | 46.08% ± 0.07% |
| FAT | 88.18% ± 0.19% | 46.79% ± 0.34% | 45.80% ± 0.16% |
| GAIRAT | 85.49% ± 0.25% | 53.76% ± 0.49% | 50.32% ± 0.48% |
| GAIR-FAT | 88.44% ± 0.10% | 50.64% ± 0.56% | 47.51% ± 0.51% |

For more details, refer to Table 1 and Appendix C.8 in the paper.

Benchmarking robustness with additional 500K unlabeled data on the CIFAR-10 dataset

In this repo, we unleash the full power of our geometry-aware instance-reweighted methods by incorporating 500K unlabeled data (i.e., GAIR-RST). In terms of both evaluation metrics, i.e., generalization and robustness, we obtain the best WRN-28-10 model among all publicly available robust models.

  • How to train such a superior model from scratch?
  1. Download ti_500K_pseudo_labeled.pickle, which contains our 500K pseudo-labeled TinyImages, from this link (auxiliary data provided by Carmon et al., 2019). Store ti_500K_pseudo_labeled.pickle in the folder ./data
  2. You may need multiple GPUs to run this:
chmod +x ./GAIR_RST/run_training.sh
./GAIR_RST/run_training.sh
  3. We evaluate the robust model by its natural test accuracy on natural test data and its robust test accuracy under AutoAttack. AutoAttack is a combination of two white-box attacks and two black-box attacks; a minimal usage sketch follows the command below.
chmod +x ./GAIR_RST/autoattack/examples/run_eval.sh
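For reference, a minimal AutoAttack usage sketch, assuming the standalone autoattack package (Croce & Hein, 2020). run_eval.sh is the authoritative entry point; model, x_test, and y_test are placeholders.

```python
import torch
from autoattack import AutoAttack  # https://github.com/fra31/auto-attack

# model: a WRN-28-10 returning logits for inputs in [0, 1] (placeholder).
model.eval()
adversary = AutoAttack(model, norm='Linf', eps=0.031, version='standard')
# x_test in [0, 1], shape [N, 3, 32, 32]; y_test: integer class labels.
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```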

White-box evaluations on WRN-28-10

We evaluate the robustness on the CIFAR-10 dataset under AutoAttack (Croce & Hein, 2020).

Here we list the results using WRN-28-10 on the leaderboard together with our results. In particular, we use test eps = 0.031, which is the same as the training eps of our GAIR-RST.

CIFAR-10 - Linf

The robust accuracy is evaluated at eps = 8/255, except for those marked with * for which eps = 0.031, where eps is the maximal Linf norm allowed for the adversarial perturbations. The eps used for each entry is the same as the one set in the original paper.
Note: ‡ indicates models which exploit additional data for training (e.g. unlabeled data, pre-training).

| # | Method / paper | Checkpoint | Architecture | Clean | Reported | AA |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | (Gowal et al., 2020)‡ | authors | WRN-28-10 | 89.48 | 62.76 | 62.80 |
| 2 | (Wu et al., 2020b)‡ | available | WRN-28-10 | 88.25 | 60.04 | 60.04 |
| - | GAIR-RST (Ours)*‡ | available | WRN-28-10 | 89.36 | 59.64 | 59.64 |
| 3 | (Carmon et al., 2019)‡ | available | WRN-28-10 | 89.69 | 62.5 | 59.53 |
| 4 | (Sehwag et al., 2020)‡ | available | WRN-28-10 | 88.98 | - | 57.14 |
| 5 | (Wang et al., 2020)‡ | available | WRN-28-10 | 87.50 | 65.04 | 56.29 |
| 6 | (Hendrycks et al., 2019)‡ | available | WRN-28-10 | 87.11 | 57.4 | 54.92 |
| 7 | (Moosavi-Dezfooli et al., 2019) | authors | WRN-28-10 | 83.11 | 41.4 | 38.50 |
| 8 | (Zhang & Wang, 2019) | available | WRN-28-10 | 89.98 | 60.6 | 36.64 |
| 9 | (Zhang & Xu, 2020) | available | WRN-28-10 | 90.25 | 68.7 | 36.45 |

The results show our GAIR-RST method can facilitate a competitive model by utilizing additional unlabeled data.

Wanna download our superior model for other purposes? Sure!

We welcome various attack methods to attack our defense models. For the CIFAR-10 dataset, we normalize all images into [0, 1].

Download our pretrained models checkpoint-epoch200.pt into the folder ./GAIR_RST/GAIR_RST_results through this Google Drive link.

You can evaluate this pretrained model through ./GAIR_RST/autoattack/examples/run_eval.sh

Reference

@inproceedings{
zhang2021_GAIRAT,
title={Geometry-aware Instance-reweighted Adversarial Training},
author={Jingfeng Zhang and Jianing Zhu and Gang Niu and Bo Han and Masashi Sugiyama and Mohan Kankanhalli},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=iAX0l6Cz8ub}
}

Contact

Please contact [email protected], [email protected], or [email protected] if you have any questions about the codes.
