Learnable Boundary Guided Adversarial Training (ICCV2021)

This repository contains the implementation code for the ICCV2021 paper:
Learnable Boundary Guided Adversarial Training (https://arxiv.org/pdf/2011.11164.pdf)

If you find this code or idea useful, please consider citing our work:

@article{cui2020learnable,
  title={Learnable boundary guided adversarial training},
  author={Cui, Jiequan and Liu, Shu and Wang, Liwei and Jia, Jiaya},
  journal={arXiv preprint arXiv:2011.11164},
  year={2020}
}

Overview

In this paper, we propose Learnable Boundary Guided Adversarial Training (LBGAT) to preserve high natural accuracy while enjoying strong robustness for deep models. An interesting phenomenon in our exploration is that the natural classifier boundary can benefit model robustness to some degree, in contrast to previous work where improved robustness comes at the cost of degraded performance on natural data. Our method sets a new state of the art for model robustness on CIFAR-100, without extra real or synthetic data, under the AutoAttack benchmark.
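For intuition, below is a minimal PyTorch sketch of the core boundary-guidance idea: a naturally trained model provides clean-data logits that the robust model's logits on adversarial examples are pulled toward. It is an illustrative sketch only; it omits the TRADES-style term used in the LBGAT6 variants, and the function and variable names are placeholders rather than the exact code in this repository.

```python
import torch.nn.functional as F

def boundary_guided_loss(nat_model, rob_model, x, y, x_adv):
    # Illustrative sketch only: names and weighting are placeholders,
    # not the exact implementation in this repository.
    logits_nat = nat_model(x)        # clean logits define the "natural" boundary
    logits_rob = rob_model(x_adv)    # robust model evaluated on adversarial inputs

    ce_nat = F.cross_entropy(logits_nat, y)        # keep natural accuracy high
    guide = F.mse_loss(logits_rob, logits_nat)     # pull robust logits toward the natural boundary
    return ce_nat + guide
```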


Results and Pretrained models

Models are evaluated under AutoAttack (https://github.com/fra31/auto-attack), the strongest available attack, with epsilon = 0.031.

Our CIFAR-100 models (Natural Acc vs Robust Acc under AutoAttack):
- CIFAR-100-LBGAT0-wideresnet-34-10: 70.25 vs 27.16
- CIFAR-100-LBGAT6-wideresnet-34-10: 60.64 vs 29.33
- CIFAR-100-LBGAT6-wideresnet-34-20: 62.55 vs 30.20

Our CIFAR-10 models (Natural Acc vs Robust Acc under AutoAttack):
- CIFAR-10-LBGAT0-wideresnet-34-10: 88.22 vs 52.86
- CIFAR-10-LBGAT0-wideresnet-34-20: 88.70 vs 53.57

CIFAR-100 L-inf

Note: this is a partial list of results for comparison with methods that do not use additional data, current as of 2020/11/25. The full list can be found at https://github.com/fra31/auto-attack. TRADES (alpha=6) is trained with the official open-source code at https://github.com/yaodongyu/TRADES.

| # | Method | Model | Natural Acc | Robust Acc (AutoAttack) |
| --- | --- | --- | --- | --- |
| 1 | LBGAT (Ours) | WRN-34-20 | 62.55 | 30.20 |
| 2 | (Gowal et al. 2020) | WRN-70-16 | 60.86 | 30.03 |
| 3 | LBGAT (Ours) | WRN-34-10 | 60.64 | 29.33 |
| 4 | (Wu et al. 2020) | WRN-34-10 | 60.38 | 28.86 |
| 5 | LBGAT (Ours) | WRN-34-10 | 70.25 | 27.16 |
| 6 | (Chen et al. 2020) | WRN-34-10 | 62.15 | 26.94 |
| 7 | (Zhang et al. 2019) TRADES (alpha=6) | WRN-34-10 | 56.50 | 26.87 |
| 8 | (Sitawarin et al. 2020) | WRN-34-10 | 62.82 | 24.57 |
| 9 | (Rice et al. 2020) | RN-18 | 53.83 | 18.95 |

CIFAR-10 L-inf

Note: this is a partial list of results for comparison with previously published methods that do not use additional data, current as of 2020/11/25. The full list can be found at https://github.com/fra31/auto-attack. TRADES (alpha=6) is trained with the official open-source code at https://github.com/yaodongyu/TRADES. "*" denotes methods aiming to speed up adversarial training.

| # | Method | Model | Natural Acc | Robust Acc (AutoAttack) |
| --- | --- | --- | --- | --- |
| 1 | LBGAT (Ours) | WRN-34-20 | 88.70 | 53.57 |
| 2 | (Zhang et al.) | WRN-34-10 | 84.52 | 53.51 |
| 3 | (Rice et al. 2020) | WRN-34-20 | 85.34 | 53.42 |
| 4 | LBGAT (Ours) | WRN-34-10 | 88.22 | 52.86 |
| 5 | (Qin et al., 2019) | WRN-40-8 | 86.28 | 52.84 |
| 6 | (Zhang et al. 2019) TRADES (alpha=6) | WRN-34-10 | 84.92 | 52.64 |
| 7 | (Chen et al., 2020b) | WRN-34-10 | 85.32 | 51.12 |
| 8 | (Sitawarin et al., 2020) | WRN-34-10 | 86.84 | 50.72 |
| 9 | (Engstrom et al., 2019) | RN-50 | 87.03 | 49.25 |
| 10 | (Kumari et al., 2019) | WRN-34-10 | 87.80 | 49.12 |
| 11 | (Mao et al., 2019) | WRN-34-10 | 86.21 | 47.41 |
| 12 | (Zhang et al., 2019a) | WRN-34-10 | 87.20 | 44.83 |
| 13 | (Madry et al., 2018) AT | WRN-34-10 | 87.14 | 44.04 |
| 14 | (Shafahi et al., 2019)* | WRN-34-10 | 86.11 | 41.47 |
| 15 | (Wang & Zhang, 2019)* | WRN-28-10 | 92.80 | 29.35 |

Get Started

Before training, please create the directory 'Logs' via the command 'mkdir Logs'.

Training

bash sh/train_lbgat0_cifar100.sh
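The training script performs adversarial training, which crafts adversarial examples with an inner maximization loop; a generic L-inf PGD sketch is shown below for reference. The step size, number of steps, and loss used here are illustrative defaults, not necessarily the exact settings in this repository's scripts.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.031, step_size=0.007, steps=10):
    """Generic L-inf PGD attack (illustrative values, not the repo's exact config)."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start inside the eps-ball
    x_adv = torch.clamp(x_adv, 0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()        # gradient ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project back into the eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)                    # keep pixels in [0, 1]
    return x_adv.detach()
```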

Evaluation

Before running the evaluation, please download the pretrained models.

bash sh/eval_autoattack.sh
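If you prefer to call AutoAttack directly from Python rather than through the shell script, the evaluation roughly corresponds to the standard usage of the autoattack package sketched below. The function name and arguments here are placeholders for whatever model-loading and data-loading code you use with the pretrained checkpoints.

```python
from autoattack import AutoAttack  # pip install git+https://github.com/fra31/auto-attack

def evaluate_autoattack(model, x_test, y_test, eps=0.031, batch_size=128):
    """Run standard AutoAttack (L-inf, eps = 0.031) on a trained classifier.

    `model` should map images in [0, 1] to logits; `x_test` / `y_test` are the
    CIFAR test images and labels as tensors.
    """
    model.eval()
    adversary = AutoAttack(model, norm='Linf', eps=eps, version='standard')
    x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=batch_size)
    return x_adv
```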

Acknowledgements

This code is partly based on TRADES and AutoAttack.

Contact

If you have any questions, feel free to contact us via email ([email protected]) or GitHub issues. Enjoy!
