Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

Pouya Samangouei*, Maya Kabkab*, Rama Chellappa

[*: authors contributed equally]

This repository contains the implementation of our ICLR-18 paper: Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

If you find this code or the paper useful, please consider citing:

@inproceedings{defensegan,
  title={Defense-GAN: Protecting classifiers against adversarial attacks using generative models},
  author={Samangouei, Pouya and Kabkab, Maya and Chellappa, Rama},
  booktitle={International Conference on Learning Representations},
  year={2018}
}


Contents

  1. Installation
  2. Usage

Installation

  1. Clone this repository:
git clone --recursive https://github.com/kabkabm/defensegan
cd defensegan
git submodule update --init --recursive
  2. Install requirements:
pip install -r requirements.txt

Note: if you don't have a GPU, install the CPU version of TensorFlow 1.7.
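
For example, the CPU-only build pinned to that version can be installed with:

pip install tensorflow==1.7.0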

  3. Download the dataset and prepare the data directory:
python download_dataset.py [mnist|f-mnist|celeba]
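
For example, to download and prepare MNIST:

python download_dataset.py mnist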
  4. Create or link output and debug directories:
mkdir output
mkdir debug

or

ln -s <path-to-output> output
ln -s <path-to-debug> debug

Usage

Train a GAN model

python train.py --cfg <path> --is_train <extra-args>
  • --cfg This can be set to either a .yml configuration file like the ones in experiments/cfgs, or an output directory path.
  • <extra-args> can be any parameter that is defined in the config file.

Training creates a per-experiment directory inside the output directory, named after the configuration, where the model checkpoints are saved. If <extra-args> differ from the values defined in <config>, the output directory name will reflect the difference.

A copy of the config file is saved into each experiment directory, so an experiment can be reloaded later by passing that directory as <path>.
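
For example, once the MNIST run below has produced output/gans/mnist, the scripts can be pointed at that directory instead of the .yml file (a sketch; the saved config and checkpoints should be picked up automatically):

python train.py --cfg output/gans/mnist --is_train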

Example

After running

python train.py --cfg experiments/cfgs/gans/mnist.yml --is_train

output/gans/mnist will be created.

[optional] Save reconstructions and datasets into cache:

python train.py --cfg experiments/cfgs/<config> --save_recs
python train.py --cfg experiments/cfgs/<config> --save_ds

Example

After running the training code for mnist, the reconstructions and the dataset can be saved with:

python train.py --cfg output/gans/mnist --save_recs
python train.py --cfg output/gans/mnist --save_ds

As training goes on, sample outputs of the generator are written to debug/gans/<model_config>.

Black-box attacks

To perform black-box experiments, run blackbox.py [Tables 1 and 2 of the paper]:

python blackbox.py --cfg <path> \
    --results_dir <results_path> \
    --bb_model {A, B, C, D, E} \
    --sub_model {A, B, C, D, E} \
    --fgsm_eps <epsilon> \
    --defense_type {none|defense_gan|adv_tr} \
    [--train_on_recs or --online_training] \
    <optional-arguments>
  • --cfg is the path to the config file for training the iWGAN. This can also be the path to the output directory of the model.

  • --results_dir The path where the final results are saved in text files.

  • --bb_model The black-box model architecture used in Tables 1 and 2.

  • --sub_model The substitute model architecture used in Tables 1 and 2.

  • --defense_type specifies the type of defense to protect the classifier.

  • --train_on_recs or --online_training These parameters are optional. If either is set, the classifier will be trained on the reconstructions of Defense-GAN (e.g., the Defense-GAN-Rec columns of Tables 1 and 2). Otherwise, the results are for Defense-GAN-Orig. Note that --online_training will take a while if --rec_iters, or L in the paper, is set to a large value.

  • <optional-arguments> A list of --<arg_name> <arg_val> pairs matching the hyperparameters defined in the config files (all lowercase), plus the flags defined in blackbox.py. The most important ones are:

    • --rec_iters The number of GD reconstruction iterations for Defense-GAN, or L in the paper.
    • --rec_lr The learning rate of the reconstruction step.
    • --rec_rr The number of random restarts for the reconstruction step, or R in the paper. (A sketch of the reconstruction loop these three flags control follows this list.)
    • --num_train The number of images to train the black-box model on. For debugging purposes set this to a small value.
    • --num_test The number of images to test on. For debugging purposes set this to a small value.
    • --debug This will save qualitative attack and reconstruction results in debug directory and will not run the adversarial attack part of the code.
  • Refer to blackbox.py for more flag descriptions.
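
For intuition, here is a minimal NumPy sketch of what --rec_iters, --rec_lr, and --rec_rr control: Defense-GAN reconstructs an input x by minimizing ||G(z) - x||^2 over the latent code z, running L gradient-descent steps from each of R random initializations and keeping the best result. This is an illustration only, not the repository's TensorFlow implementation; the linear generator and its gradient are toy stand-ins.

import numpy as np

def defense_gan_reconstruct(x, G, G_grad, latent_dim,
                            rec_iters=200, rec_lr=0.1, rec_rr=10, seed=0):
    """Toy sketch of Defense-GAN's reconstruction step: minimize
    ||G(z) - x||^2 over z with rec_iters (L) gradient-descent steps,
    restarted from rec_rr (R) random draws of z; keep the best result."""
    rng = np.random.RandomState(seed)
    best_rec, best_loss = None, np.inf
    for _ in range(rec_rr):                  # R random restarts
        z = rng.randn(latent_dim)
        for _ in range(rec_iters):           # L GD steps on ||G(z) - x||^2
            z = z - rec_lr * G_grad(z, x)
        loss = np.sum((G(z) - x) ** 2)
        if loss < best_loss:
            best_rec, best_loss = G(z), loss
    return best_rec

# Toy linear "generator" G(z) = W z; the loss gradient is then 2 W^T (W z - x).
rng = np.random.RandomState(1)
W = rng.randn(784, 100) / np.sqrt(784.0)     # scaled so GD with lr=0.1 converges
G = lambda z: W @ z
G_grad = lambda z, x: 2.0 * W.T @ (W @ z - x)

x = rng.randn(784)                           # stand-in for a flattened image
x_rec = defense_gan_reconstruct(x, G, G_grad, latent_dim=100)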

Example

  • Row 1 of Table 1 (Defense-GAN-Orig):
python blackbox.py --cfg output/gans/mnist \
    --results_dir defensegan \
    --bb_model A \
    --sub_model B \
    --fgsm_eps 0.3 \
    --defense_type defense_gan
  • If you set --nb_epochs 1 --nb_epochs_s 1 --data_aug 1 you will get a quick glance of how the script works (see the command below).
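
Combining the example above with those flags gives a fast end-to-end sanity check:

python blackbox.py --cfg output/gans/mnist \
    --results_dir defensegan \
    --bb_model A \
    --sub_model B \
    --fgsm_eps 0.3 \
    --defense_type defense_gan \
    --nb_epochs 1 --nb_epochs_s 1 --data_aug 1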

White-box attacks

To test Defense-GAN against white-box attacks, run whitebox.py [Tables 4, 5, and 12 of the paper]:

python whitebox.py --cfg <path> \
       --results_dir <results-dir> \
       --attack_type {fgsm, rand_fgsm, cw} \
       --defense_type {none|defense_gan|adv_tr} \
       --model {A, B, C, D} \
       [--train_on_recs or --online_training] \
       <optional-arguments>
  • --cfg is the path to the config file for training the iWGAN. This can also be the path to the output directory of the model.
  • --results_dir The path where the final results are saved in text files.
  • --defense_type specifies the type of defense to protect the classifier.
  • --train_on_recs or --online_training These parameters are optional. If either is set, the classifier will be trained on the reconstructions of Defense-GAN (e.g., the Defense-GAN-Rec columns of Tables 1 and 2). Otherwise, the results are for Defense-GAN-Orig. Note that --online_training will take a while if --rec_iters, or L in the paper, is set to a large value.
  • <optional-arguments> A list of --<arg_name> <arg_val> pairs matching the hyperparameters defined in the config files (all lowercase), plus the flags defined in whitebox.py. The most important ones are:
    • --rec_iters The number of GD reconstruction iterations for Defense-GAN, or L in the paper.
    • --rec_lr The learning rate of the reconstruction step.
    • --rec_rr The number of random restarts for the reconstruction step, or R in the paper.
    • --num_test The number of images to test on. For debugging purposes set this to a small value.
  • Refer to whitebox.py for more flag descriptions.

Example

First row of Table 4:

python whitebox.py --cfg <path> \
       --results_dir whitebox \
       --attack_type fgsm \
       --defense_type defense_gan \
       --model A
  • If you want to quickly see how the scripts work, add the following flags:
--nb_epochs 1 --num_tests 400
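
For example, combining these flags with the Table 4 command above (assuming the MNIST GAN trained earlier):

python whitebox.py --cfg output/gans/mnist \
       --results_dir whitebox \
       --attack_type fgsm \
       --defense_type defense_gan \
       --model A \
       --nb_epochs 1 --num_tests 400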