Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation (ICCV 2021)

Overview


This is a PyTorch project for the paper Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation by Xiaogang Xu, Hengshuang Zhao, and Jiaya Jia, presented at ICCV 2021.

paper link, arxiv

Introduction

Adversarial training is promising for improving the robustness of deep neural networks against adversarial perturbations, especially on classification tasks. Its effect on semantic segmentation, in contrast, has only begun to be explored. We make an initial attempt at a defense strategy for semantic segmentation by formulating a general adversarial training procedure that performs decently on both adversarial and clean samples. We propose a dynamic divide-and-conquer adversarial training (DDC-AT) strategy to enhance the defense effect, by adding extra branches to the target model during training and handling pixels with diverse properties with respect to adversarial perturbation. Our dynamic division mechanism assigns pixels to multiple branches automatically. Note that all additional branches can be discarded during inference, so they incur no extra parameters or computation cost. Extensive experiments with various segmentation models are conducted on the PASCAL VOC 2012 and Cityscapes datasets, where DDC-AT yields satisfying performance under both white- and black-box attacks.
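
To make the idea above concrete, here is a minimal, hypothetical PyTorch sketch of the divide-and-conquer routing during training: per-pixel outputs go to either a main or an auxiliary branch according to a binary mask, and only the main branch is kept at inference. The module and tensor names are illustrative assumptions and do not reproduce the actual DDC-AT implementation in this repository.

import torch
import torch.nn as nn

class TwoBranchHead(nn.Module):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.main_branch = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        self.aux_branch = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feat, pixel_mask=None):
        # feat: (N, C, H, W) features; pixel_mask: (N, 1, H, W) binary routing mask
        main_out = self.main_branch(feat)
        if self.training and pixel_mask is not None:
            # During training, route each pixel to the main or auxiliary branch.
            aux_out = self.aux_branch(feat)
            return torch.where(pixel_mask.bool(), main_out, aux_out)
        # At inference the auxiliary branch is dropped, so it adds no cost.
        return main_out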

Project Setup

For multiprocessing (multi-GPU) training we use apex; the code is tested with PyTorch 1.0.1.

First, install Python 3. We advise installing Python 3 and PyTorch with Anaconda:

conda create --name py36 python=3.6
source activate py36

Clone the repo and install the complementary requirements:

cd $HOME
git clone --recursive [email protected]:dvlab-research/Robust_Semantic_Segmentation.git
cd Robust_Semantic_Segmentation
pip install -r requirements.txt

Our experiments were run with CUDA 10.2 on TITAN V GPUs. You also need to install apex for training.

Requirement

  • Hardware: 4-8 GPUs (preferably with >=11 GB memory each)

Train

  • Download the related datasets and modify the relevant paths specified in the "config" folder.
  • Download the ImageNet pre-trained models and put them under the "initmodel" folder for weight initialization.

Cityscapes

  • Train the baseline model with no defense on Cityscapes with PSPNet
    sh tool_train/cityscapes/psp_train.sh
    
  • Train the baseline model with no defense on Cityscapes with DeepLabv3
    sh tool_train/cityscapes/aspp_train.sh
    
  • Train the model with SAT (standard adversarial training) on Cityscapes with PSPNet
    sh tool_train/cityscapes/psp_train_sat.sh
    
  • Train the model with SAT on Cityscapes with DeepLabv3
    sh tool_train/cityscapes/aspp_train_sat.sh
    
  • Train the model with DDCAT on Cityscapes with PSPNet
    sh tool_train/cityscapes/psp_train_ddcat.sh
    
  • Train the model with DDCAT on Cityscapes with DeepLabv3
    sh tool_train/cityscapes/aspp_train_ddcat.sh
    

VOC2012

  • Train the baseline model with no defense on VOC2012 with PSPNet
    sh tool_train/voc2012/psp_train.sh
    
  • Train the baseline model with no defense on VOC2012 with DeepLabv3
    sh tool_train/voc2012/aspp_train.sh
    
  • Train the model with SAT on VOC2012 with PSPNet
    sh tool_train/voc2012/psp_train_sat.sh
    
  • Train the model with SAT on VOC2012 with DeepLabv3
    sh tool_train/voc2012/aspp_train_sat.sh
    
  • Train the model with DDCAT on VOC2012 with PSPNet
    sh tool_train/voc2012/psp_train_ddcat.sh
    
  • Train the model with DDCAT on VOC2012 with DeepLabv3
    sh tool_train/voc2012/aspp_train_ddcat.sh
    

You can use tensorboardX to visualize the training loss:

tensorboard --logdir=exp/path_to_log

Test

We provide evaluation scripts that report the mIoU on both clean and adversarial samples (the adversarial samples are generated by an attack with n=2 iterations, epsilon=0.03 x 255, and alpha=0.01 x 255).
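
For reference, below is a minimal sketch of an iterative FGSM-style (BIM/PGD) attack consistent with the parameters above, operating on 0-255 inputs. The function name, signature, and the ignore_index value are illustrative assumptions, not the exact attack code shipped in the test scripts.

import torch
import torch.nn.functional as F

def segmentation_attack(model, image, label, n=2, epsilon=0.03 * 255, alpha=0.01 * 255):
    # image: float tensor in [0, 255], shape (N, 3, H, W)
    # label: long tensor of per-pixel class ids, shape (N, H, W)
    adv = image.clone().detach()
    for _ in range(n):
        adv.requires_grad_(True)
        logits = model(adv)  # (N, num_classes, H, W), assuming eval-mode output
        loss = F.cross_entropy(logits, label, ignore_index=255)  # ignore_index assumed
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                       # ascend per-pixel loss
            adv = image + (adv - image).clamp(-epsilon, epsilon)  # project to L_inf ball
            adv = adv.clamp(0, 255)                               # keep valid pixel range
    return adv.detach()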

Cityscapes

  • Evaluate the PSPNet trained with no defense on Cityscapes
    sh tool_test/cityscapes/psp_test.sh
    
  • Evaluate the PSPNet trained with SAT on Cityscapes
    sh tool_test/cityscapes/psp_test_sat.sh
    
  • Evaluate the PSPNet trained with DDCAT on Cityscapes
    sh tool_test/cityscapes/psp_test_ddcat.sh
    
  • Evaluate the DeepLabv3 trained with no defense on Cityscapes
    sh tool_test/cityscapes/aspp_test.sh
    
  • Evaluate the DeepLabv3 trained with SAT on Cityscapes
    sh tool_test/cityscapes/aspp_test_sat.sh
    
  • Evaluate the DeepLabv3 trained with DDCAT on Cityscapes
    sh tool_test/cityscapes/aspp_test_ddcat.sh
    

VOC2012

  • Evaluate the PSPNet trained with no defense on VOC2012
    sh tool_test/voc2012/psp_test.sh
    
  • Evaluate the PSPNet trained with SAT on VOC2012
    sh tool_test/voc2012/psp_test_sat.sh
    
  • Evaluate the PSPNet trained with DDCAT on VOC2012
    sh tool_test/voc2012/psp_test_ddcat.sh
    
  • Evaluate the DeepLabv3 trained with no defense on VOC2012
    sh tool_test/voc2012/aspp_test.sh
    
  • Evaluate the DeepLabv3 trained with SAT on VOC2012
    sh tool_test/voc2012/aspp_test_sat.sh
    
  • Evaluate the DeepLabv3 trained with DDCAT on VOC2012
    sh tool_test/voc2012/aspp_test_ddcat.sh
    

Pretrained Model

You can download the pretrained models from https://drive.google.com/file/d/120xLY_pGZlm3tqaLxTLVp99e06muBjJC/view?usp=sharing

Cityscapes with PSPNet

The model trained with no defense: pretrain/cityscapes/pspnet/no_defense
The model trained with SAT: pretrain/cityscapes/pspnet/sat
The model trained with DDCAT: pretrain/cityscapes/pspnet/ddcat

Cityscapes with DeepLabv3

The model trained with no defense: pretrain/cityscapes/deeplabv3/no_defense
The model trained with SAT: pretrain/cityscapes/deeplabv3/sat
The model trained with DDCAT: pretrain/cityscapes/deeplabv3/ddcat

VOC2012 with PSPNet

The model trained with no defense: pretrain/voc2012/pspnet/no_defense
The model trained with SAT: pretrain/voc2012/pspnet/sat
The model trained with DDCAT: pretrain/voc2012/pspnet/ddcat

VOC2012 with DeepLabv3

The model trained with no defense: pretrain/voc2012/deeplabv3/no_defense
The model trained with SAT: pretrain/voc2012/deeplabv3/sat
The model trained with DDCAT: pretrain/voc2012/deeplabv3/ddcat
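
A hedged example of loading one of the downloaded checkpoints is shown below. The checkpoint filename, the 'state_dict' key, and the commented-out model constructor are assumptions based on the semseg-style codebase, for illustration only.

import torch

checkpoint = torch.load('pretrain/voc2012/pspnet/ddcat/train_epoch_50.pth',  # filename assumed
                        map_location='cpu')
# Checkpoints saved with nn.DataParallel typically store weights under 'state_dict'
# with a 'module.' prefix that must be stripped before loading into a bare model.
state_dict = checkpoint.get('state_dict', checkpoint)
state_dict = {k.replace('module.', '', 1): v for k, v in state_dict.items()}
# model = PSPNet(layers=50, classes=21)   # constructor signature assumed from semseg
# model.load_state_dict(state_dict, strict=False)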

Citation Information

If you find the project useful, please cite:

@inproceedings{xu2021ddcat,
  title={Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation},
  author={Xiaogang Xu and Hengshuang Zhao and Jiaya Jia},
  booktitle={ICCV},
  year={2021}
}

Acknowledgments

This source code is inspired by semseg.

Contributions

If you have any questions/comments/bug reports, feel free to e-mail the author Xiaogang Xu ([email protected]).
