IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID

Python >=3.7, PyTorch >=1.1

Intermediate Domain Module (IDM)

This repository is the official implementation of IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID, accepted by ICCV 2021 (Oral).

IDM achieves state-of-the-art performance on the unsupervised domain adaptation (UDA) task for person re-ID.

Requirements

Installation

git clone https://github.com/SikaStar/IDM.git
cd IDM/idm/evaluation_metrics/rank_cylib && make all
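Note that compiling rank_cylib requires Cython and NumPy to already be installed. A minimal, illustrative setup (this is not an official requirements list; adjust versions to your environment) might be:

# illustrative install of the packages needed before running make all
pip install torch torchvision numpy Cython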

Prepare Datasets

cd examples && mkdir data

Download the person re-ID datasets Market-1501, DukeMTMC-ReID, MSMT17, PersonX, and UnrealPerson. Then unzip them under the data directory so that the file tree looks like:

IDM/examples/data
├── dukemtmc
│   └── DukeMTMC-reID
├── market1501
│   └── Market-1501-v15.09.15
├── msmt17
│   └── MSMT17_V1
├── personx
│   └── PersonX
└── unreal
    ├── list_unreal_train.txt
    └── unreal_vX.Y
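
For example, Market-1501 could be unzipped into place as follows (the archive name below is only an assumption; use whatever file you downloaded):

# run from the IDM repository root
mkdir -p examples/data/market1501
unzip Market-1501-v15.09.15.zip -d examples/data/market1501/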

Prepare ImageNet Pre-trained Models for IBN-Net

When training with the IBN-ResNet backbone, you need to download the ImageNet-pretrained model from this link and save it under logs/pretrained/.

mkdir logs && cd logs
mkdir pretrained

The file tree should be:

IDM/logs
└── pretrained
    └── resnet50_ibn_a.pth.tar
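
Assuming the checkpoint was downloaded into your current directory, it can be moved into place with:

mv resnet50_ibn_a.pth.tar logs/pretrained/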

The ImageNet-pretrained model for ResNet-50 will be downloaded automatically by the Python scripts.

Training

We use four GTX 2080 Ti GPUs for training. Note that:

  • The source and target domains are trained jointly.
  • For the baseline methods, use -a resnet50 for the ResNet-50 backbone and -a resnet_ibn50a for the IBN-ResNet backbone.
  • For IDM, use -a resnet50_idm to insert IDM into the ResNet-50 backbone and -a resnet_ibn50a_idm to insert IDM into the IBN-ResNet backbone.
  • For the strong baseline, use --use-xbm to enable XBM (cross-batch memory, a variant of the memory bank).

Baseline Methods

To train the baseline methods in the paper, run commands like:

# Naive Baseline
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_naive_baseline.sh ${source} ${target} ${arch}

# Strong Baseline
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_strong_baseline.sh ${source} ${target} ${arch}

Some examples:

### market1501 -> dukemtmc ###

# ResNet-50
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_strong_baseline.sh market1501 dukemtmc resnet50 

# IBN-ResNet-50
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_strong_baseline.sh market1501 dukemtmc resnet_ibn50a
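
Other source/target pairs follow the same pattern; for instance, a run in the reverse direction could look like:

### dukemtmc -> market1501 ###

# Naive Baseline with ResNet-50
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/run_naive_baseline.sh dukemtmc market1501 resnet50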

Training with IDM

To train the models with our IDM, run commands like:

# Naive Baseline + IDM
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm.sh ${source} ${target} ${arch} ${stage} ${mu1} ${mu2} ${mu3}

# Strong Baseline + IDM
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm_xbm.sh ${source} ${target} ${arch} ${stage} ${mu1} ${mu2} ${mu3}

  • Defaults: --stage 0 --mu1 0.7 --mu2 0.1 --mu3 1.0

Some examples:

### market1501 -> dukemtmc ###

# ResNet-50 + IDM
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm_xbm.sh market1501 dukemtmc resnet50_idm 0 0.7 0.1 1.0 

# IBN-ResNet-50 + IDM
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm_xbm.sh market1501 dukemtmc resnet_ibn50a_idm 0 0.7 0.1 1.0
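
To train IDM on top of the naive baseline instead (i.e., without XBM), pass the same arguments to run_idm.sh, for example:

# ResNet-50 + IDM on the naive baseline
CUDA_VISIBLE_DEVICES=0,1,2,3 \
sh scripts/run_idm.sh market1501 dukemtmc resnet50_idm 0 0.7 0.1 1.0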

Evaluation

We use one GTX 2080 Ti GPU for testing. Note that:

  • Use --dsbn for domain-adaptive models, and add --test-source if you want to test on the source domain.
  • Use -a resnet50 for the ResNet-50 backbone and -a resnet_ibn50a for the IBN-ResNet backbone.
  • Use -a resnet50_idm for ResNet-50 + IDM and -a resnet_ibn50a_idm for IBN-ResNet + IDM.

To evaluate the baseline model on the target-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn -d ${dataset} -a ${arch} --resume ${resume} 

To evaluate the baseline model on the source-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn --test-source -d ${dataset} -a ${arch} --resume ${resume} 

To evaluate the IDM model on the target-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm -d ${dataset} -a ${arch} --resume ${resume} --stage ${stage} 

To evaluate the IDM model on the source-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm --test-source -d ${dataset} -a ${arch} --resume ${resume} --stage ${stage} 

Some examples:

### market1501 -> dukemtmc ###

# evaluate the target domain "dukemtmc" on the strong baseline model
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn  -d dukemtmc -a resnet50 \
--resume logs/resnet50_strong_baseline/market1501-TO-dukemtmc/model_best.pth.tar 

# evaluate the source domain "market1501" on the strong baseline model
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn --test-source  -d market1501 -a resnet50 \
--resume logs/resnet50_strong_baseline/market1501-TO-dukemtmc/model_best.pth.tar 

# evaluate the target domain "dukemtmc" on the IDM model (after stage-0)
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm  -d dukemtmc -a resnet50_idm \
--resume logs/resnet50_idm_xbm/market1501-TO-dukemtmc/model_best.pth.tar --stage 0

# evaluate the source domain "market1501" on the IDM model (after stage-0)
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm --test-source  -d market1501 -a resnet50_idm \
--resume logs/resnet50_idm_xbm/market1501-TO-dukemtmc/model_best.pth.tar --stage 0
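
An IBN-ResNet + IDM checkpoint is evaluated the same way; the log directory below is an assumption that simply follows the naming pattern of the examples above:

# evaluate the target domain "dukemtmc" on the IBN-ResNet-50 + IDM model (after stage-0)
CUDA_VISIBLE_DEVICES=0 \
python3 examples/test.py --dsbn-idm -d dukemtmc -a resnet_ibn50a_idm \
--resume logs/resnet_ibn50a_idm_xbm/market1501-TO-dukemtmc/model_best.pth.tar --stage 0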

Acknowledgement

Our code is based on MMT and SpCL. Thanks to Yixiao for the wonderful works.

Citation

If you find our work useful for your research, please kindly cite our paper:

@inproceedings{dai2021idm,
  title={IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID},
  author={Dai, Yongxing and Liu, Jun and Sun, Yifan and Tong, Zekun and Zhang, Chi and Duan, Ling-Yu},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}

If you have any questions, please open an issue or contact me: [email protected]

Owner

Yongxing Dai
I am now a fourth-year PhD student at the National Engineering Lab for Video Technology, Peking University, Beijing, China.