
Zero-Shot Domain Adaptation with a Physics Prior

[arXiv] [sup. material] - ICCV 2021 Oral paper, by Attila Lengyel, Sourav Garg, Michael Milford and Jan van Gemert.

This repository contains the PyTorch implementation of Color Invariant Convolutions and all experiments and datasets described in the paper.

Abstract

We explore the zero-shot setting for day-night domain adaptation. The traditional domain adaptation setting is to train on one domain and adapt to the target domain by exploiting unlabeled data samples from the test set. As gathering relevant test data is expensive and sometimes even impossible, we remove any reliance on test data imagery and instead exploit a visual inductive prior derived from physics-based reflection models for domain adaptation. We cast a number of color invariant edge detectors as trainable layers in a convolutional neural network and evaluate their robustness to illumination changes. We show that the color invariant layer reduces the day-night distribution shift in feature map activations throughout the network. We demonstrate improved performance for zero-shot day to night domain adaptation on both synthetic as well as natural datasets in various tasks, including classification, segmentation and place recognition.

Getting started

All code and experiments have been tested with PyTorch 1.7.0.

Create a local clone of this repository:

git clone https://github.com/Attila94/CIConv

The method directory contains the color invariant convolution (CIConv) layer, as well as custom ResNet and VGG models using the CIConv layer. To use the CIConv layer in your own architecture, simply copy ciconv2d.py to the desired directory and add it to your model as a regular PyTorch layer:

from ciconv2d import CIConv2d

# create a 'W' invariant layer with kernel size k=3 and initial scale 0.0
ciconv = CIConv2d('W', k=3, scale=0.0)

See resnet.py and vgg.py for examples.
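
For reference, below is a minimal sketch of the layer as the input stage of a small classifier. It assumes, as in the provided resnet.py and vgg.py models, that CIConv2d maps a 3-channel RGB image to a single-channel invariant representation; the pooling head and class count are placeholders.

    import torch
    import torch.nn as nn
    from ciconv2d import CIConv2d

    # Minimal sketch: CIConv2d as the first layer of a small CNN.
    # Assumes the layer maps 3-channel RGB input to a 1-channel invariant
    # representation, as in the provided resnet.py and vgg.py models.
    model = nn.Sequential(
        CIConv2d('W', k=3, scale=0.0),               # RGB in, invariant out
        nn.Conv2d(1, 32, kernel_size=3, padding=1),  # note: 1 input channel
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, 10),                           # 10 placeholder classes
    )

    x = torch.rand(2, 3, 64, 64)  # batch of RGB images
    print(model(x).shape)         # expected: torch.Size([2, 10])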

Datasets

Shapenet Illuminants

[Download link]

Shapenet Illuminants is used in the synthetic classification experiment. The images are rendered from a subset of the ShapeNet dataset using the physically based renderer Mitsuba. The scene is illuminated by a point light modeled as a black-body radiator with temperatures ranging between [1900, 20000] K and an ambient light source. The training set contains 1,000 samples for each of the 10 object classes recorded under "normal" lighting conditions (T = 6500 K). Multiple test sets with 300 samples per class are rendered for a variety of light source intensities and colors.
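
As a hedged loading sketch, assuming the unpacked archive follows the class-per-subfolder layout that torchvision's ImageFolder expects (verify against the actual directory structure):

    import torchvision.datasets as datasets
    import torchvision.transforms as T

    # Hypothetical layout: one subfolder per object class under train/.
    train_set = datasets.ImageFolder('path/to/shapenet_illuminants/train',
                                     transform=T.ToTensor())
    print(train_set.classes)  # should list the 10 object classes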

[Figure: example renders from the ShapeNet Illuminants dataset]

Common Objects Day and Night

[Download link]

Common Objects Day and Night (CODaN) is a natural day-night image classification dataset. More information can be found in the separate GitHub repository: https://github.com/Attila94/CODaN.

[Figure: example day and night images from CODaN]

Experiments

1. Synthetic classification

  1. Download the Shapenet Illuminants dataset [link] and unpack it.
  2. In your local CIConv clone, navigate to experiments/1_synthetic_classification and run

    python train.py --root 'path/to/shapenet_illuminants' --hflip --seed 0 --invariant 'W'

This will train a ResNet-18 with the 'W' color invariant from scratch and evaluate on all test sets.
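
The custom ResNet lives in method/resnet.py; a hypothetical instantiation is sketched below (the actual constructor name and arguments are defined in that file and may differ):

    # Hypothetical usage; check method/resnet.py for the actual signature.
    from resnet import resnet18

    model = resnet18(invariant='W')  # ResNet-18 with a CIConv 'W' input layer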

[Figure: ShapeNet Illuminants classification results]

Classification accuracy of ResNet-18 with various color invariants. RGB (not invariant) performance degrades when illumination conditions differ between train and test set, while color invariants remain more stable. W performs best overall.

2. CODaN classification

  1. Download the Common Objects Day and Night (CODaN) dataset from https://github.com/Attila94/CODaN.
  2. In your local CIConv clone, navigate to experiments/2_codan_classification and run

    python train.py --root 'path/to/codan' --invariant 'W' --scale 0. --hflip --jitter 0.3 --rr 20 --seed 0

This will train a ResNet-18 with the 'W' color invariant from scratch and evaluate on all test sets.
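
The augmentation flags plausibly map to standard torchvision transforms; a hedged approximation is shown below (the exact values and order used in train.py may differ):

    import torchvision.transforms as T

    # Hypothetical equivalents of --rr 20, --hflip and --jitter 0.3.
    train_tf = T.Compose([
        T.RandomRotation(20),                       # --rr 20
        T.RandomHorizontalFlip(),                   # --hflip
        T.ColorJitter(brightness=0.3, contrast=0.3,
                      saturation=0.3, hue=0.3),     # --jitter 0.3
        T.ToTensor(),
    ])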

Selected results from the paper:

| Method   | Day (% accuracy) | Night (% accuracy) |
| -------- | ---------------- | ------------------ |
| Baseline | 80.39 ± 0.38     | 48.31 ± 1.33       |
| E        | 79.79 ± 0.40     | 49.95 ± 1.60       |
| W        | 81.49 ± 0.49     | 59.67 ± 0.93       |
| C        | 78.04 ± 1.08     | 53.44 ± 1.28       |
| N        | 77.44 ± 0.00     | 52.03 ± 0.27       |
| H        | 75.20 ± 0.56     | 50.52 ± 1.34       |

3. Semantic segmentation

  1. Download and unpack the following public datasets: Cityscapes, Nighttime Driving, Dark Zurich.

  2. In your local CIConv clone navigate to experiments/3_segmentation.

  3. Set the proper dataset locations in train.py.

  4. Run

    python train.py --hflip --rc --jitter 0.3 --scale 0.3 --batch-size 6 --pretrained --invariant 'W'

Selected results from the paper:

| Method               | Nighttime Driving (mIoU) | Dark Zurich (mIoU) |
| -------------------- | ------------------------ | ------------------ |
| RefineNet [baseline] | 34.1                     | 30.6               |
| W-RefineNet [ours]   | 41.6                     | 34.5               |
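
Both benchmarks report mean Intersection over Union (mIoU). For reference, a generic sketch of computing mIoU from a confusion matrix (not code from this repository):

    import numpy as np

    def mean_iou(conf):
        """mIoU from a (num_classes, num_classes) confusion matrix
        with rows = ground truth and columns = prediction."""
        tp = np.diag(conf)                      # correct pixels per class
        fp = conf.sum(axis=0) - tp              # predicted as class, wrong
        fn = conf.sum(axis=1) - tp              # class pixels missed
        iou = tp / np.maximum(tp + fp + fn, 1)  # guard empty classes
        return iou.mean()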

4. Visual place recognition

  1. Setup conda environment

    conda create -n ciconv python=3.9 mamba -c conda-forge
    conda activate ciconv
    mamba install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 scikit-image -c pytorch
  2. Navigate to experiments/4_visual_place_recognition/cnnimageretrieval-pytorch/.

  3. Run

    git submodule update --init # download a fork of cnnimageretrieval-pytorch
    sh cirtorch/utils/setup_tests.sh # download datasets and pre-trained models 
    python3 -m cirtorch.examples.test --network-path data/networks/retrieval-SfM-120k_w_resnet101_gem/model.path.tar --multiscale '[1, 1/2**(1/2), 1/2]' --datasets '247tokyo1k' --whitening 'retrieval-SfM-120k'
  4. Use --network-path retrievalSfM120k-resnet101-gem to compare against the vanilla method (i.e. without the color-invariant-trained ResNet101).

  5. Use --datasets 'gp_dl_nr' to test on the GardensPointWalking dataset.

Selected results from the paper:

| Method                   | Tokyo 24/7 (mAP) |
| ------------------------ | ---------------- |
| ResNet101 GeM [baseline] | 85.0             |
| W-ResNet101 GeM [ours]   | 88.3             |
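
Both rows use GeM (generalized mean) pooling from cnnimageretrieval-pytorch. For readers unfamiliar with it, an illustrative PyTorch version of GeM pooling is sketched below (the repository uses its own implementation):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GeM(nn.Module):
        """Generalized mean pooling: average pooling of x^p, then ^(1/p).
        p = 1 gives average pooling; large p approaches max pooling."""
        def __init__(self, p=3.0, eps=1e-6):
            super().__init__()
            self.p = nn.Parameter(torch.ones(1) * p)  # learnable exponent
            self.eps = eps

        def forward(self, x):
            return F.avg_pool2d(x.clamp(min=self.eps).pow(self.p),
                                (x.size(-2), x.size(-1))).pow(1.0 / self.p)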

Citation

If you find this repository useful for your work, please cite as follows:

@article{lengyel2021zeroshot,
      title={Zero-Shot Domain Adaptation with a Physics Prior}, 
      author={Attila Lengyel and Sourav Garg and Michael Milford and Jan C. van Gemert},
      year={2021},
      eprint={2108.05137},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}