FADNet++: Real-Time and Accurate Disparity Estimation with Configurable Networks

Overview

This repository contains the code (in PyTorch) for the "FADNet++" paper.

Contents

  1. Introduction
  2. Usage
  3. Results
  4. Acknowledgement
  5. Contacts

Introduction

We propose an efficient and accurate deep network for disparity estimation, named FADNet, with three main features:

  • It exploits efficient 2D-based correlation layers with stacked blocks to preserve fast computation.
  • It combines residual structures to make the deeper model easier to learn.
  • It produces multi-scale predictions, enabling a multi-scale weight-scheduling training technique that improves accuracy (see the sketch below).
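
To illustrate the third point, here is a minimal PyTorch sketch of a weight-scheduled multi-scale loss (the function and its arguments are ours for illustration, not the repository's implementation):

import torch.nn.functional as F

def multiscale_loss(preds, target, weights):
    # preds:   list of predicted disparity maps, coarse to fine
    # target:  full-resolution ground-truth disparity, shape (B, 1, H, W)
    # weights: per-scale loss weights for the current training round
    total = 0.0
    for pred, w in zip(preds, weights):
        if w == 0:
            continue
        # Downsample the ground truth to the prediction's resolution; depending
        # on the convention, disparity magnitudes may also need rescaling by
        # the downsampling factor.
        gt = F.interpolate(target, size=pred.shape[-2:], mode="bilinear",
                           align_corners=False)
        total = total + w * F.l1_loss(pred, gt)
    return total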

Usage

Dependencies

Package Installation

  • Execute "sh compile.sh" to compile the libraries needed by GANet.
  • Enter "layers_package" and execute "sh install.sh" to install the customized layers, including the Channel Normalization layer and the Resample layer.

We also release a Docker image of this project, which is fully configured and can be used directly. Please refer to this website for the image.

Usage of the Scene Flow dataset

Download the RGB cleanpass images and their disparity maps for the three subsets: FlyingThings3D, Driving, and Monkaa. Organize them as follows:

- FlyingThings3D_release/frames_cleanpass
- FlyingThings3D_release/disparity
- driving_release/frames_cleanpass
- driving_release/disparity
- monkaa_release/frames_cleanpass
- monkaa_release/disparity

Put them in the data/ folder (or create soft links there). *train.sh* locates the data root at data/ by default.
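
Before training, you can sanity-check the layout with a short Python snippet (a minimal sketch; it only checks the directories listed above):

import os

DATA_ROOT = "data"  # the default data root used by train.sh
SUBSETS = ["FlyingThings3D_release", "driving_release", "monkaa_release"]

for subset in SUBSETS:
    for sub in ("frames_cleanpass", "disparity"):
        path = os.path.join(DATA_ROOT, subset, sub)
        print(path, "OK" if os.path.isdir(path) else "MISSING")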

Train

We use template scripts stored in exp_configs to configure the training task. A sample, "fadnet.conf", is shown below:

net=fadnet
loss=loss_configs/fadnet_sceneflow.json
outf_model=models/${net}-sceneflow
logf=logs/${net}-sceneflow.log

lr=1e-4
devices=0,1,2,3
dataset=sceneflow
trainlist=lists/SceneFlow.list
vallist=lists/FlyingThings3D_release_TEST.list
startR=0
startE=0
batchSize=16
maxdisp=-1
model=none
#model=fadnet_sceneflow.pth
| Parameter | Description | Options |
|---|---|---|
| net | network architecture name | dispnets, dispnetc, dispnetcss, fadnet, psmnet, ganet |
| loss | loss weight scheduling configuration file | depends on the training scheme |
| outf_model | folder name to store the model files | \ |
| logf | log file name | \ |
| lr | initial learning rate | \ |
| devices | GPU device IDs to use | depends on the hardware system |
| dataset | dataset name to train | sceneflow |
| train(val)list | sample lists for training/validation | \ |
| startR | the round index to start training (for restarting from a checkpoint) | \ |
| startE | the epoch index to start training (for restarting from a checkpoint) | \ |
| batchSize | the number of samples per batch | \ |
| maxdisp | the maximum disparity that the model tries to predict | \ |
| model | the model file path of the checkpoint | \ |

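The loss file drives the multi-scale weight scheduling across training rounds, and startR/startE let you resume in the middle of such a schedule. As a purely hypothetical illustration of the idea (the JSON schema below is invented; the real one is defined by the files in loss_configs), a round-based schedule could be consumed like this:

import json

# Hypothetical schema: an epoch count and one weight per prediction scale
# (coarse to fine) for each training round.
schedule = json.loads("""
{"rounds": [
    {"epochs": 20, "weights": [0.32, 0.16, 0.08, 0.04]},
    {"epochs": 20, "weights": [0.60, 0.32, 0.08, 0.04]},
    {"epochs": 20, "weights": [1.00, 0.00, 0.00, 0.00]}
]}
""")

startR = 0  # matches startR in the configuration file
for r, rnd in enumerate(schedule["rounds"][startR:], start=startR):
    # These weights would be passed to the multi-scale loss for this round.
    print("round %d: %d epochs, weights %s" % (r, rnd["epochs"], rnd["weights"]))
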
We have integrated PSMNet and GANet for comparison. The sample configuration files are also given.

To start training, run dnn=CONFIG_FILE sh train.sh, for example:

dnn=fadnet sh train.sh

Note that CONFIG_FILE is given without the ".conf" suffix.
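
Similarly, to train one of the integrated comparison models, pass its configuration name (this assumes the corresponding sample file, e.g. psmnet.conf, is present in exp_configs):

dnn=psmnet sh train.sh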

Evaluation

We provide two evaluation modes, test and detect. test requires ground-truth disparity for the testing samples and reports the average end-point error (EPE). detect needs no ground truth, so no EPE is computed; instead, it stores the predicted disparity map for each sample in the given list.
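
For reference, the EPE is simply the mean absolute difference between the predicted and ground-truth disparities over valid pixels; a minimal sketch (the function name and the validity convention are ours):

import torch

def end_point_error(pred, gt, max_disp=None):
    # Mean absolute disparity error over valid pixels; here we treat
    # non-positive ground-truth values as invalid, a common convention.
    valid = gt > 0
    if max_disp is not None and max_disp > 0:
        valid = valid & (gt < max_disp)
    return (pred[valid] - gt[valid]).abs().mean()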

For the test mode, one can revise test.sh and run sh test.sh. The contents of test.sh are as follows:

net=fadnet
maxdisp=-1
dataset=sceneflow
trainlist=lists/SceneFlow.list
vallist=lists/FlyingThings3D_release_TEST.list

loss=loss_configs/test.json
outf_model=models/test/
logf=logs/${net}_test_on_${dataset}.log

lr=1e-4
devices=0,1,2,3
startR=0
startE=0
batchSize=8
model=models/fadnet.pth
python main.py --cuda --net $net --loss $loss --lr $lr \
               --outf $outf_model --logFile $logf \
               --devices $devices --batch_size $batchSize \
               --trainlist $trainlist --vallist $vallist \
               --dataset $dataset --maxdisp $maxdisp \
               --startRound $startR --startEpoch $startE \
               --model $model 

Most of the parameters in test.sh are similar to those for training. You can simply ignore trainlist, loss, and outf_model, since they are not used in test mode.

For the detect mode, one can revise detect.sh and run sh detect.sh. The contents of detect.sh are as follows:

net=fadnet
dataset=sceneflow

model=models/fadnet.pth
outf=detect_results/${net}-${dataset}/

filelist=lists/FlyingThings3D_release_TEST.list
filepath=data

CUDA_VISIBLE_DEVICES=0 python detecter.py --model $model --rp $outf --filelist $filelist --filepath $filepath --devices 0 --net ${net} 

You can revise the value of outf to change the folder that stores the predicted disparity maps.
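
Scene Flow disparity maps are commonly stored in the PFM format; assuming detecter.py writes its outputs as .pfm files (check the files it actually produces), a minimal reader looks like this:

import re
import numpy as np

def read_pfm(path):
    # Minimal PFM reader; returns a float32 array with the origin at the top.
    with open(path, "rb") as f:
        header = f.readline().decode().rstrip()
        color = header == "PF"  # "PF" = 3 channels, "Pf" = 1 channel
        dims = re.match(r"^(\d+)\s+(\d+)\s*$", f.readline().decode())
        width, height = int(dims.group(1)), int(dims.group(2))
        scale = float(f.readline().decode().rstrip())
        endian = "<" if scale < 0 else ">"  # negative scale = little-endian
        data = np.fromfile(f, endian + "f")
        shape = (height, width, 3) if color else (height, width)
        return np.flipud(data.reshape(shape))  # PFM stores rows bottom-up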

Finetuning on KITTI datasets and result submission

We reuse the code from PSMNet to finetune the pretrained models on the KITTI datasets and to generate disparity maps for submission. Use finetune.sh and submission.sh, respectively.

Pretrained Model

Update (2020/2/6): we have released the pre-trained Scene Flow model.

| KITTI 2015 | Scene Flow | KITTI 2012 |
|---|---|---|
| / | Google Drive | / |

Results

Results on Scene Flow dataset

| Model | EPE | GPU memory during inference (GB) | Runtime (ms) on Tesla V100 |
|---|---|---|---|
| FADNet | 0.83 | 3.87 | 48.1 |
| DispNetC | 1.68 | 1.62 | 18.7 |
| PSMNet | 1.09 | 13.99 | 399.3 |
| GANet | 0.84 | 29.1 | 2251.1 |

Citation

If you find the code and paper useful in your work, please cite our conference paper:

@inproceedings{wang2020fadnet,
  title={{FADNet}: A Fast and Accurate Network for Disparity Estimation},
  author={Wang, Qiang and Shi, Shaohuai and Zheng, Shizhen and Zhao, Kaiyong and Chu, Xiaowen},
  booktitle={2020 {IEEE} International Conference on Robotics and Automation ({ICRA} 2020)},
  pages={101--107},
  year={2020}
}

Acknowledgement

We acknowledge the following repositories and papers, as our project uses some of their code.

Contacts

[email protected]

Any discussions or concerns are welcome!
