MVSDF - Learning Signed Distance Field for Multi-view Surface Reconstruction

Intro

This is the official implementation of the ICCV 2021 paper Learning Signed Distance Field for Multi-view Surface Reconstruction.

In this work, we introduce a novel neural surface reconstruction framework that leverages the knowledge of stereo matching and feature consistency to optimize the implicit surface representation. More specifically, we apply a signed distance field (SDF) and a surface light field to represent the scene geometry and appearance, respectively. The SDF is directly supervised by geometry from stereo matching, and is refined by optimizing the multi-view feature consistency and the fidelity of rendered images. Our method improves the robustness of geometry estimation and supports the reconstruction of complex scene topologies. Extensive experiments have been conducted on the DTU, EPFL, and Tanks and Temples datasets. Compared to previous state-of-the-art methods, ours achieves better mesh reconstruction in wide open scenes without requiring masks as input.
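
As a hedged summary of this setup, the geometry and appearance networks are optimized jointly under a weighted sum of the three supervision signals above (the notation and weights below are illustrative, not the paper's exact formulation):

$$\mathcal{L} = \mathcal{L}_{\mathrm{geo}} + \lambda_f \, \mathcal{L}_{\mathrm{feat}} + \lambda_c \, \mathcal{L}_{\mathrm{color}}$$

Here the geometry term supervises the SDF with stereo-matching geometry, the feature term enforces multi-view feature consistency, the color term measures the fidelity of rendered images against the inputs, and the lambdas are hypothetical balancing weights.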

How to Use

Environment Setup

The code is tested in the following environment (manually installed packages only). Newer versions of these packages should also work.

dependencies:
  - cudatoolkit=10.2.89
  - numpy=1.19.2
  - python=3.8.8
  - pytorch=1.7.1
  - tqdm=4.60.0
  - pip:
    - cvxpy==1.1.12
    - gputil==1.4.0
    - imageio==2.9.0
    - open3d==0.13.0
    - opencv-python==4.5.1.48
    - pyhocon==0.3.57
    - scikit-image==0.18.3
    - scikit-learn==0.24.2
    - trimesh==3.9.13
    - pybind11==2.9.0
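
The list above is in conda's environment.yml format. Assuming it is saved as environment.yml (the filename is an assumption; check the repository for the actual file), the environment can be created and activated with:

conda env create -f environment.yml -n mvsdf
conda activate mvsdf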

Data Preparation

Download the preprocessed DTU datasets from here.
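
The commands below expect one imfunc4 directory per scan. A minimal sketch of the assumed layout (the exact contents of imfunc4 are an assumption inferred from the paths used in this README):

<DATA_DIR>/
  scan24/
    imfunc4/    # preprocessed inputs for scan 24 (e.g. images, cameras, stereo geometry)
  scan37/
    imfunc4/
  ...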

Training

cd code
python training/exp_runner.py --data_dir <DATA_DIR>/scan<SCAN>/imfunc4 --batch_size 8 --nepoch 1800 --expname dtu_<SCAN>

The results will be written to exps/mvsdf_dtu_<SCAN>.
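
For example, to train on scan 24 with the data stored under ./data (a placeholder path):

python training/exp_runner.py --data_dir ./data/scan24/imfunc4 --batch_size 8 --nepoch 1800 --expname dtu_24

This run writes its results to exps/mvsdf_dtu_24.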

Trained Models

Download the trained models and put them in the exps folder. This set of models achieves the following results.

Scan   Chamfer (mm)   PSNR (dB)
24     0.846          24.67
37     1.894          20.15
40     0.895          25.15
55     0.435          23.19
63     1.067          26.24
65     0.903          26.90
69     0.746          26.54
83     1.241          25.15
97     1.009          25.71
105    1.320          26.48
106    0.867          28.81
110    0.842          23.16
114    0.340          27.51
118    0.472          28.46
122    0.466          27.71
Mean   0.890          25.72

Testing

python evaluation/eval.py --data_dir <DATA_DIR>/scan<SCAN>/imfunc4 --expname dtu_<SCAN> [--eval_rendering]

Add the --eval_rendering flag to generate and evaluate rendered images. The results will be written to evals/mvsdf_dtu_<SCAN>.
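
For example, to test the scan 24 model trained above and also evaluate its renderings (./data is again a placeholder path):

python evaluation/eval.py --data_dir ./data/scan24/imfunc4 --expname dtu_24 --eval_rendering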

Trimming

cd mesh_cut
python setup.py build_ext -i  # compile
python mesh_cut.py <INPUT_MESH> <OUTPUT_MESH> [--thresh 15 --smooth 10]
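
For example, with hypothetical mesh file names (the actual input path depends on where the testing step wrote the mesh):

python mesh_cut.py scan24_mesh.ply scan24_trimmed.ply --thresh 15 --smooth 10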

Note that this part of the code can only be used for research purposes. Please refer to mesh_cut/IBFS/license.txt for details.

Evaluation

Apart from the official implementation, you can also use my re-implemented evaluation script.
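
For reference, here is a minimal sketch of what a Chamfer-style evaluation computes: the mean bidirectional nearest-neighbor distance between points sampled from the predicted mesh and the ground-truth point cloud. This is an illustration only, not the official DTU protocol or the script mentioned above, and the file names are placeholders. It uses only trimesh and scikit-learn from the dependency list.

import numpy as np
import trimesh
from sklearn.neighbors import NearestNeighbors

def chamfer(a, b):
    # Mean of the two directed average nearest-neighbor distances between point sets.
    d_ab = NearestNeighbors(n_neighbors=1).fit(b).kneighbors(a)[0].mean()
    d_ba = NearestNeighbors(n_neighbors=1).fit(a).kneighbors(b)[0].mean()
    return 0.5 * (d_ab + d_ba)

pred = np.asarray(trimesh.load("scan24_trimmed.ply").sample(100000))  # points sampled on the predicted mesh
gt = np.asarray(trimesh.load("gt_points.ply").vertices)               # ground-truth point cloud
print(chamfer(pred, gt))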

Citation

If you find our work useful in your research, please kindly cite

@inproceedings{zhang2021learning,
	title={Learning Signed Distance Field for Multi-view Surface Reconstruction},
	author={Zhang, Jingyang and Yao, Yao and Quan, Long},
	booktitle={International Conference on Computer Vision (ICCV)},
	year={2021}
}