[ICCV21] Code for RetrievalFuse: Neural 3D Scene Reconstruction with a Database

Overview

RetrievalFuse

Paper | Project Page | Video

RetrievalFuse: Neural 3D Scene Reconstruction with a Database
Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai
ICCV2021

This repository contains the code for the ICCV 2021 paper RetrievalFuse, a novel approach for 3D reconstruction from low-resolution distance field grids and from point clouds.

In contrast to traditional learned generative models, which encode the full generative process into a neural network and can struggle to maintain local details at the scene level, we introduce a new method that directly leverages scene geometry from the training database.

Files and Folders


The broad code structure is as follows:

| File / Folder | Description |
| --- | --- |
| `config/super_resolution` | Super-resolution experiment configs |
| `config/surface_reconstruction` | Surface reconstruction experiment configs |
| `config/base` | Defaults for configurations |
| `config/config_handler.py` | Config file parser |
| `data/splits` | Training and validation splits for different datasets |
| `dataset/scene.py` | SceneHandler class for managing access to scene data samples |
| `dataset/patched_scene_dataset.py` | PyTorch dataset class for scene data |
| `external/ChamferDistancePytorch` | For calculating a rough chamfer distance between prediction and target during training |
| `model/attention.py` | Attention, folding and unfolding modules |
| `model/loss.py` | Loss functions |
| `model/refinement.py` | Refinement network |
| `model/retrieval.py` | Retrieval network |
| `model/unet.py` | U-Net model used as the backbone of the refinement network |
| `runs/` | Checkpoints and visualizations for experiments are dumped here |
| `trainer/train_retrieval.py` | Lightning module for training the retrieval network |
| `trainer/train_refinement.py` | Lightning module for training the refinement network |
| `util/arguments.py` | Argument parsing (additional arguments apart from those in the config) |
| `util/filesystem_logger.py` | Copies the source code for each run into the experiment log directory |
| `util/metrics.py` | Rough metrics for logging during training |
| `util/mesh_metrics.py` | Final metrics computed on meshes |
| `util/retrieval.py` | Script to dump retrievals once the retrieval network has been trained; needed for training refinement |
| `util/visualizations.py` | Utility scripts for visualizations |
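For a quick look at what an experiment configuration contains, the following minimal sketch reads one of the YAML files under `config/` directly. This is not the repo's own parser: the training scripts go through `config/config_handler.py`, which presumably also applies the defaults from `config/base`, so treat this purely as an inspection aid.

```python
# Minimal sketch for peeking into an experiment config. This is NOT the repo's own
# parser (config/config_handler.py is); it only assumes that the files under
# config/ are plain YAML, as their .yaml extension suggests.
import yaml  # provided by PyYAML

CONFIG_PATH = "config/super_resolution/ShapeNetV2/retrieval_008_064.yaml"

with open(CONFIG_PATH) as f:
    cfg = yaml.safe_load(f)

print(sorted(cfg.keys()))  # list the top-level options defined for this experiment
```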

The `data/` directory has the following layout:

```
data                    # root data directory
├── sdf_008             # low-res (8^3) distance fields
│   ├── <dataset>
│   │   ├── <sample_0>
│   │   ├── <sample_1>
│   │   ├── <sample_2>
│   │   ...
│   ├── <dataset>
│   ...
├── sdf_016             # low-res (16^3) distance fields
│   ├── <dataset>
│   │   ├── <sample_0>
│   │   ├── <sample_1>
│   │   ├── <sample_2>
│   │   ...
│   ...
├── sdf_064             # high-res (64^3) distance fields
│   ├── <dataset>
│   │   ├── <sample_0>
│   │   ├── <sample_1>
│   │   ├── <sample_2>
│   │   ...
│   ...
├── pc_20K              # point cloud inputs
│   ├── <dataset>
│   │   ├── <sample_0>
│   │   ├── <sample_1>
│   │   ├── <sample_2>
│   │   ...
│   ...
├── splits              # train/val splits
├── size                # data needed by SceneHandler class (autocreated on first run)
├── occupancy           # data needed by SceneHandler class (autocreated on first run)
```
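To verify that processed data landed in the expected places, here is a small sketch that relies only on the top-level directory names above and makes no assumptions about how individual dataset or sample files are named:

```python
# Sanity check for the data/ layout above; relies only on the top-level directory
# names and makes no assumption about dataset or sample file naming.
from pathlib import Path

DATA_ROOT = Path("data")

for subdir in ["sdf_008", "sdf_016", "sdf_064", "pc_20K", "splits"]:
    root = DATA_ROOT / subdir
    if not root.is_dir():
        print(f"{subdir}: missing")
        continue
    # Count all files recursively, i.e. roughly one entry per (dataset, sample) pair.
    num_files = sum(1 for p in root.rglob("*") if p.is_file())
    print(f"{subdir}: {num_files} files")
```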

Dependencies


Install the dependencies using pip:

```bash
pip install -r requirements.txt
```

Be sure that you pull the `ChamferDistancePytorch` submodule in `external` (e.g. via `git submodule update --init --recursive`).
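As a quick, hedged smoke test after installing, the snippet below simply imports the packages this README refers to elsewhere (PyTorch and Lightning for the trainers, trimesh for point cloud sampling, wandb for logging). The authoritative dependency list is `requirements.txt`; treat the package selection here as an assumption.

```python
# Smoke test for the environment; the package list is inferred from this README,
# not taken from requirements.txt, so adjust it as needed.
import importlib

for name in ["torch", "pytorch_lightning", "trimesh", "wandb"]:
    module = importlib.import_module(name)
    print(f"{name}: {getattr(module, '__version__', 'unknown version')}")
```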

Data Preparation


For ShapeNetV2 and Matterport3D, get the appropriate meshes from the respective datasets. For 3DFRONT, get the 3DFUTURE meshes and 3DFRONT scripts, and use our fork of 3D-FRONT-ToolBox to create room meshes.

Once you have the meshes, use our fork of sdf-gen to create the low-res distance field inputs and the high-res targets. To create the point cloud inputs, simply use trimesh.sample.sample_surface (see util/misc/sample_scene_point_clouds; an illustrative sketch follows the list below). Place the processed data in the appropriate directories:

  • data/sdf_008/ or data/sdf_016/ for low-res inputs

  • data/pc_20K/ for point cloud inputs

  • data/sdf_064/ for targets
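As mentioned above, the script used for point cloud creation is `util/misc/sample_scene_point_clouds`; the sketch below is only a rough illustration of 20K-point surface sampling with `trimesh.sample.sample_surface`, and the input path, output path and file format are assumed placeholders rather than the repo's actual convention.

```python
# Rough illustration of creating a point cloud input with trimesh.sample.sample_surface.
# The repo's own script (util/misc/sample_scene_point_clouds) defines the real output
# format; the mesh path and the .npy dump below are hypothetical placeholders.
import numpy as np
import trimesh

mesh = trimesh.load("scene_mesh.obj", force="mesh")                  # hypothetical input mesh
points, _face_indices = trimesh.sample.sample_surface(mesh, 20000)   # 20K surface points
np.save("data/pc_20K/example_dataset/example_scene.npy",
        np.asarray(points, dtype=np.float32))
```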

Training the Retrieval Network


To train the retrieval network, use the following command:

```bash
python trainer/train_retrieval.py --config config/<config> --val_check_interval 5 --experiment retrieval --wandb_main --sanity_steps 1
```

We provide some sample configurations for retrieval.

For super-resolution, e.g.

  • config/super_resolution/ShapeNetV2/retrieval_008_064.yaml
  • config/super_resolution/3DFront/retrieval_008_064.yaml
  • config/super_resolution/Matterport3D/retrieval_016_064.yaml

For surface reconstruction, e.g.

  • config/surface_reconstruction/ShapeNetV2/retrieval_128_064.yaml
  • config/surface_reconstruction/3DFront/retrieval_128_064.yaml
  • config/surface_reconstruction/Matterport3D/retrieval_128_064.yaml

Once trained, create the retrievals for the train/validation sets using the following commands:

```bash
python util/retrieval.py --mode map --retrieval_ckpt <trained_retrieval_ckpt> --config <retrieval_config>
python util/retrieval.py --mode compose --retrieval_ckpt <trained_retrieval_ckpt> --config <retrieval_config>
```

Training the Refinement Network


Use the following command to train the refinement network:

```bash
python trainer/train_refinement.py --config <config> --val_check_interval 5 --experiment refinement --sanity_steps 1 --wandb_main --retrieval_ckpt <retrieval_ckpt>
```

Again, sample configurations for refinement are provided in the config directory.

For super-resolution, e.g.

  • config/super_resolution/ShapeNetV2/refinement_008_064.yaml
  • config/super_resolution/3DFront/refinement_008_064.yaml
  • config/super_resolution/Matterport3D/refinement_016_064.yaml

For surface reconstruction, e.g.

  • config/surface_reconstruction/ShapeNetV2/refinement_128_064.yaml
  • config/surface_reconstruction/3DFront/refinement_128_064.yaml
  • config/surface_reconstruction/Matterport3D/refinement_128_064.yaml

Visualizations and Logs


Visualizations and checkpoints are dumped in the `runs/` directory. Logs are uploaded to the user's [Weights&Biases](https://wandb.ai/site) dashboard.

Citation


If you find our work useful in your research, please consider citing:

```bibtex
@inproceedings{siddiqui2021retrievalfuse,
  title = {RetrievalFuse: Neural 3D Scene Reconstruction with a Database},
  author = {Siddiqui, Yawar and Thies, Justus and Ma, Fangchang and Shan, Qi and Nie{\ss}ner, Matthias and Dai, Angela},
  booktitle = {Proc. International Conference on Computer Vision (ICCV)},
  month = oct,
  year = {2021}
}
```

License


The code from this repository is released under the MIT license.