
Overview

Neural Deformation Graphs

Project Page | Paper | Video


Neural Deformation Graphs for Globally-consistent Non-rigid Reconstruction
Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Justus Thies, Angela Dai, Matthias Nießner
CVPR 2021 (Oral Presentation)

This repository contains the code for the CVPR 2021 paper Neural Deformation Graphs, a novel approach for globally-consistent deformation tracking and 3D reconstruction of non-rigid objects.

Specifically, we implicitly model a deformation graph via a deep neural network and impose per-frame viewpoint consistency as well as inter-frame graph and surface consistency constraints in a self-supervised fashion.

This results in a differentiable construction of a deformation graph that can handle the deformations present in the whole sequence.
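As a rough conceptual sketch of what "implicitly modeling a deformation graph via a deep neural network" can look like (an illustration with assumed names and dimensions, not the repository's actual architecture), a small network can map a per-frame latent code to per-node translations and rotations:

```python
import torch
import torch.nn as nn

class DeformationGraphMLP(nn.Module):
    """Illustrative only: maps a per-frame latent code to per-node
    translations and rotations of a deformation graph."""

    def __init__(self, num_nodes=100, latent_dim=64, hidden_dim=256):
        super().__init__()
        self.num_nodes = num_nodes
        self.backbone = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # 3 translation + 4 quaternion parameters per graph node
        self.head = nn.Linear(hidden_dim, num_nodes * 7)

    def forward(self, frame_code):
        # frame_code: (batch, latent_dim) latent code of one input frame
        out = self.head(self.backbone(frame_code)).view(-1, self.num_nodes, 7)
        translations = out[..., :3]
        rotations = nn.functional.normalize(out[..., 3:], dim=-1)  # unit quaternions
        return translations, rotations

# Usage: one optimizable latent code per frame of the sequence
# codes = nn.Embedding(num_frames, 64)
# t, q = DeformationGraphMLP()(codes(torch.tensor([0])))
```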

Install all dependencies

  • Download the latest conda here.

  • To create a conda environment with all required packages, run the following command:

conda env create -f resources/env.yml

The above command creates a conda environment with the name ndg.

  • Compile the external dependencies inside the external directory by executing:
conda activate ndg
./build_external.sh

The external dependencies are PyMarchingCubes, gaps and Eigen.

Generate data for visualization & training

In our experiments we use depth inputs from 4 camera views. These depth maps were captured with 4 Kinect Azure sensors. For quantitative evaluation we also used synthetic data, where 4 depth views were rendered from ground-truth meshes. In both cases, screened Poisson reconstruction (as implemented in MeshLab) was used to obtain meshes for data generation. An example mesh sequence for the synthetic doozy sequence can be downloaded here.
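The repository expects these fused meshes as input. Purely as an illustration of the fusion step described above (the paper's data was fused with MeshLab; all paths, intrinsics, and parameters below are placeholder assumptions), a combined point cloud from several depth views can also be meshed with screened Poisson reconstruction in Open3D:

```python
import open3d as o3d

# Placeholder intrinsics; in practice each Kinect view has its own calibration.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

merged = o3d.geometry.PointCloud()
for i in range(4):
    depth = o3d.io.read_image(f"view_{i}/depth_000000.png")  # hypothetical paths
    # extrinsic defaults to identity; pass each camera's pose to fuse into one frame
    merged += o3d.geometry.PointCloud.create_from_depth_image(depth, intrinsic)

merged.estimate_normals()  # Poisson reconstruction needs oriented normals
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=9)
o3d.io.write_triangle_mesh("out/meshes/doozy/mesh_000000.ply", mesh)  # assumed naming
```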

To generate training data from these meshes, place them in the directory out/meshes/doozy. Then run the following script, which produces the generated data samples in out/dataset/doozy:

./generate_data.sh

Visualize neural deformation graphs using pre-trained models

After data generation, you can already inspect the neural deformation graph estimation using a pre-trained model checkpoint. Place the checkpoint in the out/models directory and run the visualization:

./viz.sh

Reconstruction visualization can take a while; if you only want to inspect the graphs, you can uncomment the --viz_only_graph argument in viz.sh.

Within the Open3D viewer, you can switch between different settings using the following keys (a sketch of how such key bindings can be registered in Open3D follows the list):

  • N: toggle graph nodes and edges
  • G: toggle ground truth
  • D: show next
  • A: show previous
  • S: toggle smooth shading
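
For reference, here is a minimal sketch using Open3D's VisualizerWithKeyCallback; the geometries and the callback are placeholders, not the repository's viewer code:

```python
import open3d as o3d

# Placeholder geometries standing in for graph nodes/edges and the reconstruction.
graph_nodes = o3d.geometry.TriangleMesh.create_sphere(radius=0.05)
reconstruction = o3d.geometry.TriangleMesh.create_box()

state = {"graph_visible": True}

def toggle_graph(vis):
    # Add or remove the graph geometry without resetting the viewpoint.
    if state["graph_visible"]:
        vis.remove_geometry(graph_nodes, reset_bounding_box=False)
    else:
        vis.add_geometry(graph_nodes, reset_bounding_box=False)
    state["graph_visible"] = not state["graph_visible"]
    return False  # no further geometry update needed beyond the add/remove

vis = o3d.visualization.VisualizerWithKeyCallback()
vis.create_window()
vis.add_geometry(reconstruction)
vis.add_geometry(graph_nodes)
vis.register_key_callback(ord("N"), toggle_graph)
vis.run()
vis.destroy_window()
```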

Train a model from scratch

You can train a model from scratch using the train_graph.sh and train_shape.sh scripts, in that order. The model checkpoints and TensorBoard stats will be stored in out/experiments.

Optimize graph

To estimate a neural deformation graph from input observations, you need to specify the dataset to be used (inside out/dataset, generated beforehand as described above), and then training can be started using the following script:

./train_graph.sh

We ran all our experiments on an NVIDIA 2080 Ti GPU, for about 500k iterations. After the model has converged, you can visualize the optimized neural deformation graph using the viz.sh script.

To check convergence, you can visualize the loss curves with TensorBoard by running the following inside the out/experiments directory:

tensorboard --logdir=.

Optimize shape

To optimize shape, you need to initialize the graph with a pre-trained graph model. That means that inside train_shape.sh you need to specify graph_model_path, which should point to the converged checkpoint of the graph model (the graph model usually converges at around 500k iterations). The multi-MLP model can then be optimized to reconstruct the shape geometry by running:

./train_shape.sh

Similar to graph optimization, shape optimization also converges in about 500k iterations.
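
As a loose conceptual sketch of the multi-MLP idea (one small implicit function per graph node whose predictions are blended by distance-based weights), the snippet below illustrates the structure; the sizes, blending scheme, and names are assumptions, not the repository's implementation:

```python
import torch
import torch.nn as nn

class MultiMLPShape(nn.Module):
    """Illustrative only: one tiny SDF MLP per graph node, blended by
    soft weights derived from distance to the node centers."""

    def __init__(self, node_centers, hidden_dim=64):
        super().__init__()
        self.register_buffer("centers", node_centers)  # (num_nodes, 3)
        self.mlps = nn.ModuleList([
            nn.Sequential(nn.Linear(3, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1))
            for _ in range(node_centers.shape[0])
        ])

    def forward(self, points):
        # points: (num_points, 3) query locations in the canonical frame
        local = points[:, None, :] - self.centers[None, :, :]   # (P, N, 3)
        dists = local.norm(dim=-1)                               # (P, N)
        weights = torch.softmax(-dists / 0.1, dim=-1)            # closer nodes dominate
        sdf_per_node = torch.stack(
            [mlp(local[:, i, :]) for i, mlp in enumerate(self.mlps)], dim=1
        ).squeeze(-1)                                            # (P, N)
        return (weights * sdf_per_node).sum(dim=-1)              # blended SDF, (P,)

# Usage with a handful of random node centers and query points:
# model = MultiMLPShape(torch.randn(8, 3))
# sdf = model(torch.randn(1024, 3))
```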

Citation

If you find our work useful in your research, please consider citing:

@article{bozic2021neuraldeformationgraphs,
    title={Neural Deformation Graphs for Globally-consistent Non-rigid Reconstruction},
    author={Bo{\v{z}}i{\v{c}}, Alja{\v{z}} and Palafox, Pablo and Zollh{\"o}fer, Michael and Dai, Angela and Thies, Justus and Nie{\ss}ner, Matthias},
    journal={CVPR},
    year={2021}
}

Related work

Some other related works on non-rigid reconstruction by our group:

License

The code from this repository is released under the MIT license, except where otherwise stated (i.e., Eigen).
