NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go

Overview

This repository provides our implementation of the CVPR 2021 paper NeuroMorph. Our algorithm produces, in one go, i.e., in a single feed-forward pass, a smooth interpolation and point-to-point correspondences between two input 3D shapes. It is learned in a self-supervised manner from an unlabelled collection of deformable and heterogeneous shapes.

If you use our work, please cite:

@inproceedings{eisenberger2021neuromorph, 
  title={NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go}, 
  author={Eisenberger, Marvin and Novotny, David and Kerchenbaum, Gael and Labatut, Patrick and Neverova, Natalia and Cremers, Daniel and Vedaldi, Andrea}, 
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, 
  pages={7473--7483}, 
  year={2021}
}

Requirements

The code was tested with Python 3.8.10, PyTorch 1.9.1, and CUDA 10.2. It also requires the pytorch-geometric library (see the installation instructions) and matplotlib. Finally, MATLAB with the Statistics and Machine Learning Toolbox is used to pre-process certain datasets (we tested MATLAB R2019b and R2021b). The code should run on Linux, macOS, and Windows.

Installing NeuroMorph

Using Anaconda, you can install the required dependencies as follows:

conda create -n neuromorph python=3.8
conda activate neuromorph
conda install pytorch cudatoolkit=10.2 -c pytorch
conda install matplotlib
conda install pyg -c pyg -c conda-forge
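
After installation, you can quickly check from Python that PyTorch, CUDA, and PyTorch Geometric were picked up correctly. This is only a small sanity check and not part of the original instructions:

import torch
import torch_geometric

# Print the installed versions and whether a CUDA device is visible.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("PyTorch Geometric:", torch_geometric.__version__)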

Running NeuroMorph

In order to run NeuroMorph:

  • Specify the location of the datasets on your device under data_folder_ in param.py.
  • To use your own data, create a new dataset in data/data.py (see the sketch after this list for the kind of mesh data the pipeline consumes).
  • To train on FAUST remeshed, run the main script main_train.py. Modify the script as needed to train on different data.
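
As a rough illustration of the kind of input the pipeline works with, the hedged sketch below loads a folder of .off meshes into PyTorch Geometric Data objects (vertex positions in pos, triangles in face). The helper name and the example path are hypothetical; the actual dataset classes and conventions live in data/data.py and param.py.

import os
from torch_geometric.io import read_off

def load_off_meshes(mesh_dir):
    # Load every .off mesh in mesh_dir as a torch_geometric.data.Data
    # object with pos (vertices) and face (triangles).
    meshes = []
    for name in sorted(os.listdir(mesh_dir)):
        if name.endswith(".off"):
            meshes.append(read_off(os.path.join(mesh_dir, name)))
    return meshes

# Hypothetical example; adjust the path to where your meshes are stored:
# shapes = load_off_meshes("data/meshes/FAUST_r")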

For a more detailed tutorial, see the next section.

Reproducing the experiments

We show below how to reproduce the experiments on the FAUST remeshed data.

Data download

You can download the experimental mesh data made available by the authors of Deep Geometric Functional Maps. Download the FAUST_r.zip file from their site, unzip it, and move the contents of the directory to data/meshes/FAUST_r.
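
If you prefer to unpack the archive from Python, a minimal sketch is shown below. It assumes FAUST_r.zip is in the current directory; depending on the internal layout of the archive, you may still need to move the extracted files so that the .off meshes end up directly in data/meshes/FAUST_r.

import zipfile

# Extract the downloaded archive into the expected data folder.
with zipfile.ZipFile("FAUST_r.zip") as archive:
    archive.extractall("data/meshes/FAUST_r")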

Data preprocessing

Meshes must be subsampled and remeshed (for data augmentation during training) and geodesic distance matrices must be computed before the learning code runs. For this, we use the data_preprocessing/preprocess_dataset.m MATLAB script (tested with MATLAB R2019b and R2021b).

Start MATLAB and do the following:

cd <NEUROMORPH_ROOT>/data_preprocessing   % <NEUROMORPH_ROOT> is the repository root
preprocess_dataset("../data/meshes/FAUST_r/", ".off")

The result should be a list of MATLAB mesh files in a mat subfolder (e.g., data/meshes/FAUST_r/mat), plus additional data.
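
Optionally, you can sanity-check the preprocessing output from Python by opening one of the generated .mat files. This assumes scipy is installed (it is not a listed requirement), and the file name below is only a hypothetical example:

import scipy.io

# Load one of the preprocessed mesh files and list the arrays it contains.
mat = scipy.io.loadmat("data/meshes/FAUST_r/mat/tr_reg_000.mat")
print(sorted(mat.keys()))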

Model training

If you stored the data in the directory given above, you can train the model by running:

mkdir -p data/{checkpoint,out}
python main_train.py

The trained models will be saved as a series of checkpoints in data/out/. If you stored the data somewhere else, edit param.py to change the paths accordingly.
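
The checkpoint directories are named by time stamp. A small helper like the one below (not part of the repository) lists the most recent run, which you can then pass to main_test.py in the next step:

import os

out_dir = "data/out"
# Sort the run directories by modification time, oldest to newest.
runs = sorted(
    (d for d in os.listdir(out_dir) if os.path.isdir(os.path.join(out_dir, d))),
    key=lambda d: os.path.getmtime(os.path.join(out_dir, d)),
)
print("Latest run:", runs[-1] if runs else "none found")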

Model testing

Once training has completed, evaluate the trained model with main_test.py, specifying the checkpoint folder name:

python main_test.py <TIME_STAMP_FAUST>

Here, <TIME_STAMP_FAUST> is the name of one of the checkpoint directories saved in data/out/. This automatically saves correspondences and interpolations for the FAUST remeshed test set to data/out/. For reference, on FAUST you should expect a validation error of around 0.25 after 400 epochs.

Contributing

See the CONTRIBUTING file for how to help out.

License

NeuroMorph is MIT licensed, as described in the LICENSE file. NeuroMorph includes a few files from other open source projects, as further detailed in the same LICENSE file.
