Official implementation of "MetaSDF: Meta-learning Signed Distance Functions"


MetaSDF: Meta-learning Signed Distance Functions

Project Page | Paper | Data

Vincent Sitzmann*, Eric Ryan Chan*, Richard Tucker, Noah Snavely
Gordon Wetzstein
*denotes equal contribution

This is the official implementation of the paper "MetaSDF: Meta-Learning Signed Distance Functions".

In this paper, we show how we may effectively learn a prior over implicit neural representations using gradient-based meta-learning.
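
To give a feel for what "gradient-based meta-learning" means here, below is a minimal, first-order sketch of the idea: a small ReLU MLP is specialized to a single shape with a handful of inner-loop gradient steps starting from a meta-learned initialization. This is illustrative only - the model, loss, and hyperparameters are placeholders, and the actual training code (which also backpropagates through the inner loop) lives in this repository.

import copy
import torch
import torch.nn as nn

def make_sdf_mlp(hidden=256):
    """Hypothetical ReLU MLP mapping 3D coordinates to a signed distance."""
    return nn.Sequential(
        nn.Linear(3, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, 1),
    )

def adapt_to_shape(meta_model, coords, sdf_gt, inner_steps=5, inner_lr=1e-2):
    """Specialize the meta-learned initialization to one shape with a few
    gradient steps (the inner loop). First-order variant for brevity."""
    model = copy.deepcopy(meta_model)  # start from the learned initialization
    optim = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        loss = nn.functional.l1_loss(model(coords), sdf_gt)
        optim.zero_grad()
        loss.backward()
        optim.step()
    return model  # shape-specific SDF after only a few updates

# coords: (N, 3) sample points, sdf_gt: (N, 1) signed distances for one mesh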

While the paper shows this for the special case of SDFs with the ReLU nonlinearity, the approach works formidably well with other types of neural implicit representations as well - such as our work "SIREN"!

We show you how in our Colab notebook:

Explore MetaSDF in Colab

DeepSDF

A large part of this codebase (directory "3D") is based on the code from the terrific paper "DeepSDF" - check them out!

Get started

If you only want to experiment with MetaSDF, we have written a Colab that doesn't require installing anything and walks through a few other interesting properties of MetaSDF as well - for instance, it turns out you can train SIREN to fit any image in just three gradient descent steps!

If you want to reproduce all the experiments from the paper, you can then set up a conda environment with all dependencies like so:

conda env create -f environment.yml
conda activate metasdf

3D Experiments

Dataset Preprocessing

Before training a model, you'll first need to preprocess the training meshes. Please follow the preprocessing steps used by DeepSDF if using ShapeNet.
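
As a sanity check after preprocessing, you can peek at one of the resulting sample files. The sketch below assumes DeepSDF's SdfSamples format - an .npz with 'pos' and 'neg' arrays of (x, y, z, sdf) rows - so double-check against the DeepSDF documentation; the path is a placeholder.

import numpy as np

# Placeholder path to one preprocessed sample file (assumed DeepSDF SdfSamples format).
samples = np.load("data/SdfSamples/ShapeNetV2/03001627/example_chair.npz")
pos, neg = samples["pos"], samples["neg"]   # assumed keys: points with sdf >= 0 / < 0
print("positive samples:", pos.shape)       # expected (N, 4): x, y, z, sdf
print("negative samples:", neg.shape)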

Define an Experiment

Next, you'll need to define the model and hyperparameters for your experiment. Examples are given in 3D/curriculums.py, but feel free to make modifications. Although not present in the original paper, we've included some curriculums with positional encodings and smaller models. These generally perform on par with the original models but require much less memory.
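
For orientation, a curriculum is essentially a named bundle of model and training hyperparameters. The snippet below is purely illustrative - the field names are hypothetical and will not match 3D/curriculums.py exactly - so treat it as a mental model and copy an existing entry from that file when defining your own experiment.

# Hypothetical curriculum entry; see 3D/curriculums.py for the real schema.
my_experiment = {
    'output_dir': 'logs/my_experiment',      # where checkpoints/summaries go
    'hidden_features': 256,                  # smaller models need less memory
    'num_hidden_layers': 8,
    'positional_encoding': True,             # variant not in the original paper
    'inner_steps': 5,                        # gradient steps in the inner loop
    'inner_lr': 5e-3,
    'batch_size': 16,
    'epochs': 100,
    'train_split': 'splits/sv2_train.json',  # placeholder split files
    'test_split': 'splits/sv2_test.json',
}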

Train a Model

After you've preprocessed your data and have defined your curriculum, you're ready to start training! Navigate to the 3D/scripts directory and run

python run_train.py <curriculum name>.

If training is interrupted, pass the --load flag to continue training from where you left off.

You should begin seeing printouts of the loss, with a summary at every epoch. Checkpoints and TensorBoard summaries are saved to the 'output_dir' directory defined in your curriculum. We log the raw loss, which is either the composite loss or the L1 loss, depending on your experiment definition, as well as a 'Misclassified Percentage': the percentage of samples that the model incorrectly classified as inside or outside the mesh.
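
Concretely, a metric like the 'Misclassified Percentage' can be computed by comparing the sign of the predicted SDF against the sign of the ground truth - a rough sketch, not the repository's exact logging code:

import torch

def misclassified_percentage(pred_sdf, gt_sdf):
    """Fraction of sample points whose predicted inside/outside label
    (sign of the SDF) disagrees with the ground truth, as a percentage."""
    wrong = (torch.sign(pred_sdf) != torch.sign(gt_sdf))
    return 100.0 * wrong.float().mean().item()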

Reconstructing Meshes

After training a model, reconstruct some meshes using

python run_reconstruct.py <curriculum name> --checkpoint <checkpoint file name>.

The script will use the 'test_split' as defined in the curriculum.
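
Under the hood, turning an adapted SDF network into a mesh amounts to evaluating it on a dense grid and running marching cubes. The sketch below uses scikit-image's marching_cubes with a placeholder model and resolution; it is a simplified stand-in for what run_reconstruct.py does, not a copy of it.

import torch
from skimage.measure import marching_cubes

def sdf_to_mesh(model, resolution=128, bound=1.0, device="cpu"):
    """Evaluate an SDF network on a regular grid and extract the zero level set."""
    lin = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1)
    coords = grid.reshape(-1, 3).to(device)
    with torch.no_grad():
        sdf = model(coords).reshape(resolution, resolution, resolution).cpu().numpy()
    spacing = (2 * bound / (resolution - 1),) * 3
    verts, faces, normals, _ = marching_cubes(sdf, level=0.0, spacing=spacing)
    return verts - bound, faces  # shift vertices back into [-bound, bound]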

Evaluating Reconstructions

After reconstructing meshes, calculate Chamfer Distances between reconstructions and ground-truth meshes by running

python run_eval.py <reconstruction dir>.
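
For reference, a symmetric Chamfer distance between two point clouds can be computed with nearest-neighbour queries. The sketch below (using SciPy's cKDTree) illustrates the metric itself and is not necessarily identical to the evaluation script's implementation.

import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a, points_b):
    """Symmetric Chamfer distance between (N, 3) and (M, 3) point clouds,
    using mean squared nearest-neighbour distances in both directions."""
    d_ab, _ = cKDTree(points_b).query(points_a)  # for each point in A, nearest in B
    d_ba, _ = cKDTree(points_a).query(points_b)  # for each point in B, nearest in A
    return np.mean(d_ab ** 2) + np.mean(d_ba ** 2)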

Torchmeta

We're using the excellent torchmeta to implement hypernetworks.
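
The key feature torchmeta provides is modules that accept an explicit parameter dictionary at forward time, which keeps the inner-loop update differentiable. A minimal sketch of that pattern, assuming torchmeta's MetaSequential/MetaLinear API (not this repository's exact model code):

from collections import OrderedDict
import torch
import torch.nn as nn
from torchmeta.modules import MetaSequential, MetaLinear

model = MetaSequential(
    MetaLinear(3, 256), nn.ReLU(),
    MetaLinear(256, 1),
)

coords = torch.rand(1024, 3)
sdf_gt = torch.rand(1024, 1)

# One differentiable inner-loop step: gradients w.r.t. the current parameters ...
params = OrderedDict(model.named_parameters())
loss = nn.functional.l1_loss(model(coords, params=params), sdf_gt)
grads = torch.autograd.grad(loss, params.values(), create_graph=True)

# ... then forward again with the explicitly updated parameters.
updated = OrderedDict(
    (name, p - 1e-2 * g) for (name, p), g in zip(params.items(), grads)
)
adapted_pred = model(coords, params=updated)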

Citation

If you find our work useful in your research, please cite:

       @inproceedings{sitzmann2019metasdf,
            author = {Sitzmann, Vincent
                      and Chan, Eric R.
                      and Tucker, Richard
                      and Snavely, Noah
                      and Wetzstein, Gordon},
            title = {MetaSDF: Meta-Learning Signed
                     Distance Functions},
            booktitle = {Proc. NeurIPS},
            year={2020}
       }

Contact

If you have any questions, please feel free to email the authors.
