Code for "Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks", CVPR 2021

Overview

Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks


This repository contains the code that accompanies our CVPR 2021 paper Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks

You can find detailed usage instructions for training your own models and using our pretrained models below.

If you found this work influential or helpful for your research, please consider citing

@Inproceedings{Paschalidou2021CVPR,
     title = {Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks},
     author = {Paschalidou, Despoina and Katharopoulos, Angelos and Geiger, Andreas and Fidler, Sanja},
     booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
     year = {2021}
}

Installation & Dependencies

Our codebase depends on a number of Python packages, all of which are listed in environment.yaml.

For the visualizations, we use simple-3dviz, which is our easy-to-use library for visualizing 3D data using Python and ModernGL, and matplotlib for the colormaps. Note that simple-3dviz provides a lightweight and easy-to-use scene viewer based on wxpython. If you wish to use our scripts for visualizing the reconstructed primitives, you will also need to install wxpython.
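As a quick sanity check that the visualization stack works, here is a minimal sketch of opening an arbitrary mesh in the simple-3dviz scene viewer; the mesh path is a placeholder.

# Minimal sketch: open a mesh in the simple-3dviz scene viewer
# (requires wxpython, as noted above). The file path is a placeholder.
from simple_3dviz import Mesh
from simple_3dviz.window import show

show(
    Mesh.from_file("path/to/some_mesh.obj"),
    camera_position=(1.0, 1.0, 1.0),
    up_vector=(0, 1, 0)
)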

The simplest way to make sure that you have all dependencies in place is to use conda. You can create a conda environment called neural_parts using

conda env create -f environment.yaml
conda activate neural_parts

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace
pip install -e .

Demo


You can now test our code on various inputs. To this end, simply download some input samples together with our pretrained models on D-FAUST humans, ShapeNet chairs and ShapeNet planes from here. Then extract the neural_parts_demo.zip that you just downloaded into the demo folder. To run our demo on the D-FAUST humans, simply run

python demo.py ../config/dfaust_6.yaml --we ../demo/model_dfaust_6 --model_tag 50027_jumping_jacks:00135 --camera_target='-0.030173788,-0.10342446,-0.0021887198' --camera_position='0.076685235,-0.14528269,1.2060229' --up='0,1,0' --with_rotating_camera

This script should create a folder demo/output, where the per-primitive meshes are stored as .obj files. You can similarly run the demo on the input airplane

python demo.py ../config/shapenet_5.yaml --we ../demo/model_planes_5 --model_tag 02691156:7b134f6573e7270fb0a79e28606cb167 --camera_target='-0.030173788,-0.10342446,-0.0021887198' --camera_position='0.076685235,-0.14528269,1.2060229' --up='0,1,0' --with_rotating_camera
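If you want to inspect or post-process the per-primitive meshes in demo/output programmatically, the following is a rough sketch that merges them into a single mesh; the glob pattern is an assumption about the output file names, and trimesh is used here purely for illustration.

# Rough sketch: merge the per-primitive .obj files from demo/output into a
# single mesh for quick inspection. The glob pattern is an assumption about
# the file naming; trimesh is used here only for illustration.
import glob
import trimesh

parts = [trimesh.load(path, force="mesh") for path in sorted(glob.glob("demo/output/*.obj"))]
combined = trimesh.util.concatenate(parts)
combined.export("demo/output/combined.obj")
print("Merged {} primitives with {} faces in total".format(len(parts), len(combined.faces)))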

Usage

Once you have installed all dependencies and obtained the preprocessed data, you can start training new models from scratch, evaluating our pre-trained models and visualizing the recovered primitives using one of our pre-trained models.

Reconstruction

To generate meshes using a trained model, we provide the forward_pass.py and the visualize_predictions.py scripts. The difference between them is that the former performs the forward pass and saves the generated per-primitive meshes as .obj files, whereas the visualize_predictions.py script performs the forward pass and visualizes the predicted primitives using simple-3dviz. The forward_pass.py script is ideal for reconstructing inputs on a headless server, and you can run it by executing

python forward_pass.py path_to_config_yaml path_to_output_dir --weight_file path_to_weight_file --model_tag MODEL_TAG

where the argument --weight_file specifies the path to a trained model and the argument --model_tag defines the model_tag of the input to be reconstructed.

To run the visualize_predictions.py script you need to run

python visualize_predictions.py path_to_config_yaml path_to_output_dir --weight_file path_to_weight_file --model_tag MODEL_TAG

Using this script, you can easily render the predictions into .png images or a .gif, as well as perform various animations by rotating the camera. Furthermore, you can specify the camera position, the up vector and the camera target, and visualize the target mesh together with the predicted primitives simply by adding the --mesh argument.

Evaluation

For evaluation of the models we provide the script evaluate.py. You can run it using:

python evaluate.py path_to_config_yaml path_to_output_dir

The script reconstructs the input and evaluates the generated meshes using a standardized protocol. For each input, the script generates a .npz file that contains the various metrics for that particular input. Note that this script can also be executed multiple times in parallel in order to speed up the evaluation process. For example, if you wish to run the evaluation with 6 parallel processes, you can simply run

for i in {1..6}; do python evaluate.py path_to_config_yaml path_to_output_dir & done
[1] 9489
[2] 9490
[3] 9491
[4] 9492
[5] 9493
[6] 9494

wait
Running code on cpu
Running code on cpu
Running code on cpu
Running code on cpu
Running code on cpu
Running code on cpu

Again the script generates a per-input file in the output directory with the computed metrics.
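Once all per-input .npz files have been written, you can aggregate them with a few lines of numpy. The sketch below assumes a metric stored under the key "iou", which is a hypothetical name; inspect one of the files to see the actual metric keys.

# Rough sketch: average a metric over all per-input .npz files produced by
# evaluate.py. The key "iou" is a hypothetical example; inspect a file
# (e.g. with np.load(f).files) to see the actual metric names.
import glob
import numpy as np

metric_files = sorted(glob.glob("path_to_output_dir/*.npz"))
values = [float(np.load(f)["iou"]) for f in metric_files]
print("Mean over {} inputs: {:.4f}".format(len(values), np.mean(values)))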

Training

Finally, to train a new network from scratch, we provide the train_network.py script. To execute this script, you need to specify the path to the configuration file you wish to use and the path to the output directory, where the trained models and the training statistics will be saved. Namely, to train a new model from scratch, you simply need to run

python train_network.py path_to_config_yaml path_to_output_dir

Note that it is also possible to start from a previously trained model by specifying the --weight_file argument, which should contain the path to a previously trained model. Furthermore, by using the arguments --model_tag and --category_tag, you can also train your network on a particular model (e.g. a specific plane, car, human etc.) or a specific object category (e.g. planes, chairs etc.).

Note that if you want to use the RAdam optimizer during training, you will also have to download and install the corresponding code from this repository.
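For reference, here is a minimal sketch of plugging that optimizer into a PyTorch training script, assuming the linked code is installed as the radam package; the model and learning rate are placeholders.

# Minimal sketch: using RAdam in place of a standard PyTorch optimizer,
# assuming the code from the linked repository is installed as `radam`.
# The model and learning rate below are placeholders.
import torch
from radam import RAdam

model = torch.nn.Linear(128, 128)          # placeholder network
optimizer = RAdam(model.parameters(), lr=1e-4)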

License

Our code is released under the MIT license, which practically allows anyone to do anything with it. The full license text can be found in the LICENSE file.

Relevant Research

Below we list some papers that are relevant to our work.

Ours:

  • Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image pdf, project-page
  • Superquadrics Revisited: Learning 3D Shape Parsing beyond Cuboids pdf, project-page

By Others:

  • Learning Shape Abstractions by Assembling Volumetric Primitives pdf
  • 3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks pdf
  • Im2Struct: Recovering 3D Shape Structure From a Single RGB Image pdf
  • Learning shape templates with structured implicit functions pdf
  • CvxNet: Learnable Convex Decomposition pdf