Training and Evaluation Code for Neural Volumes

Overview

This repository contains training and evaluation code for the paper Neural Volumes. The method learns a 3D volumetric representation of objects & scenes that can be rendered and animated from only calibrated multi-view video.

Citing Neural Volumes

If you use Neural Volumes in your research, please cite the paper:

@article{Lombardi:2019,
 author = {Stephen Lombardi and Tomas Simon and Jason Saragih and Gabriel Schwartz and Andreas Lehrmann and Yaser Sheikh},
 title = {Neural Volumes: Learning Dynamic Renderable Volumes from Images},
 journal = {ACM Trans. Graph.},
 issue_date = {July 2019},
 volume = {38},
 number = {4},
 month = jul,
 year = {2019},
 issn = {0730-0301},
 pages = {65:1--65:14},
 articleno = {65},
 numpages = {14},
 url = {http://doi.acm.org/10.1145/3306346.3323020},
 doi = {10.1145/3306346.3323020},
 acmid = {3323020},
 publisher = {ACM},
 address = {New York, NY, USA},
}

File Organization

The root directory contains several subdirectories and files:

data/ --- custom PyTorch Dataset classes for loading included data
eval/ --- utilities for evaluation
experiments/ --- location of input data and training and evaluation output
models/ --- PyTorch modules for Neural Volumes
render.py --- main evaluation script
train.py --- main training script

Requirements

  • Python (3.6+)
    • PyTorch (1.2+)
    • NumPy
    • Pillow
    • Matplotlib
  • ffmpeg (in PATH, needed to render videos)

How to Use

There are two main scripts in the root directory: train.py and render.py. Both scripts take an experiment configuration file that defines the dataset to use and the options for the model (e.g., the type of decoder).
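
train.py and render.py load the experiment configuration file as a Python module. The snippet below is a minimal sketch of one common way to do this with importlib; it is illustrative only, and the repository's actual loading code may differ.

    import importlib.util

    # Sketch: load an experiment config file as a Python module.
    # This mirrors a common pattern; the repository's exact loading
    # code may differ.
    def load_config(path):
        spec = importlib.util.spec_from_file_location("config", path)
        config = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(config)
        return config

    config = load_config("experiments/dryice1/experiment1/config.py")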

A sample set of input data is provided with the v0.1 release; download it and extract it into the root directory of the repository. experiments/dryice1/data contains the input images and camera calibration data, and experiments/dryice1/experiment1 contains an example experiment configuration file (experiments/dryice1/experiment1/config.py).

To train the model:

python train.py experiments/dryice1/experiment1/config.py

To render a video of a trained model:

python render.py experiments/dryice1/experiment1/config.py Render
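
render.py relies on ffmpeg (which must be on the PATH) to turn rendered frames into a video. If you need to assemble frames into a video yourself, the following is a minimal sketch; the frame filename pattern and frame rate are illustrative assumptions, not the exact outputs of render.py.

    import subprocess

    # Sketch: stitch a sequence of rendered frames into an H.264 video.
    # "frame%06d.jpg" and the 30 fps rate are illustrative assumptions.
    subprocess.run([
        "ffmpeg", "-framerate", "30",
        "-i", "frame%06d.jpg",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "render.mp4",
    ], check=True)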

License

See the LICENSE file for details.

Comments
  • Training with our own data

    Hi,
    I have a few questions on how the data should be formatted and the data format of the provided dryice1.

    • Does the model expect world-space coordinates in meters? I.e., if my extrinsics are already in meters, do I still need world_scale=1/256. in the config.py file?
    • Are the extrinsics world2cam, and is the rotation convention like OpenCV's (y-down, z-forward, x-right), assuming identity for the pose.txt file?
    • How long do I need to train for about 200 frames? Also, in the config.py file it seems you are skipping some frames; is this OK to do for my own sequence as well?
    • In the KRT file, I see that there are 5 parameters above the RT matrix. Are these distortion coefficients in OpenCV format, and is it correct that they are not used?
    • I did not visualize your cameras, so I am not sure how they are distributed. Will it be a problem if I use 50 cameras equally distributed over a half-hemisphere, with the subject already at the world origin and 3.5 meters from every camera? My question is: do I need to filter the training cameras so that the back side of the subject, which is not seen by the 3 input cameras, is excluded?
    • How do I choose the input cameras? I have a visualization of the cameras. Which camera config should I use? Is this more a question of which testing camera poses I intend to have, i.e., the narrower the testing cameras' range of view, the closer the input training cameras can be? Config_0 is more orthogonal, and Config_1 sees less of the backside.
    opened by zawlin 32
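
    For the extrinsics convention asked about above, the following is an illustrative sketch (not code from this repository) of OpenCV-style world2cam extrinsics: given x_cam = R @ x_world + t, the camera center in world space is -R^T @ t, and a world_scale factor can be viewed as rescaling world units toward the normalized volume.

        import numpy as np

        # Illustrative sketch, not repository code: OpenCV-style world2cam
        # extrinsics. x_cam = R @ x_world + t, so the camera center in
        # world coordinates is -R^T @ t.
        R = np.eye(3)                    # 3x3 rotation, world -> camera
        t = np.array([0.0, 0.0, 3.5])    # translation, world -> camera
        campos_world = -R.T @ t          # camera center in world space

        # world_scale (e.g. 1/256. in the dryice1 config) rescales world
        # units toward the normalized volume used by the model.
        world_scale = 1.0 / 256.0
        print(world_scale * campos_world)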
  • Some questions about coordinate transformation

    Hello, thanks for releasing your code. I am impressed by your work. Now I hope to run your code with my own dataset. I have a few questions.

    Firstly, I see that pose.txt is used in the code to place the object at the center. If I use my own data, will this file still work?

    Secondly, I see the code constrains raypos to be between -1 and 1. Is it the matrix in this pose file that narrows the range to [-1, 1]? My own dataset's range is different.

    Thirdly, does the code limit the range of the template? Does it have to be between 0 and 255?

    Thanks a lot in advance!

    opened by maobenz 3
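
    On the second question above: the following hypothetical sketch (names are illustrative, not the repository's exact code) shows how a rigid transform like the one in pose.txt, combined with a world_scale, can map world-space points into a normalized [-1, 1] volume.

        import numpy as np

        # Hypothetical sketch: map a world-space point into normalized
        # model space with a rigid transform (as in a pose.txt-style file)
        # followed by a global scale. Names are illustrative.
        def world_to_model(p_world, R_pose, t_pose, world_scale):
            return world_scale * (R_pose @ p_world + t_pose)

        R_pose = np.eye(3)                  # rotation centering the object
        t_pose = np.zeros(3)                # translation centering the object
        p = np.array([100.0, -50.0, 20.0])  # example world-space point
        print(world_to_model(p, R_pose, t_pose, 1.0 / 256.0))  # within [-1, 1]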
  • Location of the volume

    Hi there,

    I wonder whether the origin of the volume is (0,0,0)?

    I'm testing the method on a public dataset (http://people.csail.mit.edu/drdaniel/mesh_animation), and I know exactly where (0,0,0) is in the images. But the volume seems to float around the scene. This is the first preview from the training process: prog_000001

    Each camera points at the scene from the opposite side, so I expect the volume's location in the images to differ accordingly. But for some reason, it appears on the same side in all the images. Can you help?

    Thank you.

    opened by lochuynh1989 3
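
    One way to debug a floating volume like the one described above is to project a known world point (e.g. the origin) into each image with the OpenCV-style pinhole model and check that it lands where expected. This is an illustrative sketch, not repository code; K, R, and t are placeholder values.

        import numpy as np

        # Illustrative sketch, not repository code: project a world point
        # through an OpenCV-style camera K[R|t] into pixel coordinates.
        def project(K, R, t, p_world):
            p_cam = R @ p_world + t      # world -> camera
            uvw = K @ p_cam              # camera -> image plane
            return uvw[:2] / uvw[2]      # perspective divide -> pixels

        K = np.array([[1000.0, 0.0, 512.0],
                      [0.0, 1000.0, 512.0],
                      [0.0, 0.0, 1.0]])  # placeholder intrinsics
        R, t = np.eye(3), np.array([0.0, 0.0, 3.5])
        print(project(K, R, t, np.zeros(3)))  # world origin -> pixels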
  • Any plan to release all data that presented in the paper?

    Hi @stephenlombardi ,

    Thanks for sharing this great work. I was wondering whether you have any plan to release all of the data used in the paper (apart from the dryice data)?

    Best, Zirui

    opened by ziruiw-dev 2
  • Block-wise initialization scheme

    Hi, is there any paper describing the block-wise weight initialization scheme used here?

    https://github.com/facebookresearch/neuralvolumes/blob/8c5fad49b2b05b4b2e79917ee87299e7c1676d59/models/utils.py#L73

    opened by denkorzh 2
  • Is there a way to render a 3D file from this?

    Hello, I was wondering if there is a way to export an .obj/.fbx file along with corresponding materials from this. If not, do you have any suggestions on how to go about it if I were to try to extend the code to incorporate that functionality?

    opened by arlorostirolla 1
  • How Can I train and render a Person Image

    Hi, my name is Luan. I am trying to render a person image but am not able to run the code. Could you create a folder for me with the settings set up to use a person image? Thank you.

    opened by LuanDalOrto 1
  • code for hybrid rendering (section 6.2) doesn't exist?

    Hello,

    First of all, thank you for releasing the code for your seminal work. I really think neural volumes is one of the works that popularized differentiable rendering and inspired future works such as neural radiance fields.

    My question is whether this codebase includes the code for the hybrid rendering method outlined in section 6.2 of the paper. I'm trying to fit Neural Volumes to multi-view video of a full-body human being, similar to the 5th subfigure in Fig. 1 of the main paper, but after reading it more carefully it seems as though I would need to use hybrid rendering to be able to render the fine details of the human being.

    Could you

    1. confirm the existence of hybrid rendering in this codebase, and
    2. clarify whether hybrid rendering was used to render the full-bodied human being in Fig. 1 of the main paper?

    Thank you in advance.

    opened by andrewsonga 1
  • Misaligned views in rendering

    Hi,

    I am working with the MIT dataset to test the network. When I specify a camera to render, it looks fine throughout the timeline. However, when rendering the rotating video, the cameras are misaligned, as shown in the attached screenshot. All cameras appear clustered at the center, and the views are spread around within the range the cameras cover. Could this be an error in the KRT file or the configuration?

    Any suggestion is welcome. issue_MIT_5_cams

    opened by CorneliusHsiao 1