PlenOctrees: NeRF-SH Training & Conversion

Overview

PlenOctrees Official Repo: NeRF-SH training and conversion

This repository contains code to train NeRF-SH and to extract the PlenOctree, constituting part of the code release for:

PlenOctrees for Real Time Rendering of Neural Radiance Fields
Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, Angjoo Kanazawa

https://alexyu.net/plenoctrees

Please see the following repository for our C++ PlenOctrees volume renderer: https://github.com/sxyu/volrend

Setup

Please use conda for a reproducible environment.

conda env create -f environment.yml
conda activate plenoctree
pip install --upgrade pip

Alternatively, you can install the dependencies manually:

conda install pytorch torchvision cudatoolkit=11.0 -c pytorch
conda install tqdm
pip install -r requirements.txt
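
To confirm the manual install picked up your GPU, a quick sanity check (our suggestion, not part of the official setup) is:

python -c "import torch; print(torch.cuda.is_available())"

This should print True on a machine with a working CUDA setup.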

[Optional] Install GPU and TPU support for JAX. This is useful for NeRF-SH training. Remember to change cuda110 to match your CUDA version, e.g. cuda102 for CUDA 10.2.

pip install --upgrade jax jaxlib==0.1.65+cuda110 -f https://storage.googleapis.com/jax-releases/jax_releases.html
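
You can then check that JAX actually sees the accelerator (again just a sanity check, not part of the official setup):

python -c "import jax; print(jax.devices())"

If this lists only CPU devices, the CUDA-enabled jaxlib wheel did not install correctly.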

NeRF-SH Training

We release our trained NeRF-SH models as well as the converted plenoctrees on Google Drive. You can also use the following commands to reproduce the NeRF-SH models.

Training and evaluation on the NeRF-Synthetic dataset (Google Drive):

export DATA_ROOT=./data/NeRF/nerf_synthetic/
export CKPT_ROOT=./data/PlenOctree/checkpoints/syn_sh16/
export SCENE=chair
export CONFIG_FILE=nerf_sh/config/blender

python -m nerf_sh.train \
    --train_dir $CKPT_ROOT/$SCENE/ \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/

python -m nerf_sh.eval \
    --chunk 4096 \
    --train_dir $CKPT_ROOT/$SCENE/ \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/
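
The --chunk flag bounds how many rays are evaluated per batch; if evaluation runs out of GPU memory, a smaller value (2048 below is just an illustrative choice) trades speed for memory:

python -m nerf_sh.eval \
    --chunk 2048 \
    --train_dir $CKPT_ROOT/$SCENE/ \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/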

Note that for SCENE=mic we adopt a warmup learning rate schedule (--lr_delay_steps 50000 --lr_delay_mult 0.01) to avoid unstable initialization; see the example below.
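
For example, the mic run would look like this (a sketch combining the flags above with the standard training command):

export SCENE=mic
python -m nerf_sh.train \
    --train_dir $CKPT_ROOT/$SCENE/ \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/ \
    --lr_delay_steps 50000 \
    --lr_delay_mult 0.01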

Training and evaluation on the TanksAndTemple dataset (Download Link) from the NSVF paper:

export DATA_ROOT=./data/TanksAndTemple/
export CKPT_ROOT=./data/PlenOctree/checkpoints/tt_sh25/
export SCENE=Barn
export CONFIG_FILE=nerf_sh/config/tt

python -m nerf_sh.train \
    --train_dir $CKPT_ROOT/$SCENE/ \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/

python -m nerf_sh.eval \
    --chunk 4096 \
    --train_dir $CKPT_ROOT/$SCENE/ \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/

PlenOctrees Conversion and Optimization

Before converting the NeRF-SH models into plenoctrees, you should already have the NeRF-SH models trained or downloaded and placed at ./data/PlenOctree/checkpoints/{syn_sh16, tt_sh25}/. Also make sure the training data is placed at ./data/{NeRF/nerf_synthetic, TanksAndTemple}.
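
For reference, with the chair scene the expected layout would look roughly like this (the exact file names inside the checkpoint directory depend on the training run):

./data/NeRF/nerf_synthetic/chair/              # training data
./data/PlenOctree/checkpoints/syn_sh16/chair/  # trained NeRF-SH checkpoint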

To reproduce our results in the paper, you can simply run:

# NeRF-Synthetic dataset
python -m octree.task_manager octree/config/syn_sh16.json --gpus="0 1 2 3"

# TanksAndTemple dataset
python -m octree.task_manager octree/config/tt_sh25.json --gpus="0 1 2 3"

The above command parallelizes all scenes in the dataset across the GPUs you specify. The JSON files contain hyperparameters tuned for better performance (PSNR, SSIM, LPIPS), so in this setting a 24GB GPU is needed for each scene, and on average the process takes about 15 minutes to finish. The converted plenoctrees will be saved to ./data/PlenOctree/checkpoints/{syn_sh16, tt_sh25}/$SCENE/octrees/.
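
If you have fewer devices, pass a shorter list; for example, the following should queue the scenes on a single GPU (at the cost of running them one after another):

python -m octree.task_manager octree/config/syn_sh16.json --gpus="0"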

Below is a more straightforward script for demonstration purposes:

export DATA_ROOT=./data/NeRF/nerf_synthetic/
export CKPT_ROOT=./data/PlenOctree/checkpoints/syn_sh16
export SCENE=chair
export CONFIG_FILE=nerf_sh/config/blender

python -m octree.extraction \
    --train_dir $CKPT_ROOT/$SCENE/ --is_jaxnerf_ckpt \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/ \
    --output $CKPT_ROOT/$SCENE/octrees/tree.npz

python -m octree.optimization \
    --input $CKPT_ROOT/$SCENE/octrees/tree.npz \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/ \
    --output $CKPT_ROOT/$SCENE/octrees/tree_opt.npz

python -m octree.evaluation \
    --input $CKPT_ROOT/$SCENE/octrees/tree_opt.npz \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/

# [Optional] Only used for in-browser viewing.
python -m octree.compression \
    $CKPT_ROOT/$SCENE/octrees/tree_opt.npz \
    --out_dir $CKPT_ROOT/$SCENE/ \
    --overwrite

MISC

Project Vanilla NeRF to PlenOctree

A vanilla trained NeRF can also be converted to a plenoctree for fast inference. To mimic the view-independent output of a NeRF-SH model, we project the vanilla NeRF onto the SH basis functions by sampling view directions at every point in space. Though this makes converting a vanilla NeRF to a plenoctree possible, the projection inevitably degrades the quality of the model, even with a large number of sampled view directions (which takes hours to finish). We therefore recommend directly training a NeRF-SH model end-to-end.
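
For intuition, the projection is a Monte Carlo estimate of the SH coefficients of the view-dependent color at each point: sample view directions on the sphere, query the NeRF's color for each, and average against the SH basis. Below is a minimal NumPy sketch of this idea (our illustration, not the repo's actual implementation; query_rgb is a hypothetical stand-in for the trained model, and only degree-1 SH is shown, whereas the released models use 16 or 25 coefficients):

import numpy as np

def sh_basis_deg1(dirs):
    # Real spherical harmonics up to degree 1 (4 coefficients).
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),  # Y_0^0
        0.488603 * y,                # Y_1^-1
        0.488603 * z,                # Y_1^0
        0.488603 * x,                # Y_1^1
    ], axis=-1)                      # (N, 4)

def project_to_sh(query_rgb, num_samples=100):
    # query_rgb(dirs) -> (N, 3): RGB predicted by the trained NeRF for one
    # fixed 3D point and a batch of view directions (hypothetical stand-in).
    d = np.random.normal(size=(num_samples, 3))     # isotropic Gaussian ->
    d /= np.linalg.norm(d, axis=-1, keepdims=True)  # uniform on unit sphere
    basis = sh_basis_deg1(d)                        # (N, 4)
    rgb = query_rgb(d)                              # (N, 3)
    # c_lm ~= (4*pi / N) * sum_i c(d_i) * Y_lm(d_i)  (uniform-sphere MC)
    return (4 * np.pi / num_samples) * basis.T @ rgb  # (4, 3) coefficients

Increasing num_samples reduces the variance of this estimate, which is exactly the time/quality trade-off the --projection_samples flag below controls.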

Below is an example of projecting a trained vanilla NeRF model from the JaxNeRF repo (Download Link) to a plenoctree. After extraction, you can optimize, evaluate, and compress the plenoctree as usual:

export DATA_ROOT=./data/NeRF/nerf_synthetic/ 
export CKPT_ROOT=./data/JaxNeRF/jaxnerf_models/blender/ 
export SCENE=drums
export CONFIG_FILE=nerf_sh/config/misc/proj

python -m octree.extraction \
    --train_dir $CKPT_ROOT/$SCENE/ --is_jaxnerf_ckpt \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/ \
    --output $CKPT_ROOT/$SCENE/octrees/tree.npz \
    --projection_samples 100 \
    --radius 1.3

Note that --projection_samples controls how many view directions are sampled. More sampled view directions give better projection quality but take longer to finish. For example, for the drums scene in the NeRF-Synthetic dataset, 100 / 10000 sampled view directions take about 2 minutes / 2 hours to finish the plenoctree extraction, producing raw plenoctrees with PSNR = 22.49 / 23.84 (before optimization). For comparison, extraction from a NeRF-SH model produces a raw plenoctree with PSNR = 25.01.
