BARF: Bundle-Adjusting Neural Radiance Fields 🤮 (ICCV 2021 oral)

Overview

BARF 🤮: Bundle-Adjusting Neural Radiance Fields

Chen-Hsuan Lin, Wei-Chiu Ma, Antonio Torralba, and Simon Lucey
IEEE International Conference on Computer Vision (ICCV), 2021 (oral presentation)

Project page: https://chenhsuanlin.bitbucket.io/bundle-adjusting-NeRF
arXiv preprint: https://arxiv.org/abs/2104.06405

We provide PyTorch code for the NeRF experiments on both synthetic (Blender) and real-world (LLFF) datasets.


Prerequisites

This code is developed with Python 3 (python3) and requires PyTorch 1.9+.
It is recommended to use Anaconda to set up the environment. Install the dependencies and activate the environment barf-env with

conda env create --file requirements.yaml python=3
conda activate barf-env

Initialize the external submodule dependencies with

git submodule update --init --recursive

Dataset

  • Synthetic data (Blender) and real-world data (LLFF)

    Both the Blender synthetic data and the LLFF real-world data can be found in the NeRF Google Drive. For convenience, you can download them with the following script (run from the top level of this repo):
    # Blender
    gdown --id 18JxhpWD-4ZmuFKLzKlAw-w5PpzZxXOcG # download nerf_synthetic.zip
    unzip nerf_synthetic.zip
    rm -f nerf_synthetic.zip
    mv nerf_synthetic data/blender
    # LLFF
    gdown --id 16VnMcF1KJYxN9QId6TClMsZRahHNMW5g # download nerf_llff_data.zip
    unzip nerf_llff_data.zip
    rm -f nerf_llff_data.zip
    mv nerf_llff_data data/llff
    The data directory should contain the subdirectories blender and llff. If you already have the datasets downloaded, you can alternatively soft-link them within the data directory (see the example after this list).
  • iPhone (TODO)
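
For example, soft links such as the following work (the source paths are placeholders for wherever your copies live):

ln -s /path/to/nerf_synthetic data/blender
ln -s /path/to/nerf_llff_data data/llff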


Running the code

  • BARF models

    To train and evaluate BARF:

    # <GROUP> and <NAME> can be set as you like, while <SCENE> is dataset-specific
    
    # Blender (<SCENE>={chair,drums,ficus,hotdog,lego,materials,mic,ship})
    python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --barf_c2f=[0.1,0.5]
    python3 evaluate.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --data.val_sub= --resume
    
    # LLFF (<SCENE>={fern,flower,fortress,horns,leaves,orchids,room,trex})
    python3 train.py --group=<GROUP> --model=barf --yaml=barf_llff --name=<NAME> --data.scene=<SCENE> --barf_c2f=[0.1,0.5]
    python3 evaluate.py --group=<GROUP> --model=barf --yaml=barf_llff --name=<NAME> --data.scene=<SCENE> --resume

    All results will be stored in the directory output/<GROUP>/<NAME>. You may want to organize your experiments by grouping different runs under the same <GROUP>.

    To train the baseline models (see the example commands below):

    • Full positional encoding: omit the --barf_c2f argument.
    • No positional encoding: add --arch.posenc!.
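
    For example, the Blender baselines differ from the BARF command only in those flags:

    # full positional encoding (no coarse-to-fine schedule, i.e. no --barf_c2f)
    python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE>
    # no positional encoding
    python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --arch.posenc!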

    If you want to evaluate a checkpoint at a specific iteration number, use --resume=<ITER_NUMBER> instead of just --resume.
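
    For example, to evaluate the Blender checkpoint saved at iteration 200000 (the iteration number here is only an illustration):

    python3 evaluate.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --data.val_sub= --resume=200000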

  • Training the original NeRF

    If you want to train the reference NeRF models (assuming known camera poses):

    # Blender
    python3 train.py --group=<GROUP> --model=nerf --yaml=nerf_blender --name=<NAME> --data.scene=<SCENE>
    python3 evaluate.py --group=<GROUP> --model=nerf --yaml=nerf_blender --name=<NAME> --data.scene=<SCENE> --data.val_sub= --resume
    
    # LLFF
    python3 train.py --group=<GROUP> --model=nerf --yaml=nerf_llff --name=<NAME> --data.scene=<SCENE>
    python3 evaluate.py --group=<GROUP> --model=nerf --yaml=nerf_llff --name=<NAME> --data.scene=<SCENE> --resume

    If you wish to replicate the results from the original NeRF paper, use --yaml=nerf_blender_repr or --yaml=nerf_llff_repr instead for Blender or LLFF respectively. There are some differences, e.g. NDC will be used for the LLFF forward-facing dataset. (The reference NeRF models considered in the paper do not use NDC to parametrize the 3D points.)

  • Visualizing the results

    We include code to visualize the training process with both TensorBoard and Visdom. The TensorBoard events include the following:

    • SCALARS: the rendering losses and PSNR over the course of optimization. For BARF, the rotational/translational errors with respect to the given poses are also computed.
    • IMAGES: visualization of the RGB images and the RGB/depth rendering.
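
    To inspect these events, point TensorBoard at the output directory (assuming the default output location described above):

    tensorboard --logdir=output/<GROUP>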

    We also provide visualization of 3D camera poses in Visdom. Run visdom -port 9000 to start the Visdom server.
    The Visdom host server defaults to localhost; this can be overridden with --visdom.server (see options/base.yaml for details). If you want to disable Visdom visualization, add --visdom!.
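
    For example, to log to a Visdom server running on another machine (<HOST> is a placeholder for your server's address):

    python3 train.py --group=<GROUP> --model=barf --yaml=barf_llff --name=<NAME> --data.scene=<SCENE> --visdom.server=<HOST>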


Codebase structure

The main engine and network architecture in model/barf.py inherit those from model/nerf.py. This codebase is structured so that it is easy to see which parts BARF extends from NeRF. It is also simple to build your own applications upon either BARF or NeRF -- just inherit them again! The same applies to the dataset files (e.g. data/blender.py).

To understand the config and command lines, take the below command as an example:

python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --barf_c2f=[0.1,0.5]

This will run model/barf.py as the main engine with options/barf_blender.yaml as the main config file. Note that barf hierarchically inherits nerf (which inherits base), making the codebase customizable.
The complete configuration will be printed upon execution. To override specific options, add --<key>=value or --<key1>.<key2>=value (and so on) to the command line. The configuration will be loaded as the variable opt throughout the codebase.
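
For example, starting from the BARF command above, nested options can be overridden with dotted keys (the two extra keys below are illustrative; check the configuration printed at startup for the keys that actually exist):

python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --barf_c2f=[0.1,0.5] --max_iter=100000 --optim.lr=1.e-4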

Some tips on using and understanding the codebase:

  • The computation graph for forward/backprop is stored in var throughout the codebase.
  • The losses are stored in loss. To add a new loss function, implement it in compute_loss() and add its weight to opt.loss_weight.<name>. It will automatically be added to the overall loss and logged to TensorBoard (see the sketch after this list).
  • If you are using a multi-GPU machine, you can add --gpu=<gpu_number> to specify which GPU to use. Multi-GPU training/evaluation is currently not supported.
  • To resume from a previous checkpoint, add --resume=<ITER_NUMBER>, or just --resume to resume from the latest checkpoint.
  • (to be continued....)
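
As a concrete illustration of the loss hook, below is a minimal sketch of adding a hypothetical depth-smoothness term inside compute_loss(). Only compute_loss() and opt.loss_weight follow the convention described above; the term itself and the tensor name var.depth are made up for illustration.

# sketch: inside a model engine that subclasses the NeRF/BARF graph (not the actual codebase API)
def compute_loss(self, opt, var, mode=None):
    loss = super().compute_loss(opt, var, mode=mode)  # keep the existing losses (e.g. the rendering loss)
    if opt.loss_weight.depth_smooth is not None:
        # hypothetical regularizer: penalize large depth differences between consecutive samples
        loss.depth_smooth = (var.depth[..., 1:] - var.depth[..., :-1]).abs().mean()
    return loss

# enable it from the command line with a matching weight key (value is illustrative):
#   python3 train.py ... --loss_weight.depth_smooth=0.1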

If you find our code useful for your research, please cite

@inproceedings{lin2021barf,
  title={BARF: Bundle-Adjusting Neural Radiance Fields},
  author={Lin, Chen-Hsuan and Ma, Wei-Chiu and Torralba, Antonio and Lucey, Simon},
  booktitle={IEEE International Conference on Computer Vision ({ICCV})},
  year={2021}
}

Please contact me ([email protected]) if you have any questions!
