Code for KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs

Overview

KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs

Check out the paper on arXiv: https://arxiv.org/abs/2103.13744

KiloNeRF interactive demo

This repo contains the code for KiloNeRF, together with instructions on how to download pretrained models and datasets. Additionally, we provide a viewer for interactive visualization of KiloNeRF scenes. We have further improved the implementation, and KiloNeRF now runs roughly 5 times faster than the numbers reported in the first arXiv version of the paper. As a consequence, the Lego scene can now be rendered at around 50 FPS.
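The speedup comes from replacing NeRF's single large MLP with a uniform 3D grid of thousands of much smaller MLPs, where each tiny network only handles the points that fall into its grid cell. The snippet below is a minimal PyTorch sketch of that idea, not the code in this repository: the class names, the grid resolution and the naive per-cell dispatch loop are made up for illustration, positional encoding and view directions are omitted, and the actual implementation evaluates all tiny networks in parallel with fused CUDA kernels.

# Minimal sketch of the KiloNeRF idea (illustrative only, not the repository's code):
# the scene bounding box is split into a regular 3D grid and every cell gets its own
# tiny MLP instead of one large NeRF MLP.
import torch
import torch.nn as nn

class TinyMLP(nn.Module):
    def __init__(self, in_dim=3, hidden=32, out_dim=4):  # output: RGB + density
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class KiloNeRFGridSketch(nn.Module):
    def __init__(self, resolution=4, bbox_min=-1.0, bbox_max=1.0):
        super().__init__()
        self.resolution = resolution
        self.bbox_min, self.bbox_max = bbox_min, bbox_max
        self.mlps = nn.ModuleList(TinyMLP() for _ in range(resolution ** 3))

    def forward(self, points):  # points: (N, 3) world coordinates inside the bbox
        # Map every point to the linear index of the grid cell containing it.
        norm = (points - self.bbox_min) / (self.bbox_max - self.bbox_min)
        cell = (norm * self.resolution).long().clamp(0, self.resolution - 1)
        idx = (cell[:, 0] * self.resolution + cell[:, 1]) * self.resolution + cell[:, 2]
        out = torch.empty(points.shape[0], 4)
        # Naive per-cell dispatch; the real code batches this inside fused CUDA kernels.
        for i in idx.unique():
            mask = idx == i
            out[mask] = self.mlps[int(i)](points[mask])
        return out

model = KiloNeRFGridSketch()
rgb_sigma = model(torch.rand(1024, 3) * 2 - 1)  # random points inside the bounding box
print(rgb_sigma.shape)  # torch.Size([1024, 4])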

Prerequisites

  • OS: Ubuntu 20.04.2 LTS
  • GPU: NVIDIA GTX 1080 Ti or better, with driver version >= 460.73.01
  • Python package manager conda

Setup

Open a terminal in the root directory of this repo and set the KILONERF_HOME environment variable:
export KILONERF_HOME=$PWD

Install OpenGL and GLUT development files
sudo apt install libgl-dev freeglut3-dev

Install Python packages
conda env create -f $KILONERF_HOME/environment.yml

Activate kilonerf environment
source activate kilonerf

CUDA extension installation

You can either install our pre-compiled CUDA extension or compile the extension yourself. Compiling it yourself is more tedious, but it is the only option that lets you make changes to the CUDA code.

Option A: Install pre-compiled CUDA extension

Install pre-compiled CUDA extension
pip install $KILONERF_HOME/cuda/dist/kilonerf_cuda-0.0.0-cp38-cp38-linux_x86_64.whl
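To verify that the wheel was picked up by the active environment, you can try to import the extension from Python. The module name kilonerf_cuda is assumed from the wheel filename; importing torch first makes sure its shared CUDA libraries are already loaded.

# Optional sanity check (assumes the wheel installs a module named kilonerf_cuda).
import torch           # load PyTorch's shared CUDA libraries first
import kilonerf_cuda   # fails with ImportError if the wheel does not match your setup

print("kilonerf_cuda loaded from:", kilonerf_cuda.__file__)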

Option B: Build CUDA extension yourself

Install the CUDA Toolkit and restart your shell:

wget https://developer.download.nvidia.com/compute/cuda/11.1.1/local_installers/cuda_11.1.1_455.32.00_linux.run
sudo sh cuda_11.1.1_455.32.00_linux.run
echo -e "\nexport PATH=\"/usr/local/cuda/bin:\$PATH\"" >> ~/.bashrc
echo "export LD_LIBRARY_PATH=\"/usr/local/cuda/lib64:\$LD_LIBRARY_PATH\"" >> ~/.bashrc

Download MAGMA from http://icl.utk.edu/projectsfiles/magma/downloads/magma-2.5.4.tar.gz, then build and install it to /usr/local/magma:

sudo apt install gfortran libopenblas-dev
wget http://icl.utk.edu/projectsfiles/magma/downloads/magma-2.5.4.tar.gz
tar -zxvf magma-2.5.4.tar.gz
cd magma-2.5.4
cp make.inc-examples/make.inc.openblas make.inc
export GPU_TARGET="Maxwell Pascal Volta Turing Ampere"
export CUDADIR=/usr/local/cuda
export OPENBLASDIR="/usr"
make
sudo -E make install prefix=/usr/local/magma

For further information on installing magma see: http://icl.cs.utk.edu/projectsfiles/magma/doxygen/installing.html
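Before moving on, you may want to check that the MAGMA shared library ended up where the extension build expects it. The path below assumes the default make install layout (prefix/lib/libmagma.so); adjust it if your installation differs.

# Optional check (path is an assumption based on 'make install prefix=/usr/local/magma').
import ctypes
import os

libmagma = "/usr/local/magma/lib/libmagma.so"
if not os.path.exists(libmagma):
    raise SystemExit(f"{libmagma} not found - did 'make install' succeed?")
ctypes.CDLL(libmagma)  # raises OSError if the library or its dependencies cannot be loaded
print("MAGMA library loads fine")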

Finally, compile KiloNeRF's C++/CUDA code:

cd $KILONERF_HOME/cuda
python setup.py develop
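After the build finishes, a quick check that the freshly compiled extension imports and that PyTorch was built against the same CUDA version as the toolkit you just installed can save debugging time later; a mismatch is a common cause of build or runtime errors. This is a generic check, not a script shipped with the repository.

# Generic post-build check (not part of the repository).
import torch
import kilonerf_cuda

print("kilonerf_cuda:", kilonerf_cuda.__file__)
print("PyTorch built with CUDA:", torch.version.cuda)
print("GPU:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none detected")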

Download pretrained models

We provide pretrained KiloNeRF models for the following scenes: Synthetic_NeRF_Chair, Synthetic_NeRF_Lego, Synthetic_NeRF_Ship, Synthetic_NSVF_Palace, Synthetic_NSVF_Robot

cd $KILONERF_HOME
mkdir logs
cd logs
wget https://www.dropbox.com/s/eqvf3x23qbubr9p/kilonerf-pretrained.tar.gz?dl=1 --output-document=paper.tar.gz
tar -xf paper.tar.gz

Download NSVF datasets

Credit to NSVF authors for providing their datasets: https://github.com/facebookresearch/NSVF

cd $KILONERF_HOME/data/nsvf
wget https://dl.fbaipublicfiles.com/nsvf/dataset/Synthetic_NSVF.zip && unzip -n Synthetic_NSVF.zip
wget https://dl.fbaipublicfiles.com/nsvf/dataset/Synthetic_NeRF.zip && unzip -n Synthetic_NeRF.zip
wget https://dl.fbaipublicfiles.com/nsvf/dataset/BlendedMVS.zip && unzip -n BlendedMVS.zip
wget https://dl.fbaipublicfiles.com/nsvf/dataset/TanksAndTemple.zip && unzip -n TanksAndTemple.zip

Since we slightly adjusted the bounding boxes for some scenes, it is important to use the -n (never overwrite) flag of unzip as shown above, so that our adjusted bounding boxes are not overwritten by the ones from the archives.
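If you want to confirm which bounding boxes are actually in place after extraction, you can print the per-scene bbox.txt files. The directory layout assumed below (one folder per scene containing a bbox.txt, following the NSVF dataset format) is an assumption; adapt the paths to what you see on disk.

# Hedged helper: print the bounding box file of every downloaded scene.
# Assumes the NSVF layout $KILONERF_HOME/data/nsvf/<collection>/<scene>/bbox.txt.
import os
from pathlib import Path

data_root = Path(os.environ.get("KILONERF_HOME", ".")) / "data" / "nsvf"
for bbox_file in sorted(data_root.glob("*/*/bbox.txt")):
    print(f"{bbox_file.parent.name}: {bbox_file.read_text().strip()}")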

Usage

To benchmark a trained model, run:
bash benchmark.sh

You can launch the interactive viewer by running:
bash render_to_screen.sh

To train a model yourself, run:
bash train.sh

The default dataset is Synthetic_NeRF_Lego; you can switch to another dataset by setting the dataset variable in the respective script.

Owner
Christian Reiser