Code for KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs

Overview

KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs

Check out the paper on arXiv: https://arxiv.org/abs/2103.13744

KiloNeRF interactive demo

This repo contains the code for KiloNeRF, together with instructions on how to download pretrained models and datasets. Additionally, we provide a viewer for interactive visualization of KiloNeRF scenes. We have further improved the implementation, and KiloNeRF now runs ~5 times faster than the numbers reported in the first arXiv version of the paper. As a consequence, the Lego scene can now be rendered at around 50 FPS.

Prerequisites

  • OS: Ubuntu 20.04.2 LTS
  • GPU: >= NVIDIA GTX 1080 Ti with >= 460.73.01 driver
  • Python package manager conda

Setup

Open a terminal in the root directory of this repo and execute:
export KILONERF_HOME=$PWD
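
This variable only lives in the current shell. If you plan to come back to the repo later, you can optionally persist it in your shell startup file, mirroring the .bashrc edits used for CUDA below (a convenience suggestion on our part, not a step required by the repo; adjust the path if you move the checkout):
echo "export KILONERF_HOME=$PWD" >> ~/.bashrc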

Install OpenGL and GLUT development files
sudo apt install libgl-dev freeglut3-dev

Install Python packages
conda env create -f $KILONERF_HOME/environment.yml

Activate kilonerf environment
source activate kilonerf

CUDA extension installation

You can either install our pre-compiled CUDA extension or compile the extension yourself. Compiling it yourself allows you to make changes to the CUDA code, but is more tedious.

Option A: Install pre-compiled CUDA extension

Install pre-compiled CUDA extension
pip install $KILONERF_HOME/cuda/dist/kilonerf_cuda-0.0.0-cp38-cp38-linux_x86_64.whl

Option B: Build CUDA extension yourself

Install the CUDA Toolkit and restart your shell so that the PATH changes below take effect:

wget https://developer.download.nvidia.com/compute/cuda/11.1.1/local_installers/cuda_11.1.1_455.32.00_linux.run
sudo sh cuda_11.1.1_455.32.00_linux.run
echo -e "\nexport PATH=\"/usr/local/cuda/bin:\$PATH\"" >> ~/.bashrc
echo "export LD_LIBRARY_PATH=\"/usr/local/cuda/lib64:\$LD_LIBRARY_PATH\"" >> ~/.bashrc

Download MAGMA from http://icl.utk.edu/projectsfiles/magma/downloads/magma-2.5.4.tar.gz, then build and install it to /usr/local/magma:

sudo apt install gfortran libopenblas-dev
wget http://icl.utk.edu/projectsfiles/magma/downloads/magma-2.5.4.tar.gz
tar -zxvf magma-2.5.4.tar.gz
cd magma-2.5.4
cp make.inc-examples/make.inc.openblas make.inc
export GPU_TARGET="Maxwell Pascal Volta Turing Ampere"
export CUDADIR=/usr/local/cuda
export OPENBLASDIR="/usr"
make
sudo -E make install prefix=/usr/local/magma

For further information on installing MAGMA, see: http://icl.cs.utk.edu/projectsfiles/magma/doxygen/installing.html

Finally, compile KiloNeRF's C++/CUDA code:

cd $KILONERF_HOME/cuda
python setup.py develop
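
Whichever option you chose, a quick sanity check is to import the extension from Python (we assume the module is importable as kilonerf_cuda, matching the wheel name above):
python -c "import kilonerf_cuda; print('kilonerf_cuda import OK')"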

Download pretrained models

We provide pretrained KiloNeRF models for the following scenes: Synthetic_NeRF_Chair, Synthetic_NeRF_Lego, Synthetic_NeRF_Ship, Synthetic_NSVF_Palace, Synthetic_NSVF_Robot

cd $KILONERF_HOME
mkdir logs
cd logs
wget https://www.dropbox.com/s/eqvf3x23qbubr9p/kilonerf-pretrained.tar.gz?dl=1 --output-document=paper.tar.gz
tar -xf paper.tar.gz
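
If you want to inspect what the archive contains, plain tar can list its entries (standard tar usage, nothing KiloNeRF-specific):
tar -tf paper.tar.gz | head    # shows the first few entries of the pretrained-model archive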

Download NSVF datasets

Credit to NSVF authors for providing their datasets: https://github.com/facebookresearch/NSVF

cd $KILONERF_HOME/data/nsvf
wget https://dl.fbaipublicfiles.com/nsvf/dataset/Synthetic_NSVF.zip && unzip -n Synthetic_NSVF.zip
wget https://dl.fbaipublicfiles.com/nsvf/dataset/Synthetic_NeRF.zip && unzip -n Synthetic_NeRF.zip
wget https://dl.fbaipublicfiles.com/nsvf/dataset/BlendedMVS.zip && unzip -n BlendedMVS.zip
wget https://dl.fbaipublicfiles.com/nsvf/dataset/TanksAndTemple.zip && unzip -n TanksAndTemple.zip

Since we slightly adjusted the bounding boxes for some scenes, it is important that you use the -n (never overwrite) flag for unzip as shown above, so that our adjusted bounding boxes are not overwritten.
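
If you want to double-check that the adjusted bounding boxes survived the unzip step, and assuming they are tracked by git under data/nsvf (an assumption on our part), an empty diff means nothing was overwritten:
cd $KILONERF_HOME
git diff --stat data/nsvf    # no output = tracked bounding-box files are unchanged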

Usage

To benchmark a trained model, run:
bash benchmark.sh

You can launch the interactive viewer by running:
bash render_to_screen.sh

To train a model yourself, run:
bash train.sh

The default dataset is Synthetic_NeRF_Lego; you can switch to another dataset by setting the dataset variable in the respective script, as sketched below.
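
For example, to train on the robot scene instead of the Lego scene, the dataset variable in train.sh would be changed along these lines (a sketch; check the script for the exact variable name and accepted values, which should match the scene names listed above):
dataset=Synthetic_NSVF_Robot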

Owner
Christian Reiser