Source code for "OmniPhotos: Casual 360° VR Photography"

Overview

OmniPhotos: Casual 360° VR Photography

Project Page | Video | Paper | Demo | Data

This repository contains the source code for creating and viewing OmniPhotos – a new approach for casual 360° VR photography using a consumer 360° video camera.

OmniPhotos: Casual 360° VR Photography
Tobias Bertel, Mingze Yuan, Reuben Lindroos, Christian Richardt
ACM Transactions on Graphics (SIGGRAPH Asia 2020)

Demo

The quickest way to try out OmniPhotos is via our precompiled demo (610 MB). Download and unzip to get started. Documentation for the precompiled binaries, which can also be downloaded separately (25 MB), can be found in the downloaded demo directory.

For the demo to run smoothly, we recommend a recently updated Windows 10 machine with a discrete GPU.

Additional OmniPhotos

We provide 31 OmniPhotos for download:

  • 9 preprocessed datasets that are ready for viewing (3.2 GB zipped, 12.8 GB uncompressed)
  • 31 unprocessed datasets with their input videos, camera poses etc.; this includes the 9 preprocessed datasets (17.4 GB zipped, 17.9 GB uncompressed)

Note: A few of the .insv files are missing for the 5.7k datasets. If you need to process these datasets from scratch (using the .insv files), the missing files can be found here.

How to view OmniPhotos

OmniPhotos are viewed using the "Viewer" executable, either in windowed mode (default) or in a compatible VR headset (see below). To run the viewer on the preprocessed datasets above, use the command:

Viewer.exe path-to-datasets/Preprocessed/

with paths adjusted for your machine. The viewer will automatically load the first dataset in the directory (in alphabetical order) and give you the option to load any of the datasets in the directory.

If you would like to run the viewer with VR enabled, please ensure that your HMD's firmware is up to date and that SteamVR is installed on your machine, then run the command:

Viewer.exe --vr path-to-datasets/Preprocessed/

The OmniPhotos viewer can also load a specific single dataset directly:

Viewer.exe [--vr] path-to-datasets/Preprocessed/Temple3/Config/config-viewer.yaml

How to preprocess datasets

If you would like to preprocess additional datasets, for example "Ship" in the "Unpreprocessed" directory, run the command:

Preprocessing.exe path-to-datasets/Unpreprocessed/Ship/Config/config-viewer.yaml

This will preprocess the dataset according to the options specified in the config file. Once the preprocessing is finished, the dataset can be opened in the Viewer.

For processing new datasets from scratch, please follow the detailed documentation at Python/preprocessing/readme.md.
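
As a point of reference, the precompiled demo ships a preprocessing tool that is driven by a config file in the same way; the exact entry point and flags for the Python pipeline are documented in the readme above, so treat this as a sketch only:

preproc.exe -c preproc-config-template.yaml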

Compiling from source

The OmniPhotos Preprocessing and Viewer applications are written in C++11, with some Python used for preparing datasets.

Both main applications and the included libraries use CMake as the build-system generator. We recommend CMake 3.16 or newer, but older 3.x versions might also work.

Our code has been developed and tested with Microsoft Visual Studio 2015 and 2019 (both 64 bit).
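
As a minimal sketch of an out-of-source build (the generator name and configuration are illustrative; pick the one matching your Visual Studio version and point CMake at your dependency installations as needed):

cd OmniPhotos
mkdir build
cd build
cmake .. -G "Visual Studio 16 2019" -A x64
cmake --build . --config Release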

Required dependencies

  1. GLFW 3.3 (version 3.3.1 works)
  2. Eigen 3.3 (version 3.3.2 works)
    • Please note: Ceres (an optional dependency) requires Eigen version "3.3.90" (~Eigen master branch).
  3. OpenCV 4.2
    • OpenCV 4.2 includes DIS flow in the main distribution, so precompiled OpenCV can be used.
    • OpenCV 4.1.1 needs to be compiled from source with the optflow contrib package (for DIS flow).
    • We also support the CUDA Brox flow from the cudaoptflow module, if it is compiled in. In this case, tick USE_CUDA_IN_OPENCV in CMake.
  4. OpenGL 4.1: provided by the operating system
  5. glog (a master build newer than 0.4.0 works)
  6. gflags (version 2.2.2 works)
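
One common way to obtain these dependencies on Windows is a package manager such as vcpkg. This is only a suggestion and not part of the project's build scripts; the port names below are vcpkg's, and you may still need to point CMake at the resulting installations:

vcpkg install glfw3 eigen3 opencv4 glog gflags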

Included dependencies (in /src/3rdParty/)

  1. DearImGui 1.79: included automatically as a git submodule.
  2. GL3W
  3. JsonCpp 1.8.0: amalgamated version
  4. nlohmann/json 3.6.1
  5. OpenVR 1.10.30: enable with WITH_OPENVR in CMake.
  6. TCLAP
  7. tinyfiledialogs 3.3.8

Optional dependencies

  1. Ceres (with SuiteSparse) is required for the scene-adaptive proxy geometry fitting. Enable with USE_CERES in CMake.
  2. googletest (master): automatically added when WITH_TEST is enabled in CMake.
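
The CMake options named above (USE_CUDA_IN_OPENCV, WITH_OPENVR, USE_CERES, WITH_TEST) can also be set on the command line when configuring; the values shown here are just an example:

cmake .. -DWITH_OPENVR=ON -DUSE_CERES=ON -DWITH_TEST=ON -DUSE_CUDA_IN_OPENCV=OFF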

Citation

Please cite our paper if you use this code or any of our datasets:

@article{OmniPhotos,
  author    = {Tobias Bertel and Mingze Yuan and Reuben Lindroos and Christian Richardt},
  title     = {{OmniPhotos}: Casual 360° {VR} Photography},
  journal   = {ACM Transactions on Graphics},
  year      = {2020},
  volume    = {39},
  number    = {6},
  pages     = {266:1--12},
  month     = dec,
  issn      = {0730-0301},
  doi       = {10.1145/3414685.3417770},
  url       = {https://richardt.name/omniphotos/},
}

Acknowledgements

We thank the reviewers for their thorough feedback that has helped to improve our paper. We also thank Peter Hedman, Ana Serrano and Brian Cabral for helpful discussions, and Benjamin Attal for his layered mesh rendering code.

This work was supported by EU Horizon 2020 MSCA grant FIRE (665992), the EPSRC Centre for Doctoral Training in Digital Entertainment (EP/L016540/1), RCUK grant CAMERA (EP/M023281/1), an EPSRC-UKRI Innovation Fellowship (EP/S001050/1), a Rabin Ezra Scholarship and an NVIDIA Corporation GPU Grant.

Comments
  • Documentation pipeline update

    Pipeline to automatically create documentation on Read the Docs using Doxygen.

    • [ ] link to github page on index.html (mainpage.hpp)
    • [ ] add documentation convention to docs\README.md
    • [ ] fork branch from cr333/main and apply changes to that
    • [ ] create new PR from forked branch
    opened by reubenlindroos 1
  • Adds a progress bar when circleselector is running

    Also improves the speed of the circleselector module by ~50%.

    Todo:

    • [x] add tqdm to requirements.txt
    • [x] np.diffs for find_path_length
    • [x] atomic lock for incrementing the progress bar?
    opened by reubenlindroos 0
  • Circle Selector

    • [x] clean up requirements.txt
    • [x] save plot of heatmaps to the cache/dataset directory.
    • [x] move json file to capture directory
    • [x] update the template with option to switch off circlefitting
    • [x] update template to remove some of the options (e.g op_filename_expression)
    • [x] Update README.md with automatic circle selection (section 2.2)
    • [x] Update documentation for installation?
    • [x] Linting (spacing), comment convention, Pep convention
    • [x] sort imports
    • [x] replace op_filename_expression with original_filename_expression

    cv_utils

    • [x] remove extra copy of computeColor
    • [x] change pjoin to os.path.join
    • [x] add more documentation for parameters in cv_utils (change lookatang to look_at_angle)
    • [x] 'nxt' to 'next'
    • [x] comment on line 100 (slice_equirect)

    datatypes

    • [x] more comments on some of the methods in PointDict
    opened by reubenlindroos 0
  • Documentation pipeline update

    Pipeline to automatically create documentation on Read the Docs using Doxygen.

    • [x] link to github page on index.html (mainpage.hpp)
    • [x] add documentation convention to docs\README.md
    • [x] fork branch from cr333/main and apply changes to that
    • [x] create new PR from forked branch
    • [x] remove documentation for header comment block in docs/README.md
    • [x] cleanup index.rst (try removing, see if sphinx can build anyway)
    • [ ] mainpage.hpp cleanup (capitalise, centralise)
    • [x] clarify line 48 in README.md
    opened by reubenlindroos 0
  • Demo updated

    Converts the demo documentation files from Sphinx-based reST to Markdown hosted on GitHub. GitHub's Markdown rendering now displays the documentation files instead of Sphinx + Read the Docs.

    opened by reubenlindroos 0
  • Adds build test to master branch on push and PR

    build

    • [ ] change actions to not send email for every build
    • [x] fix requested changes
    • [x] make into squash merge to not mess with main branch history
    • [x] group build steps (building dependencies, which have been left separate for debugging purposes)
    • [x] check glog build variables in CMake (e.g. BUILD_TEST should not be enabled)
    • [x] check eigen warnings in build log
    • [x] check if precompiled headers might speed up build
    • [x] check if multithread build could be used
    • [x] remove verbose flag from extraction of opencv

    test

    • [x] add test data download
    • [x] reduce size of test dataset
    • [x] check what happens on failure
    • [ ] check if we can "publish" test results (xml?)
    opened by reubenlindroos 0
  • Get problems while preprocessing

    I downloaded the binary files from here: https://github.com/cr333/OmniPhotos/releases/download/v1.1/OmniPhotos-v1.1-win10-x64.zip

    I also put ffmpeg.exe on the system PATH. However, I am getting the errors below:

    $ ./preproc/preproc.exe -c preproc-config-template.yaml
    [23276] Failed to execute script 'main' due to unhandled exception!
    Traceback (most recent call last):
      File "main.py", line 24, in <module>
      File "preproc_app.py", line 39, in __init__
      File "data_preprocessor.py", line 32, in __init__
      File "abs_preprocessor.py", line 70, in __init__
      File "abs_preprocessor.py", line 225, in load_origin_data_info
      File "ffmpeg_probe.py", line 20, in probe
      File "subprocess.py", line 800, in __init__
      File "subprocess.py", line 1207, in _execute_child
    FileNotFoundError: [WinError 2]

    opened by BlairLeng 2