
Mixture of Volumetric Primitives -- Training and Evaluation

This repository contains code to train and render Mixture of Volumetric Primitives (MVP) models.

If you use Mixture of Volumetric Primitives in your research, please cite:
Mixture of Volumetric Primitives for Efficient Neural Rendering
Stephen Lombardi, Tomas Simon, Gabriel Schwartz, Michael Zollhoefer, Yaser Sheikh, Jason Saragih
ACM Transactions on Graphics (SIGGRAPH 2021) 40, 4. Article 59

@article{Lombardi21,
  author = {Lombardi, Stephen and Simon, Tomas and Schwartz, Gabriel and Zollhoefer, Michael and Sheikh, Yaser and Saragih, Jason},
  title = {Mixture of Volumetric Primitives for Efficient Neural Rendering},
  year = {2021},
  issue_date = {August 2021},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {40},
  number = {4},
  issn = {0730-0301},
  url = {https://doi.org/10.1145/3450626.3459863},
  doi = {10.1145/3450626.3459863},
  journal = {ACM Trans. Graph.},
  month = {jul},
  articleno = {59},
  numpages = {13},
  keywords = {neural rendering}
}

Requirements

  • Python (3.8+)
    • PyTorch
    • NumPy
    • SciPy
    • Pillow
    • OpenCV
  • ffmpeg (in $PATH to render videos)
  • CUDA 10 or higher

Building

The repository contains two CUDA PyTorch extensions. To build, cd to each directory and use make:

cd extensions/mvpraymarch
make
cd -
cd extensions/utils
make
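
A quick way to confirm that the compiled extensions are usable is to import them from Python. The snippet below is a minimal sanity check, assuming the extensions were built in place and expose modules named mvpraymarchlib and utilslib; adjust the paths or module names if your checkout differs.

import sys
import torch

# Make the in-place builds importable (run this from the repository root).
sys.path.append("extensions/mvpraymarch")
sys.path.append("extensions/utils")

import mvpraymarchlib  # fails here if the raymarcher extension did not build
import utilslib        # fails here if the utils extension did not build

# Kernels compiled for the wrong GPU architecture typically fail only at run
# time with "no kernel image is available", so also check the visible device.
if torch.cuda.is_available():
    print("GPU compute capability:", torch.cuda.get_device_capability(0))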

How to Use

There are two main scripts in the root directory: train.py and render.py. The scripts take a configuration file for the experiment that defines the dataset used and the options for the model (e.g., the type of decoder that is used).
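
Because each configuration is an ordinary Python file, the scripts load it as a module at run time and read the experiment settings from its attributes. The snippet below is a generic sketch of that mechanism for illustration only; it is not the exact loader used by train.py, and the attributes a real config must define are described in ARCHITECTURE.md and the released example configs.

import importlib.util
import sys

def load_config(path):
    # Import a config.py given its file path and return it as a module.
    spec = importlib.util.spec_from_file_location("config", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

if __name__ == "__main__":
    config = load_config(sys.argv[1])  # e.g. experiments/dryice1/experiment1/config.py
    print([name for name in dir(config) if not name.startswith("_")])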

Download the latest release on GitHub to get the experiments directory.

To train the model:

python train.py experiments/dryice1/experiment1/config.py

To render a video of a trained model:

python render.py experiments/dryice1/experiment1/config.py

See ARCHITECTURE.md for more details.

Training Data

See the latest GitHub release for data.

Using your own Data

Implement your own Dataset class to return images and camera parameters. An example is given in data.multiviewvideo. A dataset class will need to return camera pose parameters, image data, and tracked mesh data.
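
As a starting point, the sketch below shows the general shape of such a dataset, using random tensors in place of real data. The dictionary keys (image, campos, camrot, focal, princpt, verts) are illustrative assumptions; match them to whatever data.multiviewvideo actually returns in your copy of the code.

import torch
from torch.utils.data import Dataset, DataLoader

class MyMultiViewDataset(Dataset):
    """Illustrative multi-view video dataset returning images, camera
    parameters, and tracked mesh vertices; replace the random tensors
    with real loading code."""

    def __init__(self, nframes=10, ncams=4, height=256, width=256, nverts=7306):
        self.nframes, self.ncams = nframes, ncams
        self.height, self.width, self.nverts = height, width, nverts

    def __len__(self):
        return self.nframes * self.ncams

    def __getitem__(self, idx):
        frame, cam = divmod(idx, self.ncams)
        return {
            "image": torch.rand(3, self.height, self.width),    # RGB image for (frame, cam)
            "campos": torch.zeros(3),                            # camera position
            "camrot": torch.eye(3),                              # camera rotation matrix
            "focal": torch.tensor([1000.0, 1000.0]),             # focal length in pixels
            "princpt": torch.tensor([self.width / 2.0, self.height / 2.0]),  # principal point
            "verts": torch.zeros(self.nverts, 3),                # tracked mesh vertices
            "frame": frame,
            "camera": cam,
        }

# Example usage:
# loader = DataLoader(MyMultiViewDataset(), batch_size=2, shuffle=True)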

How to Extend

See ARCHITECTURE.md

License

See the LICENSE file for details.

Comments
  • ModuleNotFoundError: No module named 'utilslib'

    Hi, thanks for sharing this awesome work, have a nice day :-) When I try to render the demo with the provided experiment data, I get this error. By the way, I have already compiled the extension files (screenshot: 2022-07-06 23-31-21).

    opened by Myzhencai 9
  • build success, but cannot run

    Traceback (most recent call last):
      File "render.py", line 118, in <module>
        output, _ = ae(
      File "/home/an/anaconda3/envs/py38-t19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/an/an_project/mvp/models/volumetric.py", line 286, in forward
        rayrgba, rmlosses = self.raymarcher(raypos, raydir, tminmax,
      File "/home/an/anaconda3/envs/py38-t19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/an/an_project/mvp/models/raymarchers/mvpraymarcher.py", line 32, in forward
        rayrgba = mvpraymarch(raypos, raydir, dt, tminmax,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 273, in mvpraymarch
        out = MVPRaymarch.apply(raypos, raydir, stepsize, tminmax,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 119, in forward
        sortedobjid, nodechildren, nodeaabb = build_accel(primtransfin,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 44, in build_accel
        sortedobjid = (torch.arange(N*K, dtype=torch.int32, device=dev) % K).view(N, K)
    RuntimeError: CUDA error: no kernel image is available for execution on the device
    
    

    Environment: Python 3.8.13, PyTorch 1.13.0a0+git4503c45, CUDA 11.3.0, GCC 8.4.0.


    Hi, is anyone else having similar issues? PyTorch itself is working normally.

    opened by AN-ZE 8
  • What's basetransf matrix used for?

    Hi, I have a small question about applying my own data with this code. What is the self.basetransf matrix in multiviewvideo.py used for? I see this 3x4 matrix applied to all camera poses and all the frametransf, but what is its purpose? :)

    https://github.com/facebookresearch/mvp/blob/d758f53662e79d7fec885f4dd1a3ee457f7c4b00/data/multiviewvideo.py#L410-L415

    https://github.com/facebookresearch/mvp/blob/d758f53662e79d7fec885f4dd1a3ee457f7c4b00/data/multiviewvideo.py#L385-L387

    Besides, applying this basetransf seems necessary: when I replace it with an identity matrix, training does not converge. So how do I obtain a basetransf for my own data?

    Your answer will help me a lot! Thank you!

    opened by Qingcsai 5
  • Background image used in Lombardi's MVP cannot be found in the multiface dataset

    The multiface dataset was used in "Mixture of Volumetric Primitives for Efficient Neural Rendering". The MVP config file needs the path to the background image, but I can't find the background image in the multiface dataset. The line in the config file is: bgpath = os.path.join(imagepathbase, 'bg', 'image', 'cam{cam}', 'image0000.png').

    opened by shuishiwojiade 2
  • Cannot build the CUDA PyTorch extension

    May I know which PyTorch and CUDA versions were used to build the two CUDA PyTorch extensions? I was using PyTorch 1.7.1, CUDA 10.1, and GCC 5.5.0, but building them fails in torch/utils/cpp_extension.py, line 445 (unix_wrap_ninja_compile), at post_cflags = extra_postargs['cxx'] with KeyError: 'cxx'. Any suggestions on how to solve the problem?

    opened by ZhaoyangLyu 2
  • Bug Report: mvp/extensions/mvpraymarch/bvh.cu error: too many initializer values

    Hi, I hit a bug after cd'ing to extensions/mvpraymarch and running make. Could someone kindly help me solve the problem?

    I use a remote server with: gcc (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0, g++ (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0, GNU Make 4.1, Ubuntu 16.04.7 LTS, Python 3.9.12, nvcc 10.2.

    The bug comes from a pointer assignment, as shown in the screenshot (not included here).

    Below is the output after running make in the extensions/mvpraymarch directory:

    python setup.py build_ext --inplace CUDA_HOME: /data/hzhangcc/cuda-10.2 CUDNN_HOME: None running build_ext building 'mvpraymarchlib' extension Emitting ninja build file /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) [1/2] /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 FAILED: /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu(214): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu(272): error: too many initializer values

    2 errors detected in the compilation of "/tmp/tmpxft_00003cf1_00000000-6_bvh.cpp1.ii". [2/2] /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 FAILED: /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(80): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(87): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(93): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(99): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(167): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(174): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(180): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(186): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_subset_kernel.h(30): warning: variable "validthread" was declared but never referenced

    8 errors detected in the compilation of "/tmp/tmpxft_00003cf2_00000000-6_mvpraymarch_kernel.cpp1.ii". ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1814, in _run_ninja_build subprocess.run( File "/data/hzhangcc/anaconda3/lib/python3.9/subprocess.py", line 528, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last): File "/data/hzhangcc/mvp/extensions/mvpraymarch/setup.py", line 13, in setup( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/init.py", line 87, in setup return distutils.core.setup(**attrs) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup return run_commands(dist) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 163, in run_commands dist.run_commands() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands self.run_command(cmd) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/dist.py", line 1214, in run_command super().run_command(command) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 79, in run _build_ext.run(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run _build_ext.build_ext.run(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 339, in run self.build_extensions() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 771, in build_extensions build_ext.build_extensions(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions _build_ext.build_ext.build_extensions(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 202, in build_extension _build_ext.build_extension(self, ext) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 528, in build_extension objects = self.compiler.compile(sources, File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 592, in unix_wrap_ninja_compile _write_ninja_file_and_compile_objects( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1493, in _write_ninja_file_and_compile_objects _run_ninja_build( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1830, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error compiling objects for extension makefile:2: recipe for target 'all' failed make: *** [all] Error 1

    opened by Wushanfangniuwa 1
  • Training data for MVP

    Hi, thank you for sharing your wonderful work! I was able to train on the Neural Volumes data. I was wondering if the training data for MVP will be released in the near future. Thank you!

    opened by weilunhuang-jhu 1
  • Missing example files?

    Hi, I notice some discrepancies between the latest release and README.md / mvp/ARCHITECTURE.md. Are some example files missing, such as 'experiments/dryice1/experiment1/config.py' mentioned in README.md or 'experiments/example/config.py' mentioned in mvp/ARCHITECTURE.md? I follow the steps in README.md but cannot train or render the model.

    opened by faneggs 1
  • Training time

    Awesome Work! I have been a huge fan of your works since Neural Volumes. This work also seems very interesting!

    How long does it take for training?

    Thank you!

    opened by yeong5366 1
  • render.py not exporting the images for "render_rotate.mp4"

    Hi, thank you for sharing your great work! I'm trying to run the code from both your "neuralvolumes" and "mvp" repositories with the experiment data provided for each. "neuralvolumes" worked well, thank you!! I was able to run train.py and render.py from the "NeuralVolumes" code of your previous work and got the "prog_XXXXXX.jpg" image sequence and the "render_rotate.mp4" movie file. Then I tried "mvp" in the same environment: train.py ran successfully and produced the "prog_XXXXXX.jpg" image sequence as well as the "log.txt", "optimparams.pt", and "aeparams.pt" files. But when I run render.py, the image sequence files that make up "render_rotate.mp4" are not exported to the /tmp/xxxxxxxxxx/ directory.

    The error message is [image2 @ 0x56400177b780] Could find no file with path '/tmp/5613023327/%06d.png' and index in the range 0-4 /tmp/5613023327/%06d.png: No such file or directory

    Do you have any information about this error?

    My environment is Ubuntu 20.04 LTS, Python 3.8.13, GCC 9.4.0, PyTorch 1.10.1+cu113, and an A6000 GPU. The setup.py files in the mvpraymarch and utils directories have been adjusted for the CUDA arch.

    opened by ppponpon 3
  • Understanding the opacity fade factor

    Thanks for sharing this awesome work. I have read the paper and have some questions about the opacity fade factor.

    1. Due to the fade factor, the opacity falls off toward the volume edges. Does this make the primitives scale up to cover the scene (perhaps the opposite of the volume minimization prior)?
    2. Are there any experiments that justify the choice of the parameters $\alpha$ and $\beta$?
    3. Does the fade factor pull the centers of the primitives closer to high-occupancy points?
    4. What is the relationship between StyleGAN2 and the fade factor?

    Also, are there any suggestions for understanding the CUDA raymarching code? I haven't done any CUDA parallel programming before.

    opened by LSQsjtu 0
  • How do you reconstruct the mesh from images with different views?

    Hi, I noticed that the multi-view images of different frames of the same expression (same ID) in your dataset share fixed camera parameters for mesh reconstruction. When I perform mesh reconstruction from multi-view images, the camera parameters recovered from different frames are all different. I was wondering how you fixed the camera parameters for mesh reconstruction.

    opened by LiTian0215 1
  • Cannot replicate experiments/neuralvolumes results: Completely vague output and vanishing kldiv

    Hi, could someone kindly help me? I cannot replicate the output results in experiments/neuralvolumes. I have successfully built the extensions and downloaded experiments.zip from the latest release. However, the output images are completely blurry. My kldiv term quickly vanishes to 0, while in the example log.txt provided in experiments.zip the kldiv term remains larger than 0.3.

    Below are my outputs after 79579, 92139, and 106682 iterations (progress images prog_079579, prog_092139, and prog_106682, not reproduced here).

    Attached is my log.txt file, which contains my configuration and training statistics.

    I'm not sure where the problem is: incorrect camera poses, or a bug in this repository? I'd really appreciate any help. Thanks! :)

    opened by Wushanfangniuwa 2
  • How is the tracking mesh built?

    Hello, I have a new problem: I am trying to use video data of my own head to reconstruct the mesh and texture used as input and supervision for the network. I use COLMAP to reconstruct a point cloud and retarget it to a public model with 7306 vertices, but the result is very poor. Is there a recommended way to build the tracked mesh?

    opened by Luh1124 6
Releases (v0.1)
  • v0.1 (Jan 6, 2022)

    This is an initial release of the Mixture of Volumetric Primitives code. This release includes code for training, rendering, and evaluation. Bundled with this release is a pretrained MVP model. Note that training data for MVP is not included with this release, but will be released in the future. To give an example of how to use the training code, training data and a training configuration file for Neural Volumes is included.

    Source code (tar.gz)
    Source code (zip)
    experiments.zip (1069.70 MB)
Owner
Meta Research