PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time

Overview

The implementation is based on the SIGGRAPH Asia 2020 paper "PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time".

Dependencies

  • Python 3.7
  • Ubuntu 18.04 (the code should also run on other Ubuntu versions and on Windows, but this has not been tested)
  • RBDL: Rigid Body Dynamics Library (https://rbdl.github.io/)
  • PyTorch 1.8.1 with GPU support (tested with CUDA 10.2)
  • For other Python packages, see requirements.txt

Installation

  • Download and install RBDL with its Python bindings from https://github.com/rbdl/rbdl

  • Install PyTorch 1.8.1 with GPU support (https://pytorch.org/); other versions should also work but have not been tested

  • Install the remaining Python packages with:

      pip install -r requirements.txt
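
To quickly verify the installation, you can run a short sanity check such as the sketch below; it only checks that the RBDL Python bindings import and that PyTorch sees a GPU, and is not part of this repository:

    # sanity_check.py -- minimal sketch, not part of this repository
    import rbdl   # RBDL Python bindings installed above
    import torch

    print("RBDL imported from:", rbdl.__file__)
    print("PyTorch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())  # should print True for GPU support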
    

How to Run on the Sample Data

We provide sample data taken from the DeepCap dataset (CVPR'20). To run the code on the sample data, first go to the physcap_release directory and run:

python pipeline.py --contact_estimation 0 --floor_known 1 --floor_frame  data/floor_frame.npy  --humanoid_path asset/physcap.urdf --skeleton_filename asset/physcap.skeleton --motion_filename data/sample.motion --contact_path data/sample_contacts.npy --stationary_path data/sample_stationary.npy --save_path './results/'

To visualize the prediction, run:

python visualizer.py --q_path ./results/PhyCap_q.npy
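
The predicted poses saved to ./results/PhyCap_q.npy can also be loaded directly for your own post-processing. The sketch below assumes the file is a NumPy array with one pose vector q per frame:

    # inspect_results.py -- minimal sketch; assumes one pose vector q per frame
    import numpy as np

    q = np.load("./results/PhyCap_q.npy")
    print("array shape:", q.shape)        # expected: (num_frames, num_dofs)
    print("number of frames:", q.shape[0])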

To run PhysCap with its full functionality, the floor position should be given as a 4x4 matrix (rotation and translation). If you don't know the floor position, you can still run PhysCap with the "--floor_known 0" option:

python pipeline.py --contact_estimation 0 --floor_known 0  --humanoid_path asset/physcap.urdf --skeleton_filename asset/physcap.skeleton --motion_filename data/sample.motion --save_path './results/'
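
If you do know the floor position, the file passed via --floor_frame (data/floor_frame.npy in the sample command above) encodes it as a 4x4 rotation-and-translation matrix. The sketch below shows one way to write such a file with NumPy; the identity rotation and zero translation are placeholders, and the actual values must come from your own calibration:

    # make_floor_frame.py -- minimal sketch; the values below are placeholders, not a calibrated floor
    import numpy as np

    floor_frame = np.eye(4)                  # 4x4 homogeneous transform
    floor_frame[:3, :3] = np.eye(3)          # rotation of the floor frame (identity here)
    floor_frame[:3, 3] = [0.0, 0.0, 0.0]     # translation of the floor origin

    np.save("data/floor_frame.npy", floor_frame)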

How to Run on Your Data

  1. Run Stage I:

    We employ VNect for Stage I of the PhysCap pipeline. Please install the VNect C++ library and use its predictions to run PhysCap. When running VNect, replace "default.skeleton" with "physcap.skeleton" from the asset folder, which is compatible with the PhysCap skeleton definition (physcap.urdf). After running VNect on your sequence, the predictions (motion.motion and ddd.mdd) will be saved in the specified folder. For this example, we assume the predictions are saved in the "data/VNect_data" folder.

  2. Run Stages II and III:

    First, run the following command to preprocess the 2D keypoints:

     python process_2Ds.py --input ./data/VNect_data/ddd.mdd --output ./data/VNect_data/ --smoothing 0
    

    The processed keypoints will be stored as "vnect_2ds.npy". Then run the following command to execute Stages II and III:

     python pipeline.py --contact_estimation 1 --vnect_2d_path ./data/VNect_data/vnect_2ds.npy --save_path './results/' --floor_known 0 --humanoid_path asset/physcap.urdf --skeleton_filename asset/physcap.skeleton --motion_filename ./data/VNect_data/motion.motion --contact_path results/contacts.npy --stationary_path results/stationary.npy  
    

    If you know the exact floor position, you can instead use the options --floor_known 1 --floor_frame /Path/To/FloorFrameFile.

    To visualize the results, run:

     python visualizer.py --q_path ./results/PhyCap_q.npy
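
    For convenience, the Stage II and III commands above can also be chained in a small driver script. The sketch below simply mirrors the commands listed in this section via subprocess; adapt the paths to your own sequence:

     # run_stages.py -- minimal sketch that chains the commands above; adapt paths to your sequence
     import subprocess

     vnect_dir = "./data/VNect_data"

     # preprocess the 2D keypoints (produces vnect_2ds.npy)
     subprocess.run(
         ["python", "process_2Ds.py",
          "--input", f"{vnect_dir}/ddd.mdd",
          "--output", f"{vnect_dir}/",
          "--smoothing", "0"],
         check=True)

     # run Stages II and III
     subprocess.run(
         ["python", "pipeline.py",
          "--contact_estimation", "1",
          "--vnect_2d_path", f"{vnect_dir}/vnect_2ds.npy",
          "--save_path", "./results/",
          "--floor_known", "0",
          "--humanoid_path", "asset/physcap.urdf",
          "--skeleton_filename", "asset/physcap.skeleton",
          "--motion_filename", f"{vnect_dir}/motion.motion",
          "--contact_path", "results/contacts.npy",
          "--stationary_path", "results/stationary.npy"],
         check=True)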
    

License Terms

Permission is hereby granted, free of charge, to any person or company obtaining a copy of this software and associated documentation files (the "Software") from the copyright holders to use the Software for any non-commercial purpose. Publication, redistribution and (re)selling of the software, of modifications, extensions, and derivatives of it, and of other software containing portions of the licensed Software, are not permitted. The Copyright holder is permitted to publicly disclose and advertise the use of the software by any licensee.

Packaging or distributing parts or whole of the provided software (including code, models and data) as is or as part of other software is prohibited. Commercial use of parts or whole of the provided software (including code, models and data) is strictly prohibited. Using the provided software for promotion of a commercial entity or product, or in any other manner which directly or indirectly results in commercial gains is strictly prohibited.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Citation

If the code is used, the licensee is required to cite the use of VNect and the following publication in any documentation or publication that results from the work:

@article{
	PhysCapTOG2020,
	author = {Shimada, Soshi and Golyanik, Vladislav and Xu, Weipeng and Theobalt, Christian},
	title = {PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time},
	journal = {ACM Transactions on Graphics}, 
	month = {dec},
	volume = {39},
	number = {6}, 
	articleno = {235},
	year = {2020}, 
	publisher = {ACM}, 
	keywords = {physics-based, 3D, motion capture, real time}
} 