[CVPR'21] Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration

Overview

This repository contains the implementation of our paper Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration. The code is largely based on Occupancy Networks - Learning 3D Reconstruction in Function Space.

You can find detailed usage instructions for training your own models and using pretrained models below.

If you find our code useful, please consider citing:

@InProceedings{PTF:CVPR:2021,
    author = {Shaofei Wang and Andreas Geiger and Siyu Tang},
    title = {Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration},
    booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2021}
}

Installation

This repository has been tested on the following platforms:

  1. Python 3.7, PyTorch 1.6 with CUDA 10.2 and cuDNN 7.6.5, Ubuntu 20.04
  2. Python 3.7, PyTorch 1.6 with CUDA 10.1 and cuDNN 7.6.4, CentOS 7.9.2009

First, make sure that you have all dependencies in place. The simplest way to do so is to use anaconda.

You can create an anaconda environment called PTF using

conda create -n PTF python=3.7
conda activate PTF

Second, install PyTorch 1.6 via the official PyTorch website.
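
For example, with conda and CUDA 10.2 the command is typically the following (check the official PyTorch site for the variant matching your CUDA version):

conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch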

Third, install dependencies via

pip install -r requirements.txt

Fourth, manually install pytorch-scatter.
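
The pytorch-scatter project provides pre-built wheels; a typical install for PyTorch 1.6 with CUDA 10.2 looks like the following (adjust the wheel URL to your PyTorch/CUDA versions and consult the pytorch-scatter README if in doubt):

pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.6.0+cu102.html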

Lastly, compile the extension modules. You can do this via

python setup.py build_ext --inplace

(Optional) if you want to use the registration code under smpl_registration/, you need to install kaolin. Download the code from the kaolin repository, check out commit e7e513173bd4159ae45be6b3e156a3ad156a3eb9 and install it according to its instructions (see the sketch below).
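
For example (a minimal sketch; the exact install command may differ between kaolin versions, so follow the instructions shipped with that commit):

git clone https://github.com/NVIDIAGameWorks/kaolin.git
cd kaolin
git checkout e7e513173bd4159ae45be6b3e156a3ad156a3eb9
python setup.py develop  # or whichever install step the kaolin instructions specify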

(Optional) if you want to train/evaluate single-view models (which correspond to the configurations in configs/cape_sv), you need to install OpenDR to render depth images. You first need to install OSMesa; here is the command for installing it on Ubuntu:

sudo apt-get install libglu1-mesa-dev freeglut3-dev mesa-common-dev libosmesa6-dev

For installing OSMesa on CentOS 7, please check this related issue. After installing OSMesa, install OpenDR via:

pip install opendr

Build the dataset

To prepare the dataset for training/evaluation, you have to first download the CAPE dataset from the CAPE website.

  1. Download SMPL v1.0, clean up the chumpy objects inside the models using this code, then rename the files and extract them to ./body_models/smpl/. Eventually, the ./body_models folder should have the following structure:
    body_models
     └-- smpl
     	├-- male
     	|   └-- model.pkl
     	└-- female
     	    └-- model.pkl
    
    

Besides the SMPL models, you will also need to download all the .pkl files from the IP-Net repository and put them under ./body_models/misc/. Finally, run the following script to extract the necessary SMPL parameters used in our code:

python extract_smpl_parameters.py

The extracted SMPL parameters will be saved to ./body_models/misc/.

  2. Extract the CAPE dataset to an arbitrary path, denoted as ${CAPE_ROOT}. The extracted dataset should have the following structure:
    ${CAPE_ROOT}
     ├-- 00032
     ├-- 00096
     |   ...
     ├-- 03394
     └-- cape_release
    
    
  3. Create a data directory under the project directory.
  4. Modify the parameters in preprocess/build_dataset.sh accordingly (i.e. set --dataset_path to ${CAPE_ROOT}) to extract training/evaluation data.
  5. Run preprocess/build_dataset.sh to preprocess the CAPE dataset (see the example below).
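
For example, assuming preprocess/build_dataset.sh is run from the project root after --dataset_path has been pointed at ${CAPE_ROOT}:

bash preprocess/build_dataset.sh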

Pre-trained models

We provide pre-trained PTF and IP-Net models at two encoder resolutions, 64x3 and 128x3. After downloading them, please put them under the respective directories ./out/cape or ./out/cape_sv.

Generating Meshes

To generate all evaluation meshes using a trained model, use

python generate.py configs/cape/{config}.yaml

Alternatively, if you want to parallelize the generation on an HPC cluster, use:

python generate.py --subject-idx ${SUBJECT_IDX} --sequence-idx ${SEQUENCE_IDX} configs/cape/${config}.yaml

to generate meshes for a specified subject/sequence combination. A list of all subject/sequence combinations can be found in ./misc/subject_sequence.txt.
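
For instance, to generate meshes for a single combination (the indices below are placeholders; valid values follow ./misc/subject_sequence.txt):

python generate.py --subject-idx 0 --sequence-idx 0 configs/cape/${config}.yaml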

SMPL/SMPL+D Registration

To register SMPL/SMPL+D models to the generated meshes, use either of the following:

python smpl_registration/fit_SMPLD_PTFs.py --num-joints 24 --use-parts --init-pose configs/cape/${config}.yaml # for PTF
python smpl_registration/fit_SMPLD_PTFs.py --num-joints 14 --use-parts configs/cape/${config}.yaml # for IP-Net

Note that registration is very slow, taking roughly 1-2 minutes per frame. If you have access to an HPC cluster, it is advisable to parallelize over subject/sequence combinations using the same subject/sequence input arguments as for mesh generation (see the sketch below).
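
As a rough sketch (hypothetical; assumes a SLURM cluster, placeholder index ranges, and that fit_SMPLD_PTFs.py accepts the same --subject-idx/--sequence-idx arguments as generate.py, as noted above), one job can be submitted per subject/sequence pair:

# Submit one PTF registration job per subject/sequence pair (index ranges are placeholders).
for SUBJECT_IDX in $(seq 0 9); do
  for SEQUENCE_IDX in $(seq 0 9); do
    sbatch --wrap="python smpl_registration/fit_SMPLD_PTFs.py --num-joints 24 --use-parts --init-pose --subject-idx ${SUBJECT_IDX} --sequence-idx ${SEQUENCE_IDX} configs/cape/${config}.yaml"
  done
done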

Training

Finally, to train a new network from scratch, run

python train.py --num_workers 8 configs/cape/${config}.yaml

You can monitor the training process at http://localhost:6006 using TensorBoard:

tensorboard --logdir ${OUTPUT_DIR}/logs --port 6006

where you replace ${OUTPUT_DIR} with the respective output directory.

License

We employ the MIT License for the PTF code, which covers:

extract_smpl_parameters.py
generate.py
train.py
setup.py
im2mesh/
preprocess/

Modules not covered by our license are modified versions of code from IP-Net (./smpl_registration) and SMPL-X (./human_body_prior); for these parts, please consult their respective licenses and cite the corresponding papers.
