
Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration

This repository contains the implementation of our paper Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration. The code is largely based on Occupancy Networks - Learning 3D Reconstruction in Function Space.

You can find detailed usage instructions for training your own models and using pretrained models below.

If you find our code useful, please consider citing:

@InProceedings{PTF:CVPR:2021,
    author = {Shaofei Wang and Andreas Geiger and Siyu Tang},
    title = {Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration},
    booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2021}
}

Installation

This repository has been tested on the following platforms:

  1. Python 3.7, PyTorch 1.6 with CUDA 10.2 and cuDNN 7.6.5, Ubuntu 20.04
  2. Python 3.7, PyTorch 1.6 with CUDA 10.1 and cuDNN 7.6.4, CentOS 7.9.2009

First you have to make sure that you have all dependencies in place. The simplest way to do so is to use anaconda.

You can create an anaconda environment called PTF using

conda create -n PTF python=3.7
conda activate PTF

Second, install PyTorch 1.6 via the official PyTorch website.

Third, install dependencies via

pip install -r requirements.txt

Fourth, manually install pytorch-scatter.
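For reference, on the tested PyTorch 1.6 / CUDA 10.2 setup a typical install command looks like the one below; the exact package version and wheel URL depend on your PyTorch/CUDA combination, so check the pytorch-scatter documentation for your setup:

pip install torch-scatter==2.0.5 -f https://pytorch-geometric.com/whl/torch-1.6.0+cu102.html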

Lastly, compile the extension modules. You can do this via

python setup.py build_ext --inplace

(Optional) If you want to use the registration code under smpl_registration/, you need to install kaolin. Download the code from the kaolin repository, check out commit e7e513173bd4159ae45be6b3e156a3ad156a3eb9, and install it according to the instructions.
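A possible sequence for this, assuming the NVIDIAGameWorks/kaolin GitHub repository (the final install command should follow whatever kaolin's instructions specify at that commit):

git clone https://github.com/NVIDIAGameWorks/kaolin.git
cd kaolin
git checkout e7e513173bd4159ae45be6b3e156a3ad156a3eb9
python setup.py develop  # or the install command given by kaolin's instructions at this commit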

(Optional) If you want to train/evaluate single-view models (which correspond to the configurations in configs/cape_sv), you need to install OpenDR to render depth images. You first need to install OSMesa; the command for installing it on Ubuntu is:

sudo apt-get install libglu1-mesa-dev freeglut3-dev mesa-common-dev libosmesa6-dev

For installing OSMesa on CentOS 7, please check this related issue. After installing OSMesa, install OpenDR via:

pip install opendr

Build the dataset

To prepare the dataset for training/evaluation, you first have to download the CAPE dataset from the CAPE website.

  1. Download SMPL v1.0, clean up the chumpy objects inside the models using this code, then rename the files and extract them to ./body_models/smpl/. Eventually, the ./body_models folder should have the following structure:
    body_models
     └-- smpl
     	├-- male
     	|   └-- model.pkl
     	└-- female
     	    └-- model.pkl
    
    

Besides the SMPL models, you will also need to download all the .pkl files from the IP-Net repository and put them under ./body_models/misc/. Finally, run the following script to extract the necessary SMPL parameters used in our code:

python extract_smpl_parameters.py

The extracted SMPL parameters will be saved into ./body_models/misc/.

  2. Extract the CAPE dataset to an arbitrary path, denoted as ${CAPE_ROOT}. The extracted dataset should have the following structure:
    ${CAPE_ROOT}
     ├-- 00032
     ├-- 00096
     |   ...
     ├-- 03394
     └-- cape_release
    
    
  3. Create a data directory under the project directory.
  4. Modify the parameters in preprocess/build_dataset.sh accordingly (i.e. set --dataset_path to ${CAPE_ROOT}) to extract training/evaluation data.
  5. Run preprocess/build_dataset.sh to preprocess the CAPE dataset (see the example commands below).
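Putting steps 3-5 together, the workflow looks roughly like this (a sketch; bash is assumed as the shell, and the --dataset_path edit happens inside preprocess/build_dataset.sh itself):

mkdir data
# edit preprocess/build_dataset.sh so that --dataset_path points to ${CAPE_ROOT}
bash preprocess/build_dataset.sh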

Pre-trained models

We provide pre-trained PTF and IP-Net models at two encoder resolutions, 64x3 and 128x3. After downloading them, put them under the respective directories ./out/cape or ./out/cape_sv.
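For example, assuming the downloaded models have been extracted in the current directory (the placeholder below stands for whatever directory the download actually produces):

mkdir -p out/cape out/cape_sv
mv <downloaded_model_dir> out/cape/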

Generating Meshes

To generate all evaluation meshes using a trained model, use

python generate.py configs/cape/{config}.yaml

Alternatively, if you want to parallelize the generation on an HPC cluster, use:

python generate.py --subject-idx ${SUBJECT_IDX} --sequence-idx ${SEQUENCE_IDX} configs/cape/${config}.yaml

to generate meshes for a specific subject/sequence combination. A list of all subject/sequence combinations can be found in ./misc/subject_sequence.txt.
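To loop over all combinations, a simple shell sketch is shown below; it assumes each line of ./misc/subject_sequence.txt holds one whitespace-separated subject/sequence pair, and on a cluster you would wrap the inner command with your scheduler's submit command:

while read -r SUBJECT_IDX SEQUENCE_IDX; do
    python generate.py --subject-idx ${SUBJECT_IDX} --sequence-idx ${SEQUENCE_IDX} configs/cape/${config}.yaml
done < ./misc/subject_sequence.txt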

SMPL/SMPL+D Registration

To register SMPL/SMPL+D models to the generated meshes, use either of the following:

python smpl_registration/fit_SMPLD_PTFs.py --num-joints 24 --use-parts --init-pose configs/cape/${config}.yaml # for PTF
python smpl_registration/fit_SMPLD_PTFs.py --num-joints 14 --use-parts configs/cape/${config}.yaml # for IP-Net

Note that registration is very slow, taking roughly 1-2 minutes per frame. If you have access to an HPC cluster, it is advisable to parallelize over subject/sequence combinations, using the same subject/sequence input arguments as for generating meshes.

Training

Finally, to train a new network from scratch, run

python train.py --num_workers 8 configs/cape/${config}.yaml

You can monitor the training process at http://localhost:6006 using TensorBoard:

tensorboard --logdir ${OUTPUT_DIR}/logs --port 6006

where you replace ${OUTPUT_DIR} with the respective output directory.

License

We employ the MIT License for the PTF code, which covers

extract_smpl_parameters.py
generate.py
train.py
setup.py
im2mesh/
preprocess/

Modules not covered by our license are modified versions from IP-Net (./smpl_registration) and SMPL-X (./human_body_prior); for these parts, please consult their respective licenses and cite the respective papers.
