i3DMM: Deep Implicit 3D Morphable Model of Human Heads

CVPR 2021 (Oral)

arXiv | Project Page

Teaser

This project is the official implementation of our work, i3DMM. Much of our code is based on DeepSDF's repository. We thank Park et al. for making their code publicly available.

The pretrained model is included in this repository.

Setup

  1. To get started, clone this repository into a local directory.
  2. Install Anaconda, if you don't already have it.
  3. Create a conda environment in the path with the following command:
conda create -p ./i3dmm_env
  4. Activate the conda environment from the same folder:
conda activate ./i3dmm_env
  5. Use the following commands to install required packages:
conda install pytorch=1.1 cudatoolkit=10.0 -c pytorch
pip install opencv-python trimesh[all] scikit-learn mesh-to-sdf plyfile
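
Optionally, a quick sanity check (not part of the original setup steps) can confirm that PyTorch, its CUDA backend, and the pip dependencies are visible inside the activated environment:

python -c "import torch, cv2, trimesh; print(torch.__version__, torch.cuda.is_available())"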

Preparing Data

Rigid Alignment

We assume that all input data is rigidly aligned. Therefore, we provide reference 3D landmarks to which you should align your test/training data. Please use the centroids.txt file in the model folder to align your data to these landmarks. The landmarks in the file are given in the following order:

  1. Right eye left corner
  2. Right eye right corner
  3. Left eye left corner
  4. Left eye right corner
  5. Nose tip
  6. Right lips corner
  7. Left lips corner
  8. Point on the chin

The following image shows these landmarks. The centroids.txt file contains the 3D landmarks, one per line, in the order listed above; each of the 8 lines holds the three coordinate values in 'x y z' order, separated by spaces.

Please see our paper for more information on rigid alignment.
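
As a reference, below is a minimal sketch of how such a rigid alignment could be computed with the Kabsch method, assuming your own landmarks are stored in the same 'x y z' per-line format as centroids.txt. The file names ("my_scan.obj", "my_scan_landmarks.txt") and the use of trimesh are illustrative only and not part of the official pipeline.

import numpy as np
import trimesh

def load_landmarks(path):
    # One landmark per line, three space-separated coordinates (x y z).
    return np.loadtxt(path)  # shape (8, 3)

def kabsch(src, dst):
    # Rotation R and translation t minimizing ||(src @ R.T + t) - dst||.
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

reference = load_landmarks("model/centroids.txt")   # provided reference landmarks
scan_lms = load_landmarks("my_scan_landmarks.txt")  # landmarks picked on your scan
R, t = kabsch(scan_lms, reference)

mesh = trimesh.load("my_scan.obj", force="mesh", process=False)
mesh.vertices = mesh.vertices @ R.T + t
mesh.export("my_scan_aligned.obj")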

Dataset

We closely follow the ShapeNet dataset's folder structure. Please see the example mesh folder in the dataset. The dataset is assumed to be organized as follows:


   
<dataset name>/
    <model name>/
        models/
            <model name>.obj
            <model name>.mtl
            <texture image>.jpg
            centroids.txt
            centroidsEars.txt

The model names follow a specific structure, xxxxx_eyy, where xxxxx is a 5-character identifier for the identity and yy is a two-digit number specifying the expression or hairstyle. We use e01-e10 for the different expressions, where e07 is the neutral expression; e11-e13 are hairstyles in the neutral expression. The remaining expression identifiers are reserved for test expressions.
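
As a quick illustration of this naming convention (the concrete name below is hypothetical):

def parse_model_name(name):
    # "xxxxx_eyy" -> identity "xxxxx" and expression/hairstyle code "eyy".
    identity, expression = name.split("_")
    return identity, expression

print(parse_model_name("abcde_e07"))  # ('abcde', 'e07'), i.e. the neutral expression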

The centroids.txt file contains the landmarks described in the alignment step. Additionally, to train the model, one can also provide a centroidsEars.txt file containing the 3D ear landmarks in the following order:

  1. Left ear top
  2. Left ear left
  3. Left ear bottom
  4. Left ear right
  5. Right ear top
  6. Right ear left
  7. Right ear bottom
  8. Right ear right

These 8 landmarks are shown in the following image. The file is organized in the same way as centroids.txt. Please see the example mesh folder in the dataset.

Once the dataset is prepared, create the splits as shown in model/headModel/splits/*.json files. These files are similar to the splits files in DeepSDF.
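
For reference, a split file could be generated with a short script like the one below. The nesting (dataset name, then class/folder name, then a list of model names) mirrors DeepSDF's split format, and the concrete names ("heads", "xxxxx_e01", ...) are placeholders; please copy the exact layout from the provided files in model/headModel/splits/.

import json

# Illustrative DeepSDF-style split; dataset, class, and model names are placeholders.
split = {
    "heads": {
        "heads": [
            "xxxxx_e01",
            "xxxxx_e07",
            "xxxxx_e11",
        ]
    }
}

with open("model/headModel/splits/my_train_split.json", "w") as f:
    json.dump(split, f, indent=2)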

Preprocessing

The following commands preprocess the meshes from the dataset described above and place the samples in the data folder. The commands must be run from the 'model' folder. To preprocess the training data:

python preprocessData.py --samples_directory ./data --input_meshes_directory <path to dataset> -e headModel -s Train

To preprocess test data:

python preprocessData.py --samples_directory ./data --input_meshes_directory <path to dataset> -e headModel -s Test

'headModel' is the folder containing the network settings file 'specs.json'. This JSON file also specifies the split files and the preprocessed data paths. The split files are in model/headModel/splits/*.json; they indicate which files are used for training, testing, and reference shape initialisation.

Training the Model

Once data is preprocessed, one can train the model with the following command.

python train_i3DMM.py -e headModel

When working with a large dataset, please consider using the --batch_split option with a power of 2 (2, 4, 8, 16, etc.). The following command is an example.

python train_i3DMM.py -e headModel --batch_split 2

Additionally, to use landmark supervision or the ear constraints for long hair (see the paper for details), export the centroids and ear centroids as dictionaries in .npy files (8 face landmarks: eightCentroids.npy, ear landmarks: gtEarCentroids.npy).

An example entry in the dictionary: {"xxxxx_eyy": 8x3 numpy array}
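
A minimal sketch of how these dictionaries could be exported is given below. The dataset location and the per-model folder layout are assumptions based on the dataset section above; only the output file names come from the instructions.

import os
import numpy as np

def load_landmarks(path):
    # One landmark per line, three space-separated coordinates (x y z).
    return np.loadtxt(path)  # shape (8, 3)

dataset_dir = "path/to/dataset"  # hypothetical path to the mesh dataset
face_lms, ear_lms = {}, {}
for name in sorted(os.listdir(dataset_dir)):  # model names, e.g. "xxxxx_e07"
    models_dir = os.path.join(dataset_dir, name, "models")
    face_lms[name] = load_landmarks(os.path.join(models_dir, "centroids.txt"))
    ear_file = os.path.join(models_dir, "centroidsEars.txt")
    if os.path.exists(ear_file):
        ear_lms[name] = load_landmarks(ear_file)

# np.save pickles the dictionaries; load them back with np.load(..., allow_pickle=True).
np.save("eightCentroids.npy", face_lms)
np.save("gtEarCentroids.npy", ear_lms)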

Fitting i3DMM to Preprocessed Data

Please see the preprocessing section for preparing the data. Once the data is ready, use the following commands to fit i3DMM to it.

To save as image:

python fit_i3DMM_to_mesh.py -e headModel -c latest -d data -s <split file> --imNM True

To save as a mesh:

python fit_i3DMM_to_mesh.py -e headModel -c latest -d data -s <split file> --imNM False

The test dataset can be downloaded with this link. Please extract it and move the 'heads' folder to the dataset folder.

Citation

Please cite our paper if you use any part of this repository.

@inproceedings {yenamandra2020i3dmm,
 author = {T Yenamandra and A Tewari and F Bernard and HP Seidel and M Elgharib and D Cremers and C Theobalt},
 title = {i3DMM: Deep Implicit 3D Morphable Model of Human Heads},
 booktitle = {Proceedings of the IEEE / CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
 month = {June},
 year = {2021}
}