An end-to-end library for editing and rendering motion of 3D characters with deep learning [SIGGRAPH 2020]

Overview

Deep-motion-editing

Python Pytorch Blender

This library provides fundamental and advanced functions for working with 3D character animation in deep learning with PyTorch. The code contains end-to-end modules, from reading and editing animation files to visualizing and rendering them (using Blender).

The main deep editing operations provided here, motion retargeting and motion style transfer, are based on two works published in SIGGRAPH 2020:

Skeleton-Aware Networks for Deep Motion Retargeting: Project | Paper | Video


Unpaired Motion Style Transfer from Video to Animation: Project | Paper | Video


This library is written and maintained by Kfir Aberman, Peizhuo Li and Yijia Weng. The library is still under development.

Prerequisites

  • Linux or macOS
  • Python 3
  • CPU or NVIDIA GPU + CUDA CuDNN

Quick Start

We provide pretrained models together with demo examples using animation files in bvh format.

Motion Retargeting

Download and extract the test dataset from Google Drive or Baidu Disk (ye1q). Then place the Mixamo directory within retargeting/datasets.

To generate the demo examples with the pretrained model, run

cd retargeting
sh demo.sh

The results will be saved in retargeting/examples.

To reproduce the quantitative results with the pretrained model, run

cd retargeting
python test.py

The retargeted demo results, which include both intra-structural and cross-structural retargeting, will be saved in retargeting/pretrained/results.

Motion Style Transfer

To generate the demo examples, simply run

sh style_transfer/demo.sh

The results will be saved in style_transfer/demo_results, where each folder contains the raw output raw.bvh and the output after footskate clean-up fixed.bvh.

Train from scratch

We provide instructions for retraining our models.

Motion Retargeting

Dataset

We use the Mixamo dataset to train our model. You can download our preprocessed data from Google Drive or Baidu Disk (4rgv). Then place the Mixamo directory within retargeting/datasets.

Otherwise, if you want to download the Mixamo dataset yourself or use your own dataset, please follow the instructions below. Unless specifically mentioned, all scripts should be run in the retargeting directory.

  • To download Mixamo data on your own, you can refer to this good tutorial. You will need to download the animations as fbx files (skin is not required) and make a subdirectory for each character in retargeting/datasets/Mixamo. In our original implementation we download 60fps fbx files and downsample them to 30fps (a sketch of this step appears after this list). Since training is unpaired, it is recommended to divide all motions into two equal-size sets for each group, and equal-size sets for each character in each group. If you use your own data, you need to make sure that your dataset consists of bvh files with the same T-pose. You should also put your dataset in subdirectories of retargeting/datasets/Mixamo.

  • Enter the retargeting/datasets directory and run blender -b -P fbx2bvh.py to convert fbx files to bvh files. If your dataset already consists of bvh files, please skip this step.

  • In our original implementation, we manually split three joints for skeletons in group A. If you want to follow our routine, run python datasets/split_joint.py. This step is optional.

  • Run python datasets/preprocess.py to simplify the skeleton by removing some less interesting joints (e.g. fingers) and to convert bvh files into npy files. If you use your own data, you'll need to define the simplified structure in retargeting/datasets/bvh_parser.py. This information is currently hard-coded in the code; see the comments in the source file for more details. There are four steps to make your own dataset work.

  • Training and testing characters are hard-coded in retargeting/datasets/__init__.py. You'll need to modify it if you want to use your own dataset.
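
For the 60fps-to-30fps routine mentioned in the first step, a minimal sketch of the downsampling could look like the helper below. This is an illustration only, not a script shipped with the repo, and it assumes the standard bvh layout in which the motion data immediately follows the "Frame Time:" line:

    # Hypothetical helper (not part of the repo): downsample a 60 fps bvh
    # file to 30 fps by keeping every second frame and doubling Frame Time.
    def downsample_bvh(src_path, dst_path, factor=2):
        with open(src_path) as f:
            lines = f.readlines()
        frames_idx = next(i for i, l in enumerate(lines)
                          if l.strip().startswith("Frames:"))
        time_idx = next(i for i, l in enumerate(lines)
                        if l.strip().startswith("Frame Time:"))
        frame_time = float(lines[time_idx].split()[-1])
        kept = lines[time_idx + 1:][::factor]        # every `factor`-th frame
        lines[frames_idx] = "Frames: %d\n" % len(kept)
        lines[time_idx] = "Frame Time: %f\n" % (frame_time * factor)
        with open(dst_path, "w") as f:
            f.writelines(lines[:time_idx + 1] + kept)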

Train

After preparing the dataset, simply run

cd retargeting
python train.py --save_dir=./training/

It will use default hyper-parameters to train the model and save the trained model in the retargeting/training directory. More options are available in retargeting/option_parser.py. You can use TensorBoard to monitor the training progress by running

tensorboard --logdir=./retargeting/training/logs/

Motion Style Transfer

Dataset

  • Download the dataset from Google Drive or Baidu Drive (zzck). The dataset consists of two parts: one is taken from the motion style transfer dataset proposed by Xia et al. and the other is our BFA dataset; both parts contain .bvh files retargeted to the standard skeleton of the CMU mocap dataset.

  • Extract the .zip files into style_transfer/data.

  • Pre-process data for training:

    cd style_transfer/data_proc
    sh gen_dataset.sh

    This will produce xia.npz, bfa.npz in style_transfer/data.
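
    If you want to sanity-check the generated archives, a quick numpy sketch is enough; we make no assumption here about which fields the preprocessing stored, the snippet simply lists them:

        import numpy as np

        # List the arrays stored in the generated archive and their shapes.
        data = np.load("style_transfer/data/xia.npz", allow_pickle=True)
        print(data.files)
        for key in data.files:
            arr = data[key]
            print(key, arr.shape if hasattr(arr, "shape") else type(arr))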

Train

After downloading the dataset, simply run

python style_transfer/train.py

Style from videos

To run our models at test time with your own videos, you first need to use OpenPose to extract the 2D joint positions from the video, then use the resulting JSON files as described in the demo examples.
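
As a rough sketch of that first step, the snippet below collects per-frame 2D keypoints from a directory of OpenPose JSON files (one file per frame). The BODY_25 layout, i.e. 25 joints stored as flat x, y, confidence triplets, is an assumption about your OpenPose configuration:

    import glob
    import json
    import numpy as np

    def load_openpose_json(json_dir):
        # One JSON file per frame; keep the first detected person per frame.
        frames = []
        for path in sorted(glob.glob(f"{json_dir}/*.json")):
            with open(path) as f:
                data = json.load(f)
            if not data["people"]:              # no person detected this frame
                frames.append(np.zeros((25, 3)))
                continue
            kp = np.array(data["people"][0]["pose_keypoints_2d"]).reshape(-1, 3)
            frames.append(kp)                   # columns: x, y, confidence
        return np.stack(frames)                 # (n_frames, 25, 3)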

Blender Visualization

We provide a simple wrapper of Blender's Python API (2.80) for rendering 3D animations.

Prerequisites

The Blender releases distributed from blender.org include a complete Python installation across all platforms, which means that any extensions you have installed in your system's Python won't appear in Blender.

To use external Python libraries, you can install new packages directly into Blender's Python distribution. Alternatively, you can replace the default Blender Python interpreter as follows:

  1. Remove the built-in python directory: [blender_path]/2.80/python.

  2. Make a symbolic link to, or simply copy, a Python interpreter at [blender_path]/2.80/python, e.g. ln -s ~/anaconda3/envs/env_name [blender_path]/2.80/python

This interpreter should be Python 3.7.x and must include at least numpy and scipy.
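
To verify that Blender picked up the intended interpreter and packages, a quick check can be run inside Blender (e.g. blender -b -P check_python.py, where check_python.py is a hypothetical file name for the snippet below):

    import sys
    print(sys.version)          # should report 3.7.x after the swap
    print(sys.prefix)           # root of the interpreter Blender is using

    import numpy                # both imports raise ImportError if the
    import scipy                # linked environment lacks the packages
    print(numpy.__version__, scipy.__version__)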

Usage

Arguments

Because Blender parses its own command line, the script's argument list must be separated from the Python file with an extra '--', for example:

blender -P render.py -- --arg1 [ARG1] --arg2 [ARG2]

engine: "cycles" or "eevee". Please refer to Render section for more details.

render: 0 or 1. If set to 1, the animation will be rendered outside Blender's GUI. It is recommended to use render = 0 in case you need to manually adjust the camera.

The full parameter list can be displayed by: blender -P render.py -- -h
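
For reference, a minimal sketch of how a script can honor the '--' convention (render.py itself may parse its options differently; the defaults below are assumptions):

    import argparse
    import sys

    # Blender consumes everything before '--'; the script should only
    # parse what comes after it.
    argv = sys.argv[sys.argv.index("--") + 1:] if "--" in sys.argv else []

    parser = argparse.ArgumentParser()
    parser.add_argument("--engine", default="eevee", help='"cycles" or "eevee"')
    parser.add_argument("--render", type=int, default=1, help="0 or 1")
    args = parser.parse_args(argv)
    print(args.engine, args.render)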

Load bvh File (load_bvh.py)

To load example.bvh, run blender -P load_bvh.py. Please finish the preparation described in Prerequisites first.

Note that currently it uses primitive_cone with 5 vertices for limbs.

Note that Blender and bvh files use different xyz coordinate systems: in a bvh file the "height" axis is the y-axis, while in Blender it is the z-axis. load_bvh.py swaps the axes in the BVH_file class initialization function.

Currently all the End Sites in the bvh file are discarded; this is due to the external code used in utils/.

After loading the bvh file, its height is normalized to 10.
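
A minimal numpy sketch of the two conventions just described (illustrative only, not the library's actual implementation):

    import numpy as np

    def bvh_to_blender(positions):
        # positions: (n_frames, n_joints, 3) in bvh's y-up convention
        x, y, z = positions[..., 0], positions[..., 1], positions[..., 2]
        converted = np.stack([x, -z, y], axis=-1)   # y-up -> z-up
        height = converted[..., 2].max() - converted[..., 2].min()
        return converted * (10.0 / height)          # normalize height to 10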

Material, Texture, Light and Camera (scene.py)

This file adds a checkerboard floor, a camera and a "sun" to the scene, and applies a basic color material to the character.

The floor is placed at y=0 and should be corrected manually if needed (depending on the character parameters in the bvh file).
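
For orientation, here is a sketch of the kind of setup scene.py performs, using the Blender 2.80 Python API; the sizes, positions and colors below are placeholder assumptions, not the script's actual parameters:

    import bpy

    bpy.ops.mesh.primitive_plane_add(size=50, location=(0, 0, 0))   # floor
    bpy.ops.object.light_add(type='SUN', location=(0, 0, 10))       # "sun"
    bpy.ops.object.camera_add(location=(0, -25, 8),
                              rotation=(1.4, 0, 0))                 # camera
    bpy.context.scene.camera = bpy.context.object

    mat = bpy.data.materials.new(name="character_color")  # basic material
    mat.diffuse_color = (0.8, 0.3, 0.2, 1.0)              # RGBA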

Rendering

We support the two render engines provided in Blender 2.80, Eevee and Cycles, where the trade-off is between speed and quality.

Eevee is a fast, real-time render engine that provides limited quality, while Cycles is a slower, unbiased, ray-tracing render engine that produces photorealistic results. Cycles also supports CUDA and OpenCL acceleration.
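
Selecting the engine programmatically looks roughly like this (Blender 2.80 identifiers; the GPU block only matters for Cycles):

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'    # or 'BLENDER_EEVEE' for faster previews

    # Enable GPU rendering for Cycles (skip on CPU-only machines).
    prefs = bpy.context.preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'CUDA'
    scene.cycles.device = 'GPU'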

Skinning

Automatic Skinning

We provide a blender script that applies "skinning" to the output skeletons. You first need to download the fbx file that corresponds to the targeted character (for example, "mousey"). Then, you can get a skinned animation by simply running

blender -P blender_rendering/skinning.py -- --bvh_file [bvh file path] --fbx_file [fbx file path]

Note that the script might not work well for all the fbx and bvh files. If it fails, you can try to tweak the script or follow the manual skinning guideline below.

Manual Skinning

Here we provide a "quick and dirty" guideline for applying a skin to the resulting bvh files with Blender:

  • Download the fbx file that corresponds to the retargeted character (for example, "mousey")
  • Import the fbx file to blender (uncheck the "import animation" option)
  • Merge meshes - select all the parts and merge them (ctrl+J)
  • Import the retargeted bvh file
  • Click "context" (menu bar) -> "Rest Position" (under sekeleton)
  • Manually align the mesh and the skeleton (rotation + translation)
  • Select the skeleton and the mesh (the skeleton object should be highlighted)
  • Click Object -> Parent -> with automatic weights (or Ctrl+P)

Now the skeleton and the skin are bound and the animation can be rendered.
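
The same binding can be scripted. Below is a sketch under the assumption that the imported objects are named "Armature" and "Mesh" (check the outliner for the actual names in your scene):

    import bpy

    mesh = bpy.data.objects["Mesh"]          # the merged character mesh
    armature = bpy.data.objects["Armature"]  # the imported bvh skeleton

    mesh.select_set(True)
    armature.select_set(True)
    bpy.context.view_layer.objects.active = armature  # skeleton must be active
    bpy.ops.object.parent_set(type='ARMATURE_AUTO')   # With Automatic Weights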

Acknowledgments

The code in the utils directory is mostly taken from Holden et al. [2016].
In addition, part of the MoCap dataset is taken from Adobe Mixamo and from the work of Xia et al..

Citation

If you use this code for your research, please cite our papers:

@article{aberman2020skeleton,
  author = {Aberman, Kfir and Li, Peizhuo and Sorkine-Hornung, Olga and Lischinski, Dani and Cohen-Or, Daniel and Chen, Baoquan},
  title = {Skeleton-Aware Networks for Deep Motion Retargeting},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {39},
  number = {4},
  pages = {62},
  year = {2020},
  publisher = {ACM}
}

and

@article{aberman2020unpaired,
  author = {Aberman, Kfir and Weng, Yijia and Lischinski, Dani and Cohen-Or, Daniel and Chen, Baoquan},
  title = {Unpaired Motion Style Transfer from Video to Animation},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {39},
  number = {4},
  pages = {64},
  year = {2020},
  publisher = {ACM}
}