An addon that uses SMPL poses and global translation to drive a cartoon character in Blender.

Overview

A Blender addon for driving a cartoon character

The addon drives the cartoon character by passing SMPL poses and global translation into the model's armature in Blender. Poses and global translation can be obtained from ROMP or any other 3D pose estimation model. If the model outputs poses and global translation at a high enough FPS, you can drive the cartoon character in Blender in real time.

Demo

[demo GIF 1] [demo GIF 2]

The first demo uses ROMP outputs computed from a video and stored in a file.

The second demo uses ROMP outputs from the webcam in real time.

How to Use the Addon

Data Requester

This addon is a data requester: it sends a data request over TCP to 127.0.0.1:9999 and gets one data packet at a time from the data server.

After you start the addon by pressing Ctrl+E in Blender, it keeps requesting data until the server closes the TCP connection. You can also press A to close the connection.
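For reference, a minimal stand-alone requester might look like the sketch below. The length-prefixed pickle framing and the b"get" request message are assumptions made for illustration only; the addon and server.py define the actual wire format.

    # Hypothetical requester sketch. The framing (length-prefixed pickle) and the
    # b"get" request message are assumptions; see the addon source and server.py
    # for the real protocol.
    import pickle
    import socket
    import struct

    def request_packets(host="127.0.0.1", port=9999):
        """Yield one data packet per request until the server closes the connection."""
        with socket.create_connection((host, port)) as sock:
            while True:
                sock.sendall(b"get")                 # assumed request message
                header = sock.recv(4)                # assumed 4-byte length prefix
                if len(header) < 4:                  # server closed the connection
                    break
                size = struct.unpack(">I", header)[0]
                payload = b""
                while len(payload) < size:
                    chunk = sock.recv(size - len(payload))
                    if not chunk:
                        return
                    payload += chunk
                yield pickle.loads(payload)          # [mode, poses, translation, keyframe id]

    for mode, poses, trans, keyframe_id in request_packets():
        print(mode, len(poses), trans, keyframe_id)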

Data Server

The data server is bound to 127.0.0.1:9999. After receiving a request from the data requester, it sends one data packet back to the requester.

I've written server.py as an example data server (you only need to know a little about Python TCP sockets to understand it).

A real-time data server can be found in ROMP; you just need to run webcam_blender.sh.
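If you want to write your own data server, server.py in this repo is the reference. The sketch below only illustrates the idea of a file-based server that answers each request with one packet, using the same assumed framing as the requester sketch above.

    # Hypothetical file-based data server sketch; server.py is the real reference.
    # The length-prefixed pickle framing is an assumption for illustration only.
    import pickle
    import socket
    import struct

    def serve_packets(packets, host="127.0.0.1", port=9999):
        """Send one packet per request, then close the connection to stop the addon."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                for packet in packets:
                    if not conn.recv(16):            # wait for the requester's message
                        break
                    payload = pickle.dumps(packet)
                    conn.sendall(struct.pack(">I", len(payload)) + payload)
                # closing the connection tells the addon that the data is finished

    # Example: 100 frames of rest pose, no translation, keyframes inserted.
    frames = [[1, [0.0] * 72, [0.0, 0.0, 0.0], i] for i in range(100)]
    serve_packets(frames)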

Data Format

Each data packet is a Python list of four elements of the form [mode, poses, global translation, current keyframe id]; a sketch of how such a packet might be built follows the list below.

  1. Mode is an integer: 1 to insert keyframes, 0 not to. If keyframes are inserted, the animation can be rendered later. Keyframes are generally not inserted in real-time mode so that driving the character stays smooth.
  2. Poses is a list of length 72 (the SMPL axis-angle pose parameters, 24 joints x 3).
  3. Global translation is a list of length 3. If you don't need global translation, just pass [0, 0, 0].
  4. Current keyframe id is an integer. If you insert keyframes, set it to the correct keyframe id; if you don't insert keyframes, just set it to 0.
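Putting the four fields together, one packet might be built like this (the pose and translation values are placeholders; real ones would come from ROMP or another estimator):

    # Placeholder values; real poses and translation come from ROMP or another model.
    poses = [0.0] * 72          # SMPL axis-angle pose parameters (24 joints x 3)
    trans = [0.0, 0.0, 0.0]     # global translation, or [0, 0, 0] if unused
    mode = 1                    # 1: insert keyframes for later rendering, 0: real time
    keyframe_id = 42            # keyframe id to insert at when mode == 1, else 0

    packet = [mode, poses, trans, keyframe_id]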

Steps

  1. Install the addon in Blender.
  2. Run the data server.
  3. Press Ctrl+E in Blender to run the addon.
  4. Press A in Blender to stop the addon, or wait until the data transfer is complete.

In step 3, you should select the Armature first; otherwise bugs may occur. Also, the mouse cursor must be inside the 3D Viewport area (where the model is); otherwise the addon will not run.
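If you prefer to do the selection from Blender's Python console instead of clicking in the viewport, something like the following should select and activate the armature (assuming the object is named Armature, as in the demo project):

    # Run in Blender's Python console; assumes the armature object is named "Armature".
    import bpy

    arm = bpy.data.objects["Armature"]
    bpy.ops.object.select_all(action="DESELECT")    # clear the current selection
    arm.select_set(True)                            # select the armature
    bpy.context.view_layer.objects.active = arm     # make it the active object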

Something about Blender

If you're not familiar with Blender, I've placed a Blender project in the resources folder to help you. All you need to do is open it and follow the Steps above to achieve the effect shown in the Demo. (It's better to know something about animation in Blender.)

If you need a video background in the demo, select the Compositing workspace in the top menu bar, click Open Clip in the Movie Clip node, and select your video.

Figure 2

If you are familiar with Blender and want to use your own model, make sure its armature is the SMPL skeleton. The armature object should be named Armature, and each bone should have the same name as the corresponding bone in the demo model (only the 24 bones of the SMPL skeleton are needed; the fingers don't need to be renamed). A renaming sketch follows the figure below.

Figure 3
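If your own model's bones follow a different naming convention, a loop like the one below can rename them to the SMPL names used by the demo model. The mapping shown is a partial, hypothetical example: the source names on the left are made up, and you would fill in your model's actual bone names for all 24 SMPL bones (Pelvis, L_Hip, R_Hip, ..., L_Hand, R_Hand).

    # Hypothetical renaming sketch; the source names on the left are examples only.
    # Extend the mapping to cover all 24 SMPL bones of your own model.
    import bpy

    my_model_to_smpl = {
        "hips": "Pelvis",
        "thigh.L": "L_Hip",
        "thigh.R": "R_Hip",
        # ... remaining bones up to L_Hand / R_Hand ...
    }

    arm = bpy.data.objects["Armature"]   # the armature object must be named Armature
    for bone in arm.data.bones:
        if bone.name in my_model_to_smpl:
            bone.name = my_model_to_smpl[bone.name]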

Comments
  • How to run an .fbx file to control the character

    Hello. I have successfully run the demo of ROMP, which exported an .fbx file, and now I want to use the .fbx to control the character. Can you provide steps for the video demo? I can only see the camera one.

    opened by CheungBH 31
  • [simple-romp] How to use it?

    I tried to use it with simple-romp but it did not work. I already created an issue at the ROMP repository and described my problem here and here in detail.

    The author of the ROMP repository answered there:

    About live blender driving, please refer to this repo. https://github.com/yanch2116/CharacterDriven-BlenderAddon My colleague is responsible for maintaining this funciton now. Best regard.

    and closed the issue.

    @yanch2116 So how to solve it?

    My goal is to send the positions and quaternions of ROMP (or any other SMPL based solution) over the VMC protocol to other application (not Blender). Any ideas, whats the best way to do it?

    opened by vivi90 26
  • Could I ask for a detailed instruction on how to change the 3D Character?

    Also, I would like to know: is there a way to replace the 3D character before I run the script and process the video, so that I don't have to change it manually every time?

    opened by XXZhe 12
  • Can't connect ROMP into addon

    I use ROMP v1.1 on a Windows machine, but CDBA works with version 1.0 of ROMP. I read the installation and usage documentation of ROMP v1.0 but couldn't figure it all out. How can I use this addon properly? I'm not good at programming; I'm an animator who is interested in your project. Can you write steps for how to install and use ROMP with this addon, please?


    opened by hasanleiva 11
  • Hi, it looks like changing to another Mixamo character doesn't work

    The ROMP-driven SMPL model looks OK, but when using CDBA (running the script locally) to drive a Mixamo model, the result looks wrong:

    [screenshot of the incorrect result]

    I am using this bone mapper in your repo:

    bones_mixamo_smpl_mapper = {
        "Hips": "Pelvis",
        "LeftUpLeg": "L_Hip",
        "RightUpLeg": "R_Hip",
        "Spine2": "Spine3",
        "Spine1": "Spine2",
        "Spine": "Spine1",
        "LeftLeg": "L_Knee",
        "RightLeg": "R_Knee",
        "LeftFoot": "L_Ankle",
        "RightFoot": "R_Ankle",
        "LeftToeBase": "L_Foot",
        "RightToeBase": "R_Foot",
        "Neck": "Neck",
        "LeftShoulder": "L_Collar",
        "RightShoulder": "R_Collar",
        "Head": "Head",
        "LeftArm": "L_Shoulder",
        "RightArm": "R_Shoulder",
        "LeftForeArm": "L_Elbow",
        "RightForeArm": "R_Elbow",
        "LeftHand": "L_Wrist",
        "RightHand": "R_Wrist",
        "LeftHandIndex1": "L_Hand",
        "LeftHandMiddle1": "L_Hand",
        "RightHandMiddle1": "R_Hand",
        "RightHandIndex1": "R_Hand",
    }
    bones_smpl_mixamo_mapper = {v: k for k, v in bones_mixamo_smpl_mapper.items()}
    bone_name_from_index_character = {
        k: bones_smpl_mixamo_mapper[v] for k, v in bone_name_from_index.items()
    }
    
    

    Do you know why?

    Also, the hands don't look right:

    [screenshot of the hands]

    opened by jinfagang 11
  • Detection variable: outputs = {'poses': poses, 'trans': trans[0]}

    Hey, Yanchxx,

    May I ask what the output variable is? I think "trans" is the translation between the camera and the object.

    What are the poses: are they locations (x, y, z) or rotations (x, y, z) in degrees?

    Thank you 👍

    opened by zhangby2085 10
  • Bone orientations get messed up when importing the FBX

    @yanch2116 Hello, very cool work! I've basically gotten the whole project running, but I have two further questions for you. 1. When importing the FBX, the bone orientations get messed up; see https://www.bilibili.com/read/cv2520452. However, the method in that article doesn't completely fix the problem, so may I ask how you solved it? 2. How should I edit a skeleton so that it matches the SMPL skeleton? Any advice would be much appreciated. Thanks a lot!

    opened by syguan96 8
  • Use keyframes to prevent pose shaking

    I found that the avatar shakes drastically when running the webcam demo, so I set mode = 1 to insert keyframes and record the webcam results for playback.

    Now, if we add one more condition for inserting keyframes, only inserting when frame_idx % 3 == 0 or frame_idx % 5 == 0, the avatar moves along these keyframes much more smoothly.

    However, is there a way to make the webcam demo run in real time using the keyframe strategy I described? This keyframe-skipping strategy only seems to work with recorded playback; the character still moves on every frame when actually running in real time.

    opened by ZhengdiYu 4
  • Can I rotate the scene while running scripts?

    Hi, I'm able to run the demo with Blender now, but it turns out that the view is locked while the script is running.

    Is there a way to enable rotation?

    opened by anzisheng 4
  • Which version of Blender are you using?

    I encountered the following error while running Beta.blend. I'm using Blender 2.83.9. What's the expected version?

    Read blend: E:\Workspace\blender_test\addons\CharacterDriven-BlenderAddon-master\blender\Beta.blend
    0 meshes freed
    Error: File written by newer Blender binary (290.0), expect loss of data!

    opened by sylyt62 4
  • multiple people

    Hey, yanch2116, very impressive job visualizing 3D characters. I tested it and it works. One question: romp_server.py has setting.show_largest=True. What happens when there are multiple people in the webcam view? When there are many people, the Blender character keeps shifting. Are there ways to solve this? 1. Keep the object tracker on a single object ID. 2. Visualize multiple 3D characters from the camera input.

    opened by zhangby2085 3