A free, multiplatform SDK for real-time facial motion capture that provides blendshape coefficients and rigid head pose in 3D space from any RGB camera, photo, or video.

Overview


mocap4face by Facemoji

mocap4face by Facemoji is a free, multiplatform SDK for real-time facial motion capture based on the Facial Action Coding System (FACS). It provides real-time FACS-derived blendshape coefficients and rigid head pose in 3D space from any mobile camera, webcam, photo, or video, enabling live animation of 3D avatars, digital characters, and more.

After fetching input from one of the sources above, the mocap4face SDK produces data as ARKit-compatible blendshapes, i.e., morph target weight values for each frame, as shown in the video below. This is useful, e.g., for animating a 2D or 3D avatar so that it mimics the user's facial expressions in real time à la Apple Memoji, but without the need for hardware such as a TrueDepth camera.
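
To make this concrete, here is a minimal Kotlin sketch of driving an avatar's morph targets from per-frame weights. The FaceFrame type and its fields are illustrative assumptions, not the SDK's actual API:

// Hypothetical per-frame result; the real SDK types may differ.
data class FaceFrame(
    val blendshapes: Map<String, Float> // e.g. "eyeBlink_L" -> weight in 0..1
)

// Copy the tracked weights onto an avatar's morph targets each frame.
fun applyFrame(frame: FaceFrame, morphTargets: MutableMap<String, Float>) {
    for ((name, weight) in frame.blendshapes) {
        morphTargets[name] = weight.coerceIn(0f, 1f)
    }
}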

With mocap4face, you can drive live avatars, build Snapchat-like lenses, AR experiences, face filters that trigger actions, VTubing apps, and more with as little energy impact and CPU/GPU use as possible. As an example, check out how the popular avatar live-streaming app REALITY is using our SDK.

Please star us on GitHub; it motivates us a lot!


🤓 Tech Specs

Key Features

  • 42 tracked facial expressions via blendshapes
  • Eye tracking, including an eye gaze vector (see the gaze-ray sketch after this list)
  • Tongue tracking
  • Light & fast, just 3MB ML model size
  • ≤ ±50° pitch, ≤ ±40° yaw and ≤ ±30° roll tracking coverage
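
As a hedged illustration of using the gaze vector, the sketch below intersects a gaze ray with the screen plane z = 0. The origin (eye position) and dir (gaze direction) inputs are assumptions about what the tracker provides, and the names are hypothetical:

// Intersect a gaze ray with the screen plane z = 0.
// origin and dir are assumed 3-element [x, y, z] arrays; names are hypothetical.
fun gazePointOnScreen(origin: FloatArray, dir: FloatArray): Pair<Float, Float>? {
    if (dir[2] == 0f) return null   // gaze parallel to the screen plane
    val t = -origin[2] / dir[2]     // solve origin.z + t * dir.z = 0
    if (t < 0f) return null         // gaze points away from the screen
    return Pair(origin[0] + t * dir[0], origin[1] + t * dir[1])
}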

🤳 Input

  • Any RGB camera
  • Photo
  • Video

📦 Output

  • ARKit-compatible blendshapes
  • Head position and scale in 2D and 3D
  • Head rotation in world coordinates (a data-shape sketch follows this list)
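
As a rough sketch of how one frame of this output could be laid out in code (every name below is an assumption for illustration, not the SDK's real type):

// Illustrative layout of one frame of tracking output; all field names are hypothetical.
data class TrackedHead(
    val blendshapes: Map<String, Float>,          // ARKit-compatible weights, 0..1
    val position2d: Pair<Float, Float>,           // head center in image coordinates
    val position3d: Triple<Float, Float, Float>,  // head position in 3D space
    val scale: Float,                             // apparent head size
    val rotation: Triple<Float, Float, Float>     // pitch, yaw, roll in world coordinates
)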

Performance

  • 50 FPS on Pixel 4
  • 60 FPS on iPhone SE (1st gen)
  • 90 FPS on iPhone X or newer

💿 Installation

Prerequisites

  1. Get a new developer account at studio.facemoji.co
  2. Generate a unique API key for your app
  3. Paste the API key into your source code (one way to keep it out of version control is sketched below)
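
The key is just a string your app hands to the SDK at startup. One common Android pattern for keeping it out of version control (a suggestion about project setup, not an SDK requirement) is to load it from local.properties in your module's build.gradle.kts and expose it via BuildConfig:

// build.gradle.kts (app module): a hypothetical setup; adapt to your project.
import java.util.Properties

plugins {
    id("com.android.application")
    id("org.jetbrains.kotlin.android")
}

// Read FACEMOJI_API_KEY=... from local.properties (kept out of VCS).
val localProps = Properties().apply {
    val file = rootProject.file("local.properties")
    if (file.exists()) file.inputStream().use { load(it) }
}

android {
    buildFeatures { buildConfig = true } // required on newer AGP versions
    defaultConfig {
        buildConfigField(
            "String",
            "FACEMOJI_API_KEY",
            "\"${localProps.getProperty("FACEMOJI_API_KEY", "")}\""
        )
    }
}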

iOS

  1. Open the sample Xcode project and run the demo on your iOS device
  2. To use the SDK in your project, either use the bundled XCFramework directly or use the Swift Package Manager (this repository also serves as a Swift PM repository)

Android

  1. Open the sample project in Android Studio and run the demo on your Android device
  2. Add this repository to the list of your Maven repositories in your root build.gradle, for example:
allprojects {
    repositories {
        google()
        mavenCentral()
        // Any other repositories here...

        maven {
            name = "Facemoji"
            url = uri("https://facemoji.jfrog.io/artifactory/default-maven-local/")
        }
    }
}
  3. To use the SDK in your project, add implementation 'co.facemoji:mocap4face:0.1.0' to your Gradle dependencies, for example:
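
For instance, in your app module's build.gradle (the module placement is an assumption; use whichever module consumes the SDK):

dependencies {
    // ... your other dependencies
    implementation 'co.facemoji:mocap4face:0.1.0'
}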

🚀 Use Cases

  • Live avatar experiences
  • Snapchat-like lenses
  • AR experiences
  • VTubing apps
  • Live streaming apps
  • Face filters
  • AR games with facial triggers
  • Beauty AR
  • Virtual try-on
  • Play-to-earn games


📄 License

This library is provided under the Facemoji SDK License Agreement—see LICENSE.

The sample code in this repository is provided under the Facemoji Samples License.

Notices

OSS used in mocap4face SDK:

Comments
  • MainActivity re-opened cannot be traced

    Only the first opened MainActivity can be tracked. When going back to SplashActivity and reopening MainActivity, the tracking result is null.

    The test code is as follows (modify SplashActivity):

    import java.util.Timer
    import kotlin.concurrent.schedule

    override fun onResume() {
        super.onResume()
        // Relaunch MainActivity 5 seconds after SplashActivity resumes
        Timer("Splash", false).schedule(5000) {
            startMainActivity()
        }
    }
    
    bug 
    opened by cike978 9
  • Cannot find any documents about how to build an app using the facemoji SDK

    I want to build an app like the official facemoji app, but the demo code is very simple and the SDK has no documentation.

    Can anyone tell me how to use the SDK to build a facemoji app?

    opened by danlanxiaohei 3
  • Fix "artifact not found for target 'mocap4face'" error

    Issue

    • I got an "artifact not found for target 'mocap4face'" error when I tried to load the mocap4face Swift Package with Xcode 13.3
      • The target name should be exactly the same as the framework name (case sensitive)

    Changes

    • Change the target name from mocap4face to Mocap4Face to match the framework name
    opened by magicien 2
  • Any plans for Windows & macOS support?

    I'm guessing that since this is already cross-platform on Android, Web, and iOS, desktop support is an upcoming feature. Just curious, since we'd love to use this everywhere if possible.

    opened by aaronn 2
  • [Question]: Is this lib compatible with React Native?

    It would be incredibly awesome to include this library in my React Native start-up app, but I wonder whether there's a React-Native-compatible SDK?

    Thanks!!!

    enhancement 
    opened by shamxeed 2
  • OpenGLResizer20: setup OES texture: glError 1282: invalid operation

    Using the Android version, opening the Activity works fine the first time, but when the page is closed and launched again, OpenGLResizer20: setup OES texture: glError 1282: invalid operation appears.

    onCameraImageArkitLink(cameraTexture: OpenGLTexture)

    cameraTexture = null

    opened by pengzhenkun 1
  • Stop face tracking

    Hello. Thanks for the great face tracking library!

    How can I stop tracking? Something like a dispose/destroy function?

    I am using this library on the web (JS).

    opened by Nawarius 1
  • Status of macOS build?

    Has any new work been done to bring this to the Mac via Catalyst?

    We primarily just want to be able to use the full camera width when running under M1 Catalyst. We don't need the full feature set, but macOS forces "non-Mac" frameworks to use a cropped, iPhone-like camera view even on the Mac, which has a much wider view.

    opened by aaronn 0
  • Problems with one eye open and the other closed

    Platform: Android. With one eye open and the other closed, the output is eyeBlink_L 0.7 and eyeBlink_R 0.5; the expected result is eyeBlink_L 0.7 and eyeBlink_R 0.

    bug 
    opened by will4621d 2
Releases: 0.3.0