MMGEN-FaceStylor

English | 简体中文

Introduction

This repo is an efficient toolkit for Face Stylization based on the paper "AgileGAN: Stylizing Portraits by Inversion-Consistent Transfer Learning". Note that since the training code of AgileGAN has not been released yet, this repo merely adopts the pipeline from AgileGAN and combines other helpful practices from the literature.

This project is based on MMCV and MMGEN; stars and forks are welcome 🤗!

Results from FaceStylor trained by MMGEN

Requirements

  • CUDA 10.0 / CUDA 10.1
  • Python 3
  • PyTorch >= 1.6.0
  • MMCV-Full >= 1.3.15
  • MMGeneration >= 0.3.0

Setup

Step-1: Create an Environment

First, create a conda virtual environment and activate it.

conda create -n facestylor python=3.7 -y
conda activate facestylor

Assuming you have CUDA 10.1 installed, install the prebuilt PyTorch with CUDA 10.1 support.

conda install pytorch=1.6.0 cudatoolkit=10.1 torchvision -c pytorch
pip install -r requirements.txt

Step-2: Install MMCV and MMGEN

We can run the following command to install MMCV.

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.6.0/index.html

Of course, you can also refer to the MMCV Docs to install it.

Next, install MMGEN, which contains the base generative models used in this project.

# Clone the MMGeneration repository.
git clone https://github.com/open-mmlab/mmgeneration.git
cd mmgeneration
# Install build requirements and then install MMGeneration.
pip install -r requirements.txt
pip install -v -e .  # or "python setup.py develop"
cd ..

Step-3: Clone repo and prepare the data and weights

Now, clone this repo.

git clone https://github.com/open-mmlab/MMGEN-FaceStylor.git

For convenience, we suggest creating the following folders under MMGEN-FaceStylor.

cd MMGEN-FaceStylor
mkdir data
mkdir work_dirs
mkdir work_dirs/experiments
mkdir work_dirs/pre-trained

Then, you can place your data (or soft links to it) under the data folder, and store your experiments under work_dirs/experiments.

For testing and training, you need to download some necessary data provided by AgileGAN and put it under the data folder, or just run:

wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1AavRxpZJYeCrAOghgtthYqVB06y9QJd3' -O data/shape_predictor_68_face_landmarks.dat
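This file is dlib's 68-point facial landmark model, commonly used for face detection and alignment. Below is a minimal sketch of loading it with dlib; the repo's actual preprocessing pipeline may differ.

import dlib

# Load the landmark predictor downloaded above (a standard dlib model).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('data/shape_predictor_68_face_landmarks.dat')

img = dlib.load_rgb_image('demo/src.png')
for face in detector(img, 1):          # upsample once to find smaller faces
    landmarks = predictor(img, face)   # 68 keypoints, usable for alignment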

We also provide some pre-trained weights.

Pre-trained Weights
FFHQ-1024 StyleGAN2
FFHQ-256 StyleGAN2
IR-SE50 Model
Encoder for FFHQ-1024 StyleGAN2
Encoder for FFHQ-256 StyleGAN2
MetFace-Oil 1024 StyleGAN2
MetFace-Sketch 1024 StyleGAN2
Toonify 1024 StyleGAN2
Cartoon 256
Bitmoji 256
Comic 256
More Styles on the Way!

Play with MMGEN-FaceStylor

If you have followed the steps above, you are ready to play with FaceStylor!

Quick Try

To quickly try our project, run the command below:

python demo/quick_try.py demo/src.png --style toonify

Then, you can check the result in work_dirs/demos/agile_result.png.

  • If you want to play with your own photo, replace demo/src.png with its path.
  • If you want to switch to another style, replace toonify with one of the other supported styles: toonify, oil, sketch, bitmoji, cartoon, comic.

Inversion

The inversion task takes a source image as input and returns the most similar image that the generator can produce.
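Conceptually, encoder-based inversion maps the image to a latent code and decodes it again. Here is a minimal sketch, where the encoder/generator interfaces are illustrative rather than the repo's exact API:

import torch

@torch.no_grad()
def invert(encoder, generator, image):
    # Project the aligned face image into latent space (e.g. Z+ / W+),
    # then reconstruct the closest image the generator can produce.
    latent = encoder(image)
    reconstruction = generator(latent)
    return latent, reconstruction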

For inversion, you can use agilegan_demo directly, like this:

python demo/agilegan_demo.py SOURCE_PATH CONFIG [--ckpt CKPT] [--device DEVICE] [--save-path SAVE_PATH]

Here, set SOURCE_PATH to your image path, CONFIG to the config file path, and CKPT to the checkpoint path.

Take the CelebA-HQ encoder as an example: download the weights to work_dirs/pre-trained/agile_encoder_celebahq1024x1024_lr_1e-4_150k.pth, put your test image under data, and run

python demo/agilegan_demo.py demo/src.png configs/agilegan/agile_encoder_celebahq1024x1024_lr_1e-4_150k.py --ckpt work_dirs/pre-trained/agile_encoder_celebahq1024x1024_lr_1e-4_150k.pth

You will find the result at work_dirs/demos/agile_result.png.

Stylization

Since the encoder and decoder used for stylization can be trained from different configs, you need to set their checkpoint paths in the config file. Take MetFace-Oil as an example; see the first two lines of its config file.

encoder_ckpt_path = xxx
stylegan_weights = xxx

Make sure the paths in your config point to the actual locations of your weights. Then run the same command without specifying CKPT:

python demo/agilegan_demo.py SOURCE_PATH CONFIG [--device DEVICE] [--save-path SAVE_PATH]

Train

Here is how to fine-tune on your own dataset. With only 100-200 images and less than one hour, you can train your own StyleGAN2. All you need to do is copy an agile_transfer config, like this one, modify imgs_root to point to your actual data root, and choose one of the two commands below to train your own model; a minimal config sketch follows the commands.

# For distributed training
bash tools/dist_train.sh ${CONFIG_FILE} ${GPUS_NUMBER} \
    --work-dir ./work_dirs/experiments/experiments_name \
    [optional arguments]
# For slurm training
bash tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG} ${WORK_DIR} \
    [optional arguments]
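For reference, such a transfer config typically just inherits a base config and overrides the data root. A minimal sketch, where the base config filename and data path are placeholders (copy a real config from configs/agilegan instead):

# Minimal transfer-config sketch; the file name and path below are placeholders.
_base_ = ['./agile_transfer_base.py']

imgs_root = 'data/my_style_images'  # point this at your 100-200 style images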

Training Details

This part explains some training details, including the ADA setting, layer freezing, and losses.

ADA Setting

To use ADA in your discriminator, set ADAStyleGAN2Discriminator as your discriminator and adjust the ADAAug setting as follows:

model = dict(
    discriminator=dict(
        type='ADAStyleGAN2Discriminator',
        data_aug=dict(
            type='ADAAug',
            aug_pipeline=aug_kwargs,  # This and the arguments below can be set by yourself.
            update_interval=4,
            augment_initial_p=0.,
            ada_target=0.6,
            ada_kimg=500,
            use_slow_aug=False)))
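For intuition, ADA raises or lowers the augmentation probability every update_interval iterations so that a discriminator-overfitting heuristic stays near ada_target. Below is a rough sketch of the StyleGAN2-ADA r_t update rule; it is illustrative, not this repo's exact code, and real_logits is assumed to be a torch tensor of discriminator outputs on real images:

def update_ada_p(p, real_logits, ada_target=0.6, ada_kimg=500,
                 update_interval=4, batch_size=16):
    # r_t heuristic: mean sign of the real-image logits; values near 1
    # suggest the discriminator is overfitting to the training set.
    rt = real_logits.sign().mean().item()
    # Step size chosen so p can traverse [0, 1] over ada_kimg thousand images.
    step = (batch_size * update_interval) / (ada_kimg * 1000)
    p += step if rt > ada_target else -step
    return min(max(p, 0.0), 1.0)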

Layer Freeze Setting

FreezeD can be used for fine-tuning on small datasets.

FreezeG can be used for pseudo translation.

model = dict(
    freezeD=5,  # set to -1 if not needed
    freezeG=4   # set to -1 if not needed
)
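Under the hood, freezing just turns off gradients for the chosen blocks. Here is a hedged sketch of FreezeD, where the convs attribute is a hypothetical name for the discriminator's block list:

def freeze_d(discriminator, num_frozen):
    # Stop updating the first `num_frozen` discriminator blocks so their
    # pre-trained features survive fine-tuning on small data.
    for i, block in enumerate(discriminator.convs):  # assumed block list
        if i < num_frozen:
            for param in block.parameters():
                param.requires_grad = False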

Losses Setting

In AgileGAN, to preserve the recognizable identity of the generated image, the authors introduce a similarity loss at the perceptual level. You can adjust lpips_lambda as follows:

model = dict(lpips_lambda=0.8)

Generally speaking, the larger lpips_lambda is, the better the recognizable identity is preserved.
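As a rough sketch of how this term enters the objective, using the standalone lpips package (the repo's internal loss wiring may differ):

import lpips

percep = lpips.LPIPS(net='vgg')  # perceptual distance in VGG feature space

def identity_loss(source, reconstruction, lpips_lambda=0.8):
    # A larger lpips_lambda pulls the stylized reconstruction closer to the
    # source in perceptual space, preserving identity more strongly.
    return lpips_lambda * percep(source, reconstruction).mean()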

Dataset Links

To make it easier for you to train your own models, here are some links to publicly available datasets.

Dataset Links
MetFaces
AFHQ
Toonify
photo2cartoon
selfie2anime
face2comics v2
High-Resolution Anime Face
Bitmoji

Applications

We also provide LayerSwap and DNI apps to trade off between preserving the structure of the original image and the degree of stylization. You can adjust their parameters to get the result you want.

LayerSwap

When Layer Swapping is applied, the generated images have a higher similarity to the source image than AgileGAN's results.

From Left to Right: Input, Layer-Swap with L = 4, 3, 2, xxx Output

Run this command line to perform layer swapping:

python apps/layerSwap.py source_path modelA modelB \
      [--swap-layer SWAP_LAYER] [--device DEVICE] [--save-path SAVE_PATH]

Here, modelA is a PSPEncoderDecoder (its config starts with agile_encoder) with an FFHQ-StyleGAN2 as the decoder, and modelB is a PSPEncoderDecoder (its config also starts with agile_encoder) with the desired style generator as the decoder. Generally, the deeper you set swap-layer, the more of the original image's structure is kept.

We also provide a blending script to create and save the mixed weights.

python modelA modelB [--swap-layer SWAP_LAYER] [--show-input SHOW_INPUT] [--device DEVICE] [--save-path SAVE_PATH]

Here, modelA is the base model; only the deep layers of its decoder are replaced with modelB's counterparts.
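Conceptually, the blend keeps modelA's weights and swaps in modelB's parameters for generator layers at or beyond the swap depth. Here is a hedged sketch over raw state dicts, where layer_index_of is a hypothetical helper that depends on the actual checkpoint layout:

import copy

def blend_state_dicts(state_a, state_b, layer_index_of, swap_layer=4):
    # `layer_index_of` maps a parameter key to its generator layer index
    # (or None for non-layer keys); it is a hypothetical helper.
    blended = copy.deepcopy(state_a)
    for key in state_a:
        idx = layer_index_of(key)
        if idx is not None and idx >= swap_layer:
            blended[key] = state_b[key].clone()
    return blended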

DNI

Deep Network Interpolation between L4 and AgileGAN output

For more precise stylization control, you can try DNI with the following command:

python apps/dni.py source_path modelA modelB [--intervals INTERVALS] [--device DEVICE] [--save-folder SAVE_FOLDER]

Here, modelA and modelB are supposed to be PSPEncoderDecoders (configs starting with agile_encoder) whose decoders have different degrees of stylization. INTERVALS sets the number of interpolation steps.
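DNI itself is simply a per-parameter linear interpolation between the two decoders' weights; a minimal sketch:

def dni(state_a, state_b, alpha):
    # Blend two compatible state dicts: alpha * A + (1 - alpha) * B.
    return {k: alpha * state_a[k] + (1 - alpha) * state_b[k]
            for k in state_a}

# Sweeping alpha over INTERVALS evenly spaced values in [0, 1] yields a
# sequence of decoders with gradually changing stylization degree.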

You can also try the applications in MMGEN, like interpolation and SeFa.

Interpolation


We provide an application script for this. You can use apps/interpolate_sample.py with the following command to interpolate between samples from unconditional models:

python apps/interpolate_sample.py \
    ${CONFIG_FILE} \
    ${CHECKPOINT} \
    [--show-mode ${SHOW_MODE}] \
    [--endpoint ${ENDPOINT}] \
    [--interval ${INTERVAL}] \
    [--space ${SPACE}] \
    [--samples-path ${SAMPLES_PATH}] \
    [--batch-size ${BATCH_SIZE}]

For more details, you can refer to the related docs.

Gallery

Toonify

Oil

Cartoon

Comic

Bitmoji

Notes and TODOs

  • For the encoder, I experimented with a VAE encoder but found no significant improvement for inversion. I follow the "encoding into Z+ space" approach, as the authors do. I will release the VAE encoder version later; only a vanilla encoder is offered this time.
  • For the generator, I released the vanilla StyleGAN2 generator; an attribute-aware generator will be released in the next version.
  • For the training settings, the parameters differ slightly from the paper's. I also tried ADA, FreezeD, and other methods not mentioned in the paper.
  • More styles will be available in the next version.
  • More applications will be available in the next version.
  • We are also considering a web-side application.
  • Further code cleanup.

Acknowledgments

Code references:

Display photos from: https://unsplash.com/t/people

Web demo powered by: https://gradio.app/

License

This project is released under the Apache 2.0 license. Some implementations in MMGEN-FaceStylor are covered by licenses other than Apache 2.0. Please check LICENSES.md carefully if you are using our code for commercial purposes.
