AttGAN: Facial Attribute Editing by Only Changing What You Want (IEEE TIP 2019)

Overview

News

  • 11 Jan 2020: We cleaned up the code to make it more readable! The old version is available here: v1.



AttGAN
TIP Nov. 2019, arXiv Nov. 2017

TensorFlow implementation of AttGAN: Facial Attribute Editing by Only Changing What You Want.

Related

Exemplar Results

  • See results.md for more results, where we try higher resolution and more attributes (all 40 attributes!)

  • Inverting each of 13 attributes, one at a time

    from left to right: Input, Reconstruction, Bald, Bangs, Black_Hair, Blond_Hair, Brown_Hair, Bushy_Eyebrows, Eyeglasses, Male, Mouth_Slightly_Open, Mustache, No_Beard, Pale_Skin, Young

Usage

  • Environment

    • Python 3.6

    • TensorFlow 1.15

    • OpenCV, scikit-image, tqdm, oyaml

    • we recommend Anaconda or Miniconda; you can then create the AttGAN environment with the commands below

      conda create -n AttGAN python=3.6
      
      source activate AttGAN
      
      conda install opencv scikit-image tqdm tensorflow-gpu=1.15
      
      conda install -c conda-forge oyaml
    • NOTICE: if you create a new conda environment, remember to activate it before running any other command

      source activate AttGAN
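    • sanity check (optional): the short script below is a minimal sketch (not part of this repo) that verifies the key packages and the GPU are visible; it uses only standard TF 1.x calls

      # check_env.py - optional environment sanity check (not part of this repo)
      import tensorflow as tf
      import cv2
      import skimage

      print("TensorFlow:", tf.__version__)                 # expect 1.15.x
      print("OpenCV:", cv2.__version__)
      print("scikit-image:", skimage.__version__)
      print("GPU available:", tf.test.is_gpu_available())  # TF 1.x API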
  • Data Preparation

    • Option 1: CelebA-unaligned (higher quality than the aligned data, 10.2GB)

      • download the dataset

      • unzip and process the data

        7z x ./data/img_celeba/img_celeba.7z/img_celeba.7z.001 -o./data/img_celeba/
        
        unzip ./data/img_celeba/annotations.zip -d ./data/img_celeba/
        
        python ./scripts/align.py
    • Option 2: CelebA-HQ (we use the data from CelebAMask-HQ, 3.2GB)

      • CelebAMask-HQ.zip (move to ./data/CelebAMask-HQ.zip): Google Drive or Baidu Netdisk

      • unzip and process the data

        unzip ./data/CelebAMask-HQ.zip -d ./data/
        
        python ./scripts/split_CelebA-HQ.py
  • Run AttGAN

    • training (see examples.md for more training commands)

      # for CelebA
      CUDA_VISIBLE_DEVICES=0 \
      python train.py \
      --load_size 143 \
      --crop_size 128 \
      --model model_128 \
      --experiment_name AttGAN_128
      
      # for CelebA-HQ
      CUDA_VISIBLE_DEVICES=0 \
      python train.py \
      --img_dir ./data/CelebAMask-HQ/CelebA-HQ-img \
      --train_label_path ./data/CelebAMask-HQ/train_label.txt \
      --val_label_path ./data/CelebAMask-HQ/val_label.txt \
      --load_size 128 \
      --crop_size 128 \
      --n_epochs 200 \
      --epoch_start_decay 100 \
      --model model_128 \
      --experiment_name AttGAN_128_CelebA-HQ
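      Outputs for each run (checkpoints, sample images, summaries) are written under ./output/<experiment_name>/; this is inferred from the TensorBoard logdir used below.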
    • testing

      • single attribute editing (inversion)

        # for CelebA
        CUDA_VISIBLE_DEVICES=0 \
        python test.py \
        --experiment_name AttGAN_128
        
        # for CelebA-HQ
        CUDA_VISIBLE_DEVICES=0 \
        python test.py \
        --img_dir ./data/CelebAMask-HQ/CelebA-HQ-img \
        --test_label_path ./data/CelebAMask-HQ/test_label.txt \
        --experiment_name AttGAN_128_CelebA-HQ
      • multiple attribute editing (inversion) example

        # for CelebA
        CUDA_VISIBLE_DEVICES=0 \
        python test_multi.py \
        --test_att_names Bushy_Eyebrows Pale_Skin \
        --experiment_name AttGAN_128
      • attribute sliding example

        # for CelebA
        CUDA_VISIBLE_DEVICES=0 \
        python test_slide.py \
        --test_att_name Pale_Skin \
        --test_int_min -2 \
        --test_int_max 2 \
        --test_int_step 0.5 \
        --experiment_name AttGAN_128
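        This sweeps the Pale_Skin intensity from -2 to 2 in steps of 0.5, i.e. 9 intensity levels per input image (assuming both endpoints are included).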
    • loss visualization

      CUDA_VISIBLE_DEVICES='' \
      tensorboard \
      --logdir ./output/AttGAN_128/summaries \
      --port 6006
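      Then open http://localhost:6006 in a browser to view the training curves.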
    • convert trained model to .pb file

      python to_pb.py --experiment_name AttGAN_128
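      The exported frozen graph can then be loaded for inference with standard TF 1.x calls. Below is a minimal sketch; the .pb path is hypothetical, and the input/output tensor names must be looked up in your own exported graph (e.g. via graph.get_operations()).

      # load_pb_sketch.py - load a frozen graph for inference (illustrative, not part of the repo)
      import tensorflow as tf

      PB_PATH = "./output/AttGAN_128/generator.pb"  # hypothetical path; use your exported file

      with tf.io.gfile.GFile(PB_PATH, "rb") as f:
          graph_def = tf.compat.v1.GraphDef()
          graph_def.ParseFromString(f.read())

      with tf.Graph().as_default() as graph:
          tf.import_graph_def(graph_def, name="")

      # print the first few ops to find the real input/output tensor names
      for op in graph.get_operations()[:10]:
          print(op.name)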
  • Using Trained Weights

  • Example for Custom Dataset

Citation

If you find AttGAN useful in your research work, please consider citing:

@ARTICLE{8718508,
  author={Z. {He} and W. {Zuo} and M. {Kan} and S. {Shan} and X. {Chen}},
  journal={IEEE Transactions on Image Processing},
  title={AttGAN: Facial Attribute Editing by Only Changing What You Want},
  year={2019},
  volume={28},
  number={11},
  pages={5464-5478},
  doi={10.1109/TIP.2019.2916751},
  ISSN={1057-7149},
  month={Nov},
}
Comments
  • TypeError

    Hello, I have downloaded the trained model and am trying to test it, but I get the following error; can you please suggest what went wrong?

    I am testing it on Google Colab and using only images 182000 to 182637. TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string.

    opened by shbnm21 21
  • Unable to use a different number of images

    Hello. I am using the HD-CelebA 384 dataset with the provided 384_shortcut1_inject1_none_hd model. I am trying to use a custom number of images instead of all 202599. I modified the list_attr_celeba.txt file to include only the first 20 images and put those 20 images in ./data/img_crop_celeba/*.jpg. However, this is the error I get:

    TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string.

    I also tried to train with only 20 images and got the same error. I get no errors when running train/test with all 202599 images.

    opened by githubusername001 10
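    For context on the two issues above: tf.io.read_file accepts only string tensors, so this error usually means the filename column of the attribute list was parsed as numbers rather than strings, e.g. when an edited list_attr_celeba.txt header count no longer matches the number of rows and the parsed columns shift; this is an inference from the symptoms, not a confirmed diagnosis. A minimal illustration in TF 1.x, not the repo's data.py:

    # illustrate the ReadFile TypeError (TF 1.x); not the repo's code
    import numpy as np
    import tensorflow as tf

    bad_names = np.array([182000.0, 182001.0], dtype=np.float32)  # filenames parsed as floats
    try:
        tf.io.read_file(tf.convert_to_tensor(bad_names[0]))      # ReadFile expects a string
    except TypeError as e:
        print(e)  # Input 'filename' of 'ReadFile' Op has type float32 ...

    good_names = np.array(["182000.jpg", "182001.jpg"])          # keeping strings avoids the error
    img_bytes = tf.io.read_file(tf.convert_to_tensor(good_names[0]))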
  • Questions about the handling of noise z in DTLCGAN with an encoder attached

    Hi, I am referring to your DTLCGAN code. Following your last reply, I added an encoder to it and have some questions about your code. In your train.py, the z_sample you choose to sample is generated by z_ipt_samples = [np.stack([np.random.normal(size=[z_dim])] * len(c_ipt_sample)) for i in range(15)], which has shape (15, 18, 100).

    So, now that I use an encoder, the noise z (as well as the z_ipt used in training) should be replaced by the encoder's output, right?

    But what does len(c_ipt_sample) mean here? You generated 18 noises for one testing sample? I counted your training samples, and the lowest layer in your decision tree does have 18 images (2*3*3 = 18). So why do you generate the testing samples from the bottom to the top, and not the reverse? How can you be certain that these 18 noises all belong to the same person if you generate from bottom to top?

    Besides, should my encoder do the parallel thing, choosing 18 frontcodes of 18 images and using them to do the sampling? It seems wrong here, because my 18 frontcodes come from 18 different images (or say, 18 different persons), and the resulting sampling tree was weird (some results are OK, and I am confused about them). But if I use the same frontcode of one image (or say, the same person) and copy it 18 times, the training samples are all the same, with no change of attributes at all.

    opened by XijieJiao 8
  • Facial Attribute

    Hello @LynnHo, can you tell us how to do facial attribute extraction? That is, given any input face image, how can we get its 40 facial attributes?

    Thanks.

    opened by xyzdcgan 8
  • The same result for all the attributes

    As stated in the title, I obtain a row of identical images without any changes, regardless of the attribute (column). I use a custom dataset organized like CelebA. Could you advise what may cause this?

    opened by acecreamu 8
  • Attribute Classifier for Editing Accuracy/Error

    I'm curious what you used as the attribute classifier to measure the attribute editing accuracy and preservation error. Also, do you have any plans to release this trained model? Thanks.
    opened by tegillis 7
  • About the performance of the pretrained model

    The pre-trained model you provided does not perform well on the CelebA-HQ dataset, so I would like to know for how many epochs you trained it and on what dataset. Another question: my use case is applying glasses to faces, so would training a new model from scratch on CelebA-HQ help me achieve this? Can we train the model on a single attribute like eyeglasses or a smile? Thanks in advance.

    opened by alan-ai-learner 4
  • Attribute Style Manipulation

    Hi, thank you for sharing this great project. I found your attribute style manipulation particularly meaningful and useful for my recent research. I saw from a previous issue that you have no plan to open-source the code for this part. I have the following questions:

    1. I found nowhere in your paper how you derive θ or the relationship between θ and the image, so how do you get θ in an unsupervised way for each input?
    2. Is this part's idea (and the way you derive θ) based on the paper 'Generative Attribute Controller with Conditional Filtered Generative Adversarial Networks'? (I found that their code is also not open source.)
    3. If I want to implement this part myself, could you give me some hints on where to start, or any papers and sources I could refer to? (There is really very little work on accurate or multiple attribute style manipulation.)

    Thank you!

    opened by XijieJiao 4
  • Cannot get a desired result on the CelebA-HQ dataset

    Hi there,

    Your work is interesting. I have a problem; could you help me figure it out?

    I applied your method to the CelebA-HQ dataset for single-attribute manipulation, but I cannot get the desired result. The result (the attribute of interest is "Smiling") at the 59th training epoch shows no change in the third-column images.

    Thanks and Regards,

    opened by EvaFlower 4
  • Hi, I have a question about the training and the test

    First, I appreciate your excellent work; I have been interested in it since 2018.

    I have a question about training and testing in your work. To clarify, I consider the case where the attribute values are binary.

    For training, the attribute values seem to be -1 or 1 (read 0 or 1, then *2 - 1 -> [-1, 1]) (https://github.com/LynnHo/AttGAN-Tensorflow/blob/master/train.py#L161).

    On the other hand, the range of attributes is [-2, 2] for testing (read 0 or 1, then *2 - 1 -> [-1, 1], finally *2 -> [-2, 2]) (https://github.com/LynnHo/AttGAN-Tensorflow/blob/master/train.py#L246, test_int = 2.0).

    Is it right that you use different values of the attribute vector in training and testing?

    I find that I cannot reproduce the attribute classification results without this trick, but I can reproduce them by using [-2, 2].

    Thanks!

    opened by FriedRonaldo 4
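    The two mappings described in this issue can be summarized in a few lines; this is a paraphrase of the linked train.py logic, not a verbatim excerpt:

    import numpy as np

    att = np.array([0, 1, 0, 1])         # raw CelebA labels in {0, 1}
    train_att = att * 2 - 1              # training uses {-1, 1}

    test_int = 2.0                       # test-time intensity
    test_att = (att * 2 - 1) * test_int  # testing uses {-2, 2}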
  • Why attributes are encoded into [-1, 1] rather than [0, 1]

    @LynnHo Hi, I have been reading your code these days, and I wonder why the attribute labels have to be mapped into [-1, 1] instead of [0, 1]. It seems to be very important and to have some technical reason, because you commented with three exclamation marks on that code. Could you share some experimental knowledge about this?

    opened by ChengBinJin 4
  • Applying your code to datasets from masked face to non-masked face

    I want to apply this code to CelebA with fake masked images, and I want to remove the masks. How can I apply this concept? Can you guide me on whether I can do it with your code? If yes, where should I change the code? Just train.py and data.py?

    opened by Nuha1412 1
  • Style manipulation not robust, very sensitive to varied parameters

    How do you get a good balance among the various hyper-parameters, such as the different loss weights and the learning rate, when style manipulation is adopted? I found the training of the network very unstable.

    I can get style manipulation results on bangs and eyeglasses, but the control is unstable and the sharpness of the images is also affected. The control on eyeglasses affects only the shade; the model has no control over shape or size.

    Apart from the hyper-parameters, are there any other places where problems could arise, such as the training settings?

    Besides, when implementing style manipulation, in addition to the loss of the generated style controller, do you also use the original attribute loss?

    Looking forward to your answer. Thank you!

    opened by jiaoxijie 2
Releases: v1

Owner: Zhenliang He