AttGAN: Facial Attribute Editing by Only Changing What You Want (IEEE TIP 2019)

Overview

News

  • 11 Jan 2020: We cleaned up the code to make it more readable! The old version is here: v1.

     


AttGAN
TIP Nov. 2019, arXiv Nov. 2017

TensorFlow implementation of AttGAN: Facial Attribute Editing by Only Changing What You Want.

Related

Exemplar Results

  • See results.md for more results; there we try higher resolutions and more attributes (all 40 attributes!)

  • Inverting 13 attributes, one at a time

    from left to right: Input, Reconstruction, Bald, Bangs, Black_Hair, Blond_Hair, Brown_Hair, Bushy_Eyebrows, Eyeglasses, Male, Mouth_Slightly_Open, Mustache, No_Beard, Pale_Skin, Young

Usage

  • Environment

    • Python 3.6

    • TensorFlow 1.15

    • OpenCV, scikit-image, tqdm, oyaml

    • we recommend Anaconda or Miniconda; you can then create the AttGAN environment with the commands below

      conda create -n AttGAN python=3.6
      
      source activate AttGAN
      
      conda install opencv scikit-image tqdm tensorflow-gpu=1.15
      
      conda install -c conda-forge oyaml
    • NOTICE: if you create a new conda environment, remember to activate it before any other command

      source activate AttGAN
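    • to verify the setup before moving on, a quick sanity check (a generic snippet, not part of this repo):

      python -c "import tensorflow as tf; print(tf.__version__); print(tf.test.is_gpu_available())"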
  • Data Preparation

    • Option 1: CelebA-unaligned (higher quality than the aligned data, 10.2GB)

      • download the dataset

      • unzip and process the data

        7z x ./data/img_celeba/img_celeba.7z/img_celeba.7z.001 -o./data/img_celeba/
        
        unzip ./data/img_celeba/annotations.zip -d ./data/img_celeba/
        
        python ./scripts/align.py
    • Option 2: CelebA-HQ (we use the data from CelebAMask-HQ, 3.2GB)

      • CelebAMask-HQ.zip (move to ./data/CelebAMask-HQ.zip): Google Drive or Baidu Netdisk

      • unzip and process the data

        unzip ./data/CelebAMask-HQ.zip -d ./data/
        
        python ./scripts/split_CelebA-HQ.py
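    • for Option 2, you can sanity-check the prepared labels before training; the paths below are the ones assumed by the CelebA-HQ commands in the next section:

      wc -l ./data/CelebAMask-HQ/train_label.txt \
            ./data/CelebAMask-HQ/val_label.txt \
            ./data/CelebAMask-HQ/test_label.txt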
  • Run AttGAN

    • training (see examples.md for more training commands)

      # for CelebA
      CUDA_VISIBLE_DEVICES=0 \
      python train.py \
      --load_size 143 \
      --crop_size 128 \
      --model model_128 \
      --experiment_name AttGAN_128
      
      # for CelebA-HQ
      CUDA_VISIBLE_DEVICES=0 \
      python train.py \
      --img_dir ./data/CelebAMask-HQ/CelebA-HQ-img \
      --train_label_path ./data/CelebAMask-HQ/train_label.txt \
      --val_label_path ./data/CelebAMask-HQ/val_label.txt \
      --load_size 128 \
      --crop_size 128 \
      --n_epochs 200 \
      --epoch_start_decay 100 \
      --model model_128 \
      --experiment_name AttGAN_128_CelebA-HQ
    • testing

      • single attribute editing (inversion)

        # for CelebA
        CUDA_VISIBLE_DEVICES=0 \
        python test.py \
        --experiment_name AttGAN_128
        
        # for CelebA-HQ
        CUDA_VISIBLE_DEVICES=0 \
        python test.py \
        --img_dir ./data/CelebAMask-HQ/CelebA-HQ-img \
        --test_label_path ./data/CelebAMask-HQ/test_label.txt \
        --experiment_name AttGAN_128_CelebA-HQ
      • multiple attribute editing (inversion) example

        # for CelebA
        CUDA_VISIBLE_DEVICES=0 \
        python test_multi.py \
        --test_att_names Bushy_Eyebrows Pale_Skin \
        --experiment_name AttGAN_128
      • attribute sliding example

        # for CelebA
        CUDA_VISIBLE_DEVICES=0 \
        python test_slide.py \
        --test_att_name Pale_Skin \
        --test_int_min -2 \
        --test_int_max 2 \
        --test_int_step 0.5 \
        --experiment_name AttGAN_128
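        with these settings, the slide sweeps the attribute intensity over nine values; a minimal sketch of the enumeration (assuming the upper bound is inclusive):

        python -c "import numpy as np; print(np.arange(-2.0, 2.0 + 0.5, 0.5))"
        # [-2.  -1.5 -1.  -0.5  0.   0.5  1.   1.5  2. ]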
    • loss visualization

      CUDA_VISIBLE_DEVICES='' \
      tensorboard \
      --logdir ./output/AttGAN_128/summaries \
      --port 6006
    • convert trained model to .pb file

      python to_pb.py --experiment_name AttGAN_128
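      the exported graph can then be loaded for inference with TF 1.x; a minimal sketch (the .pb path and the tensor names are assumptions, so inspect the exported graph for the real ones):

      import tensorflow as tf

      # Assumed output location; check where to_pb.py actually writes the file.
      pb_path = './output/AttGAN_128/generator.pb'
      with tf.io.gfile.GFile(pb_path, 'rb') as f:
          graph_def = tf.compat.v1.GraphDef()
          graph_def.ParseFromString(f.read())

      with tf.Graph().as_default() as graph:
          tf.import_graph_def(graph_def, name='')
          # Print the first few node names to locate the real input/output tensors.
          for node in graph_def.node[:10]:
              print(node.name)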
  • Using Trained Weights

  • Example for Custom Dataset

Citation

If you find AttGAN useful in your research work, please consider citing:

@ARTICLE{8718508,
  author={Z. {He} and W. {Zuo} and M. {Kan} and S. {Shan} and X. {Chen}},
  journal={IEEE Transactions on Image Processing},
  title={AttGAN: Facial Attribute Editing by Only Changing What You Want},
  year={2019},
  volume={28},
  number={11},
  pages={5464-5478},
  keywords={Face;Facial features;Task analysis;Decoding;Image reconstruction;Hair;Gallium nitride;Facial attribute editing;attribute style manipulation;adversarial learning},
  doi={10.1109/TIP.2019.2916751},
  ISSN={1057-7149},
  month={Nov},
}
Comments
  • TypeError

    Hello, I have downloaded the trained model and am trying to test it, but I am getting the following error. Can you please suggest what went wrong?

    I am testing it on Google Colab and using only images 182000 to 182637. TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string.

    opened by shbnm21 21
  • Unable to use different number of images

    Hello. I am using the HD-CelebA 384 dataset with the provided 384_shortcut1_inject1_none_hd model. I am trying to use a custom number of images instead of all 202599 images. I tried the following: modify the list_attr_celeba.txt file to include only the first 20 images, and put these 20 images in ./data/img_crop_celeba/*.jpg. However, this is the error I get:

    TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string.

    I also tried to train with only 20 images and get the same error. I get no errors when running train/test for all 202599 images.

    opened by githubusername001 10
  • Questions about the handling of noise z in DTLCGAN with an encoder attached

    Hi, I am referring to your DTLCGAN code. According to your last reply, I added an encoder to it and have some questions about your code. In your train.py, the z_sample you choose to sample is generated by: z_ipt_samples = [np.stack([np.random.normal(size=[z_dim])] * len(c_ipt_sample)) for i in range(15)], which is of shape (15, 18, 100).

    So, now that I use an encoder, the noise z (as well as the z_ipt used in training) should be replaced by the encoder's output, right?

    But what does len(c_ipt_sample) mean here? You generated 18 noises for one testing sample? I counted your training samples, and the lowest layer in your decision tree does have 18 images (2×3×3=18). So why do you generate the testing samples from bottom to top, and not the reverse? How can you be certain that these 18 noises all belong to the same person, since you generated from bottom to top?

    Besides, should my encoder do the parallel thing, choosing 18 front codes from 18 images and using them to do the sampling? It seems wrong here, because my 18 front codes come from 18 different images (or say 18 different persons), and the resulting sampling tree was weird (some are OK, and I am confused about them). But if I use the same front code of one image (or say the same person) and copy it 18 times, the training samples are all the same, with no change of attributes at all.

    opened by XijieJiao 8
  • Facial Attribute

    Hello @LynnHo, can you tell us how to do facial attribute extraction? That is, given any input face image, how can we get the 40 facial attributes from it?

    Thanks.

    opened by xyzdcgan 8
  • The same result for all the attributes

    As written in the title, I obtain a row of identical images without any changes, regardless of the attribute (column). I use a custom dataset organized like CelebA. Could you give any advice on what may cause this?

    opened by acecreamu 8
  • Attribute Classifier for Editing Accuracy/Error

    I'm curious what you used as the attribute classifier to measure the attribute editing accuracy and preservation error. Also, do you have any plans to release this trained model? Thanks.
    opened by tegillis 7
  • About the performance of the pretrained model

    The pre-trained model you provided does not perform well on the CelebA-HQ dataset, so I have a question: for how many epochs did you train it, and on what dataset? Another question: my use case is applying glasses to faces, so I need to know whether training a new model from scratch on CelebA-HQ would help me achieve this task. Can we train the model on a single attribute like eyeglasses or a smile? Thanks in advance.

    opened by alan-ai-learner 4
  • Attribute Style Manipulation

    Hi, thank you for sharing this great project. I found your attribute style manipulation particularly meaningful and useful for my recent research. I saw from a previous issue that you have no plan to open-source the code for this part. I have the following questions:

    1. I found nowhere in your paper how you derive θ or the relationship between θ and the image, so how do you get θ in an unsupervised way for each input?
    2. Is this part's idea (and the way you derived θ) based on the paper 'Generative Attribute Controller with Conditional Filtered Generative Adversarial Networks'? (I found that their code is also not open source.)
    3. If I want to implement this part myself, could you give me some hints on where to start, or any papers and sources I could refer to (there are really very few works on accurate or multiple-attribute style manipulation)?

    Thank you!

    opened by XijieJiao 4
  • Cannot get a desired result on CelebA-HQ dataset

    Hi there,

    Your work is interesting. I have a problem; could you help me figure it out?

    I applied your method on the CelebA-HQ dataset for single-attribute manipulation, but I cannot get the desired result. The result (the attribute of interest is "Smiling") at the 59th training epoch is shown as follows; there is no change in the third-column images.

    Thanks and Regards,

    opened by EvaFlower 4
  • Hi, I have a question about the training and the test

    First, I appreciate your excellent work and have been interested in your work since 2018.

    I have a question about training and testing in your work. In advance, I clarify that I consider the case where the attribute values are binary.

    For training, the attribute values seem to be -1 or 1 (read 0 or 1, then *2 - 1 -> [-1, 1]). (https://github.com/LynnHo/AttGAN-Tensorflow/blob/master/train.py#L161)

    On the other hand, the range of attributes is [-2, 2] for testing (read 0 or 1, then *2 - 1 -> [-1, 1], finally *2 -> [-2, 2]). (https://github.com/LynnHo/AttGAN-Tensorflow/blob/master/train.py#L246, test_int = 2.0)

    Is it right that you use different values of the attribute vector in training and testing?

    I just find that I cannot reproduce the result of attribute classification without this trick. However, I can reproduce the result by using [-2, 2].

    Thanks!

    opened by FriedRonaldo 4
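    The label mapping described in the comment above, as a minimal sketch (the values are illustrative):

      import numpy as np

      raw = np.array([0, 1, 1, 0])          # attribute labels as read from the file
      train_att = raw * 2 - 1               # training values: {-1, 1}
      test_int = 2.0
      test_att = (raw * 2 - 1) * test_int   # testing values: {-2, 2}
      print(train_att, test_att)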
  • Why attributes are encoded into [-1, 1] not [0, 1]

    @LynnHo Hi, I have been reading your code these days, and I wonder why the attribute labels have to be mapped into [-1, 1] instead of [0, 1]. It seems to be very important and to have some technical reason, because you commented with three exclamation marks on that code. Could you share some experimental knowledge about this?

    opened by ChengBinJin 4
  • Applying your code to datasets from masked face to non-masked face

    I want to apply this code on CelebA with fake masked images, and I want to remove the mask. How can I apply this concept? Can you guide me on whether or not I can do it using your code? If yes, where should I change the code: just train.py and data.py?

    opened by Nuha1412 1
  • Style manipulation not robust, very sensitive to varied parameters

    How do you get a good balance among varied hyper-parameters like the different loss weights and the learning rate when style manipulation is adopted? I found the training of the network very unstable.

    I can get style manipulation results on bangs and eyeglasses, but the control is unstable and the sharpness of the images is also affected. The control on eyeglasses covers only the shade; the model has no control over shape and size.

    Apart from the hyper-parameters, are there any other places where there could be problems, such as the training settings?

    Besides, when implementing style manipulation, apart from the loss on the generated style controller, do you also use the original attribute loss?

    Looking forward to your answer. Thank you!

    opened by jiaoxijie 2
Releases: v1
Owner: Zhenliang He