DCGAN LSGAN WGAN-GP DRAGAN PyTorch

Overview

Recommendation

  • Our GAN-based work for facial attribute editing: AttGAN.

News

  • 8 April 2019: We re-implemented these GANs in TensorFlow 2! The old version is available as v1 or in the "v1" directory.
  • PyTorch Version


GANs - Tensorflow 2

TensorFlow 2 implementations of DCGAN, LSGAN, WGAN-GP, and DRAGAN.

Exemplar results

Fashion-MNIST

DCGAN LSGAN WGAN-GP DRAGAN

CelebA

DCGAN LSGAN
WGAN-GP DRAGAN

Anime

WGAN-GP DRAGAN

Usage

  • Environment

    • Python 3.6

    • TensorFlow 2.2, TensorFlow Addons 0.10.0

    • OpenCV, scikit-image, tqdm, oyaml

    • We recommend Anaconda or Miniconda; you can then create the TensorFlow 2.2 environment with the commands below

      conda create -n tensorflow-2.2 python=3.6
      
      source activate tensorflow-2.2
      
      conda install scikit-image tqdm tensorflow-gpu=2.2
      
      conda install -c conda-forge oyaml
      
      pip install tensorflow-addons==0.10.0
    • NOTICE: if you create a new conda environment, remember to activate it before any other command

      source activate tensorflow-2.2
  • Datasets

  • Examples of training

    • Fashion-MNIST DCGAN

      CUDA_VISIBLE_DEVICES=0 python train.py --dataset=fashion_mnist --epoch=25 --adversarial_loss_mode=gan
    • CelebA DRAGAN

      CUDA_VISIBLE_DEVICES=0 python train.py --dataset=celeba --epoch=25 --adversarial_loss_mode=gan --gradient_penalty_mode=dragan
    • Anime WGAN-GP

      CUDA_VISIBLE_DEVICES=0 python train.py --dataset=anime --epoch=200 --adversarial_loss_mode=wgan --gradient_penalty_mode=wgan-gp --n_d=5
    • see more training examples in commands.sh; a hedged sketch of how the loss-mode flag maps to loss functions follows this list

    • tensorboard for loss visualization

      tensorboard --logdir ./output/fashion_mnist_gan/summaries --port 6006
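
A hedged sketch of what the --adversarial_loss_mode flag selects: the standard sigmoid cross-entropy GAN loss, the least-squares LSGAN loss, or the Wasserstein loss. The function names and exact reductions below (get_adversarial_losses, d_loss_fn, g_loss_fn) are assumptions for illustration, not necessarily this repo's code.

    import tensorflow as tf

    # Illustrative sketch of the losses that --adversarial_loss_mode could map to.
    def get_adversarial_losses(mode):
        if mode == 'gan':  # non-saturating GAN loss (sigmoid cross-entropy on logits)
            bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
            def d_loss_fn(real_logit, fake_logit):
                return (bce(tf.ones_like(real_logit), real_logit) +
                        bce(tf.zeros_like(fake_logit), fake_logit))
            def g_loss_fn(fake_logit):
                return bce(tf.ones_like(fake_logit), fake_logit)
        elif mode == 'lsgan':  # least-squares GAN
            def d_loss_fn(real_logit, fake_logit):
                return (tf.reduce_mean((real_logit - 1.0) ** 2) +
                        tf.reduce_mean(fake_logit ** 2))
            def g_loss_fn(fake_logit):
                return tf.reduce_mean((fake_logit - 1.0) ** 2)
        elif mode == 'wgan':  # Wasserstein loss (critic outputs unbounded scores)
            def d_loss_fn(real_logit, fake_logit):
                return tf.reduce_mean(fake_logit) - tf.reduce_mean(real_logit)
            def g_loss_fn(fake_logit):
                return -tf.reduce_mean(fake_logit)
        else:
            raise ValueError('unknown adversarial_loss_mode: %s' % mode)
        return d_loss_fn, g_loss_fn

The --gradient_penalty_mode flag adds a WGAN-GP or DRAGAN penalty on top of whichever loss is selected; a sketch of the WGAN-GP penalty appears after the comments below.
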
Comments
  • GPU is full

    GPU is full

    Hello, when the code runs, the GPU memory is full. What happened? My Python version is 3.6, TensorFlow version is 1.11, and my GPU is a 1080 Ti, thanks! The error is as follows:

    An error occurred while starting the kernel
      2019 08:40:45.229298: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
      2019 08:40:45.601632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 0 with properties: name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582 pciBusID: 0000:65:00.0 totalMemory: 11.00GiB freeMemory: 9.10GiB
      2019 08:40:45.603929: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0
      2019 08:40:46.556261: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:
      2019 08:40:46.558110: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] 0
      2019 08:40:46.558407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0: N
      2019 08:40:46.558836: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8789 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:65:00.0, compute capability: 6.1)

    opened by lixingbao 9
  • About problem with generating image size

    About problem with generating image size

    Hello, can your code only generate 64×64 images? Can it generate images of a specified size, for example 256×256? If so, what parameters need to be modified? Thank you! (A hedged rule-of-thumb sketch follows the comments list below.)

    opened by lixingbao 6
  • WGAN-GP does not work!!!

    WGAN-GP does not work!!!

    I have updated the code from TensorFlow 2.0-alpha to TensorFlow 2.0, and everything works well except for WGAN-GP (it works in tf2.0-alpha). In tf2.0, the gradient penalty seems very unstable, but I cannot find the problem. Can anybody help? I would be grateful. (A hedged gradient-penalty sketch follows the comments list below.)

    help wanted 
    opened by LynnHo 3
  • how about 3D data?

    how about 3D data?

    Hi!

    The cartoon faces' original size is [96, 96, 3], where 3 means three RGB channels. But if I have grayscale data with 3 slices, i.e. the size is [121, 145, 3], can I simply use this code? If not, what should I change based on this code?

    Thanks for your work! Look forward to your response.

    opened by KakaVlasic 3
  • c_iter isn't used

    c_iter isn't used

    c_iter is defined but not used in all of the WGAN files. What is the correct behaviour, i.e. should the critic be optimised heavily at the start or not?

    Also, can you confirm that you use a learning rate parameter of 0.0002, regardless of whether RMSProp or Adam is used as the optimiser?

    opened by davidADSP 3
  • About n_critic = 5

    About n_critic = 5

    I used the code to train on cartoon pictures with WGAN-GP. I don't know what n_critic = 5 means, or why you set it. Thanks. (A hedged sketch of the n_critic loop follows the comments list below.)

    opened by tuoniaoren 3
  • Error while using celeba dataset

    Error while using celeba dataset

    I am getting this error while running train.py: TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string. Please help with this. Thanks in advance.

    opened by yksolanki9 2
  • Where do you freeze the gradient descent?

    Where do you freeze the gradient descent?

    Hello, I am confused about how you freeze gradient descent for the other model. When training d_step, I suppose the generator should be frozen, since f_logit is produced by the generator and used in d_loss; similarly, when training g_step, I suppose the discriminator should be frozen, since f_logit depends on the discriminator.

    However, I do not see where you stop those gradients from flowing to the unwanted part, either the generator or the discriminator. Would you please provide some hints? Thank you. (A hedged sketch of the separate train steps follows the comments list below.)

    opened by ybsave 2
  • do you try to use Resnet in wgan-gp?

    do you try to use Resnet in wgan-gp?

    Have you compared the DCGAN-style network structure with a ResNet structure in WGAN-GP? Would a ResNet give better results than the DCGAN structure?

    opened by tuoniaoren 2
  • running question

    running question

    Does the program ever stop without any error after running for some time, with the GPU no longer working? I changed the value of num_threads (from 16 to 10) and ran it again. I don't know whether the value was too high.

    opened by tuoniaoren 2
  • License

    License

    Hi Zhenliang He, I wonder whether you would be willing to please license this code under an open source license? If so please add a license, or if not please just close this request. Thanks, Connelly

    opened by connellybarnes 2
  • A problem for your DCGAN architecture

    A problem for your DCGAN architecture

    Hi, your work is really interesting, but I have found something in your DCGAN that I don't understand. You generate noise twice when training the discriminator and the generator in each iteration, like the blue lines in the following picture. In soumith's code (including some official DCGAN code), the noise is only generated once: https://github.com/soumith/dcgan.torch. Could you please tell me the reason?

    image

    opened by RayGuo-C 1
  • NameError: name 'shape' is not defined

    NameError: name 'shape' is not defined

    Traceback (most recent call last):
      File "D:/github/DCGAN-LSGAN-WGAN-GP-DRAGAN-Tensorflow-2-master/DCGAN-LSGAN-WGAN-GP-DRAGAN-Tensorflow-2-master/train.py", line 91, in
        G = module.ConvGenerator(input_shape=(1, 1, args.z_dim), output_channels=shape[-1], n_upsamplings=n_G_upsamplings, name='G_%s' % args.dataset)
    NameError: name 'shape' is not defined

    Please tell me why.

    opened by Tonyztj 0
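
On "About problem with generating image size": the examples above produce 64×64 images, and a larger output such as 256×256 mainly needs more upsampling stages in the generator (plus a matching number of downsampling stages in the discriminator and a data pipeline that crops/resizes to the new size). As a rough rule of thumb, assuming a DCGAN-style generator that starts from a 4×4 feature map and doubles the resolution with each transposed convolution (a common design that may differ in detail from this repo), the required number of upsamplings is log2(size / 4); the helper below is purely illustrative.

    import math

    def n_upsamplings_for(output_size, base_size=4):
        """Number of 2x upsampling stages from a base_size x base_size feature map."""
        n = math.log2(output_size / base_size)
        assert n.is_integer(), "output_size must be base_size * 2**k"
        return int(n)

    print(n_upsamplings_for(64))   # 4
    print(n_upsamplings_for(256))  # 6

At 256×256 the memory use grows quickly, so the batch size usually has to be reduced as well.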
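
On "WGAN-GP does not work!!!": in TensorFlow 2 the gradient penalty needs its own GradientTape that explicitly watches the interpolated samples, and the penalty has to be computed inside the discriminator step so it can be differentiated again when the discriminator gradients are taken. A minimal sketch, where the function name and arguments are assumptions for illustration:

    import tensorflow as tf

    def gradient_penalty(D, real, fake):
        """WGAN-GP penalty: (||grad_x D(x_hat)||_2 - 1)^2 on random interpolates."""
        batch_size = tf.shape(real)[0]
        alpha = tf.random.uniform([batch_size, 1, 1, 1], 0.0, 1.0)
        x_hat = real + alpha * (fake - real)  # interpolate real and fake samples
        with tf.GradientTape() as tape:
            tape.watch(x_hat)                 # x_hat is a plain Tensor, so watch it explicitly
            d_hat = D(x_hat, training=True)
        grads = tape.gradient(d_hat, x_hat)
        norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]) + 1e-12)
        return tf.reduce_mean((norm - 1.0) ** 2)

The small constant under the square root avoids NaN gradients when the gradient norm is exactly zero; instability can also come from batch normalization in the critic, which the WGAN-GP paper advises replacing with layer normalization or no normalization.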
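
On "c_iter isn't used" and "About n_critic = 5": in the WGAN and WGAN-GP papers the critic (discriminator) is updated several times, typically 5, for every generator update, so that the critic stays close to optimal while the generator learns; that is what the --n_d flag in the training examples controls. A hedged sketch of the outer loop, where dataset, train_D, and train_G are hypothetical stand-ins for the real data pipeline and per-step helpers:

    def train_loop(dataset, train_D, train_G, n_d=5):
        """Illustrative loop: n_d critic steps for every generator step."""
        for step, real_batch in enumerate(dataset):
            d_loss = train_D(real_batch)      # update the critic on every batch
            if (step + 1) % n_d == 0:
                g_loss = train_G()            # update the generator every n_d batches

Some reference WGAN implementations also give the critic many extra iterations at the very start of training and every few hundred generator steps; a c_iter-style constant that is defined but never used would then just mean that extra schedule is not applied, and the critic always gets the plain n_d updates.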
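
On "Where do you freeze the gradient descent?": in TensorFlow 2 no explicit freezing is needed, because each training step asks its GradientTape for gradients only with respect to one model's trainable_variables and applies them with that model's optimizer; the other model's weights are never updated in that step even though the loss flows through it. A minimal sketch, where the function names, signatures, and the noise shape are illustrative assumptions:

    import tensorflow as tf

    @tf.function
    def train_D(real, G, D, d_loss_fn, d_optimizer, z_dim=128):
        z = tf.random.normal([tf.shape(real)[0], 1, 1, z_dim])
        with tf.GradientTape() as tape:
            fake = G(z, training=True)
            d_loss = d_loss_fn(D(real, training=True), D(fake, training=True))
        # Gradients are taken w.r.t. D's variables only, so G stays untouched here.
        grads = tape.gradient(d_loss, D.trainable_variables)
        d_optimizer.apply_gradients(zip(grads, D.trainable_variables))
        return d_loss

    @tf.function
    def train_G(G, D, g_loss_fn, g_optimizer, batch_size=64, z_dim=128):
        z = tf.random.normal([batch_size, 1, 1, z_dim])
        with tf.GradientTape() as tape:
            fake = G(z, training=True)
            g_loss = g_loss_fn(D(fake, training=True))
        # Likewise, only G's variables receive updates; D stays fixed in this step.
        grads = tape.gradient(g_loss, G.trainable_variables)
        g_optimizer.apply_gradients(zip(grads, G.trainable_variables))
        return g_loss

This layout also touches the "noise generated twice" question above: each step samples its own z, which is a common and valid choice; reusing one z per iteration works as well.
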
Releases(v1)
Owner
Zhenliang He