Image_WGAN_GP

Overview

This repository implements WGAN-GP (Wasserstein GAN with gradient penalty) and uses it to generate MNIST and FashionMNIST images. The datasets are downloaded automatically when you run main.py.

requirements

Before you run the code, install the following packages in your environment. They are listed in requirements.txt:

torch>=0.4.0
torchvision
matplotlib
numpy
scipy
pillow
urllib3
scikit-image

install

pip install -r requirements.txt

Prepare the dataset

Before you run the code, prepare the dataset and replace ROOT_PATH in main.py with your own path:

ROOT_PATH = '../..'  # for Linux
ROOT_PATH = 'D:/code/Image_WGAN_GP'  # for Windows; change it to your own working directory

We provide the mnist, fashionmnist and cifar10 datasets, but you can download others when you run the code. For example, to use cifar100, add the following code to main.py; you will also need to add 'cifar100' to the --dataset choices in config.py and modify the models accordingly (we will finish this later).

# Add this branch to the dataset-selection chain in main.py
elif opt.dataset == 'cifar100':
    os.makedirs(ROOT_PATH + "/data/cifar100", exist_ok=True)
    dataloader = torch.utils.data.DataLoader(
        datasets.CIFAR100(
            ROOT_PATH + "/data/cifar100",
            train=True,
            download=True,
            transform=transforms.Compose(
                # CIFAR-100 images are RGB; per-channel normalization, e.g.
                # transforms.Normalize([0.5] * 3, [0.5] * 3), may be preferable here.
                [transforms.Resize(opt.img_size), transforms.ToTensor(), transforms.Normalize([0.5], [0.5])]
            ),
        ),
        batch_size=opt.batch_size,
        shuffle=True,
    )

The data will be saved in the data directory.

Training

Using the mnist dataset:

python main.py -data 'mnist' -n_epochs 300

Using the fashionmnist dataset:

python main.py -data 'fashionmnist' -n_epochs 300

The generated images will be saved in the images directory.
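
Under the hood, WGAN-GP training alternates several critic updates with one generator update and adds a gradient penalty to the critic loss (see the Wasserstein GAN GP section below). The snippet below is a minimal, illustrative sketch of one such iteration; the model, optimizer, and option names are assumptions rather than the repository's actual identifiers, and compute_gradient_penalty is the helper sketched later in this README.

# Illustrative WGAN-GP training iteration. `generator`, `discriminator`, `optimizer_G`,
# `optimizer_D`, `dataloader` and `opt` (from config.py) are assumed to exist;
# this is a sketch, not the repository's exact training loop.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
lambda_gp = 10  # gradient-penalty weight used in the WGAN-GP paper

for i, (real_imgs, _) in enumerate(dataloader):
    real_imgs = real_imgs.to(device)

    # ---- Discriminator (critic) update ----
    optimizer_D.zero_grad()
    z = torch.randn(real_imgs.size(0), opt.latent_dim, device=device)
    fake_imgs = generator(z)
    gp = compute_gradient_penalty(discriminator, real_imgs, fake_imgs.detach(), device)
    d_loss = -discriminator(real_imgs).mean() + discriminator(fake_imgs.detach()).mean() + lambda_gp * gp
    d_loss.backward()
    optimizer_D.step()

    # ---- Generator update every opt.n_critic critic steps ----
    if i % opt.n_critic == 0:
        optimizer_G.zero_grad()
        g_loss = -discriminator(generator(z)).mean()
        g_loss.backward()
        optimizer_G.step()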

Training parameters

You can see the details in config.py. A hedged sketch of how these options might be declared is shown after the list.

"--n_epochs", "number of epochs of training"

"--batch_size", "size of the batches"

"--lr","adam: learning rate"

"--b1","adam: decay of first order momentum of gradient"

"--b2", "adam: decay of first order momentum of gradient"

"--n_cpu", "number of cpu threads to use during batch generation"

"--latent_dim", "dimensionality of the latent space"

"--img_size", "size of each image dimension"

"--channels","number of image channels"

"--n_critic", "number of training steps for discriminator per iter"

"--clip_value","lower and upper clip value for disc. weights"

"--sample_interval", "interval betwen image samples"

'--exp_name', 'output folder name; will be automatically generated if not specified'

'--pretrain_iterations', 'iterations for pre-training'

'--pretrain', 'if performing pre-training'

'--dataset', '-data', choices=['mnist', 'fashionmnist', 'cifar10']
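
For reference, these options could be declared with argparse roughly as follows. The flag names and help strings come from the list above, while the default values are illustrative assumptions rather than the repository's actual settings.

# Hypothetical sketch of config.py; defaults are assumptions, not the real values.
import argparse

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--n_epochs", type=int, default=300, help="number of epochs of training")
    parser.add_argument("--batch_size", type=int, default=64, help="size of the batches")
    parser.add_argument("--lr", type=float, default=0.0002, help="adam: learning rate")
    parser.add_argument("--b1", type=float, default=0.5, help="adam: decay of first order momentum of gradient")
    parser.add_argument("--b2", type=float, default=0.999, help="adam: decay of second order momentum of gradient")
    parser.add_argument("--n_cpu", type=int, default=8, help="number of cpu threads to use during batch generation")
    parser.add_argument("--latent_dim", type=int, default=100, help="dimensionality of the latent space")
    parser.add_argument("--img_size", type=int, default=28, help="size of each image dimension")
    parser.add_argument("--channels", type=int, default=1, help="number of image channels")
    parser.add_argument("--n_critic", type=int, default=5, help="number of training steps for discriminator per iter")
    parser.add_argument("--clip_value", type=float, default=0.01, help="lower and upper clip value for disc. weights")
    parser.add_argument("--sample_interval", type=int, default=400, help="interval between image samples")
    parser.add_argument("--exp_name", type=str, default=None, help="output folder name; generated if not specified")
    parser.add_argument("--pretrain_iterations", type=int, default=0, help="iterations for pre-training")
    parser.add_argument("--pretrain", action="store_true", help="if performing pre-training")
    parser.add_argument("--dataset", "-data", choices=["mnist", "fashionmnist", "cifar10"], default="mnist")
    return parser.parse_args()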

Save params

The parameters will be saved in the results directory, and you can change the saving directory name in config.py.
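
If "parameters" here means the model weights, saving them typically looks like the sketch below; the directory layout and file names are assumptions for illustration, not the repository's actual code.

# Hypothetical sketch: saving generator/discriminator weights under results/<exp_name>.
# ROOT_PATH, opt, generator and discriminator are assumed to come from main.py/config.py.
import os
import torch

save_dir = os.path.join(ROOT_PATH, "results", opt.exp_name or "default_run")
os.makedirs(save_dir, exist_ok=True)
torch.save(generator.state_dict(), os.path.join(save_dir, "generator.pth"))
torch.save(discriminator.state_dict(), os.path.join(save_dir, "discriminator.pth"))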

Wasserstein GAN GP

Improved Training of Wasserstein GANs

Authors

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron Courville

Abstract

Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.

[Paper]
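
In PyTorch, the penalty on the norm of the critic's gradient can be written compactly. Below is a minimal sketch of the penalty term, evaluated at random interpolations between real and generated samples; it illustrates the technique from the paper rather than reproducing this repository's exact implementation.

# A minimal sketch of the gradient penalty ((||grad_D(x_hat)||_2 - 1)^2).mean(),
# evaluated at random interpolations x_hat between real and generated samples.
import torch

def compute_gradient_penalty(D, real_samples, fake_samples, device="cpu"):
    # One random interpolation coefficient per sample
    alpha = torch.rand(real_samples.size(0), 1, 1, 1, device=device)
    interpolates = (alpha * real_samples + (1 - alpha) * fake_samples).requires_grad_(True)
    d_interpolates = D(interpolates)
    # Gradient of the critic output with respect to the interpolated inputs
    gradients = torch.autograd.grad(
        outputs=d_interpolates,
        inputs=interpolates,
        grad_outputs=torch.ones_like(d_interpolates),
        create_graph=True,
        retain_graph=True,
        only_inputs=True,
    )[0]
    gradients = gradients.view(gradients.size(0), -1)
    return ((gradients.norm(2, dim=1) - 1) ** 2).mean()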


Owner

Lieon — deep learning, anomaly detection, time series, generative adversarial networks.