StyleGanJittor: a Jittor 64×64 implementation of StyleGAN (Tsinghua University computer graphics course)

Overview

This project is a reimplementation of StyleGAN based on Python 3.8 + Jittor (计图) and the open-source StyleGAN-PyTorch project. The model was trained on the color_symbol_7k dataset for 40,000 iterations and can generate 64×64 symbolic images.

StyleGAN is a generative adversarial network for image generation proposed by NVIDIA in 2018. According to the paper, the generator improves the state of the art in traditional distribution-quality metrics, leads to demonstrably better interpolation properties, and better disentangles the latent factors of variation. Its main improvement over previous models lies in the generator's structure: an eight-layer mapping network, the AdaIN module, and per-layer noise inputs that introduce stochastic variation. These components let the generator decouple the global features of an image from its local details, yielding better synthesis results, and they also give the network better latent-space interpolation behaviour.

(Karras T, Laine S, Aila T. A style-based generator architecture for generative adversarial networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 4401-4410.)

The training process is shown in Video1trainingResult.avi; Video2GenerationResult1.avi and Video3GenerationResult2.avi show samples produced by the trained model.

The Checkpoint folder is where the trained StyleGAN models are stored; because they take up a lot of storage space, the models have been deleted from the repository. The data folder contains the color_symbol_7k dataset. The dataset is processed by the prepare_data script to obtain an LMDB database for accelerated training, and the database is stored in the mdb folder. The sample folder contains images generated during training, which can be used to trace the training process, and the generateSample folder contains sample images generated by calling StyleGenerator after training is completed.
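
For reference, the LMDB preparation step can look roughly like the sketch below. The key layout '<resolution>-<index>' and the 'length' entry follow the open-source StyleGAN-PyTorch project this repository builds on; the function name and parameters are illustrative, not the actual prepare_data code:

    import lmdb
    from io import BytesIO
    from PIL import Image

    def prepare_lmdb(image_paths, out_path, resolution=64):
        # Resize every image and store its encoded bytes in an LMDB database,
        # keyed by resolution and zero-padded index.
        env = lmdb.open(out_path, map_size=1024 ** 4)
        with env.begin(write=True) as txn:
            for i, p in enumerate(image_paths):
                img = Image.open(p).convert('RGB').resize((resolution, resolution))
                buf = BytesIO()
                img.save(buf, format='jpeg', quality=100)
                txn.put(f'{resolution}-{str(i).zfill(5)}'.encode('utf-8'), buf.getvalue())
            # Total number of samples, read back later by the dataset class.
            txn.put(b'length', str(len(image_paths)).encode('utf-8'))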

The MultiResolutionDataset class for reading the LMDB database is defined in dataset.py, the model reproduced with Jittor is defined in model.py, train.py is the model training script, and VideoWrite.py converts the generated images into a video.
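
The reading side mirrors the preparation step. A minimal sketch of what an LMDB-backed MultiResolutionDataset typically looks like (again following the StyleGAN-PyTorch reference; the actual dataset.py may differ in details):

    import lmdb
    from io import BytesIO
    from PIL import Image

    class MultiResolutionDataset:
        def __init__(self, path, transform=None, resolution=8):
            # Open the database read-only; the 'length' entry stores the sample count.
            self.env = lmdb.open(path, readonly=True, lock=False, readahead=False)
            self.resolution = resolution
            self.transform = transform
            with self.env.begin(write=False) as txn:
                self.length = int(txn.get(b'length').decode('utf-8'))

        def __len__(self):
            return self.length

        def __getitem__(self, index):
            key = f'{self.resolution}-{str(index).zfill(5)}'.encode('utf-8')
            with self.env.begin(write=False) as txn:
                img_bytes = txn.get(key)
            img = Image.open(BytesIO(img_bytes))
            return self.transform(img) if self.transform else img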

Environment and execution instructions

Project dependencies include jittor, lmdb, PIL, argparse, tqdm, and some other common Python libraries.

First, unzip the dataset in the data folder. The model can then be trained by running the following command in a terminal in the project environment:

    python train.py --mixing "./mdb/color_symbol_7k_mdb"

Images can be generated with the trained model, and their differences compared, by running:

    python generate.py --size 64 --n_row 3 --n_col 5 --path './checkpoint/040000.model'

You can adjust the model training parameters by referring to the code in the args section of train.py and generate.py.

Details

The first step is dataset preparation, using an LMDB database to accelerate training. For model construction, I referred to the model structure from the original paper (first figure below) and to the module decomposition used in the open-source PyTorch implementation (second figure below): the original model is split into sub-modules such as EqualConv2d, EqualLinear, StyleConvBlock and ConvBlock, which are then assembled into the complete StyleGenerator and Discriminator.

[Figure: StyleGAN generator structure from the original paper]

[Figure: module decomposition used for this reimplementation]
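
As an illustration of the "Equal" modules, here is a minimal sketch of the equalized-learning-rate idea behind a layer such as EqualLinear, written with Jittor. The N(0, 1) initialization and the runtime He-style scaling follow the StyleGAN/PGGAN papers; the exact layer in model.py may differ:

    import math
    import jittor as jt
    from jittor import nn

    class EqualLinear(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            # Weights are drawn from N(0, 1); the He-style constant is applied
            # at runtime instead of at initialization, equalizing the learning rate.
            self.weight = jt.randn(in_dim, out_dim)
            self.bias = jt.zeros((out_dim,))
            self.scale = math.sqrt(2 / in_dim)

        def execute(self, x):
            return jt.matmul(x, self.weight * self.scale) + self.bias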

For model building and training, I followed the conversion tutorial provided by the teaching assistant on the official website to translate torch methods into their Jittor equivalents, and worked out the remaining pieces myself. Jittor's documentation is still relatively incomplete, and some methods differ from PyTorch; in those cases I fell back to lower-level operations.

For example, the discriminator part of the model needs jt.sqrt(out.var(0, unbiased=False) + 1e-8) to compute the standard deviation of the tensor along the batch dimension, but the Jittor framework has no corresponding var() method, so I implemented the same function with ((out - out.mean(0)).sqr().sum(0) + 1e-8).sqrt().
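
As a standalone helper, this workaround might look like the sketch below. Note that var(0, unbiased=False) is the mean, not the sum, of the squared deviations, so a strictly equivalent low-level form averages over the batch dimension (the helper name is illustrative):

    import jittor as jt

    def batch_stddev(out, eps=1e-8):
        # Biased variance over the batch dimension (dim 0), then square root.
        # Matches sqrt(out.var(0, unbiased=False) + eps) in PyTorch.
        return ((out - out.mean(0)).sqr().mean(0) + eps).sqrt()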

Results

Limited by hardware, training the model takes a long time, and I did not have enough time to fine-tune the optimizer and the various hyperparameters, so the results obtained by training on Jittor are not as good as those obtained with the same model framework on PyTorch. Still, the progressive training process can be seen clearly in the video: the generated symbols become sharper and the results gradually improve.

The figures below are sample results obtained by training on Jittor and PyTorch, respectively. For details, please refer to the video files in the repository. The training results of the same model and code on PyTorch can be found in the sample_torch folder.

[Figure: samples generated by Jittor]  [Figure: samples generated by PyTorch]

To be continued

Owner
Song Shengyu