Datasets and Pretrained Models for StyleGAN3 ...

Overview


Dear artificial friend, this is a collection of artistic datasets and models that we have put together during our ongoing StyleGAN3 trip at the lucid layers studios. You can use the model snapshots for instant fun, or work with the source datasets to train your own models.
Some models include multiple snapshots; these can yield interesting variations.

Tip: best viewed maximized.

Updates

This document will be updated frequently, since many models are still training at higher resolutions. You will find information on each update in the "Releases" section. (You may "watch" this repo to get notified about new models.)


... based on Wombo Dream:

All source images in this category were generated with Wombo Dream. Thanks to the cheesy API implementation by adri326, we could remotely generate thousands of images. These images were center-cropped and scaled to 1024x1024. Datasets at resolutions of 256, 512, and 1024 were generated and are available for download.
Most datasets are tied to a single text prompt with some minor variations, but you may mix them together into new datasets to train multi-domain models with style mixing.
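If you want to reproduce the preprocessing or build your own mixed datasets, here is a minimal sketch of the center-crop and resize step. It assumes Pillow is installed; the folder names are hypothetical examples, not the paths used for these datasets.

    # Minimal preprocessing sketch (assumes Pillow; folder names are hypothetical).
    from pathlib import Path
    from PIL import Image

    SRC = Path('wombo_raw')      # raw Wombo Dream outputs
    DST = Path('wombo_1024')     # center-cropped, 1024x1024 versions
    DST.mkdir(exist_ok=True)

    for path in sorted(SRC.glob('*.png')):
        img = Image.open(path).convert('RGB')
        side = min(img.size)                    # largest centered square
        left = (img.width - side) // 2
        top = (img.height - side) // 2
        img = img.crop((left, top, left + side, top + side))
        img.resize((1024, 1024), Image.LANCZOS).save(DST / path.name)

The resized folder can then be packaged with "dataset_tool.py" from the official StyleGAN3 repo (using its --source, --dest and --resolution options, e.g. --resolution=256x256); mixing several prompt folders into one source directory before packaging is one simple way to build a multi-domain dataset.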

Thanks to the Wombo creators for their generous quota limits. Since Wombo is a free service, we would like to share all batches that have been created.

1. Mechanical devices from the future

Dataset
  Name: Mechanical devices from the future
  Method: Wombo Dream via Wombot
  Image count: 2169
  Dataset download: 256, 512, 1024

Model: 256 px
  Method: stylegan3-t, transfer learning from Landscape256
  Resolution: 256x256
  Snapshots:
    29 kimg: network-snapshot-000029.pkl
    05 kimg: network-snapshot-000005.pkl

2. Vivid Flowers

Dataset
  Name: Vivid Flowers
  Method: Wombo Dream via Wombot
  Image count: 793
  Dataset download: 256, 512, 1024

Model: 256 px
  Method: stylegan3-t, transfer learning from Landscape256
  Resolution: 256x256
  Snapshots:
    68 kimg: network-snapshot-000067.pkl
    12 kimg: network-snapshot-000012.pkl

3. Alien with Sunglasses

Dataset
  Name: Alien Sunglasses
  Method: Wombo Dream via Wombot
  Image count: 1600
  Dataset download: 256, 512, 1024

Model: 256 px
  Method: stylegan3-t, transfer learning from Landscape256
  Resolution: 256x256
  Snapshots:
    38 kimg: network-snapshot-000038.pkl

4. forest daemons

Dataset
  Name: forest daemons
  Method: Wombo Dream via Wombot
  Image count: 794
  Dataset download: 256, 512, 1024

Model: 256 px
  Method: stylegan3-t, transfer learning from Landscape256
  Resolution: 256x256
  Snapshots:
    18 kimg: network-snapshot-000018.pkl
    03 kimg: network-snapshot-000003.pkl

5. third eye watching you

Dataset
  Name: third eye watching you
  Method: Wombo Dream via Wombot
  Image count: 1363
  Dataset download: 256, 512, 1024

Model: coming soon ...

6. mars spaceport

Dataset
  Name: mars spaceport
  Method: Wombo Dream via Wombot
  Image count: 710
  Dataset download: 256, 512, 1024

Model: coming soon ...

7. scifi city

Dataset
  Name: scifi city
  Method: Wombo Dream via Wombot
  Image count: 1245
  Dataset download: 256, 512, 1024

Model: 256 px
  Method: stylegan3-t, transfer learning from Landscape256
  Resolution: 256x256
  Snapshots:
    210 kimg: network-snapshot-000210.pkl
    018 kimg: network-snapshot-000018.pkl
    013 kimg: network-snapshot-000013.pkl
    008 kimg: network-snapshot-000008.pkl

8. scifi spaceship

Dataset
  Name: scifi spaceship
  Method: Wombo Dream via Wombot
  Image count: 1108
  Dataset download: 256, 512, 1024

Model: 256 px
  Method: stylegan3-t, transfer learning from Landscape256
  Resolution: 256x256
  Snapshots:
    168 kimg: network-snapshot-000162.pkl
    128 kimg: network-snapshot-000128.pkl
    13 kimg: network-snapshot-000013.pkl

9. yellow comic alien

Dataset
  Name: yellow comic alien
  Method: Wombo Dream via Wombot
  Image count: 3984
  Dataset download: 256, 512, 1024

Model: 256 px
  Method: stylegan3-t, transfer learning from Landscape256
  Resolution: 256x256
  Snapshots:
    19 kimg: network-snapshot-000019.pkl

Model: 512 px
  Method: stylegan3-t, transfer learning from afhq
  Resolution: 512x512
  Snapshots:
    236 kimg: network-snapshot-000236.pkl
    004 kimg: network-snapshot-000004.pkl

10. eternal planet earth

Dataset
  Name: eternal planet earth
  Method: Wombo Dream via Wombot
  Image count: 1323
  Dataset download: 256, 512, 1024

Model: coming soon ...

11. mechanical landscape madness

Dataset
  Name: mechanical landscape madness
  Method: Wombo Dream via Wombot
  Image count: 1269
  Dataset download: 256, 512, 1024

Model: 256 px
  Method: stylegan3-t, transfer learning from Landscape256
  Resolution: 256x256
  Snapshots:
    6 kimg: network-snapshot-000006.pkl
    5 kimg: network-snapshot-000005.pkl

12. two aliens speaking

Dataset
  Name: two_aliens_speaking
  Method: Wombo Dream via Wombot
  Image count: 1159
  Dataset download: 256, 512, 1024

Model: coming soon ...

Usage

We recommend installing the official StyleGAN3 repo on your local machine, then using "visualizer.py" to start the GUI. The GUI is very comfortable to use and allows easy visual inspection of the models in real time (an RTX card is recommended).
If you have trouble installing on Windows with Anaconda, try this edited environment.yml file; it works with the current PyTorch release (CUDA 11).
Alternatively, you may use a Colab notebook to generate images and videos from the models. Just copy the links from this page into your favorite Colab notebook.
Here is a basic notebook, pre-configured for many of our models:
Open In Colab
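
As a starting point, here is a minimal sketch of sampling one image from a downloaded snapshot in plain Python. It assumes the official StyleGAN3 repo (which provides dnnlib and legacy) is cloned and importable, and it uses a hypothetical local filename for the snapshot.

    # Minimal sketch: sample one image from a snapshot .pkl.
    # Assumes the official StyleGAN3 repo is on the Python path;
    # 'network-snapshot-000029.pkl' is a hypothetical local copy of a snapshot.
    import torch
    import PIL.Image
    import dnnlib
    import legacy

    device = torch.device('cuda')
    with dnnlib.util.open_url('network-snapshot-000029.pkl') as f:
        G = legacy.load_network_pkl(f)['G_ema'].to(device)   # EMA generator

    z = torch.randn([1, G.z_dim], device=device)              # random latent
    img = G(z, None, truncation_psi=0.7)                      # unconditional model, no labels
    img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
    PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save('sample.png')

The official gen_images.py script does roughly the same from the command line (--network, --seeds, --outdir).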

Progress

  • prepare all datasets in resolutions 256, 512, 1024
  • create a colab notebook for model testing
  • train some datasets in 256
  • train some datasets in 512
  • train some datasets in 1024
  • create multi domain datasets and models with style mixing

Contribution

If you continue training on one of the models, or train a dataset at a higher resolution, it would be great to include that in this list.
(Please send me a link to your .pkl file via the "Issues" tab.)
Also, if you made some images or videos you would like to share, we would love to see your work! Post everything in the issues.
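
For reference, continued training via transfer learning is typically started with train.py from the official StyleGAN3 repo, pointing --resume at an existing snapshot. The values below (dataset path, GPU count, batch size, gamma, resume pickle) are hypothetical examples, not the exact settings used for the models on this page:

    python train.py --outdir=training-runs --cfg=stylegan3-t --data=wombo-256x256.zip \
        --gpus=1 --batch=32 --gamma=2 --snap=10 --resume=landscape256.pkl

The R1 regularization weight (--gamma) in particular usually needs tuning per dataset and resolution.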

License

You are welcome to use all files for your own purposes. Please include a link to this repo in your work. Thank you.
(The terms and conditions of Wombo, NVIDIA, and other contributors have to be considered separately for commercial projects.)


Comments
  • Apply the model on an interactive art demo

    Thank you so much for sharing your artistic models with the community. I'm a media art student new to machine learning, and I'm currently applying your 'yellow comic alien 004 kimg' model in an interactive art demo. It looks pretty cool and I really appreciate your time and effort! Here is a video preview: https://youtu.be/lVJnj5PLghs

    opened by HarryGuan22 2
  • transfer learning params

    Yo! Can you post which sg3 params you used for transfer learning? The results are impressive even after 005 kimg! I personally never managed to make transfer learning from afhqv2 work; maybe my datasets were too big, I guess.

    opened by betterftr 0
  • Lucid Stylegan Error

    Using Colab, Lucid Sonic Dreams, and StyleGAN3. The folder is named stylegan2, but it contains stylegan3. I downloaded the womboflowers pkl (256x256, 05 kimg). How can I fix this issue? Is it compatible? I used it as a style and received the following error:

    Preparing style... Loading networks from /content/drive/MyDrive/LSD/lucid-sonic-dreams/womboflowers2.pkl...

    KeyError                                  Traceback (most recent call last)
    in ()
         37     motion_randomness = 0.8,
         38     motion_harmonic = True,
    ---> 39     motion_percussive = False,
         40     #random_seed=100
         41     #class_complexity = 1

    3 frames
    /content/drive/MyDrive/LSD/lucid-sonic-dreams/stylegan2/torch_utils/persistence.py in _reconstruct_persistent_obj(meta)
        191
        192     assert meta.type == 'class'
    --> 193     orig_class = module.__dict__[meta.class_name]
        194     decorator_class = persistent_class(orig_class)
        195     obj = decorator_class.__new__(decorator_class)

    KeyError: 'FullyConnectedLayer'

    Before that, "module" was set on line 190: module = _src_to_module(meta.module_src)

    opened by Ennorath 0