v objective diffusion inference code for PyTorch.

Overview

v-diffusion-pytorch

v objective diffusion inference code for PyTorch, by Katherine Crowson (@RiversHaveWings) and Chainbreakers AI (@jd_pressman).

The models are denoising diffusion probabilistic models (https://arxiv.org/abs/2006.11239), which are trained to reverse a gradual noising process, allowing the models to generate samples from the learned data distributions starting from random noise. DDIM-style deterministic sampling (https://arxiv.org/abs/2010.02502) is also supported. The models are also trained on continuous timesteps. They use the 'v' objective from Progressive Distillation for Fast Sampling of Diffusion Models (https://openreview.net/forum?id=TIdIXIpzhoI).
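
As a minimal sketch of the 'v' parameterization (illustrative only, not the repository's own functions): writing the noise schedule as alpha and sigma with alpha**2 + sigma**2 == 1, a noisy input is x_t = alpha * x_0 + sigma * eps, and the model is trained to predict v = alpha * eps - sigma * x_0, from which the denoised image and the noise can both be recovered:

    import torch

    def v_from_x0_eps(x_0, eps, alpha, sigma):
        # The 'v' target mixes the noise and the clean image.
        return alpha * eps - sigma * x_0

    def x0_eps_from_v(x_t, v, alpha, sigma):
        # Since alpha**2 + sigma**2 == 1 and x_t = alpha * x_0 + sigma * eps,
        # these expressions recover estimates of the clean image and the noise.
        pred_x0 = alpha * x_t - sigma * v
        pred_eps = sigma * x_t + alpha * v
        return pred_x0, pred_eps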

Thank you to stability.ai for compute to train these models!

Model checkpoints

  • CC12M 256x256, SHA-256 63946d1f6a1cb54b823df818c305d90a9c26611e594b5f208795864d5efe0d1f

A 602M parameter CLIP conditioned model trained on Conceptual 12M for 3.1M steps.

Sampling

Example

If the model checkpoints are stored in checkpoints/, the following will generate an image:

./clip_sample.py "the rise of consciousness" --model cc12m_1 --seed 0

If they are somewhere else, you need to specify the path to the checkpoint with --checkpoint.
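
For example (the checkpoint path below is hypothetical):

./clip_sample.py "the rise of consciousness" --model cc12m_1 --checkpoint /path/to/cc12m_1.pth --seed 0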

CLIP conditioned/guided sampling

usage: clip_sample.py [-h] [--images [IMAGE ...]] [--batch-size BATCH_SIZE]
                      [--checkpoint CHECKPOINT] [--clip-guidance-scale CLIP_GUIDANCE_SCALE]
                      [--device DEVICE] [--eta ETA] [--model {cc12m_1}] [-n N] [--seed SEED]
                      [--steps STEPS]
                      [prompts ...]

prompts: the text prompts to use. Relative weights for text prompts can be specified by putting the weight after a colon, for example: "the rise of consciousness:0.5".

--batch-size: sample this many images at a time (default 1)

--checkpoint: manually specify the model checkpoint file

--clip-guidance-scale: how strongly the result should match the text prompt (default 500). If set to 0, the cc12m_1 model will still be CLIP conditioned and sampling will go faster and use less memory.

--device: the PyTorch device name to use (default autodetects)

--eta: set to 0 for deterministic (DDIM) sampling, 1 (the default) for stochastic (DDPM) sampling, and in between to interpolate between the two. DDIM is preferred for low numbers of timesteps.

--images: the image prompts to use (local files or HTTP(S) URLs). Relative weights for image prompts can be specified by putting the weight after a colon, for example: "image_1.png:0.5".

--model: specify the model to use (default cc12m_1)

-n: sample until this many images are sampled (default 1)

--seed: specify the random seed (default 0)

--steps: specify the number of diffusion timesteps (default 1000; can be lowered for faster but lower-quality sampling)
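
For instance, a faster deterministic run might combine several of these flags (a sketch of flag usage, not a tuned recommendation):

./clip_sample.py "the rise of consciousness" --model cc12m_1 --steps 50 --eta 0 --batch-size 4 -n 4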

Comments
  • Generated images are completely black?! 😵 What am I doing wrong?

    Hello, I am on Windows 10, and my GPU is a PNY Nvidia GTX 1660 Ti 6 GB. I installed V-Diffusion like so:

    • conda create --name v-diffusion python=3.8
    • conda activate v-diffusion
    • conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch (as per PyTorch website instructions)
    • pip install requests tqdm

    The problem is that when I run cfg_sample.py or clip_sample.py, the generated images are completely black, although the inference process seems to run nicely and without errors.

    Things I've tried:

    • installing previous pytorch version with conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
    • removing V-Diffusion conda environment completely and recreating it anew
    • uninstalling nvidia drivers and performing a new clean driver install (I tried both Nvidia Studio drivers and Nvidia Game Ready drivers)
    • uninstalling and reinstalling Conda completely

    But nothing helped... and at this point I don't know what else to try...

    The only interesting piece of information I could gather is that, for some reason, this problem also happens with another text-to-image project called Big Sleep, where, similarly to V-Diffusion, the inference process appears to run correctly but the generated images are all black.

    I think there must be some simple detail I'm overlooking... which is making me go insane... 😵 Please let me know if you think you can help! THANKS!
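
    One diagnostic worth trying (a sketch, not from this thread): fully black outputs often mean the sampler produced NaNs, which can happen with half-precision arithmetic on some GPUs. A quick check on the sampled tensor before it is saved:

        import torch

        def has_nans(out: torch.Tensor) -> bool:
            # Black PNGs often mean NaNs were clamped to zero at save time.
            return bool(torch.isnan(out).any())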

    opened by illtellyoulater 10
  • What does this line mean in the README?

    A weight of 1 will sample images that match the prompt roughly as well as images usually match prompts like that in the training set.

    I can't wrap my head around this sentence. Could you please explain it with different wording? Thanks!
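
    One way to read it (a paraphrase based on the usual classifier-free guidance arithmetic, not the author's wording): with a single prompt of weight w, the guided output is roughly out_uncond + w * (out_cond - out_uncond), so w = 1 reduces to the plain conditional model, i.e. samples match the prompt about as well as training images typically match their captions, while w > 1 extrapolates toward stronger prompt agreement.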

    opened by illtellyoulater 2
  • AttributeError: module 'torch' has no attribute 'special'

    torch version: 1.8.1+cu111

    python ./cfg_sample.py "the rise of consciousness":5 -n 4 -bs 4 --seed 0
    Using device: cuda:0
    Traceback (most recent call last):
      File "./cfg_sample.py", line 154, in <module>
        main()
      File "./cfg_sample.py", line 148, in main
        run_all(args.n, args.batch_size)
      File "./cfg_sample.py", line 136, in run_all
        steps = utils.get_spliced_ddpm_cosine_schedule(t)
      File "C:\Users\m\Desktop\v-diffusion-pytorch\diffusion\utils.py", line 75, in get_spliced_ddpm_cosine_schedule
        ddpm_part = get_ddpm_schedule(big_t + ddpm_crossover - cosine_crossover)
      File "C:\Users\m\Desktop\v-diffusion-pytorch\diffusion\utils.py", line 65, in get_ddpm_schedule
        log_snr = -torch.special.expm1(1e-4 + 10 * ddpm_t**2).log()
    AttributeError: module 'torch' has no attribute 'special'
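
    A possible workaround (an assumption on my part, since the torch.special namespace only exists in newer PyTorch releases): upgrade PyTorch, or replace the call with plain torch.expm1, which computes the same thing:

        log_snr = -torch.expm1(1e-4 + 10 * ddpm_t**2).log()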

    opened by tempdeltavalue 2
  • Add GitHub action to automatically push to PyPI on Release x.y.z commit

    You need to create a token at https://pypi.org/manage/account/token/ and add it at https://github.com/crowsonkb/v-diffusion-pytorch/settings/secrets/actions/new, naming it PYPI_PASSWORD.

    The release will be triggered when you name your commit Release x.y.z. I advise changing the version in setup.cfg in that commit.

    opened by rom1504 0
  • [Question] What's the meaning of these equations in sample and cfg_model_fn (from sample.py)?

    Hello, thank you for your great work! I have a little puzzle in sample.py:

        # Get the model output (v, the predicted velocity)
        with torch.cuda.amp.autocast():
            v = model(x, ts * steps[i], **extra_args).float()

        # Predict the noise and the denoised image
        pred = x * alphas[i] - v * sigmas[i]
        eps = x * sigmas[i] + v * alphas[i]

    What do these equations mean? Where do they come from?
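
    (For reference, this is the standard v-objective algebra rather than anything project-specific: with x = alphas[i] * x_0 + sigmas[i] * eps and v = alphas[i] * eps - sigmas[i] * x_0, substituting gives alphas[i] * x - sigmas[i] * v = x_0 and sigmas[i] * x + alphas[i] * v = eps; see the sketch in the Overview above.)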

    opened by zhangquanwei962 0
  • Images don’t seem to evolve with each iteration

    Thanks for sharing such an amazing repo!

    I am testing a prompt like OpenAI's “an astronaut riding a horse in a photorealistic style” to compare. But somehow the iterations seem to be stuck on the same image.

    This is my first test, so it could very likely be that I am doing something wrong. Results and settings attached below…

    [Four screenshots of results and settings attached.]

    opened by alelordelo 0
  • [Question] Questions about `zero_embed` and `weights`

    Thanks for this great work. I've recently become interested in using diffusion models to generate images iteratively. I found your script cfg_sample.py to be a nice implementation and decided to learn from it. However, because I'm new to this field, I've encountered some problems that are quite hard for me to understand. It'd be great if you could provide some hints/suggestions. Thank you!! My questions, listed below, are about the script cfg_sample.py.

    1. I noticed that the code uses zero_embed as one of the features for conditioning. What is the purpose of using it? Is it designed to allow the case of no input prompt?
    2. I also noticed that the weight of zero_embed is computed as 1 - sum(weights). I think the 1 is to make the weights sum to one, but the weight of zero_embed can actually be negative; should weights be normalized before all the intermediate noise maps are weighted? (See the sketch below.)

    Thanks very much!!
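
    A minimal sketch of the weighting described in question 2 (hypothetical names; not the exact cfg_sample.py code):

        import torch

        def mix_model_outputs(v_uncond, vs, weights):
            # The unconditional ("zero embedding") branch gets weight 1 - sum(weights),
            # so the weights sum to 1 overall; prompt weights above 1 therefore
            # extrapolate away from the unconditional output (classifier-free guidance).
            v = (1 - sum(weights)) * v_uncond
            for w, v_i in zip(weights, vs):
                v = v + w * v_i
            return v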

    opened by Karbo123 4
  • Metrics on WikiArt model

    Hi!

    I wanted to thank you for your work, especially since without you DiscoDiffusion wouldn't exist!

    Still, I was wondering if you had the metrics (Precision, Recall, FID, and Inception Score) for the 256x256 WikiArt model?

    opened by Maxim-Durand 0
  • Any idea on how to attach a CLIP model to a 64x64 unconditional model from openai/improved-diffusion?

    Hey! Love your work and I've been following your stuff for a while. I have fine-tuned a 64x64 unconditional model from openai/improved-diffusion (checkpoint).

    I was curious if you could lend any insight into how to connect CLIP guidance to my model. I have tried repurposing your notebook (https://colab.research.google.com/drive/12a_Wrfi2_gwwAuN3VvMTwVMz9TfqctNj#scrollTo=1YwMUyt9LHG1), but past 100 steps my model seems to diverge.

    I think it's perhaps because there is too much noise being added for the smaller image size? How might I fix this?

    opened by DeepTitan 0
Releases: v0.0.2

Owner: Katherine Crowson, AI/generative artist.