Vector Quantization, in Pytorch

Overview

Vector Quantization - Pytorch

A vector quantization library originally transcribed from DeepMind's TensorFlow implementation, made conveniently into a package. It uses exponential moving averages to update the dictionary.

VQ has been successfully used by DeepMind and OpenAI for high-quality generation of images (VQ-VAE-2) and music (Jukebox).

Install

$ pip install vector-quantize-pytorch

Usage

import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 512,     # codebook size
    decay = 0.8,             # the exponential moving average decay, lower means the dictionary will change faster
    commitment = 1.          # the weight on the commitment loss
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x) # (1, 1024, 256), (1, 1024), (1)

Variants

The SoundStream paper proposes using multiple vector quantizers to recursively quantize the residuals of the waveform. You can use this with the ResidualVQ class and one extra initialization parameter.

import torch
from vector_quantize_pytorch import ResidualVQ

residual_vq = ResidualVQ(
    dim = 256,
    num_quantizers = 8,      # specify number of quantizers
    codebook_size = 1024,    # codebook size
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = residual_vq(x)

# (1, 1024, 256), (8, 1, 1024), (8, 1)
# (batch, seq, dim), (quantizer, batch, seq), (quantizer, batch)

Initialization

The SoundStream paper proposes that the codebook should be initialized by the kmeans centroids of the first batch. You can turn this on with the flag kmeans_init = True, for either the VectorQuantize or ResidualVQ class.

import torch
from vector_quantize_pytorch import ResidualVQ

residual_vq = ResidualVQ(
    dim = 256,
    codebook_size = 256,
    num_quantizers = 4,
    kmeans_init = True,   # set to True
    kmeans_iters = 10     # number of kmeans iterations to calculate the centroids for the codebook on init
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = residual_vq(x)

Increasing codebook usage

This repository contains a few techniques from various papers to combat "dead" codebook entries, a common problem when using vector quantizers.

Lower codebook dimension

The Improved VQGAN paper proposes keeping the codebook in a lower dimension. The encoder outputs are projected down to the codebook dimension before quantization and projected back up to the original dimension afterwards. You can set this with the codebook_dim hyperparameter.

import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 256,
    codebook_dim = 16      # paper proposes setting this to 32 or as low as 8 to increase codebook usage
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x)

Cosine similarity

The Improved VQGAN paper also proposes to l2-normalize the codes and the encoded vectors, which amounts to using cosine similarity as the distance. They claim that constraining the vectors to a sphere improves code usage and downstream reconstruction. You can turn this on by setting use_cosine_sim = True.

import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 256,
    use_cosine_sim = True   # set this to True
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x)

Expiring stale codes

Finally, the SoundStream paper has a scheme where they replace codes whose hit count falls below a certain threshold with a randomly selected vector from the current batch. You can set this threshold with the threshold_ema_dead_code keyword.

import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 512,
    threshold_ema_dead_code = 2  # should actively replace any codes that have an exponential moving average cluster size less than 2
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x)

Citations

@misc{oord2018neural,
    title   = {Neural Discrete Representation Learning},
    author  = {Aaron van den Oord and Oriol Vinyals and Koray Kavukcuoglu},
    year    = {2018},
    eprint  = {1711.00937},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}
@misc{zeghidour2021soundstream,
    title   = {SoundStream: An End-to-End Neural Audio Codec},
    author  = {Neil Zeghidour and Alejandro Luebs and Ahmed Omran and Jan Skoglund and Marco Tagliasacchi},
    year    = {2021},
    eprint  = {2107.03312},
    archivePrefix = {arXiv},
    primaryClass = {cs.SD}
}
@inproceedings{anonymous2022vectorquantized,
    title   = {Vector-quantized Image Modeling with Improved {VQGAN}},
    author  = {Anonymous},
    booktitle = {Submitted to The Tenth International Conference on Learning Representations },
    year    = {2022},
    url     = {https://openreview.net/forum?id=pfNyExj7z2},
    note    = {under review}
}
Comments
  • Quantizers are not DDP/AMP compliant

    Hi Lucidrains,

    Thanks for the amazing work you do by implementing all those papers!

    Is there a plan to make the Quantizer be compliant with:

    • DDP - They need an all-gather before calculating anything so the updates are exactly the same across all ranks
    • AMP - In my experience, if AMP touches the quantizers it screws up the gradient magnitudes, making them NaN/overflow

    If you want I can have a go at it.
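
    For the AMP point, a common stopgap until there is native support is to keep the quantizer in full precision while the rest of the model runs under autocast. A minimal sketch, assuming a standard torch.cuda.amp training loop (not an official recommendation of this repo):

    import torch
    from torch.cuda.amp import autocast
    from vector_quantize_pytorch import VectorQuantize

    vq = VectorQuantize(
        dim = 256,
        codebook_size = 512
    )

    x = torch.randn(1, 1024, 256)

    # run the quantizer outside of autocast so distances and EMA statistics
    # are computed in fp32, avoiding the NaN/overflow issue described above
    with autocast(enabled = False):
        quantized, indices, commit_loss = vq(x.float())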

    opened by danieltudosiu 7
  • Commitment Loss Problems

    Hello,

    First of all, thank you so much for this powerful implementation.

    I have been training a VQ-VAE to generate faces from FFHQ 128x128 and I always run into the same problem: if I use the commitment loss (0.25) and the gamma (0.99) from the original paper, the commitment loss seems to grow without bound. I know you said that it is an auxiliary loss and not that important, but is this normal behavior? If not, how can I avoid it in the case where I do want to use this loss?

    Thank you so much in advance!

    opened by pedrocg42 6
  • fix dimensions: the codebook must look at data by taking each time frame individually

    In the SoundStream article: "This vector quantizer learns a codebook of N vectors to encode each D-dimensional frame of enc(x)."

    opened by wesbz 5
  • kmeans and ddp hangs

    kmeans and ddp hangs for me. ddp is initialized by pytorch lightning in my case. I have several questions:

    In https://github.com/lucidrains/vector-quantize-pytorch/blob/master/vector_quantize_pytorch/vector_quantize_pytorch.py#L98

    all_num_samples = all_gather_sizes(local_samples, dim = 0)

    Should it be dim = 1 (since dim 0 is the codebook dimension)?

    Then in https://github.com/lucidrains/vector-quantize-pytorch/blob/master/vector_quantize_pytorch/vector_quantize_pytorch.py#L93 it just hangs for me. I am not totally sure, but I believe distributed.broadcast in

    https://github.com/lucidrains/vector-quantize-pytorch/blob/master/vector_quantize_pytorch/vector_quantize_pytorch.py#L90

    is called with incompatible shapes. See https://pytorch.org/docs/stable/distributed.html#torch.distributed.broadcast

    tensor must have the same number of elements in all processes participating in the collective.

    opened by tasptz 4
  • Cannot Converge with L2 Loss

    I am trying to quantize the latent vector. To be specific, I use an encoder to get the latent representation z of the input, quantize z, and then send the quantized z into the decoder.

    However, in my experiments the reconstruction loss does not decrease with the L2 loss, namely the EuclideanCodebook. The model does converge with cosine similarity. Any idea about this phenomenon?

    I think cosine similarity only considers the direction of a vector, not its scale, so I would still like to use EuclideanCodebook.

    opened by kingnobro 3
  • Error when using gloo as DDP backend

    Hello! Thank you for your great work implementing the VQ layer. When I use the VQ layer in DDP mode with gloo as the backend, as suggested in the README, I get the following error:

    terminate called after throwing an instance of 'gloo::EnforceNotMet' what(): [enforce fail at ../third_party/gloo/gloo/transport/tcp/pair.cc:510] op.preamble.length <= op.nbytes. 8773632 vs 8386560

    Do you have any ideas on how to solve this problem?
    I also tried to use nccl as the backend, but the program just hangs forever...

    opened by Saltychtao 3
  • codebook initialization

    Hi, Thank you for this great work. It's quite useful!

    I have been having problems with index collapse and I'm not sure where it's coming from. Upon digging into the code, it seems that when we're not using k-means to initialize the codebook vectors, randn (a normal distribution) is used to initialize them. The VQ-VAE paper specifically uses a uniform distribution for initialization, which allows the authors to ignore the KL divergence when training.

    This is from the vqvae paper: "Since we assume a uniform prior for z, the KL term that usually appears in the ELBO is constant w.r.t. the encoder parameters and can thus be ignored for training."

    Is there any reason why you changed to a normal distribution here?

    Thanks!
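
    If you want to experiment with a uniform init in the meantime, one illustrative, version-dependent workaround is to overwrite the codebook buffer after construction. The _codebook.embed attribute and the uniform range below are assumptions, not part of the documented API:

    import torch
    from vector_quantize_pytorch import VectorQuantize

    codebook_size = 512
    vq = VectorQuantize(dim = 256, codebook_size = codebook_size)

    # replace the default randn initialization with a uniform one,
    # similar to what some VQ-VAE reimplementations use for nn.Embedding
    with torch.no_grad():
        vq._codebook.embed.uniform_(-1 / codebook_size, 1 / codebook_size)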

    opened by ramyamounir 3
  • possible papers (and code) of interest

    Have you had a look at bitsandbytes?

    https://github.com/TimDettmers/bitsandbytes

    https://arxiv.org/abs/2208.07339

    https://timdettmers.com/2022/08/17/llm-int8-and-emergent-features/

    Also, this paper on tradeoffs for various 8-bit quantization formats:

    https://arxiv.org/pdf/2206.02915v1.pdf

    opened by Thomas-MMJ 2
  • RQ-VAE: How can I get a list of all learned codebook vectors (as indexed in the "indices")?

    Hi Lucid, I am working on quantizing CLIP image embeddings with your RQ-VAE. It works pretty well.

    Next I want to take all learned codebook vectors and add them to the vocab of a GPT (as frozen token embeddings).

    The idea is to train a GPT with CLIP image embeddings in between texts, e.g. IMAGE-CAPTION or TEXT-IMAGE-TEXT-IMAGE- ... (Flamingo-style).

    If this works, then GPT could maybe also learn to generate quantized CLIP image embeddings token by token --> and then e.g. show images through a) retrieval or b) a DALLE 2 decoder :)

    ... So my question is: once the RQ-VAE is trained and I can get the quantized reconstructions and indices, how can I get a list or tensor of the actual codebook (all possible vectors from the rq-vocab)? :)
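
    One way to read them out, depending on the library version, is to stack each quantizer's codebook buffer. The layers and _codebook.embed attribute names below are assumptions about the internals and may differ between releases:

    import torch
    from vector_quantize_pytorch import ResidualVQ

    residual_vq = ResidualVQ(
        dim = 256,
        num_quantizers = 8,
        codebook_size = 1024
    )

    # assumed layout: one (codebook_size, dim) codebook per quantizer
    codebooks = torch.stack([layer._codebook.embed for layer in residual_vq.layers])
    print(codebooks.shape)  # expected (8, 1024, 256) if the assumption holds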

    opened by christophschuhmann 2
  • Expire codes heuristic is replacing inputs

    Thanks for the implementation!

    One question, should this

    https://github.com/lucidrains/vector-quantize-pytorch/blob/ebce893fff695845f7fe0f04d1400d2c29b94f98/vector_quantize_pytorch/vector_quantize_pytorch.py#L177

    actually be self.expire_codes_(quantize)?

    opened by kashif 2
  • orthogonal regularization loss useless?

    Because the codebooks are not registered as trainable parameters, and the orthogonal loss is only a function of the codebooks, is the orthogonal loss entirely useless?

    opened by GallagherCommaJack 2
  • EMA update on CosineCodebook

    The original ViT-VQGAN paper does not seem to use an EMA update for codebook learning, since its codebook consists of unit-normalized vectors.

    In particular, to my understanding, the EMA update does not quite make sense when the encoder outputs and codebook vectors are unit-normalized.

    What's your take on this? Should we NOT use EMA update with CosineCodebook?

    opened by le4m 3
  • Loss and Backprop Details

    Hi,

    During training, the VQ-VAE backprops on multiple losses. When inputting feature maps to the model, we are given a loss; should I manually backpropagate and update weights through it (the good ol' loss.backward() and optimizer.step()), or is it handled implicitly?
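
    For what it's worth, the quantizer itself does not step any optimizer: the returned commit_loss is meant to be added to your task loss before a single backward pass, while the codebook is updated by EMA inside the forward call. A minimal sketch, with placeholder encoder/decoder modules:

    import torch
    import torch.nn.functional as F
    from vector_quantize_pytorch import VectorQuantize

    encoder = torch.nn.Linear(64, 256)   # placeholder encoder
    decoder = torch.nn.Linear(256, 64)   # placeholder decoder
    vq = VectorQuantize(dim = 256, codebook_size = 512)

    params = [*encoder.parameters(), *decoder.parameters(), *vq.parameters()]
    opt = torch.optim.Adam(params, lr = 3e-4)

    x = torch.randn(1, 1024, 64)
    quantized, indices, commit_loss = vq(encoder(x))
    recon = decoder(quantized)

    loss = F.mse_loss(recon, x) + commit_loss   # add the auxiliary loss, then the good ol' manual backprop
    opt.zero_grad()
    loss.backward()
    opt.step()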

    opened by Malik7115 3
  • Missing parameter of beta

    Hi, in the original VQ-VAE paper, the commit_loss is defined as

    (quantize.detach() - x) ** 2 + beta * (quantize - x.detach()) ** 2

    where beta is usually set to 0.25. But commit_loss is defined as follows in your implementation:

    F.mse_loss(quantize.detach(), x)
    

    So I wonder whether the parameter beta is set to 1 by default, or whether the second term is missing? Thank you very much.
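
    For what it's worth, the commitment argument shown in the README above is documented as the weight on the commitment loss, so a beta of 0.25 can be set by passing it there; with the EMA codebook the separate codebook term from the paper is not needed, since the codebook is updated by moving averages rather than by a gradient loss. A small sketch using only the documented constructor argument:

    import torch
    from vector_quantize_pytorch import VectorQuantize

    vq = VectorQuantize(
        dim = 256,
        codebook_size = 512,
        commitment = 0.25   # plays the role of beta from the original paper
    )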

    opened by Corleone-Huang 1
  • No way of training the codebook

    Hi! Could you please explain how the codebook vectors are updated if the codebook vectors are not required to be orthogonal?

    1. embed tensors in both Euclidean and CosineSim codebooks are registered as buffers, so they can't be updated at all
    2. There is no loss on the codebook vectors that moves them closer to the input

    Am I missing something? It seems that right now there is no way of updating the codebook vectors without the orthogonal loss.
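
    The codebook is trained, just not by gradients: with the default EuclideanCodebook, the embed buffer is updated in the forward pass by exponential moving averages of the assigned encoder outputs, which is why it is a buffer and why no codebook loss appears. A conceptual sketch of that EMA update, not the library's exact code, with illustrative tensor names:

    import torch
    import torch.nn.functional as F

    def ema_codebook_update(embed, cluster_size, embed_avg, x_flat, indices, decay = 0.8, eps = 1e-5):
        # embed:        (codebook_size, dim) current codebook
        # cluster_size: (codebook_size,)     EMA count of vectors assigned to each code
        # embed_avg:    (codebook_size, dim) EMA of the summed vectors assigned to each code
        codebook_size, dim = embed.shape
        onehot = F.one_hot(indices, codebook_size).type(x_flat.dtype)   # (n, codebook_size)

        cluster_size.mul_(decay).add_(onehot.sum(0), alpha = 1 - decay)
        embed_avg.mul_(decay).add_(onehot.t() @ x_flat, alpha = 1 - decay)

        # Laplace smoothing so rarely used codes do not divide by zero
        n = cluster_size.sum()
        smoothed = (cluster_size + eps) / (n + codebook_size * eps) * n
        embed.copy_(embed_avg / smoothed.unsqueeze(1))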

    opened by RafailFridman 5
  • Plugging vector-quantize-pytorch into taming-transformers

    Hi,

    I noticed your architecture could be plugged into the pipeline from https://github.com/CompVis/taming-transformers. I have proposed code here (https://github.com/tanouch/taming-transformers) doing that. It makes it possible to properly compare the different features proposed in your repo (lower codebook dimension, cosine similarity, orthogonal regularization loss, etc.) with the original formulation.

    The code from this repo can be seen in both files

    • taming-transformers/taming/models/vqgan.py
    • taming-transformers/taming/modules/vqvae/quantize.py

    As you can see, it is easy to launch a large-scale training run with your proposed architecture.

    I am not sure whether this issue belongs here or in the taming-transformers repo, but I thought you might be interested. Thanks again for your work and these open-sourced repositories!

    opened by tanouch 2