PyTorch implementation of normalizing flow models

Overview

Normalizing Flows

This is a PyTorch implementation of several normalizing flows, as well as a variational autoencoder (VAE) built from them. It is used in the articles "A Gradient Based Strategy for Hamiltonian Monte Carlo Hyperparameter Optimization" and "Resampling Base Distributions of Normalizing Flows".

Implemented Flows

Among the implemented architectures (see the release notes below) are Real NVP, Glow, Masked Autoregressive Flows (MAF), Neural Spline Flows (including circular variants), and Residual Flows.

Installation

The latest version of the package can be installed via pip:

pip install --upgrade git+https://github.com/VincentStimper/normalizing-flows.git

If you want to use a GPU, make sure that PyTorch is set up correctly by following the instructions on the PyTorch website.

To run the example notebooks, first clone the repository

git clone https://github.com/VincentStimper/normalizing-flows.git

and then install the dependencies:

pip install -r requirements_examples.txt
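
As a quick sanity check of the installation, here is a minimal example that builds a small flow and samples from it (mirroring the snippet from the issue reports below):

import normflows as nf
import torch

# A 1D flow: diagonal Gaussian base distribution followed by two flow layers
flow = nf.NormalizingFlow(
    nf.distributions.DiagGaussian(1, trainable=False),
    [
        nf.flows.AutoregressiveRationalQuadraticSpline(1, 1, 1),
        nf.flows.LULinearPermute(1),
    ],
)

with torch.no_grad():
    samples, _ = flow.sample(4)  # draw 4 samples from the flow
print(samples)
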
Comments
  • Replication of comparable glow with papers

    Hi, thanks for developing this package. I find it very neat and flexible and would like to use it for my research. I noticed that in the paper "Resampling Base Distributions of Normalizing Flows", the bits per dimension (bpd) of your Glow model reaches 3.2–3.3, which is comparable to the original paper.

    I was wondering if it is possible to share your training scripts and the details needed to train Glow on CIFAR-10 to achieve the above bpd. The current example notebook is rather sketchy and only reaches a bpd of 3.8. Thanks very much!

    opened by prclibo 1
  • Changed ActNorm flag to buffer to allow saving

    Without it being a buffer, when you load a model and sample from it, the flow thinks it is the first run through and overwrites all the trained ActNorm parameters.
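
    For illustration, here is a minimal sketch (with hypothetical names, not the package's actual code) of an ActNorm-style layer whose initialization flag is registered as a buffer, so that it is saved in the state dict and survives loading:

    import torch
    import torch.nn as nn

    class ActNormSketch(nn.Module):
        def __init__(self, num_channels):
            super().__init__()
            self.scale = nn.Parameter(torch.ones(num_channels))
            self.shift = nn.Parameter(torch.zeros(num_channels))
            # A buffer is stored in state_dict, so a loaded model remembers
            # that the data-dependent initialization already happened.
            self.register_buffer("init_done", torch.tensor(0.0))

        def forward(self, x):
            if self.init_done == 0.0:
                # Data-dependent initialization on the first batch only
                with torch.no_grad():
                    self.shift.data = -x.mean(dim=0)
                    self.scale.data = 1.0 / (x.std(dim=0) + 1e-6)
                    self.init_done.fill_(1.0)
            return (x + self.shift) * self.scale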

    opened by arc82 1
  • Fix deprecation warning

    Fixes the deprecation warning documented in https://github.com/VincentStimper/normalizing-flows/issues/12.
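
    For reference, a small standalone sketch of the migration suggested by the warning (not the repository's actual code); note that torch.linalg.solve_triangular takes its arguments in reversed order:

    import torch

    A = torch.triu(torch.rand(3, 3)) + torch.eye(3)  # upper-triangular system
    B = torch.rand(3, 2)

    # Old (deprecated): X = torch.triangular_solve(B, A).solution
    X = torch.linalg.solve_triangular(A, B, upper=True)

    assert torch.allclose(A @ X, B)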


    Sanity check: Running this before the change:

    import normflows as nf
    import torch
    
    torch.manual_seed(42)
    
    flow = nf.NormalizingFlow(
        nf.distributions.DiagGaussian(1, trainable=False),
        [
            nf.flows.AutoregressiveRationalQuadraticSpline(1, 1, 1),
            nf.flows.LULinearPermute(1)
        ]
    )
    
    with torch.no_grad():
        samples_flow, _ = flow.sample(4)
    
    print(samples_flow)
    

    gives:

    tensor([[0.4528],
            [0.6410],
            [0.5200],
            [0.5567]])
    

    After the change, the output stays the same.

    opened by timothygebhard 0
  • Sampling from flow raises deprecation warning

    Running the following minimal example:

    import normflows as nf
    import torch
    
    torch.manual_seed(42)
    
    flow = nf.NormalizingFlow(
        nf.distributions.DiagGaussian(1, trainable=False),
        [
            nf.flows.AutoregressiveRationalQuadraticSpline(1, 1, 1),
            nf.flows.LULinearPermute(1)
        ]
    )
    
    with torch.no_grad():
        samples_flow, _ = flow.sample(4)
    
    print(samples_flow)
    

    raises a UserWarning about an upcoming deprecation:

    /Users/timothy/Desktop/normalizing-flows/normflows/flows/mixing.py:437: UserWarning: torch.triangular_solve is deprecated in favor of torch.linalg.solve_triangular and will be removed in a future PyTorch release.
    torch.linalg.solve_triangular has its arguments reversed and does not return a copy of one of the inputs.
    X = torch.triangular_solve(B, A).solution
    should be replaced with
    X = torch.linalg.solve_triangular(A, B). (Triggered internally at  /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:2189.)
      outputs, _ = torch.triangular_solve(
    

    I will submit a PR shortly that fixes the issue 🙂

    opened by timothygebhard 0
  • Vberenz/mkdocs

    Added the mkdocs structure, refactored the docstrings, and applied black.

    To install the dependencies for documentation building:

    pip install -e ".[docs]"
    

    To view the docs:

    mkdocs serve
    

    This starts a live server. Modifications of the documentation are rendered live (excluding modifications to docstrings).

    To build the docs:

    mkdocs build
    

    This will create the site folder (including index.html).

    To extend the docs:

    Markdown files can be added in the docs folder, then the "nav" section of the mkdocs.yml file has to be updated, e.g.

    nav:
      - about: index.md
      - API: references.md
      - my other page: mymarkdown.md
    
    • Good to know: Markdown can be used in the docstrings.
    • Apparently, deploying the documentation online on GitHub after building is as simple as calling mkdocs gh-deploy (I have not tried it yet).

    I still need to do:

    • continuous build on GitHub (documentation is rebuilt and deployed at each merge into master)
    • a correction pass on the docstrings (I updated them, but have not yet checked them one by one)
    • improving the layout, which is not so nice at the moment (especially for the API)
    • apparently mkdocs can display Jupyter notebooks; I need to dig into this
    opened by vincentberenz 0
  • feat: add optional gradient clipping to HMC flow

    Adds the option to clip the gradient of the target log-prob within HMC. For some target distributions, the log-prob may have very large gradients, which can cause numerical instability; gradient clipping can help with this.
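
    A minimal sketch of the idea (hypothetical function name and clipping scheme, not the package's actual API):

    import torch

    def clipped_grad_log_prob(log_prob_fn, x, max_abs_grad=1e3):
        # Gradient of the target log-prob at x, elementwise-clipped to
        # [-max_abs_grad, max_abs_grad] before it is used in a leapfrog step
        x = x.detach().requires_grad_(True)
        log_p = log_prob_fn(x).sum()
        (grad,) = torch.autograd.grad(log_p, x)
        return grad.clamp(-max_abs_grad, max_abs_grad)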

    opened by lollcat 0
  • Added minor fixes for bugs and warnings

    This commit makes three changes to the original repo:

    1. Fixes the warning regarding the 'is' keyword:
    /home/donglin/Github/normalizing-flows/normflow/nets.py:45: SyntaxWarning: "is" with a literal. Did you mean "=="?
      if output_fn is "sigmoid":
    /home/donglin/Github/normalizing-flows/normflow/nets.py:47: SyntaxWarning: "is" with a literal. Did you mean "=="?
      elif output_fn is "relu":
    /home/donglin/Github/normalizing-flows/normflow/nets.py:49: SyntaxWarning: "is" with a literal. Did you mean "=="?
      elif output_fn is "tanh":
    /home/donglin/Github/normalizing-flows/normflow/nets.py:51: SyntaxWarning: "is" with a literal. Did you mean "=="?
      elif output_fn is "clampexp":
    
    2. Fixes the warning regarding 'torch.qr':
    /home/donglin/Github/normalizing-flows/normflow/flows.py:616: UserWarning: torch.qr is deprecated in favor of torch.linalg.qr and will be removed in a future PyTorch release.
    The boolean parameter 'some' has been replaced with a string parameter 'mode'.
    Q, R = torch.qr(A, some)
    should be replaced with
    Q, R = torch.linalg.qr(A, 'reduced' if some else 'complete') (Triggered internally at  /opt/conda/conda-bld/pytorch_1623448278899/work/aten/src/ATen/native/BatchLinearAlgebra.cpp:1940.)
      Q = torch.qr(torch.randn(self.num_channels, self.num_channels))[0]
    
    3. Eliminates the "nf.util.ToDevice" call during the data pre-processing in glow.ipynb (the first two fixes are sketched below).
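
    For illustration, a minimal sketch of the two syntax migrations (not the repository's actual code):

    import torch

    output_fn = "sigmoid"
    if output_fn == "sigmoid":  # '==' compares string values; 'is' checks object identity
        pass

    # torch.qr is deprecated; torch.linalg.qr replaces the boolean 'some'
    # parameter with a string 'mode' parameter.
    A = torch.randn(4, 4)
    Q, R = torch.linalg.qr(A, mode="reduced")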
    opened by Donglin-Wang2 0
  • RuntimeError: output with shape [32] doesn't match the broadcast shape [1, 32]

    Hi, I'd suggest adding some more tutorials, for example for the ClassCondFlow, because while trying to build one on my own, I keep encountering this error.

    opened by maulberto3 0
  • Normalizing Flow vs Normalizing Flow VAE behavior

    I can't help but wonder why the NormalizingFlow class uses the flows' inverse method when computing forward_kld, whereas NormalizingFlowVAE uses the flows' forward method.
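
    For context, a minimal sketch of the forward KL objective (mirroring the traceback below): evaluating the model density at data points requires mapping the data back to the base distribution, hence the inverse calls:

    import torch

    def forward_kld_sketch(base, flows, x):
        # Forward KL = -E_data[log q(x)]: push the data x backwards through
        # the flow layers and accumulate the log-determinants
        log_q = torch.zeros(len(x))
        z = x
        for flow in reversed(flows):
            z, log_det = flow.inverse(z)
            log_q += log_det
        log_q += base.log_prob(z)
        return -log_q.mean()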

    Because of this, when trying to fit MNIST with NormalizingFlow and passing a training batch of, say, (64, 784) images, I get the following error:

         34 for i in range(len(self.flows) - 1, -1, -1):
         35     z, log_det = self.flows[i].inverse(z)
    ---> 36     log_q += log_det
         37 log_q += self.q0.log_prob(z)
         38 return -torch.mean(log_q)
    
    RuntimeError: output with shape [64] doesn't match the broadcast shape [1, 64]
    

    Any help/suggestion?

    opened by maulberto3 0
  • Inconsistency between log_q and log_p in Encoder and NormalizingFlowVAE

    In the NormalizingFlowVAE class in core.py, this line says that the encoder outputs log_q:

    z, log_q = self.q0(x, num_samples=num_samples)

    Suppose that, as in this example, the encoder Gaussian (q0) is parameterized by an MLP. Looking at the distributions/encoder.py source code, the forward method of the NNDiagGaussian class says that it outputs log_p:

    return z, log_p

    Is this an inconsistency or not?

    opened by maulberto3 0
  • NormalizingFlow class in core.py does not provide context in forward_kld

    Thank you for a repo that makes it easy to work with a normalizing flow of one's choice!

    I would like to implement a normalizing flow that optimizes multiple target distributions at once, depending on the context I provide to it. Yet currently, as far as I can tell, no context can be provided to the .forward_kld method of the NormalizingFlow class.

    Would be great if that's added!

    Cheers,

    Yves

    opened by ybernaerts 3
  • Decoupled sampling and generation interfaces

    Hi, this PR adds some new interfaces for better control during sampling, e.g. to repeatedly generate from the same latent code while training the model. Please check whether it is useful to merge :)
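
    For illustration, a sketch of the pattern this enables (hypothetical class, not the PR's actual interface):

    import torch

    class DecoupledSampler:
        # Keep the latent code fixed and re-run only the generation direction,
        # e.g. to watch how samples for a fixed code evolve during training.
        def __init__(self, base, flows):
            self.base, self.flows = base, flows

        def sample_latent(self, num_samples):
            z, _ = self.base(num_samples)  # draw latent codes once
            return z

        def generate(self, z):
            for flow in self.flows:
                z, _ = flow(z)  # forward direction: latent -> data space
            return z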

    opened by prclibo 0
Releases (v1.5)
  • v1.5 (Dec 21, 2022)

    Rendered documentation was added to the repository; it is available at https://vincentstimper.github.io/normalizing-flows/.

    Tests were added for several flow modules; they can be run via pytest. With these new tests, several bugs were detected and fixed. The current coverage is about 61%. More tests will be added in the future, as well as automated testing and coverage analysis on GitHub.
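
    For example, after cloning the repository, the test suite can be run from the repository root with:

    pytest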

    Moreover, the code was adapted to the syntax of newer PyTorch versions.

  • v1.4 (Jul 26, 2022)

    The package is now available on PyPI, which means that from now on it can be installed simply with

    pip install normflows
    

    The code was reformatted to conform to the black coding style.

    Moreover, the following fixes and additions are included:

    • The computation of the alpha-divergence objective was corrected.
    • A bug regarding sampling from the mixture of Gaussian base distribution was fixed.
    • A flow layer to warp periodic variables was added.
    • The dependency on the Residual Flow repository was removed.
  • v1.2 (Apr 5, 2022)

    The code was reorganized to be more hierarchical and readable. Also, all functionality required for Neural Spline Flows was added to the repository, removing the dependency on the original Neural Spline Flow repository.

    Furthermore, the following features were introduced:

    • Class to reverse a flow layer
    • Class to build a chain of flow layers
    • Affine Masked Autoregressive Flows (MAF)
    • Circular Neural Spline Flows
    • Neural Spline Flows with circular and non-circular coordinates
  • v1.1 (Feb 6, 2022)

  • v1.0 (Nov 25, 2021)

    Normalizing flow library comprising the most popular flow architectures, among them Real NVP, Glow, Neural Spline Flow, and Residual Flow.

Owner
Vincent Stimper
PhD student in Machine Learning at the University of Cambridge and the Max Planck Institute for Intelligent Systems