FlowTorch is a PyTorch library for learning and sampling from complex probability distributions using a class of methods called Normalizing Flows.

License

Copyright © Meta Platforms, Inc.

This source code is licensed under the MIT license found in the LICENSE.txt file in the root directory of this source tree.

Overview

FlowTorch is a PyTorch library for learning and sampling from complex probability distributions using a class of methods called Normalizing Flows.

Installing

An easy way to get started is to install from source:

git clone https://github.com/facebookincubator/flowtorch.git
cd flowtorch
pip install -e .
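
If the install succeeded, a short sketch along these lines should run. It is modelled on the example from the project's website landing page; the exact API (dist.Flow, bij.AffineAutoregressive) is assumed from that example rather than verified here.

import torch
import flowtorch.bijectors as bij
import flowtorch.distributions as dist

# A 2D standard-normal base distribution transformed by an affine autoregressive bijector
base_dist = torch.distributions.Independent(
    torch.distributions.Normal(torch.zeros(2), torch.ones(2)), 1)
flow = dist.Flow(base_dist, bij.AffineAutoregressive())

# Fit the flow to samples from a simple target distribution
target = torch.distributions.Independent(
    torch.distributions.Normal(torch.tensor([3.0, -2.0]), torch.ones(2) * 0.5), 1)
optimizer = torch.optim.Adam(flow.parameters(), lr=5e-3)
for step in range(1000):
    optimizer.zero_grad()
    loss = -flow.log_prob(target.sample(torch.Size([256]))).mean()
    loss.backward()
    optimizer.step()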

Further Information

We refer you to the FlowTorch website for more information about installation, using the library, and becoming a contributor.

Comments
  • Ported Jacobian and inverse tests for Bijector from Pyro

    This PR ports the two most important (and complex) tests from Pyro for bijectors: comparing the numerical Jacobian to the analytical one, and confirming that the Bijector.inverse method is correct for invertible bijectors.

    enhancement CLA Signed Merged 
    opened by stefanwebb 20
  • Lazy parameters and bijectors with metaclasses

    Motivation

    Shape information for a normalizing flow only becomes known when the base distribution has been specified. We have been searching for an ideal solution to express the delayed instantiation of Bijector and Params for this purpose. Several possible solutions are outlined in #57.

    Changes proposed

    The purpose of this PR is to showcase a prototype for a solution that uses metaclasses to express delayed instantiation. This works by intercepting .__call__ when a class is instantiated and returning a lazy wrapper around the class and bound arguments if only partial arguments are given to .__init__. If all arguments are given, then the actual object is initialized. The lazy wrapper can have additional arguments bound to it, and will only become non-lazy when all the arguments are filled (or have defaults).
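
    For illustration, here is a minimal, self-contained sketch of the idea. LazyMeta and Lazy are hypothetical names used only for this example, not FlowTorch's actual classes.

    import inspect


    class Lazy:
        """Wrapper holding a class and partially bound constructor arguments."""
        def __init__(self, cls, bound):
            self.cls = cls
            self.bound = bound

        def __call__(self, **more):
            bound = {**self.bound, **more}
            try:
                inspect.signature(self.cls.__init__).bind(None, **bound)  # None stands in for self
            except TypeError:
                return Lazy(self.cls, bound)   # still missing arguments: stay lazy
            return self.cls(**bound)           # everything bound: build the real object


    class LazyMeta(type):
        def __call__(cls, **kwargs):
            try:
                inspect.signature(cls.__init__).bind(None, **kwargs)
            except TypeError:
                return Lazy(cls, kwargs)       # partial arguments: return a lazy wrapper
            return super().__call__(**kwargs)


    class Bijector(metaclass=LazyMeta):
        def __init__(self, shape, hidden_dims=(32, 32)):
            self.shape = shape
            self.hidden_dims = hidden_dims


    b = Bijector(hidden_dims=(64,))  # shape not known yet, so b is a Lazy wrapper
    b = b(shape=(10,))               # all arguments now bound, so b is a real Bijector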

    enhancement CLA Signed refactor 
    opened by stefanwebb 16
  • Docusaurus v2/API docs integration + Meta rebranding

    Motivation

    The API docs are currently lacking content and use an inflexible system to specify which modules to include.

    Also, I am unable to make the repo public until I have rebranded FB as Meta Platforms.

    Changes proposed

    I have integrated the new API markdown autogen tool with Docusaurus v2 styling into the website. It uses a general configuration file with regular expressions to specify what to include/exclude, and displays a box/label for each symbol, plus its signature (if it has one) and its raw docstring.

    The remaining tasks are parsing/formatting the docstring, adding symbol lists to module pages, and some small cosmetic fixes.

    I also completed the Meta rebranding in the copyright notices etc.

    Test Plan

    cd website
    yarn build
    yarn serve
    
    CLA Signed 
    opened by stefanwebb 12
  • Autogenerating imports for `flowtorch.parameters` and `flowtorch.bijectors`

    Motivation

    It is tiresome to have to add new components to `__init__.py` for bijectors, distributions and parameters. We should be able to generate it automatically!

    Changes proposed

    Autogen for distributions was completed in a previous PR. This one completes it for parameters and bijectors.

    I also uncovered and fixed a bug in how utils.list_bijectors(), utils.list_distributions(), and utils.list_parameters() were working.
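
    As a rough illustration of what such autogeneration can look like (this is a sketch, not FlowTorch's actual implementation; discover_classes is a hypothetical helper):

    import importlib
    import inspect
    import pkgutil


    def discover_classes(package_name, base_class):
        """Return {name: cls} for every subclass of base_class defined in package_name's modules."""
        package = importlib.import_module(package_name)
        found = {}
        for info in pkgutil.iter_modules(package.__path__):
            module = importlib.import_module(f"{package_name}.{info.name}")
            for name, obj in inspect.getmembers(module, inspect.isclass):
                if issubclass(obj, base_class) and obj is not base_class:
                    found[name] = obj
        return found


    # Hypothetically, flowtorch/bijectors/__init__.py could then do something like:
    # classes = discover_classes("flowtorch.bijectors", Bijector)
    # globals().update(classes)
    # __all__ = sorted(classes)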

    CLA Signed Merged 
    opened by stefanwebb 10
  • First sample scripts

    Motivation

    We would like a number of example scripts to demonstrate how to use FlowTorch.

    Changes proposed

    I have created a new folder, /samples, and added the simple example from the landing page of the website. At the moment it is a Python script, although I expect these samples will eventually be converted into Jupyter notebooks mirrored on Colab.

    Test Plan

    The sample plots figures that demonstrate learning is working.

    CLA Signed Merged 
    opened by stefanwebb 10
  • Parameterless bijectors

    This PR migrates code from pyro.distributions.transforms and torch.distributions.transforms for parameterless bijectors.

    These are easy, so I wanted to port them all now!

    enhancement CLA Signed Merged 
    opened by stefanwebb 10
  • Empty params class: `flowtorch.params.Empty`

    This PR adds a flowtorch.params.Empty class that will be used for flowtorch.bijectors.FixedBijector bijectors like Sigmoid, Exp, etc. that don't have any learnable parameters.

    I have fixed a number of other things in order to get all the tests running!

    enhancement CLA Signed 
    opened by stefanwebb 10
  • Autoregressive Bijector type

    Motivation

    See #22 and #6.

    Changes proposed

    This PR implements a new bijectors.Autoregressive meta bijector. We then refactor bijectors.AffineAutoregressive as a class that inherits from bijectors.Affine and bijectors.Autoregressive.

    This change makes it easy to implement new autoregressive bijectors, such as spline and neural autoregressive flows: all you have to do is implement the corresponding element-wise operator and inherit from the two base classes.
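
    A rough sketch of that pattern (illustrative names only, with a trivial stand-in for the masked network, so this is not FlowTorch's actual class hierarchy):

    import torch


    class ElementwiseAffine:
        """Element-wise affine operator y = loc + exp(log_scale) * x."""
        def _op(self, x, loc, log_scale):
            return loc + log_scale.exp() * x

        def _inv_op(self, y, loc, log_scale):
            return (y - loc) * (-log_scale).exp()


    class Autoregressive:
        """Meta bijector: parameters for element i depend only on x[..., :i]."""
        def _params(self, x):
            # A real flow would use a masked (MADE-style) network here;
            # zeros keep the example runnable.
            return torch.zeros_like(x), torch.zeros_like(x)


    class AffineAutoregressive(ElementwiseAffine, Autoregressive):
        """A new autoregressive bijector only needs a new element-wise operator."""
        def forward(self, x):
            loc, log_scale = self._params(x)
            return self._op(x, loc, log_scale)


    y = AffineAutoregressive().forward(torch.randn(4, 3))  # identity here, since params are zero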

    CLA Signed Merged 
    opened by stefanwebb 9
  • Test that type hints are present for all Bijector classes' methods

    Motivation

    mypy is excellent for checking types and preventing bugs; however, it is not applied if type hints aren't declared for a function or method. Enforcing this via a unit test should lead to better code!

    Changes proposed

    I've written a unit test that will raise an exception when a method's arguments do not have type hints. I've also added stubs for additional tests on a Bijector/Params definition.
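
    A rough sketch of what such a test can look like (illustrative only, not the repo's actual test):

    import inspect


    def assert_methods_annotated(cls):
        """Raise if any public method of cls has an argument without a type hint."""
        for name, fn in inspect.getmembers(cls, inspect.isfunction):
            if name.startswith("_"):
                continue
            for pname, param in inspect.signature(fn).parameters.items():
                if pname == "self":
                    continue
                assert param.annotation is not inspect.Parameter.empty, (
                    f"{cls.__name__}.{name}: argument '{pname}' lacks a type hint"
                )


    class Good:
        def scale(self, x: float, factor: float = 2.0) -> float:
            return x * factor


    assert_methods_annotated(Good)  # passes; an unannotated argument would raise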

    CLA Signed unit tests 
    opened by stefanwebb 9
  • Fixes pypi release, configured against test pypi

    Summary: Separates out a PyPI release workflow based on GitHub releases (these create tags, so we don't get dev version numbers from setuptools_scm).

    Differential Revision: D28419348

    CLA Signed fb-exported Merged 
    opened by feynmanliang 9
  • CI installs stable PyTorch release

    Our CI was installing the nightly build of PyTorch, since 1.8.1 hadn't been released and we needed newly developed features in torch.distributions.constraints.

    Now that 1.8.1 is out, I have changed the config file to install the stable release.

    This PR contains the same changes as the flowtorch.params.Empty one so it can pass tests - @feynmanliang, could you please merge the other one first? Then only the relevant changes should appear here.

    CLA Signed Merged 
    opened by stefanwebb 9
  • Multivariate Bijectors Tutorial issue

    Issue Description

    The Multivariate Bijectors tutorial notebook has an issue: someone hit a keyboard interrupt and so it's not complete.

    Steps to Reproduce

    No steps to reproduce are needed; see the notebook straight from GitHub: https://github.com/facebookincubator/flowtorch/blob/main/tutorials/multivariate_bijections.ipynb (screenshot omitted).

    Expected Behavior

    Users should expect the tutorial to be complete.

    opened by maulberto3 1
  • Issue with log_prob values not exported to Cuda

    Issue Description

    Not able to get all the data onto the CUDA device. The problem occurs at 'loss = -dist_y.log_prob(data).mean()'. It looks like the data can't be transferred to the GPU. Do we need to register the data as a buffer and work around it?

    Error: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:1! (when checking argument for argument mat1 in method wrapper_addmm)

    Steps to Reproduce

    import torch  # data_train, t_steps, device, dist_y and optimizer are defined elsewhere

    dataset = torch.tensor(data_train, dtype=torch.float)
    trainloader = torch.utils.data.DataLoader(dataset, batch_size=1024)
    for steps in range(t_steps):
        step_loss = 0
        for i, data in enumerate(trainloader):
            data = data.to(device)  # inputs are moved to the GPU...
            if i == 0:
                print(data.shape)
            try:
                optimizer.zero_grad()
                loss = -dist_y.log_prob(data).mean()  # ...but this fails, likely because the flow's parameters are still on the CPU
                loss.backward()
                optimizer.step()
            except ValueError:
                print('Error')
                print('Skipping that batch')
    

    Expected Behavior

    Matrices should be computed on the CUDA device, without a conflict between data living in two different places.
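
    A sketch of the kind of setup that should avoid the error, assuming the problem is that the flow's parameters still live on the CPU, and assuming the API from the project's landing-page example (dist.Flow, bij.AffineAutoregressive):

    import torch
    import flowtorch.bijectors as bij
    import flowtorch.distributions as dist

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Base distribution created directly on the target device
    base_dist = torch.distributions.Independent(
        torch.distributions.Normal(torch.zeros(2, device=device),
                                   torch.ones(2, device=device)), 1)
    dist_y = dist.Flow(base_dist, bij.AffineAutoregressive())

    # Assuming Flow is an nn.Module, .to() moves its learnable parameters to the GPU
    dist_y = dist_y.to(device)

    data = torch.randn(1024, 2, device=device)
    loss = -dist_y.log_prob(data).mean()  # all tensors now share the same device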

    opened by bigmb 2
  • [WIP] Conv1x1

    Motivation

    Proposes a 1x1 convolution bijector.

    Test Plan

    from flowtorch.bijectors import Conv1x1Bijector
    import torch


    def test(LU_decompose, zero_init):
        c = Conv1x1Bijector(LU_decompose=LU_decompose, zero_init=zero_init)
        c = c(shape=torch.Size([10, 20, 20]))
        for p in c.parameters():
            p.data += torch.randn_like(p) / 5

        x = torch.randn(1, 10, 20, 20)
        y = c.forward(x)
        yp = y.detach_from_flow()
        xp = c.inverse(yp)
        assert (xp / x - 1).norm() < 1e-2  # the round trip should approximately recover x


    for LU_decompose in (True, False):
        for zero_init in (True, False):
            test(LU_decompose, zero_init)
    

    Important

    This PR is branched out from the coupling layer. I'll update the branch once the review of the coupling layer is completed.

    CLA Signed 
    opened by vmoens 0
  • Split Bijector

    Motivation

    We introduce the Split Bijector, which splits a tensor in half, processes one half through a sequence of transformations, and normalizes the other.

    Changes proposed

    The new class first splits the tensor, then passes the outputs to the _param_fn and then to the transform itself. The introduction of _forward_pre_ops and _inverse_pre_ops methods is necessary because, in the inverse case, we need to first pass the input through the transform's inverse before passing it through the convolutional layer that gives us the normalizing constants. This breaks the _param_fn(...) -> _inverse(...) logic, as we need to do something before _param_fn. As this might also be the case for the forward pass, we introduced a similar _forward_pre_ops method.

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [x] I have read the CONTRIBUTING document.
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    • [ ] The title of my pull request is a short description of the requested changes.
    CLA Signed 
    opened by vmoens 1
  • Split bijector

    A splitting bijector splits an input x into two equal parts, x1 and x2 (see, for instance, the Glow paper).

    Of those, only x1 is passed to the remaining part of the flow; x2, on the other hand, is "normalized" by a location and scale determined by x1. The transform usually looks like this:

    def _forward(self, x):
        x1, x2 = x.chunk(2, -1)
        loc, scale = some_parametric_fun(x1)
        x2 = (x2 - loc) / scale
        log_abs_det_jacobian = scale.reciprocal().log().sum()  # part of the jacobian that accounts for the transform of x2
        log_abs_det_jacobian += self.normal.log_prob(x2).sum()  # since x2 will disappear, we can include its prior log-lik here
        return x1, log_abs_det_jacobian
    

    The _inverse is done like this

    def _inverse(self, y):
        x1 = y
        loc, scale = some_parametric_fun(x1)
        x2 = torch.randn_like(x1)  # since we fit x2 to a gaussian in forward
        log_abs_det_jacobian = self.normal.log_prob(x2).sum()
        x2 = x2 * scale + loc
        log_abs_det_jacobian += scale.reciprocal().log().sum()
        return torch.cat([x1, x2], -1), log_abs_det_jacobian
    

    However, I personally find this coding very confusing. First and foremost, it messes with the logic y = flow(x) -> dist.log_prob(y). What if we don't want a normal? That seems orthogonal to the bijector's responsibility to me. Second, it includes in the LADJ a normal log-likelihood, which should come from the prior. Third, it makes the _inverse stochastic, when that should not be the case. Finally, it has an input of, say, dimension d and an output of d/2 (and conversely for _inverse).

    For some models (e.g. Glow), when generating data, we don't sample from a Gaussian with unit variance but from a Gaussian with a decreased temperature (e.g. a standard deviation of 0.9). With this logic, we'd have to tell every split layer in the flow to modify its self.normal scale!

    What I would suggest is this: we could use SplitBijector as a wrapper around another bijector, like so:

    class SplitBijector(Bijector):
        def __init__(self, bijector):
             ...
             self.bijector = bijector
    
        def _forward(self, x):
            x1, x2 = x.chunk(2, -1)
            loc, scale = some_parametric_fun(x1)
            y2 = (x2 - loc) / scale
            log_abs_det_jacobian = scale.reciprocal().log().sum()  # part of the jacobian that accounts for the transform of x2
            y1 = self.bijector.forward(x1)
            log_abs_det_jacobian += self.bijector.log_abs_det_jacobian(x1, y1)
            y = torch.cat([y1, y2], -1)  # concatenate along the same dimension used by chunk
            return y, log_abs_det_jacobian
    

    The _inverse would follow. Of course, the wrapped bijector must have the same input and output space! That way, we solve all of our problems: input and output spaces match, nothing weird happens with a nested normal log-density, the prior density is only evaluated outside of the bijector, and one can tweak it at will without worrying about what happens inside the bijector.

    enhancement 
    opened by vmoens 1
Releases
  • 0.8 (Apr 27, 2022)

    • Fixed a bug in distributions.Flow.parameters() where it returned duplicate parameters
    • Several tutorials converted from .mdx to .ipynb format in anticipation of new tutorial system
    • Removed yarn.lock
  • 0.7 (Apr 25, 2022)

    This release adds two new minor features.

    A new class, flowtorch.bijectors.Invert, can be used to swap the forward and inverse operators of a Bijector. This is useful for turning, for example, an Inverse Autoregressive Flow (IAF) into a Masked Autoregressive Flow (MAF).
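
    For example, something along these lines should work (a sketch only; the exact constructor arguments for Invert are assumed here):

    import torch
    import flowtorch.bijectors as bij
    import flowtorch.distributions as dist

    base_dist = torch.distributions.Independent(
        torch.distributions.Normal(torch.zeros(2), torch.ones(2)), 1)

    iaf = bij.AffineAutoregressive()              # fast sampling, slower log_prob
    maf = bij.Invert(bij.AffineAutoregressive())  # swapped: fast log_prob, slower sampling

    flow = dist.Flow(base_dist, maf)
    x = flow.sample(torch.Size([5]))
    print(flow.log_prob(x))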

    Bijector objects are now nn.Modules, which amongst other benefits allows easily saving and loading of state.

  • 0.6 (Mar 3, 2022)

    This small release fixes a bug in bijectors.ops.Spline where the sign of log(det(J)) was inverted for the .inverse method. It also fixes the unit tests so that they pick up this error in the future.

  • 0.5 (Feb 3, 2022)

    In this release, we add caching of intermediate values for Bijectors.

    What this means is that you can often reduce computation by calculating log|det(J)| at the same time as y = f(x). It's also useful for performing variational inference on Bijectors that don't have an explicit inverse. The mechanism by which this is achieved is a subclass of torch.Tensor called BijectiveTensor that bundles together (x, y, context, bijector, log_det_J).
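
    A small sketch of the kind of usage this enables (the method names here are assumed from their use elsewhere on this page, not verified against the release):

    import torch
    import flowtorch.bijectors as bij

    b = bij.AffineAutoregressive()(shape=torch.Size([3]))
    x = torch.randn(5, 3)
    y = b.forward(x)                    # y is a BijectiveTensor that remembers x, the bijector and log|det J|
    ldj = b.log_abs_det_jacobian(x, y)  # can reuse the cached value rather than recomputing
    x_plain = y.detach_from_flow()      # recover an ordinary torch.Tensor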

    Special shout out to @vmoens for coming up with this neat solution and taking the implementation lead! Looking forward to your future contributions 🥳

  • 0.4 (Nov 18, 2021)

    Implementations of Inverse Autoregressive Flow and Neural Spline Flow.

    Basic content for website.

    Some unit tests for bijectors and distributions.
