Bayesian Neural Networks in PyTorch

Overview

We present a new scheme for computing Monte Carlo estimators in Bayesian variational inference (VI) settings with almost no GPU memory cost, regardless of the number of samples. Our method is described in the UAI 2021 paper: "Graph Reparameterizations for Enabling 1000+ Monte Carlo Iterations in Bayesian Deep Neural Networks".

In addition, we provide an implementation framework to make your deterministic network Bayesian in PyTorch.

If you like our work, please give the repository a star. If you use our code in your research, please cite the paper above.

Bayesify your Neural Network

There are three main files that help you Bayesify your deterministic network:

  1. bayes_layers.py - contains Bayesian implementations of convolutional (1d, 2d, 3d, transpose) and linear layers, with the approximate posterior taken from the location-scale family, i.e. a distribution with the two parameters mu and sigma. The definitions are general and independent of the specific distribution, as long as it is parameterized by mu and sigma. Each layer uses the forward method defined in vi_posteriors.py. One of the main arguments of the redefined classes is approx_post, which selects the posterior class from vi_posteriors.py. Specify this name exactly as the class is defined in vi_posteriors.py: for example, if vi_posteriors.py contains a class Gaus, then approx_post='Gaus', as sketched below.
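
For example, a Bayesian convolutional layer might be constructed like this (a minimal sketch; approx_post is the only Bayesian argument documented here, and the remaining arguments are assumed to mirror torch.nn.Conv2d):

import bayes_layers as bl

# Select the Gaus posterior from vi_posteriors.py via approx_post;
# the other arguments follow the usual torch.nn.Conv2d signature.
conv = bl.Conv2d(3, 64, kernel_size=3, padding=1, approx_post='Gaus')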

  2. vi_posteriors.py - describes the forward method, including the KL term, for different approximate posterior distributions. The current implementation contains the following distributions:

  • Radial
  • Gaus

If you would like to implement your own distribution class, copy one of the classes defined in vi_posteriors.py and redefine the following functions: forward(obj, x, fun="") and get_kl(obj, n_mc_iter, device). A minimal skeleton is sketched below.
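
The sketch below shows the expected shape of such a class. The bodies are illustrative only, and they assume the layer object obj exposes its location and scale parameters as mu and sigma (the actual attribute names may differ):

import torch
import torch.nn.functional as F

class MyDist:
    @staticmethod
    def forward(obj, x, fun=""):
        # Draw a reparameterized weight sample: w = mu + sigma * eps
        eps = torch.randn_like(obj.mu)
        w = obj.mu + obj.sigma * eps
        out = getattr(F, fun)(x, w)  # e.g. fun="linear" or fun="conv2d"
        return out, MyDist.get_kl(obj, 1, x.device)

    @staticmethod
    def get_kl(obj, n_mc_iter, device):
        # Closed-form KL(q || N(0, I)) as a placeholder; replace it with the
        # KL (or an n_mc_iter-sample MC estimate) for your own distribution.
        var = obj.sigma ** 2
        return 0.5 * (var + obj.mu ** 2 - 1.0 - torch.log(var)).sum()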

It also contains a useful Utils class, which provides:

  • loss functions:
    • get_loss_categorical
    • get_loss_normal
  • beta coefficients for the KL term: get_beta
  • turning computation of the KL term on/off via set_compute_kl. This is useful during testing/evaluation, when the KL term does not need to be computed, and it speeds up the computation (see the sketch after this list).
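
For example, the KL computation might be switched off at evaluation time along the following lines; the exact signature of set_compute_kl is an assumption here, so check vi_posteriors.py for the actual one:

import vi_posteriors as vi

model.eval()
vi.Utils.set_compute_kl(False)  # assumed signature: skip the KL term while evaluating
output, _ = model(inputs)
vi.Utils.set_compute_kl(True)   # re-enable before resuming training
model.train()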

Below is an example of how to Bayesify your own network. Note the forward method, which handles the case where a layer is not of a Bayesian type and therefore does not return a KL term, e.g. ReLU(x).

import torch.nn as nn
import bayes_layers as bl  # provides the Bayesian layer definitions

class YourBayesNet(nn.Module):
    def __init__(self, num_classes, in_channels,
                 **bayes_args):
        super(YourBayesNet, self).__init__()
        self.conv1 = bl.Conv2d(in_channels, 64,
                               kernel_size=11, stride=4,
                               padding=5,
                               **bayes_args)
        # assumes the flattened conv output has 1*1*64 features
        self.classifier = bl.Linear(1*1*64,
                                    num_classes,
                                    **bayes_args)
        self.layers = [self.conv1, nn.ReLU()]

    def forward(self, x):
        kl = 0
        for layer in self.layers:
            tmp = layer(x)
            if isinstance(tmp, tuple):
                # Bayesian layers return (output, kl)
                x, kl_ = tmp
                kl += kl_
            else:
                # deterministic layers, e.g. nn.ReLU, return only the output
                x = tmp

        x = x.view(x.size(0), -1)
        logits, _kl = self.classifier(x)
        kl += _kl

        return logits, kl
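
The network can then be instantiated with the Bayesian arguments forwarded to every layer, for example (a hypothetical call; approx_post is the only argument documented above):

model = YourBayesNet(num_classes=10, in_channels=3, approx_post='Gaus')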

Then, in the main file during training, you can either use one of the loss functions defined in Utils, as follows:

import vi_posteriors as vi

output, kl = model(inputs)
kl = kl.mean()  # average if the minibatch is split across several GPUs

loss, _ = vi.Utils.get_loss_categorical(kl, output, targets, beta=beta)
# loss, _ = vi.Utils.get_loss_normal(kl, output, targets, beta=beta)
loss.backward()

or design your own, e.g.

loss = kl_coef*kl - loglikelihood
loss.backward()
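
Putting the pieces together, a minimal training loop might look like the following sketch (the optimizer, the data loader, num_epochs, and the value of beta are standard PyTorch boilerplate assumed here):

import torch
import vi_posteriors as vi

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(num_epochs):
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        output, kl = model(inputs)
        kl = kl.mean()
        loss, _ = vi.Utils.get_loss_categorical(kl, output, targets, beta=beta)
        loss.backward()
        optimizer.step()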

  3. uncertainty_estimate.py - describes a set of functions for uncertainty estimation, e.g.
  • get_prediction_class - returns the most common class across MC iterations (the idea is sketched after this list)
  • summary_class - creates a summary file with statistics
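
For illustration, the most-common-class logic behind get_prediction_class can be re-implemented in a few lines; this is a standalone sketch, not the repository's actual API:

import torch

def most_common_class(model, x, n_iter=100):
    # Run n_iter stochastic forward passes; Bayesian layers resample
    # their weights on every pass, so the predictions vary.
    preds = []
    with torch.no_grad():
        for _ in range(n_iter):
            logits, _ = model(x)
            preds.append(logits.argmax(dim=1))
    preds = torch.stack(preds)       # shape: (n_iter, batch)
    return preds.mode(dim=0).values  # most frequent class per input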

Currently implemented networks for different problems

Classification

The script bayesian_dnn_class/main.py is the main executable, and all standard DNN models are located in bayesian_dnn_class/models:

  • AlexNet
  • Fully Connected
  • DenseNet
  • ResNet
  • VGG
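
Training a Bayesian classifier then starts from this entry point (any command-line arguments the script accepts are not documented here):

python bayesian_dnn_class/main.py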