Image-to-image regression with uncertainty quantification in PyTorch

Overview

im2im-uq

A platform for image-to-image regression with rigorous, distribution-free uncertainty quantification.


An algorithmic MRI reconstruction with uncertainty. A rapidly acquired but undersampled MR image of a knee (A) is fed into a model that predicts a sharp reconstruction (B) along with a calibrated notion of uncertainty (C). In (C), red means high uncertainty and blue means low uncertainty. Wherever the reconstruction contains hallucinations, the uncertainty is high; see the hallucination in the image patch (E), which has high uncertainty in (F), and does not exist in the ground truth (G).

Summary

This repository provides a convenient way to train deep-learning models in PyTorch for image-to-image regression---any task where the input and output are both images---along with rigorous uncertainty quantification. The uncertainty quantification takes the form of an interval for each pixel that is guaranteed to contain most true pixel values with high probability, no matter the choice of model or dataset (it is a risk-controlling prediction set). The training pipeline is built to handle more than one GPU, and all training and calibration should run automatically.
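In rough notation (see the paper for the exact conditions), write T_λ̂ for the calibrated interval-valued predictor and ℓ for the loss measuring the fraction of ground-truth pixel values that fall outside their intervals. The guarantee then reads

\mathbb{P}\Big( \mathbb{E}\big[\, \ell\big(\mathcal{T}_{\hat{\lambda}}(X),\, Y\big) \,\big] \le \alpha \Big) \ge 1 - \delta,

where alpha is the user-chosen risk level, delta is the error level, and the outer probability is over the calibration data used to select λ̂.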

The basic workflow is

  • Define your dataset in core/datasets/.
  • Create a folder for your experiment experiments/new_experiment, along with a file experiments/new_experiment/config.yml defining the model architecture, hyperparameters, and method of uncertainty quantification. You can use experiments/fastmri_test/config.yml as a template.
  • Edit core/scripts/router.py to point to your data directory.
  • From the root folder, run wandb sweep experiments/new_experiment/config.yml, and run the resulting sweep.
  • After the sweep is complete, models will be saved in experiments/new_experiment/checkpoints, the metrics will be printed to the terminal, and outputs will be in experiments/new_experiment/output/. See experiments/fastmri_test/plot.py for an example of how to make plots from the raw outputs.

Following this procedure will train one or more models (depending on config.yml) that perform image-to-image regression with rigorous uncertainty quantification.

There are two pre-baked examples that you can run on your own after downloading the open-source data: experiments/fastmri_test/config.yml and experiments/temca_test/config.yml. The third pre-baked example, experiments/bsbcm_test/config.yml, relies on data collected at Berkeley that has not yet been publicly released (but will be soon).

Paper

Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging

@article{angelopoulos2022image,
  title={Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging},
  author={Angelopoulos, Anastasios N and Kohli, Amit P and Bates, Stephen and Jordan, Michael I and Malik, Jitendra and Alshaabi, Thayer and Upadhyayula, Srigokul and Romano, Yaniv},
  journal={arXiv preprint arXiv:2202.05265},
  year={2022}
}

Installation

You will need to execute

conda env create -f environment.yml
conda activate im2im-uq

You will also need to go through the Weights and Biases setup process that initiates when you run your first sweep. You may need to make an account on their website.

Reproducing the results

FastMRI dataset

  • Download the FastMRI dataset to your machine and unzip it. We worked with the knee_singlecoil_train dataset.
  • Edit Line 71 of core/scripts/router.py to point to your local copy of the dataset.
  • From the root folder, run wandb sweep experiments/fastmri_test/config.yml
  • After the run is complete, cd into experiments/fastmri_test and run python plot.py to plot the results.

TEMCA2 dataset

  • Download the TEMCA2 dataset to your machine and unzip it. We worked with sections 3501 through 3839.
  • Edit Line 78 of core/scripts/router.py to point to your local copy of the dataset.
  • From the root folder, run wandb sweep experiments/temca_test/config.yml
  • After the run is complete, cd into experiments/temca_test and run python plot.py to plot the results.

Adding a new experiment

If you want to extend this code to a new experiment, you will need to write some code compatible with our infrastructure. If you are adding a new dataset, you will need to write a valid PyTorch dataset object; if you are adding a new model architecture, you will need to specify it; and so on. Usually, you will want to start by creating a folder experiments/new_experiment along with a config file experiments/new_experiment/config.yml. The easiest way is to start from an existing config, like experiments/fastmri_test/config.yml.

Adding new datasets

To add a new dataset, use the following procedure.

  • Download the dataset to your machine.
  • In core/datasets, make a new folder for your dataset core/datasets/new_dataset.
  • Make a valid PyTorch Dataset class for your new dataset. The most critical part is writing a __getitem__ method that returns an image-image pair in CxHxW order; see core/datasets/bsbcm/BSBCMDataset.py for a simple example, or the illustrative sketch after this list.
  • Make a file core/datasets/new_dataset/__init__.py and export your dataset by adding the line from .NewDataset import NewDatasetClass (substituting your filename and class name appropriately).
  • Edit core/scripts/router.py to load your new dataset, near Line 64, following the pattern therein. You will also need to import your dataset object.
  • Populate your new config file experiments/new_experiment/config.yml with the correct directories and experiment name.
  • Execute wandb sweep experiments/new_experiment/config.yml and proceed as normal!
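For orientation, here is a minimal sketch of such a dataset class. Everything here is illustrative: the class name, the path-list constructor, and the use of .npy files via np.load are assumptions, not the repo's actual interface.

import numpy as np
import torch
from torch.utils.data import Dataset

class NewDatasetClass(Dataset):
    """Minimal image-to-image dataset sketch. The .npy loading is an assumption;
    substitute whatever reader your data needs (PIL, h5py, tifffile, ...)."""
    def __init__(self, input_paths, target_paths):
        assert len(input_paths) == len(target_paths)
        self.input_paths = input_paths    # paths to input images
        self.target_paths = target_paths  # paths to ground-truth images

    def __len__(self):
        return len(self.input_paths)

    def __getitem__(self, idx):
        x = np.load(self.input_paths[idx]).astype(np.float32)
        y = np.load(self.target_paths[idx]).astype(np.float32)
        # Return CxHxW tensors; add a channel axis if the arrays are HxW.
        x = torch.from_numpy(x)
        y = torch.from_numpy(y)
        if x.ndim == 2:
            x = x.unsqueeze(0)
        if y.ndim == 2:
            y = y.unsqueeze(0)
        return x, y

Wrapped in a torch.utils.data.DataLoader, such a dataset should yield BxCxHxW batches of (input, target) pairs.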

Adding new models

In our system, there are two parts to a model---the base architecture, which we call a trunk (e.g. a U-Net), and the final layer. Defining a trunk is as simple as writing a regular PyTorch nn.Module and adding it near Line 87 of core/scripts/router.py (you will also need to import it); see core/models/trunks/unet.py for an example.
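As a point of reference, a trunk can be as small as the following sketch (the class name and layer sizes are made up; the repo's real example is the U-Net in core/models/trunks/unet.py):

import torch.nn as nn

class TinyTrunk(nn.Module):
    """Illustrative trunk: a small convolutional stack standing in for a U-Net."""
    def __init__(self, in_channels=1, hidden=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x):
        # Returns a feature map; the final layer maps it to predictions.
        return self.body(x)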

The process for adding a final layer is a bit more involved. The final layer is simply a PyTorch nn.Module, but it must also come with two functions: a loss function and a nested prediction set function. See core/models/finallayers/quantile_layer.py for an example; a hedged sketch of all three pieces also follows the list below. The steps are:

  • Create a final layer nn.Module object. The final layer should also have a heuristic notion of uncertainty built in, like quantile outputs.
  • Specify the loss function used to train a network with this final layer.
  • Specify a nested prediction set function that uses the output of the final layer to form a prediction set. The prediction set should scale up and down with a free factor lam, which will later be calibrated. The function should have the same prototype as the one on Line 34 of core/models/finallayers/quantile_layer.py.
  • After creating the new final layer and related functions, add it to core/models/add_uncertainty.py as in Line 59.
  • Edit experiments/new_experiment/config.yml to include your new final layer, and run the sweep as normal!
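To make the three required pieces concrete, here is a hedged sketch of a quantile-style final layer with its loss and nested prediction set functions. The names, signatures, and the exact interval parameterization are illustrative assumptions; match the real prototypes in core/models/finallayers/quantile_layer.py when implementing.

import torch
import torch.nn as nn

class QuantileFinalLayer(nn.Module):
    """Illustrative final layer predicting (lower quantile, point estimate, upper quantile)."""
    def __init__(self, in_channels, out_channels=1):
        super().__init__()
        self.head = nn.Conv2d(in_channels, 3 * out_channels, kernel_size=1)

    def forward(self, features):
        lower, prediction, upper = torch.chunk(self.head(features), 3, dim=1)
        return lower, prediction, upper

def quantile_loss(output, target, q_lo=0.05, q_hi=0.95):
    """Pinball losses for the two interval endpoints plus an L1 term for the point estimate."""
    lower, prediction, upper = output
    def pinball(q, pred):
        diff = target - pred
        return torch.mean(torch.maximum(q * diff, (q - 1) * diff))
    return pinball(q_lo, lower) + pinball(q_hi, upper) + torch.mean(torch.abs(prediction - target))

def nested_sets(output, lam):
    """Grow or shrink the heuristic interval around the point prediction by the factor lam.
    The clamp guards against crossed quantiles; the parameterization is an assumption."""
    lower, prediction, upper = output
    return (prediction - lam * (prediction - lower).clamp(min=0),
            prediction + lam * (upper - prediction).clamp(min=0))

Because the interval is nested in lam (larger lam always gives a wider interval), the calibration step can search over lam to find the smallest value that controls the risk.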