A python toolbox for predictive uncertainty quantification, calibration, metrics, and visualization

Overview

Website, Tutorials, and Docs     

 
Uncertainty Toolbox

A python toolbox for predictive uncertainty quantification, calibration, metrics, and visualization.
Also: a glossary of useful terms and a collection of relevant papers and references.

 
Many machine learning methods return predictions along with uncertainties of some form, such as distributions or confidence intervals. This raises several questions: How do we determine which predictive uncertainties are best? What does it mean to produce a best or ideal uncertainty? Are our uncertainties accurate and well calibrated?

Uncertainty Toolbox provides standard metrics to quantify and compare predictive uncertainty estimates, gives intuition for these metrics, produces visualizations of these metrics/uncertainties, and implements simple "re-calibration" procedures to improve these uncertainties. This toolbox currently focuses on regression tasks.

Toolbox Contents

Uncertainty Toolbox contains:

  • Glossary of terms related to predictive uncertainty quantification.
  • Metrics for assessing quality of predictive uncertainty estimates.
  • Visualizations for predictive uncertainty estimates and metrics.
  • Recalibration methods for improving the calibration of a predictor.
  • Paper list: publications and references on relevant methods and metrics.

Installation

Uncertainty Toolbox requires Python 3.6+. For a lightweight installation of the package only, run:

pip install git+https://github.com/uncertainty-toolbox/uncertainty-toolbox

For a full installation with examples and tests, run:

git clone https://github.com/uncertainty-toolbox/uncertainty-toolbox.git
cd uncertainty-toolbox
pip install -e .

To verify correct installation, you can run the test suite via:

source shell/run_all_tests.sh

Quick Start

import uncertainty_toolbox as uct

# Load an example dataset of 100 predictions, uncertainties, and ground truth values
predictions, predictions_std, y, x = uct.data.synthetic_sine_heteroscedastic(100)

# Compute all uncertainty metrics
metrics = uct.metrics.get_all_metrics(predictions, predictions_std, y)

This example computes metrics for a vector of predicted values (predictions) and associated uncertainties (predictions_std, a vector of standard deviations), taken with respect to a corresponding set of ground truth values y.
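The metrics returned by get_all_metrics are grouped by category, so they can also be used programmatically. A minimal sketch, assuming the return value is a nested dictionary of metric names to values (as in recent versions of the toolbox):

# Walk the returned metrics; exact group and metric names may vary by version.
for group_name, group in metrics.items():
    print(group_name)
    for metric_name, value in group.items():
        print(f"  {metric_name}: {value}")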

Colab notebook: You can also take a look at this Colab notebook, which walks through a use case of Uncertainty Toolbox.

Metrics

Uncertainty Toolbox provides a number of metrics to quantify and compare predictive uncertainty estimates. For example, the get_all_metrics function will return:

  1. average calibration: mean absolute calibration error, root mean squared calibration error, miscalibration area.
  2. adversarial group calibration: mean absolute adversarial group calibration error, root mean squared adversarial group calibration error.
  3. sharpness: expected standard deviation.
  4. proper scoring rules: negative log-likelihood, continuous ranked probability score, check score, interval score.
  5. accuracy: mean absolute error, root mean squared error, median absolute error, coefficient of determination, correlation.
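Each of these can also be computed individually. A minimal sketch, assuming the metric functions below are exposed at the package level (names taken from the current API and may differ between versions):

import uncertainty_toolbox as uct

# predictions, predictions_std, and y are the arrays from the Quick Start above.
# Average calibration metrics take (predicted means, predicted stds, ground truth).
mace = uct.mean_absolute_calibration_error(predictions, predictions_std, y)
rmsce = uct.root_mean_squared_calibration_error(predictions, predictions_std, y)
ma = uct.miscalibration_area(predictions, predictions_std, y)

# Sharpness depends only on the predicted standard deviations; the Gaussian
# negative log-likelihood is one of the proper scoring rules listed above.
sharp = uct.sharpness(predictions_std)
nll = uct.nll_gaussian(predictions, predictions_std, y)

print(mace, rmsce, ma, sharp, nll)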

Visualizations

The following plots are a few of the visualizations provided by Uncertainty Toolbox. See this example for code to reproduce these plots.

Overconfident (too little uncertainty)

Underconfident (too much uncertainty)

Well calibrated

And here are a few of the calibration metrics for the above three cases:

                    MACE       RMSCE      MA
Overconfident       0.19429    0.21753    0.19625
Underconfident      0.20692    0.23003    0.20901
Well calibrated     0.00862    0.01040    0.00865

(MACE = mean absolute calibration error, RMSCE = root mean squared calibration error, MA = miscalibration area.)
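To produce plots like these for your own model, the viz module provides plotting helpers. A minimal sketch (function names are assumed from the current API and may differ between versions):

import matplotlib.pyplot as plt
import uncertainty_toolbox as uct

# Synthetic data standing in for a model's predictive means and standard deviations.
pred_mean, pred_std, y, x = uct.data.synthetic_sine_heteroscedastic(100)

# Ordered prediction intervals and the average calibration curve.
uct.viz.plot_intervals_ordered(pred_mean, pred_std, y)
plt.show()

uct.viz.plot_calibration(pred_mean, pred_std, y)
plt.show()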

Recalibration

The following plots show the results of a recalibration procedure provided by Uncertainty Toolbox, which transforms a set of predictive uncertainties to improve average calibration. The algorithm is based on isotonic regression, as proposed by Kuleshov et al.

See this example for code to reproduce these plots.
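A minimal sketch of the recalibration workflow, assuming get_proportion_lists_vectorized and iso_recal are exposed at the package level as in recent versions of the toolbox:

import uncertainty_toolbox as uct

pred_mean, pred_std, y, x = uct.data.synthetic_sine_heteroscedastic(100)

# Expected vs. observed proportions of observations falling in centered intervals.
exp_props, obs_props = uct.get_proportion_lists_vectorized(pred_mean, pred_std, y)

# Fit an isotonic-regression recalibration model (Kuleshov et al.).
recal_model = uct.iso_recal(exp_props, obs_props)

# Average calibration before and after recalibration (the recal_model keyword
# is assumed to be accepted by the calibration metrics).
ma_before = uct.miscalibration_area(pred_mean, pred_std, y)
ma_after = uct.miscalibration_area(pred_mean, pred_std, y, recal_model=recal_model)
print(ma_before, ma_after)

Note that this sketch fits and evaluates the recalibration model on the same data; in practice the recalibration model should be fit on a held-out validation set.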

Recalibrating overconfident predictions

                        MACE       RMSCE      MA
Before Recalibration    0.19429    0.21753    0.19625
After Recalibration     0.01124    0.02591    0.01117

Recalibrating underconfident predictions

                        MACE       RMSCE      MA
Before Recalibration    0.20692    0.23003    0.20901
After Recalibration     0.00157    0.00205    0.00132

Contributing

We welcome and greatly appreciate contributions from the community! Please see our contributing guidelines for details on how to help out.

Citation

If you found this toolbox helpful, please cite the following paper:

@article{chung2021uncertainty,
  title={Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification},
  author={Chung, Youngseog and Char, Ian and Guo, Han and Schneider, Jeff and Neiswanger, Willie},
  journal={arXiv preprint arXiv:2109.10254},
  year={2021}
}

Additionally, here are papers that led to the development of the toolbox:

@article{chung2020beyond,
  title={Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification},
  author={Chung, Youngseog and Neiswanger, Willie and Char, Ian and Schneider, Jeff},
  journal={arXiv preprint arXiv:2011.09588},
  year={2020}
}

@article{tran2020methods,
  title={Methods for comparing uncertainty quantifications for material property predictions},
  author={Tran, Kevin and Neiswanger, Willie and Yoon, Junwoong and Zhang, Qingyang and Xing, Eric and Ulissi, Zachary W},
  journal={Machine Learning: Science and Technology},
  volume={1},
  number={2},
  pages={025006},
  year={2020},
  publisher={IOP Publishing}
}

Acknowledgments

Development of Uncertainty Toolbox is supported by several sponsoring organizations.


Comments
  • Use flit, conda-souschef, and grayskull to make PyPI/Anaconda uploads straightforward.

    #59 #58 Right now, I have a version of uncertainty_toolbox uploaded to PyPI and Anaconda.

    pip install uncertainty_toolbox
    
    conda install -c sgbaird uncertainty_toolbox
    

    The basic instructions (after some one-time setup) are to install flit (e.g. conda install flit), update the version number in uncertainty_toolbox/__init__.py and run:

    flit publish
    

    to upload a new version to PyPI. I probably need to add other people's usernames to PyPI so I'm not the only one that can upload new versions.

    For the Anaconda upload, install conda-souschef and grayskull, run a slightly customized script, and build it within the scratch folder.

    conda install conda-souschef grayskull
    python run_grayskull.py
    cd scratch
    conda build .
    

    For this, you probably have to set a few things with conda first, such as automatic uploads when building, credentials, and configuring it to look in certain channels. These are all one-time setup instructions. I also have some GitHub workflow code in mat_discover that can take care of the uploads (and testing) automatically when you make a new release. Just need to change a couple lines and add credentials to GitHub secrets.

    opened by sgbaird 7
  • Added more UTs for util.py for dev branch

    New added UTs consider:

    • function calls with no arguments for both assert_is_flat_same_shape() and assert_is_positive()

    • new UTs for assert_is_positive() function

    • based on the assert_is_positive() description, a UT with a 2D ndarray was included too.

    @willieneis @YoungseogChung @IanChar @HanGuo97

    Test execution output: (screenshot)

    opened by marcemq 5
  • Using this package on machine learning results

    Hi,

    Thanks for making this package available to us! I have a simple question:

    I have a data set split into train/validation sets. A regressor (or classifier) is trained, and then, using the validation set, I get an estimate Y_prediction. From the same validation set, I have Y_true. So, how would you suggest computing the standard deviation of predictions from a regressor (or classifier)?

    Many thanks,

    Ivan

    opened by ivan-marroquin 4
  • Installation on google colab

    Hello, I have tried to install the package in Google Colab according to your instructions, but it does not work. Would you help me?

    Thanks for the great package

    opened by dara1400 4
  • uncertainty quantification of a neural network

    Hi, I have a trained neural network. How can I use the Uncertainty Toolbox to quantify the uncertainty of the neural network? How should I calculate predictions_std (a vector of standard deviations)?

    opened by admodadmod 3
  • Added more UTs for util.py

    New added UTs consider:

    • function calls with no arguments for both assert_is_flat_same_shape() and assert_is_positive()
    • new UTs for assert_is_positive() function
    • based on the assert_is_positive() description, a UT with a 2D ndarray was included too.

    @willieneis @YoungseogChung @IanChar @HanGuo97

    Test execution output: (screenshot)

    opened by marcemq 2
  • Convert from symmetric confidence intervals to standard deviation

    I'm incorporating some code into another repository and wanted to check with some people who (very likely) have a more thorough statistics background than I do.

    Conversion from standard deviation to confidence intervals takes place in: https://github.com/uncertainty-toolbox/uncertainty-toolbox/blob/b2f342f6606d1d667bf9583919a663adf8643efe/uncertainty_toolbox/metrics_scoring_rule.py#L187

    What is the inverse conversion from symmetric confidence intervals back to a standard deviation (e.g., using scipy.stats.norm, though it doesn't have to be)?
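    For a Gaussian, the relationship I'm assuming between a symmetric central interval and the standard deviation looks like this (helper names are just for illustration):

    from scipy.stats import norm

    # A symmetric central interval with coverage p is mean +/- z * std,
    # where z = norm.ppf(0.5 + p / 2).
    def std_to_half_width(std, coverage=0.95):
        return norm.ppf(0.5 + coverage / 2) * std

    # The inverse conversion divides the half-width by the same z value.
    def half_width_to_std(half_width, coverage=0.95):
        return half_width / norm.ppf(0.5 + coverage / 2)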

    question 
    opened by sgbaird 2
  • "layman" definition of interval score using width and coverage

    I'm trying to figure out how to describe the math behind interval score in a simple way. Based on reading the paper and related literature, I get that it incorporates width and coverage. Is the idea that it penalizes the following two scenarios:

    • wide intervals
    • when many predictions fall outside of the intervals

    In other words, the interval should be as narrow as possible while still covering the observations (i.e., "crossing the parity line") as often as possible, correct? How does this play out in the math?
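    For reference, the standard interval score from the forecasting literature (Gneiting and Raftery, 2007) makes the two penalties explicit. A sketch for a single observation y and a central (1 - alpha) interval [lower, upper] (not necessarily the toolbox's exact implementation):

    def interval_score(y, lower, upper, alpha=0.05):
        # Width penalty: wider intervals always score worse.
        score = upper - lower
        # Coverage penalty: paid only when the observation falls outside the
        # interval, scaled by 2 / alpha so misses hurt more at high nominal coverage.
        if y < lower:
            score += (2.0 / alpha) * (lower - y)
        elif y > upper:
            score += (2.0 / alpha) * (y - upper)
        return score  # lower is better

    So the score is minimized by intervals that are as narrow as possible while still containing the observations, which matches the width-plus-coverage intuition above.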

    opened by sgbaird 2
  • MACE vs ECE

    Hello, thanks for sharing your code!

    I noticed that you implement the metric "MACE", while I was more familiar with the term "ECE". Is there a difference between the two?

    opened by tfjgeorge 2
  • Quantile Regression

    Hello!

    Thanks for this convenient tool! I was wondering how you would suggest using it for quantile regression. I see that a lot of functions take a predicted mean and std, but it would also be nice if there were support for quantiles in addition to mean + std. Is there a way to work around the current interface so that I can use the quantile intervals I get instead?
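    One workaround I've been considering (a pure Gaussian approximation, not part of the toolbox API) is to back out a mean and standard deviation from two symmetric quantile predictions and feed those into the existing functions:

    from scipy.stats import norm

    def quantiles_to_mean_std(q_lo, q_hi, lo=0.1587, hi=0.8413):
        # Under a Gaussian assumption, the p-th quantile is mean + norm.ppf(p) * std,
        # so two symmetric quantiles determine both parameters.
        std = (q_hi - q_lo) / (norm.ppf(hi) - norm.ppf(lo))
        mean = q_lo - norm.ppf(lo) * std
        return mean, std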

    opened by velezbeltran 1
  • [Feature Request] Option for bivariate distribution plots in lieu of scatter plots

    When dealing with "large" datasets of over 100 data points, scatter plots can become cluttered. Additionally, it's difficult to really distinguish points that are on top of each other. So a cluster of 1000 data points could look the same as a cluster of 100, but they mean very different things.

    Seaborn's bivariate displots can get around this issue by showing a density of points rather than each individual point. It'd be nice to have the option to make such plots for, say, plot_xy or plot_residuals_vs_stdevs.

    I acknowledge this would be a messy request, but it'd be a feature I'd use.
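    A minimal sketch of the kind of plot described, using seaborn's bivariate KDE in place of a scatter plot (synthetic data for illustration):

    import matplotlib.pyplot as plt
    import numpy as np
    import seaborn as sns

    rng = np.random.default_rng(0)
    y_true = rng.normal(size=1000)
    y_pred = y_true + rng.normal(scale=0.3, size=1000)

    # Density-based alternative to a parity scatter plot: overlapping points
    # show up as higher density rather than a single overplotted marker.
    sns.kdeplot(x=y_true, y=y_pred, fill=True)
    plt.xlabel("observed")
    plt.ylabel("predicted")
    plt.show()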

    opened by KevinTran-TRI 1
  • Make requirements even more lightweight.

    Some packages currently listed in the regular requirements are not actually needed there.

    1. Shapely is no longer needed. The imports in several files first need to be removed, and plot_calibration (in viz.py) needs to use the new calibration calculation code.
    2. Black and pytest can be moved to the dev requirements instead of the regular requirements.
    opened by IanChar 0
  • Pytorch GPU Acceleration

    @tailintalent found that many of the evaluation metrics can be greatly sped up with gpu support via pytorch. This code is currently in an experimental branch called "torch". We will keep it separate from the main package for now. In order to be merged into main we will need the following:

    • The code relying on torch needs to be separated out, because we do not want to make torch a mandatory install. This should be done in the most robust way possible, ideally by pinpointing the exact operations where torch provides a speedup rather than maintaining a torch version of the pre-existing code.
    • Set up an optional installation that allows users to select the torch-enabled version of the toolbox.
    • Add options to the functions that allow users to choose the torch version of a metric rather than the numpy version.
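    A minimal sketch of one common pattern for keeping torch optional (the function and layout below are hypothetical, not the toolbox's actual code):

    import numpy as np

    # Fall back to the numpy implementation when torch is not installed.
    try:
        import torch
        HAS_TORCH = True
    except ImportError:
        HAS_TORCH = False

    def mean_absolute_error(y_pred, y_true, use_torch=False):
        # Toy metric illustrating the numpy/torch dispatch idea.
        if use_torch and HAS_TORCH:
            y_pred_t = torch.as_tensor(y_pred)
            y_true_t = torch.as_tensor(y_true)
            return torch.mean(torch.abs(y_pred_t - y_true_t)).item()
        return float(np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_true))))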
    enhancement 
    opened by IanChar 1
  • Should the axes on the calibration curves be switched?

    I've had at least two different folks come to me very confused about the shapes of calibration curves. After a few minutes of discussion, it turned out the confusion arose because we plot observed values on the y-axis and predicted values on the x-axis. This is the opposite of conventional parity plots, where observations are on the x-axis and predictions are on the y-axis.

    What do folks think about flipping the axes? I know this would differ from the original Kuleshov paper's graphs (and also the paper that @willieneis and I worked on together), but part of me would rather plot what folks expect in order to minimize future confusion.

    opened by KevinTran-TRI 2
  • `Could not find lib geos_c.dll or load any of its variants []` when trying to import uncertainty_toolbox (shapely issue)

    After running:

    pip install git+https://github.com/uncertainty-toolbox/uncertainty-toolbox
    

    on Windows 10, VS Code, inside of conda environment, I get:

    Could not find lib geos_c.dll or load any of its variants [].
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\shapely\geos.py", line 54, in load_dll
        raise OSError(
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\shapely\geos.py", line 178, in <module>
        _lgeos = load_dll("geos_c.dll")
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\shapely\coords.py", line 10, in <module>
        from shapely.geos import lgeos
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\shapely\geometry\base.py", line 20, in <module>
        from shapely.coords import CoordinateSequence
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\shapely\geometry\__init__.py", line 4, in <module>
        from .base import CAP_STYLE, JOIN_STYLE
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\uncertainty_toolbox\metrics_calibration.py", line 10, in <module>
        from shapely.geometry import Polygon, LineString
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\uncertainty_toolbox\metrics.py", line 9, in <module>
        from uncertainty_toolbox.metrics_calibration import (
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\uncertainty_toolbox\__init__.py", line 9, in <module>
        from .metrics import (
      File "C:\Users\sterg\Documents\GitHub\sparks-baird\VickersHardnessPrediction\hv_prediction.py", line 21, in <module>
        import uncertainty_toolbox as uct
    

    The workaround was to install shapely from conda (see https://stackoverflow.com/questions/56813083/oserror-could-not-find-geos-c-dll-or-load-any-of-its-variants):

    conda install shapely
    

    For some reason (I'm not sure if this would be the same for everyone), I needed to uninstall and reinstall it one more time:

    conda uninstall shapely
    conda install shapely
    

    and it seems to work OK now.

    opened by sgbaird 0
Releases (v0.1.0)
  • v0.1.0 (Sep 22, 2021)

    Initial release v0.1.0 for Uncertainty Toolbox.

    Highlights

    • Metrics for assessing quality of predictive uncertainty estimates.
    • Visualizations for predictive uncertainty estimates and metrics.
    • Recalibration methods for improving the calibration of a predictor.
    • Website with a tutorial on how to use Uncertainty Toolbox.
    • Documentation and API reference for Uncertainty Toolbox.
    • Glossary of terms related to predictive uncertainty quantification.
    • Publications and references on relevant methods and metrics.
Owner
Uncertainty Toolbox
A python toolbox for predictive uncertainty quantification, calibration, metrics, and visualization.