Code for replicating the experiments from the paper LFI in SSMs with Unknown Dynamics.

Overview

Likelihood-Free Inference in State-Space Models with Unknown Dynamics

This package contains the code required to run the experiments in the paper. The simulators used for the state-space models in the experiments are implemented as models for the Engine for Likelihood-free Inference (ELFI).

Installation

We recommend using an Anaconda environment. To create and activate the conda environment with all dependencies installed, run:

conda create -c conda-forge --name env --file lfi-requirements.txt
conda activate env
pip install -e .
pip install sbi blitz-bayesian-pytorch stable_baselines3
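
After installation, the main dependencies can be checked with a quick one-liner (a minimal sanity check, assuming the installed packages expose __version__ attributes):

python3 -c "import elfi, sbi, torch; print(elfi.__version__, sbi.__version__, torch.__version__)"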

For the GP-SSM and PR-SSM methods, we recommend creating a separate environment and installing TensorFlow in it. Then clone the 'custom_multioutput' branch of GPflow from https://github.com/ialong/GPflow and install it. Once GPflow is installed, clone GPt from https://github.com/ialong/GPt and execute 'experiments/run_gpssms.py'; the script runs 30 repetitions of the experiments with tractable likelihoods. A possible command sequence is sketched below.
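
The following sketch shows one such sequence; the Python and TensorFlow versions and the editable installs are assumptions rather than requirements pinned by those repositories, so adjust them as needed:

conda create --name gpssm-env python=3.7
conda activate gpssm-env
pip install tensorflow==1.15          # assumed GPflow-1.x-compatible version
git clone --branch custom_multioutput https://github.com/ialong/GPflow.git
pip install -e ./GPflow
git clone https://github.com/ialong/GPt.git
cd GPt
python experiments/run_gpssms.py      # 30 repetitions of the tractable-likelihood experiments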

Running the experiments

The experiment scripts can be found in the 'experiments/' folder. To run the experiments on one of the considered SSMs, run the 'run_experiment.py' script with the following arguments (options in parentheses):

--sim ('lgssm', 'toy', 'sv', 'umap', 'gaze'): the simulator to run;
--meth ('bnn', 'qehvi', 'blr', 'SNPE', 'SNLE', 'SNRE'): the inference method;
--seed: the random seed;
--budget: the simulation budget available for each new state;
--tasks: the number of tasks considered, i.e. the moving-window size for the LMC-BNN, LMC-qEHVI, and LMC-BLR methods.

For instance:

python3 experiments/run_experiment.py --sim=lgssm --meth=bnn --seed=0 --budget=2 --tasks=2

The results are saved in the corresponding folders 'experiments/[sim]/[meth]-w[tasks]-s[budget]/'. To build the plots and output the results, run the 'collect_results.py' script with the following arguments: --type ('inf' to evaluate state-inference quality, 'traj' to evaluate the generated trajectories) and --tasks (the number of tasks used by the methods). For example:

python3 experiments/collect_results.py --type=inf --tasks=2

The plots with experiment results will be stored in 'experiments/plots'.

Implementing custom simulators

The simulators for all experiments can be found in 'elfi/examples'. The implementations used in the paper are gaze_selection.py, umap_tasks.py, LGSSM.py (LG), dynamic_toy_model.py (NN), and stochastic_volatility.py (SV). To create a new SSM, implement a class that inherits from elfi.DynamicProcess and provides a custom generating function for the observations together with create_model() and update_dynamic() methods; a sketch is given below.
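
As an illustration, here is a minimal, hypothetical sketch of such a class. The generating function, the attributes self.state and self.observed, and the node names are assumptions based on the description above, not the repository's exact elfi.DynamicProcess API.

import numpy as np
import elfi


class RandomWalkSSM(elfi.DynamicProcess):
    """Hypothetical 1-D random-walk SSM, used only to illustrate the interface."""

    @staticmethod
    def generating_function(theta, batch_size=1, random_state=None):
        # Generating function for observations: noisy measurement of the latent state.
        random_state = random_state or np.random
        return theta + random_state.normal(0, 0.1, size=(batch_size, 1))

    def create_model(self):
        # Build the ELFI model for the current state: prior, simulator and distance nodes.
        m = elfi.ElfiModel()
        theta = elfi.Prior('norm', 0, 1, model=m, name='theta')
        sim = elfi.Simulator(self.generating_function, theta,
                             observed=self.observed, name='sim')
        elfi.Distance('euclidean', sim, name='d')
        return m

    def update_dynamic(self):
        # Advance the latent dynamics by one time step (here, a simple random walk).
        self.state = self.state + np.random.normal(0, 0.05)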

The code for all methods can be found in 'elfi/methods/dynamic_parameter_inference.py' and 'elfi/methods/bo/mogp.py'.

Citation

