AdaOPS

An implementation of AdaOPS (Adaptive Online Packing-guided Search), an online POMDP solver for problems defined with the POMDPs.jl generative interface. The AdaOPS paper was published at NeurIPS 2021.

If you are trying to use this package and require more documentation, please file an issue!

Installation

Press the ] key to enter the package manager mode of the Julia REPL, then execute the following commands.

pkg> add "POMDPs"
pkg> registry add "https://github.com/JuliaPOMDP/Registry.git"
pkg> add AdaOPS

Usage

using POMDPs, POMDPModels, POMDPSimulators, AdaOPS

pomdp = TigerPOMDP()

solver = AdaOPSSolver(bounds=IndependentBounds(-20.0, 0.0))
planner = solve(solver, pomdp)

for (s, a, o) in stepthrough(pomdp, planner, "s,a,o", max_steps=10)
    println("State was $s,")
    println("action $a was taken,")
    println("and observation $o was received.\n")
end

For minimal examples of problem implementations, see this notebook and the POMDPs.jl generative docs.

Solver Options

Solver options can be found in the AdaOPSSolver docstring and accessed using Julia's built-in documentation system (or directly in the solver source code). Each option has its own docstring and can be set with a keyword argument in the AdaOPSSolver constructor.
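
A minimal sketch of setting a couple of options (the values here are illustrative, not recommended defaults):

using AdaOPS

# Options are passed as keyword arguments to the constructor; their docstrings can be
# viewed in the REPL with `?AdaOPSSolver`.
solver = AdaOPSSolver(bounds=IndependentBounds(-20.0, 0.0),
                      delta=0.1,
                      m_min=30)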

Belief Packing

delta

A δ-packing of observation branches will be generated, i.e., belief nodes whose L1 distance is less than delta are merged.

Adaptive Particle Filter

The core idea of the adaptive particle filter is that it adjusts the number of particles adaptively, using more particles to estimate a belief when needed.

grid

grid is used to split the state space into multidimensional bins, so that KLD-Sampling can estimate the number of particles according to the number of occupied bins. First, a function for converting a state to a multidimensional vector should be implemented, i.e., Base.convert(::Type{SVector{D, Float64}}, ::S), where D is the dimension of the resulting vector. Then, a StateGrid is defined to discretize the state space. A StateGrid consists of a vector of cutpoints in each dimension; these cutpoints divide the whole space into small tiles. In each dimension, the grid is made up of intervals that are left-closed and right-open, with cutpoints as endpoints, except for the left-most interval. For example, a StateGrid can be defined as StateGrid([dim1_cutpoints], [dim2_cutpoints], [dim3_cutpoints]). All states that lie in the same tile are treated as identical. Given the number of tiles (bins) occupied, the number of particles is estimated using KLD-Sampling.
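
As a sketch, assuming a hypothetical two-dimensional state type MyState (the field names and cutpoints below are illustrative):

using StaticArrays, AdaOPS

struct MyState
    x::Float64
    y::Float64
end

# Map a state to a 2-dimensional vector so that KLD-Sampling can assign it to a bin.
Base.convert(::Type{SVector{2,Float64}}, s::MyState) = SVector{2,Float64}(s.x, s.y)

# Cutpoints at 1.0, 2.0, ..., 9.0 in each dimension split the region [0, 10) x [0, 10)
# into a 10 x 10 grid of tiles.
grid = StateGrid(collect(1.0:9.0), collect(1.0:9.0))

solver = AdaOPSSolver(grid=grid, bounds=IndependentBounds(-20.0, 0.0))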

max_occupied_bins

max_occupied_bins is the maximum number of bins occupied by a belief. Normally, it is exactly the grid size. However, in some domains, such as Roomba, only states within the room are accessible, so the corresponding bins will never be occupied.

min_occupied_bins

min_occupied_bins is the minimum number of bins occupied by a belief. Normally, it defaults to 2. A belief occupying min_occupied_bins tiles will be estimated with m_min particles. Increasing min_occupied_bins means a belief needs to occupy more bins in order to be estimated with the same number of particles.

m_min

m_min is the minimum number of particles used for approximating beliefs.

m_max

m_max is the maximum number of particles used for approximating a belief. Normally, m_max is set large enough that KLD-Sampling determines the number of particles used. When KLD-Sampling is disabled, i.e. grid=StateGrid(), m_max particles are sampled during resampling.
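
For example, a sketch of disabling KLD-Sampling so that a fixed m_max particles are used (the values are illustrative):

# With an empty StateGrid, KLD-Sampling is disabled and m_max particles are drawn at
# every resampling step.
solver = AdaOPSSolver(grid=StateGrid(), m_max=100, bounds=IndependentBounds(-20.0, 0.0))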

zeta

zeta is the target error when estimating a belief. Specifically, KLD-Sampling is used to calculate the number of particles needed, where zeta is the target Kullback-Leibler divergence between the estimated belief and the true belief. In AdaOPS, zeta is automatically adjusted according to the minimum number of occupied bins such that the minimum number of particles suggested by KLD-Sampling is exactly m_min.
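
A sketch combining the adaptive particle filter options above (the values are illustrative assumptions, not tuned settings):

solver = AdaOPSSolver(bounds=IndependentBounds(-20.0, 0.0),
                      grid=grid,   # a StateGrid such as the one defined above
                      m_min=30,    # at least 30 particles per belief
                      m_max=200,   # at most 200 particles per belief
                      zeta=0.1)    # target KLD error; usually left to be auto-adjusted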

Bounds

Dependent bounds

The bound passed into AdaOPSSolver can be a function of the form lower_bound, upper_bound = f(pomdp, wpf_belief), or any other object for which an AdaOPS.bounds(obj::OBJECT, pomdp::POMDP, b::WPFBelief, max_depth::Int, bounds_warning::Bool) function is implemented.
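
A minimal sketch of such a function, assuming a known per-step reward range (the numbers here are placeholders):

using POMDPs, AdaOPS

function my_bounds(pomdp::POMDP, b::WPFBelief)
    rmin, rmax = -10.0, 10.0              # assumed reward range, for illustration only
    γ = discount(pomdp)
    return rmin / (1 - γ), rmax / (1 - γ) # loose lower and upper bounds on the value
end

solver = AdaOPSSolver(bounds=my_bounds)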

Independent bounds

In most cases, the recommended way to specify bounds is with an IndependentBounds object, i.e.

AdaOPSSolver(bounds=IndependentBounds(lower, upper))

where lower and upper are each either a number, a function, or some other object (see below).

Often, the lower bound is calculated with a default policy; this can be accomplished with a PORollout, FORollout, or RolloutEstimator. For in-depth details, please refer to BasicPOMCP. Note that when mixing the Rollout structs from this package with those from BasicPOMCP, you should prefix the struct name with the module name.

Both the lower and upper bounds can be initialized with value estimates using FOValue or POValue. FOValue supports any offline MDP Solver or Policy; POValue supports any offline POMDP Solver or Policy.
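
For instance, a sketch using an offline MDP solution of the fully observable problem as an upper bound (assuming DiscreteValueIteration is installed):

using DiscreteValueIteration

upper = FOValue(ValueIterationSolver())
solver = AdaOPSSolver(bounds=IndependentBounds(-20.0, upper))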

If lower or upper is a function, it should accept two arguments: the POMDP object and a WPFBelief. To access the state particles in a WPFBelief b, use particles(b); to access the corresponding particle weights, use weights(b). All AbstractParticleBelief APIs are supported for WPFBelief. More details can be found in the solver source code.
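
A sketch of a function bound that uses the particle weights (heuristic_value is a hypothetical helper, not part of AdaOPS):

function upper(pomdp::POMDP, b::WPFBelief)
    # Weighted average of a state-dependent heuristic over the particles.
    total = sum(w * heuristic_value(pomdp, s) for (s, w) in zip(particles(b), weights(b)))
    return total / weight_sum(b)
end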

If an object o is passed in, AdaOPS.bound(o, pomdp::POMDP, b::WPFBelief, max_depth::Int) will be called.

In most cases, the check_terminal and consistency_fix_thresh keyword arguments of IndependentBounds should be used to add robustness (see the IndependentBounds docstring for more info). When using rollout-based bounds, you can specify the max_depth keyword argument to set the maximum depth of the rollout.
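
A sketch of a more defensive bounds setup (the values are illustrative; max_depth is assumed here to be a keyword of IndependentBounds that caps the rollout depth):

# RandomPolicy comes from POMDPPolicies; pomdp is the problem being solved.
bounds = IndependentBounds(FORollout(RandomPolicy(pomdp)), 0.0,
                           check_terminal=true,
                           consistency_fix_thresh=1e-5,
                           max_depth=20)
solver = AdaOPSSolver(bounds=bounds)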

Example

For the BabyPOMDP from POMDPModels, bounds setup might look like this:

using POMDPModels
using POMDPPolicies
using ParticleFilters # for particles(b)
using BasicPOMCP
using AdaOPS

always_feed = FunctionPolicy(b->true)
lower = FORollout(always_feed)

function upper(pomdp::BabyPOMDP, b::WPFBelief)
    if all(s==true for s in particles(b)) # all particles are hungry
        return pomdp.r_hungry # the baby is hungry this time, but then becomes full magically and stays that way forever
    else
        return 0.0 # the baby magically stays full forever
    end
end

solver = AdaOPSSolver(bounds=IndependentBounds(lower, upper))

Visualization

D3Trees.jl can be used to visualize the search tree, for example

using POMDPs, POMDPModels, POMDPModelTools, D3Trees, AdaOPS

pomdp = TigerPOMDP()

solver = AdaOPSSolver(bounds=IndependentBounds(-20.0, 0.0), tree_in_info=true)
planner = solve(solver, pomdp)
b0 = initialstate(pomdp)

a, info = action_info(planner, b0)
inchrome(D3Tree(info[:tree], init_expand=5))

will create an interactive tree.

Analysis

Two utilities, info_analysis and hist_analysis, are provided for getting a sense of how the algorithm is working. info_analysis takes the information returned from action_info(planner, b0). It first visualizes the tree if the tree_in_info option is turned on, and then shows stats such as the number of nodes expanded, the total number of explorations, the average number of observation branches, and so on. hist_analysis takes the hist returned by the HistoryRecorder simulator and shows similar stats to info_analysis, but in the form of figures. Note that HistoryRecorder stores the tree of every single step, which makes it memory-intensive. An example is shown below.

using POMDPs, AdaOPS, RockSample, POMDPSimulators, ParticleFilters, POMDPModelTools

m = RockSamplePOMDP(11, 11)

b0 = initialstate(m)
s0 = rand(b0)

bound = AdaOPS.IndependentBounds(FORollout(RSExitSolver()), FOValue(RSMDPSolver()), check_terminal=true, consistency_fix_thresh=1e-5)

solver = AdaOPSSolver(bounds=bound,
                      delta=0.3,
                      m_min=30,
                      m_max=200,
                      tree_in_info=true,
                      num_b=10_000)

adaops = solve(solver, m)
a, info = action_info(adaops, b0)
info_analysis(info)

num_particles = 30000
@time hist = simulate(HistoryRecorder(max_steps=90), m, adaops, SIRParticleFilter(m, num_particles), b0, s0)
hist_analysis(hist)
@show undiscounted_reward(hist)

Reference

@inproceedings{wu2021adaptive,
  title={Adaptive Online Packing-guided Search for POMDPs},
  author={Wu, Chenyang and Yang, Guoyu and Zhang, Zongzhang and Yu, Yang and Li, Dong and Liu, Wulong and others},
  booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
  year={2021}
}