Robust, modular and efficient implementation of advanced Hamiltonian Monte Carlo algorithms

Overview

AdvancedHMC.jl

AdvancedHMC.jl provides a robust, modular and efficient implementation of advanced HMC algorithms. An illustrative example of its usage is given below. AdvancedHMC.jl is part of Turing.jl, a probabilistic programming library in Julia. If you are interested in using AdvancedHMC.jl through a probabilistic programming language, please check it out!

Interfaces

  • IMP.hmc: an experimental Python module for the Integrative Modeling Platform, which uses AdvancedHMC in its backend to sample protein structures.

NEWS

  • We presented a paper for AdvancedHMC.jl at AABI 2019 in Vancouver, Canada. (abs, pdf, OpenReview)
  • We presented a poster for AdvancedHMC.jl at StanCon 2019 in Cambridge, UK. (pdf)

API CHANGES

  • [v0.2.22] Three functions are renamed.
    • Preconditioner(metric::AbstractMetric) -> MassMatrixAdaptor(metric)
    • NesterovDualAveraging(δ, integrator::AbstractIntegrator) -> StepSizeAdaptor(δ, integrator)
    • find_good_eps -> find_good_stepsize
  • [v0.2.15] n_adapts is no longer needed to construct StanHMCAdaptor; the old constructor is deprecated.
  • [v0.2.8] Two Hamiltonian trajectory sampling methods are renamed to avoid a name clash with Distributions.
    • Multinomial -> MultinomialTS
    • Slice -> SliceTS
  • [v0.2.0] The gradient function passed to Hamiltonian is now expected to return a value-gradient tuple.

A minimal example - sampling from a multivariate Gaussian using NUTS

using AdvancedHMC, Distributions, ForwardDiff

# Choose parameter dimensionality and initial parameter value
D = 10; initial_θ = rand(D)

# Define the target distribution
ℓπ(θ) = logpdf(MvNormal(zeros(D), ones(D)), θ)

# Set the number of samples to draw and warmup iterations
n_samples, n_adapts = 2_000, 1_000

# Define a Hamiltonian system
metric = DiagEuclideanMetric(D)
hamiltonian = Hamiltonian(metric, ℓπ, ForwardDiff)

# Define a leapfrog solver, with initial step size chosen heuristically
initial_ϵ = find_good_stepsize(hamiltonian, initial_θ)
integrator = Leapfrog(initial_ϵ)

# Define an HMC sampler, with the following components
#   - multinomial sampling scheme,
#   - generalised No-U-Turn criteria, and
#   - windowed adaptation for step size and diagonal mass matrix
proposal = NUTS{MultinomialTS, GeneralisedNoUTurn}(integrator)
adaptor = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(0.8, integrator))

# Run the sampler to draw samples from the specified Gaussian, where
#   - `samples` will store the samples
#   - `stats` will store diagnostic statistics for each sample
samples, stats = sample(hamiltonian, proposal, initial_θ, n_samples, adaptor, n_adapts; progress=true)
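
Since samples is a vector of D-dimensional parameter vectors, a quick sanity check is to compare the sample mean against the known mean of the target; a sketch (for the zero-mean Gaussian above it should be close to zero):

using Statistics

# The elementwise sample mean should be ≈ zeros(D) for this target
mean(samples)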

Parallel sampling

AdvancedHMC enables parallel sampling (either distributed or multi-threaded) via Julia's parallel computing functions. It also supports vectorized sampling for static HMC; this is discussed in more detail in the documentation here.

The example below uses the @threads macro to sample 4 chains across 4 threads.

# Ensure that Julia was launched with an appropriate number of threads
println(Threads.nthreads())

# Number of chains to sample
nchains = 4

# Cache to store the chains
chains = Vector{Any}(undef, nchains)

# The `samples` from each parallel chain are stored in the `chains` vector
# Adjust the `verbose` flag as needed
Threads.@threads for i in 1:nchains
  samples, stats = sample(hamiltonian, proposal, initial_θ, n_samples, adaptor, n_adapts; verbose=false)
  chains[i] = samples
end
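
For reproducible parallel runs, sample also accepts an AbstractRNG as its first argument (see the signature below), so each chain can be driven by its own seeded RNG. A minimal sketch, with arbitrary per-chain seeds:

using Random

Threads.@threads for i in 1:nchains
  # Seed each chain separately so results are reproducible across runs
  rng = MersenneTwister(1000 + i)
  samples, stats = sample(rng, hamiltonian, proposal, initial_θ, n_samples, adaptor, n_adapts; verbose=false)
  chains[i] = samples
end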

API and supported HMC algorithms

An important design goal of AdvancedHMC.jl is modularity; we would like to support algorithmic research on HMC. This modularity means that different HMC variants can easily be constructed by composing various components, such as the preconditioning metric (i.e. mass matrix), leapfrog integrators, trajectories (static or dynamic), and adaptation schemes. The minimal example above can be modified to suit particular inference problems by picking components from the lists below.

Hamiltonian mass matrix (metric)

  • Unit metric: UnitEuclideanMetric(dim)
  • Diagonal metric: DiagEuclideanMetric(dim)
  • Dense metric: DenseEuclideanMetric(dim)

where dim is the dimensionality of the sampling space.
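
For example, switching the minimal example to a dense mass matrix is a one-line change (a sketch):

# Learn a dense mass matrix that can capture correlations between dimensions
metric = DenseEuclideanMetric(D)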

Integrator (integrator)

  • Ordinary leapfrog integrator: Leapfrog(ϵ)
  • Jittered leapfrog integrator with jitter rate n: JitteredLeapfrog(ϵ, n)
  • Tempered leapfrog integrator with tempering rate a: TemperedLeapfrog(ϵ, a)

where ϵ is the step size of leapfrog integration.
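
For instance, either of the following can replace the plain leapfrog integrator in the minimal example (a sketch; the rates 0.1 and 1.05 are arbitrary illustrative values):

# Jittered leapfrog: randomly perturbs the step size with jitter rate 0.1
integrator = JitteredLeapfrog(initial_ϵ, 0.1)

# Tempered leapfrog: scales the momentum along the trajectory with tempering rate 1.05
integrator = TemperedLeapfrog(initial_ϵ, 1.05)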

Proposal (proposal)

  • Static HMC with a fixed number of steps (n_steps) (Neal, R. M. (2011)): StaticTrajectory(integrator, n_steps)
  • HMC with a fixed total trajectory length (trajectory_length) (Neal, R. M. (2011)): HMCDA(integrator, trajectory_length)
  • Original NUTS with slice sampling (Hoffman, M. D., & Gelman, A. (2014)): NUTS{SliceTS,ClassicNoUTurn}(integrator)
  • Generalised NUTS with slice sampling (Betancourt, M. (2017)): NUTS{SliceTS,GeneralisedNoUTurn}(integrator)
  • Original NUTS with multinomial sampling (Betancourt, M. (2017)): NUTS{MultinomialTS,ClassicNoUTurn}(integrator)
  • Generalised NUTS with multinomial sampling (Betancourt, M. (2017)): NUTS{MultinomialTS,GeneralisedNoUTurn}(integrator)
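
For example, the same integrator can drive either classic static HMC or the original NUTS (a sketch; 10 leapfrog steps is an arbitrary illustrative choice):

# Static HMC with a fixed number of leapfrog steps per proposal
proposal = StaticTrajectory(integrator, 10)

# Original NUTS with slice sampling
proposal = NUTS{SliceTS, ClassicNoUTurn}(integrator)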

Adaptor (adaptor)

  • Adapt the mass matrix metric of the Hamiltonian dynamics: mma = MassMatrixAdaptor(metric)
    • This is lowered to UnitMassMatrix, WelfordVar or WelfordCov based on the type of the mass matrix metric
  • Adapt the step size of the leapfrog integrator: ssa = StepSizeAdaptor(δ, integrator)
    • It uses Nesterov's dual averaging with δ as the target acceptance rate.
  • Combine the two above naively: NaiveHMCAdaptor(mma, ssa)
  • Combine the first two using Stan's windowed adaptation: StanHMCAdaptor(mma, ssa)
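
For example, the naive combination simply runs both adaptors side by side, without Stan's windowed schedule (a sketch, reusing metric and integrator from the minimal example):

mma = MassMatrixAdaptor(metric)
ssa = StepSizeAdaptor(0.8, integrator)  # δ = 0.8 is the target acceptance rate
adaptor = NaiveHMCAdaptor(mma, ssa)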

Gradients

AdvancedHMC supports both AD-based (Zygote, Tracker and ForwardDiff) and user-specified gradients. To use a user-specified gradient, replace ForwardDiff with ℓπ_grad in the Hamiltonian constructor, where the gradient function ℓπ_grad returns a tuple containing both the log-posterior and its gradient, as sketched below.
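
For the Gaussian target in the minimal example, such a hand-written gradient could look as follows (a sketch; the gradient of the standard-normal log-density is -θ):

# Return a (log-posterior, gradient) tuple, as required by the Hamiltonian constructor
function ℓπ_grad(θ)
    logdensity = logpdf(MvNormal(zeros(D), ones(D)), θ)
    return logdensity, -θ  # ∇θ log N(θ; 0, I) = -θ
end

hamiltonian = Hamiltonian(metric, ℓπ, ℓπ_grad)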

All of these combinations are tested in this file, except for the tempered leapfrog integrator combined with adaptation, which we found to be empirically unstable.

The sample function signature in detail

function sample(
    rng::Union{AbstractRNG, AbstractVector{<:AbstractRNG}},
    h::Hamiltonian,
    κ::HMCKernel,
    θ::AbstractVector{<:AbstractFloat},
    n_samples::Int,
    adaptor::AbstractAdaptor=NoAdaptation(),
    n_adapts::Int=min(div(n_samples, 10), 1_000);
    drop_warmup=false,
    verbose::Bool=true,
    progress::Bool=false,
)

Draw n_samples samples using the proposal κ under the Hamiltonian system h.

  • The randomness is controlled by rng.
    • If rng is not provided, GLOBAL_RNG will be used.
  • The initial point is given by θ.
  • The adaptor is set by adaptor, for which the default is no adaptation.
    • It will perform n_adapts steps of adaptation, for which the default is 1_000 or 10% of n_samples, whichever is lower.
  • drop_warmup specifies whether to drop the samples generated during the adaptation (warm-up) phase.
  • verbose controls the verbosity.
  • progress controls whether to show the progress meter or not.
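
For example, a fully reproducible call that also discards the warm-up samples could look like this (a sketch, reusing the objects from the minimal example):

using Random

rng = MersenneTwister(42)
samples, stats = sample(rng, hamiltonian, proposal, initial_θ, n_samples, adaptor, n_adapts;
                        drop_warmup=true, progress=true)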

Note that the function signature of the sample function exported by AdvancedHMC.jl differs from the sample function used by Turing.jl. We refer to the documentation of Turing.jl for more details on the latter.

Citing AdvancedHMC.jl

If you use AdvancedHMC.jl for your own research, please consider citing the following publication:

Kai Xu, Hong Ge, Will Tebbutt, Mohamed Tarek, Martin Trapp, Zoubin Ghahramani: "AdvancedHMC.jl: A robust, modular and efficient implementation of advanced HMC algorithms.", Symposium on Advances in Approximate Bayesian Inference, 2020. (abs, pdf)

with the following BibTeX entry:

@inproceedings{xu2020advancedhmc,
  title={AdvancedHMC.jl: A robust, modular and efficient implementation of advanced HMC algorithms},
  author={Xu, Kai and Ge, Hong and Tebbutt, Will and Tarek, Mohamed and Trapp, Martin and Ghahramani, Zoubin},
  booktitle={Symposium on Advances in Approximate Bayesian Inference},
  pages={1--10},
  year={2020},
  organization={PMLR}
}

If you use AdvancedHMC.jl through Turing.jl, please consider citing the following publication:

Hong Ge, Kai Xu, and Zoubin Ghahramani: "Turing: a language for flexible probabilistic inference.", International Conference on Artificial Intelligence and Statistics, 2018. (abs, pdf)

with the following BibTeX entry:

@inproceedings{ge2018turing,
  title={Turing: A language for flexible probabilistic inference},
  author={Ge, Hong and Xu, Kai and Ghahramani, Zoubin},
  booktitle={International Conference on Artificial Intelligence and Statistics},
  pages={1682--1690},
  year={2018},
  organization={PMLR}
}

References

  1. Neal, R. M. (2011). MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo. (arXiv)

  2. Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434.

  3. Girolami, M., & Calderhead, B. (2011). Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2), 123-214. (arXiv)

  4. Betancourt, M. J., Byrne, S., & Girolami, M. (2014). Optimizing the integrator step size for Hamiltonian Monte Carlo. arXiv preprint arXiv:1411.6669.

  5. Betancourt, M. (2016). Identifying the optimal integration time in Hamiltonian Monte Carlo. arXiv preprint arXiv:1601.00225.

  6. Hoffman, M. D., & Gelman, A. (2014). The No-U-Turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1), 1593-1623. (arXiv)

Comments
  • Adopt AbstractMCMC.jl interface

    Things to discuss

    • [x] AFAIK, the way to customize the logging in AbstractMCMC.jl is to pass progress=false to the underlying AbstractMCMC.mcmcsample and then use the callback keyword argument to log the progress. So the question is: should we do this so as to preserve the current logging functionality?
    • [x] To replicate the current summarization functionality (e.g. inform the user of average acceptance rates and EBFMI) as a post-sample step, we can overload StatsBase.sample and then perform this step after the call to AbstractMCMC.mcmcsample. Should we do this?
    opened by torfjelde 38
  • Use LogDensityProblems.jl

    Turing.jl has somewhat "recently" started using LogDensityProblems.jl under the hood to simplify all the handling of AD. This PR does the same for AdvancedHMC.jl, which in turn means that we can just plug a Turing.LogDensityFunction into AHMC and sample, in theory making the HMC implementation, etc. in Turing.jl redundant + we get additional AD-backends for free, e.g. Enzyme.

    IMO we might even want to consider making a bridge-package between AbstractMCMC.jl and LogDensityProblems.jl simply containing this LogDensityModel.jl, given how this implementation would be very useful in other packages implementing samplers, e.g. AdvancedMH.jl. Thoughts on this @devmotion @yebai @xukai92 ?

    A couple of (fixable) caveats:

    • We're losing the ForwardDiff implementation for AbstractMatrix.
    • The DiffResults.jl helpers in LogDensityProblems.jl seem to me a bit overly restrictive, e.g. https://github.com/tpapp/LogDensityProblems.jl/blob/a6a570751d0ee79345e92efd88062a0e6d59ef1b/src/DiffResults_helpers.jl#L14-L18 I believe will convert a ComponentVector into a Vector, thus dropping the named dimensions. (@tpapp is there a reason why we can't just use similar(x)?)

    EDIT: This now depends on https://github.com/TuringLang/AbstractMCMC.jl/pull/110.

    opened by torfjelde 21
  • Hamiltonian type RFC

    Current design of type system for HMC:

    • A type Metric to indicate which metric space we are working on
    • A type Hamiltonian which contains the metric space, the log density function of the target distribution and its gradient function
    • A type Leapfrog (a sub-type of AbstractIntegrator) to indicate we are using leapfrog integrator
      • We may want other integrators later
    • A type Trajectory to indicate the way we build trajectory
      • StaticTrajectory is just HMC and NoUTurnTrajectory will be NUTS
    • A type Proposal to wrap Trajectory with what kind of trajectory sampler is used
      • ~~Question: do we want this type, or may be an alternative type called Proposal, which build abstraction on proposal distribution directly.~~
    • HMC samplers are just different combinations of these. This package does not aim to provide integrated samplers but rather these building blocks. However, some simple examples like normal HMC and NUTS will be there.
      • [x] HMC
      • [x] NUTS

    Side note:

    • All functions are supposed to be immutable
    • During adaptation, we are meant to keep constructing new instances for Metric and Leapfrog types as their parameters are changing. However, after adaptation, all instances should be fixed.
    • I hope this implementation is type stable on its own.
    opened by xukai92 21
  • DynamicHMC is (~5 times) faster than Turing's HMC implementation

    Time used for collecting 2 million samples

    • Turing.NUTS: 3318.293540 seconds (33.35 G allocations: 1.299 TiB, 29.35% gc time)
    • Turing.DynamicNUTS: 848.741643 seconds (7.86 G allocations: 251.076 GiB, 32.13% gc time)

    using the following

    
    using DynamicHMC, Turing, Test
    
    @model gdemo(x, y) = begin
      s ~ InverseGamma(2,3)
      m ~ Normal(0,sqrt(s))
      x ~ Normal(m, sqrt(s))
      y ~ Normal(m, sqrt(s))
      return s, m
    end
    
    mf = gdemo(1.5, 2.0)
    
    @time chn1 = sample(mf, DynamicNUTS(2000000));
    
    @time chn2 = sample(mf, Turing.NUTS(2000000, 0.65));
    
    
    opened by yebai 17
  • Support DiffEq ODE integrators

    @ChrisRackauckas pointed out a paper (https://arxiv.org/abs/1912.03253) that uses the Calvo and Sanz-Serna methods in HMC. These integrators are already implemented in DiffEq (https://docs.juliadiffeq.org/latest/solvers/dynamical_solve/#Symplectic-Integrators-1). It would be nice to interface over them in AHMC.

    opened by xukai92 15
  • Add more statistics and move jitter to transition

    See #123 for discussion

    Adds the following statistics:

    • [x] hamiltonian_energy_error
    • [x] max_hamiltonian_energy_error
    • [x] is_adapt
    • [x] nom_step_size

    To-do:

    • [x] test that stats are returned, in particular that is_adapt starts as true and switches to false after adaptation is over
    • [x] refactor code around jitter so that the step size used is accessible for stats. I'd appreciate input on this.

    I'll submit a PR to Turing so that the new stats fields show up in internals instead of parameters. Looks like Turing doesn't use AdvancedHMC's sample function, so I'll need to add an is_adapt over there.

    opened by sethaxen 15
  • TagBot trigger issue

    This issue is used to trigger TagBot; feel free to unsubscribe.

    If you haven't already, you should update your TagBot.yml to include issue comment triggers. Please see this post on Discourse for instructions and more details.

    If you'd like for me to do this for you, comment TagBot fix on this issue. I'll open a PR within a few hours, please be patient!

    opened by JuliaTagBot 14
  • Support of `theta_init` of `Matrix` type / GPU-level multiple chain support

    To address

    • Support of theta_init of Matrix type / GPU-level multiple chain support #92

    Progress and goal

    • [x] StaticTrajectory
    • [x] HMCDA
    • [x] UnitEuclideanMetric
    • [x] DiagEuclideanMetric
    • [x] UnitPreconditioner
    • [x] DiagPreconditioner
    • [x] NesterovDualAveraging
      • Does not work with HMCDA because it could yield a different number of steps for each chain.
    • [x] Support a vector of random number generators
    • [x] Try to merge transition

    This PR does NOT aim to support

    • NUTS: different chains could have different depths - not sure how to implement
    • DenseEuclideanMetric: require more effort to deal with 3-dimensional tensors and not frequently used - low priority
    • DensePreconditioner: same as above
    opened by xukai92 13
  • More robust and modular way of detecting divergence

    Approximation error of leapfrog integration (i.e. accumulated Hamiltonian energy error) can sometimes explode, for example when the curvature of the current region is very high. This type of approximation error is sometimes called divergence [1] since it shifts a leapfrog simulation away from the correct solution.

    In Turing, this type of error is currently caught by a relatively ad-hoc function called is_valid,

    https://github.com/TuringLang/AdvancedHMC.jl/blob/734c0fa6d802a852d6b0bff7fc9f70a049a3f367/src/integrator.jl#L7

    is_valid can catch cases where one or more elements of the parameter vector are either NaN or Inf. This has several drawbacks:

    • it's not general enough to catch all leapfrog approximation errors, e.g. when the parameter vector is valid but the Hamiltonian energy is invalid.
    • it may be delayed, because numerical errors can appear in the Hamiltonian energy earlier than in the parameter vector.
    • it's hard to propagate the exception around, i.e. at the moment we use a heuristic to find the previous valid point before the approximation/numerical error happens
      • https://github.com/TuringLang/AdvancedHMC.jl/blob/734c0fa6d802a852d6b0bff7fc9f70a049a3f367/src/integrator.jl#L26
      • https://github.com/TuringLang/AdvancedHMC.jl/blob/734c0fa6d802a852d6b0bff7fc9f70a049a3f367/src/integrator.jl#L45

    Therefore, we might want to refactor the current code a bit for a more robust mechanism for handling leapfrog approximation errors. Perhaps we can learn from the DynamicHMC implementation:

    https://github.com/tpapp/DynamicHMC.jl/blob/master/src/buildingblocks.jl#L168

    [1]: Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434.

    enhancement help wanted 
    opened by yebai 13
  • Support for complex numbers

    Hi @xukai92,

    As discussed on Slack, adding support for complex numbers would allow us to use many models from physics. The code below is a quantum model of human judgment based on Wang et al. (2014). The model has a single parameter theta, which rotates the basis vectors. Thank you for looking into this. This feature would be very useful for me and others as well.

    function quantum_model(S, θ)
        N = length(S)
        H = zeros(N, N) # Hamiltonian
        idx = CartesianIndex.(2:N,1:(N-1))
        dind = diagind(H)
        H[idx] .= 1.0
        H .+= H'
        H[dind] .= 1:N
        U = exp(-1im * θ * H) 
        na = 1
        PA = zeros(N, N)
        PA[dind] .= [ones(na); zeros(N - na)]
        PnA = I - PA
        Sa = PA * S
        Sa ./= sqrt(Sa' * Sa) #normalized projection A
        Sna = PnA * S; 
        Sna ./= sqrt(Sna' * Sna) # normalized projection B
    
        nb = 1 # dim of B subspace
        PB = zeros(ComplexF64, N, N)
        PB[dind] = [zeros(nb, 1); ones(N - nb, 1)] # projector for B in B coord
        PB .= U * PB * U' # projector for B in A coord
        PnB = I - PB
    
        Sb = PB * S
        Sb ./= sqrt(Sb' * Sb)
        Snb = PnB * S
        Snb ./= sqrt(Snb' * Snb)
        
        pAtB = (S' * PA * S) * (Sa' * PB * Sa) # prob A then B
        pAtnB = (S' * PA * S) * (Sa' * PnB * Sa)
        pnAtB = (S' * PnA * S)*(Sna' * PB * Sna)
        pnAtnB = (S' * PnA * S) * (Sna' * PnB * Sna)
        # pAtB + pAtnB + pnAtB + pnAtnB == 1
        pBtA = (S' * PB * S)*(Sb' * PA * Sb) # prob B then A
        pBtnA = (S' * PB * S) * (Sb' * PnA * Sb)
        pnBtA = (S' * PnB*S) * (Snb' * PA * Snb)
        pnBtnA = (S' * PnB * S) * (Snb' * PnA * Snb)
        # pBtA + pBtnA + pnBtA + pnBtnA == 1
        # order 1
        c_probs1 = [pAtB, pAtnB , pnAtB, pnAtnB]
        # order 2
        c_probs2 = [pBtA, pnBtA, pBtnA, pnBtnA]
        return map(real, c_probs1), map(real, c_probs2)
    end
    
    function simulate(S, θ, n_sim)
        p1,p2 = quantum_model(S, θ)
        y1 = rand(Multinomial(n_sim, p1))
        y2 = rand(Multinomial(n_sim, p2))
        return y1, y2
    end
    
    using Distributions, Turing, LinearAlgebra
    import Distributions: logpdf, loglikelihood
    
    
    """
    Simplified model based on 
        Wang, Z., Solloway, T., Shiffrin, R. M., & Busemeyer, J. R. (2014). 
        Context effects produced by question orders reveal quantum nature of human 
        judgments. Proceedings of the National Academy of Sciences, 111(26), 9431-9436.
    
    """
    struct Quantum{T1,T2} <: ContinuousUnivariateDistribution
        θ::T1
        S::T2
        n::Int64
    end
    
    function logpdf(d::Quantum, data)
        p = quantum_model(d.S, d.θ)
        LL = @. logpdf(Multinomial(d.n, p), data)
        return sum(LL)
    end
    
    loglikelihood(d::Quantum, data::Tuple{Vector{Int64}, Vector{Int64}}) = logpdf(d, data)
    
    
    # number of observations per condition
    n_sim = 100
    # dimensionality of Hilbert Space
    N = 4
    # state vector
    S = fill(sqrt(.25), N)
    # rotation
    θ = 2.0
    data = simulate(S, θ, n_sim)
    
    @model model(data, S, n_sim) = begin
        θ ~ Truncated(Normal(2, 2), 0.0, Inf)
        data ~ Quantum(θ, S, n_sim)
    end
    
    # Settings of the NUTS sampler.
    n_samples = 1000
    delta = 0.85
    n_adapt = 1000
    n_chains = 4
    specs = NUTS(n_adapt, delta)
    # Start sampling.
    chain = sample(model(data, S, n_sim), specs, MCMCThreads(), n_samples, n_chains, progress=true)
    
    opened by itsdfish 12
  • Type design for `PhasePoint` and `DualValue`.

    This PR aims at introducing some additional types that are discussed in https://github.com/TuringLang/AdvancedHMC.jl/issues/16. More specifically, the following types are introduced:

    • PhasePoint: stores θ, r and cached Potential energy (and its gradient). https://github.com/TuringLang/AdvancedHMC.jl/issues/17
    • DualValue: stores log density logπ(θ), and cache its gradient.
    • ~~LogDensityFunction: stores logπ and its gradient function ∂logπ∂θ.~~

    TODOs

    • [x] Refactor numerical error handling code in leapfrog using PhasePoint
    • [x] Remove DualFunction?
    • [x] Refactor build_tree using PhasePoint
    • ~~Return more information for each step #59~~
    • ~~Add a Termination type.~~
    opened by yebai 12
  • Compatibility with MCMCChains

    This is a very minor pull request for conveniently bundling samples from AdvancedHMC.jl into a Chains object from MCMCChains.jl. Very similar to the interface in AdvancedMH.jl, it should make it easier to switch from one to the other.

    opened by kaandocal 2
  • Step size initialization uses GLOBAL RNG instead of reproducible RNG from caller

    https://github.com/TuringLang/AdvancedHMC.jl/blob/6a55a3f3f341c90a70491267635acb51dd989463/src/trajectory.jl#L770

    This caused an actual reproducibility problem, see Discourse thread: https://discourse.julialang.org/t/stablerng-in-turing-is-not-producing-reproducible-output/92032

    To get the RNG from the caller, it might need to be percolated down from callers, with a few other changes needed.

    opened by getzdan 0
  • Implement Riemannian HMC

    This is a draft to implement Riemannian HMC. There are many things to discuss. I put the high-level points here while leaving more specific ones in the code.

    • Do we want to unify all function signatures for Hamiltonian to a non-separable one (i.e. position dependent)?
    • Do we need to update our metric abstraction? Looks like we should decouple unit/diagonal/dense vs Euclidean/Riemannian.
    • Where should SoftAbs best live? I currently implemented it as an external function to provide G to a generic Riemannian HMC implementation.
    • The efficiency is pretty bad and there are fewer optimization opportunities due to our decoupled abstraction (e.g. compared to what is derived in [1]).

    To-dos

    • The current SoftAbs implementation has numerical issues when running on Neal's funnel. I need to double-check whether the manual Jacobian implementation is correct, or we could register softabs with ReverseDiff to see if this solves the issue. I could use someone else's help on this.

    How to play with this PR

    I provided a notebook (which contains the same content as the test/experimental/riemannian_hmc.jl file) to play with the code. The notebook has some simple validation of the implementation and also shows the current numerical issue of SoftAbs. I highly recommend trying it.


    [1] Betancourt, M., 2013, August. A general metric for Riemannian manifold Hamiltonian Monte Carlo. In International Conference on Geometric Science of Information (pp. 327-334). Springer, Berlin, Heidelberg.

    opened by xukai92 5
  • Report percentage of divergent transitions in progress bar

    ~~This PR limits warning messages to 10. Is that a good default?~~

    ~~Solution 1: we can probably disable these numerical messages, and instead print a summary message about total divergent transitions (see e.g. this comment) when the sample function is done.~~

    A message about the percentage of divergent transitions is displayed in the progress bar. In addition, a warning message is shown if the percentage of divergent transitions is above a certain (30% by default) threshold.

    opened by yebai 7
  • Expose adapted mass matrix

    It would be nice if the adapted mass matrix could be retrieved. @sethaxen pointed out that giving save_state=true to sample is possible, but that it currently only gives the metric before warm-up.

    opened by mschauer 0
Releases(v0.4.1)
  • v0.4.1(Dec 30, 2022)

  • v0.4.0(Dec 25, 2022)

    AdvancedHMC v0.4.0

    Diff since v0.3.6

    Closed issues:

    • README example fails (#295)

    Merged pull requests:

    • add descriptions to README (#296) (@SaranjeetKaur)
    • Remove Turing Web in favour of MultiDocumenter (#298) (@yebai)
    • Introduce type for kinetic energy (#299) (@xukai92)
    • Use LogDensityProblems.jl (#301) (@torfjelde)
    • Relativistic HMC (#302) (@xukai92)
    • chore: unpack kinetic.jl (#303) (@xukai92)
    • Moved all experimental code including tests into research folder. (#304) (@yebai)
  • v0.3.6(Sep 6, 2022)

    AdvancedHMC v0.3.6

    Diff since v0.3.5

    Merged pull requests:

    • CompatHelper: bump compat for Setfield to 1, (keep existing compat) (#291) (@github-actions[bot])
    • CompatHelper: bump compat for DocStringExtensions to 0.9, (keep existing compat) (#292) (@github-actions[bot])
    • Release new version (#294) (@devmotion)
  • v0.3.5(May 5, 2022)

    AdvancedHMC v0.3.5

    Diff since v0.3.4

    Closed issues:

    • Improve AHMC's documentation (#128)
    • MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(ℓπ),Float64},Float64,8}) (#265)

    Merged pull requests:

    • Documentation setup (#286) (@xukai92)
    • refactor: use ReTest (#287) (@xukai92)
    • CompatHelper: bump compat for StatsFuns to 1, (keep existing compat) (#289) (@github-actions[bot])
    • Increase version (#290) (@yebai)
  • v0.3.4(Mar 8, 2022)

  • v0.3.3(Feb 1, 2022)

    AdvancedHMC v0.3.3

    Diff since v0.3.2

    Closed issues:

    • Idea: Adding techniques from paper: "Generalizing hamiltonian monte carlo with neural networks" (#148)
    • [RFC]: notations used in this package (#48)
    • Pair values with metadata? (#101)
    • Wrap Welford estimators using OnlineStats.jl (#179)
    • Interface with AbstractMCMC.jl (#211)
    • Introducing the dev branch (#237)

    Merged pull requests:

    • Minor fix: missing rng for init_params (#284) (@theogf)
  • v0.3.2(Oct 13, 2021)

    AdvancedHMC v0.3.2

    Diff since v0.3.1

    Merged pull requests:

    • Missing integrated tests for StrictGeneralisedNoUTurn (#276) (@xukai92)
    • CompatHelper: bump compat for Setfield to 0.8, (keep existing compat) (#278) (@github-actions[bot])
    • Partial momentum refreshment missing sqrt() (#280) (@rhaps0dy)
  • v0.3.1(Aug 27, 2021)

  • v0.3.0(Jul 15, 2021)

    AdvancedHMC v0.3.0

    Diff since v0.2.27

    Closed issues:

    • Modular design for refreshing momentum variables (#13)
    • Unify Trajectory type for static, dynamic HMC samplers. (#103)
    • First-class support of sampling on a struct (#159)
    • Return adapted mass matrix (#230)
    • HMC gives GPU compilation error (#235)
    • Error gradient (#243)
    • Support for Complex and Matrix Parameters (#251)
    • StanHMC-adaptor (#252)
    • Compatibility with ComponentArrays.jl (#253)
    • Support for complex numbers (#262)
    • the example in the doc does not run (#264)
    • Minimal example from README.md gives error with Beta distribution (#266)

    Merged pull requests:

    • Added note to documentation of sample function (#236) (@trappmartin)
    • Update for MCMCDebugging (#238) (@xukai92)
    • GitHub Actions workflow to pull changes to dev (#239) (@xukai92)
    • Refactoring termination criterion (#240) (@xukai92)
    • Update for MCMCDebugging (#241) (@xukai92)
    • Fixed default type for NesterovDualAveraging (#242) (@torfjelde)
    • Merge dev branch into master. (#244) (@yebai)
    • Unifying trajectories (#245) (@xukai92)
    • Introduce HMCKernel and momentum refreshment structs (#247) (@xukai92)
    • Basic CUDA support (#255) (@treigerm)
    • support ComponentArrays (#257) (@scheidan)
    • Update citation suggestions (#258) (@xukai92)
    • Adopt AbstractMCMC.jl interface (#259) (@torfjelde)
    • Replace slow reconstruct from Parameters.jl with faster @setfield from Setfield.jl (#260) (@torfjelde)
    • Fix type-issue in PhasePoint (#263) (@torfjelde)
    • PhasePoint constructor bug when using GPU (#267) (@treigerm)
    • Improved testing suite (#270) (@torfjelde)
  • v0.2.27(Dec 2, 2020)

    AdvancedHMC v0.2.27

    Diff since v0.2.26

    Closed issues:

    • Modify link-out to IMP.hmc (#223)
    • Sampling hangs on integer data (#229)
    • Bug in metric resizing (#231)

    Merged pull requests:

    • Update IMP.hmc description in readme (#225) (@sethaxen)
    • Fix metric resizing (#232) (@treigerm)
  • v0.2.26(Oct 23, 2020)

    AdvancedHMC v0.2.26

    Diff since v0.2.25

    Closed issues:

    • Discontinuous Hamiltonian Monte Carlo (#19)
    • Lack of integrated/statistical tests (#87)
    • Potential improvement for U-Turn detection (#94)
    • example of multiple chains drawn simultaneously (#143)
    • Export AdvancedHMC from AdvancedHMC (#172)
    • ϵ0 of JitteredLeapfrog is not adapted (#218)

    Merged pull requests:

    • [RFC] Add multi-threaded example to readme (#197) (@Vaibhavdixit02)
    • Add robust U-turn check (#207) (@treigerm)
    • Using MCMCDebugging for Geweke test (#208) (@xukai92)
    • Fix adaptation of nominal step size for JitteredLeapfrog (#220) (@sethaxen)
    • Run Github Actions CI on latest release (#221) (@sethaxen)
  • v0.2.25(Jun 3, 2020)

    AdvancedHMC v0.2.25

    Diff since v0.2.24

    Closed issues:

    • Correctly testing show functions (#40)
    • Link to the API doc (#176)
    • Precompilation fails with 1.5.0-beta1 (#203)

    Merged pull requests:

    • Bugfix for Base.show (#195) (@xukai92)
    • Add docs badges (#196) (@Vaibhavdixit02)
    • Fix bugs in multinomial sampling (#199) (@xukai92)
    • Small changes (#200) (@torfjelde)
    • CompatHelper: add new compat entry for "DocStringExtensions" at version "0.8" (#202) (@github-actions[bot])
    • Use include with the correct order via @require in __init__ (#204) (@devmotion)
    • Update CI (#205) (@devmotion)
  • v0.2.24(Apr 14, 2020)

  • v0.2.23(Apr 13, 2020)

  • v0.2.22(Apr 13, 2020)

    AdvancedHMC v0.2.22

    Diff since v0.2.21

    Closed issues:

    • Make API doc available at turing.ml (#138)
    • Improving vectorized HMC implementation (#162)
    • Merge metric and preconditioner (#174)

    Merged pull requests:

    • Small improvements (#166) (@xukai92)
    • Allow LazyArrays version 0.15 (#181) (@andreasnoack)
    • Create CompatHelper.yml (#182) (@yebai)
    • Improving vectorized mode (#185) (@xukai92)
    • CompatHelper: bump compat for "StatsBase" to "0.33" (#187) (@github-actions[bot])
    • Merge Preconditioner into Welford estimators (#188) (@xukai92)
    • Small improvement for Leapfrog (#189) (@xukai92)
    • Support passing stepsize directly to utility adaptor (#192) (@xukai92)
  • v0.2.21(Feb 26, 2020)

    AdvancedHMC v0.2.21

    Diff since v0.2.20

    Closed issues:

    • No method for pm_next! (#163)
    • Custom function for AdvancedHMC (#167)
    • Link in the gradient section of README.md is broken (#177)

    Merged pull requests:

    • Update README.md (#164) (@yebai)
    • Install TagBot as a GitHub Action (#168) (@JuliaTagBot)
    • Fix typo in README.md (#169) (@ebb-earl-co)
    • Update README.md (#170) (@yebai)
    • Fix broken link in gradient section (#178) (@xukai92)
    • bump compat entry for ArgCheck (#180) (@simeonschaub)
    • Allow LazyArrays version 0.15 (#181) (@andreasnoack)
  • v0.2.20(Jan 15, 2020)

  • v0.2.19(Jan 8, 2020)

  • v0.2.18(Jan 5, 2020)

  • v0.2.17(Jan 4, 2020)

  • v0.2.16(Jan 2, 2020)

  • v0.2.15(Jan 2, 2020)

    v0.2.15 (2020-01-02)

    Diff since v0.2.14

    Closed issues:

    • Fix the example (#150)
    • customize the number of n_steps (#144)
    • Multithreading accelerations (#135)
    • Implement a function to "print out" windowed adaptation (#112)
    • Improve numerical error message clarity (#110)
    • Add support for static HMC with multinomial sampler (#102)
    • Remove the n_adapts field from StanHMCAdaptor (#97)

    Merged pull requests:

    • Work around compiler bug - Julia issue 34232 (#152) (mohamed82008)
    • add missing where P to OrdinaryDiffEq Requires (#149) (ChrisRackauckas)
    • Update gdemo using Bijectors (#147) (xukai92)
    • WIP: Setup DiffEq common interface integrator (#146) (ChrisRackauckas)
    • RFC static to support MultinomialTS (#142) (xukai92)
    • fix merge issue (#141) (xukai92)
    • Improve adapt utility (#139) (xukai92)
  • v0.2.14(Dec 2, 2019)

    v0.2.14 (2019-12-02)

    Diff since v0.2.13

    Closed issues:

    • About the function returns the gradient of the likelihood (#134)
    • Add AHMC composable interface to README (#104)
    • Support of theta_init of Matrix type / GPU-level multiple chain support (#92)

    Merged pull requests:

    • Bump LazyArrays dep (#137) (ChrisRackauckas)
    • Improve numerical error message (#136) (xukai92)
    • Update README.md (#133) (xukai92)
    • Support of theta_init of Matrix type / GPU-level multiple chain support (#117) (xukai92)
  • v0.2.13(Nov 9, 2019)

  • v0.2.12(Nov 7, 2019)

  • v0.2.11(Nov 5, 2019)

  • v0.2.10(Nov 4, 2019)

  • v0.2.9(Nov 2, 2019)

  • v0.2.8(Nov 1, 2019)
