Hamiltonian Monte Carlo
Hamiltonian Monte Carlo (HMC) is a sampling algorithm for differentiable random variables which uses Hamiltonian dynamics. By randomly drawing a momentum for the kinetic energy and treating the true posterior as the potential energy, HMC is able to simulate trajectories which explore the space. Intuitively, this can be viewed as starting with a marble at a point inside a bowl, flicking the marble in a random direction, and then following the marble as it rolls around. The position of the marble represents the sample, the flick represents the momentum, and the shape of the bowl in combination with the force of gravity represents our true posterior.
Hamiltonian dynamics
HMC applies Hamiltonian dynamics to explore the state space. This means that we think of the posterior surface as having some potential energy, proportional to the posterior's negative log likelihood at some particular set of parameter assignments. We use that potential energy, along with kinetic energy injected in the form of a random momentum "kick", to traverse the surface according to the laws of physics.
Below, we'll use $\theta$ to represent the current set of parameter assignments, and $p$ to represent a random momentum factor injected into the system by the inference method. The Hamiltonian physics alluded to above can be written as

$$H(\theta, p) = U(\theta) + K(p),$$

where $U(\theta)$ and $K(p)$ represent the potential and kinetic energy respectively.
The potential energy represents the shape of the posterior distribution, and is defined by

$$U(\theta) = -\log P(\theta \mid X).$$

Here, $P(\theta \mid X)$ is the likelihood of the model evaluated at the parameter assignments $\theta$, conditioned on the observed dataset $X$.
The potential energy can be evaluated directly in Bean Machine as a function of the model's posterior probability at the current parameter values.
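To make this concrete, the potential energy of a toy model can be written out by hand with PyTorch. The model (a Normal likelihood with a Normal prior on its mean), the data, and the function names below are hypothetical illustrations, not Bean Machine internals:

import torch
import torch.distributions as dist

x_obs = torch.tensor([1.2, 0.8, 1.5])  # hypothetical observed dataset X

def potential_energy(theta):
    # U(theta) = -log P(X, theta), equal to -log P(theta | X) up to an additive constant
    log_prior = dist.Normal(0.0, 1.0).log_prob(theta).sum()
    log_likelihood = dist.Normal(theta, 1.0).log_prob(x_obs).sum()
    return -(log_prior + log_likelihood)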
The kinetic energy is defined using the momentum variable $p$ as well as an inference-method-defined covariance matrix $\Sigma$,

$$K(p) = \tfrac{1}{2} p^\top \Sigma \, p.$$

The values for $p$ and $\Sigma$ are provided by the framework. Bean Machine samples $p$ from a Normal distribution scaled to the appropriate size for the posterior. Bean Machine can optionally estimate an appropriate constant value for $\Sigma$ by measuring correlations between parameters during an adaptation phase of inference.
We can then simulate the trajectory using the following forms of Hamilton's equations:

$$\frac{d\theta}{dt} = \frac{\partial H}{\partial p} = \Sigma \, p$$

$$\frac{dp}{dt} = -\frac{\partial H}{\partial \theta} = -\frac{\partial U}{\partial \theta}$$

Bean Machine can compute $\frac{\partial U}{\partial \theta}$ using autograd.
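Continuing the hypothetical sketch from above, the gradient $\frac{\partial U}{\partial \theta}$ and the kinetic energy can be computed with PyTorch's autograd; this illustrates the idea rather than Bean Machine's actual implementation:

def grad_U(theta):
    # compute dU/dtheta with autograd
    theta = theta.detach().requires_grad_(True)
    return torch.autograd.grad(potential_energy(theta), theta)[0]

def kinetic_energy(p, Sigma):
    # K(p) = 1/2 * p^T Sigma p
    return 0.5 * p @ Sigma @ p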
The goal of these equations is to determine where the new sample will come to rest after a framework-specified path length $\lambda$. Both the potential energy (from the posterior's likelihood) and the kinetic energy (from the injected "kick") will interact with the sampled value $\theta$ to influence how it travels over the posterior surface.
Since we are computing gradients, all of the latent (non-observed) variables must be continuous. For discrete variables, use CompositionalInference, or marginalize them out as in the Zero inflated count data tutorial.
Approximating Hamiltonian dynamics
Unfortunately, this system of differential equations cannot be solved analytically. Instead, Bean Machine discretizes these equations using a discrete time step $\epsilon$, and simulates how they influence each other in this discrete setting for a framework-specified path length $\lambda$. This discrete simulation is referred to as the "leapfrog" algorithm. Each simulated time step is referred to as a "leapfrog step".
At a high level, leapfrog works like this:
- Choose momentum ($p$) and covariance ($\Sigma$) values to use for the simulation.
- Simulate a small time step of the momentum's influence on the sample.
- Simulate a small time step of the potential energy's influence on the sample.
- Repeat for the framework-specified path length $\lambda$.
We'll represent the above description mathematically. For a leapfrog step at time $t$ with step size $\epsilon$, we take a half-step for the momentum, then a full step for the parameters using the updated momentum, and finally another half-step for the momentum:

$$p_{t+\epsilon/2} = p_t - \frac{\epsilon}{2} \frac{\partial U}{\partial \theta}(\theta_t)$$

$$\theta_{t+\epsilon} = \theta_t + \epsilon \, \Sigma \, p_{t+\epsilon/2}$$

$$p_{t+\epsilon} = p_{t+\epsilon/2} - \frac{\epsilon}{2} \frac{\partial U}{\partial \theta}(\theta_{t+\epsilon})$$

This process is repeated until we have traversed the framework-specified path length $\lambda$. Thus, the number of leapfrog steps is $L = \lambda / \epsilon$. The final sample for $\theta$ is the value of $\theta$ after the last leapfrog step.
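Using the hypothetical helpers defined earlier, the leapfrog loop can be sketched as follows (a simplified illustration, not Bean Machine's implementation):

def leapfrog(theta, p, Sigma, eps, lam):
    # simulate the dynamics for L = lam / eps discrete steps
    num_steps = max(1, int(lam / eps))
    for _ in range(num_steps):
        p = p - 0.5 * eps * grad_U(theta)   # half-step for the momentum
        theta = theta + eps * (Sigma @ p)   # full step for the parameters
        p = p - 0.5 * eps * grad_U(theta)   # another half-step for the momentum
    return theta, p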
Due to the discretization, the resulting trajectory will contain numerical errors. To account for the error, a Metropolis acceptance probability will be used to determine whether to accept the proposed value for $\theta$.
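In the same hypothetical sketch, this correction amounts to a standard Metropolis test on the total energy $H = U + K$:

def accept(theta, p, theta_new, p_new, Sigma):
    # accept with probability min(1, exp(H(theta, p) - H(theta_new, p_new)))
    h_old = potential_energy(theta) + kinetic_energy(p, Sigma)
    h_new = potential_energy(theta_new) + kinetic_energy(p_new, Sigma)
    return torch.rand(()) < torch.exp(h_old - h_new).clamp(max=1.0)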
HMC's performance is quite sensitive to the hyperparameters used. They are as follows:
- Path length, $\lambda$. Because the samples should be minimally correlated, it is ideal to follow the trajectory for long path lengths. However, distributions may have periodic behavior, and long path lengths may waste computation. The ideal path length is the minimal path length where the starting and ending samples have low correlation.
- Step size, $\epsilon$. If the Hamiltonian equations were followed exactly (not approximated), all samples would be accepted. However, error is introduced into the system during the discretization of Hamilton's equations. The larger the step size, the worse the final approximation will be; however, if the steps are too small, the number of steps needed as well as the overall runtime of the algorithm will increase.
- Covariance (mass) matrix, $\Sigma$. If there are significant correlations between the variables and the covariance matrix is not properly tuned, we will need to take smaller steps to get samples that are likely to be accepted.
The optimal acceptance rate for HMC, as derived in Neal (2011), is 0.65. It is desirable to tune these parameters so that they achieve this acceptance rate on average.
Adaptive Hamiltonian Monte Carlo
Due to the challenge of selecting good hyperparameters, Bean Machine provides extensions to HMC to help choose appropriate values.
One such extension is called Adaptive Hamiltonian Monte Carlo. Adaptive HMC requires an adaptive phase, where Bean Machine uses the HMC algorithm to generate samples while tuning HMC's step size and covariance matrix. Adaptive HMC provides two main improvements over HMC, outlined below.
Users do not have to specify a step size. During the adaptive phase, the step size is adjusted in order to achieve the optimal acceptance rate of 0.65 on average. If the acceptance rate is above optimal, then Bean Machine is being too careful and discretizing in steps that are too small; therefore, the step size should be increased. If the acceptance rate is too low, then the step size should be decreased. We follow the Robbins-Monro stochastic approximation method, where earlier iterations within the adaptive phase have a larger influence over the step size than later iterations.
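A minimal sketch of such an update rule (my own simplification, not the exact schedule Bean Machine uses) might look like:

import math

def adapt_step_size(eps, accept_prob, iteration, target=0.8, kappa=0.75):
    # Robbins-Monro style update: later iterations get a smaller learning rate
    learning_rate = (iteration + 1) ** (-kappa)
    # grow eps when we accept too often, shrink it when we accept too rarely
    return eps * math.exp(learning_rate * (accept_prob - target))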
HMC can take different step sizes in different dimensions. During the adaptive phase, the momentum in each dimension is tuned based on the covariance of the previously-accepted samples. The amount of momentum used is controlled by the covariance matrix $\Sigma$, which allows HMC to move along different dimensions at different rates. Since the ideal covariance matrix is the true posterior covariance, Bean Machine approximates it during the adaptive phase using the covariance of the samples drawn so far.
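As a simplified illustration, such an estimate of $\Sigma$ could be computed from the adaptive-phase samples as follows (real samplers typically use windowed and regularized versions of this idea):

def estimate_covariance(samples):
    # samples: tensor of shape (num_samples, dim) collected during adaptation
    centered = samples - samples.mean(dim=0)
    return centered.T @ centered / (samples.shape[0] - 1)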
Once the adaptive phase ends, we no longer update our parameters, and the original HMC algorithm is used to generate new samples. Since the samples generated during the adaptive phase use untuned parameters, they may not be of the highest quality and are not returned by default.
Tuning the path length
While adaptive HMC can effectively tune the step size and covariance matrix, Bean Machine relies on a separate algorithm for tuning the path length $\lambda$. This algorithm is called the No-U-Turn Sampler, and has its own documentation page, No-U-Turn Samplers.
Usage
In Bean Machine, inference using HMC can be specified as an inference method for all variables in the model:
bm.SingleSiteHamiltonianMonteCarlo(
    1.0,  # trajectory length
    initial_step_size=0.1,
    adapt_step_size=True,
    adapt_mass_matrix=True,
    target_accept_prob=0.8,
).infer(
    queries,
    observations,
    num_samples,
    num_chains,
    num_adaptive_samples=100,
)
Make sure to set num_adaptive_samples when using adaptive HMC! If you forget to set num_adaptive_samples, no adaptation will occur.
The global variant for adaptive HMC, GlobalHamiltonianMonteCarlo, which proposes all of the variables in the model jointly, follows the same API:
bm.GlobalHamiltonianMonteCarlo(
    1.0,  # trajectory length
    initial_step_size=0.1,
    adapt_step_size=True,
    adapt_mass_matrix=True,
    target_accept_prob=0.8,
    nnc_compile=True,
).infer(
    queries,
    observations,
    num_samples,
    num_chains,
    num_adaptive_samples=100,
)
Functorch's ahead-of-time (AOT) autograd compiler is used by default. If you are working with a non-static model or encounter unexpected errors, you may need to manually disable the nnc_compile flag.
These arguments allow us to decide whether to tune the step size (adapt_step_size) or the covariance matrix (adapt_mass_matrix). The target_accept_prob argument indicates the acceptance probability that should be targeted by the step-size tuning algorithm. While the optimal value is 65.1%, higher values have been shown to be more robust. As a result, Bean Machine targets an acceptance rate of 0.8 by default.
The parameters to infer are described below:
Name | Usage
---|---
queries | A List of @bm.random_variable targets to fit posterior distributions for.
observations | The Dict of observations. Each key is a random variable, and its value is the observed value for that random variable.
num_samples | Number of samples to build up distributions for the values listed in queries.
num_chains | Number of separate inference runs to use. Multiple chains can be used by diagnostics to verify inference ran correctly.
num_adaptive_samples | Number of warmup samples used to adapt the parameters.
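Putting the pieces together, an end-to-end call might look like the following. The model (a Normal mean with a Normal prior) and the data are made up purely for illustration:

import beanmachine.ppl as bm
import torch
import torch.distributions as dist

@bm.random_variable
def mu():
    return dist.Normal(0.0, 1.0)

@bm.random_variable
def x(i):
    return dist.Normal(mu(), 1.0)

# hypothetical observations keyed by the observed random variables
observations = {x(i): torch.tensor(v) for i, v in enumerate([1.2, 0.8, 1.5])}

samples = bm.GlobalHamiltonianMonteCarlo(1.0).infer(
    queries=[mu()],
    observations=observations,
    num_samples=500,
    num_chains=4,
    num_adaptive_samples=100,
)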
Neal, Radford M. "MCMC Using Hamiltonian Dynamics." Handbook of Markov Chain Monte Carlo 2.11 (2011): 2.
Robbins, Herbert, and Sutton Monro. "A Stochastic Approximation Method." Annals of Mathematical Statistics 22 (3): 400-407, September 1951. https://doi.org/10.1214/aoms/1177729586