beanmachine.ppl package

Module contents

class beanmachine.ppl.CompositionalInference(inference_dict: Optional[Dict[Union[Callable, Tuple[Callable, ...], ellipsis], Union[beanmachine.ppl.inference.base_inference.BaseInference, Tuple[beanmachine.ppl.inference.base_inference.BaseInference, ...], ellipsis]]] = None, nnc_compile: bool = True)

Bases: beanmachine.ppl.inference.base_inference.BaseInference

The CompositionalInference class enables combining multiple inference algorithms and blocking random variables together. By default, continuous variables will be blocked together and use the GlobalNoUTurnProposer. Discrete variables will be proposed independently with SingleSiteUniformProposer. To override the default behavior, you can pass an inference_dict. To learn more about Compositional Inference, please see the Compositional Inference page on our website.

Example 0 (use different inference method for different random variable families):

CompositionalInference({
    model.foo: bm.SingleSiteAncestralMetropolisHastings(),
    model.bar: bm.SingleSiteNewtonianMonteCarlo(),
})

Example 1 (override default inference method):

CompositionalInference({...: bm.SingleSiteAncestralMetropolisHastings()})

Example 2 (jointly propose model.foo and model.bar by blocking them together):

CompositionalInference({(model.foo, model.bar): bm.GlobalNoUTurnSampler()})

Warning

When using the default inference behavior, the model graph (i.e. the number of latent variables) must be static and cannot change between iterations.

Parameters
  • inference_dict – an optional inference configuration as shown above.

  • nnc_compile – where available, use NNC to compile proposers. Defaults to True.
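
A minimal end-to-end sketch (the model below is hypothetical; any model mixing discrete and continuous sites works the same way):

import beanmachine.ppl as bm
import torch
import torch.distributions as dist

@bm.random_variable
def weight():  # continuous site
    return dist.Normal(0.0, 1.0)

@bm.random_variable
def component():  # discrete site
    return dist.Bernoulli(0.5)

@bm.random_variable
def obs():
    return dist.Normal(weight() + component(), 1.0)

# NUTS for the continuous site, ancestral MH for the discrete one.
samples = bm.CompositionalInference({
    weight: bm.GlobalNoUTurnSampler(),
    component: bm.SingleSiteAncestralMetropolisHastings(),
}).infer(
    queries=[weight(), component()],
    observations={obs(): torch.tensor(1.5)},
    num_samples=500,
    num_chains=2,
)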

get_proposers(world: beanmachine.ppl.world.world.World, target_rvs: Set[beanmachine.ppl.model.rv_identifier.RVIdentifier], num_adaptive_sample: int) → List[beanmachine.ppl.inference.proposer.base_proposer.BaseProposer]

Returns the proposer(s) corresponding to every non-observed variable in target_rvs. Should be implemented by the specific inference algorithm.

class beanmachine.ppl.Diagnostics(samples: beanmachine.ppl.inference.monte_carlo_samples.MonteCarloSamples)

Bases: beanmachine.ppl.diagnostics.diagnostics.BaseDiagnostics

class beanmachine.ppl.GlobalHamiltonianMonteCarlo(trajectory_length: float, initial_step_size: float = 1.0, adapt_step_size: bool = True, adapt_mass_matrix: bool = True, full_mass_matrix: bool = False, target_accept_prob: float = 0.8, nnc_compile: bool = False, experimental_inductor_compile: bool = False)

Bases: beanmachine.ppl.inference.base_inference.BaseInference

Global (multi-site) Hamiltonian Monte Carlo [1] sampler. This global sampler blocks all of the target random_variables in the World together and proposes them jointly.

[1] Neal, Radford. MCMC Using Hamiltonian Dynamics.

Parameters
  • trajectory_length (float) – Length of a single trajectory.

  • initial_step_size (float) – Initial step size. Defaults to 1.0.

  • adapt_step_size (bool) – Whether to adapt the step size. Defaults to True.

  • adapt_mass_matrix (bool) – Whether to adapt the mass matrix. Defaults to True.

  • full_mass_matrix (bool) – Whether to adapt a full (dense) mass matrix rather than a diagonal one. Defaults to False.

  • target_accept_prob (float) – Target acceptance probability. Increasing this value leads to a smaller step size. Defaults to 0.8.

  • nnc_compile – If True, the NNC compiler will be used to accelerate inference. Defaults to False.

  • experimental_inductor_compile – If True, TorchInductor will be used to accelerate inference. Defaults to False.
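
A minimal usage sketch, assuming a hypothetical normal-normal model:

import beanmachine.ppl as bm
import torch
import torch.distributions as dist

@bm.random_variable
def mu():
    return dist.Normal(0.0, 1.0)

@bm.random_variable
def x():
    return dist.Normal(mu(), 1.0)

samples = bm.GlobalHamiltonianMonteCarlo(trajectory_length=1.0).infer(
    queries=[mu()],
    observations={x(): torch.tensor(0.5)},
    num_samples=1000,
    num_chains=4,
)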

get_proposers(world: beanmachine.ppl.world.world.World, target_rvs: Set[beanmachine.ppl.model.rv_identifier.RVIdentifier], num_adaptive_sample: int) → List[beanmachine.ppl.inference.proposer.base_proposer.BaseProposer]

Returns the proposer(s) corresponding to every non-observed variable in target_rvs. Should be implemented by the specific inference algorithm.

class beanmachine.ppl.GlobalNoUTurnSampler(max_tree_depth: int = 10, max_delta_energy: float = 1000.0, initial_step_size: float = 1.0, adapt_step_size: bool = True, adapt_mass_matrix: bool = True, full_mass_matrix: bool = False, multinomial_sampling: bool = True, target_accept_prob: float = 0.8, nnc_compile: bool = True, experimental_inductor_compile: bool = False)

Bases: beanmachine.ppl.inference.base_inference.BaseInference

Global No U-turn sampler [1]. This sampler blocks multiple variables together in the World and samples them jointly. This sampler adaptively sets the hyperparameters of the HMC kernel.

[1] Hoffman and Gelman. The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo.

[2] Betancourt, Michael. A Conceptual Introduction to Hamiltonian Monte Carlo.

Parameters
  • max_tree_depth (int) – Maximum tree depth. Defaults to 10.

  • max_delta_energy (float) – Maximum delta energy (for numerical stability). Defaults to 1000.

  • initial_step_size (float) – Initial step size. Defaults to 1.0.

  • adapt_step_size (bool) – Whether to adapt the step size with dual averaging as suggested in [1]. Defaults to True.

  • adapt_mass_matrix (bool) – Whether to adapt the mass matrix. Defaults to True.

  • full_mass_matrix (bool) – Whether to adapt a full (dense) mass matrix rather than a diagonal one. Defaults to False.

  • multinomial_sampling (bool) – Whether to use multinomial sampling as in [2]. Defaults to True.

  • target_accept_prob (float) – Target acceptance probability. Increasing this value leads to a smaller step size. Defaults to 0.8.

  • nnc_compile – If True, the NNC compiler will be used to accelerate inference. Defaults to True.

  • experimental_inductor_compile – If True, TorchInductor will be used to accelerate inference. Defaults to False.
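
A usage sketch, reusing the hypothetical mu/x model from the example above; nnc_compile defaults to True and can be disabled if compilation causes problems:

samples = bm.GlobalNoUTurnSampler(nnc_compile=False).infer(
    queries=[mu()],
    observations={x(): torch.tensor(0.5)},
    num_samples=1000,
    num_chains=4,
)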

get_proposers(world: beanmachine.ppl.world.world.World, target_rvs: Set[beanmachine.ppl.model.rv_identifier.RVIdentifier], num_adaptive_sample: int) → List[beanmachine.ppl.inference.proposer.base_proposer.BaseProposer]

Returns the proposer(s) corresponding to every non-observed variable in target_rvs. Should be implemented by the specific inference algorithm.

class beanmachine.ppl.RVIdentifier(wrapper: Callable, arguments: Tuple)

Bases: object

Struct representing the unique key corresponding to a BM random variable.

arguments: Tuple

property function

property is_functional

property is_random_variable

wrapper: Callable
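
A quick sketch: calling a function decorated with bm.random_variable outside of a model returns an RVIdentifier key rather than a sample:

import beanmachine.ppl as bm
import torch.distributions as dist

@bm.random_variable
def coin(i: int):
    return dist.Bernoulli(0.5)

rv = coin(3)                 # RVIdentifier, not a tensor
assert rv.arguments == (3,)
assert rv.is_random_variable
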
class beanmachine.ppl.SingleSiteAncestralMetropolisHastings

Bases: beanmachine.ppl.inference.single_site_inference.SingleSiteInference

Single site ancestral Metropolis-Hastings. This single site algorithm proposes each random variable from its prior distribution, conditioned on the current values of its parents.

class beanmachine.ppl.SingleSiteHamiltonianMonteCarlo(trajectory_length: float, initial_step_size: float = 1.0, adapt_step_size: bool = True, adapt_mass_matrix: bool = True, full_mass_matrix: bool = False, target_accept_prob: float = 0.8, nnc_compile: bool = True, experimental_inductor_compile: bool = False)

Bases: beanmachine.ppl.inference.base_inference.BaseInference

Single site Hamiltonian Monte Carlo [1] sampler. During inference, each random variable is proposed through its own leapfrog trajectory while holding the rest of the World constant.

[1] Neal, Radford. MCMC Using Hamiltonian Dynamics.

Parameters
  • trajectory_length (float) – Length of a single trajectory.

  • initial_step_size (float) – Initial step size. Defaults to 1.0.

  • adapt_step_size (bool) – Whether to adapt the step size. Defaults to True.

  • adapt_mass_matrix (bool) – Whether to adapt the mass matrix. Defaults to True.

  • full_mass_matrix (bool) – Whether to adapt a full (dense) mass matrix rather than a diagonal one. Defaults to False.

  • target_accept_prob (float) – Target acceptance probability. Increasing this value leads to a smaller step size. Defaults to 0.8.

  • nnc_compile – If True, the NNC compiler will be used to accelerate inference. Defaults to True.

  • experimental_inductor_compile – If True, TorchInductor will be used to accelerate inference. Defaults to False.
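
A usage sketch on the hypothetical mu/x model above; trajectory_length is the only required argument:

samples = bm.SingleSiteHamiltonianMonteCarlo(trajectory_length=1.0).infer(
    queries=[mu()],
    observations={x(): torch.tensor(0.5)},
    num_samples=1000,
)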

get_proposers(world: beanmachine.ppl.world.world.World, target_rvs: Set[beanmachine.ppl.model.rv_identifier.RVIdentifier], num_adaptive_sample: int) → List[beanmachine.ppl.inference.proposer.base_proposer.BaseProposer]

Returns the proposer(s) corresponding to every non-observed variable in target_rvs. Should be implemented by the specific inference algorithm.

class beanmachine.ppl.SingleSiteNewtonianMonteCarlo(real_space_alpha: float = 10.0, real_space_beta: float = 1.0)

Bases: beanmachine.ppl.inference.base_inference.BaseInference

Single site Newtonian Monte Carlo [1]. This algorithm selects a proposer based on the support of the random variable. Valid supports include real, positive real, and simplex. Each site is proposed independently.

[1] Arora, Nim, et al. Newtonian Monte Carlo: single-site MCMC meets second-order gradient methods.

Parameters
  • real_space_alpha – Alpha value for the real-space proposer as specified in [1]. Defaults to 10.0.

  • real_space_beta – Beta value for the real-space proposer as specified in [1]. Defaults to 1.0.
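
A sketch on a hypothetical Gamma-Poisson model; NMC selects the proposer that matches each site's support (here, positive real):

import beanmachine.ppl as bm
import torch
import torch.distributions as dist

@bm.random_variable
def rate():  # positive real support
    return dist.Gamma(2.0, 2.0)

@bm.random_variable
def count():
    return dist.Poisson(rate())

samples = bm.SingleSiteNewtonianMonteCarlo().infer(
    queries=[rate()],
    observations={count(): torch.tensor(3.0)},
    num_samples=1000,
)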

get_proposers(world: beanmachine.ppl.world.world.World, target_rvs: Set[beanmachine.ppl.model.rv_identifier.RVIdentifier], num_adaptive_sample: int) → List[beanmachine.ppl.inference.proposer.base_proposer.BaseProposer]

Returns the proposer(s) corresponding to every non-observed variable in target_rvs. Should be implemented by the specific inference algorithm.

class beanmachine.ppl.SingleSiteNoUTurnSampler(max_tree_depth: int = 10, max_delta_energy: float = 1000.0, initial_step_size: float = 1.0, adapt_step_size: bool = True, adapt_mass_matrix: bool = True, full_mass_matrix: bool = False, multinomial_sampling: bool = True, target_accept_prob: float = 0.8, nnc_compile: bool = False, experimental_inductor_compile: bool = False)

Bases: beanmachine.ppl.inference.base_inference.BaseInference

Single site No U-turn sampler [1]. This sampler proposes a value for each random variable in the World one at a time and adaptively sets the hyperparameters of the HMC kernel.

[1] Hoffman and Gelman. The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo.

[2] Betancourt, Michael. A Conceptual Introduction to Hamiltonian Monte Carlo.

Parameters
  • max_tree_depth (int) – Maximum tree depth. Defaults to 10.

  • max_delta_energy (float) – Maximum delta energy (for numerical stability). Defaults to 1000.

  • initial_step_size (float) – Initial step size. Defaults to 1.0.

  • adapt_step_size (bool) – Whether to adapt the step size with dual averaging as suggested in [1]. Defaults to True.

  • adapt_mass_matrix (bool) – Whether to adapt the mass matrix. Defaults to True.

  • full_mass_matrix (bool) – Whether to adapt a full (dense) mass matrix rather than a diagonal one. Defaults to False.

  • multinomial_sampling (bool) – Whether to use multinomial sampling as in [2]. Defaults to True.

  • target_accept_prob (float) – Target acceptance probability. Increasing this value leads to a smaller step size. Defaults to 0.8.

  • nnc_compile – If True, the NNC compiler will be used to accelerate inference. Defaults to False.

  • experimental_inductor_compile – If True, TorchInductor will be used to accelerate inference. Defaults to False.
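
A usage sketch on the hypothetical mu/x model above; unlike GlobalNoUTurnSampler, each site gets its own NUTS kernel:

samples = bm.SingleSiteNoUTurnSampler().infer(
    queries=[mu()],
    observations={x(): torch.tensor(0.5)},
    num_samples=1000,
)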

get_proposers(world: beanmachine.ppl.world.world.World, target_rvs: Set[beanmachine.ppl.model.rv_identifier.RVIdentifier], num_adaptive_sample: int) → List[beanmachine.ppl.inference.proposer.base_proposer.BaseProposer]

Returns the proposer(s) corresponding to every non-observed variable in target_rvs. Should be implemented by the specific inference algorithm.

class beanmachine.ppl.SingleSiteRandomWalk(step_size: float = 1.0)

Bases: beanmachine.ppl.inference.base_inference.BaseInference

Single site random walk Metropolis-Hastings. This single site algorithm uses a Normal distribution proposer.

Parameters
  • step_size – Step size. Defaults to 1.0.
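
A usage sketch on the hypothetical mu/x model above; step_size sets the scale of the Normal proposal:

samples = bm.SingleSiteRandomWalk(step_size=0.5).infer(
    queries=[mu()],
    observations={x(): torch.tensor(0.5)},
    num_samples=2000,
)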

get_proposers(world: beanmachine.ppl.world.world.World, target_rvs: Set[beanmachine.ppl.model.rv_identifier.RVIdentifier], num_adaptive_sample: int) → List[beanmachine.ppl.inference.proposer.base_proposer.BaseProposer]

Returns the proposer(s) corresponding to every non-observed variable in target_rvs. Should be implemented by the specific inference algorithm.

class beanmachine.ppl.SingleSiteUniformMetropolisHastings

Bases: beanmachine.ppl.inference.single_site_inference.SingleSiteInference

Single site uniform Metropolis-Hastings. This single site algorithm proposes from a uniform distribution (uniform Categorical for discrete variables).
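
A sketch on a hypothetical model with a discrete site; the proposal is uniform over the site's support:

import beanmachine.ppl as bm
import torch
import torch.distributions as dist

@bm.random_variable
def category():
    return dist.Categorical(torch.tensor([0.1, 0.3, 0.6]))

@bm.random_variable
def z():
    means = torch.tensor([-1.0, 0.0, 1.0])
    return dist.Normal(means[category()], 1.0)

samples = bm.SingleSiteUniformMetropolisHastings().infer(
    queries=[category()],
    observations={z(): torch.tensor(0.9)},
    num_samples=1000,
)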

beanmachine.ppl.effective_sample_size(query_samples: torch.Tensor) → torch.Tensor

beanmachine.ppl.empirical(queries: List[beanmachine.ppl.model.rv_identifier.RVIdentifier], samples: beanmachine.ppl.inference.monte_carlo_samples.MonteCarloSamples, num_samples: Optional[int] = 1) → beanmachine.ppl.inference.monte_carlo_samples.MonteCarloSamples

Samples from the empirical (marginal) distribution of the queried variables.

Parameters
  • queries – list of random_variable queries to be sampled.

  • samples – MonteCarloSamples of the distribution.

  • num_samples – Number of samples to draw (with replacement). Defaults to 1.

Returns

MonteCarloSamples object containing the sampled random variables.
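
A usage sketch, resampling from the posterior of the hypothetical mu/x model above:

draws = bm.empirical(queries=[mu()], samples=samples, num_samples=10)
print(draws[mu()])  # 10 values drawn with replacement from the posterior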

beanmachine.ppl.functional(f: Callable[[beanmachine.ppl.model.statistical_model.P], torch.Tensor]) → Callable[[beanmachine.ppl.model.statistical_model.P], Union[beanmachine.ppl.model.rv_identifier.RVIdentifier, torch.Tensor]]

Decorator to be used for every query defined in a statistical model; queries are deterministic functions of random variables. E.g.:

@bm.random_variable
def foo():
  return Normal(0., 1.)

@bm.functional
def bar():
  return foo() * 2.0

beanmachine.ppl.param(init_fn)

Decorator to be used for params (variables to be optimized with VI):

@bm.param
def mu():
  return torch.zeros(2)

@bm.random_variable
def foo():
  return Normal(mu(), 1.)

beanmachine.ppl.r_hat(query_samples: torch.Tensor) → Optional[torch.Tensor]
beanmachine.ppl.random_variable(f: Callable[[beanmachine.ppl.model.statistical_model.P], torch.distributions.distribution.Distribution]) → Callable[[beanmachine.ppl.model.statistical_model.P], Union[beanmachine.ppl.model.rv_identifier.RVIdentifier, torch.Tensor]]

Decorator to be used for every stochastic random variable defined in all statistical models. E.g.:

@bm.random_variable
def foo():
  return Normal(0., 1.)

or, equivalently, without decorator syntax:

def foo():
  return Normal(0., 1.)
foo = bm.random_variable(foo)

beanmachine.ppl.seed(seed: int) → None
beanmachine.ppl.simulate(queries: List[beanmachine.ppl.model.rv_identifier.RVIdentifier], posterior: Optional[Union[beanmachine.ppl.inference.monte_carlo_samples.MonteCarloSamples, Dict[beanmachine.ppl.model.rv_identifier.RVIdentifier, torch.Tensor]]] = None, num_samples: Optional[int] = None, vectorized: Optional[bool] = False, progress_bar: Optional[bool] = True) → beanmachine.ppl.inference.monte_carlo_samples.MonteCarloSamples

Generates predictives from a generative model.

For example:

obs_queries = [likelihood(i) for i in range(10)]
posterior = SingleSiteHamiltonianMonteCarlo(10, 0.1).infer(...)
# generates one sample per world (same shape as `posterior` samples)
predictives = simulate(obs_queries, posterior=posterior)

To generate prior predictives:

queries = [prior(), likelihood()]  # specify the full generative model
# Monte Carlo samples of shape (num_samples, sample_shape)
predictives = simulate(queries, num_samples=1000)

Parameters
  • queries – list of random_variable queries corresponding to the observations.

  • posterior – Optional MonteCarloSamples or RVDict of the latent variables.

  • num_samples – Number of prior predictive samples. Defaults to 1. Should not be specified if posterior is specified.

Returns

MonteCarloSamples of the generated predictives.

beanmachine.ppl.split_r_hat(query_samples: torch.Tensor) → Optional[torch.Tensor]
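
A sketch of the convergence diagnostics, assuming samples comes from the hypothetical mu/x model above so that samples[mu()] has shape (num_chains, num_samples):

mu_samples = samples[mu()]
ess = bm.effective_sample_size(mu_samples)
rhat = bm.split_r_hat(mu_samples)  # may be None if the diagnostic cannot be computed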