a variational parameter to be optimized during variational inference.
Use `@bm.param` to decorate an "initialization function" which returns a
tensor value used to initialize the variational parameter at the start of optimization.
- `get_guide_distribution`: given an `RVIdentifier`, returns its corresponding guide distribution
- `get_param`: given a `Param`, returns the value of the parameter, initializing it first if it is empty
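A minimal sketch of these two lookups (the class and names below are illustrative, not the library's implementation):

```python
class Param:
    """Illustrative stand-in: a variational parameter with an init function."""
    def __init__(self, init_fn):
        self.init_fn = init_fn


class MiniVariationalWorld:
    def __init__(self, guides):
        self._guides = guides    # maps random-variable id -> guide distribution
        self._param_values = {}  # maps Param -> current value

    def get_guide_distribution(self, rv_id):
        # return the guide registered for this random variable, if any
        return self._guides.get(rv_id)

    def get_param(self, p):
        # initialize from the init function on first access ("if empty")
        if p not in self._param_values:
            self._param_values[p] = p.init_fn()
        return self._param_values[p]
```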
Note: as an implementation detail,
`update_graph` is overridden so that the
guide distribution is automatically used when one is available.
Gradient Estimators and Divergences
A gradient estimator computes a Monte Carlo (possibly surrogate) objective estimate whose gradients
are used as the training signal.
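For intuition, here is a toy score-function (REINFORCE) estimator of the ELBO gradient for a `N(mu, 1)` guide against a `N(0, 1)` target, using only the Python standard library (this is an illustrative estimator, not the library's):

```python
import math
import random


def logpdf_normal(z, mu, sigma):
    # log density of N(mu, sigma^2) at z
    return -0.5 * ((z - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))


def elbo_grad_estimate(mu, n=50000, seed=0):
    # Score-function (REINFORCE) surrogate: the ELBO gradient w.r.t. mu is
    # estimated as E_q[(log p(z) - log q(z)) * d/dmu log q(z; mu)],
    # where d/dmu log N(z; mu, 1) = z - mu.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(mu, 1.0)                              # sample from the guide
        f = logpdf_normal(z, 0.0, 1.0) - logpdf_normal(z, mu, 1.0)
        total += f * (z - mu)
    return total / n
```

For this toy problem the ELBO is `-mu**2 / 2` up to a constant, so the estimator should hover near `-mu`; high variance is the well-known cost of the score-function estimator.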
class provides an entrypoint for VI. Model and guide
`RVIdentifier`s are associated through the
`queries_to_guides` argument, and optimizer configuration is provided through the
`optimizer` callback. An
`infer()` method is provided for easy invocation, while lower-level methods
permit more customized interactions (e.g. TensorBoard callbacks).
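At its core, such an entrypoint wraps a stochastic-gradient loop over a Monte Carlo objective. A self-contained toy version, fitting a reparameterized `N(mu, 1)` guide to a `N(3, 1)` target (all names and the setup here are illustrative):

```python
import random


def fit_guide(target_mu=3.0, steps=200, lr=0.1, n=64, seed=0):
    # Toy VI loop: each step draws reparameterized samples z = mu + eps
    # and follows a Monte Carlo estimate of the ELBO gradient.
    rng = random.Random(seed)
    mu = 0.0  # variational parameter
    for _ in range(steps):
        grad = 0.0
        for _ in range(n):
            z = mu + rng.gauss(0.0, 1.0)  # reparameterized sample
            # pathwise gradient of log p(z) - log q(z); for this
            # parameterization the log q term is constant in mu
            grad += -(z - target_mu)
        mu += lr * grad / n  # gradient ascent on the ELBO estimate
    return mu
```

The loop converges to the target mean, mirroring what a packaged `infer()` call does behind the scenes (minus conveniences such as logging callbacks).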
Manually defining a guide for each random variable can become tedious.
AutoGuides provide an initialization strategy that
automatically defines guides by calling a method
`get_guide(query: RVIdentifier, distrib: dist.Distribution)` implemented by subclasses.
AutoGuides currently make a mean-field assumption over the latent variables.
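Concretely, the mean-field assumption means the joint guide factorizes into independent per-site guides:

$$
q(z_1, \ldots, z_n) = \prod_{i=1}^{n} q_i(z_i)
$$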
In Automatic Differentiation Variational Inference (ADVI), an appropriately sized Gaussian is used as a guide to approximate each site.
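In symbols (the bijection $T_i$ and per-site notation are introduced here for illustration), ADVI samples a Gaussian in unconstrained space and maps it onto each site's support:

$$
z_i = T_i(\zeta_i), \qquad \zeta_i \sim \mathcal{N}\big(\mu_i, \operatorname{diag}(\sigma_i^2)\big), \qquad T_i : \mathbb{R}^{d_i} \to \operatorname{supp}(z_i)
$$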
In Maximum A Posteriori (MAP) inference, a
point estimate is used as the guide for each site.
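Equivalently (notation introduced here), the guide at each site is a point mass at a learned location $\theta_i$, so in practice optimization reduces to maximizing the log joint density at $\theta$:

$$
q(z_i) = \delta_{\theta_i}(z_i)
$$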