LAMPE is a simulation-based inference (SBI) package that focuses on amortized estimation of posterior distributions, without relying on explicit likelihood functions; hence the name Likelihood-free AMortized Posterior Estimation (LAMPE). The package provides PyTorch implementations of modern amortized simulation-based inference algorithms, such as neural ratio estimation (NRE) and neural posterior estimation (NPE). Similar to PyTorch, the philosophy of LAMPE is to avoid obfuscation and expose all components, from the network architecture to the optimizer, so that users are free to modify or replace any of them.

As part of the inference pipeline, lampe also provides components to store and load simulation data efficiently, diagnose predictions, and display results graphically.


The lampe package is available on PyPI, which means it is installable via pip.

pip install lampe

Alternatively, if you need the latest features, you can install it from the repository.

pip install git+https://github.com/probabilists/lampe

Simulation-based inference

In many areas of science, computer simulators are used to describe complex phenomena such as high energy particle interactions, gravitational waves or neuronal ion-channel dynamics. These simulators are stochastic models, or programs, that generate synthetic observations according to input parameters. A common task for scientists is to use such models to perform statistical inference of the parameters given one or more observations. Unfortunately, simulators often feature high-dimensional parameter spaces and intractable likelihoods, making inference challenging.
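
The notion of a stochastic simulator can be illustrated with a toy example. The code below is a hypothetical simulator, not part of lampe: given a parameter \(\theta\) (a success probability), it samples a latent number of trials and returns the observed number of successes.

```python
import random

def simulator(theta):
    """Toy stochastic simulator (hypothetical example, not part of lampe).

    Given a success probability theta, sample a latent number of
    trials z and return the observed number of successes x.
    """
    z = random.randint(5, 15)  # latent variable: number of trials
    x = sum(random.random() < theta for _ in range(z))  # observation
    return x

random.seed(0)
samples = [simulator(0.3) for _ in range(5)]
```

Note that the simulator only lets us sample observations for a given \(\theta\); it does not expose the probability of producing a particular \(x\), which is precisely what makes the likelihood implicit.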

Formally, a stochastic model takes (a vector of) parameters \(\theta \in \Theta\) as input, samples internally a series \(z \in \mathcal{Z}\) of latent variables and, finally, produces an observation \(x \in \mathcal{X}\) distributed as \(p(x | \theta, z)\), thereby defining an implicit likelihood \(p(x | \theta)\). This likelihood is typically intractable, as it corresponds to the integral of the joint likelihood \(p(x, z | \theta)\) over all possible trajectories through the latent space \(\mathcal{Z}\). Moreover, in Bayesian inference, we are interested in the posterior
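
The marginalization over latent variables can be made concrete with a deliberately simple discrete model, where the sum over \(z\) is tractable and can be compared against a Monte Carlo estimate. The model below is a hypothetical illustration, not part of lampe.

```python
import random

# Hypothetical discrete toy model: latent z in {0, 1} with p(z=1) = 0.5,
# and x | theta, z is Bernoulli with success probability theta if z == 0
# and theta**2 if z == 1.
def p_x_given_theta_z(x, theta, z):
    p = theta if z == 0 else theta**2
    return p if x == 1 else 1 - p

def likelihood_exact(x, theta):
    # Marginalize the latent variable analytically: sum over z.
    return sum(0.5 * p_x_given_theta_z(x, theta, z) for z in (0, 1))

def likelihood_mc(x, theta, n=100_000):
    # Monte Carlo estimate: average p(x | theta, z) over sampled z.
    total = sum(p_x_given_theta_z(x, theta, random.randint(0, 1)) for _ in range(n))
    return total / n

random.seed(0)
exact = likelihood_exact(1, 0.4)   # 0.5 * 0.4 + 0.5 * 0.16 = 0.28
approx = likelihood_mc(1, 0.4)     # converges to the exact value
```

For realistic simulators, the latent space is continuous and high-dimensional, so neither the exact sum nor a naive Monte Carlo average over \(z\) is feasible, and \(p(x | \theta)\) remains out of reach.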

\[p(\theta | x) = \frac{p(x | \theta) p(\theta)}{p(x)} = \frac{p(x | \theta) p(\theta)}{\int_\Theta p(x | \theta') p(\theta') \operatorname{d}\!\theta'}\]

for some observation \(x\) and a prior distribution \(p(\theta)\), which involves not only the intractable likelihood \(p(x | \theta)\) but also an intractable integral over the parameter space \(\Theta\). The omnipresence of this problem has given rise to a rapidly expanding field of research referred to as simulation-based inference (SBI).
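
To see what the normalizing integral involves, consider a one-dimensional toy problem where everything is tractable: a Gaussian likelihood with a Gaussian prior. The sketch below, a hypothetical illustration not part of lampe, approximates the evidence \(p(x)\) by a Riemann sum over a parameter grid and normalizes the posterior numerically.

```python
import math

def normal_pdf(v, mean, std):
    return math.exp(-0.5 * ((v - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

# 1D toy problem: x | theta ~ Normal(theta, 1), prior theta ~ Normal(0, 1).
x_obs = 1.0
grid = [-5 + 10 * i / 1000 for i in range(1001)]  # dense grid over Theta
dtheta = grid[1] - grid[0]

unnorm = [normal_pdf(x_obs, t, 1.0) * normal_pdf(t, 0.0, 1.0) for t in grid]
evidence = sum(unnorm) * dtheta             # Riemann sum approximating p(x)
posterior = [u / evidence for u in unnorm]  # p(theta | x) on the grid

# In this conjugate Gaussian case, the exact posterior is
# Normal(x_obs / 2, 1 / sqrt(2)), so the grid mode should sit near 0.5.
mode = grid[max(range(len(grid)), key=lambda i: posterior[i])]
```

Grid-based normalization like this scales exponentially with the dimension of \(\Theta\), which is why SBI methods instead train neural networks, amortized over observations, to approximate the posterior or the likelihood ratio directly.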


References

The frontier of simulation-based inference (Cranmer et al., 2020)
Approximating Likelihood Ratios with Calibrated Discriminative Classifiers (Cranmer et al., 2015)
Likelihood-free MCMC with Amortized Approximate Ratio Estimators (Hermans et al., 2019)
Fast Likelihood-free Inference with Autoregressive Flows (Papamakarios et al., 2018)
Automatic Posterior Transformation for Likelihood-Free Inference (Greenberg et al., 2019)