Latent Diffusion Priors for Physics-Based Inverse Problems

TL;DR

Train a latent diffusion model as a learned prior for a Bayesian inverse problem and run posterior inference in its latent space; the takeaway is that variational inference (BBVI) can be more practical than naive Metropolis MCMC in this setting.

Problem setting

Inverse problems aim to recover unknown inputs to a forward model from observations. The paper focuses on a Bayesian formulation and highlights the challenges of selecting a good prior and sampling in high dimensions.

At a high level, the Bayesian inverse problem can be summarized as:

$$p(x \mid y) \propto p(y \mid x)\,p(x)$$
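
As a concrete reading of this proportionality, the unnormalized log-posterior is just the sum of a log-likelihood and a log-prior. The sketch below assumes an additive-Gaussian observation model; `forward_model`, `log_prior`, and `sigma` are illustrative placeholders, not names from the paper.

```python
import torch

def log_posterior_unnorm(x, y, forward_model, log_prior, sigma=0.1):
    """log p(x | y) up to an additive constant: log p(y | x) + log p(x).

    Assumes y = F(x) + eps with eps ~ N(0, sigma^2 I); all names here are
    placeholders for the paper's forward operator, prior, and noise level.
    """
    residual = y - forward_model(x)
    log_lik = -0.5 * (residual ** 2).sum() / sigma ** 2  # Gaussian log-likelihood, up to a constant
    return log_lik + log_prior(x)
```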

Key idea

Train a latent diffusion model as a prior, then perform inference in latent space rather than directly in the original parameter space. This makes it possible to leverage a powerful learned prior while keeping inference tractable.
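
One way to make this concrete (an illustrative construction, not necessarily the paper's exact parameterization) is to treat the learned generator G, i.e. reverse diffusion from Gaussian noise followed by the decoder, as a deterministic map. The latent z then carries a standard-normal prior and inference targets p(z | y) instead of p(x | y):

```python
import torch

def latent_log_posterior(z, y, generator, forward_model, sigma=0.1):
    """Unnormalized log p(z | y) when x = G(z) and z ~ N(0, I).

    `generator` is assumed to wrap the reverse diffusion plus the decoder as a
    deterministic map; an illustrative construction, not the paper's exact setup.
    """
    x = generator(z)                                # push the latent through the learned prior
    residual = y - forward_model(x)                 # additive-Gaussian observation model (assumption)
    log_lik = -0.5 * (residual ** 2).sum() / sigma ** 2
    log_prior = -0.5 * (z ** 2).sum()               # standard-normal prior on the latent
    return log_lik + log_prior
```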

Method (high level)

  1. Train a latent diffusion model on data.
  2. Use the learned generator as the prior in a Bayesian inverse problem.
  3. Compare black-box variational inference (BBVI) and Metropolis MCMC for posterior inference in latent space (a sketch of both appears below).
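
The sketch below gives minimal versions of the two inference routes from step 3, reusing the latent log-posterior from the previous snippet: black-box variational inference with a diagonal-Gaussian q(z), and random-walk Metropolis in latent space. The variational family, step sizes, and iteration counts are illustrative choices, not the paper's reported settings.

```python
import torch

def bbvi(log_post, dim, steps=2000, lr=1e-2, num_samples=8):
    """Black-box VI: fit a diagonal-Gaussian q(z) by maximizing a Monte Carlo ELBO."""
    mu = torch.zeros(dim, requires_grad=True)
    log_std = torch.zeros(dim, requires_grad=True)
    opt = torch.optim.Adam([mu, log_std], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        eps = torch.randn(num_samples, dim)
        z = mu + eps * log_std.exp()                       # reparameterization trick
        # ELBO = E_q[log p(z, y)] + entropy of q (diagonal-Gaussian entropy, up to a constant)
        elbo = torch.stack([log_post(zi) for zi in z]).mean() + log_std.sum()
        (-elbo).backward()
        opt.step()
    return mu.detach(), log_std.exp().detach()

def metropolis(log_post, dim, steps=10_000, step_size=0.05):
    """Random-walk Metropolis targeting the same unnormalized latent posterior."""
    z = torch.zeros(dim)
    samples = []
    with torch.no_grad():
        lp = log_post(z)
        for _ in range(steps):
            prop = z + step_size * torch.randn(dim)
            lp_prop = log_post(prop)
            if torch.rand(()).log() < lp_prop - lp:        # accept with probability min(1, ratio)
                z, lp = prop, lp_prop
            samples.append(z.clone())
    return torch.stack(samples)
```

For example, `bbvi(lambda z: latent_log_posterior(z, y, generator, forward_model), dim=64)` would return the fitted variational mean and standard deviation for a 64-dimensional latent (again using the placeholder names from the previous snippet).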

Latent diffusion prior pipeline

(Figures: latent diffusion architecture; autoencoder reconstruction.)
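
As background for these figures: the first stage of a latent diffusion pipeline trains an autoencoder so that x ≈ D(E(x)), and the diffusion prior is then learned on the latents z = E(x). The toy modules below are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Placeholder encoder E: x -> z."""
    def __init__(self, x_dim=1024, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.SiLU(), nn.Linear(256, z_dim))

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Placeholder decoder D: z -> x."""
    def __init__(self, z_dim=64, x_dim=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.SiLU(), nn.Linear(256, x_dim))

    def forward(self, z):
        return self.net(z)

def reconstruction_loss(enc, dec, x):
    """Plain MSE reconstruction objective for the autoencoder stage."""
    return ((x - dec(enc(x))) ** 2).mean()
```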

The DDPM training objective used in the derivation can be written as:

$$L = \mathbb{E}_{t,\, x_0,\, \epsilon} \left\| \epsilon - \epsilon_\theta(x_t, t) \right\|^2$$
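
A minimal implementation of this objective, written over latents to match the pipeline above; the linear beta schedule, T = 1000, and the `eps_model` interface are assumptions for illustration.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # linear noise schedule (assumption)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(eps_model, z0):
    """E_{t, z0, eps} || eps - eps_theta(z_t, t) ||^2 with
    z_t = sqrt(alpha_bar_t) * z0 + sqrt(1 - alpha_bar_t) * eps."""
    b = z0.shape[0]
    t = torch.randint(0, T, (b,))                   # uniform timestep per sample
    eps = torch.randn_like(z0)
    a_bar = alphas_bar[t].view(b, *([1] * (z0.dim() - 1)))
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps
    return ((eps - eps_model(z_t, t)) ** 2).mean()  # noise-prediction MSE
```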

Evidence

The paper reports comparisons of BBVI and Metropolis MCMC for posterior inference in latent space; see the takeaway below for the overall conclusion.

Limitations and open points

Takeaway

The project illustrates how latent diffusion priors can be integrated into Bayesian inversion and suggests that variational methods can be more practical than naive MCMC in this setting.