Expectation-Maximization & Belief Propagation

1 Expectation-Maximization & Belief Propagation
Alan Yuille Dept. Statistics UCLA

2 Goal of this Talk. The goal is to introduce the Expectation-Maximization (EM) and Belief Propagation (BP) algorithms. EM is one of the major algorithms used for inference in models with hidden/missing/latent variables.

3 Example: Geman and Geman

4 Images are piecewise smooth
Assume that images are smooth except at sharp discontinuities (edges). The justification comes from the statistics of real images (Zhu & Mumford).

5 Graphical Model & Potential
The graphical model is an undirected graph: a Markov model with hidden (line-process) variables. The potential: if the gradient of u becomes too large, the line process is activated and the smoothness term is cut.
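The slide's equations are not reproduced in the transcript. A standard form of the coupled energy for this weak-membrane / line-process model, in notation assumed here rather than taken from the slide, is

    E(u, l) = \sum_i (u_i - d_i)^2 + \lambda \sum_{(i,j)} (1 - l_{ij}) (u_i - u_j)^2 + \mu \sum_{(i,j)} l_{ij},

where u is the reconstructed image, d is the observed data, and l_{ij} \in \{0, 1\} is the line process on edge (i,j). When (u_i - u_j)^2 is large it is cheaper to pay the penalty \mu and set l_{ij} = 1, which switches off (cuts) the smoothness term across that edge.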

6 The Posterior Distribution
We apply Bayes rule to get a posterior distribution:
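(The posterior itself is missing from the transcript; assuming the energy written above, Bayes' rule gives)

    P(u, l \mid d) = \frac{P(d \mid u)\, P(u, l)}{P(d)} \propto \exp\{-E(u, l)\},

so the data term supplies the likelihood and the smoothness and line-process terms supply the prior.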

7 Line Process: Off and On
Illustration of line processes: the line process is off where there is no edge and on where there is an edge.

8 Choice of Task. What do we want to estimate?

9 Expectation Maximization.

10 Expectation-Maximization
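The equations for slides 9-10 are not in the transcript. A standard statement of the EM iteration, with observed data d, hidden variables l, and unknowns \theta (the slides presumably use their own notation), is:

    E-step:  Q(\theta, \theta^t) = \sum_l P(l \mid d, \theta^t) \log P(d, l \mid \theta)
    M-step:  \theta^{t+1} = \arg\max_\theta Q(\theta, \theta^t)

Each iteration cannot decrease the likelihood P(d \mid \theta), so under mild conditions EM converges to a stationary point of the likelihood.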

11 Back to the Geman & Geman model
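The transcript again omits the equations. Assuming the energy written earlier, EM for this model treats the line process l as the hidden variable and the image u as the quantity to be estimated, giving (as a sketch, not necessarily the slide's exact formulas):

    E-step:  \bar{l}_{ij} = P(l_{ij} = 1 \mid u, d) = \sigma\big(\lambda (u_i - u_j)^2 - \mu\big),  where \sigma(z) = 1/(1 + e^{-z})  (analytic)
    M-step:  minimize  \sum_i (u_i - d_i)^2 + \lambda \sum_{(i,j)} (1 - \bar{l}_{ij})(u_i - u_j)^2  over u,

which is quadratic in u and therefore reduces to solving linear equations, as noted in the summary slide later.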

12 Image Example

13 Neural Networks and the Brain
An early variant of this algorithm was formulated as a Hopfield network (Koch, Marroquin, and Yuille, 1987). It is just possible that a variant of this algorithm is implemented in V1 (Prof. Tai Sing Lee, CMU).

14 EM for Mixture of two Gaussians
A mixture model is of the form:
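(The formula itself is missing from the transcript; for two Gaussians it is presumably the standard

    P(x) = \pi_1\, \mathcal{N}(x \mid \mu_1, \sigma_1^2) + \pi_2\, \mathcal{N}(x \mid \mu_2, \sigma_2^2), \qquad \pi_1 + \pi_2 = 1,

where the hidden variable is the unknown assignment of each observation to one of the two components.)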

15 EM for a Mixture of two Gaussians
Each observation has been generated by one of two Gaussians, but we do not know the parameters (i.e. mean and variance) of the Gaussians, and we do not know which Gaussian generated each observation. Colours indicate the assignment of points to clusters (red and blue); intermediates (e.g. purple) represent probabilistic assignments. The ellipses represent the current parameter values of each cluster.
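As a concrete illustration (not taken from the slides), here is a minimal EM sketch for a mixture of two 1-D Gaussians; the function name, the initialization, and the fixed iteration count are my own choices:

```python
import numpy as np

def em_two_gaussians(x, n_iters=100):
    """EM for a mixture of two 1-D Gaussians (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    # Crude initialization: put the two means at the lower and upper quartiles.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([np.var(x), np.var(x)])
    pi = np.array([0.5, 0.5])  # mixing weights
    for _ in range(n_iters):
        # E-step: responsibilities r[n, k] = P(component k | x[n], current params).
        lik = np.stack(
            [pi[k] * np.exp(-0.5 * (x - mu[k]) ** 2 / var[k])
             / np.sqrt(2.0 * np.pi * var[k]) for k in range(2)],
            axis=1)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights, means, and variances.
        Nk = r.sum(axis=0)
        pi = Nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    return pi, mu, var

# Example: data drawn from two well-separated Gaussians.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
print(em_two_gaussians(data))
```

The E-step computes the soft (probabilistic) assignments described above, and the M-step re-fits each Gaussian to its softly assigned points.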

16 Expectation-Maximization: Summary
We can apply EM to any inference problem with hidden variables. The following limitations apply: (1) Can we perform the E and M steps? For the image problem, the E step was analytic and the M step required solving linear equations. (2) Does the algorithm converge to the global maximum of P(u|d)? This is true for some problems, but not for all.

17 Expectation Maximization: Summary
For an important class of problems, EM has a nice symbiotic relationship with dynamic programming (see next lecture). Mathematically, the EM algorithm falls into a class of optimization techniques known as Majorization (Statistics) and Variational Bounding (Machine Learning). Majorization (De Leeuw) is considerably older…

18 Belief Propagation (BP) and Message Passing
BP is an inference algorithm that is exact for graphical models defined on trees. It is similar to dynamic programming (see next lecture). It is often known as “loopy BP” when applied to graphs with closed loops. Empirically, it is often a successful approximate algorithm for graphs with closed loops. But it tends to degrade badly when the number of closed loops increases.

19 BP and Message Passing We define a distribution on an undirected graph (written out below).
BP comes in two forms: (I) sum-product, and (II) max-product. Sum product (Pearl) is used for estimating the marginal distributions of the variables x.
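The slide's formula is not in the transcript; a standard pairwise form, in notation assumed here and reused below, is

    P(x) = \frac{1}{Z} \prod_{(i,j)} \psi_{ij}(x_i, x_j) \prod_i \phi_i(x_i),

where the products run over the edges and nodes of the undirected graph and Z is the normalization constant.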

20 Message Passing: Sum Product
Sum-product proceeds by passing messages between nodes.
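The update equation itself is missing from the transcript. In the \psi, \phi notation above, the standard sum-product message from node i to a neighbouring node j is

    m_{ij}(x_j) \leftarrow \sum_{x_i} \psi_{ij}(x_i, x_j)\, \phi_i(x_i) \prod_{k \in N(i) \setminus j} m_{ki}(x_i),

i.e. node i sums over its own states, weighting each by its local evidence and by the messages arriving from its other neighbours.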

21 Message Passing: Max Product
The max-product algorithm (Gallager) also uses messages but it replaces the sum by a max. The update rule is:
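(The equation is not in the transcript; in the same notation the standard max-product update is)

    m_{ij}(x_j) \leftarrow \max_{x_i} \psi_{ij}(x_i, x_j)\, \phi_i(x_i) \prod_{k \in N(i) \setminus j} m_{ki}(x_i).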

22 Beliefs and Messages We construct “beliefs” (estimates of the marginal probabilities) from the messages; the construction is written out below. For graphical models defined on trees (i.e. no closed loops): (i) sum-product will converge to the marginals of the distribution P(x); (ii) max-product converges to the maximum-probability states of P(x). But this is not very special, because other algorithms also do this (see next lecture).
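The belief construction referred to above is, in the standard (assumed) notation:

    b_i(x_i) \propto \phi_i(x_i) \prod_{k \in N(i)} m_{ki}(x_i),
    b_{ij}(x_i, x_j) \propto \psi_{ij}(x_i, x_j)\, \phi_i(x_i)\, \phi_j(x_j) \prod_{k \in N(i) \setminus j} m_{ki}(x_i) \prod_{k \in N(j) \setminus i} m_{kj}(x_j).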

23 Loopy BP The major interest in BP is that it performs well empirically when applied to graphs with closed loops. But: (i) convergence is not guaranteed (the algorithm can oscillate); (ii) the resulting beliefs are only approximations to the correct marginals.

24 Bethe Free Energy There is one major theoretical result (Yedidia et al.): the fixed points of BP correspond to extrema of the Bethe free energy. The Bethe free energy is one of a set of approximations to the free energy.
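For reference (the formula is not reproduced in the transcript), the Bethe free energy of a pairwise model, written in terms of the beliefs, is usually stated as

    F_{Bethe}(b) = \sum_{(i,j)} \sum_{x_i, x_j} b_{ij}(x_i, x_j) \log \frac{b_{ij}(x_i, x_j)}{\psi_{ij}(x_i, x_j)\, \phi_i(x_i)\, \phi_j(x_j)} - \sum_i (n_i - 1) \sum_{x_i} b_i(x_i) \log \frac{b_i(x_i)}{\phi_i(x_i)},

where n_i is the number of neighbours of node i, and the beliefs are constrained to be normalized and pairwise-consistent.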

25 BP without messages. Use the beliefs to construct local approximations B(.) to the distribution. Update the beliefs by repeated marginalization.

26 BP without messages Local approximations (consistent on trees).
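Concretely (in the notation used above), the local approximations are the single-node and pairwise beliefs, and on a tree they are consistent under marginalization:

    b_i(x_i) = \sum_{x_j} b_{ij}(x_i, x_j)   for every edge (i, j).

The message-free updates of the previous slide repeatedly impose this marginalization consistency until the beliefs stop changing.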

27 Another Viewpoint of BP
There is also a relationship between BP and Markov Chain Monte Carlo (MCMC). BP is like a deterministic form of the Gibbs sampler. MCMC will be described in later lectures.

28 Summary of BP BP gives exact results on trees (similar to dynamic programming). BP gives surprisingly good approximate results on graphs with loops. No guarantees of convergence, but fixed points of BP correspond to extrema of the Bethe Free energy. BP can be formulated without messages. BP is like a deterministic version of the Gibbs sampler in MCMC.

