Expectation-Maximization

Presentation on theme: "Expectation-Maximization"— Presentation transcript:

1 Expectation-Maximization
Markoviana Reading Group, Fatih Gelgi, ASU, 2005

2 Outline
What is EM?
Intuitive Explanation
Example: Gaussian Mixture
Algorithm
Generalized EM
Discussion
Applications: HMM – Baum-Welch, K-means

3 What is EM? Two main applications:
Data has missing values, due to problems with or limitations of the observation process.
Optimizing the likelihood function is extremely hard, but the likelihood function can be simplified by assuming the existence of and values for additional missing or hidden parameters.

4 Key Idea…
The observed data U is generated by some distribution and is called the incomplete data.
Assume that a complete data set exists, Z = (U, J), where J is the missing or hidden data.
Maximize the posterior probability of the parameters Θ given the data U, marginalizing over J:
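The marginalization itself is not reproduced in the transcript; in the standard formulation (see Dellaert, 2002) it reads:
$$
\Theta^{*} \;=\; \arg\max_{\Theta} P(\Theta \mid U) \;=\; \arg\max_{\Theta} \sum_{J} P(\Theta, J \mid U).
$$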

5 Intuitive Explanation of EM
Alternate between estimating the unknowns Θ and the hidden variables J.
In each iteration, instead of finding the single best J, compute a distribution over the space of possible J.
EM is a lower-bound maximization process (Minka, 1998).
E-step: construct a local lower bound to the posterior distribution.
M-step: optimize the bound.
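As an illustration of this alternation, here is a minimal generic EM skeleton in Python; the callables e_step, m_step, and log_posterior and the convergence test are assumptions made for the sketch, not part of the slides.

```python
def em(e_step, m_step, log_posterior, theta0, max_iter=100, tol=1e-6):
    """Generic EM loop: alternately compute a distribution over the hidden
    variables J (E-step) and re-estimate the parameters (M-step)."""
    theta = theta0
    prev = log_posterior(theta)
    for _ in range(max_iter):
        f_t = e_step(theta)        # distribution over hidden variables J
        theta = m_step(f_t)        # maximize the lower bound B(theta; theta_t)
        cur = log_posterior(theta)
        if abs(cur - prev) < tol:  # objective has stopped improving
            break
        prev = cur
    return theta
```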

6 Intuitive Explanation of EM
Lower-bound approximation method: sometimes provides faster convergence than gradient descent and Newton's method.

7 Example: Mixture Components

8 Example (cont’d): True Likelihood of Parameters

9 Example (cont’d): Iterations of EM
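The mixture example on slides 7–9 is shown only as figures; below is a minimal NumPy sketch of EM for a two-component, one-dimensional Gaussian mixture in the spirit of those figures. The initialization, iteration count, and synthetic data are assumptions made for illustration.

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # Univariate normal density.
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_gmm(x, n_iter=50):
    # Initial guesses for mixing weights, means, variances (two components).
    pi = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibilities f^t(J) = P(component j | x_i, current params).
        r = np.stack([pi[j] * gaussian_pdf(x, mu[j], var[j]) for j in range(2)], axis=1)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters by maximizing the expected log-joint.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Synthetic data drawn from two Gaussians (illustrative only).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
print(em_gmm(x))
```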

10 Lower-bound Maximization
Posterior probability and the logarithm of the joint distribution: maximizing log P(Θ|U) over Θ is equivalent to maximizing log P(U,Θ), which is difficult to do directly because it involves the log of a sum over the hidden data J.
Idea: start with a guess Θt, compute an easily computed lower bound B(Θ; Θt) to the function log P(Θ|U), and maximize that bound instead.

11 Lower-bound Maximization (cont.)
Construct a tractable lower bound B(Θ; Θt) that contains a sum of logarithms.
ft(J) is an arbitrary probability distribution over the hidden data J.
By Jensen's inequality:
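The bound itself does not survive in the transcript; reconstructed in the notation above (following Minka, 1998 and Dellaert, 2002):
$$
\log P(U,\Theta) \;=\; \log \sum_{J} P(U,J,\Theta)
\;=\; \log \sum_{J} f^{t}(J)\,\frac{P(U,J,\Theta)}{f^{t}(J)}
\;\ge\; \sum_{J} f^{t}(J)\,\log \frac{P(U,J,\Theta)}{f^{t}(J)}
\;=\; B(\Theta;\Theta^{t}).
$$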

12 Optimal Bound
B(Θ; Θt) touches the objective function log P(U,Θ) at Θt.
Maximize B(Θt; Θt) with respect to ft(J); introduce a Lagrange multiplier λ to enforce the constraint that ft(J) sums to one.
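The Lagrangian is omitted in the transcript; a reconstruction consistent with the constraint just stated:
$$
\Lambda \;=\; \lambda\Big(1 - \sum_{J} f^{t}(J)\Big) \;+\; \sum_{J} f^{t}(J)\,\log\frac{P(U,J,\Theta^{t})}{f^{t}(J)}.
$$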

13 Optimal Bound (cont.)
Take the derivative with respect to ft(J) and set it to zero; the bound is maximized at:
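The derivative and the maximizer are not reproduced in the transcript; the standard result is that the optimal ft(J) is the posterior over the hidden data:
$$
\frac{\partial \Lambda}{\partial f^{t}(J)} \;=\; \log P(U,J,\Theta^{t}) - \log f^{t}(J) - 1 - \lambda \;=\; 0
\quad\Longrightarrow\quad
f^{t}(J) \;=\; \frac{P(U,J,\Theta^{t})}{\sum_{J'} P(U,J',\Theta^{t})} \;=\; P(J \mid U, \Theta^{t}).
$$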

14 Maximizing the Bound
Re-write B(Θ; Θt) in terms of expectations under ft(J) = P(J | U, Θt); only the expected log-joint depends on Θ, so the M-step maximizes that expectation.
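The omitted formulas, reconstructed from the standard derivation:
$$
B(\Theta;\Theta^{t}) \;=\; \sum_{J} f^{t}(J)\,\log P(U,J,\Theta) \;-\; \sum_{J} f^{t}(J)\,\log f^{t}(J)
\;=\; E\big[\log P(U,J,\Theta)\,\big|\,U,\Theta^{t}\big] \;+\; H\big(f^{t}\big),
$$
where the entropy term H(ft) does not depend on Θ. Finally,
$$
\Theta^{t+1} \;=\; \arg\max_{\Theta}\, E\big[\log P(U,J,\Theta)\,\big|\,U,\Theta^{t}\big].
$$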

15 EM Algorithm
EM converges to a local maximum of log P(U,Θ), and hence to a local maximum of log P(Θ|U).
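The algorithm box on this slide is not reproduced; in the notation of the previous slides, the two alternating steps are:
E-step: set $f^{t}(J) = P(J \mid U, \Theta^{t})$ and compute $Q(\Theta;\Theta^{t}) = E[\log P(U,J,\Theta) \mid U, \Theta^{t}]$.
M-step: set $\Theta^{t+1} = \arg\max_{\Theta} Q(\Theta;\Theta^{t})$.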

16 A Relation to the Log-Posterior
An alternative is to compute the expected log-posterior directly, which yields the same maximization with respect to Θ:
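The expression is omitted in the transcript; a reconstruction in the notation above:
$$
Q'(\Theta;\Theta^{t}) \;=\; E\big[\log P(\Theta, J \mid U)\,\big|\,U,\Theta^{t}\big],
$$
which differs from $E[\log P(U,J,\Theta) \mid U, \Theta^{t}]$ only by the constant $\log P(U)$, so maximizing it over Θ gives the same Θt+1.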

17 Generalized EM
Assume the likelihood and the B function are differentiable in Θ. Then EM converges to a stationary point of the likelihood.
GEM: instead of setting Θt+1 = argmaxΘ B(Θ; Θt), just find Θt+1 such that B(Θt+1; Θt) > B(Θt; Θt).
GEM is also guaranteed to converge.

18 HMM – Baum-Welch Revisited
Estimate the parameters (a, b, π) such that the expected number of correct individual states is maximized.
γt(i) is the probability of being in state Si at time t.
ξt(i,j) is the probability of being in state Si at time t and in state Sj at time t+1.
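The formulas for these quantities are not in the transcript; in the standard forward-backward (Rabiner-style) notation, with α and β the forward and backward variables:
$$
\gamma_t(i) \;=\; \frac{\alpha_t(i)\,\beta_t(i)}{\sum_{j}\alpha_t(j)\,\beta_t(j)},
\qquad
\xi_t(i,j) \;=\; \frac{\alpha_t(i)\,a_{ij}\,b_j(O_{t+1})\,\beta_{t+1}(j)}{P(O \mid a, b, \pi)}.
$$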

19 Baum-Welch: E-step
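The E-step content on this slide is an image; below is a minimal NumPy sketch of the corresponding forward-backward computation of γ and ξ. It uses the standard unscaled recursions, so it is only suitable for short sequences; all function and variable names are assumptions for illustration.

```python
import numpy as np

def baum_welch_e_step(obs, pi, A, B):
    """E-step of Baum-Welch: forward-backward pass, then gamma and xi.
    obs: integer observation sequence; pi: initial distribution (N,);
    A: transition matrix (N, N); B: emission matrix (N, M)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    # Forward recursion.
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    # Backward recursion.
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    likelihood = alpha[T - 1].sum()  # P(O | model)
    # gamma_t(i): probability of being in state i at time t.
    gamma = alpha * beta / likelihood
    # xi_t(i,j): probability of being in state i at t and state j at t+1.
    xi = (alpha[:-1, :, None] * A[None, :, :]
          * (B[:, obs[1:]].T * beta[1:])[:, None, :]) / likelihood
    return gamma, xi

# Toy 2-state, 2-symbol model (illustrative numbers only).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = np.array([0, 1, 0, 0, 1])
gamma, xi = baum_welch_e_step(obs, pi, A, B)
print(gamma.shape, xi.shape)  # (5, 2) (4, 2, 2)
```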

20 Baum-Welch: M-step
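The re-estimation formulas on this slide are likewise only shown as an image; the standard M-step updates are:
$$
\bar{\pi}_i = \gamma_1(i),
\qquad
\bar{a}_{ij} = \frac{\sum_{t=1}^{T-1}\xi_t(i,j)}{\sum_{t=1}^{T-1}\gamma_t(i)},
\qquad
\bar{b}_j(k) = \frac{\sum_{t:\,O_t = k}\gamma_t(j)}{\sum_{t=1}^{T}\gamma_t(j)}.
$$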

21 K-Means
Problem: given data X and the number of clusters K, find the clusters.
Clustering is based on centroids: a point belongs to the cluster with the closest centroid.
Hidden variables: the centroids of the clusters!

22 K-Means (cont.)
Starting with initial centroids Θ0:
E-step: split the data into K clusters according to their distances to the centroids (calculate the distribution ft(J)).
M-step: update the centroids (calculate Θt+1).
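A minimal NumPy sketch of this E/M alternation, using hard assignments rather than a full distribution ft(J); the initialization, data, and names are assumptions for illustration.

```python
import numpy as np

def kmeans(x, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Pick initial centroids ("seeds") from the data points.
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(n_iter):
        # E-step analogue: assign each point to its closest centroid.
        dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # M-step analogue: move each centroid to the mean of its cluster.
        # (A production version would also guard against empty clusters.)
        new_centroids = np.array([x[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break  # converged
        centroids = new_centroids
    return centroids, labels

# Tiny 2-D example (illustrative data).
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
print(kmeans(x, k=2)[0])
```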

23 K-Means Example (K=2)
Pick seeds, reassign clusters, compute centroids; repeat until converged!

24 Discussion
Is EM a Primal-Dual algorithm?

25 References
A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm", Journal of the Royal Statistical Society, Series B (Methodological), Vol. 39, No. 1 (1977), pp. 1–38.
F. Dellaert, "The Expectation Maximization Algorithm", Tech. Rep. GIT-GVU-02-20, Georgia Institute of Technology, 2002.
T. Minka, "Expectation-Maximization as lower bound maximization", 1998.
Y. Chang and M. Kölsch, Presentation: Expectation Maximization, UCSB, 2002.
K. Andersson, Presentation: Model Optimization using the EM algorithm, COSC 7373, 2001.

26 Thanks!

