
1
Expectation Maximization
Dekang Lin
Department of Computing Science, University of Alberta

2
Objectives
Expectation Maximization (EM) is perhaps the most often used, and most often only half understood, algorithm for unsupervised learning.
It is very intuitive.
Many people rely on their intuition to apply the algorithm in different problem domains.
I will present a proof of the EM Theorem that explains why the algorithm works.
Hopefully this will help in applying EM when intuition alone is not enough.

3
Model Building with Partial Observations
Our goal is to build a probabilistic model.
A model is defined by a set of parameters θ.
The model parameters can be estimated from a set of training examples: x_1, x_2, …, x_n.
The x_i's are independently and identically distributed (iid).
Unfortunately, we only get to observe part of each training example: x_i = (t_i, y_i), and we can only observe y_i.
How do we build the model?

4
Example: POS Tagging
Complete data: a sentence (a sequence of words) and a corresponding sequence of POS tags.
Observed data: the sentence.
Unobserved data: the sequence of tags.
Model: an HMM with transition/emission probability tables.

5
Training with a Tagged Corpus
Pierre/NNP Vinken/NNP ,/, 61/CD years/NNS old/JJ ,/, will/MD join/VB the/DT board/NN as/IN a/DT nonexecutive/JJ director/NN Nov./NNP 29/CD ./.
Mr./NNP Vinken/NNP is/VBZ chairman/NN of/IN Elsevier/NNP N.V./NNP ,/, the/DT Dutch/NNP publishing/VBG group/NN ./.
Rudolph/NNP Agnew/NNP ,/, 55/CD years/NNS old/JJ and/CC former/JJ chairman/NN of/IN Consolidated/NNP Gold/NNP Fields/NNP PLC/NNP ,/, was/VBD named/VBN a/DT nonexecutive/JJ director/NN of/IN this/DT British/JJ industrial/JJ conglomerate/NN ./.
With tagged data, the parameters are normalized counts, e.g.: c(JJ) = 7, c(JJ, NN) = 4, so P(NN | JJ) = 4/7.
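With fully tagged data, estimation is just counting and normalizing. A minimal sketch of that counting on the first sentence above (the variable and function names are illustrative, not from the slides):

```python
from collections import Counter

# (word, tag) pairs from the slide's first example sentence.
tagged = [("Pierre", "NNP"), ("Vinken", "NNP"), (",", ","), ("61", "CD"),
          ("years", "NNS"), ("old", "JJ"), (",", ","), ("will", "MD"),
          ("join", "VB"), ("the", "DT"), ("board", "NN"), ("as", "IN"),
          ("a", "DT"), ("nonexecutive", "JJ"), ("director", "NN"),
          ("Nov.", "NNP"), ("29", "CD"), (".", ".")]

tags = [t for _, t in tagged]
unigrams = Counter(tags)                # c(t1)
bigrams = Counter(zip(tags, tags[1:]))  # c(t1, t2)

def transition_prob(t1, t2):
    """MLE estimate P(t2 | t1) = c(t1, t2) / c(t1)."""
    return bigrams[(t1, t2)] / unigrams[t1]

print(transition_prob("JJ", "NN"))  # 0.5
```

On this one sentence, JJ occurs twice and is followed by NN once, giving 1/2; over the slide's full three-sentence corpus the counts become c(JJ) = 7 and c(JJ, NN) = 4, giving 4/7.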

6
Example: Parsing
Complete data: a sentence and its parse tree.
Observed data: the sentence.
Unobserved data: the nonterminal categories and their relationships that form the parse tree.
Model: a PCFG, or anything that allows one to compute the probability of parse trees.

7
Example: Semantic Labeling
Complete data: (context, cluster, word).
Observed data: (context, word).
Unobserved data: cluster.
Model: P(context, cluster, word) = P(context) P(cluster | context) P(word | cluster).

8
What is the Best Model?
There are many possible models, i.e., many possible ways to set the model parameters.
We obviously want the "best" model. Which model is the best?
The model that assigns the highest probability to the observations is the best.
Maximize Π_i P_θ(y_i), or equivalently Σ_i log P_θ(y_i).
This is known as maximum likelihood estimation (MLE).
What about maximizing the probability of the hidden data? We cannot evaluate it: the hidden data is never observed.

9
MLE Example
A coin with P(H) = p, P(T) = q. We observed m H's and n T's. What are p and q according to MLE?
Maximize Σ_i log P_θ(y_i) = log(p^m q^n) = m log p + n log q
under the constraint p + q = 1.
Lagrange method: define g(p, q) = m log p + n log q + λ(p + q − 1)
and solve the equations ∂g/∂p = m/p + λ = 0, ∂g/∂q = n/q + λ = 0, and p + q = 1,
which give p = m/(m+n), q = n/(m+n).
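A small numeric sanity check of the closed form p = m/(m+n), using hypothetical counts m = 7, n = 3 (these numbers are an assumption for illustration, not from the slides):

```python
import math

m, n = 7, 3  # hypothetical counts of H's and T's

# Closed form from the Lagrange conditions: p = m/(m+n).
p_mle = m / (m + n)

def log_lik(p):
    """Log likelihood m*log(p) + n*log(1-p)."""
    return m * math.log(p) + n * math.log(1 - p)

# Check: no p on a fine grid beats the closed-form estimate.
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=log_lik)
print(p_mle, best)  # 0.7 0.7
```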

10
Example
Suppose we have two coins. Coin 1 is fair. Coin 2 generates H with probability p. Each coin has probability ½ of being chosen and tossed.
The complete data is (1, H), (1, T), (2, T), (1, H), (2, T).
We only know the result of each toss, but not which coin was chosen. The observed data is H, T, T, H, T.
Problem: suppose the observations include m H's and n T's. How do we estimate p to maximize Σ_i log P_θ(y_i)?
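For this particular mixture the marginal likelihood can still be maximized in closed form: P_θ(H) = ½·½ + ½·p = (1+2p)/4, so the log likelihood is m log((1+2p)/4) + n log((3−2p)/4), and setting its derivative to zero gives p = (3m − n)/(2(m + n)), clipped to [0, 1]. A sketch that checks this against a grid search (this derivation is filled in here; the slide only poses the problem):

```python
import math

# Observed tosses from the slide: H, T, T, H, T
m, n = 2, 3  # counts of H and T

def log_lik(p):
    """Marginal log likelihood: P(H) = (1+2p)/4, P(T) = (3-2p)/4."""
    return m * math.log((1 + 2 * p) / 4) + n * math.log((3 - 2 * p) / 4)

# Closed form, clipped to [0, 1] since p is a probability.
p_closed = min(1.0, max(0.0, (3 * m - n) / (2 * (m + n))))

grid = [i / 1000 for i in range(1001)]
best = max(grid, key=log_lik)
print(p_closed, best)  # 0.3 0.3
```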

11
Need for an Iterative Algorithm
Unfortunately, we often cannot find the best θ by solving equations.
Example: three coins, 0, 1, and 2, generating H with probabilities p_0, p_1, and p_2 respectively.
Experiment: toss coin 0; if H, toss coin 1 three times; if T, toss coin 2 three times.
Observations: sequences of three tosses (only the three tosses are observed, not the outcome of coin 0).
What is the MLE for p_0, p_1, and p_2?

12
Overview of EM
Create an initial model θ_0: arbitrarily, randomly, or from a small set of labeled training examples.
Use the current model θ' to obtain another model θ such that Σ_i log P_θ(y_i) ≥ Σ_i log P_θ'(y_i).
Repeat the above step until reaching a local maximum.
The algorithm is guaranteed to find a model at least as good after each iteration.
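Instantiated on the three-coin example from the previous slide, this loop might look like the following sketch (the observation sequences here are hypothetical, since the slide's own sequences were not preserved; the function name is illustrative):

```python
def em_three_coins(sequences, p0, p1, p2, iters=100):
    """EM for the three-coin model: coin 0 (bias p0) picks whether
    coin 1 or coin 2 generates each observed triple of tosses."""
    for _ in range(iters):
        # E-step: posterior probability that coin 1 produced each sequence.
        stats = []  # (posterior of coin 1, number of heads)
        for s in sequences:
            h = s.count("H")
            w1 = p0 * p1**h * (1 - p1)**(3 - h)
            w2 = (1 - p0) * p2**h * (1 - p2)**(3 - h)
            stats.append((w1 / (w1 + w2), h))
        # M-step: re-estimate parameters from expected (pseudo) counts.
        n = len(sequences)
        c1 = sum(g for g, _ in stats)            # expected uses of coin 1
        h1 = sum(g * h for g, h in stats)        # expected heads from coin 1
        h2 = sum((1 - g) * h for g, h in stats)  # expected heads from coin 2
        p0 = c1 / n
        p1 = h1 / (3 * c1)
        p2 = h2 / (3 * (n - c1))
    return p0, p1, p2

# Hypothetical observations and initial model:
obs = ["HHH", "TTT", "HHH", "TTT", "HHT"]
print(em_three_coins(obs, 0.6, 0.7, 0.3))
```

Each iteration re-estimates θ from expectations computed under the previous θ', so by the EM Theorem proved below the observed-data likelihood never decreases.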

13
Maximizing Likelihood
How do we find a better model θ given a model θ'?
Can we use the Lagrange method to directly maximize Σ_i log P_θ(y_i)? If this could be done, there would be no need to iterate!

14
EM Theorem
EM Theorem: if
Σ_i Σ_t P_θ'(t | y_i) log P_θ(t, y_i) ≥ Σ_i Σ_t P_θ'(t | y_i) log P_θ'(t, y_i)
then
Σ_i log P_θ(y_i) ≥ Σ_i log P_θ'(y_i)
where Σ_t is a summation over all possible values of the unobserved data.
This theorem is similar to (but is not identical to, nor does it follow from) the EM Theorem in [Jelinek 1997, p. 148]; the proof is almost identical.

15
What Does the EM Theorem Mean?
If we can find a θ that maximizes
Σ_i Σ_t P_θ'(t | y_i) log P_θ(t, y_i),
the same θ will also satisfy the condition
Σ_i log P_θ(y_i) ≥ Σ_i log P_θ'(y_i),
which is what is needed in the EM algorithm.
We can maximize the former by taking its partial derivatives with respect to the parameters in θ.

16
EM Theorem: Why?
Why is optimizing Σ_i Σ_t P_θ'(t | y_i) log P_θ(t, y_i) easier than optimizing Σ_i log P_θ(y_i)?
P_θ(t, y_i) involves the complete data and is usually a product of a set of parameters, so its log is a sum that is easy to differentiate.
P_θ(y_i) usually involves a summation over all hidden variables, and that summation sits inside the log.

17
EM Theorem: Proof
Σ_i log P_θ'(y_i) − Σ_i log P_θ(y_i)
= Σ_i Σ_t P_θ'(t | y_i) [log P_θ'(y_i) − log P_θ(y_i)]    (since Σ_t P_θ'(t | y_i) = 1)
= Σ_i Σ_t P_θ'(t | y_i) [log P_θ'(t, y_i) − log P_θ(t, y_i)] + Σ_i Σ_t P_θ'(t | y_i) log [P_θ(t | y_i) / P_θ'(t | y_i)]
≤ Σ_i Σ_t P_θ'(t | y_i) [log P_θ'(t, y_i) − log P_θ(t, y_i)]    (the second term is ≤ 0 by Jensen's inequality)
So if Σ_i Σ_t P_θ'(t | y_i) log P_θ(t, y_i) ≥ Σ_i Σ_t P_θ'(t | y_i) log P_θ'(t, y_i), the right-hand side is ≤ 0, and therefore Σ_i log P_θ(y_i) ≥ Σ_i log P_θ'(y_i).

18
Jensen's Inequality
The proof used the inequality Σ_t q(t) log [p(t)/q(t)] ≤ 0, where p and q are probability distributions.
More generally, if p and q are probability distributions, Σ_t q(t) log q(t) ≥ Σ_t q(t) log p(t) (the Kullback-Leibler divergence is non-negative).
Even more generally, if f is a convex function, E[f(x)] ≥ f(E[x]).

19
What is Σ_t P_θ'(t | y_i) log P_θ(t, y_i)?
It is the expected value of log P_θ(t, y_i), with t distributed according to the old model θ' (conditioned on y_i).
The EM Theorem states that we can get a better model by maximizing the sum (over all instances) of this expectation.

20
A Generic Set-Up for EM
Assume P_θ(t, y) is a product of a set of parameters.
Assume θ consists of M groups of parameters, where the parameters in each group sum to 1.
Let u_jk be a parameter: Σ_m u_jm = 1.
Let T_jk be the subset of hidden-data values such that if t is in T_jk, the computation of P_θ(t, y_i) involves u_jk.
Let n(t, y_i) be the number of times u_jk is used in P_θ(t, y_i), i.e., P_θ(t, y_i) = u_jk^n(t, y_i) · v(t, y_i), where v(t, y_i) is the product of all the other parameters.

21
Maximizing the expectation under the constraints Σ_m u_jm = 1 (again by the Lagrange method) gives the update
u_jk = c(jk) / Σ_m c(jm), where c(jk) = Σ_i Σ_{t ∈ T_jk} P_θ'(t | y_i) n(t, y_i)
is the pseudo count of instances involving u_jk.
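The resulting M-step can be sketched as code: each parameter group is re-estimated by normalizing its pseudo counts. The group name and count values below are hypothetical, for illustration only:

```python
def m_step(pseudo_counts):
    """Generic M-step: each parameter group {u_j1, ..., u_jM} sums to 1,
    so u_jk = c(jk) / sum_m c(jm), where c(jk) is the expected (pseudo)
    count of uses of u_jk under the old model theta'."""
    new_theta = {}
    for group, counts in pseudo_counts.items():
        total = sum(counts.values())
        new_theta[group] = {k: c / total for k, c in counts.items()}
    return new_theta

# Hypothetical pseudo counts for one emission group of an HMM-like model:
pc = {"emit|NN": {"board": 1.5, "director": 2.5}}
print(m_step(pc))  # {'emit|NN': {'board': 0.375, 'director': 0.625}}
```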

22
Summary
The EM Theorem: intuition, proof, and a generic set-up.
