1 EM algorithm and applications. Lecture #9. Background readings: Chapters 11.2 and 11.6 in the textbook Biological Sequence Analysis, Durbin et al., 2001.

2 The EM algorithm. This lecture's plan: 1. Presentation and correctness proof of the EM algorithm. 2. Examples of implementations.

3 Model, Parameters, ML. A "model with parameters θ" is a probabilistic space M in which each simple event y is determined by the values of random variables (dice). The parameters θ are the probabilities associated with these random variables. (In an HMM of length L, the simple events are HMM-sequences of length L, and the parameters are the transition probabilities m_kl and the emission probabilities e_k(b).) "Observed data" is a non-empty subset x ⊆ M. (In an HMM, it is usually the set of all simple events that fit a given output sequence.) Given observed data x, the ML method seeks parameters θ* which maximize the likelihood of the data, p(x|θ) = ∑_y p(x,y|θ). (In an HMM, x can be the emitted letters and y the hidden states.) Finding such θ* is easy when the observed data is a single simple event, but hard in general.

4 The EM algorithm. Assume a model with parameters as in the previous slide. Given observed data x, the likelihood of x under model parameters θ is p(x|θ) = ∑_y p(x,y|θ). (The pairs (x,y) are the simple events which comprise x; informally, y denotes the possible values of the "hidden data".) The EM algorithm receives x and parameters θ, and returns new parameters λ* such that p(x|λ*) ≥ p(x|θ), with equality only if λ* = θ. That is, the new parameters increase the likelihood of the observed data.

5 The EM algorithm. EM uses the current parameters θ to construct a simpler ML problem L_θ. Guarantee: if L_θ(λ) > L_θ(θ), then P(x|λ) > P(x|θ). [Figure: the graphs of the log-likelihood functions log P(x|λ) and log L_θ(λ) = E_{y|x,θ}[log P(x,y|λ)] as functions of λ, with θ and λ* marked on the λ-axis.]

6 Derivation of the EM algorithm. Let x be the observed data, and let {(x,y_1),…,(x,y_k)} be the set of (simple) events which comprise x. Our goal is to find parameters θ* which maximize the sum p(x|θ) = ∑_{i=1}^{k} p(x,y_i|θ). As this is hard, we start with some parameters θ and only find λ* such that, if λ* ≠ θ, then p(x|λ*) > p(x|θ). Finding λ* is obtained via "virtual sampling", defined next.

7 For given parameters θ, let p_i = p(y_i|x,θ) (note that p_1 + … + p_k = 1). We use the p_i's to define a "virtual" sampling, in which y_1 occurs p_1 times, y_2 occurs p_2 times, …, y_k occurs p_k times.

8 The EM algorithm. In each iteration the EM algorithm does the following. (E step): Given θ, compute the function L_θ(λ) = ∏_y p(x,y|λ)^{p(y|x,θ)}, the likelihood of the virtual sample. (M step): Find λ* which maximizes L_θ(λ). (The next iteration sets θ ← λ* and repeats.) Comments: 1. At the M step we only need that L_θ(λ*) > L_θ(θ); this change yields the so-called Generalized EM algorithm, which is used when it is hard to find the optimal λ*. 2. Usually, the computations use the function log L_θ(λ) = ∑_y p(y|x,θ) log p(x,y|λ).
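
To make the E/M iteration concrete, here is a minimal Python sketch of one EM step for a model whose hidden completions can be enumerated. The function names (em_step, joint) and the grid-search M step are illustrative assumptions, not part of the lecture; the demo uses the two-coin-toss example that appears later in these slides.

```python
import numpy as np

def em_step(x, ys, joint, param_grid):
    """One EM iteration.  `ys` enumerates the hidden completions of x and
    `joint(x, y, params)` returns p(x, y | params).  The M step is a
    brute-force search over `param_grid` (illustration only; real
    implementations use closed forms or numerical optimizers)."""
    def logL(theta, lam):
        px = sum(joint(x, y, theta) for y in ys)              # p(x | theta)
        # log L_theta(lambda) = sum_i p(y_i | x, theta) * log p(x, y_i | lambda)
        return sum(joint(x, y, theta) / px * np.log(joint(x, y, lam))
                   for y in ys)
    def step(theta):
        return max(param_grid, key=lambda lam: logL(theta, lam))   # M step
    return step

# The lecture's two-coin model: x = (T, *), hidden completions y1, y2.
def joint(x, y, theta_H):
    # For a fully specified toss sequence y, p(x, y | theta) = p(y | theta).
    return np.prod([theta_H if c == 'H' else 1.0 - theta_H for c in y])

ys = [('T', 'H'), ('T', 'T')]
grid = np.linspace(0.001, 0.999, 999)
print(em_step(('T', None), ys, joint, grid)(0.25))   # ~0.125, i.e. 1/8
```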

9 Correctness theorem for the EM algorithm: if L_θ(λ) > L_θ(θ), then p(x|λ) > p(x|θ) (the guarantee stated on slide 5).

10 Correctness proof of EM. Sketch: log p(x|λ) − log p(x|θ) = [log L_θ(λ) − log L_θ(θ)] + D(p(y|x,θ) ‖ p(y|x,λ)), and since the relative entropy term D(·‖·) is always non-negative, L_θ(λ) > L_θ(θ) implies p(x|λ) > p(x|θ).

11 Correctness proof of EM (end).

12 Example: Baum-Welch = EM for HMM. The Baum-Welch algorithm is the EM algorithm for HMMs. E step for HMM: compute L_θ(λ), where log L_θ(λ) = ∑_y p(y|x,θ) log p(x,y|λ), λ are the new parameters {m_kl, e_k(b)}, and y ranges over the hidden state paths. M step for HMM: look for λ which maximizes L_θ(λ).

13 Baum-Welch = EM for HMM (cont.). Here M_kl denotes the expected number of k→l transitions and E_k(b) the expected number of times symbol b is emitted from state k, both computed under the current parameters θ (via the forward-backward algorithm). The M step sets the new parameters proportional to these expected counts: m_kl = M_kl / ∑_{l'} M_{kl'} and e_k(b) = E_k(b) / ∑_{b'} E_k(b').
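
A hedged sketch of the Baum-Welch M step only: it assumes the expected transition counts M_kl and emission counts E_k(b) have already been produced by the E step (forward-backward, not shown here); the array names and numbers are made up for illustration.

```python
import numpy as np

def baum_welch_m_step(M, E):
    """M step of Baum-Welch: normalize expected counts into probabilities.

    M : (K, K) array, M[k, l] = expected number of k -> l transitions
    E : (K, B) array, E[k, b] = expected number of emissions of symbol b in state k
    Returns the new transition matrix m_kl and emission matrix e_k(b).
    """
    m = M / M.sum(axis=1, keepdims=True)   # each row sums to 1 over target states l
    e = E / E.sum(axis=1, keepdims=True)   # each row sums to 1 over symbols b
    return m, e

# Toy expected counts for a 2-state, 2-symbol HMM (made-up numbers).
M = np.array([[3.0, 1.0],
              [2.0, 2.0]])
E = np.array([[4.0, 1.0],
              [1.0, 3.0]])
m, e = baum_welch_m_step(M, E)
print(m)   # [[0.75 0.25], [0.5 0.5]]
print(e)   # [[0.8  0.2 ], [0.25 0.75]]
```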

14 A simple example: EM for 2 coin tosses. Consider the following experiment: a coin has two possible outcomes, H (head) and T (tail), with probabilities θ_H and θ_T = 1 − θ_H. The coin is tossed twice, but only the 1st outcome, T, is seen, so the data is x = (T, *). We wish to apply the EM algorithm to get parameters that increase the likelihood of the data. Let the initial parameters be θ = (θ_H, θ_T) = (¼, ¾).

15 EM for 2 coin tosses (cont.). The "hidden data" which produce x are the sequences y_1 = (T,H) and y_2 = (T,T). Hence the likelihood of x with parameters (θ_H, θ_T) is p(x|θ) = P(x,y_1|θ) + P(x,y_2|θ) = θ_H·θ_T + θ_T². For the initial parameters θ = (¼, ¾) we have: p(x|θ) = ¼·¾ + ¾·¾ = ¾. Note that in this case P(x,y_i|θ) = P(y_i|θ) for i = 1, 2; we can always define y so that (x,y) = y (otherwise we set y' ≡ (x,y) and replace the y's by y''s).

16 EM for 2 coin tosses - E step. Calculate L_θ(λ) = L_θ(λ_H, λ_T). Recall: λ_H, λ_T are the new parameters, which we need to optimize. p(y_1|x,θ) = p(y_1,x|θ)/p(x|θ) = (¾·¼)/(¾) = ¼, and p(y_2|x,θ) = p(y_2,x|θ)/p(x|θ) = (¾·¾)/(¾) = ¾. Thus, in the virtual sampling, y_1 occurs ¼ times and y_2 occurs ¾ times.

17 EM for 2 coin tosses - E step. For a sequence y of coin tosses, let N_H(y) be the number of H's in y and N_T(y) the number of T's in y. Then P(x,y|λ) = λ_H^{N_H(y)} · λ_T^{N_T(y)}. In our example y_1 = (T,H) and y_2 = (T,T), hence: N_H(y_1) = N_T(y_1) = 1, N_H(y_2) = 0, N_T(y_2) = 2.

18 Example: 2 coin tosses - E step. Thus log L_θ(λ) = N_H log λ_H + N_T log λ_T, with N_H = ¼·1 + ¾·0 = ¼ and N_T = ¼·1 + ¾·2 = 7/4. And in general: N_H = ∑_i p(y_i|x,θ) N_H(y_i) and N_T = ∑_i p(y_i|x,θ) N_T(y_i), the expected numbers of H's and T's under the virtual sampling.

19 EM for 2 coin tosses - M step. Find λ* which maximizes L_θ(λ), i.e. maximizes log L_θ(λ) = N_H log λ_H + N_T log λ_T. And as we already saw, this is maximized when λ_H = N_H/(N_H + N_T) = (¼)/2 = 1/8 and λ_T = N_T/(N_H + N_T) = (7/4)/2 = 7/8. [The optimal parameters (0,1) will never be reached by the EM algorithm!]
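
A short Python check that reproduces the numbers of this worked example exactly, using exact fractions; nothing beyond the slides' model (x = (T,*), initial θ = (¼, ¾), closed-form M step for a coin) is assumed.

```python
from fractions import Fraction as F

theta_H, theta_T = F(1, 4), F(3, 4)          # initial parameters
ys = [('T', 'H'), ('T', 'T')]                # hidden completions of x = (T, *)

def prob(y, pH, pT):
    # p(x, y | params) = p(y | params) for a fully specified toss sequence y
    p = F(1)
    for c in y:
        p *= pH if c == 'H' else pT
    return p

# E step: posterior weights p_i = p(y_i | x, theta) and expected counts
px = sum(prob(y, theta_H, theta_T) for y in ys)            # p(x | theta) = 3/4
w = [prob(y, theta_H, theta_T) / px for y in ys]           # [1/4, 3/4]
N_H = sum(wi * y.count('H') for wi, y in zip(w, ys))       # 1/4
N_T = sum(wi * y.count('T') for wi, y in zip(w, ys))       # 7/4

# M step: closed form for a coin
lam_H = N_H / (N_H + N_T)                                   # 1/8
lam_T = N_T / (N_H + N_T)                                   # 7/8

print(px, w, N_H, N_T, lam_H, lam_T)
# The new likelihood exceeds the old one: 1/8 * 7/8 + (7/8)^2 = 7/8 > 3/4.
print(lam_H * lam_T + lam_T ** 2)
```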

20 EM for a single random variable (dice). Now the probability of each y (≡ (x,y)) is given by a sequence of dice tosses. The dice has m outcomes, with probabilities λ_1, …, λ_m. Let N_k(y) = #(times outcome k occurs in y). Then p(y|λ) = ∏_k λ_k^{N_k(y)}. Let N_k be the expected value of N_k(y), given x and θ: N_k = E(N_k|x,θ) = ∑_y p(y|x,θ) N_k(y). Then we have: log L_θ(λ) = ∑_y p(y|x,θ) log p(y|λ) = ∑_k N_k log λ_k.

21 21 L  (λ) for one dice NkNk

22 EM algorithm for n independent observations x_1, …, x_n. Expectation step: log L_θ(λ) = ∑_{j=1}^{n} ∑_{y_j} p(y_j|x_j,θ) log p(x_j,y_j|λ). It can be shown that, if the x_j are independent, then the expected counts simply add up over the observations: N_k = ∑_{j=1}^{n} E(N_k|x_j,θ).
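
A minimal sketch of this additivity for the coin model: several independent two-toss observations, some with the second toss hidden, and the per-observation expected counts summed before the M step. The observation list and helper names are illustrative.

```python
from fractions import Fraction as F

theta_H = F(1, 4)                      # current parameter

# Each observation is a pair of tosses; None marks a hidden outcome.
observations = [('T', None), ('H', 'T'), ('T', None), ('T', 'T')]

def completions(obs):
    # Enumerate the fully specified sequences consistent with obs.
    seqs = [()]
    for c in obs:
        opts = ['H', 'T'] if c is None else [c]
        seqs = [s + (o,) for s in seqs for o in opts]
    return seqs

def prob(y, pH):
    p = F(1)
    for c in y:
        p *= pH if c == 'H' else 1 - pH
    return p

N_H = N_T = F(0)
for obs in observations:
    ys = completions(obs)
    px = sum(prob(y, theta_H) for y in ys)              # p(x_j | theta)
    for y in ys:
        w = prob(y, theta_H) / px                        # p(y | x_j, theta)
        N_H += w * y.count('H')                          # counts add over j
        N_T += w * y.count('T')

lam_H = N_H / (N_H + N_T)                                # M step
print(N_H, N_T, lam_H)
```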

23 Example: the ABO locus. A locus is a particular place on the chromosome. Each locus' state (called a genotype) consists of two alleles, one paternal and one maternal. Some loci (plural of locus) determine distinguished features; the ABO locus, for example, determines blood type. The ABO locus has six possible genotypes {a/a, a/o, b/o, b/b, a/b, o/o}. The first two genotypes determine blood type A, the next two determine blood type B, then blood type AB, and finally blood type O. We wish to estimate the proportions of the 6 genotypes in a population. Suppose we randomly sampled N individuals and found that N_{a/a} have genotype a/a, N_{a/b} have genotype a/b, etc. Then the MLE is given by the observed frequencies: θ̂_{a/a} = N_{a/a}/N, θ̂_{a/b} = N_{a/b}/N, and so on for each genotype.
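
If genotypes could be observed directly, the MLE is just these empirical frequencies; a tiny sketch with made-up counts:

```python
# Made-up genotype counts for N = 1000 sampled individuals.
counts = {'a/a': 40, 'a/o': 160, 'b/b': 10, 'b/o': 90, 'a/b': 50, 'o/o': 650}
N = sum(counts.values())
mle = {g: c / N for g, c in counts.items()}   # MLE = observed frequency of each genotype
print(mle)                                    # e.g. mle['o/o'] == 0.65
```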

24 The ABO locus (cont.). However, testing individuals for their genotype is very expensive. Can we estimate the proportions of the genotypes using the common, cheap blood test, whose outcome is one of the four blood types (A, B, AB, O)? The problem is that among individuals measured to have blood type A, we don't know how many have genotype a/a and how many have genotype a/o. So what can we do?

25 The ABO locus (cont.). The Hardy-Weinberg equilibrium rule states that in equilibrium the frequencies of the three alleles q_a, q_b, q_o in the population determine the frequencies of the genotypes as follows: q_{a/b} = 2·q_a·q_b, q_{a/o} = 2·q_a·q_o, q_{b/o} = 2·q_b·q_o, q_{a/a} = (q_a)², q_{b/b} = (q_b)², q_{o/o} = (q_o)². In fact, the Hardy-Weinberg rule follows from modeling this problem as observed data x with hidden data y:

26 The ABO locus (cont.). The dice's outcomes are the three possible alleles a, b and o. The observed data are the blood types A, B, AB or O. Each blood type is determined by two successive random samplings of alleles, giving an "ordered genotype pair" – this is the hidden data. A = {(a,a), (a,o), (o,a)}; B = {(b,b), (b,o), (o,b)}; AB = {(a,b), (b,a)}; O = {(o,o)}. So we have three parameters of one dice – q_a, q_b, q_o – that we need to estimate. We start with parameters θ = (q_a, q_b, q_o), and then use EM to improve them.

27 EM setting for the ABO locus. The observed data x = (x_1, …, x_n) is a sequence of elements (blood types) from the set {A, B, AB, O}; e.g., (B, A, B, B, O, A, B, A, O, B, AB) are observations (x_1, …, x_11). The hidden data (i.e., the y's) for each x_j is the set of ordered pairs of alleles that generate it; for instance, for A it is the set {aa, ao, oa}. The parameters θ = {q_a, q_b, q_o} are the (current) probabilities of the alleles. The complete implementation of the EM algorithm for this problem will be given in the tutorial.
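
The tutorial implementation is not part of this transcript; the following is only a minimal sketch of the EM iteration for this setting, built from the hidden ordered pairs of slide 26 and the Hardy-Weinberg products of slide 25. The blood-type counts and function names are illustrative assumptions.

```python
import numpy as np

# Hidden data: the ordered allele pairs that generate each blood type (slide 26).
PAIRS = {
    'A':  [('a', 'a'), ('a', 'o'), ('o', 'a')],
    'B':  [('b', 'b'), ('b', 'o'), ('o', 'b')],
    'AB': [('a', 'b'), ('b', 'a')],
    'O':  [('o', 'o')],
}

def em_step(q, counts):
    """One EM iteration for allele frequencies q = {'a': .., 'b': .., 'o': ..},
    given observed counts of blood types, e.g. {'A': 45, 'B': 13, ...}."""
    exp = {'a': 0.0, 'b': 0.0, 'o': 0.0}                     # expected allele counts
    for btype, n in counts.items():
        pairs = PAIRS[btype]
        probs = np.array([q[u] * q[v] for u, v in pairs])    # p(pair | q)
        post = probs / probs.sum()                           # p(pair | blood type, q)
        for (u, v), w in zip(pairs, post):                   # E step: accumulate
            exp[u] += n * w                                  # expected allele counts
            exp[v] += n * w
    total = sum(exp.values())                                # = 2 * #individuals
    return {k: v / total for k, v in exp.items()}            # M step: normalize

counts = {'A': 45, 'B': 13, 'AB': 4, 'O': 38}   # made-up sample of 100 people
q = {'a': 1 / 3, 'b': 1 / 3, 'o': 1 / 3}        # initial parameters
for _ in range(20):
    q = em_step(q, counts)
print(q)    # allele frequencies at a (local) maximum of the likelihood
```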

