
1 Markov Models

2 Markov Chain A sequence of states: X_1, X_2, X_3, …, usually over time. The transition from X_{t-1} to X_t depends only on X_{t-1} (the Markov property). A Bayesian network that forms a chain. The transition probabilities are the same for any t (a stationary process). [Diagram: chain X_1 → X_2 → X_3 → X_4]

3 Example: Gambler's Ruin Specification: the gambler has 3 dollars; he wins a dollar with probability 1/3 and loses a dollar with probability 2/3. Failure: no dollars left. Success: having 5 dollars. States: the amount of money, 0, 1, 2, 3, 4, 5. Transition probabilities: [diagram of the chain; see slide 11]. Courtesy of Michael Littman
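A quick way to make this concrete is to simulate the chain. Below is a minimal Python sketch (the function name and trial count are my own choices, not from the slides) that estimates the gambler's probability of success and the average number of bets by Monte Carlo:

```python
import random

def gamblers_ruin(start=3, target=5, p_win=1/3):
    """Simulate one gambler's-ruin episode; return (succeeded, num_bets)."""
    money, bets = start, 0
    while 0 < money < target:
        money += 1 if random.random() < p_win else -1
        bets += 1
    return money == target, bets

# Estimate P(success) and the average number of bets from 100,000 episodes.
trials = 100_000
results = [gamblers_ruin() for _ in range(trials)]
print("P(success) ~", sum(ok for ok, _ in results) / trials)
print("avg bets   ~", sum(n for _, n in results) / trials)
```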

4 Example: Bigram Language Modeling States: words. Transition probabilities: P(w_t | w_{t-1}), estimated from corpus counts.
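As a sketch of how such a model is estimated in practice, the snippet below computes bigram transition probabilities by relative frequency from a toy corpus (the corpus and function names are illustrative, not from the slides):

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()  # toy corpus (hypothetical)

bigrams = Counter(zip(corpus, corpus[1:]))  # counts of (w_{t-1}, w_t) pairs
unigrams = Counter(corpus[:-1])             # counts of w_{t-1}

# P(w_t | w_{t-1}) = count(w_{t-1}, w_t) / count(w_{t-1})
def bigram_prob(prev, word):
    return bigrams[(prev, word)] / unigrams[prev]

print(bigram_prob("the", "cat"))  # 2/3: "the" is followed by "cat" twice, "mat" once
```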

5 Transition Probabilities Suppose a state has N possible values: X_t = s_1, X_t = s_2, …, X_t = s_N. There are N² transition probabilities P(X_t = s_i | X_{t-1} = s_j), 1 ≤ i, j ≤ N. The transition probabilities can be represented as an N×N matrix or a directed graph. Example: Gambler's Ruin.

6 What Can Markov Chains Do? Example: Gambler's Ruin. The probability of a particular sequence, e.g. 3, 4, 3, 2, 3, 2, 1, 0. The probability of success for the gambler. The average number of bets the gambler will make.
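The first question can be answered directly from the N×N matrix representation described on slide 5: the probability of a sequence is the product of its step probabilities. A sketch in Python/NumPy:

```python
import numpy as np

# Gambler's-ruin transition matrix: P[i, j] = P(next state j | current state i);
# states are dollar amounts 0..5, with 0 and 5 absorbing.
P = np.zeros((6, 6))
P[0, 0] = P[5, 5] = 1.0
for s in range(1, 5):
    P[s, s + 1] = 1/3   # win a dollar
    P[s, s - 1] = 2/3   # lose a dollar

seq = [3, 4, 3, 2, 3, 2, 1, 0]
prob = np.prod([P[a, b] for a, b in zip(seq, seq[1:])])
print(prob)  # 2 wins, 5 losses: (1/3)^2 * (2/3)^5 = 32/2187 ~ 0.0146
```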

7 Example: Academic Life States and the income received in each: A. Assistant Prof.: 20; B. Associate Prof.: 60; T. Tenured Prof.: 90; S. Out on the Street: 10; D. Dead: 0. [Diagram: transition graph over A, B, T, S, D with edge probabilities 1.0, 0.6, 0.2, 0.8, 0.2, 0.6, 0.2, 0.7, 0.3.] What is the expected lifetime income of an academic? Courtesy of Michael Littman

8 Solving for Total Reward L(i) is the expected total reward received starting in state i. How could we compute L(A)? Would it help to compute L(B), L(T), L(S), and L(D) as well?

9 Solving the Academic Life The expected income at state D is 0. For T: L(T) = 90 + 0.7×90 + 0.7²×90 + … Equivalently, L(T) = 90 + 0.7·L(T), so L(T) = 300. [Diagram: T (Tenured Prof., 90) → T with 0.7, T → D (Dead, 0) with 0.3.]
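The same reasoning extends to all states at once: L = r + P·L on the non-absorbing states, which is a linear system. A sketch in NumPy, restricted to the T/D fragment shown on this slide:

```python
import numpy as np

# States: T (tenured, reward 90) and D (dead, reward 0), from the slide:
# T -> T with 0.7, T -> D with 0.3; D is absorbing with no reward.
P = np.array([[0.7, 0.3],
              [0.0, 1.0]])
r = np.array([90.0, 0.0])

# L = r + P L on the transient part; solve (I - P_TT) L_T = r_T.
# Here only T is transient, so (1 - 0.7) L(T) = 90.
L_T = np.linalg.solve(np.eye(1) - P[:1, :1], r[:1])
print(L_T)  # [300.]
```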

10 Working Backwards [Diagram: the academic-life graph again, annotated with the computed values L(D) = 0, L(T) = 300, L(S) = 50, L(B) = 325, L(A) = 287.5.] Another question: what is the life expectancy of professors?

11 Ruin Chain [Diagram: the gambler's-ruin chain over states 0–5; each interior state moves up with probability 1/3 and down with probability 2/3; states 0 and 5 are absorbing (self-loop probability 1); a reward of +1 is attached to reaching state 5.]

12 Gambling Time Chain [Diagram: the same chain, but with a +1 reward on each bet, so the expected total reward is the expected number of bets.]

13 Google's Search Engine Assumption: a link from page A to page B is a recommendation of page B by the author of A (we say B is a successor of A) ⇒ the quality of a page is related to its in-degree. Recursion: the quality of a page is related to its in-degree, and to the quality of the pages linking to it ⇒ PageRank [Brin and Page '98].

14 Definition of PageRank Consider the following infinite random walk (surf): initially the surfer is at a random page; at each step, the surfer proceeds to a randomly chosen web page with probability d, or to a randomly chosen successor of the current page with probability 1−d. The PageRank of a page p is the fraction of steps the surfer spends at p in the limit.
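This definition translates directly into a simulation. A sketch in Python, using a made-up three-page graph and an illustrative jump probability d:

```python
import random
from collections import Counter

# Hypothetical tiny web graph: page -> list of successors.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
pages = list(links)
d = 0.15  # probability of a random jump (illustrative value)

page = random.choice(pages)
visits = Counter()
for _ in range(200_000):
    visits[page] += 1
    if random.random() < d or not links[page]:
        page = random.choice(pages)        # random jump
    else:
        page = random.choice(links[page])  # follow a random out-link

total = sum(visits.values())
print({p: round(visits[p] / total, 3) for p in pages})  # ~ PageRank of each page
```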

15 Random Web Surfer What’s the probability of a page being visited?

16 Stationary Distributions Let S be the set of states in a Markov chain and P its transition probability matrix. The initial state is chosen according to some probability distribution q^(0) over S. Let q^(t) be the row vector whose i-th component is the probability that the chain is in state i at time t. Then q^(t+1) = q^(t) P, so q^(t) = q^(0) P^t. A stationary distribution is a probability distribution q such that q = qP (steady-state behavior).
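The relation q^(t+1) = q^(t) P suggests a direct way to find a stationary distribution: iterate until the vector stops changing (power iteration). A sketch in NumPy, with an illustrative two-state chain:

```python
import numpy as np

# Any row-stochastic matrix works here; this one is illustrative.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

q = np.array([1.0, 0.0])   # q(0): start in state 0
for _ in range(1000):      # q(t+1) = q(t) P
    q_next = q @ P
    if np.allclose(q_next, q):
        break
    q = q_next
print(q)  # stationary distribution satisfying q = q P -> [5/6, 1/6]
```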

17 Markov Chains Theorem: under certain conditions, there exists a unique stationary distribution q with q_i > 0 for all i. Let N(i,t) be the number of times the Markov chain visits state i in t steps. Then lim_{t→∞} N(i,t)/t = q_i.

18 PageRank PageRank = the stationary probability for this Markov chain, i.e. PageRank(p) = d/n + (1−d) · Σ_{q: q→p} PageRank(q)/outdegree(q), where n is the total number of nodes in the graph and d is the probability of making a random jump. Query-independent. Summarizes the "web opinion" of the page's importance.

19 PageRank Example: page P has in-links from A (which has 4 out-links) and B (which has 3). The PageRank of P is (1−d) · (1/4 of the PageRank of A + 1/3 of the PageRank of B) + d/n.
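As a sketch, the general formula from slide 18 can be computed by repeated updates on a small graph (the graph, damping value, and variable names below are illustrative):

```python
import numpy as np

# Toy graph as successor lists (hypothetical): index = page id.
succ = [[1, 2], [2], [0]]
n, d = len(succ), 0.15

pr = np.full(n, 1.0 / n)
for _ in range(100):
    nxt = np.full(n, d / n)                   # random-jump term, d/n
    for q, outs in enumerate(succ):
        for p in outs:                        # q links to p
            nxt[p] += (1 - d) * pr[q] / len(outs)
    pr = nxt
print(pr, pr.sum())  # PageRank values; they sum to 1
```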

20 Kth-Order Markov Chain What we have discussed so far is the first-order Markov chain. More generally, in a kth-order Markov chain, each state transition depends on the previous k states. What's the size of the transition probability matrix? [Diagram: chain of states X_1 … X_4]
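A worked count, to answer the question above: each transition now conditions on the k previous states, so there are N^k possible histories, each with N possible next values, giving an N^k × N table with N^{k+1} entries in total. For example, a 2nd-order chain over N = 10 values already needs 10^3 = 1000 transition probabilities.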

21 Hidden Markov Model In some Markov processes, we may not be able to observe the states directly.

22 Hidden Markov Model An HMM is a quintuple (S, E, π, A, B): S = {s_1 … s_N} are the values for the hidden states; E = {e_1 … e_T} are the values for the observations; π is the probability distribution of the initial state; A is the transition probability matrix; B is the emission probability matrix. [Diagram: Bayes net X_1 → … → X_{t-1} → X_t → X_{t+1} → … → X_T, with each X_t emitting e_t.]

23 Alternative Specification If we define a special initial state which does not emit anything, the probability distribution π becomes part of the transition probability matrix.
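A minimal sketch of this construction in NumPy (the numbers are illustrative): add a start state s_0 whose outgoing row is π and which no state can return to.

```python
import numpy as np

pi = np.array([0.6, 0.4])    # initial distribution over s1, s2 (illustrative)
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])   # transition matrix

# Add a special start state s0 that never emits: its outgoing row is pi,
# and no state transitions back into it (first column stays 0).
A_aug = np.zeros((3, 3))
A_aug[0, 1:] = pi            # s0 -> s1, s2 with probabilities pi
A_aug[1:, 1:] = A            # original transitions unchanged
print(A_aug)                 # every row still sums to 1
```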

24 Notations X_t: a random variable denoting the state at time t. x_t: a particular value of X_t, e.g. X_t = s_i. e_{1:t}: an observation sequence from time 1 to t. x_{1:t}: a state sequence from time 1 to t.

25 Forward Probability Forward probability: P(X_t = s_i, e_{1:t}). Why compute the forward probability? Probability of the observations: P(e_{1:t}). Prediction: P(X_{t+1} = s_i | e_{1:t}) = ?

26 Compute Forward Probability
P(X_t = s_i, e_{1:t}) = P(X_t = s_i, e_{1:t-1}, e_t)
= Σ_j P(X_{t-1} = s_j, X_t = s_i, e_{1:t-1}, e_t)
= Σ_j P(e_t | X_t = s_i, X_{t-1} = s_j, e_{1:t-1}) P(X_t = s_i, X_{t-1} = s_j, e_{1:t-1})
= Σ_j P(e_t | X_t = s_i) P(X_t = s_i | X_{t-1} = s_j, e_{1:t-1}) P(X_{t-1} = s_j, e_{1:t-1})
= Σ_j P(e_t | X_t = s_i) P(X_t = s_i | X_{t-1} = s_j) P(X_{t-1} = s_j, e_{1:t-1})
The last factor has the same form as what we started with: use recursion.

27 Compute Forward Probability (continued)
α_i(t) = P(X_t = s_i, e_{1:t})
= Σ_j P(X_t = s_i | X_{t-1} = s_j) P(e_t | X_t = s_i) α_j(t-1)
= Σ_j a_{ij} b_{i,e_t} α_j(t-1)
where a_{ij} is an entry in the transition matrix and b_{i,e_t} is an entry in the emission matrix.
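The recursion translates directly into code. A sketch in NumPy, where rows of the transition matrix are indexed by the previous state, i.e. A[j, i] = P(X_t = s_i | X_{t-1} = s_j); the example HMM numbers are made up:

```python
import numpy as np

def forward(pi, A, B, obs):
    """alpha[t, i] = P(X_t = s_i, e_{1:t}); obs holds observation indices.
    A[j, i] = P(X_t = s_i | X_{t-1} = s_j); B[i, k] = P(e = k | X = s_i)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                      # base case, t = 1
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]  # the recursion above
    return alpha

# Illustrative two-state HMM (numbers are made up).
pi = np.array([0.5, 0.5])
A = np.array([[0.7, 0.3], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
alpha = forward(pi, A, B, [0, 1, 0])
print("P(e_1:3) =", alpha[-1].sum())                  # probability of the observations
```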

28 Inferences with HMM Decoding: argmax_{x_{1:t}} P(x_{1:t} | e_{1:t}). Given an observation sequence, compute the most likely hidden state sequence. Learning: argmax_θ P_θ(e_{1:t}), where θ = (π, A, B) are the parameters of the HMM. Given an observation sequence, find the transition probability and emission probability tables that assign the highest probability to the observations. Unsupervised learning.

29 Viterbi Algorithm Compute argmax_{x_{1:t}} P(x_{1:t} | e_{1:t}).
Since P(x_{1:t} | e_{1:t}) = P(x_{1:t}, e_{1:t}) / P(e_{1:t}), and P(e_{1:t}) remains constant as we consider different x_{1:t}:
argmax_{x_{1:t}} P(x_{1:t} | e_{1:t}) = argmax_{x_{1:t}} P(x_{1:t}, e_{1:t})
Since the Markov chain is a Bayes net, P(x_{1:t}, e_{1:t}) = P(x_0) Π_{i=1..t} P(x_i | x_{i-1}) P(e_i | x_i).
Minimize −log P(x_{1:t}, e_{1:t}) = −log P(x_0) + Σ_{i=1..t} (−log P(x_i | x_{i-1}) − log P(e_i | x_i)).

30 Viterbi Algorithm Given an HMM (S, E, π, A, B) and observations e_{1:t}, construct a graph that consists of 1 + tN nodes: one initial node, and N nodes at each time step i, where the j-th node at time i represents X_i = s_j. The link between the nodes X_{i-1} = s_j and X_i = s_k is assigned the length −log(P(X_i = s_k | X_{i-1} = s_j) P(e_i | X_i = s_k)).

31 The total length of a path is −log P(x_{1:t}, e_{1:t}), so the problem of finding argmax_{x_{1:t}} P(x_{1:t} | e_{1:t}) becomes that of finding the shortest path from the initial node x_0 to one of the nodes at time t.
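Rather than building the graph explicitly, the same shortest-path computation can be done with dynamic programming over −log probabilities. A sketch in NumPy (illustrative HMM numbers; array conventions as in the forward sketch above):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely state sequence argmax_x P(x_{1:t} | e_{1:t}), found as the
    shortest path in -log space. A[j, i] = P(s_i | s_j), B[i, k] = P(e = k | s_i)."""
    T, N = len(obs), len(pi)
    cost = np.zeros((T, N))             # cost[t, i] = shortest -log path to X_t = s_i
    back = np.zeros((T, N), dtype=int)  # back[t, i] = predecessor on that path
    with np.errstate(divide="ignore"):  # log(0) = -inf is acceptable here
        lp, lA, lB = np.log(pi), np.log(A), np.log(B)
    cost[0] = -(lp + lB[:, obs[0]])
    for t in range(1, T):
        # step[j, i] = cost so far + edge length -log(A[j, i] * B[i, obs[t]])
        step = cost[t - 1][:, None] - lA - lB[:, obs[t]][None, :]
        back[t] = step.argmin(axis=0)
        cost[t] = step.min(axis=0)
    path = [int(cost[-1].argmin())]     # best final node, then walk back
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

pi = np.array([0.5, 0.5])
A = np.array([[0.7, 0.3], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi(pi, A, B, [0, 0, 1, 1]))  # -> [0, 0, 1, 1]
```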

32 Example [Diagram: a worked example of the Viterbi graph construction.]

33 Baum-Welch Algorithm The previous two kinds of computation need the parameters θ = (π, A, B). Where do the probabilities come from? Relative frequency? But the states are not observable! Solution: the Baum-Welch algorithm. Unsupervised learning from observations: find argmax_θ P_θ(e_{1:t}).

34 Baum-Welch Algorithm Start with an initial set of parameters θ_0 (possibly arbitrary). Compute pseudo counts: how many times did the transition from X_{i-1} = s_j to X_i = s_k occur? Use the pseudo counts to obtain another (better) set of parameters θ_1. Iterate until P_{θ_1}(e_{1:t}) is no bigger than P_{θ_0}(e_{1:t}). A special case of EM (Expectation-Maximization).

35 Pseudo Counts Given the observation sequence e_{1:T}: the pseudo count of the state s_i at time t is the probability P(X_t = s_i | e_{1:T}); the pseudo count of the link from X_t = s_i to X_{t+1} = s_j is the probability P(X_t = s_i, X_{t+1} = s_j | e_{1:T}). [Diagram: link from X_t = s_i to X_{t+1} = s_j.]

36 Update HMM Parameters Let count(i) be the total pseudo count of state s_i, and count(i,j) the total pseudo count of transitions from s_i to s_j. For each t: add P(X_t = s_i, X_{t+1} = s_j | e_{1:T}) to count(i,j); add P(X_t = s_i | e_{1:T}) to count(i); add P(X_t = s_i | e_{1:T}) to count(i, e_t). Updated a_{ij} = count(i,j)/count(i); updated b_{i,e} = count(i,e)/count(i).

37 P(X_t = s_i, X_{t+1} = s_j | e_{1:T})
= P(X_t = s_i, X_{t+1} = s_j, e_{1:t}, e_{t+1}, e_{t+2:T}) / P(e_{1:T})
= P(X_t = s_i, e_{1:t}) P(X_{t+1} = s_j | X_t = s_i) P(e_{t+1} | X_{t+1} = s_j) P(e_{t+2:T} | X_{t+1} = s_j) / P(e_{1:T})
= P(X_t = s_i, e_{1:t}) a_{ij} b_{j,e_{t+1}} P(e_{t+2:T} | X_{t+1} = s_j) / P(e_{1:T})
= α_i(t) a_{ij} b_{j,e_{t+1}} β_j(t+1) / P(e_{1:T})

38 Forward Probability α_i(t) = P(X_t = s_i, e_{1:t}), computed by the recursion α_i(t) = P(e_t | X_t = s_i) Σ_j P(X_t = s_i | X_{t-1} = s_j) α_j(t-1), with base case α_i(1) = π_i P(e_1 | X_1 = s_i).

39 Backward Probability β_i(t) = P(e_{t+1:T} | X_t = s_i), computed by the recursion β_i(t) = Σ_j P(X_{t+1} = s_j | X_t = s_i) P(e_{t+1} | X_{t+1} = s_j) β_j(t+1), with base case β_i(T) = 1.

40 [Diagram: timeline t−1, t, t+1, t+2 showing α_i(t) covering the observations up to X_t = s_i, the transition term a_{ij} b_{j,e_{t+1}} to X_{t+1} = s_j, and β_j(t+1) covering the remaining observations.]

41 P(X_t = s_i | e_{1:T})
= P(X_t = s_i, e_{1:t}, e_{t+1:T}) / P(e_{1:T})
= P(e_{t+1:T} | X_t = s_i, e_{1:t}) P(X_t = s_i, e_{1:t}) / P(e_{1:T})
= P(e_{t+1:T} | X_t = s_i) P(X_t = s_i, e_{1:t}) / P(e_{1:T})
= α_i(t) β_i(t) / P(e_{1:T})
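Putting slides 35 to 41 together, one Baum-Welch iteration computes α and β, forms the pseudo counts, and re-estimates the parameters. A sketch in NumPy (conventions and example numbers as in the earlier sketches; this is an illustration, not a robust implementation, e.g. it does no log-space scaling for long sequences):

```python
import numpy as np

def forward(pi, A, B, obs):
    """alpha[t, i] = P(X_t = s_i, e_{1:t}); A[j, i] = P(s_i | s_j)."""
    alpha = np.zeros((len(obs), len(pi)))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    """beta[t, i] = P(e_{t+1:T} | X_t = s_i), from the slide-39 recursion."""
    T, N = len(obs), A.shape[0]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def baum_welch_step(pi, A, B, obs):
    """One EM iteration: pseudo counts from alpha/beta, then re-estimate."""
    obs = np.asarray(obs)
    alpha, beta = forward(pi, A, B, obs), backward(A, B, obs)
    like = alpha[-1].sum()                  # P(e_{1:T})
    gamma = alpha * beta / like             # gamma[t, i] = P(X_t = s_i | e_{1:T})
    # xi[t, i, j] = P(X_t = s_i, X_{t+1} = s_j | e_{1:T}), as on slide 37
    xi = (alpha[:-1, :, None] * A[None] *
          (B[:, obs[1:]].T * beta[1:])[:, None, :]) / like
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]  # count(i,j)/count(i)
    new_B = np.zeros_like(B)
    for t, e in enumerate(obs):
        new_B[:, e] += gamma[t]             # count(i, e_t)
    new_B /= gamma.sum(axis=0)[:, None]     # ... divided by count(i)
    return new_pi, new_A, new_B, like

pi = np.array([0.5, 0.5])
A = np.array([[0.7, 0.3], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
for _ in range(20):
    pi, A, B, like = baum_welch_step(pi, A, B, [0, 0, 1, 0, 1, 1])
print(like)  # the likelihood never decreases across iterations
```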

