CS621: Artificial Intelligence Pushpak Bhattacharyya Computer Science and Engineering Department IIT Bombay Lecture 19: Hidden Markov Models.



1 CS621: Artificial Intelligence Pushpak Bhattacharyya Computer Science and Engineering Department IIT Bombay Lecture 19: Hidden Markov Models

2 Example: Blocks World
STRIPS: a planning system whose rules have a precondition-and-deletion list and an addition list.
Start state: on(B, table), on(A, table), on(C, A), handempty, clear(C), clear(B)
Goal state: on(C, table), on(B, C), on(A, B), handempty, clear(A)
[Figure: robot hand with blocks A, B, C in the start and goal configurations]

3 Rules
R1: pickup(x)
  Precondition & Deletion List: handempty, on(x, table), clear(x)
  Add List: holding(x)
R2: putdown(x)
  Precondition & Deletion List: holding(x)
  Add List: handempty, on(x, table), clear(x)

4 Rules
R3: stack(x, y)
  Precondition & Deletion List: holding(x), clear(y)
  Add List: on(x, y), clear(x), handempty
R4: unstack(x, y)
  Precondition & Deletion List: on(x, y), clear(x), handempty
  Add List: holding(x), clear(y)

5 Plan for the blocks world problem
For the given problem, Start → Goal can be achieved by the following sequence:
1. unstack(C, A)
2. putdown(C)
3. pickup(B)
4. stack(B, C)
5. pickup(A)
6. stack(A, B)
Execution of a plan is achieved through a data structure called a triangular table.
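The plan's execution can be sketched as a small simulation: a state is a set of ground predicates, and applying a rule removes its precondition-and-deletion list and adds its addition list. The predicate-string encoding below is illustrative, not from the lecture.

```python
# Blocks-world plan execution. Each rule follows R1-R4 above and is a pair
# (precondition-and-deletion list, addition list) of predicate strings.

def pickup(x):
    return ({"handempty", f"on({x},table)", f"clear({x})"}, {f"holding({x})"})

def putdown(x):
    return ({f"holding({x})"}, {"handempty", f"on({x},table)", f"clear({x})"})

def stack(x, y):
    return ({f"holding({x})", f"clear({y})"},
            {f"on({x},{y})", f"clear({x})", "handempty"})

def unstack(x, y):
    return ({f"on({x},{y})", f"clear({x})", "handempty"},
            {f"holding({x})", f"clear({y})"})

def apply_rule(state, rule):
    pre_del, add = rule
    assert pre_del <= state, "precondition not satisfied"
    return (state - pre_del) | add   # delete, then add

start = {"on(B,table)", "on(A,table)", "on(C,A)", "handempty",
         "clear(C)", "clear(B)"}
goal = {"on(C,table)", "on(B,C)", "on(A,B)", "handempty", "clear(A)"}

plan = [unstack("C", "A"), putdown("C"), pickup("B"),
        stack("B", "C"), pickup("A"), stack("A", "B")]

state = start
for rule in plan:
    state = apply_rule(state, rule)
```

Running the six-step plan transforms the start state exactly into the goal state, and the assertion in apply_rule confirms each rule is applicable when it is used.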

6 Why Probability? (discussion based on the book “Automated Planning” by Dana Nau)

7 Motivation
In many situations, actions may have more than one possible outcome:
- Action failures, e.g., a gripper drops its load
- Exogenous events, e.g., a road is closed
We would like to be able to plan in such situations. One approach: Markov Decision Processes.
[Figure: grasping block c may produce the intended outcome (c held above a, b) or an unintended one (c dropped onto a, b)]

8 Stochastic Systems
A stochastic system is a triple Σ = (S, A, P):
- S = finite set of states
- A = finite set of actions
- P_a(s′ | s) = probability of going to state s′ if we execute action a in state s
- Σ_{s′ ∈ S} P_a(s′ | s) = 1
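As a minimal sketch, P can be stored as nested maps and checked against the sum-to-one condition above; the action and state names here are hypothetical, not taken from the lecture's diagram.

```python
# P[a][s] maps each successor state s2 to P_a(s2 | s).
# Hypothetical action/state names, just to illustrate the triple (S, A, P).
P = {
    "move": {
        "s1": {"s1": 0.2, "s2": 0.8},
        "s2": {"s4": 1.0},
    },
}

def is_stochastic(P, tol=1e-9):
    # every conditional distribution P_a(. | s) must sum to 1
    return all(abs(sum(dist.values()) - 1.0) <= tol
               for by_state in P.values()
               for dist in by_state.values())
```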

9 Example
Robot r1 starts at location l1 (state s1 in the diagram). The objective is to get r1 to location l4 (state s4 in the diagram).
[Figure: state diagram with Start and Goal states marked]

10 Example (contd.)
No classical plan (a sequence of actions) can be a solution, because we can't guarantee we'll be in a state where the next action is applicable, e.g.,
π = ⟨move(r1,l1,l2), move(r1,l2,l3), move(r1,l3,l4)⟩
[Figure: the same state diagram with Start and Goal states marked]

11 Another Example
A colored-ball-choosing example:
Urn 1: 30 Red, 50 Green, 20 Blue
Urn 2: 10 Red, 40 Green, 50 Blue
Urn 3: 60 Red, 10 Green, 30 Blue
Probability of transition to another urn after picking a ball:
       U1    U2    U3
U1    0.1   0.4   0.5
U2    0.6   0.2   0.2
U3    0.3   0.4   0.3
(The third entry of the U2 row is cut off on the slide; 0.2 makes the row sum to one.)

12 Example (contd.)
Given the transition probabilities:
       U1    U2    U3
U1    0.1   0.4   0.5
U2    0.6   0.2   0.2
U3    0.3   0.4   0.3
and the emission probabilities:
        R     G     B
U1    0.3   0.5   0.2
U2    0.1   0.4   0.5
U3    0.6   0.1   0.3
Observation: RRGGBRGR
State sequence: ?? Not so easily computable.

13 Example (contd.)
Here:
- S = {U1, U2, U3}
- V = {R, G, B}
- Observation sequence: O = {o_1 … o_n}
- State sequence: Q = {q_1 … q_n}
- A = the transition table, B = the emission table, and π is the initial state distribution.
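The urn model's parameters can be written down directly. Since π is not given on the slide, a uniform initial distribution is assumed below; that assumption is marked in the code.

```python
# Urn HMM parameters (rows: U1, U2, U3; emission columns: R, G, B).
A = [[0.1, 0.4, 0.5],   # transitions out of U1
     [0.6, 0.2, 0.2],   # transitions out of U2
     [0.3, 0.4, 0.3]]   # transitions out of U3
B = [[0.3, 0.5, 0.2],   # emissions from U1
     [0.1, 0.4, 0.5],   # emissions from U2
     [0.6, 0.1, 0.3]]   # emissions from U3
pi = [1/3, 1/3, 1/3]    # assumed uniform; the slide does not give pi

# sanity check: every row is a probability distribution
rows_ok = all(abs(sum(row) - 1.0) < 1e-9 for row in A + B + [pi])
```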

14 Hidden Markov Models

15 Model Definition
- Set of states: S, where |S| = N
- Output alphabet: V
- Transition probabilities: A = {a_ij}
- Emission probabilities: B = {b_j(o_k)}
- Initial state probabilities: π

16 Markov Processes
Properties:
- Limited horizon: given the previous n states, the state at time t is independent of all earlier states:
  P(X_t = i | X_{t-1}, X_{t-2}, …, X_0) = P(X_t = i | X_{t-1}, X_{t-2}, …, X_{t-n})
- Time invariance (stationarity):
  P(X_t = i | X_{t-1} = j) = P(X_1 = i | X_0 = j) = … = P(X_{n+1} = i | X_n = j)

17 Three Basic Problems of HMM
1. Given an observation sequence O = {o_1 … o_T}, efficiently estimate P(O | λ).
2. Given an observation sequence O = {o_1 … o_T}, find the best state sequence Q = {q_1 … q_T}, i.e., maximize P(Q | O, λ).
3. Adjust the model parameters to maximize P(O | λ), i.e., re-estimate λ.

18 Three basic problems (contd.)
Problem 1: likelihood of a sequence
- Forward procedure
- Backward procedure
Problem 2: best state sequence
- Viterbi algorithm
Problem 3: re-estimation
- Baum-Welch (forward-backward) algorithm

19 Problem 2
Given an observation sequence O = {o_1 … o_T}, find the "best" state sequence Q = {q_1 … q_T}, i.e., maximize P(Q | O, λ).
Solution:
1. The state individually most likely at each position i, or
2. The best state sequence given all the observations → Viterbi algorithm

20 Example
Observed output: aabb. What state sequence is most probable? Since the state sequence cannot be predicted with certainty, the machine is given the qualification "hidden".
Note: Σ P(outgoing links) = 1 for every state.

21 Probabilities for different possible sequences
1
1,1: 0.4          1,2: 0.15
1,1,1: 0.16       1,1,2: 0.06       1,2,1: 0.0375     1,2,2: 0.0225
1,1,1,1: 0.016    1,1,1,2: 0.056    1,1,2,1: 0.018    1,1,2,2: 0.018
…and so on

22 Viterbi for higher-order HMM
If P(s_i | s_{i-1}, s_{i-2}) (an order-2 HMM), then the Markovian assumption takes effect only after two levels (generalizing for order n: after n levels).

23 Forward and Backward Probability Calculation

24 A Simple HMM
[Figure: a two-state HMM with states q and r; each arc is labeled with a symbol:probability pair — a: 0.3, b: 0.1, a: 0.2, b: 0.1, b: 0.2, b: 0.5, a: 0.2, a: 0.4]

25 Forward or α-probabilities
Let α_i(t) be the probability of producing w_{1,t-1} while ending up in state s_i:
α_i(t) = P(w_{1,t-1}, S_t = s_i), t > 1

26 Initial condition on α_i(t)
α_i(1) = 1.0 if i = 1, 0 otherwise

27 Probability of the observation using α_i(t)
P(w_{1,n}) = Σ_{i=1}^{σ} P(w_{1,n}, S_{n+1} = s_i) = Σ_{i=1}^{σ} α_i(n+1)
where σ is the total number of states.

28 Recursive expression for α
α_j(t+1) = P(w_{1,t}, S_{t+1} = s_j)
= Σ_{i=1}^{σ} P(w_{1,t}, S_t = s_i, S_{t+1} = s_j)
= Σ_{i=1}^{σ} P(w_{1,t-1}, S_t = s_i) P(w_t, S_{t+1} = s_j | w_{1,t-1}, S_t = s_i)
= Σ_{i=1}^{σ} P(w_{1,t-1}, S_t = s_i) P(w_t, S_{t+1} = s_j | S_t = s_i)
= Σ_{i=1}^{σ} α_i(t) P(w_t, S_{t+1} = s_j | S_t = s_i)

29 The forward probabilities of "bbba"
Time tick    1      2      3      4       5
Input        ε      b      bb     bbb     bbba
α_q          1.0    0.2    0.05   0.017   0.0148
α_r          0.0    0.1    0.07   0.04    0.0131
P(w,t)       1.0    0.3    0.12   0.057   0.0279
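The α recursion can be sketched in code. The slides define α with emissions on transitions; the sketch below uses the equivalent state-emission convention of the model definition (A, B, π) and applies it to the urn model, with π assumed uniform (it is not given on the slides).

```python
# Forward procedure (state-emission convention):
# alpha[t][j] = P(o_1 .. o_{t+1}, q = j) for the (t+1)-th observation.
def forward(A, B, pi, obs):
    N = len(pi)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]   # initialization
    for t in range(1, len(obs)):                          # induction
        alpha.append([sum(alpha[t - 1][i] * A[i][j] for i in range(N))
                      * B[j][obs[t]] for j in range(N)])
    return alpha

# Urn model from slide 12 (pi assumed uniform).
A = [[0.1, 0.4, 0.5], [0.6, 0.2, 0.2], [0.3, 0.4, 0.3]]
B = [[0.3, 0.5, 0.2], [0.1, 0.4, 0.5], [0.6, 0.1, 0.3]]
pi = [1/3, 1/3, 1/3]

# P(O) is the sum of the final alpha values; observation RG (0 = R, 1 = G).
p_rg = sum(forward(A, B, pi, [0, 1])[-1])
```

A useful sanity check is that the probabilities of all possible observation sequences of a fixed length sum to one.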

30 Backward or β-probabilities
Let β_i(t) be the probability of seeing w_{t,n}, given that the state of the HMM at time t is s_i:
β_i(t) = P(w_{t,n} | S_t = s_i)

31 Probability of the observation using β
P(w_{1,n}) = β_1(1)

32 Recursive expression for β
β_i(t-1) = P(w_{t-1,n} | S_{t-1} = s_i)
= Σ_{j=1}^{σ} P(w_{t-1,n}, S_t = s_j | S_{t-1} = s_i)
= Σ_{j=1}^{σ} P(w_{t-1}, S_t = s_j | S_{t-1} = s_i) P(w_{t,n} | w_{t-1}, S_t = s_j, S_{t-1} = s_i)
= Σ_{j=1}^{σ} P(w_{t-1}, S_t = s_j | S_{t-1} = s_i) P(w_{t,n} | S_t = s_j)  (consequence of the Markov assumption)
= Σ_{j=1}^{σ} P(w_{t-1}, S_t = s_j | S_{t-1} = s_i) β_j(t)
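A matching sketch of the backward recursion, in the same state-emission convention on the urn model (π again assumed uniform). The observation probability computed from β must agree with the forward value.

```python
# Backward procedure: beta[t][i] = P(o_{t+1} .. o_T | state i at step t).
def backward(A, B, obs):
    N = len(A)
    T = len(obs)
    beta = [[1.0] * N for _ in range(T)]          # base case: beta_T(i) = 1
    for t in range(T - 2, -1, -1):                # recurse right to left
        beta[t] = [sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                       for j in range(N)) for i in range(N)]
    return beta

A = [[0.1, 0.4, 0.5], [0.6, 0.2, 0.2], [0.3, 0.4, 0.3]]
B = [[0.3, 0.5, 0.2], [0.1, 0.4, 0.5], [0.6, 0.1, 0.3]]
pi = [1/3, 1/3, 1/3]   # assumed uniform; not given on the slides

obs = [0, 1]           # R, G
beta = backward(A, B, obs)
# P(O) folds in the initial distribution and the first emission:
p_rg = sum(pi[i] * B[i][obs[0]] * beta[0][i] for i in range(3))
```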

33 Problem 1 of the three basic problems
Evaluate P(O | λ). Computed directly:
P(O | λ) = Σ_Q P(O | Q, λ) P(Q | λ), summing over all N^T possible state sequences Q.

34 Problem 1 (contd.)
The direct computation is of order 2T·N^T — definitely not efficient! Is there a method to tackle this problem? Yes: the forward or backward procedure.

35 Forward Procedure
Forward step:
α_1(j) = π_j b_j(o_1), 1 ≤ j ≤ N
α_{t+1}(j) = [Σ_{i=1}^{N} α_t(i) a_ij] b_j(o_{t+1}), 1 ≤ t ≤ T-1

36 Forward Procedure
Termination:
P(O | λ) = Σ_{i=1}^{N} α_T(i)

37 Backward Procedure
β_T(i) = 1, 1 ≤ i ≤ N
β_t(i) = Σ_{j=1}^{N} a_ij b_j(o_{t+1}) β_{t+1}(j), t = T-1, …, 1
P(O | λ) = Σ_{i=1}^{N} π_i b_i(o_1) β_1(i)


39 Forward-Backward Procedure
Benefit: order N²T, as compared to 2T·N^T for the direct computation.
Only the forward or the backward procedure is needed for Problem 1.

40 Problem 2
Given an observation sequence O = {o_1 … o_T}, find the "best" state sequence Q = {q_1 … q_T}, i.e., maximize P(Q | O, λ).
Solution:
1. The state individually most likely at each position i, or
2. The best state sequence given all the observations → Viterbi algorithm

41 Viterbi Algorithm
Define δ_t(i) = max over q_1 … q_{t-1} of P(q_1 … q_{t-1}, q_t = s_i, o_1 … o_t | λ), i.e., the probability of the state sequence with the best joint probability so far.
By induction:
δ_{t+1}(j) = [max_i δ_t(i) a_ij] b_j(o_{t+1})
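The recursion with backpointers can be sketched as follows, again on the urn model with π assumed uniform. δ is kept per time step and ψ records each argmax for backtracking.

```python
# Viterbi: delta[j] = best joint probability of any state sequence ending in j.
def viterbi(A, B, pi, obs):
    N = len(pi)
    delta = [pi[i] * B[i][obs[0]] for i in range(N)]
    psi = []                                     # backpointers per time step
    for t in range(1, len(obs)):
        back = [max(range(N), key=lambda i: delta[i] * A[i][j])
                for j in range(N)]
        delta = [delta[back[j]] * A[back[j]][j] * B[j][obs[t]]
                 for j in range(N)]
        psi.append(back)
    # backtrack from the best final state
    path = [max(range(N), key=lambda i: delta[i])]
    for back in reversed(psi):
        path.insert(0, back[path[0]])
    return path, max(delta)

A = [[0.1, 0.4, 0.5], [0.6, 0.2, 0.2], [0.3, 0.4, 0.3]]
B = [[0.3, 0.5, 0.2], [0.1, 0.4, 0.5], [0.6, 0.1, 0.3]]
pi = [1/3, 1/3, 1/3]   # assumed uniform; not given on the slides

path, prob = viterbi(A, B, pi, [0, 1])   # observation RG (0 = R, 1 = G)
```

For the observation RG, the most probable sequence is U3 then U2 (indices [2, 1]): starting in U3 makes R likely, and U3 → U2 followed by a G emission dominates the alternatives.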

42 Viterbi Algorithm


44 Problem 3
How to adjust λ to maximize P(O | λ)?
Solution: re-estimate (iteratively update and improve) the HMM parameters A, B, π using the Baum-Welch algorithm.

45 Baum-Welch Algorithm
Define ξ_t(i,j) = P(q_t = S_i, q_{t+1} = S_j | O, λ), the probability of being in state S_i at time t and in S_j at time t+1, given the observations.
Putting in the forward and backward variables:
ξ_t(i,j) = α_t(i) a_ij b_j(o_{t+1}) β_{t+1}(j) / P(O | λ)

46 Baum-Welch algorithm

47 Define γ_t(i) = Σ_{j=1}^{N} ξ_t(i,j).
Then Σ_{t=1}^{T-1} γ_t(i) = expected number of transitions from S_i,
and Σ_{t=1}^{T-1} ξ_t(i,j) = expected number of transitions from S_i to S_j.
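Putting the pieces together as a sketch on the urn model (π assumed uniform, observation RRGGBRGR from slide 12): ξ and γ are computed from the forward and backward variables, and the ratio of expected counts gives the re-estimated transition matrix. Consistency requires that every γ_t sums to one over states and that Σ_j ξ_t(i,j) = γ_t(i).

```python
# One Baum-Welch E-step plus transition re-estimation on the urn model.
A = [[0.1, 0.4, 0.5], [0.6, 0.2, 0.2], [0.3, 0.4, 0.3]]
B = [[0.3, 0.5, 0.2], [0.1, 0.4, 0.5], [0.6, 0.1, 0.3]]
pi = [1/3, 1/3, 1/3]             # assumed uniform; not given on the slides
obs = [0, 0, 1, 1, 2, 0, 1, 0]   # RRGGBRGR with R=0, G=1, B=2
N, T = 3, len(obs)

# forward and backward variables (state-emission convention)
alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]
for t in range(1, T):
    alpha.append([sum(alpha[t - 1][i] * A[i][j] for i in range(N))
                  * B[j][obs[t]] for j in range(N)])
beta = [[1.0] * N for _ in range(T)]
for t in range(T - 2, -1, -1):
    beta[t] = [sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                   for j in range(N)) for i in range(N)]
p_obs = sum(alpha[-1])

# xi_t(i,j): probability of the transition i -> j at time t, given O
xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] / p_obs
        for j in range(N)] for i in range(N)] for t in range(T - 1)]
# gamma_t(i): probability of being in state i at time t, given O
gamma = [[alpha[t][i] * beta[t][i] / p_obs for i in range(N)] for t in range(T)]

# re-estimation: expected i -> j transitions over expected transitions from i
A_new = [[sum(xi[t][i][j] for t in range(T - 1))
          / sum(gamma[t][i] for t in range(T - 1))
          for j in range(N)] for i in range(N)]
```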


49 Baum-Welch Algorithm
Baum et al. have proved that the above re-estimation equations lead to a model as good as or better than the previous one.

