Presentation on theme: "Hidden Markov Model" — Presentation transcript:

1 Hidden Markov Model
Observations: O1, O2, ...
States in time: q1, q2, ...
All states: s1, s2, ..., sN
(Figure: transition between states Si and Sj.)

2 Hidden Markov Model (Cont'd)
Discrete Markov Model
First-order Markov model: the next state depends only on the current state,
P(q_t = s_j | q_{t-1} = s_i, q_{t-2} = s_k, ...) = P(q_t = s_j | q_{t-1} = s_i)

3 Hidden Markov Model (Cont'd)
a_ij = P(q_t = s_j | q_{t-1} = s_i) : transition probability from Si to Sj,
with a_ij >= 0 and sum_j a_ij = 1 for every state i.

4 Discrete Markov Model Example
S1 : The weather is rainy
S2 : The weather is cloudy
S3 : The weather is sunny
(Figure: 3×3 transition matrix A with rows and columns indexed rainy, cloudy, sunny; the numeric values appeared only in the slide image.)

5 Hidden Markov Model Example (Cont'd)
Question 1: What is the probability of the weather sequence
Sunny-Sunny-Sunny-Rainy-Rainy-Sunny-Cloudy-Cloudy?
P(O | Model) = P(s3) P(s3|s3) P(s3|s3) P(s1|s3) P(s1|s1) P(s3|s1) P(s2|s3) P(s2|s2)
             = pi_3 · a_33 · a_33 · a_31 · a_11 · a_13 · a_32 · a_22
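The chain-rule computation in Question 1 can be sketched in code. The transition matrix and initial probabilities below are illustrative placeholders (the slide's actual numbers were in a figure), but the calculation itself is the general one:

```python
# Probability of a state sequence in a discrete Markov model.
# A and pi below are hypothetical values, not the slide's figure.
states = {"rainy": 0, "cloudy": 1, "sunny": 2}

# A[i][j] = P(next state = j | current state = i); each row sums to 1.
A = [
    [0.4, 0.3, 0.3],   # from rainy
    [0.2, 0.6, 0.2],   # from cloudy
    [0.1, 0.1, 0.8],   # from sunny
]
pi = [0.3, 0.3, 0.4]   # initial state probabilities (also illustrative)

def sequence_probability(seq):
    """P(q1, ..., qT) = pi[q1] * product over t of A[q_{t-1}][q_t]."""
    idx = [states[s] for s in seq]
    p = pi[idx[0]]
    for prev, cur in zip(idx, idx[1:]):
        p *= A[prev][cur]
    return p

seq = ["sunny", "sunny", "sunny", "rainy", "rainy", "sunny", "cloudy", "cloudy"]
print(sequence_probability(seq))
```

With these placeholder numbers the sequence above has probability pi_3 · 0.8 · 0.8 · 0.1 · 0.4 · 0.3 · 0.1 · 0.6.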

6 Hidden Markov Model Example (Cont'd)
Question 2: What is the probability of staying in state Si for exactly d days, given that we are in state Si at time t = 1?
p_i(d) = (a_ii)^(d-1) (1 - a_ii)
(d-1 self-transitions followed by one transition out of Si.)

7 Discrete Density HMM Components
N : number of states
M : number of output symbols
A (N×N) : state transition probability matrix
B (N×M) : output occurrence probability in each state
pi (1×N) : initial state probability
lambda = (A, B, pi) : set of HMM parameters
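A minimal sketch of the parameter set lambda = (A, B, pi) as a container with the stochastic constraints checked; the class name and toy values are mine, not the slides':

```python
from dataclasses import dataclass

# Container for discrete-density HMM parameters lambda = (A, B, pi).
# Shapes follow the slide: A is NxN, B is NxM, pi is 1xN.
@dataclass
class HMM:
    A: list    # NxN state transition probabilities
    B: list    # NxM output probabilities per state
    pi: list   # N initial state probabilities

    def validate(self):
        """Check shapes and that every probability row sums to 1."""
        n = len(self.pi)
        assert len(self.A) == n and all(len(row) == n for row in self.A)
        assert len(self.B) == n
        assert abs(sum(self.pi) - 1.0) < 1e-9
        for row in self.A:
            assert abs(sum(row) - 1.0) < 1e-9
        for row in self.B:
            assert abs(sum(row) - 1.0) < 1e-9
        return True

# Toy 2-state, 2-symbol model (illustrative values).
model = HMM(A=[[0.7, 0.3], [0.4, 0.6]],
            B=[[0.9, 0.1], [0.2, 0.8]],
            pi=[0.6, 0.4])
print(model.validate())
```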

8 Three Basic HMM Problems
Recognition Problem: Given an HMM lambda and an observation sequence O, what is the probability P(O | lambda)?
State Decoding Problem: Given a model lambda and an observation sequence O, what is the most likely state sequence in the model that produced the observations?
Training Problem: Given a model lambda and an observation sequence O, how should we adjust the model parameters to maximize P(O | lambda)?

9 First Problem Solution
We know that:
P(O | q, lambda) = b_q1(O_1) b_q2(O_2) ... b_qT(O_T)
And:
P(q | lambda) = pi_q1 · a_q1q2 · a_q2q3 · ... · a_q(T-1)qT
So:
P(O | lambda) = sum over all state sequences q of P(O | q, lambda) P(q | lambda)

10 First Problem Solution (Cont'd)
Computation order: about 2T · N^T operations, since the sum ranges over all N^T possible state sequences, each requiring on the order of 2T multiplications. This is infeasible for realistic N and T.

11 Forward Backward Approach
Computing the forward variable:
alpha_t(i) = P(O_1 O_2 ... O_t, q_t = s_i | lambda)
1) Initialization:
alpha_1(i) = pi_i b_i(O_1), 1 <= i <= N

12 Forward Backward Approach (Cont'd)
2) Induction:
alpha_(t+1)(j) = [ sum_i alpha_t(i) a_ij ] b_j(O_(t+1)), 1 <= t <= T-1
3) Termination:
P(O | lambda) = sum_i alpha_T(i)
Computation order: O(N^2 T)
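The three steps above can be sketched directly in code. The toy model values are illustrative, not from the slides:

```python
def forward(A, B, pi, O):
    """Forward algorithm: returns P(O | lambda) in O(N^2 T) operations.

    A: NxN transitions, B: NxM emissions, pi: initial probabilities,
    O: observation sequence as symbol indices.
    """
    N = len(pi)
    # 1) Initialization: alpha_1(i) = pi_i * b_i(O_1)
    alpha = [pi[i] * B[i][O[0]] for i in range(N)]
    # 2) Induction: alpha_{t+1}(j) = (sum_i alpha_t(i) a_ij) * b_j(O_{t+1})
    for o in O[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                 for j in range(N)]
    # 3) Termination: P(O | lambda) = sum_i alpha_T(i)
    return sum(alpha)

# Toy 2-state model (illustrative values, not taken from the slides)
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
pi = [0.6, 0.4]
print(forward(A, B, pi, [0, 1, 0]))
```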

13 Backward Variable
beta_t(i) = P(O_(t+1) O_(t+2) ... O_T | q_t = s_i, lambda)
1) Initialization:
beta_T(i) = 1
2) Induction:
beta_t(i) = sum_j a_ij b_j(O_(t+1)) beta_(t+1)(j), t = T-1, ..., 1
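The backward recursion also yields P(O | lambda), via P(O | lambda) = sum_i pi_i b_i(O_1) beta_1(i), which is a useful cross-check against the forward pass. A sketch with the same illustrative toy model:

```python
def backward_P(A, B, pi, O):
    """Compute P(O | lambda) with the backward variable beta.

    beta_T(i) = 1; beta_t(i) = sum_j a_ij b_j(O_{t+1}) beta_{t+1}(j);
    then P(O | lambda) = sum_i pi_i b_i(O_1) beta_1(i).
    """
    N = len(pi)
    beta = [1.0] * N                      # 1) Initialization
    for o in reversed(O[1:]):             # 2) Induction, t = T-1, ..., 1
        beta = [sum(A[i][j] * B[j][o] * beta[j] for j in range(N))
                for i in range(N)]
    return sum(pi[i] * B[i][O[0]] * beta[i] for i in range(N))

# Same toy model as the forward example (illustrative values)
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
pi = [0.6, 0.4]
print(backward_P(A, B, pi, [0, 1, 0]))
```

For this model and observation sequence the result agrees with the forward algorithm, as it must.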

14 Second Problem Solution
Finding the most likely state sequence.
Individually most likely state:
gamma_t(i) = P(q_t = s_i | O, lambda) = alpha_t(i) beta_t(i) / P(O | lambda)
q_t* = argmax_i gamma_t(i)
(Choosing each state individually can produce an invalid sequence, e.g. one containing zero-probability transitions; this motivates the Viterbi algorithm.)

15 Viterbi Algorithm
Define:
delta_t(i) = max over q_1, ..., q_(t-1) of P(q_1 ... q_(t-1), q_t = s_i, O_1 ... O_t | lambda)
delta_t(i) is the probability of the most likely state sequence that ends in state i at time t and accounts for the first t observations.

16 Viterbi Algorithm (Cont'd)
1) Initialization:
delta_1(i) = pi_i b_i(O_1), psi_1(i) = 0
psi_t(i) is the most likely state before state i at time t-1 (the backpointer).

17 Viterbi Algorithm (Cont'd)
2) Recursion:
delta_t(j) = [ max_i delta_(t-1)(i) a_ij ] b_j(O_t)
psi_t(j) = argmax_i delta_(t-1)(i) a_ij

18 Viterbi Algorithm (Cont'd)
3) Termination:
P* = max_i delta_T(i)
q_T* = argmax_i delta_T(i)
4) Backtracking:
q_t* = psi_(t+1)(q_(t+1)*), t = T-1, ..., 1
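The four Viterbi steps can be sketched as follows; the toy model values are illustrative, not from the slides:

```python
def viterbi(A, B, pi, O):
    """Viterbi algorithm: most likely state path and its probability P*."""
    N = len(pi)
    # 1) Initialization: delta_1(i) = pi_i * b_i(O_1), psi_1(i) = 0
    delta = [pi[i] * B[i][O[0]] for i in range(N)]
    psi = []  # backpointers for t = 2, ..., T
    # 2) Recursion: delta_t(j) = max_i(delta_{t-1}(i) a_ij) * b_j(O_t)
    for o in O[1:]:
        new_delta, backptr = [], []
        for j in range(N):
            best_i = max(range(N), key=lambda i: delta[i] * A[i][j])
            backptr.append(best_i)
            new_delta.append(delta[best_i] * A[best_i][j] * B[j][o])
        delta = new_delta
        psi.append(backptr)
    # 3) Termination: P* = max_i delta_T(i)
    q_last = max(range(N), key=lambda i: delta[i])
    p_star = delta[q_last]
    # 4) Backtracking: q_t* = psi_{t+1}(q_{t+1}*)
    path = [q_last]
    for backptr in reversed(psi):
        path.append(backptr[path[-1]])
    path.reverse()
    return path, p_star

# Toy 2-state model (illustrative values, not taken from the slides)
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
pi = [0.6, 0.4]
path, p = viterbi(A, B, pi, [0, 1, 0])
print(path, p)
```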

19 Third Problem Solution
Parameter estimation using the Baum-Welch or Expectation Maximization (EM) approach.
Define:
xi_t(i, j) = P(q_t = s_i, q_(t+1) = s_j | O, lambda)
           = alpha_t(i) a_ij b_j(O_(t+1)) beta_(t+1)(j) / P(O | lambda)

20 Third Problem Solution (Cont'd)
sum over t of gamma_t(i) : expected number of transitions from state i
sum over t of xi_t(i, j) : expected number of transitions from state i to state j

21 Third Problem Solution (Cont'd)
Reestimation formulas:
pi_i' = gamma_1(i)
a_ij' = sum_(t=1..T-1) xi_t(i, j) / sum_(t=1..T-1) gamma_t(i)
b_j'(k) = [ sum over t with O_t = v_k of gamma_t(j) ] / sum_(t=1..T) gamma_t(j)
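A single Baum-Welch reestimation step, combining the forward and backward passes with the gamma/xi definitions and the reestimation formulas above; this is a pure-Python sketch with an illustrative toy model, not a production trainer:

```python
def baum_welch_step(A, B, pi, O):
    """One Baum-Welch (EM) reestimation step for a discrete-density HMM.

    Returns updated (A, B, pi); a step never decreases P(O | lambda).
    """
    N, M, T = len(pi), len(B[0]), len(O)
    # Forward pass: alpha[t][i] = P(O_1..O_t, q_t = s_i | lambda)
    alpha = [[0.0] * N for _ in range(T)]
    for i in range(N):
        alpha[0][i] = pi[i] * B[i][O[0]]
    for t in range(1, T):
        for j in range(N):
            alpha[t][j] = sum(alpha[t-1][i] * A[i][j] for i in range(N)) * B[j][O[t]]
    # Backward pass: beta[t][i] = P(O_{t+1}..O_T | q_t = s_i, lambda)
    beta = [[0.0] * N for _ in range(T)]
    beta[T-1] = [1.0] * N
    for t in range(T - 2, -1, -1):
        for i in range(N):
            beta[t][i] = sum(A[i][j] * B[j][O[t+1]] * beta[t+1][j] for j in range(N))
    P = sum(alpha[T-1])
    # gamma_t(i): probability of being in state i at time t
    gamma = [[alpha[t][i] * beta[t][i] / P for i in range(N)] for t in range(T)]
    # xi_t(i, j): probability of the transition i -> j at time t
    xi = [[[alpha[t][i] * A[i][j] * B[j][O[t+1]] * beta[t+1][j] / P
            for j in range(N)] for i in range(N)] for t in range(T - 1)]
    # Reestimation formulas from the slide
    new_pi = gamma[0][:]
    new_A = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1))
              for j in range(N)] for i in range(N)]
    new_B = [[sum(gamma[t][j] for t in range(T) if O[t] == k) /
              sum(gamma[t][j] for t in range(T))
              for k in range(M)] for j in range(N)]
    return new_A, new_B, new_pi

# Toy model and observation sequence (illustrative values)
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
pi = [0.6, 0.4]
A2, B2, pi2 = baum_welch_step(A, B, pi, [0, 1, 0, 0, 1])
```

Note that the updated parameters remain properly stochastic: every row of A2 and B2, and pi2 itself, sums to 1 by construction.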

22 Baum Auxiliary Function
Q(lambda', lambda) = sum over q of P(q | O, lambda) log P(O, q | lambda')
Maximizing Q over lambda' guarantees P(O | lambda') >= P(O | lambda); iterating this approach converges to a local optimum.

23 Restrictions On Reestimation Formulas
The reestimated parameters automatically satisfy the stochastic constraints:
sum_i pi_i = 1
sum_j a_ij = 1 for every i
sum_k b_j(k) = 1 for every j

24 Continuous Observation Density
We use the values of a PDF instead of discrete output probabilities:
b_j(o) = sum_(m=1..M) c_jm N(o; mu_jm, Sigma_jm)
c_jm : mixture coefficients
mu_jm : mean vectors
Sigma_jm : covariance matrices
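A minimal sketch of evaluating a mixture emission density b_j(o). For simplicity this uses the univariate case (K = 1), whereas the slides cover the multivariate case; function names and values are mine:

```python
import math

def gaussian_pdf(x, mean, var):
    """Univariate normal density N(x; mean, var); the K = 1 special case."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_density(x, c, means, vars_):
    """b_j(x) = sum_m c_m N(x; mu_m, sigma_m^2), with sum_m c_m = 1."""
    return sum(cm * gaussian_pdf(x, m, v) for cm, m, v in zip(c, means, vars_))

# Two-component mixture with illustrative parameters
print(mixture_density(0.5, [0.3, 0.7], [0.0, 1.0], [1.0, 0.5]))
```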

25 Continuous Observation Density Mixture in HMM
(Figure: states S1, S2, S3, each with mixtures M1 to M4; M(m|j) denotes mixture m of state j.)
Dominant mixture: the component m that maximizes c_jm N(o; mu_jm, Sigma_jm).

26 Continuous Observation Density (Cont'd)
Model parameters:
A : N×N (state transitions)
c : N×M (mixture coefficients)
mu : N×M×K (mean vectors)
Sigma : N×M×K×K (covariance matrices)
pi : 1×N (initial state probabilities)
N : number of states
M : number of mixtures in each state
K : dimension of observation vector

27 Continuous Observation Density (Cont'd)
Reestimation formulas for the mixture parameters:
c_jk' = sum_t gamma_t(j, k) / sum_t sum_m gamma_t(j, m)
mu_jk' = sum_t gamma_t(j, k) O_t / sum_t gamma_t(j, k)
Sigma_jk' = sum_t gamma_t(j, k) (O_t - mu_jk)(O_t - mu_jk)^T / sum_t gamma_t(j, k)

28 Continuous Observation Density (Cont'd)
gamma_t(j, k) : probability of being in the j'th state with the k'th mixture at time t,
gamma_t(j, k) = [ alpha_t(j) beta_t(j) / sum_i alpha_t(i) beta_t(i) ] · [ c_jk N(O_t; mu_jk, Sigma_jk) / b_j(O_t) ]

29 State Duration Modeling
Probability of staying exactly d time steps in state i:
p_i(d) = (a_ii)^(d-1) (1 - a_ii)
This implicit duration distribution is geometric, which is often a poor match for real durations, motivating explicit duration modeling.
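A tiny sketch of the implicit geometric duration distribution; the self-transition probability 0.8 is an illustrative value:

```python
def duration_prob(a_ii, d):
    """p_i(d) = a_ii^(d-1) * (1 - a_ii): the geometric duration
    distribution implied by an HMM's self-transition probability."""
    return a_ii ** (d - 1) * (1 - a_ii)

# The probabilities over d = 1, 2, ... sum to 1 (here truncated at 199).
total = sum(duration_prob(0.8, d) for d in range(1, 200))
print(total)
```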

30 State Duration Modeling (Cont'd)
HMM with explicit duration: each state Si emits a run of d observations, with d drawn from an explicit duration density p_i(d), before transitioning to Sj.

31 State Duration Modeling (Cont'd)
Generating observations from an HMM with explicit state durations:
- Select the initial state q_1 = s_i using the initial probabilities pi_i
- Select the duration d using the duration density p_i(d)
- Select the observation sequence O_1 ... O_d using b_i; in practice we assume the observations are conditionally independent given the state
- Select the next state using the transition probabilities a_ij; we also have the additional constraint a_ii = 0, since self-transitions are replaced by the explicit duration model

32 Training In HMM
Maximum Likelihood (ML)
Maximum Mutual Information (MMI)
Minimum Discrimination Information (MDI)

33 Training In HMM
Maximum Likelihood (ML): choose the parameters lambda that maximize P(O | lambda) for the training observation sequence O.

34 Training In HMM (Cont'd)
Maximum Mutual Information (MMI): choose the parameters to maximize the mutual information between the observation sequence and its correct model (class), which also penalizes the likelihood the competing models assign to O.

35 Training In HMM (Cont'd)
Minimum Discrimination Information (MDI): minimize the discrimination information (relative entropy) between the set of densities consistent with the measured observation statistics, such as the autocorrelation of O, and the set of densities the HMM can represent.

