THE HIDDEN MARKOV MODEL (HMM)

1 THE HIDDEN MARKOV MODEL (HMM)
Introduction
An Example and Definition of Terms
Three Major Problems: Problem 1, Problem 2, Problem 3
Solutions to the Problems
Applications
Advantages of HMM
Disadvantages of HMM

2 THE HIDDEN MARKOV MODEL (AN EXPLANATORY EXAMPLE)
Suppose someone in a closed room is tossing 3 coins in a sequence that we don't know. All we know is the outcome, e.g. HHTTTH…, that is displayed on a screen. We know neither the order in which the coins are tossed nor the bias of each coin. Another important factor we are not aware of is whether the current state we are in affects the future outcome.

3 THE HIDDEN MARKOV MODEL (AN EXPLANATORY EXAMPLE)
If we are told that the third (3rd) coin is highly biased towards heads, then we expect more H on the display screen. If, in addition, we are told that the probability of tossing the third (3rd) coin after tossing the second (2nd) or first (1st) coin is zero (0), and we assume we are in the `second coin tossing` state, then the third coin can never be reached again, so in spite of its bias both H and T have an equal probability of turning up on the display screen (assuming the first two coins are fair).

4 THE HIDDEN MARKOV MODEL (AN EXPLANATORY EXAMPLE)
Observations: the output very much depends on
the individual bias of each coin,
the transition probabilities between states, and
the state chosen to begin the experiment.

5 THE HIDDEN MARKOV MODEL (AN EXPLANATORY EXAMPLE)
The Hidden Markov Model is (for the above example) characterized by three (3) sets, namely: the set of individual biases of the three coins, the set of transition probabilities from one coin to another, and the set of initial probabilities of choosing the states. A concrete sketch of these three sets follows below.
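To make these three sets concrete, here is a minimal Python/NumPy sketch of the coin example; the numeric values are invented for illustration, since the slides deliberately leave them unknown:

```python
import numpy as np

rng = np.random.default_rng(0)

# The three sets that characterize the model (all values are made up):
pi = np.array([1/3, 1/3, 1/3])   # initial probability of choosing each coin
A = np.array([[0.6, 0.3, 0.1],   # A[i, j] = P(toss coin j next | coin i now)
              [0.4, 0.4, 0.2],
              [0.1, 0.2, 0.7]])
B = np.array([[0.5, 0.5],        # B[i, k] = P(symbol k | coin i); columns: H, T
              [0.4, 0.6],
              [0.9, 0.1]])       # the 3rd coin is highly biased towards heads

# Generate a display string: the coin sequence stays hidden;
# only the H/T outcomes are ever observed.
state = rng.choice(3, p=pi)
display = ""
for _ in range(10):
    display += "HT"[rng.choice(2, p=B[state])]
    state = rng.choice(3, p=A[state])
print(display)  # all an outside observer sees, e.g. a string like HHTTTH...
```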

6 THE HIDDEN MARKOV MODEL DEFINITIONS
N: the number of hidden states (the three coins above).
M: the number of distinct observation symbols (H and T).
A = {aij}: the state transition probabilities, aij = P(state j at time t+1 | state i at time t).
B = {bj(k)}: the observation symbol probabilities, bj(k) = P(symbol k | state j).
∏ = {πi}: the initial state distribution, πi = P(state i at time 1).
The compact notation λ = (A, B, ∏) denotes the complete model.

7 THE HIDDEN MARKOV MODEL THREE MAJOR PROBLEMS
Likelihood: given the model λ = (A, B, ∏), how do we compute P(O|λ), the probability of occurrence of the observation sequence O = O1, O2, O3, …, OT?
Decoding: given the model λ = (A, B, ∏), how do we choose a state sequence I = I1, I2, I3, …, IT so that P(O, I|λ), the joint probability of the observation sequence O = O1, O2, …, OT and the state sequence, is maximized (the most probable path)?
Learning/Training: how do we adjust the model parameters of λ so that P(O|λ) (or P(O, I|λ)) is maximized?

8 THE HIDDEN MARKOV MODEL SOLUTION OF THE LIKELIHOOD PROBLEM
The Forward–Backward Procedure. Dynamic programming: store the previously calculated values αt(i) = P(O1, O2, …, Ot, state i at time t | λ), so that each is computed only once. A sketch of the forward pass follows below.
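A minimal sketch of the forward pass in Python/NumPy, following the coin-example conventions above (function and variable names are illustrative, not from the slides):

```python
import numpy as np

def forward(obs, pi, A, B):
    """Forward procedure: alpha[t, i] = P(O1..Ot, state i at time t | lambda)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                     # initialization
    for t in range(1, T):
        # Induction: reuse the stored alpha[t-1] instead of re-enumerating all
        # state sequences -- this is the dynamic-programming step.
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha, alpha[-1].sum()                    # termination: P(O | lambda)

# Usage with the coin model above (observation indices: 0 = H, 1 = T):
# alpha, likelihood = forward([0, 0, 1, 1, 1, 0], pi, A, B)
```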

9 THE HIDDEN MARKOV MODEL SOLUTION OF THE LIKELIHOOD PROBLEM
The Backward Procedure: by symmetry, compute βt(i) = P(Ot+1, Ot+2, …, OT | state i at time t, λ) by induction backwards in time.
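A matching sketch of the backward pass (same conventions; neither sketch uses scaling, so both are only suitable for short sequences):

```python
import numpy as np

def backward(obs, pi, A, B):
    """Backward procedure: beta[t, i] = P(Ot+1..OT | state i at time t, lambda)."""
    T, N = len(obs), len(pi)
    beta = np.ones((T, N))              # initialization: beta[T-1, i] = 1 for all i
    for t in range(T - 2, -1, -1):      # induction, backwards in time
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

# The same likelihood as the forward pass, computed from the other end:
# P(O | lambda) = (pi * B[:, obs[0]] * backward(obs, pi, A, B)[0]).sum()
```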

10 THE HIDDEN MARKOV MODEL SOLUTION OF THE MOST PROBABLE PATH PROBLEM
We have to find a state sequence I = i1, i2, …, iT such that the probability of the observation sequence O = O1, O2, O3, …, OT arising from this state sequence is GREATER than that from any other state sequence. In other words, we are trying to find the I that maximizes P(O, I|λ). This is achieved by using a famous algorithm called the Viterbi algorithm.

11 THE HIDDEN MARKOV MODEL SOLUTION OF THE MOST PROBABLE PATH PROBLEM
The Viterbi Algorithm. This is an inductive algorithm in which, at each instant, the BEST possible state sequence ending in each of the N states is kept as an intermediate result for the desired observation sequence. BEST here means the sequence giving the maximum probability. In this way we finally have the best path ending in each of the N states at the last instant, and we then SELECT the one with the highest probability. A sketch follows below.
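A minimal sketch of the Viterbi algorithm under the same conventions (probabilities are multiplied directly here; a production implementation would work in log space to avoid underflow):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable state sequence I and the maximized P(O, I | lambda)."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))            # delta[t, j] = prob. of the BEST path ending in j at t
    psi = np.zeros((T, N), dtype=int)   # psi[t, j] = best predecessor of state j at time t
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A  # scores[i, j]: extend the best path at i by i -> j
        psi[t] = scores.argmax(axis=0)      # keep only the best path into each of the N states
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    # SELECT the final state with the highest probability, then backtrack.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1], delta[-1].max()
```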

12 THE HIDDEN MARKOV MODEL SOLUTION OF THE TRAINING PROBLEM
Training the HMM. This problem deals with training the model so that it encodes the observation sequence, such that any future observation sequence with resembling characteristics can be identified. There are two methods used to achieve this goal: the Segmental K-Means algorithm and the Baum-Welch re-estimation formulas.

13 THE HIDDEN MARKOV MODEL SOLUTION OF THE TRAINING PROBLEM
The Segmental K-Means Algorithm. In this method, the parameters of λ = (A, B, ∏) are adjusted so as to maximize P(O, I|λ), where I is the optimal state sequence calculated in the solution to Problem 2 (the Viterbi algorithm). A sketch of the idea follows below.
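The slides do not spell out the algorithm's steps, so the following is only a rough sketch of the idea for this discrete-output model: re-estimate the parameters by counting along the current Viterbi path, then iterate. It reuses the viterbi() sketch above, and the smoothing constant eps is an invented safeguard against zero counts:

```python
import numpy as np

def segmental_training(obs, pi, A, B, n_iter=20, eps=1e-3):
    """Sketch: alternate between finding the optimal path I (Problem 2)
    and re-estimating lambda = (A, B, pi) from counts along that path."""
    N, M = B.shape
    for _ in range(n_iter):
        path, _ = viterbi(obs, pi, A, B)        # optimal I under the current lambda
        pi_c = np.full(N, eps)                  # eps keeps every probability nonzero
        A_c = np.full((N, N), eps)
        B_c = np.full((N, M), eps)
        pi_c[path[0]] += 1.0
        for t, o in enumerate(obs):
            B_c[path[t], o] += 1.0              # emission counts along the best path
            if t > 0:
                A_c[path[t - 1], path[t]] += 1.0  # transition counts along the best path
        pi = pi_c / pi_c.sum()
        A = A_c / A_c.sum(axis=1, keepdims=True)
        B = B_c / B_c.sum(axis=1, keepdims=True)
    return pi, A, B
```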

14 THE HIDDEN MARKOV MODEL SOLUTION OF THE TRAINING PROBLEM
The Baum-Welch Re-estimation Formulas. In this method, the parameters of λ = (A, B, ∏) are adjusted so as to increase P(O|λ) until a maximum value is achieved. No particular state sequence is of special interest to us: the re-estimation averages over all state sequences. A sketch follows below.
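A compact sketch of the re-estimation loop, reusing the forward() and backward() sketches above (again without scaling, so only suitable for short sequences; obs is a sequence of symbol indices):

```python
import numpy as np

def baum_welch(obs, pi, A, B, n_iter=20):
    """Sketch: re-estimate lambda = (A, B, pi) so that P(O | lambda) increases."""
    T, (N, M) = len(obs), B.shape
    obs = np.asarray(obs)
    for _ in range(n_iter):
        alpha, likelihood = forward(obs, pi, A, B)
        beta = backward(obs, pi, A, B)
        # gamma[t, i] = P(state i at time t | O, lambda)
        gamma = alpha * beta / likelihood
        # xi[t, i, j] = P(state i at t and state j at t+1 | O, lambda)
        xi = (alpha[:-1, :, None] * A[None, :, :]
              * B[:, obs[1:]].T[:, None, :] * beta[1:, None, :]) / likelihood
        pi = gamma[0]                           # expected initial state occupancy
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        B_new = np.zeros((N, M))
        for t in range(T):
            B_new[:, obs[t]] += gamma[t]        # expected emission counts
        B = B_new / gamma.sum(axis=0)[:, None]
    return pi, A, B
```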

15 THE HIDDEN MARKOV MODEL APPLICATIONS
The HMM is a great tool for modelling time-series data and has a variety of applications:
Speech recognition
Computer vision
Computational molecular biology
Pattern recognition

16 THE HIDDEN MARKOV MODEL ADVANTAGES
Solid statistical foundation
Efficient learning algorithms
Flexible and general model for sequence properties
Unsupervised learning from variable-length, raw sequences

17 THE HIDDEN MARKOV MODEL DISADVANTAGES
Large number of unstructured parameters
Need for large amounts of training data
Subtle long-range correlations in real sequences are unaccounted for, due to the Markov property

18 THE HIDDEN MARKOV MODEL Questions?
Thank you for your time.

