
CHAPTER 15: Hidden Markov Models


1 CHAPTER 15: Hidden Markov Models
Alpaydın slides with several modifications and additions by Christoph Eick.

2 Introduction
Modeling dependencies in the input; observations are no longer iid, i.e. the order of observations in a dataset matters.
Temporal sequences: speech (phonemes in a word (dictionary), words in a sentence (syntax, semantics of the language)); the stock market (stock values over time).
Spatial sequences: base pairs in DNA sequences.

3 Discrete Markov Process
N states: S1, S2, ..., SN
State at "time" t: qt = Si
First-order Markov property: P(qt+1 = Sj | qt = Si, qt-1 = Sk, ...) = P(qt+1 = Sj | qt = Si)
Transition probabilities: aij ≡ P(qt+1 = Sj | qt = Si), with aij ≥ 0 and Σj=1..N aij = 1
Initial probabilities: πi ≡ P(q1 = Si), with Σi=1..N πi = 1
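A minimal Python sketch (not from the slides) of these definitions: π and A as numpy arrays satisfying the constraints above, plus sampling of a state sequence. The numbers are illustrative.

import numpy as np

# Illustrative parameters: pi_i = P(q1 = S_i), a_ij = P(q_{t+1} = S_j | q_t = S_i)
pi = np.array([0.5, 0.3, 0.2])
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
assert np.isclose(pi.sum(), 1.0) and np.allclose(A.sum(axis=1), 1.0)

def sample_chain(pi, A, T, seed=0):
    """Draw a state sequence q_1..q_T (states are 0-indexed here)."""
    rng = np.random.default_rng(seed)
    q = [rng.choice(len(pi), p=pi)]                # q_1 ~ pi
    for _ in range(T - 1):
        q.append(rng.choice(len(pi), p=A[q[-1]]))  # q_{t+1} ~ row of A for the current state
    return q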

4 Stochastic Automaton/Markov Chain
Probability of an observed state sequence Q = q1 q2 ⋯ qT: P(Q | A, Π) = P(q1) Πt=2..T P(qt | qt-1) = πq1 aq1q2 ⋯ aqT-1qT
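A minimal sketch of the factorization above, reusing the pi and A arrays from the previous sketch.

def sequence_prob(q, pi, A):
    """Probability of a fully observed state sequence q under (pi, A)."""
    p = pi[q[0]]
    for prev, cur in zip(q[:-1], q[1:]):
        p *= A[prev, cur]
    return p

# e.g. sequence_prob([0, 1, 0, 2], pi, A)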

5 Example: Balls and Urns
Three urns, each full of balls of a single color: S1: blue, S2: red, S3: green.

6 Balls and Urns: Learning
Given K example sequences of length T, extract the probabilities from the observed sequences by counting.
Example sequences: s1-s2-s1-s3, s2-s1-s1-s2, s2-s3-s2-s1
Estimates: π̂1 = 1/3, π̂2 = 2/3, π̂3 = 0; â11 = 1/4, â12 = 2/4, â13 = 1/4; â21 = 3/4, …
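A minimal sketch of this counting procedure, applied to the three example sequences above (states 0-indexed, so s1 → 0, s2 → 1, s3 → 2).

import numpy as np

sequences = [[0, 1, 0, 2],   # s1-s2-s1-s3
             [1, 0, 0, 1],   # s2-s1-s1-s2
             [1, 2, 1, 0]]   # s2-s3-s2-s1
N = 3

pi_hat = np.zeros(N)
counts = np.zeros((N, N))
for seq in sequences:
    pi_hat[seq[0]] += 1                      # which state each sequence starts in
    for prev, cur in zip(seq[:-1], seq[1:]):
        counts[prev, cur] += 1               # observed transitions
pi_hat /= len(sequences)                     # -> [1/3, 2/3, 0]
A_hat = counts / counts.sum(axis=1, keepdims=True)   # row i holds the estimates of a_i1, a_i2, a_i3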

7 Hidden Markov Models
States are not observable. Discrete observations {v1, v2, ..., vM} are recorded; each observation is a probabilistic function of the (hidden) state.
Emission probabilities: bj(m) ≡ P(Ot = vm | qt = Sj)
Example: in each urn there are balls of different colors, but with different probabilities.
For each observation sequence, there are multiple possible state sequences.
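A minimal generative sketch of the urn example (the emission probabilities below are illustrative, not the slide's): each hidden state (urn) j has its own color distribution bj(m), and only the color sequence O is recorded.

import numpy as np

colors = ["blue", "red", "green"]
B = np.array([[0.7, 0.2, 0.1],   # urn 1: mostly blue
              [0.2, 0.6, 0.2],   # urn 2: mostly red
              [0.1, 0.2, 0.7]])  # urn 3: mostly green

def sample_hmm(pi, A, B, T, seed=0):
    rng = np.random.default_rng(seed)
    state = rng.choice(len(pi), p=pi)
    q, O = [], []
    for _ in range(T):
        q.append(state)
        O.append(rng.choice(B.shape[1], p=B[state]))  # emit a color from the current urn
        state = rng.choice(len(pi), p=A[state])       # move to the next urn
    return q, O   # q stays hidden in practice; only O (the colors) is observed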

8 HMM Unfolded in Time
(Figure: an HMM unfolded in time; source: http://a-little-book-of-r-for-bioinformatics.readthedocs)

9 Now a more complicated problem
Markov chain: we observe the state (urn) sequence directly, so asking which urn sequence created it is somewhat trivial, as the states are observable.
Hidden Markov model: we only observe the colors drawn; the urn sequence could be, e.g., (1 or 2)-(1 or 2)-(2 or 3)-(2 or 3), and the candidate state sequences have different probabilities, e.g. drawing a blue ball from urn 1 is more likely than from urn 2.

10 Another Motivating Example

11 Elements of an HMM
N: number of states
M: number of observation symbols
A = [aij]: N × N state transition probability matrix
B = [bj(m)]: N × M observation (emission) probability matrix
Π = [πi]: N × 1 initial state probability vector
λ = (A, B, Π): the parameter set of the HMM
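A minimal sketch of bundling the parameter set λ = (A, B, Π) into one object (the class name and checks are my own, not from the slides).

from dataclasses import dataclass
import numpy as np

@dataclass
class HMM:
    A: np.ndarray    # N x N transition probabilities a_ij
    B: np.ndarray    # N x M emission probabilities b_j(m)
    pi: np.ndarray   # N initial state probabilities pi_i

    def __post_init__(self):
        assert np.allclose(self.A.sum(axis=1), 1.0)   # each row of A is a distribution
        assert np.allclose(self.B.sum(axis=1), 1.0)   # each row of B is a distribution
        assert np.isclose(self.pi.sum(), 1.0)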

12 Three Basic Problems of HMMs
Evaluation: given λ and a sequence O, calculate P(O | λ).
Most likely state sequence: given λ and a sequence O, find the state sequence Q* such that P(Q* | O, λ) = maxQ P(Q | O, λ).
Learning: given a set of sequences O = {O1, …, OK}, find λ* that best explains the sequences in O: P(O | λ*) = maxλ Πk P(Ok | λ).
(Rabiner, 1989)

13 Evaluation
Forward variable: αt(i) ≡ P(O1 ⋯ Ot, qt = Si | λ), the probability of observing O1-…-Ot and additionally being in state Si at step t.
Recursion: α1(i) = πi bi(O1);  αt+1(j) = [Σi=1..N αt(i) aij] bj(Ot+1)
Using αT(i), the probability of the observed sequence can be computed as P(O | λ) = Σi=1..N αT(i).
Complexity: O(N²·T)
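A minimal sketch of the forward pass described above (no scaling or log-space handling, which real implementations need for long sequences).

import numpy as np

def forward(O, pi, A, B):
    """alpha[t, i] = P(O_1..O_t, q_t = S_i | lambda); O is a list of symbol indices."""
    T, N = len(O), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]                       # initialization
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, O[t]]   # recursion: O(N^2) work per step
    return alpha

# P(O | lambda) = sum of the last row of alpha:
# prob = forward(O, pi, A, B)[-1].sum()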

14 Backward Variable
Backward variable: βt(i) ≡ P(Ot+1 ⋯ OT | qt = Si, λ), the probability of observing Ot+1-…-OT given being in state Si at step t.
Recursion: βT(i) = 1;  βt(i) = Σj=1..N aij bj(Ot+1) βt+1(j)
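A matching sketch of the backward pass.

import numpy as np

def backward(O, pi, A, B):
    """beta[t, i] = P(O_{t+1}..O_T | q_t = S_i, lambda)."""
    T, N = len(O), len(pi)
    beta = np.zeros((T, N))
    beta[T - 1] = 1.0                                   # initialization
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])    # recursion
    return beta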

15 Finding the Most Likely State Sequence
γt(i) ≡ P(qt = Si | O, λ) = αt(i) βt(i) / Σj=1..N αt(j) βt(j): the probability of being in state i at step t, given the observed sequence O = O1 … Ot Ot+1 … OT.
Choose the state that has the highest probability, for each time step: qt* = arg maxi γt(i)
Note: the individually most likely states chosen this way need not form the single most likely state sequence; that is what the Viterbi algorithm (next slide) computes.
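A minimal sketch of this per-step ("posterior") decoding, built on the forward/backward sketches above.

import numpy as np

def state_posteriors(O, pi, A, B):
    """gamma[t, i] = P(q_t = S_i | O, lambda)."""
    alpha, beta = forward(O, pi, A, B), backward(O, pi, A, B)
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)   # normalize each time step

# q_star = state_posteriors(O, pi, A, B).argmax(axis=1)   # q_t* = argmax_i gamma_t(i)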

16 Viterbi's Algorithm
(Only briefly discussed in 2016!)
δt(i) ≡ maxq1 q2 ⋯ qt-1 P(q1 q2 ⋯ qt-1, qt = Si, O1 ⋯ Ot | λ)
Initialization: δ1(i) = πi bi(O1), ψ1(i) = 0
Recursion: δt(j) = [maxi δt-1(i) aij] bj(Ot), ψt(j) = arg maxi δt-1(i) aij
Termination: p* = maxi δT(i), qT* = arg maxi δT(i)
Path backtracking: qt* = ψt+1(qt+1*), t = T-1, T-2, ..., 1
Idea: combines path probability computations with backtracking over competing paths.
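A minimal sketch of the Viterbi recursion above (again without log-space scaling).

import numpy as np

def viterbi(O, pi, A, B):
    T, N = len(O), len(pi)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, O[0]]                        # initialization
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A            # scores[i, j] = delta_{t-1}(i) * a_ij
        psi[t] = scores.argmax(axis=0)                # best predecessor of each state j
        delta[t] = scores.max(axis=0) * B[:, O[t]]    # recursion
    q = [int(delta[-1].argmax())]                     # termination: q_T*
    for t in range(T - 1, 0, -1):                     # path backtracking
        q.append(int(psi[t][q[-1]]))
    return q[::-1], float(delta[-1].max())            # best path and its probability p*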

17 Baum-Welch Algorithm
Given observed symbol sequences O = {O1, …, OK}, the Baum-Welch algorithm learns the model λ = (A, B, Π); the state sequence that generated each observed sequence remains hidden.

18 Learning a Model λ from Sequences O
An EM-style algorithm is used!
ξt(i, j) ≡ P(qt = Si, qt+1 = Sj | Ok, λ) = αt(i) aij bj(Ot+1) βt+1(j) / P(Ok | λ): a hidden (latent) variable measuring the probability of going from state i at step t to state j at step t+1, observing Ot+1, given a model λ and an observed sequence Ok ∈ O.
γt(i) ≡ P(qt = Si | Ok, λ) = αt(i) βt(i) / P(Ok | λ): a hidden (latent) variable measuring the probability of being in state i at step t, given a model λ and an observed sequence Ok ∈ O.
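A minimal sketch of computing these E-step quantities for one sequence, using the forward/backward sketches above.

import numpy as np

def e_step(O, pi, A, B):
    """Return gamma[t, i] and xi[t, i, j] for a single observed sequence O."""
    alpha, beta = forward(O, pi, A, B), backward(O, pi, A, B)
    prob = alpha[-1].sum()                            # P(O | lambda)
    T, N = len(O), len(pi)
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        # xi_t(i, j) = alpha_t(i) a_ij b_j(O_{t+1}) beta_{t+1}(j) / P(O | lambda)
        xi[t] = alpha[t][:, None] * A * B[:, O[t + 1]] * beta[t + 1] / prob
    gamma = alpha * beta / prob                       # gamma_t(i) = alpha_t(i) beta_t(i) / P(O | lambda)
    return gamma, xi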

19 Baum-Welch Algorithm: M-Step
π̂i = γ1(i)  (probability of being in state i at step 1)
âij = Σt=1..T-1 ξt(i, j) / Σt=1..T-1 γt(i)  (expected number of transitions from i to j / expected number of times in state i)
b̂j(m) = Σt: Ot=vm γt(j) / Σt=1..T γt(j)  (expected number of times in state j observing vm / expected number of times in state j)
Remark: k iterates over the observed sequences O1, …, OK; for each individual sequence Ok ∈ O, ξk and γk are computed in the E-step; then the actual model λ is computed in the M-step by averaging the estimates of πi, aij, bj(m) (based on ξk and γk) over the K observed sequences.
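A minimal sketch of these re-estimation formulas for a single sequence (the multi-sequence case averages the same statistics over all K sequences, as the remark describes).

import numpy as np

def m_step(O, gamma, xi, M):
    pi_new = gamma[0]                                           # expected start-state occupancy
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]    # E[# i->j transitions] / E[# times in i]
    B_new = np.zeros((gamma.shape[1], M))
    for m in range(M):
        B_new[:, m] = gamma[np.array(O) == m].sum(axis=0)       # E[# times in j emitting v_m]
    B_new /= gamma.sum(axis=0)[:, None]                         # / E[# times in j]
    return pi_new, A_new, B_new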

20 Baum-Welch Algorithm: Summary
Given observed symbol sequences O = {O1, …, OK}, the Baum-Welch algorithm alternates the E-step (computing γ and ξ for each sequence under the current model) and the M-step (re-estimating λ = (A, B, Π)) until convergence.
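A minimal sketch tying the E- and M-step sketches above into the overall loop for a single sequence (a fixed number of iterations stands in for a proper convergence check).

def baum_welch(O, pi, A, B, n_iter=50):
    M = B.shape[1]
    for _ in range(n_iter):
        gamma, xi = e_step(O, pi, A, B)      # E-step: posteriors under the current lambda
        pi, A, B = m_step(O, gamma, xi, M)   # M-step: re-estimate lambda
    return pi, A, B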

21 Generalization of HMM: Continuous Observations
The observations generated at each time step are vectors of k numbers; a k-dimensional multivariate Gaussian is associated with each state j, defining the probability of a k-dimensional vector v being generated when in state j: P(Ot = v | qt = Sj) = N(v; μj, Σj).
The model becomes λ = (A, {(μj, Σj)}j=1,…,N, Π); the hidden state sequence generates the observed vector sequence O.
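A minimal sketch (assuming SciPy is available) of the only change this generalization requires in the earlier sketches: the discrete emission term B[:, Ot] is replaced by a per-state Gaussian density.

import numpy as np
from scipy.stats import multivariate_normal

def emission_probs(v, mus, Sigmas):
    """Density of k-dimensional vector v under each state's Gaussian (mu_j, Sigma_j)."""
    return np.array([multivariate_normal.pdf(v, mean=mu, cov=S) for mu, S in zip(mus, Sigmas)])

# In forward()/viterbi(), replace B[:, O[t]] with emission_probs(O[t], mus, Sigmas).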

22 Generalization: HMM with Inputs
Input-dependent observations: P(Ot | qt = Sj, xt)
Input-dependent transitions: P(qt+1 = Sj | qt = Si, xt) (Meila and Jordan, 1996; Bengio and Frasconi, 1996)
Time-delay input: xt is a function of previous observations, e.g. xt = f(Ot-τ, …, Ot-1)

