Hidden Markov Model Parameter Estimation BMI/CS 576 Colin Dewey Fall 2015.


Three Important Questions
How likely is a given sequence?
What is the most probable “path” for generating a given sequence?
How can we learn the HMM parameters given a set of sequences?

Learning without Hidden State
learning is simple if we know the correct path for each sequence in our training set
estimate parameters by counting the number of times each parameter is used across the training set
[Figure: an example HMM with begin and end states, shown processing the sequence C A G T along a known state path]

Learning with Hidden State
if we don’t know the correct path for each sequence in our training set, consider all possible paths for the sequence
estimate parameters through a procedure that counts the expected number of times each transition and emission occurs across the training set
[Figure: the same example HMM, shown processing the sequence C A G T with an unknown state path]

Learning Parameters
if we know the state path for each training sequence, learning the model parameters is simple
– no hidden state during training
– count how often each transition and emission occurs
– normalize/smooth to get probabilities
– process is just like it was for Markov chain models
if we don’t know the path for each training sequence, how can we determine the counts?
– key insight: estimate the counts by considering every path weighted by its probability
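The first scenario (known state paths) is simple enough to sketch directly. The following Python fragment is illustrative only (the function and variable names are not from the slides); it counts transitions and emissions and then normalizes, with smoothing left as a comment:

```python
from collections import defaultdict

def estimate_from_known_paths(sequences, paths):
    """Estimate HMM parameters when the state path for each training
    sequence is known: count transitions and emissions, then normalize
    the counts into probabilities."""
    trans_counts = defaultdict(lambda: defaultdict(float))
    emit_counts = defaultdict(lambda: defaultdict(float))

    for seq, path in zip(sequences, paths):
        # emissions: state path[i] emits character seq[i]
        for state, char in zip(path, seq):
            emit_counts[state][char] += 1
        # transitions between consecutive states in the path
        for k, l in zip(path, path[1:]):
            trans_counts[k][l] += 1

    def normalize(counts):
        # in practice we would add pseudocounts here to smooth
        # zero counts before normalizing
        return {k: {v: n / sum(row.values()) for v, n in row.items()}
                for k, row in counts.items()}

    return normalize(trans_counts), normalize(emit_counts)

# hypothetical usage with a two-state model labeled "1" and "2":
# a, e = estimate_from_known_paths(["TAG", "ACG"], [["1", "1", "2"], ["1", "2", "2"]])
```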

Learning Parameters: The Baum-Welch Algorithm
a.k.a. the Forward-Backward algorithm
an Expectation Maximization (EM) algorithm
– EM is a family of algorithms for learning probabilistic models in problems that involve hidden state
– generally used to find maximum likelihood estimates for parameters of a model
in this context, the hidden state is the path that explains each training sequence

Learning Parameters: The Baum-Welch Algorithm
algorithm sketch:
– initialize the parameters of the model
– iterate until convergence
  – calculate the expected number of times each transition or emission is used
  – adjust the parameters to maximize the likelihood of these expected values

The Expectation Step
first, we need to know the probability of the i-th symbol being produced by state k, given sequence x: $P(\pi_i = k \mid x)$
given this, we can compute our expected counts for state transitions and character emissions

The Expectation Step
the probability of producing x with the i-th symbol being produced by state k is
$P(\pi_i = k, x) = P(x_1 \ldots x_i, \pi_i = k)\, P(x_{i+1} \ldots x_L \mid \pi_i = k) = f_k(i)\, b_k(i)$
the first term is $f_k(i)$, computed by the forward algorithm
the second term is $b_k(i)$, computed by the backward algorithm

The Expectation Step
we want to know the probability of producing sequence x with the i-th symbol being produced by state k (for all x, i and k)
[Figure: the example four-state HMM with begin and end states and per-state emission probability tables, processing the sequence C A G T]

The Expectation Step
the forward algorithm gives us $f_k(i)$, the probability of being in state k having observed the first i characters of x

The Expectation Step
the backward algorithm gives us $b_k(i)$, the probability of observing the rest of x, given that we’re in state k after i characters

The Expectation Step
putting forward and backward together, we can compute the probability of producing sequence x with the i-th symbol being produced by state k: $P(\pi_i = k, x) = f_k(i)\, b_k(i)$

The Backward Algorithm
initialization:
$b_k(L) = a_{k,\mathrm{end}}$ for states k with a transition to the end state

The Backward Algorithm
recursion (i = L-1 … 0):
$b_k(i) = \sum_l a_{kl}\, e_l(x_{i+1})\, b_l(i+1)$
an alternative to the forward algorithm for computing the probability of a sequence:
$P(x) = b_{\mathrm{begin}}(0) = \sum_l a_{\mathrm{begin},l}\, e_l(x_1)\, b_l(1)$
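As an illustration only (not code from the course), here is a small Python sketch of the backward algorithm; it assumes the model is stored as nested dicts a[k][l] for transitions and e[k][c] for emissions, with a begin state 0 and an explicit end state:

```python
def backward(x, states, a, e, begin_state=0, end_state="end"):
    """Backward algorithm.

    x:      observed sequence (a string); x[0] corresponds to x_1
    states: list of emitting states
    a, e:   transition and emission probabilities as nested dicts
    Returns (b, px) where b[k][i] = P(x_{i+1}..x_L | state k at position i)
    and px = P(x), computed from the backward values.
    """
    L = len(x)
    b = {k: [0.0] * (L + 1) for k in states}

    # initialization: b_k(L) = a_{k,end} for states with a transition to end
    for k in states:
        b[k][L] = a[k].get(end_state, 0.0)

    # recursion: b_k(i) = sum_l a_{kl} e_l(x_{i+1}) b_l(i+1)
    for i in range(L - 1, 0, -1):
        for k in states:
            b[k][i] = sum(a[k].get(l, 0.0) * e[l][x[i]] * b[l][i + 1]
                          for l in states)

    # termination: P(x) = sum_l a_{begin,l} e_l(x_1) b_l(1)
    px = sum(a[begin_state].get(l, 0.0) * e[l][x[0]] * b[l][1] for l in states)
    return b, px
```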

The Expectation Step
now we can calculate the probability of the i-th symbol being produced by state k, given x:
$P(\pi_i = k \mid x) = \dfrac{f_k(i)\, b_k(i)}{P(x)}$
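For example, given forward and backward tables indexed as f[k][i] and b[k][i] (the backward sketch above, plus a forward table computed analogously; names are illustrative), the posterior for every state and position is just:

```python
def posterior_state_probs(f, b, px, states, L):
    """P(pi_i = k | x) = f_k(i) * b_k(i) / P(x), for i = 1..L."""
    return {k: {i: f[k][i] * b[k][i] / px for i in range(1, L + 1)}
            for k in states}
```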

The Expectation Step
now we can calculate the expected number of times letter c is emitted by state k:
$E_k(c) = \sum_j \frac{1}{P(x^j)} \sum_{\{i \,:\, x_i^j = c\}} f_k^j(i)\, b_k^j(i)$
the outer sum is over sequences in the training set; the inner sum is over positions where c occurs in $x^j$
here we’ve added the superscript j to refer to a specific sequence in the training set

The Expectation Step
and we can calculate the expected number of times that the transition from k to l is used:
$A_{kl} = \sum_j \frac{1}{P(x^j)} \sum_i f_k^j(i)\, a_{kl}\, e_l(x_{i+1}^j)\, b_l^j(i+1)$
or, if l is a silent state:
$A_{kl} = \sum_j \frac{1}{P(x^j)} \sum_i f_k^j(i)\, a_{kl}\, b_l^j(i)$

The Maximization Step
let $E_k(c)$ be the expected number of emissions of c from state k for the training set
estimate new emission parameters by:
$e_k(c) = \dfrac{E_k(c)}{\sum_{c'} E_k(c')}$
just like in the simple case
but typically we’ll do some “smoothing” (e.g. add pseudocounts)

The Maximization Step
let $A_{kl}$ be the expected number of transitions from state k to state l for the training set
estimate new transition parameters by:
$a_{kl} = \dfrac{A_{kl}}{\sum_{l'} A_{kl'}}$
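Both maximization-step updates amount to normalizing the expected counts per state. A minimal sketch, assuming the counts are kept in nested dicts A[k][l] and E[k][c] (with any pseudocounts already added):

```python
def m_step(A, E):
    """Re-estimate transition and emission probabilities from expected counts.

    A[k][l]: expected number of k -> l transitions
    E[k][c]: expected number of times state k emits character c
    """
    a = {k: {l: n / sum(row.values()) for l, n in row.items()}
         for k, row in A.items()}
    e = {k: {c: n / sum(row.values()) for c, n in row.items()}
         for k, row in E.items()}
    return a, e
```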

The Baum-Welch Algorithm
initialize the parameters of the HMM
iterate until convergence
– initialize $A_{kl}$, $E_k(c)$ with pseudocounts
– E-step: for each training set sequence j = 1…n
  – calculate $f_k(i)$ and $b_k(i)$ values for sequence j
  – add the contribution of sequence j to $A_{kl}$, $E_k(c)$
– M-step: update the HMM parameters using $A_{kl}$, $E_k(c)$
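Below is a hedged sketch of how the whole loop could look in Python. It reuses the illustrative backward and m_step functions sketched above and defines a matching forward helper; the names, data structures, and convergence test are assumptions for illustration, not the course's reference implementation. Transitions out of the begin state and into the end state are omitted from the expected counts to keep the sketch short.

```python
import math

def forward(x, states, a, e, begin_state=0, end_state="end"):
    """Forward algorithm: f[k][i] = P(x_1..x_i, state k at position i)."""
    L = len(x)
    f = {k: [0.0] * (L + 1) for k in states}
    for k in states:
        f[k][1] = a[begin_state].get(k, 0.0) * e[k][x[0]]
    for i in range(2, L + 1):
        for l in states:
            f[l][i] = e[l][x[i - 1]] * sum(f[k][i - 1] * a[k].get(l, 0.0)
                                           for k in states)
    px = sum(f[k][L] * a[k].get(end_state, 0.0) for k in states)
    return f, px

def baum_welch(sequences, states, a, e, pseudocount=1.0,
               max_iters=100, tol=1e-4):
    """Baum-Welch sketch: alternate E-step and M-step until the training
    log-likelihood changes by less than tol (or max_iters is reached)."""
    prev_loglik = None
    for _ in range(max_iters):
        # initialize expected counts with pseudocounts
        A = {k: {l: pseudocount for l in a[k]} for k in states}
        E = {k: {c: pseudocount for c in e[k]} for k in states}
        loglik = 0.0

        # E-step: add each sequence's contribution to the expected counts
        for x in sequences:
            L = len(x)
            f, _ = forward(x, states, a, e)
            b, px = backward(x, states, a, e)   # sketched earlier
            loglik += math.log(px)
            for k in states:
                # expected emissions: E_k(x_i) += f_k(i) b_k(i) / P(x)
                for i in range(1, L + 1):
                    E[k][x[i - 1]] += f[k][i] * b[k][i] / px
                # expected transitions (within the sequence only):
                # A_kl += f_k(i) a_kl e_l(x_{i+1}) b_l(i+1) / P(x)
                for l in states:
                    if l not in a[k]:
                        continue
                    for i in range(1, L):
                        A[k][l] += (f[k][i] * a[k][l] * e[l][x[i]]
                                    * b[l][i + 1] / px)

        # M-step: normalize the expected counts (m_step sketched earlier)
        a, e = m_step(A, E)

        # convergence check on the training-data log-likelihood
        if prev_loglik is not None and abs(loglik - prev_loglik) < tol:
            break
        prev_loglik = loglik
    return a, e
```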

Baum-Welch Algorithm Example
given
– the HMM with the parameters initialized as shown
– the training sequences TAG, ACG
[Figure: a two-state HMM with begin and end states; the two emission probability tables shown are A 0.4, C 0.1, G 0.1, T 0.4 and A 0.1, C 0.4, G 0.4, T 0.1, with initial transition probabilities given in the diagram]
we’ll work through one iteration of Baum-Welch

Baum-Welch Example (Cont)
determining the forward values for TAG
here we compute just the values that represent events with non-zero probability
in a similar way, we also compute forward values for ACG

Baum-Welch Example (Cont)
determining the backward values for TAG
here we compute just the values that represent events with non-zero probability
in a similar way, we also compute backward values for ACG

Baum-Welch Example (Cont)
determining the expected emission counts for state 1: for each character, the count is the contribution of TAG, plus the contribution of ACG, plus a pseudocount
*note that the forward/backward values in the TAG and ACG contributions differ; in each case they are computed for the corresponding sequence

Baum-Welch Example (Cont)
determining the expected transition counts for state 1 (not using pseudocounts): the contribution of TAG plus the contribution of ACG
in a similar way, we also determine the expected emission/transition counts for state 2

Baum-Welch Example (Cont)
determining probabilities for state 1 by normalizing the expected counts (the maximization step)

Baum-Welch Convergence
some convergence criteria
– likelihood of the training sequences changes little
– fixed number of iterations reached
– parameters are not changing significantly
usually converges in a small number of iterations
will converge to a local maximum (in the likelihood of the data given the model)
[Figure: likelihood of the data plotted as a function of the parameters]

Computational Complexity of HMM Algorithms
given an HMM with N states and a sequence of length L, the time complexity of the Forward, Backward and Viterbi algorithms is $O(N^2 L)$
– this assumes that the states are densely interconnected
given M sequences of length L, the time complexity of Baum-Welch on each iteration is $O(M N^2 L)$

Learning and Prediction Tasks
learning
– Given: a model, a set of training sequences
– Do: find model parameters that explain the training sequences with relatively high probability (the goal is to find a model that generalizes well to sequences we haven’t seen before)
classification
– Given: a set of models representing different sequence classes, a test sequence
– Do: determine which model/class best explains the sequence
segmentation
– Given: a model representing different sequence classes, a test sequence
– Do: segment the sequence into subsequences, predicting the class of each subsequence

Algorithms for Learning and Prediction Tasks
learning
– correct path known for each training sequence → simple maximum-likelihood or Bayesian estimation
– correct path not known → Baum-Welch (EM) algorithm for maximum likelihood estimation
classification
– simple Markov model → calculate probability of the sequence along the single path for each model
– hidden Markov model → Forward algorithm to calculate probability of the sequence along all paths for each model
segmentation
– hidden Markov model → Viterbi algorithm to find the most probable path for the sequence