Machine Learning
Structured Models: Hidden Markov Models versus Conditional Random Fields
Eric Xing, Lecture 13, August 15, 2010

From static to dynamic mixture models
[Figure: a static mixture model (a single hidden label Y1 generating observation X1, replicated over N data points) next to a dynamic mixture (a chain of hidden states Y1 -> Y2 -> ... -> YT, each emitting an observation Xt). The underlying source (e.g., phonemes, or which die is being rolled) generates the observed sequence (e.g., the speech signal, or the sequence of rolls).]

Predicting Tumor Cell States
[Figure: chromosomes of a tumor cell.]

DNA copy number aberration types in breast cancer
[Figure: copy number profile for chromosome 1 from the 600 MPE cell line; copy number profile for chromosome 8 from the COLO320 cell line, showing fold amplification of the CMYC region; copy number profile for chromosome 8 in the MDA-MB-231 cell line, showing a deletion.]

A real CGH run
[Figure: copy number signal from an array CGH experiment.]

Hidden Markov Model
Observation space: an alphabetic set C = {c_1, c_2, ..., c_K}, or a Euclidean space R^d.
Index set of hidden states: I = {1, 2, ..., M}.
Transition probabilities between any two states: p(y_t = j | y_{t-1} = i) = a_{i,j}, i.e., p(y_t | y_{t-1} = i) ~ Multinomial(a_{i,1}, ..., a_{i,M}).
Start probabilities: p(y_1) ~ Multinomial(pi_1, ..., pi_M).
Emission probabilities associated with each state: p(x_t | y_t = i) ~ Multinomial(b_{i,1}, ..., b_{i,K}), or in general p(x_t | y_t = i) ~ f(. | theta_i).
[Figure: graphical model (a chain y_1 -> y_2 -> ... -> y_T, each y_t emitting x_t) and the corresponding state automaton over the hidden states.]

The Dishonest Casino
A casino has two dice:
Fair die: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6
Loaded die: P(1) = P(2) = P(3) = P(4) = P(5) = 1/10, P(6) = 1/2
The casino player switches back and forth between the fair and the loaded die about once every 20 turns.
Game:
1. You bet $1
2. You roll (always with a fair die)
3. Casino player rolls (maybe with the fair die, maybe with the loaded die)
4. Highest number wins $2

The Dishonest Casino Model
[Figure: two-state automaton with states FAIR and LOADED; each state loops back to itself with probability 0.95 and switches to the other state with probability 0.05.]
Emissions: P(1|F) = P(2|F) = P(3|F) = P(4|F) = P(5|F) = P(6|F) = 1/6; P(1|L) = P(2|L) = P(3|L) = P(4|L) = P(5|L) = 1/10, P(6|L) = 1/2
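
As a concrete reference for the algorithms that follow, here is one way to encode this model numerically (a sketch; the array layout and variable names are mine, not from the slides, but all the numbers come from the casino example):

```python
import numpy as np

# States: 0 = FAIR, 1 = LOADED; observations: die faces 1..6 stored as indices 0..5.
pi = np.array([0.5, 0.5])                 # start probabilities (1/2 each, as in the examples)
A = np.array([[0.95, 0.05],               # transition probabilities: stay with prob 0.95,
              [0.05, 0.95]])              # switch with prob 0.05 (~once every 20 turns)
B = np.array([[1/6] * 6,                  # emission probs for FAIR: uniform over the faces
              [1/10] * 5 + [1/2]])        # emission probs for LOADED: a 6 half the time
```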

Puzzles Regarding the Dishonest Casino
GIVEN: a sequence of rolls by the casino player.
QUESTIONS:
How likely is this sequence, given our model of how the casino works? This is the EVALUATION problem in HMMs.
What portion of the sequence was generated with the fair die, and what portion with the loaded die? This is the DECODING question in HMMs.
How "loaded" is the loaded die? How "fair" is the fair die? How often does the casino player change from fair to loaded, and back? This is the LEARNING question in HMMs.

Joint Probability

Probability of a Parse
Given a sequence x = x_1 ... x_T and a parse y = y_1, ..., y_T, how likely is the parse (given our HMM and the sequence)?
Joint probability:
p(x, y) = p(x_1 ... x_T, y_1, ..., y_T)
        = p(y_1) p(x_1 | y_1) p(y_2 | y_1) p(x_2 | y_2) ... p(y_T | y_{T-1}) p(x_T | y_T)
        = [p(y_1) p(y_2 | y_1) ... p(y_T | y_{T-1})] × [p(x_1 | y_1) p(x_2 | y_2) ... p(x_T | y_T)]
Marginal probability: p(x) = sum over all parses y of p(x, y).
Posterior probability: p(y | x) = p(x, y) / p(x).
[Figure: the HMM chain y_1 -> y_2 -> ... -> y_T with emissions x_1, ..., x_T.]
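
The factorization translates directly into code. A minimal sketch (the function and variable names are mine) that scores one parse under the pi, A, B arrays defined above:

```python
def joint_prob(pi, A, B, x, y):
    """p(x, y) = p(y1) p(x1|y1) * prod_t p(y_t|y_{t-1}) p(x_t|y_t).

    x: observation indices (0-based), y: state indices (0-based), same length.
    """
    p = pi[y[0]] * B[y[0], x[0]]
    for t in range(1, len(x)):
        p *= A[y[t - 1], y[t]] * B[y[t], x[t]]
    return p
```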

Example: the Dishonest Casino
Let the sequence of rolls be: x = 1, 2, 1, 5, 6, 2, 1, 6, 2, 4
What is the likelihood of y = Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair? (Say the initial probabilities are a_0,Fair = 1/2, a_0,Loaded = 1/2.)
1/2 × P(1 | Fair) P(Fair | Fair) P(2 | Fair) P(Fair | Fair) ... P(4 | Fair)
= 1/2 × (1/6)^10 × (0.95)^9 ≈ 5.21 × 10^-9

Example: the Dishonest Casino
So the likelihood that the die was fair for this whole run is about 5.21 × 10^-9.
OK, but what is the likelihood of y = Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded?
1/2 × P(1 | Loaded) P(Loaded | Loaded) ... P(4 | Loaded)
= 1/2 × (1/10)^8 × (1/2)^2 × (0.95)^9 ≈ 0.79 × 10^-9
Therefore, it is after all about 6.59 times more likely that the die was fair all the way than loaded all the way.

Example: the Dishonest Casino
Now let the sequence of rolls be: x = 1, 6, 6, 5, 6, 2, 6, 6, 3, 6
What is the likelihood of y = F, F, ..., F?
1/2 × (1/6)^10 × (0.95)^9 ≈ 5.21 × 10^-9, same as before.
What is the likelihood of y = L, L, ..., L?
1/2 × (1/10)^4 × (1/2)^6 × (0.95)^9 ≈ 5 × 10^-7
So it is about 100 times more likely that the die was loaded.
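
A quick check of these numbers using joint_prob and the casino arrays from the earlier sketch (my own encoding; observations are 0-based face indices):

```python
x1 = [r - 1 for r in (1, 2, 1, 5, 6, 2, 1, 6, 2, 4)]
x2 = [r - 1 for r in (1, 6, 6, 5, 6, 2, 6, 6, 3, 6)]

all_fair = [0] * 10      # FAIR at every position
all_loaded = [1] * 10    # LOADED at every position

print(joint_prob(pi, A, B, x1, all_fair))    # ~5.21e-09
print(joint_prob(pi, A, B, x1, all_loaded))  # ~7.9e-10
print(joint_prob(pi, A, B, x2, all_fair))    # ~5.21e-09
print(joint_prob(pi, A, B, x2, all_loaded))  # ~4.9e-07
```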

Three Main Questions on HMMs
1. Evaluation
GIVEN an HMM M and a sequence x,
FIND Prob(x | M).
ALGORITHM: Forward
2. Decoding
GIVEN an HMM M and a sequence x,
FIND the sequence y of states that maximizes, e.g., P(y | x, M), or the most probable subsequence of states.
ALGORITHMS: Viterbi, Forward-backward
3. Learning
GIVEN an HMM M with unspecified transition/emission probabilities, and a sequence x,
FIND parameters theta = (pi_i, a_ij, eta_ik) that maximize P(x | theta).
ALGORITHM: Baum-Welch (EM)

Applications of HMMs
Some early applications of HMMs:
finance (but we never saw them)
speech recognition
modelling ion channels
In the mid-to-late 1980s HMMs entered genetics and molecular biology, and they are now firmly entrenched. Some current applications of HMMs to biology:
mapping chromosomes
aligning biological sequences
predicting sequence structure
inferring evolutionary relationships
finding genes in DNA sequences

The Forward Algorithm
We want to calculate P(x), the likelihood of x given the HMM, by summing over all possible ways of generating x:
P(x) = sum over all y of P(x, y).
To avoid summing over an exponential number of paths y, define the forward probability
alpha_t^k = P(x_1, ..., x_t, y_t = k).
The recursion:
alpha_t^k = p(x_t | y_t = k) sum_i alpha_{t-1}^i a_{i,k}

The Forward Algorithm – derivation
Compute the forward probability:
alpha_t^k = P(x_1, ..., x_{t-1}, x_t, y_t = k)
          = sum over y_{t-1} of P(x_1, ..., x_{t-1}, y_{t-1}) P(y_t = k | y_{t-1}) P(x_t | y_t = k)    (chain rule and the Markov property)
          = p(x_t | y_t = k) sum_i alpha_{t-1}^i a_{i,k}
[Figure: the chain y_1 -> ... -> y_{t-1} -> y_t with emissions x_1, ..., x_t.]

The Forward Algorithm
We can compute alpha_t^k for all k, t using dynamic programming!
Initialization: alpha_1^k = p(x_1 | y_1 = k) pi_k
Iteration: alpha_t^k = p(x_t | y_t = k) sum_i alpha_{t-1}^i a_{i,k}
Termination: P(x) = sum_k alpha_T^k
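
A minimal forward pass in code (a sketch; the function name and array conventions are mine, matching the pi, A, B layout used earlier):

```python
import numpy as np

def forward(pi, A, B, x):
    """Return alpha, shape (T, K), where row t holds the forward probabilities at position t."""
    T, K = len(x), len(pi)
    alpha = np.zeros((T, K))
    alpha[0] = pi * B[:, x[0]]                      # initialization
    for t in range(1, T):
        alpha[t] = B[:, x[t]] * (alpha[t - 1] @ A)  # iteration
    return alpha

# Termination: P(x) = sum_k alpha[T-1, k]
# e.g. forward(pi, A, B, x1)[-1].sum() gives the likelihood of the roll sequence.
```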

The Backward Algorithm
We want to compute P(y_t = k | x), the posterior probability distribution over the t-th position, given x.
We start by computing
P(y_t = k, x) = P(x_1, ..., x_t, y_t = k) P(x_{t+1}, ..., x_T | y_t = k) = alpha_t^k beta_t^k
where alpha_t^k is the forward probability and beta_t^k = P(x_{t+1}, ..., x_T | y_t = k) is the backward probability.
The recursion:
beta_t^k = sum_i a_{k,i} p(x_{t+1} | y_{t+1} = i) beta_{t+1}^i
[Figure: the chain y_t -> y_{t+1} -> ... -> y_T with emissions x_t, ..., x_T.]

The Backward Algorithm – derivation
Define the backward probability:
beta_t^k = P(x_{t+1}, ..., x_T | y_t = k)
         = sum over y_{t+1} of P(x_{t+1}, x_{t+2}, ..., x_T, y_{t+1} | y_t = k)
         = sum_i P(y_{t+1} = i | y_t = k) P(x_{t+1} | y_{t+1} = i) P(x_{t+2}, ..., x_T | y_{t+1} = i)
         = sum_i a_{k,i} p(x_{t+1} | y_{t+1} = i) beta_{t+1}^i

The Backward Algorithm
We can compute beta_t^k for all k, t using dynamic programming!
Initialization: beta_T^k = 1, for all k
Iteration: beta_t^k = sum_i a_{k,i} p(x_{t+1} | y_{t+1} = i) beta_{t+1}^i
Termination: P(x) = sum_k alpha_1^k beta_1^k
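
And the matching backward pass (again a sketch with my own naming):

```python
import numpy as np

def backward(pi, A, B, x):
    """Return beta, shape (T, K), where row t holds the backward probabilities at position t."""
    T, K = len(x), len(pi)
    beta = np.zeros((T, K))
    beta[T - 1] = 1.0                                   # initialization
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, x[t + 1]] * beta[t + 1])    # iteration
    return beta

# Sanity check: (forward(pi,A,B,x)[0] * backward(pi,A,B,x)[0]).sum() equals
# forward(pi,A,B,x)[-1].sum(); both are P(x).
```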

Example
FAIR: P(1|F) = P(2|F) = P(3|F) = P(4|F) = P(5|F) = P(6|F) = 1/6
LOADED: P(1|L) = P(2|L) = P(3|L) = P(4|L) = P(5|L) = 1/10, P(6|L) = 1/2
x = 1, 2, 1, 5, 6, 2, 1, 6, 2, 4

Alpha (actual) and Beta (actual)
[Table: forward (alpha) and backward (beta) probabilities for the FAIR and LOADED states at each position of x = 1, 2, 1, 5, 6, 2, 1, 6, 2, 4.]

Alpha (logs) and Beta (logs)
[Table: the same forward and backward quantities in log space for the FAIR and LOADED states at each position of x = 1, 2, 1, 5, 6, 2, 1, 6, 2, 4.]

What is the probability of a hidden state prediction?

Posterior decoding
We can now calculate
P(y_t = k | x) = P(y_t = k, x) / P(x) = alpha_t^k beta_t^k / P(x).
Then we can ask: what is the most likely state at position t of sequence x?
k*_t = argmax_k P(y_t = k | x)
Note that this is the MPA (maximum a posteriori assignment) of a single hidden state. What if we want an MPA of the whole hidden state sequence?
Posterior decoding: set y_t-hat = argmax_k P(y_t = k | x) for each t = 1, ..., T.
This is different from the MPA of the whole sequence of hidden states. The distinction can be understood as bit error rate vs. word error rate.
Example: MPA of X? MPA of (X, Y)?
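
Putting the forward and backward sketches together, posterior decoding is only a few lines (again my own naming):

```python
import numpy as np

def posterior_decode(pi, A, B, x):
    """Return (gamma, y_hat): gamma[t, k] = P(state k at position t | x), and its per-position argmax."""
    alpha = forward(pi, A, B, x)
    beta = backward(pi, A, B, x)
    px = alpha[-1].sum()                 # P(x) from the forward termination step
    gamma = alpha * beta / px            # per-position posteriors
    return gamma, gamma.argmax(axis=1)   # most likely state at each position
```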

Viterbi decoding
GIVEN x = x_1, ..., x_T, we want to find y = y_1, ..., y_T such that P(y | x) is maximized:
y* = argmax_y P(y | x) = argmax_y P(y, x)
Let
V_t^k = max over {y_1, ..., y_{t-1}} of P(x_1, ..., x_{t-1}, y_1, ..., y_{t-1}, x_t, y_t = k)
      = probability of the most likely sequence of states ending at state y_t = k.
The recursion:
V_t^k = p(x_t | y_t = k) max_i a_{i,k} V_{t-1}^i
Underflows are a significant problem: these numbers become extremely small. Solution: take the logs of all values:
log V_t^k = log p(x_t | y_t = k) + max_i [ log a_{i,k} + log V_{t-1}^i ]
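
A log-space Viterbi sketch (the function name is mine; it returns the most likely state path):

```python
import numpy as np

def viterbi(pi, A, B, x):
    """Most likely state path argmax_y P(y | x), computed in log space to avoid underflow."""
    T, K = len(x), len(pi)
    logV = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    logV[0] = np.log(pi) + np.log(B[:, x[0]])
    for t in range(1, T):
        scores = logV[t - 1][:, None] + np.log(A)        # scores[i, k] = log V_{t-1}^i + log a_{i,k}
        back[t] = scores.argmax(axis=0)
        logV[t] = np.log(B[:, x[t]]) + scores.max(axis=0)
    y = np.zeros(T, dtype=int)                           # trace back the best path
    y[-1] = logV[-1].argmax()
    for t in range(T - 1, 0, -1):
        y[t - 1] = back[t, y[t]]
    return y
```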

Computational Complexity and Implementation Details
What is the running time, and the space required, for Forward and Backward?
Time: O(K^2 N); Space: O(KN), where K is the number of states and N the sequence length.
Useful implementation techniques to avoid underflow:
Viterbi: sum of logs.
Forward/Backward: rescaling at each position by multiplying by a constant.
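
A rescaled forward pass, as one illustration of the constant-per-position trick (a sketch; normalizing each alpha_t to sum to one and accumulating the log of the scaling constants is one common choice, not necessarily the exact scheme used in the lecture):

```python
import numpy as np

def forward_scaled(pi, A, B, x):
    """Forward pass with per-position rescaling; returns (normalized alphas, log P(x))."""
    T, K = len(x), len(pi)
    alpha = np.zeros((T, K))
    logpx = 0.0
    a = pi * B[:, x[0]]
    for t in range(T):
        if t > 0:
            a = B[:, x[t]] * (alpha[t - 1] @ A)
        c = a.sum()                 # scaling constant for position t
        alpha[t] = a / c            # keep alphas O(1) instead of letting them underflow
        logpx += np.log(c)          # the product of the constants is P(x)
    return alpha, logpx
```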

Learning HMMs: two scenarios
Supervised learning: estimation when the "right answer" is known.
Examples:
GIVEN: a genomic region x = x_1 ... x_1,000,000 where we have good (experimental) annotations of the CpG islands.
GIVEN: the casino player allows us to observe him one evening as he changes dice, producing 10,000 rolls.
Unsupervised learning: estimation when the "right answer" is unknown.
Examples:
GIVEN: the porcupine genome; we don't know how frequent the CpG islands are there, nor do we know their composition.
GIVEN: 10,000 rolls by the casino player, but we don't see when he changes dice.
QUESTION: update the parameters theta of the model to maximize P(x | theta) --- maximum likelihood (ML) estimation.

Supervised ML estimation
Given x = x_1 ... x_N for which the true state path y = y_1 ... y_N is known, define:
A_ij = # times state transition i -> j occurs in y
B_ik = # times state i in y emits k in x
We can show that the maximum likelihood parameters theta are:
a_ij = A_ij / sum_j' A_ij'
b_ik = B_ik / sum_k' B_ik'
What if x is continuous? We can treat the observations as N × T observations of, e.g., a Gaussian, and apply the learning rules for Gaussians ... (Homework!)

Pseudocounts
Solution for small training sets: add pseudocounts.
A_ij = (# times state transition i -> j occurs in y) + R_ij
B_ik = (# times state i in y emits k in x) + S_ik
R_ij and S_ik are pseudocounts representing our prior belief.
Total pseudocounts: R_i = sum_j R_ij, S_i = sum_k S_ik --- the "strength" of the prior belief, i.e., the total number of imaginary instances in the prior.
Larger total pseudocounts correspond to a strong prior belief. Small total pseudocounts are there just to avoid zero probabilities --- smoothing.
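
A sketch of supervised estimation with optional pseudocounts (the function name and the uniform-pseudocount choice are mine):

```python
import numpy as np

def supervised_mle(x, y, K, V, r=0.0, s=0.0):
    """Estimate transition and emission matrices from a labeled state path.

    x: observation indices, y: state indices, K: #states, V: #observation symbols,
    r, s: pseudocounts added to every transition / emission count (0 = plain MLE;
    use r, s > 0 to smooth and avoid zero probabilities for unseen events).
    """
    Acount = np.full((K, K), float(r))
    Bcount = np.full((K, V), float(s))
    for t in range(1, len(y)):
        Acount[y[t - 1], y[t]] += 1          # A_ij: transition counts
    for t in range(len(x)):
        Bcount[y[t], x[t]] += 1              # B_ik: emission counts
    A_hat = Acount / Acount.sum(axis=1, keepdims=True)
    B_hat = Bcount / Bcount.sum(axis=1, keepdims=True)
    return A_hat, B_hat
```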

Unsupervised ML estimation
Given x = x_1 ... x_N for which the true state path y = y_1 ... y_N is unknown:
EXPECTATION MAXIMIZATION
0. Start with our best guess of a model M and parameters theta.
1. Estimate A_ij and B_ik in the training data:
A_ij = sum_t P(y_{t-1} = i, y_t = j | x, theta), B_ik = sum_{t: x_t = k} P(y_t = i | x, theta). How? (homework)
2. Update theta according to A_ij and B_ik --- now a "supervised learning" problem.
3. Repeat 1 & 2 until convergence.
This is called the Baum-Welch algorithm. We get a provably more (or equally) likely parameter set theta at each iteration.

The Baum-Welch algorithm
(Below, y_{n,t}^i denotes the indicator that training sequence n is in state i at time t, and <.> denotes an expectation under the current posterior.)
The complete log likelihood:
l_c(theta; x, y) = log p(x, y) = log prod_n [ p(y_{n,1}) prod_{t=2}^T p(y_{n,t} | y_{n,t-1}) prod_{t=1}^T p(x_{n,t} | y_{n,t}) ]
The expected complete log likelihood:
<l_c(theta; x, y)> = sum_n [ sum_i <y_{n,1}^i> log pi_i + sum_{t=2}^T sum_{i,j} <y_{n,t-1}^i y_{n,t}^j> log a_{i,j} + sum_{t=1}^T sum_{i,k} x_{n,t}^k <y_{n,t}^i> log b_{i,k} ]
EM:
The E step:
gamma_{n,t}^i = <y_{n,t}^i> = p(y_{n,t}^i = 1 | x_n)
xi_{n,t}^{i,j} = <y_{n,t-1}^i y_{n,t}^j> = p(y_{n,t-1}^i = 1, y_{n,t}^j = 1 | x_n)
The M step ("symbolically" identical to MLE):
pi_i = sum_n gamma_{n,1}^i / N
a_{i,j} = sum_n sum_{t=2}^T xi_{n,t}^{i,j} / sum_n sum_{t=2}^T gamma_{n,t-1}^i
b_{i,k} = sum_n sum_{t=1}^T gamma_{n,t}^i x_{n,t}^k / sum_n sum_{t=1}^T gamma_{n,t}^i
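
One EM iteration in code, reusing the forward/backward sketches above (a sketch for a single training sequence; the variable names are mine and the updates follow the E/M steps just listed):

```python
import numpy as np

def baum_welch_step(pi, A, B, x):
    """One Baum-Welch (EM) iteration on a single observation sequence x."""
    T, K = len(x), len(pi)
    alpha, beta = forward(pi, A, B, x), backward(pi, A, B, x)
    px = alpha[-1].sum()

    # E step: single-state and pairwise-state posteriors.
    gamma = alpha * beta / px                                     # gamma[t, i] = P(y_t = i | x)
    xi = np.zeros((T - 1, K, K))                                  # xi[t, i, j] = P(y_t=i, y_{t+1}=j | x)
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, x[t + 1]] * beta[t + 1])[None, :] / px

    # M step: re-estimate parameters from the expected counts.
    pi_new = gamma[0]
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    B_new = np.zeros_like(B)
    for k in range(B.shape[1]):
        B_new[:, k] = gamma[np.array(x) == k].sum(axis=0)
    B_new /= gamma.sum(axis=0)[:, None]
    return pi_new, A_new, B_new
```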

Summary
HMMs model hidden transitional trajectories (in a discrete state space, such as cluster labels, DNA copy number, or dice choice) underlying observed sequence data (discrete, such as dice outcomes, or continuous, such as CGH signals). They are useful for parsing and segmenting sequential data.
Important HMM computations:
The joint likelihood of a parse and the data can be written as a product of local terms (i.e., initial prob., transition prob., emission prob.).
Computing the marginal likelihood of the observed sequence: forward algorithm.
Predicting a single hidden state: forward-backward.
Predicting an entire sequence of hidden states: Viterbi.
Learning HMM parameters: an EM algorithm known as Baum-Welch.

Shortcomings of the Hidden Markov Model
HMMs capture dependences between each state and only its corresponding observation.
NLP example: in a sentence segmentation task, each segmental state may depend not just on a single word (and the adjacent segmental states), but also on (non-local) features of the whole line, such as line length, indentation, and amount of white space.
Mismatch between the learning objective function and the prediction objective function: an HMM learns a joint distribution of states and observations P(Y, X), but in a prediction task we need the conditional probability P(Y | X).
[Figure: the HMM chain Y_1, Y_2, ..., Y_n with observations X_1, X_2, ..., X_n.]

Recall: Generative vs. Discriminative Classifiers
Goal: we wish to learn f: X -> Y, e.g., P(Y|X).
Generative classifiers (e.g., Naïve Bayes):
Assume some functional form for P(X|Y) and P(Y). This is a 'generative' model of the data!
Estimate the parameters of P(X|Y) and P(Y) directly from training data.
Use Bayes rule to calculate P(Y | X = x).
Discriminative classifiers (e.g., logistic regression):
Directly assume some functional form for P(Y|X). This is a 'discriminative' model of the data!
Estimate the parameters of P(Y|X) directly from training data.
[Figure: the two graphical models, Y_n -> X_n (generative) vs. X_n -> Y_n (discriminative).]

Structured Conditional Models
Conditional probability P(label sequence y | observation sequence x) rather than joint probability P(y, x):
Specify the probability of possible label sequences given an observation sequence.
Allow arbitrary, non-independent features of the observation sequence X.
The probability of a transition between labels may depend on past and future observations.
Relax the strong independence assumptions made in generative models.
[Figure: labels Y_1, Y_2, ..., Y_n conditioned on the whole observation sequence x_1:n.]

Conditional Distribution
If the graph G = (V, E) of Y is a tree, the conditional distribution over the label sequence Y = y, given X = x, by the Hammersley-Clifford theorem of random fields, is:
p_theta(y | x) ∝ exp( sum_{e in E, k} lambda_k f_k(e, y|_e, x) + sum_{v in V, k} mu_k g_k(v, y|_v, x) )
where:
x is a data sequence
y is a label sequence
v is a vertex from the vertex set V = set of label random variables
e is an edge from the edge set E over V
f_k and g_k are given and fixed: g_k is a Boolean vertex feature, f_k is a Boolean edge feature
k indexes the features
lambda_k and mu_k are parameters to be estimated
y|_e is the set of components of y defined by edge e
y|_v is the set of components of y defined by vertex v
[Figure: a tree-structured label graph over Y_1, Y_2, Y_5, ..., conditioned on X_1, ..., X_n.]

Conditional Random Fields
A CRF is a partially directed model:
a discriminative model
uses a global normalizer Z(x)
models the dependence between each state and the entire observation sequence
[Figure: the linear-chain CRF: labels Y_1, Y_2, ..., Y_n connected in a chain, each conditioned on the whole input x_1:n.]

Conditional Random Fields
General parametric form:
P_theta(y | x) = (1 / Z(x, lambda, mu)) exp( sum_{t=1}^T [ sum_k lambda_k f_k(y_t, y_{t-1}, x) + sum_l mu_l g_l(y_t, x) ] )
where Z(x, lambda, mu) = sum_y exp( sum_{t=1}^T [ sum_k lambda_k f_k(y_t, y_{t-1}, x) + sum_l mu_l g_l(y_t, x) ] ) is the global normalizer.
[Figure: labels Y_1, Y_2, ..., Y_n conditioned on the whole observation sequence x_1:n.]
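
To make the form concrete, here is a sketch of a linear-chain CRF in which the summed feature terms at each position are collapsed into log-potential tables (this table-based parameterization and the brute-force partition function are my own illustration, intended only for tiny problems):

```python
import numpy as np
from itertools import product

def crf_log_score(edge_pot, node_pot, y):
    """Unnormalized log score: sum_t [ edge term(y_{t-1}, y_t) + node term(y_t, x) ].

    node_pot: (T, K) array of per-position label scores (already a function of the whole x);
    edge_pot: (K, K) array of label-transition scores.
    """
    s = node_pot[0, y[0]]
    for t in range(1, len(y)):
        s += edge_pot[y[t - 1], y[t]] + node_pot[t, y[t]]
    return s

def crf_log_prob(edge_pot, node_pot, y):
    """log P(y | x) = score(y) - log Z(x), with Z computed by brute-force enumeration."""
    T, K = node_pot.shape
    logZ = np.logaddexp.reduce([crf_log_score(edge_pot, node_pot, yy)
                                for yy in product(range(K), repeat=T)])
    return crf_log_score(edge_pot, node_pot, y) - logZ
```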

Conditional Random Fields
Allow arbitrary dependencies on the input.
Clique dependencies on the labels.
Use approximate inference for general graphs.

CRFs: Inference
Given CRF parameters lambda and mu, find the y* that maximizes P(y | x):
y* = argmax_y exp( sum_{t=1}^T [ sum_k lambda_k f_k(y_t, y_{t-1}, x) + sum_l mu_l g_l(y_t, x) ] )
We can ignore Z(x) because it is not a function of y.
Run the max-product algorithm on the junction tree of the CRF:
[Figure: the clique chain (Y_1,Y_2) - (Y_2,Y_3) - ... - (Y_{n-2},Y_{n-1}) - (Y_{n-1},Y_n) with separators Y_2, Y_3, ..., Y_{n-1}.]
This is the same as the Viterbi decoding used in HMMs!
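
With the log-potential tables from the previous sketch, max-product decoding on the chain is the same Viterbi recursion as for HMMs (function name mine):

```python
import numpy as np

def crf_viterbi(edge_pot, node_pot):
    """argmax_y of the summed log-potentials; Z(x) is ignored since it does not depend on y."""
    T, K = node_pot.shape
    score = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    score[0] = node_pot[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + edge_pot          # cand[i, j]: best path ending in i, then i -> j
        back[t] = cand.argmax(axis=0)
        score[t] = node_pot[t] + cand.max(axis=0)
    y = np.zeros(T, dtype=int)                           # trace back the best labeling
    y[-1] = score[-1].argmax()
    for t in range(T - 1, 0, -1):
        y[t - 1] = back[t, y[t]]
    return y
```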

CRF learning
Given training data {(x_d, y_d)}, d = 1, ..., N, find lambda*, mu* such that the conditional log likelihood sum_d log P_{lambda,mu}(y_d | x_d) is maximized.
Computing the gradient w.r.t. lambda_k: the gradient of the log-partition function in an exponential family is the expectation of the sufficient statistics, so
d/d lambda_k = sum_d [ sum_t f_k(y_{d,t}, y_{d,t-1}, x_d) - E_{P(y | x_d)} sum_t f_k(y_t, y_{t-1}, x_d) ]
i.e., empirical feature counts minus expected feature counts under the model.
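
A brute-force illustration of that gradient for the table-parameterized chain above (exhaustive enumeration, so only workable for tiny T and K; everything here is my own illustrative construction, not the slides' training procedure):

```python
import numpy as np
from itertools import product

def crf_edge_gradient(edge_pot, node_pot, y_obs):
    """Gradient of log P(y_obs | x) w.r.t. each edge log-potential:
    empirical transition counts minus expected transition counts under the model."""
    T, K = node_pot.shape
    paths = list(product(range(K), repeat=T))
    scores = np.array([crf_log_score(edge_pot, node_pot, y) for y in paths])
    probs = np.exp(scores - np.logaddexp.reduce(scores))          # P(y | x) for every labeling

    def transition_counts(y):
        c = np.zeros((K, K))
        for t in range(1, T):
            c[y[t - 1], y[t]] += 1
        return c

    empirical = transition_counts(y_obs)
    expected = sum(p * transition_counts(y) for p, y in zip(probs, paths))
    return empirical - expected
```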

CRFs: some empirical results
Comparison of error rates on synthetic data:
[Figure: scatter plots comparing per-model error rates (CRF error vs. HMM error, and MEMM error vs. CRF error); the data is increasingly higher-order in the direction of the arrow.]
CRFs achieve the lowest error rate on higher-order data.

CRFs: some empirical results
Part-of-speech tagging:
Using the same set of features: HMM >= MEMM.
Using additional overlapping features: CRF+ > MEMM+ >> HMM.

Summary
The Conditional Random Field is a discriminative structured input-output model; the HMM is a generative structured input-output model. They have complementary strengths and weaknesses.
[Figure: the generative (HMM) graphical model Y_n -> X_n next to the discriminative (CRF) model of Y_n conditioned on X_n.]