Hidden Markov Models

Presentation transcript:

Hidden Markov Models
CS262 Lecture 5, Win06, Batzoglou
[Figure: HMM trellis – K states at each position, emitting x_1, x_2, x_3, …]

Hidden Markov Models: Definition
[Figure: example parse alternating FAIR and LOADED states]
Alphabet Σ = { b_1, b_2, …, b_M }
Set of states Q = { 1, …, K }
Transition probabilities: a_ij = probability of a transition from state i to state j
Emission probabilities: e_k(b) = P( x_i = b | π_i = k ), the probability that state k emits symbol b
Markov property: P( π_{t+1} = k | "whatever happened so far" ) = P( π_{t+1} = k | π_1, π_2, …, π_t, x_1, x_2, …, x_t ) = P( π_{t+1} = k | π_t )
Given a sequence x = x_1 …… x_N, a parse of x is a sequence of states π = π_1, ……, π_N.
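To make the notation concrete, here is a minimal sketch of the dishonest-casino HMM used throughout these slides, written as plain Python dictionaries. The names (STATES, START, TRANS, EMIT) are this sketch's own, not from any library; later sketches reuse them.

```python
# Dishonest casino HMM: a Fair die (F) and a Loaded die (L).
STATES = ("F", "L")
START = {"F": 0.5, "L": 0.5}                      # a_0k: initial probabilities
TRANS = {("F", "F"): 0.95, ("F", "L"): 0.05,      # a_kl: transition probabilities
         ("L", "F"): 0.05, ("L", "L"): 0.95}
EMIT = {"F": {r: 1 / 6 for r in "123456"},        # e_F(b): fair die is uniform
        "L": {r: 0.5 if r == "6" else 0.1 for r in "123456"}}  # e_L(b): loaded favors 6
```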

Likelihood of a Parse
Simply multiply all the (orange) arrows along the path in the trellis: the transition probabilities and the emission probabilities.

Likelihood of a parse
Given a sequence x = x_1 …… x_N and a parse π = π_1, ……, π_N, how likely is the parse (given our HMM)?
P(x, π) = P(x_1, …, x_N, π_1, ……, π_N) = a_{0π_1} a_{π_1π_2} …… a_{π_{N-1}π_N} e_{π_1}(x_1) …… e_{π_N}(x_N)
A compact way to write this: number all parameters a_ij and e_i(b) as θ_1, …, θ_n (n parameters in total). Example: a_{0,Fair} = θ_1; a_{0,Loaded} = θ_2; …; e_Loaded(6) = θ_18.
Then count in x and π the number of times each parameter j = 1, …, n occurs:
F(j, x, π) = # of times parameter θ_j occurs in (x, π)   (call F(·,·,·) the feature counts)
Then, P(x, π) = Π_{j=1…n} θ_j^{F(j, x, π)} = exp[ Σ_{j=1…n} log(θ_j) · F(j, x, π) ]
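As a check on the product formula above, here is a small sketch that multiplies out the initial, transition, and emission factors for a given parse. It reuses the casino dictionaries from the earlier sketch; the function name is illustrative, not an established API.

```python
def joint_likelihood(x, pi, start, trans, emit):
    """P(x, pi) = a_{0 pi_1} e_{pi_1}(x_1) * prod_i a_{pi_{i-1} pi_i} e_{pi_i}(x_i)."""
    p = start[pi[0]] * emit[pi[0]][x[0]]
    for i in range(1, len(x)):
        p *= trans[(pi[i - 1], pi[i])] * emit[pi[i]][x[i]]
    return p

rolls = "1215621524"                      # the roll sequence from the next slide
print(joint_likelihood(rolls, "F" * 10, START, TRANS, EMIT))  # ~5.2e-09 (all Fair)
print(joint_likelihood(rolls, "L" * 10, START, TRANS, EMIT))  # ~1.6e-10 (all Loaded)
```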

Example: the dishonest casino
Let the sequence of rolls be: x = 1, 2, 1, 5, 6, 2, 1, 5, 2, 4
Then, what is the likelihood of π = F, F, …, F? (take initial probabilities a_{0,Fair} = ½, a_{0,Loaded} = ½)
½ × P(1 | Fair) P(Fair | Fair) P(2 | Fair) P(Fair | Fair) … P(4 | Fair) = ½ × (1/6)^10 × (0.95)^9 ≈ 5.2 × 10^-9

Example: the dishonest casino
So, the likelihood that the die is fair throughout this run is about 5.2 × 10^-9.
OK, but what is the likelihood of π = L, L, …, L?
½ × P(1 | Loaded) P(Loaded | Loaded) … P(4 | Loaded) = ½ × (1/10)^9 × (1/2)^1 × (0.95)^9 ≈ 1.6 × 10^-10
Therefore, it is somewhat more likely that all the rolls were made with the fair die than that they were all made with the loaded die.

Example: the dishonest casino
Now let the sequence of rolls be: x = 1, 6, 6, 5, 6, 2, 6, 6, 3, 6
What is the likelihood of π = F, F, …, F?
½ × (1/6)^10 × (0.95)^9 ≈ 5.2 × 10^-9, the same as before.
What is the likelihood of π = L, L, …, L?
½ × (1/10)^4 × (1/2)^6 × (0.95)^9 ≈ 4.9 × 10^-7
So, it is about 100 times more likely that the die is loaded.

The three main questions on HMMs
1. Evaluation. GIVEN an HMM M and a sequence x, FIND Prob[ x | M ].
2. Decoding. GIVEN an HMM M and a sequence x, FIND the sequence π of states that maximizes P[ x, π | M ].
3. Learning. GIVEN an HMM M with unspecified transition/emission probabilities, and a sequence x, FIND parameters θ = (e_i(·), a_ij) that maximize P[ x | θ ].

Notational Conventions
The model M is the architecture (number of states, etc.) together with the parameters θ = (a_ij, e_i(·)).
P[ x | M ] is the same as P[ x | θ ], and as P[ x ], when the architecture and the parameters, respectively, are implied.
Likewise, P[ x, π | M ], P[ x, π | θ ] and P[ x, π ] are the same when the architecture and the parameters are implied.
LEARNING: we write P[ x | θ ] to emphasize that we are seeking the θ* that maximizes P[ x | θ ].

Decoding
Find the most likely parse of a sequence.

Decoding
GIVEN x = x_1 x_2 …… x_N, we want to find π = π_1, ……, π_N such that P[ x, π ] is maximized:
π* = argmax_π P[ x, π ]
This maximizes a_{0π_1} e_{π_1}(x_1) a_{π_1π_2} …… a_{π_{N-1}π_N} e_{π_N}(x_N).
Dynamic programming! Define
V_k(i) = max_{π_1…π_{i-1}} P[ x_1…x_{i-1}, π_1, …, π_{i-1}, x_i, π_i = k ]
       = probability of the most likely sequence of states ending at state π_i = k.
Given that we end up in state k at step i, we maximize the product to the left and to the right.

Decoding – main idea
Inductive assumption: for all states k and a fixed position i,
V_k(i) = max_{π_1…π_{i-1}} P[ x_1…x_{i-1}, π_1, …, π_{i-1}, x_i, π_i = k ]
What is V_l(i+1)? From the definition,
V_l(i+1) = max_{π_1…π_i} P[ x_1…x_i, π_1, …, π_i, x_{i+1}, π_{i+1} = l ]
         = max_{π_1…π_i} P( x_{i+1}, π_{i+1} = l | x_1…x_i, π_1, …, π_i ) P[ x_1…x_i, π_1, …, π_i ]
         = max_{π_1…π_i} P( x_{i+1}, π_{i+1} = l | π_i ) P[ x_1…x_{i-1}, π_1, …, π_{i-1}, x_i, π_i ]
         = max_k [ P( x_{i+1}, π_{i+1} = l | π_i = k ) max_{π_1…π_{i-1}} P[ x_1…x_{i-1}, π_1, …, π_{i-1}, x_i, π_i = k ] ]
         = max_k [ P( x_{i+1} | π_{i+1} = l ) P( π_{i+1} = l | π_i = k ) V_k(i) ]
         = e_l(x_{i+1}) max_k a_kl V_k(i)

The Viterbi Algorithm
Input: x = x_1 …… x_N
Initialization:
  V_0(0) = 1   (0 is the imaginary first position)
  V_k(0) = 0, for all k > 0
Iteration:
  V_j(i) = e_j(x_i) × max_k a_kj V_k(i-1)
  Ptr_j(i) = argmax_k a_kj V_k(i-1)
Termination:
  P(x, π*) = max_k V_k(N)
Traceback:
  π*_N = argmax_k V_k(N)
  π*_{i-1} = Ptr_{π*_i}(i)

The Viterbi Algorithm
Filling in the dynamic-programming table V_j(i), with states 1 … K as rows and positions x_1 x_2 x_3 … x_N as columns, is similar to "aligning" a set of states to a sequence.
Time: O(K^2 N)
Space: O(KN)

Viterbi Algorithm – a practical detail
Underflows are a significant problem:
P[ x_1, …, x_i, π_1, …, π_i ] = a_{0π_1} a_{π_1π_2} …… a_{π_{i-1}π_i} e_{π_1}(x_1) …… e_{π_i}(x_i)
These numbers become extremely small – underflow.
Solution: take the logs of all values:
V_l(i) = log e_l(x_i) + max_k [ V_k(i-1) + log a_kl ]
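Below is a minimal sketch of the Viterbi recursion in log space, as the slide suggests, written as a generic function over the dictionary-style parameters of the earlier casino sketch. All probabilities are assumed strictly positive so the logs are defined; this is an illustration, not a library implementation.

```python
from math import log

def viterbi(x, states, start, trans, emit):
    # V_k(1) = log a_0k + log e_k(x_1)
    V = [{k: log(start[k]) + log(emit[k][x[0]]) for k in states}]
    ptr = [{}]
    for i in range(1, len(x)):
        V.append({})
        ptr.append({})
        for l in states:
            # V_l(i) = log e_l(x_i) + max_k [ V_k(i-1) + log a_kl ]
            best_k = max(states, key=lambda k: V[i - 1][k] + log(trans[(k, l)]))
            V[i][l] = log(emit[l][x[i]]) + V[i - 1][best_k] + log(trans[(best_k, l)])
            ptr[i][l] = best_k
    # Traceback: pi*_N = argmax_k V_k(N), then follow the pointers backwards.
    last = max(states, key=lambda k: V[-1][k])
    path = [last]
    for i in range(len(x) - 1, 0, -1):
        path.append(ptr[i][path[-1]])
    path.reverse()
    return path, V[-1][last]

# Example with the casino sketch: viterbi("1662616366", STATES, START, TRANS, EMIT)
```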

Example
Let x be a long sequence with a portion of ~1/6 6's, followed by a portion of ~½ 6's:
x = … …
Then, it is not hard to show that the optimal parse is (exercise):
FFF…………………...F LLL………………………...L
Six characters "123456" parsed as F contribute 0.95^6 × (1/6)^6 ≈ 1.6 × 10^-5; parsed as L they contribute 0.95^6 × (1/2)^1 × (1/10)^5 ≈ 0.4 × 10^-5.
Six characters "162636" parsed as F contribute 0.95^6 × (1/6)^6 ≈ 1.6 × 10^-5; parsed as L they contribute 0.95^6 × (1/2)^3 × (1/10)^3 ≈ 9.0 × 10^-5.

Evaluation
Find the likelihood that a sequence is generated by the model.

Generating a sequence by the model
Given an HMM, we can generate a sequence of length n as follows:
1. Start at state π_1 according to probability a_{0π_1}
2. Emit letter x_1 according to probability e_{π_1}(x_1)
3. Go to state π_2 according to probability a_{π_1π_2}
4. … and so on, until emitting x_n

A couple of questions
Given a sequence x, what is the probability that x was generated by the model?
Given a position i, what is the most likely state that emitted x_i?
Example: the dishonest casino. Say x = 12341… …21341, where the marked stretch of 11 rolls in the middle contains six 6's.
Most likely path: π = FF……F.
However, the marked letters are more likely to be L than the unmarked letters:
P(box: FFFFFFFFFFF) = (1/6)^11 × 0.95^12 ≈ 2.8 × 10^-9 × 0.54 ≈ 1.5 × 10^-9
P(box: LLLLLLLLLLL) = [ (1/2)^6 × (1/10)^5 ] × 0.95^10 × 0.05^2 ≈ 1.56 × 10^-7 × 1.5 × 10^-3 ≈ 2.3 × 10^-10

Evaluation
We will develop algorithms that allow us to compute:
P(x) – the probability of x given the model
P(x_i…x_j) – the probability of a substring of x given the model
P(π_i = k | x) – the probability that the i-th state is k, given x
The last is a more refined measure of which states x may be in.

The Forward Algorithm
We want to calculate P(x), the probability of x given the HMM, summing over all possible ways of generating x:
P(x) = Σ_π P(x, π) = Σ_π P(x | π) P(π)
To avoid summing over an exponential number of paths π, define
f_k(i) = P(x_1…x_i, π_i = k)   (the forward probability)

The Forward Algorithm – derivation
Define the forward probability:
f_k(i) = P(x_1…x_i, π_i = k)
       = Σ_{π_1…π_{i-1}} P(x_1…x_{i-1}, π_1, …, π_{i-1}, π_i = k) e_k(x_i)
       = Σ_l Σ_{π_1…π_{i-2}} P(x_1…x_{i-1}, π_1, …, π_{i-2}, π_{i-1} = l) a_lk e_k(x_i)
       = Σ_l P(x_1…x_{i-1}, π_{i-1} = l) a_lk e_k(x_i)
       = e_k(x_i) Σ_l f_l(i-1) a_lk

The Forward Algorithm
We can compute f_k(i) for all k, i using dynamic programming!
Initialization:
  f_0(0) = 1
  f_k(0) = 0, for all k > 0
Iteration:
  f_k(i) = e_k(x_i) Σ_l f_l(i-1) a_lk
Termination:
  P(x) = Σ_k f_k(N) a_k0
where a_k0 is the probability that the terminating state is k (usually a_k0 = a_0k).
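A sketch of the same recursion in code, with the termination P(x) = Σ_k f_k(N) a_k0 taken with a_k0 = a_0k as the slide suggests. It uses the same dictionary-style parameters as the casino sketch; illustrative, not a library routine.

```python
def forward(x, states, start, trans, emit):
    # f_k(1) = a_0k e_k(x_1), absorbing the imaginary position 0.
    f = [{k: start[k] * emit[k][x[0]] for k in states}]
    for i in range(1, len(x)):
        # f_l(i) = e_l(x_i) * sum_k f_k(i-1) a_kl
        f.append({l: emit[l][x[i]] * sum(f[i - 1][k] * trans[(k, l)] for k in states)
                  for l in states})
    # Termination: P(x) = sum_k f_k(N) a_k0, here with a_k0 = a_0k.
    px = sum(f[-1][k] * start[k] for k in states)
    return f, px
```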

Relation between Forward and Viterbi
VITERBI
  Initialization: V_0(0) = 1; V_k(0) = 0, for all k > 0
  Iteration: V_j(i) = e_j(x_i) max_k V_k(i-1) a_kj
  Termination: P(x, π*) = max_k V_k(N)
FORWARD
  Initialization: f_0(0) = 1; f_k(0) = 0, for all k > 0
  Iteration: f_l(i) = e_l(x_i) Σ_k f_k(i-1) a_kl
  Termination: P(x) = Σ_k f_k(N) a_k0

Motivation for the Backward Algorithm
We want to compute P(π_i = k | x), the probability distribution of the i-th state, given x.
We start by computing
P(π_i = k, x) = P(x_1…x_i, π_i = k, x_{i+1}…x_N)
             = P(x_1…x_i, π_i = k) P(x_{i+1}…x_N | x_1…x_i, π_i = k)
             = P(x_1…x_i, π_i = k) P(x_{i+1}…x_N | π_i = k)
The first factor is the forward probability f_k(i); the second is the backward probability b_k(i).
Then, P(π_i = k | x) = P(π_i = k, x) / P(x).

The Backward Algorithm – derivation
Define the backward probability:
b_k(i) = P(x_{i+1}…x_N | π_i = k)
       = Σ_{π_{i+1}…π_N} P(x_{i+1}, x_{i+2}, …, x_N, π_{i+1}, …, π_N | π_i = k)
       = Σ_l Σ_{π_{i+2}…π_N} P(x_{i+1}, x_{i+2}, …, x_N, π_{i+1} = l, π_{i+2}, …, π_N | π_i = k)
       = Σ_l e_l(x_{i+1}) a_kl Σ_{π_{i+2}…π_N} P(x_{i+2}, …, x_N, π_{i+2}, …, π_N | π_{i+1} = l)
       = Σ_l e_l(x_{i+1}) a_kl b_l(i+1)

The Backward Algorithm
We can compute b_k(i) for all k, i using dynamic programming.
Initialization:
  b_k(N) = a_k0, for all k
Iteration:
  b_k(i) = Σ_l e_l(x_{i+1}) a_kl b_l(i+1)
Termination:
  P(x) = Σ_l a_0l e_l(x_1) b_l(1)
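The corresponding sketch of the backward recursion, mirroring the forward sketch above (b_k(N) = a_k0 is again taken equal to a_0k; illustrative, not a library routine):

```python
def backward(x, states, start, trans, emit):
    N = len(x)
    b = [None] * N
    b[N - 1] = {k: start[k] for k in states}      # b_k(N) = a_k0 (= a_0k here)
    for i in range(N - 2, -1, -1):
        # b_k(i) = sum_l e_l(x_{i+1}) a_kl b_l(i+1)
        b[i] = {k: sum(emit[l][x[i + 1]] * trans[(k, l)] * b[i + 1][l] for l in states)
                for k in states}
    # Termination: P(x) = sum_l a_0l e_l(x_1) b_l(1)
    px = sum(start[l] * emit[l][x[0]] * b[0][l] for l in states)
    return b, px
```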

Computational Complexity
What is the running time, and space required, for Forward and Backward?
Time: O(K^2 N)
Space: O(KN)
Useful implementation techniques to avoid underflows:
Viterbi: sums of logs
Forward/Backward: rescaling at each position by multiplying by a constant

Posterior Decoding
We can now calculate
P(π_i = k | x) = f_k(i) b_k(i) / P(x)
Then, we can ask: what is the most likely state at position i of sequence x?
Define π^ by posterior decoding:
π^_i = argmax_k P(π_i = k | x)
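Combining the two, here is a sketch of posterior decoding that reuses the forward() and backward() sketches above (hypothetical helper names from this document's own sketches, not an established API):

```python
def posterior_decode(x, states, start, trans, emit):
    f, px = forward(x, states, start, trans, emit)
    b, _ = backward(x, states, start, trans, emit)
    # P(pi_i = k | x) = f_k(i) b_k(i) / P(x)
    post = [{k: f[i][k] * b[i][k] / px for k in states} for i in range(len(x))]
    # pi^_i = argmax_k P(pi_i = k | x)
    pi_hat = [max(states, key=lambda k: post[i][k]) for i in range(len(x))]
    return pi_hat, post
```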

Posterior Decoding
For each state, posterior decoding gives us a curve of the likelihood of that state at each position.
That is sometimes more informative than the Viterbi path π*.
Posterior decoding may give an invalid sequence of states. Why? (Because it picks the most likely state at each position independently, the resulting sequence may use transitions of probability 0.)

Posterior Decoding
P(π_i = k | x) = Σ_π P(π | x) 1(π_i = k) = Σ_{π : π_i = k} P(π | x)
where 1(A) = 1 if A is true, 0 otherwise.
[Figure: for each state k, the posterior P(π_i = k | x) plotted as a row over positions x_1 x_2 x_3 … x_N]

Viterbi, Forward, Backward
VITERBI
  Initialization: V_0(0) = 1; V_k(0) = 0, for all k > 0
  Iteration: V_l(i) = e_l(x_i) max_k V_k(i-1) a_kl
  Termination: P(x, π*) = max_k V_k(N)
FORWARD
  Initialization: f_0(0) = 1; f_k(0) = 0, for all k > 0
  Iteration: f_l(i) = e_l(x_i) Σ_k f_k(i-1) a_kl
  Termination: P(x) = Σ_k f_k(N) a_k0
BACKWARD
  Initialization: b_k(N) = a_k0, for all k
  Iteration: b_k(i) = Σ_l e_l(x_{i+1}) a_kl b_l(i+1)
  Termination: P(x) = Σ_k a_0k e_k(x_1) b_k(1)

A modeling example: CpG islands in DNA sequences
[Figure: two connected groups of states, A+ C+ G+ T+ and A- C- G- T-]

Methylation & Silencing
One way cells differentiate is methylation:
- addition of CH3 to C nucleotides
- silences genes in the region
CG (denoted CpG) often mutates to TG when methylated.
In each cell, one copy of the X chromosome is silenced; methylation plays a role.
Methylation is inherited during cell division.

Example: CpG Islands
CpG dinucleotides in the genome are frequently methylated. (We write CpG so as not to confuse the dinucleotide with a C-G base pair.)
C → methyl-C → T
Methylation is often suppressed around genes and promoters → CpG islands.

Example: CpG Islands
In CpG islands:
- CG is more frequent
- other dinucleotides (AA, AG, AT, …) have different frequencies
Question: detect CpG islands computationally.

A model of CpG Islands – (1) Architecture
[Figure: eight states, A+ C+ G+ T+ (CpG island) and A- C- G- T- (not CpG island), fully connected within and between the two groups]

A model of CpG Islands – (2) Transitions
How do we estimate the parameters of the model?
Emission probabilities: 1/0 (each state emits its own nucleotide with probability 1, all other nucleotides with probability 0).
1. Transition probabilities within CpG islands: established from many known (experimentally verified) CpG islands (the training set), giving a 4×4 table of (+) transition probabilities over A, C, G, T.
2. Transition probabilities within other regions: established in the same way from many known non-CpG-island regions, giving the corresponding 4×4 table of (-) transition probabilities.
(A counting sketch follows below.)
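A minimal sketch of this counting-based estimation: dinucleotide frequencies in a training set of known islands give the (+) table, and non-island training sequences give the (-) table. The two toy training lists below are placeholders, not real data, and the function name is this sketch's own.

```python
from collections import Counter

def dinucleotide_transitions(sequences):
    """Row-normalized dinucleotide counts: trans[(u, v)] ~ P(next = v | current = u)."""
    counts = Counter((u, v) for s in sequences for u, v in zip(s, s[1:]))
    trans = {}
    for u in "ACGT":
        row = sum(counts[(u, v)] for v in "ACGT")
        for v in "ACGT":
            trans[(u, v)] = counts[(u, v)] / row if row else 0.0
    return trans

plus_trans = dinucleotide_transitions(["CGCGGCGCATCG", "GCGCGCGTCGCA"])   # toy "+ islands"
minus_trans = dinucleotide_transitions(["ATTTACCATGTA", "TTTAAATGCATT"])  # toy "- regions"
```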

Log Likelihoods – Telling "Prediction" from "Random"
Another way to see the effect of the transitions is through log-likelihood ratios, tabulated for every dinucleotide uv over A, C, G, T:
L(u, v) = log[ P(uv | +) / P(uv | -) ]
Given a region x = x_1…x_N, a quick-and-dirty way to decide whether the entire region x is a CpG island:
P(x is CpG) > P(x is not CpG)  ⇔  Σ_i L(x_i, x_{i+1}) > 0
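A sketch of this quick-and-dirty score, assuming (+) and (-) dinucleotide tables like the plus_trans / minus_trans estimated in the previous sketch, with every needed entry non-zero:

```python
from math import log

def cpg_score(x, plus_trans, minus_trans):
    """Sum of L(u, v) = log[ P(uv | +) / P(uv | -) ] over consecutive pairs of x."""
    return sum(log(plus_trans[(u, v)] / minus_trans[(u, v)]) for u, v in zip(x, x[1:]))

# Decision rule from the slide: call x a CpG island when cpg_score(x, ...) > 0.
```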

A model of CpG Islands – (2) Transitions
What about transitions between (+) and (-) states? They affect:
- the average length of a CpG island
- the average separation between two CpG islands
[Figure: two states X and Y with self-transition probabilities p and q, and cross transitions 1-p and 1-q]
Length distribution of a region of X:
P[l_X = 1] = 1-p
P[l_X = 2] = p(1-p)
…
P[l_X = k] = p^{k-1}(1-p)
E[l_X] = 1/(1-p)
This is a geometric distribution, with mean 1/(1-p).

A model of CpG Islands – (2) Transitions
There is no reason to favor exiting or entering the (+) and (-) regions at a particular nucleotide. To determine the transition probabilities between (+) and (-) states:
1. Estimate the average length of a CpG island: l_CpG = 1/(1-p), so p = 1 - 1/l_CpG.
2. For each pair of (+) states k, l, let a_kl ← p × a_kl.
3. For each (+) state k and (-) state l, let a_kl = (1-p)/4 (better: take the frequency of l in the (-) regions into account).
4. Do the same for the (-) states.
(A code sketch of steps 1-3 appears after this slide.)
A problem with this model: CpG islands don't have an exponential (geometric) length distribution. This is a defect of HMMs, compensated for by their ease of analysis and computation.
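A sketch of steps 1-3 above: derive p from the average island length, rescale the within-(+) transitions by p, and spread the remaining 1-p uniformly over the four (-) states (the slide notes that weighting by (-) frequencies would be better; the function name is illustrative):

```python
def add_island_exits(plus_trans, avg_island_length):
    p = 1 - 1 / avg_island_length                 # E[length] = 1/(1-p)  =>  p = 1 - 1/length
    a = {(u + "+", v + "+"): p * prob for (u, v), prob in plus_trans.items()}
    for u in "ACGT":
        for v in "ACGT":
            a[(u + "+", v + "-")] = (1 - p) / 4   # leave the island with total prob 1-p
    return a                                      # each (+) row now sums to p + (1-p) = 1
```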

Applications of the model
Given a DNA region x, the Viterbi algorithm predicts the locations of CpG islands.
Given a nucleotide x_i (say x_i = A), the Viterbi parse tells us whether x_i is in a CpG island in the most likely general scenario.
The Forward/Backward algorithms can calculate
P(x_i is in a CpG island) = P(π_i = A+ | x)
Posterior decoding can assign locally optimal predictions of CpG islands:
π^_i = argmax_k P(π_i = k | x)

What if a new genome comes?
We just sequenced the porcupine genome.
We know CpG islands play the same role in this genome; however, we have no known CpG islands for porcupines.
We suspect the frequency and characteristics of CpG islands are quite different in porcupines.
How do we adjust the parameters in our model? LEARNING