
1 Sequence Alignment Cont’d

2 Indexing-based local alignment (BLAST – Basic Local Alignment Search Tool)
1. SEED: Construct a dictionary of all the words in the query (the database) & search the database (the query) in linear time for word matches (a small sketch of this step follows below)
2. EXTEND: Initiate fast local alignment procedures to the left and right of each word match
3. REPORT: Retain all alignments that score above a threshold and report them
[Figure: a word of the query matching the database during the linear scan]
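A minimal Python sketch of the SEED step, under simple assumptions (exact k-word matches over DNA strings); the function name and the k = 11 default are illustrative, not BLAST's actual code:

```python
from collections import defaultdict

def seed_matches(query, db, k=11):
    """SEED: index every k-word of the query in a dictionary, then scan
    the database once, reporting (query_pos, db_pos) pairs where an
    exact k-word match occurs. Each hit would then go to EXTEND."""
    index = defaultdict(list)
    for i in range(len(query) - k + 1):
        index[query[i:i + k]].append(i)
    hits = []
    for j in range(len(db) - k + 1):
        for i in index.get(db[j:j + k], ()):
            hits.append((i, j))
    return hits
```

Building the index is linear in the query; the scan is linear in the database plus the number of reported hits.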

3 Indexing-based local alignment – Extensions
Example: k = 4. The matching word GGTC initiates an alignment.
Extension to the left and right, with no gaps, until the alignment's score falls more than C below the best score so far.
Output: GTAAGGTCC / GTTAGGTCC
[Figure: the two sequences laid out on the axes of a dot plot, with the 4-word match GGTC extended in both directions]
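A sketch of this ungapped extension with the drop-off rule; match/mismatch scores of +1/-1 and the cutoff value are assumptions of mine, not the lecture's parameters:

```python
def extend_ungapped(query, db, qi, dj, k, match=1, mismatch=-1, drop=5):
    """Grow the exact k-word hit at (qi, dj) left and right with no gaps;
    each direction stops once the running score falls more than `drop`
    below the best score seen so far in that direction."""
    def best_extension(positions):
        run = best = best_len = 0
        for n, (i, j) in enumerate(positions, start=1):
            run += match if query[i] == db[j] else mismatch
            if run > best:
                best, best_len = run, n
            if best - run > drop:          # fell too far below the best
                break
        return best, best_len

    right = zip(range(qi + k, len(query)), range(dj + k, len(db)))
    left = zip(range(qi - 1, -1, -1), range(dj - 1, -1, -1))
    r_score, r_len = best_extension(right)
    l_score, l_len = best_extension(left)
    score = k * match + l_score + r_score   # the seed itself matches exactly
    return qi - l_len, dj - l_len, l_len + k + r_len, score
```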

4 Indexing-based local alignment – Gapped extensions
Extensions with gaps, until the score falls more than C below the best score so far.
Output: GTAAGGTCCAGT / GTTAGGTC-AGT
Reference: Zhang, Berman, Miller, RECOMB '98
[Figure: the same word match, now extended with a gapped alignment]

5 Sensitivity-Speed Tradeoff
Long words (k = 15) vs. short words (k = 7): longer words are faster but less sensitive; shorter words are more sensitive but slower.
[Figure: sensitivity vs. speed curves for the two word lengths; Kent WJ, Genome Research 2002]

6 Sensitivity-Speed Tradeoff
Methods to improve the sensitivity/speed tradeoff:
1. Using pairs of words
2. Using inexact words
3. Patterns – non-consecutive positions
[Figure: example sequence pairs illustrating the methods:
……ATAACGGACGACTGATTACACTGATTCTTAC……
……GGCACGGACCAGTGACTACTCTGATTCCCAG……
……ATAACGGACGACTGATTACACTGATTCTTAC……
……GGCGCCGACGAGTGATTACACAGATTGCCAG……]

7 Measured improvement
[Figure: measured improvements; Kent WJ, Genome Research 2002]

8 Non-consecutive words – Patterns
Patterns increase the likelihood of at least one match within a long conserved region.
[Figure: example hits with consecutive positions vs. non-consecutive positions]
On a 100-long, 70% conserved region:

                          Consecutive   Non-consecutive
Expected # hits:          1.07          0.97
Prob[at least one hit]:   0.30          0.47
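The slide's exact numbers depend on the particular word and pattern used, which are not fully legible in this transcript. This Monte Carlo sketch (my own construction) estimates both quantities for any seed, where '1' marks a position that must match:

```python
import random

def seed_stats(seed, region_len=100, p=0.7, trials=200_000):
    """Estimate E[# hits] and Prob[at least one hit] for a 0/1 seed
    string over a region whose positions each match independently with
    probability p (the slide's 100-long, 70% conserved region)."""
    care = [i for i, c in enumerate(seed) if c == "1"]
    span = len(seed)
    total_hits = runs_with_hit = 0
    for _ in range(trials):
        m = [random.random() < p for _ in range(region_len)]
        hits = sum(all(m[s + i] for i in care)
                   for s in range(region_len - span + 1))
        total_hits += hits
        runs_with_hit += hits > 0
    return total_hits / trials, runs_with_hit / trials

# e.g. a consecutive 11-mer vs. PatternHunter's weight-11 spaced seed:
# seed_stats("1" * 11)
# seed_stats("111010010100110111")
```

Spaced seeds spread their required positions out, so overlapping placements are less correlated and the same expected hit count yields a higher probability of at least one hit.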

9 Advantage of Patterns
[Figure: a pattern spanning 11 positions compared against a consecutive word of 10 positions]

10 Multiple patterns
K different patterns:
- Construct K distinct dictionaries, one for each pattern (sketched below)
- Takes K times longer to scan – how long does it take to search the query?
- Patterns can complement one another
Computational problem:
- Given: a model (probability distribution) for homology between two regions
- Find: the best set of K patterns that maximizes Prob(at least one match)
References: Buhler et al., RECOMB 2003; Sun & Buhler, RECOMB 2004
[Figure: several spaced patterns overlaid on an example sequence]
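A small sketch (names mine) of keying one dictionary per pattern; scanning the database then probes all K dictionaries at every position, which is where the K-fold scan cost comes from:

```python
from collections import defaultdict

def spaced_key(s, start, seed):
    """Characters of s at the '1' positions of the pattern `seed`."""
    return "".join(s[start + i] for i, c in enumerate(seed) if c == "1")

def build_pattern_dicts(query, seeds):
    """One dictionary per pattern, each mapping a spaced key to the
    query positions where that key occurs."""
    dicts = []
    for seed in seeds:
        d = defaultdict(list)
        for i in range(len(query) - len(seed) + 1):
            d[spaced_key(query, i, seed)].append(i)
        dicts.append(d)
    return dicts
```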

11 Variants of BLAST
NCBI BLAST: search the universe – http://www.ncbi.nlm.nih.gov/BLAST/
MEGABLAST: http://genopole.toulouse.inra.fr/blast/megablast.html
- Optimized to align very similar sequences
- Works best when k = 4i ≥ 16
- Linear gap penalty
WU-BLAST (Wash U BLAST): http://blast.wustl.edu/
- Very good optimizations
- Good set of features & command-line arguments
BLAT: http://genome.ucsc.edu/cgi-bin/hgBlat
- Faster, less sensitive than BLAST
- Good for aligning huge numbers of queries
CHAOS: http://www.cs.berkeley.edu/~brudno/chaos
- Uses inexact k-mers; sensitive
PatternHunter: http://www.bioinformaticssolutions.com/products/ph/index.php
- Uses patterns instead of k-mers
BlastZ: http://www.psc.edu/general/software/packages/blastz/
- Uses patterns; good for finding genes
Typhon: http://typhon.stanford.edu
- Uses multiple alignments to improve the sensitivity/speed tradeoff

12 Example
Query: Human atoh enhancer, 179 letters [1.5 min]
Result: 57 blast hits

1. gi|7677270|gb|AF218259.1|AF218259 Homo sapiens ATOH1 enhanc...  355  1e-95
2. gi|22779500|gb|AC091158.11| Mus musculus Strain C57BL6/J ch...  264  4e-68
3. gi|7677269|gb|AF218258.1|AF218258 Mus musculus Atoh1 enhanc...  256  9e-66
4. gi|28875397|gb|AF467292.1| Gallus gallus CATH1 (CATH1) gene...   78  5e-12
5. gi|27550980|emb|AL807792.6| Zebrafish DNA sequence from clo...   54  7e-05
6. gi|22002129|gb|AC092389.4| Oryza sativa chromosome 10 BAC O...   44  0.068
7. gi|22094122|ref|NM_013676.1| Mus musculus suppressor of Ty...    42  0.27
8. gi|13938031|gb|BC007132.1| Mus musculus, Similar to suppres...   42  0.27

gi|7677269|gb|AF218258.1|AF218258 Mus musculus Atoh1 enhancer sequence
Length = 1517
Score = 256 bits (129), Expect = 9e-66
Identities = 167/177 (94%), Gaps = 2/177 (1%)
Strand = Plus / Plus

Query: 3    tgacaatagagggtctggcagaggctcctggccgcggtgcggagcgtctggagcggagca 62
            ||||||||||||| ||||||||||||||||||| ||||||||||||||||||||||||||
Sbjct: 1144 tgacaatagaggggctggcagaggctcctggccccggtgcggagcgtctggagcggagca 1203

Query: 63   cgcgctgtcagctggtgagcgcactctcctttcaggcagctccccggggagctgtgcggc 122
            |||||||||||||||||||||||||| ||||||||| |||||||||||||||| |||||
Sbjct: 1204 cgcgctgtcagctggtgagcgcactc-gctttcaggccgctccccggggagctgagcggc 1262

Query: 123  cacatttaacaccatcatcacccctccccggcctcctcaacctcggcctcctcctcg 179
            ||||||||||||| || ||| |||||||||||||||||||| |||||||||||||||
Sbjct: 1263 cacatttaacaccgtcgtca-ccctccccggcctcctcaacatcggcctcctcctcg 1318

http://www.ncbi.nlm.nih.gov/BLAST/

13 The Four-Russians Algorithm – brief overview
A (not so useful) speedup of Dynamic Programming [Arlazarov, Dinic, Kronrod, Faradzev 1970]

14 Main Observation
Within a rectangle of the DP matrix, the values of D depend only on the values of A, B, C, and on the substrings x[l...l'], y[r...r'].
Definition: a t-block is a t × t square of the DP matrix.
Idea: divide the matrix into t-blocks and precompute all t-blocks. Speedup: O(t).
[Figure: a t-block with corner regions A, B, C, D and the substrings x[l..l'], y[r..r'] along its edges]

15 The Four-Russians Algorithm
Main structure of the algorithm (a runnable sketch of the block loop follows below):
1. Divide the N × N DP matrix into K × K blocks of size t = log2 N, overlapping by 1 column & 1 row
2. For i = 1……K
3.   For j = 1……K
4.     Compute D(i,j) as a function of A(i,j), B(i,j), C(i,j), x[l_i…l'_i], y[r_j…r'_j]
Time: O(N^2 / log^2 N) times the cost of step 4
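The following runnable sketch (my construction, not the lecture's code) shows only this block decomposition for edit distance. It memoizes blocks on the fly rather than performing the real algorithm's offset-encoded precomputation of all possible t-blocks, and it assumes |x| and |y| are multiples of t:

```python
def block_dp(top, left, xs, ys):
    """Fill one t-block of the edit-distance matrix from its top row and
    left column (each length t+1, sharing the corner value) and the two
    block substrings; return (bottom_row, right_column)."""
    t = len(xs)
    prev = list(top)
    right = [top[-1]]
    for i in range(1, t + 1):
        cur = [left[i]]
        for j in range(1, t + 1):
            cur.append(min(prev[j] + 1,                       # deletion
                           cur[j - 1] + 1,                    # insertion
                           prev[j - 1] + (xs[i - 1] != ys[j - 1])))
        right.append(cur[-1])
        prev = cur
    return tuple(prev), tuple(right)

def edit_distance_blocked(x, y, t):
    """Sweep t-blocks row by row; memoizing block_dp on its arguments is
    where the speedup would come from. The real Four-Russians algorithm
    offset-encodes the boundary values (adjacent cells differ by at most
    1) so that only a bounded number of distinct blocks exist and every
    one of them can be precomputed."""
    memo = {}
    def block(top, left, xs, ys):
        key = (top, left, xs, ys)
        if key not in memo:
            memo[key] = block_dp(top, left, xs, ys)
        return memo[key]

    nb, mb = len(x) // t, len(y) // t
    # DP initialization: row 0 holds 0..len(y), column 0 holds 0..len(x)
    bottoms = [tuple(range(bj * t, (bj + 1) * t + 1)) for bj in range(mb)]
    for bi in range(nb):
        right = tuple(range(bi * t, (bi + 1) * t + 1))
        for bj in range(mb):
            bottoms[bj], right = block(bottoms[bj], right,
                                       x[bi*t:(bi+1)*t], y[bj*t:(bj+1)*t])
    return bottoms[-1][-1]
```

On inputs whose lengths divide t, this returns the same value as the plain quadratic DP; only the table-lookup trick that makes step 4 O(t) instead of O(t^2) is elided.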

16 The Four-Russians Algorithm
[Figure: the DP matrix tiled into overlapping t × t blocks]

17 Hidden Markov Models
[Figure: HMM trellis – states 1…K at each position, emitting x1, x2, x3, …]

18 Outline for our next topic
- Hidden Markov models – the theory
- Probabilistic interpretation of alignments using HMMs
- Later in the course: applications of HMMs to biological sequence modeling and discovery of features such as genes

19 Example: The Dishonest Casino
A casino has two dice:
- Fair die: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6
- Loaded die: P(1) = P(2) = P(3) = P(4) = P(5) = 1/10, P(6) = 1/2
The casino player switches back-&-forth between the fair and loaded die about once every 20 turns.
Game:
1. You bet $1
2. You roll (always with a fair die)
3. The casino player rolls (maybe with the fair die, maybe with the loaded die)
4. Highest number wins $2

20 Question # 1 – Evaluation
GIVEN: A sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: How likely is this sequence, given our model of how the casino works?
This is the EVALUATION problem in HMMs. (Here, Prob ≈ 1.3 × 10^-35.)

21 Question # 2 – Decoding
GIVEN: A sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: What portion of the sequence was generated with the fair die, and what portion with the loaded die?
This is the DECODING question in HMMs. (The answer labels segments of the sequence FAIR / LOADED / FAIR.)

22 Question # 3 – Learning
GIVEN: A sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: How "loaded" is the loaded die? How "fair" is the fair die? How often does the casino player change from fair to loaded, and back?
This is the LEARNING question in HMMs. (E.g., an estimate such as Prob(6) = 64% for the loaded die.)

23 The dishonest casino model
[Figure: two states, FAIR and LOADED, each with self-transition probability 0.95 and switch probability 0.05]
Emission probabilities:
  Fair:   P(1|F) = P(2|F) = P(3|F) = P(4|F) = P(5|F) = P(6|F) = 1/6
  Loaded: P(1|L) = P(2|L) = P(3|L) = P(4|L) = P(5|L) = 1/10, P(6|L) = 1/2
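The model on this slide, written out as matrices (numpy is used for convenience; the start probabilities of 1/2 each are the ones slide 28 assumes):

```python
import numpy as np

STATES = ["Fair", "Loaded"]
a0 = np.array([0.5, 0.5])                 # start probabilities (slide 28)
A = np.array([[0.95, 0.05],               # A[i, j] = P(next = j | now = i)
              [0.05, 0.95]])
E = np.array([[1/6] * 6,                  # Fair: uniform over faces 1..6
              [1/10] * 5 + [1/2]])        # Loaded: P(6) = 1/2
```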

24 Definition of a hidden Markov model
Definition: A hidden Markov model (HMM) consists of:
- An alphabet Σ = { b_1, b_2, …, b_M }
- A set of states Q = { 1, …, K }
- Transition probabilities between any two states: a_ij = transition prob from state i to state j, with a_i1 + … + a_iK = 1, for all states i = 1…K
- Start probabilities a_0i, with a_01 + … + a_0K = 1
- Emission probabilities within each state: e_k(b) = P(x_i = b | π_i = k), with e_k(b_1) + … + e_k(b_M) = 1, for all states k = 1…K

25 An HMM is memory-less
At each time step t, the only thing that affects future states is the current state π_t:
P(π_{t+1} = k | "whatever happened so far")
  = P(π_{t+1} = k | π_1, π_2, …, π_t, x_1, x_2, …, x_t)
  = P(π_{t+1} = k | π_t)

26 A parse of a sequence
Given a sequence x = x_1……x_N, a parse of x is a sequence of states π = π_1, ……, π_N.
[Figure: HMM trellis with one state chosen at each position]

27 Likelihood of a parse
Given a sequence x = x_1……x_N and a parse π = π_1, ……, π_N, how likely is the parse (given our HMM)?
P(x, π) = P(x_1, …, x_N, π_1, ……, π_N)
        = P(x_N, π_N | π_{N-1}) P(x_{N-1}, π_{N-1} | π_{N-2}) …… P(x_2, π_2 | π_1) P(x_1, π_1)
        = P(x_N | π_N) P(π_N | π_{N-1}) …… P(x_2 | π_2) P(π_2 | π_1) P(x_1 | π_1) P(π_1)
        = a_{0 π_1} a_{π_1 π_2} …… a_{π_{N-1} π_N} e_{π_1}(x_1) …… e_{π_N}(x_N)
A compact way to write a_{0 π_1} a_{π_1 π_2} …… a_{π_{N-1} π_N} e_{π_1}(x_1) …… e_{π_N}(x_N):
Number all parameters a_ij and e_i(b): θ_1, …, θ_n. Example: a_{0,Fair} = θ_1; a_{0,Loaded} = θ_2; …; e_Loaded(6) = θ_18.
Then, count in (x, π) the number of times each parameter j = 1, …, n occurs:
F(j, x, π) = # of times parameter θ_j occurs in (x, π)   (call F(·,·,·) the feature counts)
Then,
P(x, π) = Π_{j=1…n} θ_j^{F(j, x, π)} = exp[ Σ_{j=1…n} log(θ_j) F(j, x, π) ]
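The product formula above, computed directly; this sketch reuses a0, A, E from the casino block earlier, with rolls coded 1..6 and states as row indices (0 = Fair, 1 = Loaded):

```python
def parse_likelihood(x, pi, a0, A, E):
    """P(x, pi) = a0[pi_1] e_{pi_1}(x_1) * prod_{i>1} a_{pi_{i-1} pi_i} e_{pi_i}(x_i)."""
    p = a0[pi[0]] * E[pi[0], x[0] - 1]
    for i in range(1, len(x)):
        p *= A[pi[i - 1], pi[i]] * E[pi[i], x[i] - 1]
    return p
```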

28 Example: the dishonest casino
Let the sequence of rolls be: x = 1, 2, 1, 5, 6, 2, 1, 5, 2, 4
Then, what is the likelihood of π = Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair?
(Say the initial probabilities are a_{0,Fair} = 1/2, a_{0,Loaded} = 1/2.)
1/2 × P(1 | Fair) P(Fair | Fair) P(2 | Fair) P(Fair | Fair) … P(4 | Fair)
= 1/2 × (1/6)^10 × (0.95)^9 = 0.00000000521158647211 ≈ 5.2 × 10^-9

29 Example: the dishonest casino
So, the likelihood that the die is fair throughout this run is about 5.2 × 10^-9.
OK, but what is the likelihood of π = Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded?
1/2 × P(1 | Loaded) P(Loaded | Loaded) … P(4 | Loaded)
= 1/2 × (1/10)^9 × (1/2)^1 × (0.95)^9 = 0.00000000015756235243 ≈ 1.6 × 10^-10
Therefore, it is considerably more likely (roughly 30-fold) that all the rolls were made with the fair die than that they were all made with the loaded die.

30 Example: the dishonest casino
Let the sequence of rolls be: x = 1, 6, 6, 5, 6, 2, 6, 6, 3, 6
Now, what is the likelihood of π = F, F, …, F?
1/2 × (1/6)^10 × (0.95)^9 ≈ 5.2 × 10^-9, same as before
What is the likelihood of π = L, L, …, L?
1/2 × (1/10)^4 × (1/2)^6 × (0.95)^9 = 0.00000049238235134735 ≈ 4.9 × 10^-7
So, here it is about 100 times more likely that the die is loaded.
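Checking the arithmetic of slides 28–30 with the parse_likelihood sketch above (a0, A, E as defined earlier):

```python
x1 = [1, 2, 1, 5, 6, 2, 1, 5, 2, 4]
x2 = [1, 6, 6, 5, 6, 2, 6, 6, 3, 6]
fair, loaded = [0] * 10, [1] * 10
print(parse_likelihood(x1, fair, a0, A, E))    # ~5.2e-09
print(parse_likelihood(x1, loaded, a0, A, E))  # ~1.6e-10
print(parse_likelihood(x2, fair, a0, A, E))    # ~5.2e-09 (same as x1)
print(parse_likelihood(x2, loaded, a0, A, E))  # ~4.9e-07, ~100x the fair parse
```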

31 The three main questions on HMMs
1. Evaluation
   GIVEN an HMM M and a sequence x,
   FIND Prob[ x | M ]
2. Decoding
   GIVEN an HMM M and a sequence x,
   FIND the sequence π of states that maximizes P[ x, π | M ]
3. Learning
   GIVEN an HMM M with unspecified transition/emission probabilities, and a sequence x,
   FIND parameters θ = (e_i(·), a_ij) that maximize P[ x | θ ]

32 Let's not be confused by notation
P[ x | M ]: the probability that sequence x was generated by the model.
The model is: architecture (# states, etc.) + parameters θ = (a_ij, e_i(·)).
So P[ x | M ] is the same as P[ x | θ ] and as P[ x ] when the architecture and the parameters, respectively, are implied.
Similarly, P[ x, π | M ], P[ x, π | θ ] and P[ x, π ] are the same when the architecture and the parameters are implied.
In the LEARNING problem we always write P[ x | θ ], to emphasize that we are seeking the θ* that maximizes P[ x | θ ].

33 Problem 2: Decoding
Find the best parse of a sequence

34 Decoding
GIVEN x = x_1 x_2 ……x_N, we want to find π = π_1, ……, π_N such that P[ x, π ] is maximized:
π* = argmax_π P[ x, π ]
We can use dynamic programming!
Let V_k(i) = max_{π_1…π_{i-1}} P[ x_1…x_{i-1}, π_1, …, π_{i-1}, x_i, π_i = k ]
           = probability of the most likely sequence of states ending at state π_i = k

35 Decoding – main idea
Given that, for all states k and for a fixed position i,
V_k(i) = max_{π_1…π_{i-1}} P[ x_1…x_{i-1}, π_1, …, π_{i-1}, x_i, π_i = k ],
what is V_l(i+1)?
From the definition,
V_l(i+1) = max_{π_1…π_i} P[ x_1…x_i, π_1, …, π_i, x_{i+1}, π_{i+1} = l ]
         = max_{π_1…π_i} P(x_{i+1}, π_{i+1} = l | x_1…x_i, π_1, …, π_i) P[ x_1…x_i, π_1, …, π_i ]
         = max_{π_1…π_i} P(x_{i+1}, π_{i+1} = l | π_i) P[ x_1…x_{i-1}, π_1, …, π_{i-1}, x_i, π_i ]
         = max_k [ P(x_{i+1}, π_{i+1} = l | π_i = k) max_{π_1…π_{i-1}} P[ x_1…x_{i-1}, π_1, …, π_{i-1}, x_i, π_i = k ] ]
         = e_l(x_{i+1}) max_k a_kl V_k(i)

36 The Viterbi Algorithm
Input: x = x_1……x_N
Initialization:
  V_0(0) = 1   (0 is the imaginary first position)
  V_k(0) = 0, for all k > 0
Iteration:
  V_j(i) = e_j(x_i) × max_k a_kj V_k(i - 1)
  Ptr_j(i) = argmax_k a_kj V_k(i - 1)
Termination:
  P(x, π*) = max_k V_k(N)
Traceback:
  π_N* = argmax_k V_k(N)
  π_{i-1}* = Ptr_{π_i*}(i)
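A compact Viterbi sketch, done in log space (anticipating the underflow fix on slide 38). The interface is mine, not the lecture's: observations are 0-based symbol indices, and a0, A, E are as in the casino block above:

```python
import numpy as np

def viterbi(x, a0, A, E):
    """Return (log P(x, pi*), pi*) for the most likely state path pi*."""
    K, N = len(a0), len(x)
    logA, logE = np.log(A), np.log(E)
    V = np.full((K, N), -np.inf)        # V[k, i] = best log prob ending in k
    ptr = np.zeros((K, N), dtype=int)   # back-pointers
    V[:, 0] = np.log(a0) + logE[:, x[0]]
    for i in range(1, N):
        for j in range(K):
            scores = V[:, i - 1] + logA[:, j]
            ptr[j, i] = int(np.argmax(scores))
            V[j, i] = logE[j, x[i]] + scores[ptr[j, i]]
    # traceback from the best final state
    path = [int(np.argmax(V[:, -1]))]
    for i in range(N - 1, 0, -1):
        path.append(int(ptr[path[-1], i]))
    return float(V[:, -1].max()), path[::-1]

# e.g. decoding casino rolls: viterbi([r - 1 for r in rolls], a0, A, E)
```

Note the start probabilities a0 play the role of the slide's imaginary position 0, so V is indexed 1..N here as columns 0..N-1.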

37 The Viterbi Algorithm
Similar to "aligning" a set of states to a sequence.
Time: O(K^2 N)
Space: O(KN)
[Figure: the K × N dynamic programming table of V_j(i) values over x_1 x_2 x_3 …… x_N]

38 Viterbi Algorithm – a practical detail
Underflows are a significant problem:
P[ x_1, …, x_i, π_1, …, π_i ] = a_{0 π_1} a_{π_1 π_2} …… a_{π_{i-1} π_i} e_{π_1}(x_1) …… e_{π_i}(x_i)
These numbers become extremely small – underflow.
Solution: take the logs of all values:
V_l(i) = log e_l(x_i) + max_k [ V_k(i-1) + log a_kl ]

39 Example
Let x be a long sequence with a portion of ~1/6 6s, followed by a portion of ~1/2 6s:
x = 123456123456…12345 6626364656…1626364656
Then, it is not hard to show that the optimal parse is (exercise):
FFF…………………F LLL…………………L
Contributions of 6-character chunks:
"123456" parsed as F contributes 0.95^6 × (1/6)^6 = 1.6 × 10^-5
         parsed as L contributes 0.95^6 × (1/2)^1 × (1/10)^5 = 0.4 × 10^-5
"162636" parsed as F contributes 0.95^6 × (1/6)^6 = 1.6 × 10^-5
         parsed as L contributes 0.95^6 × (1/2)^3 × (1/10)^3 = 9.0 × 10^-5

