
1 Using Dynamic Programming To Align Sequences Cédric Notredame

2 Our Scope
- Coding a global and a local algorithm
- Understanding the DP concept
- Aligning with affine gap penalties
- Sophisticated variants…
- Saving memory

3 Outline
- Coding dynamic programming with non-affine penalties
- Adding affine penalties
- Turning a global algorithm into a local algorithm
- Using a divide and conquer strategy
- The repeated matches algorithm
- Double dynamic programming
- Tailoring DP to your needs

4 Global Alignments Without Affine Gap Penalties: Dynamic Programming

5 How to Align Two Sequences with a Gap Penalty, a Substitution Matrix, and Not Too Much Time: Dynamic Programming

6 A Bit of History…
- DP was invented in the 1950s by Bellman
- 'Programming' here means tabulation
- Re-invented in 1970 by Needleman and Wunsch
- It took 10 years to find out…

7 The Foolish Assumption: the score of each column of the alignment is independent of the rest of the alignment. It is possible to model the relationship between two sequences with: a substitution matrix and a simple gap penalty.

8 The Principle of DP: if you optimally extend an optimal alignment of two sub-sequences, the result remains an optimal alignment. (Figure: the three ways to extend an alignment, with a deletion, an aligned pair, or an insertion.)

9 Finding the score of i,j. Sequence 1: [1-i]; Sequence 2: [1-j]. The optimal alignment of [1-i] vs [1-j] can finish in three different manners: with residues i and j aligned, with residue i against a gap, or with residue j against a gap.

10 Finding the score of i,j. Three ways to build the alignment 1…i vs 1…j: extend 1…i vs 1…j-1 with residue j against a gap; extend 1…i-1 vs 1…j-1 by aligning residue i with residue j; or extend 1…i-1 vs 1…j with residue i against a gap.

11 In order to compute the score of 1…i vs 1…j, all we need are the scores of: 1…i-1 vs 1…j-1, 1…i vs 1…j-1, and 1…i-1 vs 1…j.

12 Formalizing the algorithm:
F(i,j) = best of:
  F(i-1,j-1) + Mat[i,j]  (align residue i with residue j)
  F(i,j-1) + Gep         (residue j against a gap)
  F(i-1,j) + Gep         (residue i against a gap)
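A minimal sketch of this recurrence as a Perl helper (the table reference $F, the penalty $gep and the pre-computed substitution score $match_score are illustrative names, not taken from the slides):

sub nw_cell
{
    # Best of the three ways of reaching cell (i,j) of the DP table.
    my ($F, $i, $j, $match_score, $gep) = @_;
    my @choices = ( $F->[$i-1][$j-1] + $match_score,  # align residue i with residue j
                    $F->[$i][$j-1]   + $gep,          # residue j against a gap
                    $F->[$i-1][$j]   + $gep );        # residue i against a gap
    my $best = shift @choices;
    for my $alternative (@choices) { $best = $alternative if $alternative > $best; }
    return $best;
}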

13 Arranging Everything in a Table. The scores are stored in a table indexed by the positions of the two sequences (FAT and FAST in the example); the cell for 1…i vs 1…j is computed from its three neighbours: 1…i-1 vs 1…j-1, 1…i vs 1…j-1, and 1…i-1 vs 1…j.

14 Taking Care of the Limits. In a dynamic programming strategy, the most delicate part is taking care of the limits: what happens when you start and what happens when you finish. The DP strategy relies on the idea that ALL the cells in your table have the same environment… This is NOT true of ALL the cells!!!!

15 Taking Care of the Limits (Match=2, MisMatch=-1, Gap=-1). The first row and the first column align a prefix against nothing, one gap per residue: the empty/empty corner scores 0, F vs '-' scores -1, FA vs '--' scores -2, FAT (or FAS) vs '---' scores -3, and FAST vs '----' scores -4.
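A hedged Perl sketch of this initialization and of the fill described on the previous slides, using the toy scoring from the slide (match=2, mismatch=-1, gap=-1); the variable names are illustrative:

#!/usr/bin/perl
use strict;
use warnings;

my @seqI = split //, "FAT";     # toy sequences from the slides
my @seqJ = split //, "FAST";
my $gep  = -1;                  # simple (non-affine) gap penalty

my @mat;
# Limits: aligning a prefix against nothing costs one gap per residue.
$mat[0][0] = 0;
$mat[$_][0] = $_ * $gep for 1 .. scalar(@seqI);
$mat[0][$_] = $_ * $gep for 1 .. scalar(@seqJ);

# Fill: every inner cell depends only on its three neighbours.
for my $i (1 .. scalar(@seqI))
{
    for my $j (1 .. scalar(@seqJ))
    {
        my $s   = ($seqI[$i-1] eq $seqJ[$j-1]) ? 2 : -1;
        my $sub = $mat[$i-1][$j-1] + $s;       # align residue i with residue j
        my $del = $mat[$i][$j-1]   + $gep;     # gap in the first sequence
        my $ins = $mat[$i-1][$j]   + $gep;     # gap in the second sequence
        my $best = $sub;
        $best = $del if $del > $best;
        $best = $ins if $ins > $best;
        $mat[$i][$j] = $best;
    }
}
print "Optimal global score: ", $mat[-1][-1], "\n";    # prints 5 for FAT vs FAST

Keeping, for each cell, which of the three choices won (as the slides do with a trace-back table) is all that is needed to recover the alignment itself.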

16 Filling Up the Matrix

17 (Figure: the FAT vs FAST table completely filled in with Match=2, MisMatch=-1, Gap=-1; the bottom-right cell holds the optimal global score, +5.)

18 Delivering the alignment: trace-back. The score of 1…3 vs 1…4 is the optimal alignment score; following the winning choices back from that cell recovers the alignment itself, here FAST aligned with FA-T.

19 Trace-back: possible implementation

while (!($i==0 && $j==0))
{
    if ($tb[$i][$j]==$sub)        #SUBSTITUTION
    {
        $alnI[$aln_len]=$seqI[--$i];
        $alnJ[$aln_len]=$seqJ[--$j];
    }
    elsif ($tb[$i][$j]==$del)     #DELETION
    {
        $alnI[$aln_len]='-';
        $alnJ[$aln_len]=$seqJ[--$j];
    }
    elsif ($tb[$i][$j]==$ins)     #INSERTION
    {
        $alnI[$aln_len]=$seqI[--$i];
        $alnJ[$aln_len]='-';
    }
    $aln_len++;
}
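Because the loop walks from the bottom-right corner back to (0,0) while filling @alnI and @alnJ from position 0, the aligned strings come out reversed; a short finishing step (a sketch reusing the same illustrative names) restores the left-to-right order:

my $outI = join '', reverse @alnI;
my $outJ = join '', reverse @alnJ;
print "$outI\n$outJ\n";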

20 Local Alignments Without Affine Gap Penalties: Smith and Waterman

21 Getting rid of the pieces of junk between the interesting bits: Smith and Waterman


23 The Smith and Waterman Algorithm
F(i,j) = best of:
  F(i-1,j-1) + Mat[i,j]  (align residue i with residue j)
  F(i,j-1) + Gep         (residue j against a gap)
  F(i-1,j) + Gep         (residue i against a gap)
  0

24 The Smith and Waterman Algorithm: the same recurrence as Needleman and Wunsch, plus the extra option 0.

25 The Smith and Waterman Algorithm. The 0 means: ignore the rest of the matrix and terminate a local alignment.

26 Filling Up a SW Matrix

27 Filling up a SW matrix: borders. The first row and the first column are all set to 0 (the example aligns ANICECAT against CATANDOG). Easy: local alignments NEVER start/end with a gap…

28 Filling up a SW matrix. (Figure: the matrix filled for ANICECAT vs CATANDOG; the CAT/CAT match scores 2, 4, 6 along its diagonal, and the best local score, 6, marks the beginning of the trace-back.)

29 Turning NW into SW

for ($i=1; $i<=$len0; $i++)
{
    for ($j=1; $j<=$len1; $j++)
    {
        if ($res0[0][$i-1] eq $res1[0][$j-1]){$s=2;}
        else {$s=-1;}

        $sub=$smat[$i-1][$j-1]+$s;
        $del=$smat[$i  ][$j-1]+$gep;
        $ins=$smat[$i-1][$j  ]+$gep;

        if    ($sub>$del && $sub>$ins && $sub>0)
              {$smat[$i][$j]=$sub; $tb[$i][$j]=$subcode;}
        elsif ($del>$ins && $del>0)
              {$smat[$i][$j]=$del; $tb[$i][$j]=$delcode;}
        elsif ($ins>0)
              {$smat[$i][$j]=$ins; $tb[$i][$j]=$inscode;}
        else  {$smat[$i][$j]=$zero; $tb[$i][$j]=$stopcode;}

        # Prepare the trace-back: remember the best-scoring cell.
        if ($smat[$i][$j]>$best_score)
        {
            $best_score=$smat[$i][$j];
            $best_i=$i;
            $best_j=$j;
        }
    }
}
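The local trace-back then starts from the best-scoring cell rather than from the bottom-right corner, and stops as soon as it reaches a cell flagged with the stop code. A hedged sketch, reusing the illustrative names above and assuming the codes are numeric:

my ($i, $j) = ($best_i, $best_j);
my (@alnI, @alnJ);
while ($i>0 && $j>0 && $tb[$i][$j]!=$stopcode)
{
    if ($tb[$i][$j]==$subcode)
    {
        unshift @alnI, $res0[0][--$i];     # walking backwards, so build the
        unshift @alnJ, $res1[0][--$j];     # alignment from its right end
    }
    elsif ($tb[$i][$j]==$delcode)
    {
        unshift @alnI, '-';
        unshift @alnJ, $res1[0][--$j];
    }
    else                                   # insertion
    {
        unshift @alnI, $res0[0][--$i];
        unshift @alnJ, '-';
    }
}
print join('', @alnI), "\n", join('', @alnJ), "\n";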

30 A few things to remember: SW only works if the substitution matrix has been normalized to give a negative expected score to a random alignment. Chance should not pay when it comes to local alignments!

31 More than one match… SW delivers only the best-scoring match. If you need more than one match: SIM (Huang and Miller) or Waterman and Eggert (Durbin, p. 91).

32 Waterman and Eggert: an iterative algorithm.
- 1: identify the best match
- 2: redo SW with the already-used pairs forbidden (this delivers a collection of non-overlapping local alignments and avoids trivial variations of the optimal)
- 3: finish when the last interesting local alignment has been extracted

33 Adding Affine Gap Penalties: The Gotoh Algorithm

34 Forcing a bit of Biology into your alignment: The Gotoh Formulation

35 Why Affine Gap Penalties Are Biologically Better. (Figure: the cost of a gap as a function of its length L, with an opening penalty GOP and an extension penalty GEP.) Parsimony: evolution takes the simplest path (so we think…). Cost = gop + L*gep, or Cost = gop + (L-1)*gep, depending on whether the opening position also pays the extension; for instance, with gop=-10 and gep=-1, a gap of length 5 costs -15 or -14.

36 But harder to compute… There are more than 3 ways to extend an alignment: in addition to the aligned case, a deletion or an insertion can each be either the opening of a new gap or the extension of an existing one.

37 More questions need to be asked. For instance, what is the cost of an insertion? It depends on the previous column: opening a new gap costs GOP, extending a gap that is already there costs GEP.

38 Solution: maintain 3 tables. Ix: the table that contains the score of every optimal alignment of 1…i vs 1…j that finishes with an insertion in sequence X. Iy: the table that contains the score of every optimal alignment of 1…i vs 1…j that finishes with an insertion in sequence Y. M: the table that contains the score of every optimal alignment of 1…i vs 1…j that finishes with an alignment between sequences X and Y.

39 The Algorithm
M(i,j)  = Mat(i,j) + best of: M(i-1,j-1), Ix(i-1,j-1), Iy(i-1,j-1)   (residues i and j aligned)
Ix(i,j) = best of: M(i-1,j) + gop, Ix(i-1,j) + gep                   (residue i of X against a gap)
Iy(i,j) = best of: M(i,j-1) + gop, Iy(i,j-1) + gep                   (residue j of Y against a gap)
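A hedged, self-contained Perl sketch of these recurrences (the sequences, the toy scoring, the border convention and all variable names are illustrative; the borders follow the Cost = gop + (L-1)*gep convention of slide 35):

#!/usr/bin/perl
use strict;
use warnings;

my @x = split //, "GATTACA";                # illustrative sequences
my @y = split //, "GCATGCA";
my ($gop, $gep) = (-5, -1);
my $minus_inf = -1e9;                       # forbids impossible transitions

sub score { my ($a, $b) = @_; return ($a eq $b) ? 2 : -1; }
sub max { my $m = shift; for (@_) { $m = $_ if $_ > $m } return $m; }

my (@M, @Ix, @Iy);
$M[0][0] = 0; $Ix[0][0] = $Iy[0][0] = $minus_inf;
for my $i (1 .. @x) {                       # borders: only gap states are reachable
    $M[$i][0]  = $minus_inf;
    $Ix[$i][0] = $gop + ($i - 1) * $gep;
    $Iy[$i][0] = $minus_inf;
}
for my $j (1 .. @y) {
    $M[0][$j]  = $minus_inf;
    $Ix[0][$j] = $minus_inf;
    $Iy[0][$j] = $gop + ($j - 1) * $gep;
}

for my $i (1 .. @x) {
    for my $j (1 .. @y) {
        my $s = score($x[$i-1], $y[$j-1]);
        # M: residues i and j aligned, coming from any of the three states
        $M[$i][$j] = $s + max($M[$i-1][$j-1], $Ix[$i-1][$j-1], $Iy[$i-1][$j-1]);
        # Ix: residue i of X against a gap (open a gap from M, or extend Ix)
        $Ix[$i][$j] = max($M[$i-1][$j] + $gop, $Ix[$i-1][$j] + $gep);
        # Iy: residue j of Y against a gap (open a gap from M, or extend Iy)
        $Iy[$i][$j] = max($M[$i][$j-1] + $gop, $Iy[$i][$j-1] + $gep);
    }
}
my $best = max($M[-1][-1], $Ix[-1][-1], $Iy[-1][-1]);
print "Optimal affine-gap score: $best\n";

Note that, as the recurrences imply (and as slide 41 states), a gap in one sequence cannot be followed immediately by a gap in the other without an aligned column in between, because Ix and Iy can only be entered from M or from themselves.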

40 Trace-back? Start from the best of M(i,j), Ix(i,j) and Iy(i,j).

41 Trace-back? Navigate from one table to the next (M, Ix, Iy), knowing that a gap always finishes with an aligned column…

42 Going Further? With the affine gap penalties, we have increased the number of possibilities when building our alignment. Computer scientists talk of states and represent this as a Finite State Automaton (FSAs are cousins of HMMs).

43 Going Further?

44 Going Further? In theory, there is no limit on the number of states one may consider when doing such a computation.


46 Going Further? Imagine a pairwise alignment algorithm where the gap penalty depends on the length of the gap. Can you simplify it realistically so that it can be efficiently implemented?

47 (Figure: gap lengths Lx and Ly within the two sequences.)

48 A Divide and Conquer Strategy: The Myers and Miller Strategy

49 Remember Not to Run Out of Memory: The Myers and Miller Strategy

50 A Score in Linear Space. You never need more than the previous row to compute the optimal score.

51 A Score in Linear Space
For each i:
  For each j: R2[j] = best of R1[j-1] + mat, R2[j-1] + gep, R1[j] + gep
  For each j: R1[j] = R2[j]
(R1 holds the previous row, R2 the current one.)
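A hedged, self-contained Perl sketch of this two-row strategy (sequences, scoring and names are illustrative):

#!/usr/bin/perl
use strict;
use warnings;

my @x   = split //, "GATTACA";   # illustrative sequences
my @y   = split //, "GCATGCA";
my $gep = -1;
sub score { my ($a, $b) = @_; return ($a eq $b) ? 2 : -1; }

my @prev = map { $_ * $gep } 0 .. scalar(@y);   # R1: row 0 (the border)
for my $i (1 .. scalar(@x))
{
    my @curr = ($i * $gep);                     # R2: left border of row i
    for my $j (1 .. scalar(@y))
    {
        my $best = $prev[$j-1] + score($x[$i-1], $y[$j-1]);          # substitution
        $best = $curr[$j-1] + $gep if $curr[$j-1] + $gep > $best;    # gap
        $best = $prev[$j]   + $gep if $prev[$j]   + $gep > $best;    # gap
        $curr[$j] = $best;
    }
    @prev = @curr;                              # R1 <- R2: row i-1 is discarded
}
print "Optimal global score in linear space: ", $prev[-1], "\n";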

52 A Score in Linear Space

53 A Score in Linear Space. You never need more than the previous row to compute the optimal score. You only need the full matrix for the trace-back… or do you????

54 An Alignment in Linear Space. Forward algorithm: F(i,j) = optimal score of 0…i vs 0…j. Backward algorithm: B(i,j) = optimal score of i…M vs j…N (the two suffixes). F(i,j) + B(i,j) = optimal score of the alignment that passes through the pair i,j.

55 An Alignment in Linear Space. (Figure: a forward pass and a backward pass meet; the cells where F(i,j) + B(i,j) equals the optimal score lie on the optimal path.)


57 An Alignment in Linear Space. Combine the forward and the backward algorithms in a recursive divide and conquer strategy: Myers and Miller (Durbin, p. 35).
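A hedged sketch of the divide step (the recursion itself is omitted): compute forward scores down to the middle row and backward scores up from the end, then split the problem at the column where F + B is maximal. Sequences, scoring and names are illustrative:

#!/usr/bin/perl
use strict;
use warnings;

my $gep = -1;
sub score { my ($a, $b) = @_; return ($a eq $b) ? 2 : -1; }

# Last row of the simple global DP for two (sub)sequences, in linear space.
sub last_row {
    my ($xr, $yr) = @_;                              # array references
    my @prev = map { $_ * $gep } 0 .. scalar(@$yr);
    for my $i (1 .. @$xr) {
        my @curr = ($i * $gep);
        for my $j (1 .. @$yr) {
            my $best = $prev[$j-1] + score($xr->[$i-1], $yr->[$j-1]);
            $best = $curr[$j-1] + $gep if $curr[$j-1] + $gep > $best;
            $best = $prev[$j]   + $gep if $prev[$j]   + $gep > $best;
            $curr[$j] = $best;
        }
        @prev = @curr;
    }
    return @prev;
}

my @x   = split //, "GATTACA";
my @y   = split //, "GCATGCA";
my $mid = int(@x / 2);

# Forward: score of the first half of x against every prefix of y.
my @F = last_row( [ @x[0 .. $mid-1] ], [ @y ] );
# Backward: score of the second half of x against every suffix of y
# (run the same DP on the reversed halves, then flip the resulting row).
my @B = reverse last_row( [ reverse @x[$mid .. $#x] ], [ reverse @y ] );

# The optimal alignment crosses the middle row at the column maximizing F + B.
my ($split, $best) = (0, $F[0] + $B[0]);
for my $j (1 .. $#F) {
    ($split, $best) = ($j, $F[$j] + $B[$j]) if $F[$j] + $B[$j] > $best;
}
print "Optimal score $best; the optimal path crosses row $mid at column $split\n";
# Recursing on the two halves (rows 0..mid vs columns 0..split, and rows
# mid..end vs columns split..end) reconstructs the full alignment in linear space.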

58 An Alignment in Linear Space

59 A Forward-only Strategy (Durbin, p. 35). Forward algorithm: keep row M in memory; keep track of which cell in row M leads to the optimal score; divide on this cell.

60 (Figure: the table is split on the cell of row M that the optimal path passes through.)

61 An interesting application: finding sub-optimal alignments. Sum the forward and backward scores to identify the score of the best alignment going through cell i,j.

62 Application: Non-local Models. Double Dynamic Programming

63 Outline. The main limitation of DP: a context-independent measure.

64 Double Dynamic Programming. A high-level Smith and Waterman dynamic programming: Score = max of S(i-1,j-1) + RMSd score, S(i,j-1) + gp, … where the RMSd score is obtained from a rigid-body superposition in which residues i and j are forced together.

65 Double Dynamic Programming

66 Application: Repeats. The Durbin Algorithm


68 In The End: Wrapping It Up

69 Dynamic Programming. Needleman and Wunsch: delivers the best-scoring global alignment. Smith and Waterman: NW with an extra state, 0. Affine gap penalties: making DP more realistic.

70 Dynamic Programming. Linear space: using divide and conquer strategies so as not to run out of memory. Double dynamic programming, repeat extraction: DP can easily be adapted to a special need.

