
1 Probabilistic Sequence Alignment
BMI 877
Colin Dewey (cdewey@biostat.wisc.edu)
February 25, 2014

2 What you've seen thus far
– The pairwise sequence alignment task
– Notion of a "best" alignment: scoring schemes
– Dynamic programming algorithms for efficiently finding a "best" alignment
– Variants on the task: global, local, different gap penalty functions
– Heuristic methods for large-scale alignment: BLAST

3 Tasks addressed today
– How can we express the uncertainty of an alignment?
– How can we estimate the parameters for alignment?
– How can we align multiple sequences to each other?

4 Picking Alignments
[Figure: two alternative global alignments of orthologous D. melanogaster (mel) and D. pseudoobscura (pse) sequences, shown in full.]
– Alignment 1 summary: 27 mismatches, 12 gaps, 116 spaces
– Alignment 2 summary: 45 mismatches, 4 gaps, 214 spaces

5 An Elusive Cis-Regulatory Element
Drosophila melanogaster polytene chromosomes
>chr3R:2824911-2825170
TGTTGTGTGATGTTGATTTCTTTACGACTCCTATCAAACTAAACCCATAAAGCATTCAAT
TCAAAGCATATACATGTGAAAATCCCAGCGAGAACTCCTTATTAATCCAGCGCAGTCGGC
GGCGGCGGCGCGCAGTCAGCGGTGGCAGCGCAGTATATAAATAAAGTCTTATAAGAAACT
CGTGAGCGAAAGAGAGCGTTTTATTTATGTGCGTCAGCGTCGGCCGCAACAGCGCCGTCA
GCACTGGCAGCGACTGCGAC
Adf1→Antp:06447: binding site for the transcription factor Adf1 (Antp: antennapedia)

6 The Conservation of Adf1→Antp:06447
[Figure: the same two alternative mel/pse alignments as on slide 4, with the region containing the Adf1→Antp:06447 binding site highlighted.]
The binding-site region under the two alignments:

Alignment 1 (27 mismatches, 12 gaps, 116 spaces overall):
TGTGCGTCAGCGTCGGCCGCAACAGCG
TGTG-----------------ACTGCG
33% identity

Alignment 2 (45 mismatches, 4 gaps, 214 spaces overall):
TGTGCGTCAGCGTCGGCCGCAACAGCG
TGTGCGCCAGCGTCAGCGCCAGCGCCG
74% identity

7 The Polytope
[Figure: the alignment polytope for the two sequences, with the two alternative alignments of the binding-site region corresponding to different vertices.]
– 364 vertices, 760 ridges, 398 facets
– Sequence lengths = 260bp, 280bp

8 Methodological machinery to be used
– Hidden Markov models (HMMs)
  – Viterbi and Forward algorithms
  – Profile HMMs
  – Pair HMMs
  – Connections to classical sequence alignment

9 Hidden Markov models
– Generative probabilistic models of sequences
– Explicitly model unobserved (hidden) states that "emit" the characters of the observed sequence
– Primary task of interest: infer the hidden states given the observed sequence
– Alignment case: hidden states = alignment

10 Two HMM random variables
– Observed sequence: x = x_1 x_2 ... x_L
– Hidden state sequence (path): π = π_1 π_2 ... π_L
– HMM:
  – Markov chain over the hidden sequence: π_i depends only on π_{i-1}
  – Dependence between hidden state and observed character: x_i depends only on π_i

11 The Parameters of an HMM
– As in Markov chain models, we have transition probabilities: a_kl = P(π_i = l | π_{i-1} = k), the probability of a transition from state k to state l, where π represents a path (sequence of states) through the model
– Since we've decoupled states and characters, we also have emission probabilities: e_k(b) = P(x_i = b | π_i = k), the probability of emitting character b in state k

12 A Simple HMM with Emission Parameters
[Figure: a five-state HMM with silent begin (state 0) and end (state 5) states, transition probabilities on the edges, and an emission distribution over {A, C, G, T} at each emitting state. The visible emission tables are (A 0.4, C 0.1, G 0.2, T 0.3), (A 0.1, C 0.4, G 0.4, T 0.1), (A 0.2, C 0.3, G 0.3, T 0.2), and (A 0.4, C 0.1, G 0.1, T 0.4); callouts mark the probability of emitting character A in state 2 and the probability of a transition from state 1 to state 3.]

13 Three Important Questions
– How likely is a given sequence? (the Forward algorithm)
– What is the most probable "path" (sequence of hidden states) for generating a given sequence? (the Viterbi algorithm)
– How can we learn the HMM parameters given a set of sequences? (the Forward-Backward / Baum-Welch algorithm)

14 How Likely is a Given Path and Sequence?
The probability that the path π is taken and the sequence x is generated (assuming begin/end are the only silent states on the path):

P(x, π) = a_{0 π_1} · ∏_{i=1..L} e_{π_i}(x_i) · a_{π_i π_{i+1}}

where π_{L+1} denotes the end state. (A small code sketch follows.)
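To make the product concrete, here is a minimal Python sketch that evaluates this joint probability; the dictionary layout (a for transitions, e for emissions, with "begin"/"end" keys for the silent states) is my own convention, not from the slides.

def joint_prob(x, path, a, e):
    # a[k][l]: transition probability k -> l; e[k][b]: probability of
    # emitting character b in state k.
    p = a["begin"][path[0]]                      # enter the first state
    for i, (state, ch) in enumerate(zip(path, x)):
        p *= e[state][ch]                        # emit the character
        nxt = path[i + 1] if i + 1 < len(path) else "end"
        p *= a[state][nxt]                       # move to the next state (or end)
    return p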

15 How Likely Is A Given Path and Sequence?
[Figure: a worked example on the five-state HMM from slide 12, multiplying the transition and emission probabilities along a particular path.]

16 How Likely is a Given Sequence?
– We usually only observe the sequence, not the path
– To find the probability of a sequence, we must sum over all possible paths
– But the number of paths can be exponential in the length of the sequence...
– The Forward algorithm enables us to compute this efficiently

17 How Likely is a Given Sequence: The Forward Algorithm
– A dynamic programming solution
– Subproblem: define f_k(i) to be the probability of generating the first i characters and ending in state k
– We want to compute f_N(L), the probability of generating the entire sequence x and ending in the end state (state N)
– Can define this recursively

18 The Forward Algorithm
– Because of the Markov property, we don't have to explicitly enumerate every path
– e.g. we can compute the value for a state at position i from the position i-1 values of the states that can transition into it
[Figure: the five-state HMM from slide 12 again, illustrating this reuse of subproblem values.]

19 The Forward Algorithm
Initialization: f_0(0) = 1, f_k(0) = 0 for k > 0
(the probability that we're in the start state and have observed 0 characters from the sequence)

20 The Forward Algorithm
Recursion for emitting states (i = 1...L):
  f_l(i) = e_l(x_i) · Σ_k f_k(i-1) a_kl
Recursion for silent states:
  f_l(i) = Σ_k f_k(i) a_kl

21 The Forward Algorithm
Termination:
  P(x) = f_N(L) = Σ_k f_k(L) a_kN
(the probability that we're in the end state and have observed the entire sequence; see the sketch below)
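Putting the initialization, recursion, and termination together, a minimal Forward implementation might look like this. The emission tables echo two of the distributions on slide 12, but the two-state wiring and transition values are made up so the sketch runs; internal silent states are not handled.

def forward(x, states, a, e):
    # a[k][l]: transition probabilities (with silent "begin"/"end" states);
    # e[k][b]: emission probabilities. Returns P(x).
    f = {k: a["begin"][k] * e[k][x[0]] for k in states}      # initialization
    for ch in x[1:]:                                          # recursion
        f = {l: e[l][ch] * sum(f[k] * a[k][l] for k in states)
             for l in states}
    return sum(f[k] * a[k]["end"] for k in states)            # termination

# Hypothetical two-state model (not the lecture's model)
states = ["s1", "s2"]
a = {"begin": {"s1": 0.5, "s2": 0.5},
     "s1": {"s1": 0.7, "s2": 0.2, "end": 0.1},
     "s2": {"s1": 0.2, "s2": 0.7, "end": 0.1}}
e = {"s1": {"A": 0.4, "C": 0.1, "G": 0.2, "T": 0.3},
     "s2": {"A": 0.1, "C": 0.4, "G": 0.4, "T": 0.1}}
print(forward("TAGA", states, a, e))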

22 Forward Algorithm Example
[Figure: the five-state HMM from slide 12.]
Given the sequence x = TAGA

23 Forward Algorithm Example
Given the sequence x = TAGA:
[Worked example: the initialization values and the computation of the other f_k(i) values, filled in step by step.]

24 Three Important Questions
– How likely is a given sequence?
– What is the most probable "path" for generating a given sequence?
– How can we learn the HMM parameters given a set of sequences?

25 Finding the Most Probable Path: The Viterbi Algorithm
– Dynamic programming approach, again!
– Subproblem: define v_k(i) to be the probability of the most probable path accounting for the first i characters of x and ending in state k
– We want to compute v_N(L), the probability of the most probable path accounting for all of the sequence and ending in the end state
– Can define recursively; can use DP to find it efficiently

26 Finding the Most Probable Path: The Viterbi Algorithm
Initialization: v_0(0) = 1, v_k(0) = 0 for k > 0

27 The Viterbi Algorithm
Recursion for emitting states (i = 1...L):
  v_l(i) = e_l(x_i) · max_k [ v_k(i-1) a_kl ]
Recursion for silent states:
  v_l(i) = max_k [ v_k(i) a_kl ]
Keep track of the most probable path with pointers: ptr_l(i) = argmax_k [ v_k(i-1) a_kl ]

28 The Viterbi Algorithm
Termination: v_N(L) = max_k [ v_k(L) a_kN ]
Traceback: follow the pointers back, starting at the end state (as sketched below)
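A matching Viterbi sketch in Python, in log space to avoid underflow. It uses the same hypothetical model layout as the Forward sketch and assumes all referenced probabilities are nonzero so that log() is defined.

import math

def viterbi(x, states, a, e):
    # v[k] holds v_k(i) in log space for the current position i
    v = {k: math.log(a["begin"][k]) + math.log(e[k][x[0]]) for k in states}
    pointers = []                               # one back-pointer dict per position
    for ch in x[1:]:
        step, v_new = {}, {}
        for l in states:
            best = max(states, key=lambda k: v[k] + math.log(a[k][l]))
            step[l] = best                      # ptr_l(i)
            v_new[l] = v[best] + math.log(a[best][l]) + math.log(e[l][ch])
        pointers.append(step)
        v = v_new
    last = max(states, key=lambda k: v[k] + math.log(a[k]["end"]))
    path = [last]
    for step in reversed(pointers):             # traceback
        path.append(step[path[-1]])
    return list(reversed(path))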

29 Forward & Viterbi Algorithms
– The Forward and Viterbi algorithms effectively consider all possible paths for a sequence
  – Forward: to find the probability of a sequence
  – Viterbi: to find the most probable path
– Consider a sequence of length 4...
[Figure: the trellis of all begin-to-end paths for a sequence of length 4.]

30 HMM parameter estimation
– Easy if the hidden path is known for each sequence
– In general, the paths are unknown
– The Baum-Welch (Forward-Backward) algorithm is used to compute maximum likelihood estimates
– The Backward algorithm is the analog of the Forward algorithm, computing probabilities of suffixes of a sequence

31 Learning Parameters: The Baum-Welch Algorithm
Algorithm sketch:
– initialize the parameters of the model
– iterate until convergence (sketched below):
  – calculate the expected number of times each transition or emission is used
  – adjust the parameters to maximize the likelihood given these expected counts
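A compact sketch of the full EM loop under the same hypothetical model layout as the earlier sketches. To keep it short, the begin/end transition probabilities are held fixed rather than re-estimated, and there are no convergence checks or zero-count guards.

def fwd_bwd(x, states, a, e):
    # Forward and Backward matrices plus the sequence likelihood P(x).
    L = len(x)
    f = [dict.fromkeys(states, 0.0) for _ in range(L)]
    b = [dict.fromkeys(states, 0.0) for _ in range(L)]
    for k in states:
        f[0][k] = a["begin"][k] * e[k][x[0]]
        b[L - 1][k] = a[k]["end"]
    for i in range(1, L):
        for l in states:
            f[i][l] = e[l][x[i]] * sum(f[i - 1][k] * a[k][l] for k in states)
    for i in range(L - 2, -1, -1):
        for k in states:
            b[i][k] = sum(a[k][l] * e[l][x[i + 1]] * b[i + 1][l] for l in states)
    px = sum(f[L - 1][k] * a[k]["end"] for k in states)
    return f, b, px

def baum_welch(seqs, states, alphabet, a, e, n_iter=20):
    for _ in range(n_iter):
        # E-step: expected transition (A) and emission (Ecnt) counts
        A = {k: dict.fromkeys(states, 0.0) for k in states}
        Ecnt = {k: dict.fromkeys(alphabet, 0.0) for k in states}
        for x in seqs:
            f, b, px = fwd_bwd(x, states, a, e)
            for i in range(len(x)):
                for k in states:
                    Ecnt[k][x[i]] += f[i][k] * b[i][k] / px   # P(state k at i | x)
                    if i + 1 < len(x):
                        for l in states:
                            A[k][l] += (f[i][k] * a[k][l] *
                                        e[l][x[i + 1]] * b[i + 1][l] / px)
        # M-step: renormalize counts (end-transition probability held fixed)
        for k in states:
            tot = sum(A[k].values()) or 1.0
            for l in states:
                a[k][l] = (1 - a[k]["end"]) * A[k][l] / tot
            tot_e = sum(Ecnt[k].values()) or 1.0
            for c in alphabet:
                e[k][c] = Ecnt[k][c] / tot_e
    return a, e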

32 How can we use HMMs for pairwise alignment?
– What is the observed sequence?
  – one of the two sequences?
  – both sequences?
– What is the hidden path?
  – the alignment

33 Profile HMM for pairwise alignment
– Select one sequence to be observed (the query)
– The other sequence (the reference) defines the states of the HMM
– Three classes of states:
  – Match: corresponds to aligned positions
  – Delete: positions of the reference that are deleted in the query
  – Insert: positions of the query that are insertions relative to the reference

34 Profile HMMs
[Figure: a profile HMM with match states m_1 to m_3, delete states d_1 to d_3, insert states i_0 to i_3, and start/end states; a match state carries an emission distribution over amino acids, e.g. A 0.01, R 0.12, D 0.04, N 0.29, C 0.01, E 0.03, Q 0.02, G 0.01, ...]
– Match states represent key conserved positions
– Insert states account for extra characters in some sequences
– Delete states are silent; they account for characters missing in some sequences
– Insert and match states have emission distributions over sequence characters

35 Example Profile HMM
[Figure from A. Krogh, An Introduction to Hidden Markov Models for Biological Sequences, showing match states, delete states (silent), and insert states.]

36 Profile HMM considerations
– Odd asymmetry: we have to pick one sequence as the reference
– Models the conditional probability P(X|Y) of the query sequence X given the reference sequence Y
– Is there something more natural here? Yes: Pair HMMs
– We will revisit profile HMMs for multiple alignment a bit later

37 Pair Hidden Markov Models
Each non-silent state emits one character or a pair of characters:
– H: homology (match) state, emits a pair of characters
– I: insert state, emits a character of the first sequence only
– D: delete state, emits a character of the second sequence only

38 PHMM Paths = Alignments
sequence 1: AAGCGC
sequence 2: ATGTC

hidden:    B  H  H  I  I  H  D  H  E
observed:     A  A  G  C  G  -  C
              A  T  -  -  G  T  C

39 Transition Probabilities
Probabilities of moving between states at each step (rows: state i; columns: state i+1; blank entries are zero):

          H       I     D     E
  B    1-2δ-τ     δ     δ     τ
  H    1-2δ-τ     δ     δ     τ
  I    1-ε-τ      ε           τ
  D    1-ε-τ            ε     τ

40 Emission Probabilities
The Insertion (I) and Deletion (D) states emit single characters, one distribution each:
  I: A 0.3, C 0.2, G 0.3, T 0.2
  D: A 0.1, C 0.4, G 0.4, T 0.1
The Homology (H) state emits pairs of characters:

        A     C     G     T
  A   0.13  0.03  0.06  0.03
  C   0.03  0.13  0.03  0.06
  G   0.06  0.03  0.13  0.03
  T   0.03  0.06  0.03  0.13

41 Pair HMM Viterbi
Define v^H(i,j), v^I(i,j), and v^D(i,j) as the probability of the most likely sequence of hidden states generating the length-i prefix of x and the length-j prefix of y, with the last state being H, I, or D, respectively:
– v^H(i,j) is computed from the (i-1, j-1) values times the pair emission probability p(x_i, y_j)
– v^I(i,j) is computed from the (i-1, j) values times q(x_i)
– v^D(i,j) is computed from the (i, j-1) values times q(y_j)
In each case the maximum is taken over the possible previous states, weighted by the corresponding transition probabilities. Note that the recurrence relations here allow I→D and D→I transitions. (A code sketch follows.)
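A minimal pair-HMM Viterbi sketch in Python. The layout is my own: it returns only the log-probability of the best alignment (traceback pointers omitted), uses a single background distribution q for both I and D, and, unlike the lecture's recurrences, does not allow direct I↔D transitions.

import math

def pair_viterbi(x, y, p, q, delta, eps, tau):
    # p[a][b]: pair emission probabilities for H; q[a]: single-character
    # emission probabilities for I and D. Log space to avoid underflow.
    n, m = len(x), len(y)
    NEG = float("-inf")
    vH = [[NEG] * (m + 1) for _ in range(n + 1)]
    vI = [[NEG] * (m + 1) for _ in range(n + 1)]
    vD = [[NEG] * (m + 1) for _ in range(n + 1)]
    vH[0][0] = 0.0                                  # begin state behaves like H
    lh, lih = math.log(1 - 2 * delta - tau), math.log(1 - eps - tau)
    ld, le = math.log(delta), math.log(eps)
    for i in range(n + 1):
        for j in range(m + 1):
            if i and j:                             # H consumes one char of each
                vH[i][j] = math.log(p[x[i-1]][y[j-1]]) + max(
                    vH[i-1][j-1] + lh, vI[i-1][j-1] + lih, vD[i-1][j-1] + lih)
            if i:                                   # I consumes a char of x only
                vI[i][j] = math.log(q[x[i-1]]) + max(
                    vH[i-1][j] + ld, vI[i-1][j] + le)
            if j:                                   # D consumes a char of y only
                vD[i][j] = math.log(q[y[j-1]]) + max(
                    vH[i][j-1] + ld, vD[i][j-1] + le)
    return math.log(tau) + max(vH[n][m], vI[n][m], vD[n][m])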

42 PHMM Alignment
– Calculate the probability of the most likely alignment
– Traceback, as in Needleman-Wunsch (NW), to obtain the sequence of states giving the highest probability: HIDHHDDIIHH...

43 Correspondence with Needleman-Wunsch (NW)
NW values ≈ logarithms of Pair HMM Viterbi values
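One standard way to spell this out (following the treatment in Durbin et al., Biological Sequence Analysis; the slide itself gives only the headline) is to take logs of the Viterbi recurrence relative to a random background model with stopping probability η. The pair-HMM parameters then become NW-style scores:

  substitution score:  s(a, b) = log( p_ab / (q_a q_b) ) + log( (1 - 2δ - τ) / (1 - η)^2 )
  gap-open penalty:    d = -log( δ (1 - ε - τ) / ((1 - η)(1 - 2δ - τ)) )
  gap-extend penalty:  e = -log( ε / (1 - η) )

Running NW with these scores recovers the same alignment as the pair-HMM Viterbi algorithm.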

44 Posterior Probabilities
– There are similar recurrences for the Forward and Backward values
– From the Forward and Backward values, we can calculate the posterior probability that the path passes through a certain state S after generating the length-i and length-j prefixes:
  P(state S at (i, j) | x, y) = f^S(i, j) · b^S(i, j) / P(x, y)

45 Uses for Posterior Probabilities
– sampling of suboptimal alignments
– posterior probability of pairs of residues being homologous (aligned to each other)
– posterior probability of a residue being gapped
– training model parameters (EM)

46 Posterior Probabilities
[Figure: plot of the posterior probability of each alignment column.]

47 Parameter Training
– Supervised training
  – given: sequences and correct alignments
  – do: calculate parameter values that maximize the joint likelihood of the sequences and alignments
– Unsupervised training
  – given: sequence pairs, but no alignments
  – do: calculate parameter values that maximize the marginal likelihood of the sequences (summing over all possible alignments)

48 Multiple Alignment with Profile HMMs
– Given a set of sequences to be aligned:
  – use Baum-Welch to learn the parameters of the model
  – may also adjust the length of the profile HMM during training
– To compute a multiple alignment given the profile HMM (see the sketch below):
  – run the Viterbi algorithm on each sequence
  – the Viterbi paths indicate correspondences among the sequences
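As an illustration of how Viterbi paths induce alignment columns, here is a small Python sketch. The path tuple format is my own convention, and insert-state characters, which require extra columns between match columns, are left out for brevity.

def align_from_paths(paths, n_match):
    # paths: one Viterbi path per sequence, each a list of
    # (state_type, match_index, char) tuples such as ("M", 3, "A"),
    # ("D", 2, None), or ("I", 3, "G").
    rows = [["-"] * n_match for _ in paths]
    for s, path in enumerate(paths):
        for state_type, k, ch in path:
            if state_type == "M":
                rows[s][k - 1] = ch    # match states define the columns
            # delete states leave the gap; insert states would need
            # extra columns and are omitted from this sketch
    return ["".join(r) for r in rows]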

49 Multiple Alignment with Profile HMMs
[Figure: a multiple alignment assembled from the Viterbi paths of several sequences through a profile HMM.]

50 More common multiple alignment strategy: Progressive alignment
[Figure: a guide tree in which the sequences TGTAAC, TGTAC, ATGTC, ATGTGGC, and TGTTAAC are aligned pairwise at the leaves (e.g. TGTAAC/TGT-AC and ATGT--C/ATGTGGC) and the intermediate alignments are successively merged, ending in the full multiple alignment:
  -TGTTAAC
  -TGT-AAC
  -TGT--AC
  ATGT---C
  ATGT-GGC ]

51 Classification w/ Profile HMMs
– To classify sequences according to family, we can train a profile HMM to model the proteins of each family of interest (e.g. β-galactosidase, β-glucanase, β-amylase, α-amylase)
– Given a sequence x, use Bayes' rule to make the classification, as sketched below
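A sketch of the Bayes-rule step in Python. Here log_likelihood is an assumed helper returning log P(x | family), e.g. a Forward-algorithm score against that family's profile HMM, and the priors are hypothetical.

import math

def classify(x, family_hmms, priors):
    # Unnormalized log posteriors: log P(x | family) + log P(family)
    log_post = {fam: log_likelihood(x, hmm) + math.log(priors[fam])
                for fam, hmm in family_hmms.items()}
    # Log-sum-exp normalization yields posterior probabilities P(family | x)
    mx = max(log_post.values())
    z = sum(math.exp(v - mx) for v in log_post.values())
    return {fam: math.exp(v - mx) / z for fam, v in log_post.items()}

# e.g. classify(seq, {"alpha-amylase": hmm1, "beta-amylase": hmm2},
#               {"alpha-amylase": 0.5, "beta-amylase": 0.5})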

52 PFAM
– Large database of protein families
– Each family has a trained profile HMM
– Example search with a globin sequence: [Figure: PFAM search results for a globin query.]

53 Summary
Probabilistic models for alignment are more powerful than classical combinatorial alignment algorithms:
– they capture uncertainty in the alignment
– they allow for principled estimation of parameters
– they are easily used in classification settings

