
CS5263 Bioinformatics RNA Secondary Structure Prediction.


1 CS5263 Bioinformatics RNA Secondary Structure Prediction

2 Outline
- Biological roles for RNA
- RNA secondary structure
  - What is "secondary structure"?
  - How is it represented?
  - Why is it important?
- How to predict it?

3 Central dogma: the flow of genetic information. DNA → RNA (transcription) → Protein (translation); DNA → DNA (replication).

4 Classical roles for RNA: mRNA, tRNA, rRNA (the ribosome).

5 "Semi-classical" RNA
- snRNA: small nuclear RNA (60-300 nt), involved in splicing (removing introns), etc.
- RNase P: tRNA processing (~300 nt)
- SRP RNA: signal recognition particle RNA, membrane targeting (~100-300 nt)
- tmRNA: resets stalled ribosomes, destroys aberrant mRNA
- Telomerase RNA (200-400 nt)
- snoRNA: small nucleolar RNA (many varieties; 80-200 nt)

6 Non-coding RNAs
- Dramatic discoveries in the last 10 years: hundreds of new families
- Many roles: regulation, transport, stability, catalysis, ...
- siRNA (small interfering RNA; Nobel Prize 2006) and miRNA: both ~21-23 nt
  - Regulate gene expression
  - Evidence of disease association
- Only ~1% of DNA codes for protein, but ~30% of it is copied into RNA, i.e. ncRNA >> mRNA

7 Take-home message: RNAs play many important roles in the cell beyond the classical ones, many of which are yet to be discovered. RNA function is determined by structure.

8 Example: Riboswitch. A riboswitch is a segment of an mRNA that regulates the mRNA's own activity.

9 RNA structure. Primary: sequence. Secondary: base-pairing. Tertiary: 3D shape.

10 RNA base-pairing
- Watson-Crick pairs: C-G ~3 kcal/mol, A-U ~2 kcal/mol
- "Wobble" pair: G-U ~1 kcal/mol
- Non-canonical pairs

11 tRNA structure

12 Secondary structure prediction. Given a sequence, e.g. CAUUUGUGUACCU..., the goal is to predict its base-paired secondary structure. How can we compute that?

13 Terminology: hairpin loops, stems, bulge loops, interior loops, multi-branched loops.

14 Pseudoknot. Pseudoknots (crossing base pairs) make structure prediction hard, and are not considered in most algorithms. (Figure: a pseudoknotted structure of the sequence ucgacuguaaaaaagcgggcgacuuucagucgcucuuuuugucgcgcgc, positions numbered 5' to 3'.)

15 The Nussinov algorithm
- Goal: maximize the number of base-pairs
- Idea: dynamic programming ("loop matching"; Nussinov, Pieczenik, Griggs, Kleitman '78)
- Too simple for accurate prediction, but a stepping-stone for later algorithms

16 The Nussinov algorithm. Problem: find the RNA structure with the maximum (weighted) number of nested pairings. Nested: no pseudoknots. (Figure: the sequence ACCACGCUUAAGACACCUAGCUUGUGUCCUGGAGGUCUAUAAGUCAGACCGCGAGAGGGAAGACUCGUAUAAGCG and a nested secondary structure for it.)

17 The Nussinov algorithm. Given sequence X = x1...xN, define the DP matrix F(i, j) = maximum number of base-pairs if xi...xj folds optimally. The matrix is symmetric, so let i < j.

18 The Nussinov algorithm. Can be summarized in two cases:
- (i, j) paired: optimal score is 1 + F(i+1, j-1)
- (i, j) unpaired: optimal score is max_k [F(i, k) + F(k+1, j)], k = i..j-1

19 The Nussinov algorithm
F(i, i) = 0
F(i, j) = max { F(i+1, j-1) + S(xi, xj),  max_{k=i..j-1} [F(i, k) + F(k+1, j)] }
S(xi, xj) = 1 if xi, xj can form a base-pair, and 0 otherwise.
- Generalize: S(A, U) = 2, S(C, G) = 3, S(G, U) = 1
- Or other types of scores (later)
F(1, N) gives the optimal score for the whole sequence.

20 How to fill in the DP matrix? F(i, j) = max { F(i+1, j-1) + S(xi, xj),  max_{k=i..j-1} [F(i, k) + F(k+1, j)] }. Cell (i, j) depends on cell (i+1, j-1) and on cells (i, k), (k+1, j), so fill the upper triangle diagonal by diagonal.

21 How to fill in the DP matrix? Diagonal j - i = 1.

22 How to fill in the DP matrix? Diagonal j - i = 2.

23 How to fill in the DP matrix? Diagonal j - i = 3.

24 How to fill in the DP matrix? ... up to the last diagonal, j - i = N - 1.

25 Minimum loop length. Sharp turns are unlikely, so let the minimum length of a hairpin loop be 1 (3 in real predictions): F(i, i+1) = 0.

26 Algorithm
Initialization:
  F(i, i) = 0 for i = 1 to N
  F(i, i+1) = 0 for i = 1 to N-1
Iteration:
  for L = 2 to N-1
    for i = 1 to N - L
      j = i + L
      F(i, j) = max { F(i+1, j-1) + S(xi, xj),  max_{i <= k < j} [F(i, k) + F(k+1, j)] }
Termination: the best score is given by F(1, N).
(For traceback, refer to the Durbin book.)
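The algorithm above can be sketched in Python (a minimal sketch: 0-based indices, unit scores for A-U, G-C and G-U pairs, and a simple recursive traceback to dot-bracket notation rather than the Durbin-book version):

```python
def nussinov(seq, min_loop=1):
    """Nussinov '78 DP: maximize the number of nested base pairs.

    Returns (max_pairs, dot_bracket). Unit score per pair; min_loop is
    the shortest allowed hairpin loop (slides use 1, real tools use 3).
    """
    n = len(seq)
    can_pair = {("A", "U"), ("U", "A"), ("C", "G"),
                ("G", "C"), ("G", "U"), ("U", "G")}
    F = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):      # diagonals j - i = span
        for i in range(n - span):
            j = i + span
            # bifurcation: split [i, j] into [i, k] and [k+1, j]
            best = max(F[i][k] + F[k + 1][j] for k in range(i, j))
            # or pair i with j if allowed
            if (seq[i], seq[j]) in can_pair:
                best = max(best, F[i + 1][j - 1] + 1)
            F[i][j] = best

    struct = ["."] * n
    def trace(i, j):
        if j - i <= min_loop:
            return
        if (seq[i], seq[j]) in can_pair and F[i][j] == F[i + 1][j - 1] + 1:
            struct[i], struct[j] = "(", ")"
            trace(i + 1, j - 1)
            return
        for k in range(i, j):                # find a matching split
            if F[i][j] == F[i][k] + F[k + 1][j]:
                trace(i, k)
                trace(k + 1, j)
                return
    trace(0, n - 1)
    return F[0][n - 1], "".join(struct)
```

On the slides' example GGGAAAUCC this returns 3 pairs with structure (((...))), matching slide 33.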

27 Complexity. The iteration fills O(N^2) cells, and each cell takes O(N) work for the max over k. Time complexity: O(N^3). Memory: O(N^2).

28 Example. RNA sequence: GGGAAAUCC. Only count the number of base-pairs: A-U = 1, G-C = 1, G-U = 1. Minimum hairpin loop length = 1.

29 00 00 00 00 00 00 00 00 0 G G G A A A U C C G G G A A A U C CG G G A A A U C C

30 000 000 000 000 001 000 000 00 0 G G G A A A U C CG G G A A A U C C

31 0000 0000 0000 0001 0011 0000 000 00 0 G G G A A A U C CG G G A A A U C C

32 00000 00000 00001 00011 00111 0000 000 00 0 G G G A A A U C CG G G A A A U C C

33 000000123 00000123 0000122 000111 00111 0000 000 00 0 G G G A A A U C CG G G A A A U C C A  U G  C G AA G  U G  C AAA A  U G G  C AA

34 000000123 00000123 0000122 000111 00111 0000 000 00 0 G G G A A A U C C G G G A A A U C CG G G A A A U C C A  U G  C G AA G  U G  C AAA A  U G G  C AA

35 000000123 00000123 0000122 000111 00111 0000 000 00 0 G G G A A A U C C G G G A A A U C CG G G A A A U C C A  U G  C G AA G  U G  C AAA A  U G G  C AA

36 000000123 00000123 0000122 000111 00111 0000 000 00 0 G G G A A A U C C G G G A A A U C CG G G A A A U C C A  U G  C G AA G  U G  C AAA A  U G G  C AA

37 Energy minimization
for L = 2 to N-1
  for i = 1 to N - L
    j = i + L
    E(i, j) = min { E(i+1, j-1) + e(xi, xj),  min_{i <= k < j} [E(i, k) + E(k+1, j)] }
e(xi, xj) is the energy of a base-pair between xi and xj. Energies of favorable pairs are negative, hence minimization rather than maximization. More complex energy rules: energy depends on the neighboring bases.
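The minimization variant is the same DP skeleton with max replaced by min. A minimal sketch, using the rough per-pair energies from slide 10 (C-G ~ -3, A-U ~ -2, G-U ~ -1 kcal/mol) as toy values for e(xi, xj); real folding programs use nearest-neighbor (stacking and loop) rules instead of independent pair energies:

```python
def min_energy(seq, min_loop=1):
    """Energy-minimization variant of the Nussinov DP.

    Toy independent per-pair energies in kcal/mol (an assumption for
    illustration); returns the minimum total energy E(1, N).
    """
    e = {("C", "G"): -3.0, ("A", "U"): -2.0, ("G", "U"): -1.0}
    e.update({(b, a): v for (a, b), v in e.items()})  # symmetric pairs
    n = len(seq)
    E = [[0.0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = min(E[i][k] + E[k + 1][j] for k in range(i, j))
            if (seq[i], seq[j]) in e:
                best = min(best, E[i + 1][j - 1] + e[(seq[i], seq[j])])
            E[i][j] = best
    return E[0][n - 1]
```

Note that with weighted scores the optimum can differ from the max-pairs optimum: on GGGAAAUCC the best energy uses an A-U pair (-2) rather than the weaker G-U wobble (-1), for a total of -8 kcal/mol.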

38 More realistic energy rules. Example: a hairpin structure with a 1-nt bulge, scored term by term:
- 4-nt hairpin loop: +5.9
- Terminal mismatch of hairpin: -1.1
- Stacking: -2.9
- Stacking (special rule for 1-nt bulge): -2.9
- 1-nt bulge: +3.3
- Stacks: -1.8, -0.9, -1.8, -2.1
- 5'-dangle: -0.3
- Unstructured: 0
Overall ΔG = -4.6 kcal/mol. Complete energy rules at http://www.bioinfo.rpi.edu/zukerm/cgi-bin/efiles.cgi

39 The Zuker algorithm - main ideas
1. Score pairs of base-pairs instead of single base-pairs (more accurate)
2. Separate score for bulges
3. Separate scores for loops of different sizes and compositions
4. Separate score for interactions between a stem and the beginning of a loop
5. Use additional matrices to remember the current state, e.g., to model stacking energy:
   W(i, j): energy of the best structure on i..j
   V(i, j): energy of the best structure on i..j given that i, j are paired
   Similar to affine-gap alignment.

40 Two popular implementations
- mfold (Zuker): http://mfold.bioinfo.rpi.edu/
- RNAfold in the Vienna package (Hofacker): http://www.tbi.univie.ac.at/~ivo/RNA/

41 Accuracy. 50-70% for sequences up to 300 nt. Not perfect, but useful. Possible reasons:
- The energy rules are not perfect: 5-10% error
- Many alternative structures lie within this error range
- Alternative structures do exist
- Structure may change in the presence of other molecules

42 Comparative structure prediction. To maintain structure, two nucleotides that form a base-pair tend to mutate together. Given K homologous aligned RNA sequences:
Human aagacuucggaucuggcgacaccc
Mouse uacacuucggaugacaccaaagug
Worm  aggucuucggcacgggcaccauuc
Fly   ccaacuucggauuuugcuaccaua
Orc   aagccuucggagcgggcguaacuc
If the ith and jth positions always base-pair and covary, they are likely to be paired.

43 Mutual information
f_ab(i, j): probability of observing bases a, b at positions i, j
f_a(i): probability of observing base a at position i
For the alignment above, at positions (3, 13):
f_gc(3,13) = 3/5, f_cg(3,13) = 1/5, f_au(3,13) = 1/5
f_g(3) = 3/5, f_c(3) = 1/5, f_a(3) = 1/5
f_c(13) = 3/5, f_g(13) = 1/5, f_u(13) = 1/5
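These column frequencies and the resulting mutual-information score can be computed directly from the five aligned sequences (a small sketch; indices here are 0-based, so the slide's columns 3 and 13 are indices 2 and 12):

```python
import math
from collections import Counter

def mutual_information(alignment, i, j):
    """Mutual information between alignment columns i and j (0-based).

    M(i, j) = sum_ab f_ab(i, j) * log2( f_ab(i, j) / (f_a(i) * f_b(j)) )
    """
    n = len(alignment)
    fi = Counter(seq[i] for seq in alignment)            # f_a(i)
    fj = Counter(seq[j] for seq in alignment)            # f_b(j)
    fij = Counter((seq[i], seq[j]) for seq in alignment)  # f_ab(i, j)
    return sum((c / n) * math.log2((c / n) / ((fi[a] / n) * (fj[b] / n)))
               for (a, b), c in fij.items())

alignment = [
    "aagacuucggaucuggcgacaccc",  # Human
    "uacacuucggaugacaccaaagug",  # Mouse
    "aggucuucggcacgggcaccauuc",  # Worm
    "ccaacuucggauuuugcuaccaua",  # Fly
    "aagccuucggagcgggcguaacuc",  # Orc
]
```

For columns 3 and 13 this gives M = 0.6*log2(5/3) + 2*(0.2*log2(5)) ~ 1.37 bits, while a perfectly conserved column (e.g. any position in the invariant cuucgg core) scores 0, illustrating the caveat on the next slide.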

44 Mutual information. Also called the covariance score:
M(i, j) = sum_{a,b} f_ab(i, j) log2 [ f_ab(i, j) / (f_a(i) f_b(j)) ]
M is high if base a in position i is always followed by base b in position j.
- Does not require a to base-pair with b
- Advantage: can detect non-canonical base-pairs
However, M = 0 if there is no mutation at all, even for perfect base-pairs. One way to get around this is to combine covariance and energy scores.

45 Comparative structure prediction. Given a multiple alignment, we can infer the structure that maximizes the sum of mutual information by DP. However, the alignment itself is hard to obtain, since structure is often more conserved than sequence.

46 Comparative structure prediction. In practice:
1. Get a multiple alignment
2. Find covarying bases - deduce structure
3. Improve the multiple alignment (by hand)
4. Go to 2
A manual EM process!

47 Comparative structure prediction. Three strategies: align then fold; fold then align; align and fold simultaneously.

48 Context-free grammar for RNA secondary structure
S → SS | aSu | cSg | uSa | gSc | L
L → aL | cL | gL | uL | ε
(Figure: the parse tree deriving an example sequence, showing how stems correspond to the pairing productions and loops to the L productions.)

49 Stochastic context-free grammar (SCFG)
- A probabilistic context-free grammar; probabilities can be converted into weights
- CFG vs. SCFG is analogous to regular grammar vs. HMM
Weighted productions:
S → SS (0)
S → aSu | uSa (2)
S → cSg | gSc (3)
S → uSg | gSu (1)
S → L (0)
L → aL | cL | gL | uL | ε (0)
The corresponding DP:
S(i, j) = max { e(xi, xj) + S(i+1, j-1),  L(i, j),  max_k [S(i, k) + S(k+1, j)] }
L(i, j) = 0

50 SCFG decoding. Decoding: given a grammar (SCFG/HMM) and a sequence, find the best parse (highest probability or score).
- Cocke-Younger-Kasami (CYK) algorithm (analogous to Viterbi for HMMs)
- The Nussinov and Zuker algorithms are essentially special cases of CYK
- CYK and SCFGs are also used in other domains (NLP, compilers, etc.)

51 SCFG evaluation. Given a sequence and an SCFG model, estimate P(seq is generated by model), summing over all possible parses (analogous to the forward algorithm for HMMs).
Inside-outside algorithm:
- Analogous to forward-backward
- Inside: bottom-up parsing, P(xi..xj)
- Outside: top-down parsing, P(x1..xi-1, xj+1..xN)
Can calculate base-pairing probabilities:
- Analogous to posterior decoding
- Essentially the same idea is implemented in the Vienna RNAfold package

52 SCFG learning. Covariance models: similar to profile HMMs.
- Given a set of sequences with common structure, simultaneously learn the SCFG parameters and optimally parse the sequences into states
- EM on an SCFG: the inside-outside algorithm
- Efficiency is a bottleneck
Has been successfully applied to predict tRNA genes and structures (tRNAScan).

53 Summary: SCFG and HMM algorithms

  Goal           HMM algorithm    SCFG algorithm
  Optimal parse  Viterbi          CYK
  Estimation     Forward          Inside
                 Backward         Outside
  Learning       EM: Fwd/Bck      EM: Ins/Outs
  Memory         O(NK)            O(N^2 K)
  Time           O(N K^2)         O(N^3 K^3)

where K is the number of states in the HMM, or the number of nonterminal symbols in the SCFG.

54 Open research problems
- ncRNA gene prediction
- ncRNA regulatory networks
- Structure prediction: secondary (including pseudoknots) and tertiary
- Structural comparison tools: structural alignment
- Structure search tools: an "RNA-BLAST"
- Structural motif finding: an "RNA-MEME"

