
CS 6243 Machine Learning Advanced topic: pattern recognition (DNA motif finding)


1 CS 6243 Machine Learning Advanced topic: pattern recognition (DNA motif finding)

2 Final Project Draft description available on course website More details will be posted soon Group size 2 to 4 acceptable, with higher expectation for larger teams Predict Protein-DNA binding

3 Biological background for TF-DNA binding

4 Genome is fixed – Cells are dynamic A genome is static –(almost) Every cell in our body has a copy of the same genome A cell is dynamic –Responds to internal/external conditions –Most cells follow a cell cycle of division –Cells differentiate during development

5 Gene regulation … is responsible for the dynamic cell Gene expression (production of protein) varies according to: –Cell type –Cell cycle –External conditions –Location –Etc.

6 Where gene regulation takes place Opening of chromatin Transcription Translation Protein stability Protein modifications

7 Transcriptional Regulation of genes (figure labels: Gene, Promoter, RNA polymerase (protein), Transcription Factor (TF) (protein), DNA)

8 Transcriptional Regulation of genes (figure labels: Gene, TF binding site / cis-regulatory element, RNA polymerase (protein), Transcription Factor (TF) (protein), DNA)

9 (figure labels: Gene, RNA polymerase, Transcription Factor (protein), DNA, TF binding site / cis-regulatory element)

10 Transcriptional Regulation of genes (figure labels: Gene, RNA polymerase, Transcription Factor, DNA, New protein)

11 The Cell as a Regulatory Network (figure: genes B, C, D wired by rules such as "If C then D", "If B then NOT D", "If A and B then D", "If D then B")

12 Transcription Factors Binding to DNA Transcriptional regulation: Transcription factors bind to DNA Binding recognizes specific DNA substrings: Regulatory motifs

13 Experimental methods DNase footprinting –Tedious –Time-consuming High-throughput techniques: ChIP-chip, ChIP-seq –Expensive –Other limitations

14 Protein Binding Microarray

15 Computational methods for finding cis-regulatory motifs Given a collection of genes that are believed to be regulated by the same/similar protein –Co-expressed genes –Evolutionarily conserved genes Find the common TF-binding motif from their promoters

16 Essentially a Multiple Local Alignment Find “best” multiple local alignment Multidimensional Dynamic Programming? –Heuristics must be used

17 Characteristics of cis-Regulatory Motifs Tiny (6-12 bp), while intergenic regions are very long Highly variable ~Constant size –Because a constant-size transcription factor binds Often repeated Often conserved

18 Motif representation Collection of exact words –{ACGTTAC, ACGCTAC, AGGTGAC, …} Consensus sequence (with wild cards) –{AcGTgTtAC} –{ASGTKTKAC} S=C/G, K=G/T (IUPAC code) Position-specific weight matrices (PWM)

19 Position-Specific Weight Matrix

Pos:  1    2    3    4    5    6    7    8    9
A    .97  .10  .02  .03  .10  .01  .05  .85  .03
C    .01  .40  .01  .04  .05  .01  .05  .05  .03
G    .01  .40  .95  .03  .40  .01  .30  .05  .03
T    .01  .10  .02  .90  .45  .97  .60  .05  .91

Consensus (IUPAC): ASGTKTKAC
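As a sketch, the PWM above can be stored as a simple table and used to score candidate words. The matrix values are transcribed from the slide (the C entry at position 8 is inferred so that each column sums to 1), and the example word is one illustrative resolution of the consensus:

```python
# PWM from slide 19: PWM[base][j] = probability of `base` at motif position j+1.
# Note: the C entry at position 8 (.05) is inferred so every column sums to 1.
PWM = {
    "A": [.97, .10, .02, .03, .10, .01, .05, .85, .03],
    "C": [.01, .40, .01, .04, .05, .01, .05, .05, .03],
    "G": [.01, .40, .95, .03, .40, .01, .30, .05, .03],
    "T": [.01, .10, .02, .90, .45, .97, .60, .05, .91],
}

def pwm_probability(word, pwm):
    """Probability that the PWM generates `word`, assuming independent columns."""
    p = 1.0
    for j, base in enumerate(word):
        p *= pwm[base][j]
    return p

# One concrete resolution of the consensus ASGTKTKAC scores far higher
# than an arbitrary word:
print(pwm_probability("ACGTGTGAC", PWM))
print(pwm_probability("TTTTTTTTT", PWM))
```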

20 Sequence Logo (frequency)

Pos:  1    2    3    4    5    6    7    8    9
A    .97  .10  .02  .03  .10  .01  .05  .85  .03
C    .01  .40  .01  .04  .05  .01  .05  .05  .03
G    .01  .40  .95  .03  .40  .01  .30  .05  .03
T    .01  .10  .02  .90  .45  .97  .60  .05  .91

Logo tools: http://weblogo.berkeley.edu/ and http://biodev.hgen.pitt.edu/cgi-bin/enologos/enologos.cgi

21 Sequence Logo (logo of the same PWM as slide 20)

22 Entropy and information content Entropy: a measure of uncertainty. The entropy of a random variable X that can assume the n different values x_1, x_2, ..., x_n with the respective probabilities p_1, p_2, ..., p_n is defined as

H(X) = - Σ_{i=1}^{n} p_i log2 p_i

23 Entropy and information content
Example: A, C, G, T with equal probability:
  H = 4 * (-0.25 log2 0.25) = log2 4 = 2 bits
  Need 2 bits to encode (e.g. 00 = A, 01 = C, 10 = G, 11 = T): maximum uncertainty
50% A and 50% C:
  H = 2 * (-0.5 log2 0.5) = log2 2 = 1 bit
100% A:
  H = 1 * (-1 log2 1) = 0 bits: minimum uncertainty
Information: the opposite of uncertainty:
  I = maximum uncertainty - entropy
The above examples provide 0, 1, and 2 bits of information, respectively.
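The three examples can be checked with a direct implementation of the definition above:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum p_i * log2(p_i), skipping zero terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([.25, .25, .25, .25]))  # uniform over A,C,G,T -> 2.0 bits
print(entropy([.5, .5]))              # 50% A / 50% C        -> 1.0 bit
print(entropy([1.0]))                 # 100% A               -> 0.0 bits
```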

24 Entropy and information content

Pos:  1     2     3     4     5     6     7     8     9
A    .97   .10   .02   .03   .10   .01   .05   .85   .03
C    .01   .40   .01   .04   .05   .01   .05   .05   .03
G    .01   .40   .95   .03   .40   .01   .30   .05   .03
T    .01   .10   .02   .90   .45   .97   .60   .05   .91
H    0.24  1.72  0.36  0.63  1.60  0.24  1.40  0.85  0.58
I    1.76  0.28  1.64  1.37  0.40  1.76  0.60  1.15  1.42

Mean information: 1.15 bits per column; Total: 10.4 bits
Expected occurrence of this motif in random DNA: 1 / 2^10.4 ≈ 1 / 1340
For comparison, an exact 5-mer: 1 / 4^5 = 1 / 2^10 = 1 / 1024
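The H and I rows can be reproduced from the PWM (values transcribed from slide 19, with the one missing C entry inferred from column sums; information content is I = 2 - H, since 2 bits is the maximum entropy over 4 bases):

```python
import math

# PWM transcribed from slide 19 (C at position 8 inferred so columns sum to 1)
PWM = {
    "A": [.97, .10, .02, .03, .10, .01, .05, .85, .03],
    "C": [.01, .40, .01, .04, .05, .01, .05, .05, .03],
    "G": [.01, .40, .95, .03, .40, .01, .30, .05, .03],
    "T": [.01, .10, .02, .90, .45, .97, .60, .05, .91],
}

def column_entropy(j, pwm):
    """H for column j: -sum over bases of p * log2(p), skipping zeros."""
    return -sum(pwm[b][j] * math.log2(pwm[b][j])
                for b in "ACGT" if pwm[b][j] > 0)

info = [2.0 - column_entropy(j, PWM) for j in range(9)]  # I = 2 - H per column
total = sum(info)
print([round(v, 2) for v in info])
print(round(total, 1))  # total information content, ~10.4 bits
```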

25 Sequence Logo (information-content logo of the same PWM; column heights given by I)

Pos:  1     2     3     4     5     6     7     8     9
I    1.76  0.28  1.64  1.37  0.40  1.76  0.60  1.15  1.42

26 Real example The E. coli promoter “TATA box”, ~10 bp upstream of the transcription start: TACGAT TAAAAT TATACT GATAAT TATGAT TATGTT Consensus: TATAAT Note: none of the instances matches the consensus perfectly

27 Finding Motifs

28 Classification of approaches Combinatorial algorithms –Based on enumeration of words and computing word similarities Probabilistic algorithms –Construct probabilistic models to distinguish motifs vs non-motifs

29 Combinatorial motif finding Idea 1: find all k-mers that appear at least m times –m may be chosen such that the number of occurrences is statistically significant –Problem: most motifs are divergent; each variant may appear only once Idea 2: find all k-mers, considering IUPAC nucleic acid codes –e.g. ASGTKTKAC, S = C/G, K = G/T –Still inflexible Idea 3: find k-mers that appear approximately at least m times –i.e. allow some mismatches

30 Combinatorial motif finding Given a set of sequences S = {x_1, …, x_n} A motif W is a consensus string w_1…w_K Find the motif W* with the “best” match to x_1, …, x_n Definition of “best”: –d(W, x_i) = minimum Hamming distance between W and any length-K word in x_i –d(W, S) = Σ_i d(W, x_i) –W* = argmin_W d(W, S)
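A minimal sketch of this scoring function, using the six TATA-box instances from slide 26 as the sequence set S for illustration:

```python
def min_hamming(W, x):
    """d(W, x): minimum Hamming distance between W and any |W|-length word in x."""
    k = len(W)
    return min(sum(a != b for a, b in zip(W, x[i:i + k]))
               for i in range(len(x) - k + 1))

def score(W, seqs):
    """d(W, S): sum of d(W, x_i) over all sequences x_i in S."""
    return sum(min_hamming(W, x) for x in seqs)

# The six TATA-box instances from slide 26, used here as S:
S = ["TACGAT", "TAAAAT", "TATACT", "GATAAT", "TATGAT", "TATGTT"]
print(score("TATAAT", S))  # -> 8 (distances 2+1+1+1+1+2)
```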

31 Exhaustive searches 1. Pattern-driven algorithm: For W = AA…A to TT…T (4^K possibilities) Find d(W, S) Report W* = argmin_W d(W, S) Running time: O(K N 4^K), where N = Σ_i |x_i| Guaranteed to find the optimal solution.
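A sketch of the pattern-driven search, practical only for small K; shown here on the six TATA-box instances from slide 26:

```python
from itertools import product

def d(W, x):
    """Minimum Hamming distance between W and any |W|-length word in x."""
    k = len(W)
    return min(sum(a != b for a, b in zip(W, x[i:i + k]))
               for i in range(len(x) - k + 1))

def d_total(W, seqs):
    """d(W, S): total distance of W to the sequence set."""
    return sum(d(W, x) for x in seqs)

def pattern_driven(seqs, K):
    """Enumerate all 4^K consensus words and return the argmin of d(W, S)."""
    return min(("".join(w) for w in product("ACGT", repeat=K)),
               key=lambda W: d_total(W, seqs))

S = ["TACGAT", "TAAAAT", "TATACT", "GATAAT", "TATGAT", "TATGTT"]
best = pattern_driven(S, 6)   # 4^6 = 4096 candidate words
print(best, d_total(best, S))
```

Runtime grows as 4^K, so for realistic motif lengths the heuristics discussed later are needed.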

32 Exhaustive searches 2. Sample-driven algorithm: For each K-char word W occurring in some x_i Find d(W, S) Report W* = argmin_W d(W, S), OR report a local improvement of W* Running time: O(K N^2)

33 Exhaustive searches Problem with sample-driven approach: If: –True motif does not occur in data, and –True motif is “weak” Then, –random strings may score better than any instance of true motif

34 Example The E. coli promoter “TATA box”, ~10 bp upstream of the transcription start: TACGAT TAAAAT TATACT GATAAT TATGAT TATGTT Consensus: TATAAT Each instance differs by at most 2 bases from the consensus None of the instances matches the consensus perfectly

35 Heuristic methods Cannot afford exhaustive search on all patterns Sample-driven approaches may miss real patterns However, a real pattern should not differ too much from its instances in S Start from the space of all words in S, extend to the space with real patterns

36 Some of the popular tools Consensus (Hertz & Stormo, 1999) WINNOWER (Pevzner & Sze, 2000) MULTIPROFILER (Keich & Pevzner, 2002) PROJECTION (Buhler & Tompa, 2001) WEEDER (Pavesi et al., 2001) And dozens of others

37 Probabilistic modeling approaches for motif finding

38 Probabilistic modeling approaches A motif model –Usually a PWM –M = (P_ij), i = 1..4, j = 1..k, where k is the motif length A background model –Usually the distribution of base frequencies in the genome (or another selected subset of sequences) –B = (b_i), i = 1..4 A word can be generated by M or B

39 Expectation-Maximization For any word W:
  P(W | M) = P_{W[1],1} P_{W[2],2} … P_{W[K],K}
  P(W | B) = b_{W[1]} b_{W[2]} … b_{W[K]}
Let λ = P(M), i.e., the prior probability that any word is generated by M; then P(B) = 1 - λ. We can compute the posterior probabilities P(M|W) and P(B|W):
  P(M|W) ∝ P(W|M) * λ
  P(B|W) ∝ P(W|B) * (1 - λ)
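A sketch of the posterior computation, using the slide-19 PWM; the uniform background and the prior λ = 0.1 are illustrative assumptions, not values from the slides:

```python
# PWM transcribed from slide 19 (C at position 8 inferred so columns sum to 1)
PWM = {
    "A": [.97, .10, .02, .03, .10, .01, .05, .85, .03],
    "C": [.01, .40, .01, .04, .05, .01, .05, .05, .03],
    "G": [.01, .40, .95, .03, .40, .01, .30, .05, .03],
    "T": [.01, .10, .02, .90, .45, .97, .60, .05, .91],
}
B = {b: 0.25 for b in "ACGT"}   # assumed uniform background
lam = 0.1                       # illustrative prior P(M)

def posterior_motif(W, M, B, lam):
    """P(M | W) via Bayes: P(W|M)*lam vs P(W|B)*(1-lam), normalized."""
    pm, pb = lam, 1.0 - lam
    for j, base in enumerate(W):
        pm *= M[base][j]
        pb *= B[base]
    return pm / (pm + pb)

print(round(posterior_motif("ACGTGTGAC", PWM, B, lam), 3))  # near 1: likely motif
print(round(posterior_motif("TTTTTTTTT", PWM, B, lam), 3))  # near 0: background
```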

40 Expectation-Maximization Initialize: randomly assign each word to M or B –Let Z_xy = 1 if position y in sequence x is a motif, and 0 otherwise –Estimate parameters M, λ, B Iterate until convergence: –E-step: Z_xy = P(M | X[y..y+k-1]) for all x and y –M-step: re-estimate M and λ given Z (B is usually fixed)
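A compact EM sketch under the OOPS assumption (one motif occurrence per sequence); the pseudocounts, iteration count, and toy sequences with a planted TATAAT are illustrative choices, not part of the slides:

```python
import random

def em_motif(seqs, k, iters=50, seed=0):
    """EM sketch (OOPS: one motif occurrence per sequence).
    E-step: Z[x][y] = posterior that the motif starts at position y of x.
    M-step: re-estimate the PWM M from the Z-weighted letter counts."""
    rng = random.Random(seed)
    bases = "ACGT"
    B = {b: 0.25 for b in bases}                  # background held fixed
    # random initial PWM, columns normalized
    M = []
    for _ in range(k):
        col = {b: rng.random() + 0.5 for b in bases}
        s = sum(col.values())
        M.append({b: v / s for b, v in col.items()})
    for _ in range(iters):
        # E-step: likelihood ratio of each window, normalized per sequence
        Z = []
        for x in seqs:
            w = []
            for y in range(len(x) - k + 1):
                r = 1.0
                for j in range(k):
                    r *= M[j][x[y + j]] / B[x[y + j]]
                w.append(r)
            s = sum(w)
            Z.append([v / s for v in w])
        # M-step: re-estimate M from weighted counts (small pseudocounts)
        counts = [{b: 0.1 for b in bases} for _ in range(k)]
        for x, zx in zip(seqs, Z):
            for y, z in enumerate(zx):
                for j in range(k):
                    counts[j][x[y + j]] += z
        M = [{b: v / sum(c.values()) for b, v in c.items()} for c in counts]
    return M

seqs = ["GGCTATAATGC", "ACTATAATCGT", "TTTATAATACG", "CGCTATAATTA"]
M = em_motif(seqs, 6)
print("".join(max(col, key=col.get) for col in M))  # learned consensus
```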

41 Expectation-Maximization E-step: Z_xy = P(M | X[y..y+k-1]) for all x and y M-step: re-estimate M given Z (figure: probability of each position being a motif start, shown at initialization, after the E-step, and after the M-step)

42 MEME Multiple EM for Motif Elicitation Bailey and Elkan, UCSD http://meme.sdsc.edu/ Multiple starting points Multiple modes: OOPS (one occurrence per sequence), ZOOPS (zero or one occurrence per sequence), TCM (two-component mixture: any number of occurrences)

43 Gibbs Sampling Another very useful technique for estimating missing parameters EM is deterministic –Often trapped by local optima Gibbs sampling: stochastic behavior to avoid local optima

44 Gibbs Sampling Initialize: randomly assign each word to M or B –Let Z_xy = 1 if position y in sequence x is a motif, and 0 otherwise –Estimate parameters M, B, λ Iterate: –Randomly remove a sequence X* from S –Recalculate model parameters using S \ X* –Compute Z_x*y for X* –Sample a position y* according to Z_x*y; let Z_x*y* = 1 and Z_x*y = 0 for all other y
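The loop above can be sketched as follows; the pseudocounts, iteration count, and toy sequences with a planted TATAAT are illustrative:

```python
import random

def gibbs_motif(seqs, k, iters=200, seed=1):
    """Gibbs sampling sketch: hold one sequence out, build the motif model
    from the rest, then *sample* (not argmax) that sequence's motif position."""
    rng = random.Random(seed)
    bases = "ACGT"
    pos = [rng.randrange(len(x) - k + 1) for x in seqs]   # random initial sites
    for _ in range(iters):
        i = rng.randrange(len(seqs))                      # remove X* from S
        counts = [{b: 0.25 for b in bases} for _ in range(k)]  # pseudocounts
        for s, x in enumerate(seqs):
            if s != i:                                    # model from S \ X*
                for j in range(k):
                    counts[j][x[pos[s] + j]] += 1
        M = [{b: v / sum(c.values()) for b, v in c.items()} for c in counts]
        x = seqs[i]
        weights = []
        for y in range(len(x) - k + 1):
            p = 1.0
            for j in range(k):
                p *= M[j][x[y + j]]
            weights.append(p)
        pos[i] = rng.choices(range(len(weights)), weights=weights)[0]  # sample y*
    return pos

seqs = ["GGCTATAATGC", "ACTATAATCGT", "TTTATAATACG", "CGCTATAATTA"]
print(gibbs_motif(seqs, 6))  # one motif start position per sequence
```

The stochastic sampling step is what lets the chain escape local optima that would trap a deterministic EM update.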

45 Gibbs Sampling Gibbs sampling: sample one position according to its probability –Updates the prediction of one training sequence at a time By contrast: –Viterbi: always take the highest-probability position –EM: take the weighted average –Both update the predictions of all sequences simultaneously

46 Better background model Repeat DNA can be confused as motif –Especially low-complexity sequence: CACACA…, AAAAA…, etc. Solution: a more elaborate background model –Higher-order Markov models 0th order: B = {p_A, p_C, p_G, p_T} 1st order: B = {P(A|A), P(A|C), …, P(T|T)} … K-th order: B = {P(X | b_1…b_K); X, b_i ∈ {A,C,G,T}} Has been applied to EM and Gibbs (up to 3rd order)
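A sketch of estimating such a higher-order Markov background from sequence data; the add-one pseudocounts and the toy repeat sequences are illustrative:

```python
from collections import defaultdict

def train_markov_background(seqs, order):
    """Estimate a K-th order Markov background model:
    P(X | previous `order` bases), with add-one pseudocounts."""
    counts = defaultdict(lambda: {b: 1.0 for b in "ACGT"})
    for x in seqs:
        for i in range(order, len(x)):
            counts[x[i - order:i]][x[i]] += 1
    return {ctx: {b: v / sum(c.values()) for b, v in c.items()}
            for ctx, c in counts.items()}

# 1st-order model: low-complexity repeats produce strong conditional
# preferences that a 0th-order model cannot capture.
B1 = train_markov_background(["CACACACACA", "AAAAATTTTT"], order=1)
print(B1["A"])  # distribution of the base that follows an A
```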

47 Gibbs sampling motif finders Gibbs Sampler –First appeared as: Lawrence et al. Science 262(5131):208-214 –Continually developed and updated; the newest version: Thompson et al. Nucleic Acids Res. 35(suppl 2):W232-W237 AlignACE –Hughes et al. J Mol Biol. 2000;296(5):1205-14 –Allows don’t-care positions –Additional tools to scan motifs on new sequences, and to compare and group motifs BioProspector: an improvement of AlignACE –Liu, Brutlag and Liu. Pac Symp Biocomput. 2001:127-38 –Allows two-block motifs –Considers higher-order Markov models

