
1 Course Introduction
– What these courses are about
– What I expect
– What you can expect

2 What these courses are about
– overview of ways in which computers are used to solve problems in biology
– supervised learning of illustrative or frequently-used algorithms and programs
– supervised learning of programming techniques and algorithms selected from these uses

3 I Expect
– students will have basic knowledge of biology and chemistry (at the level of Modern Biology/Chemistry)
– students will have basic familiarity with statistics
– students will have some programming experience and a willingness to work to improve

4 You can expect
– Homework assignments (80% of grade)
– Final exam (20% of grade)
– Grades totally determined by a points system

5 Textbook
– Required textbook: Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids by Durbin et al.
– Recommended additional textbook: Introduction to Computational Biology by Waterman

6 Chapter 1 Introduction

7 Purpose A great acceleration in the accumulation of biological knowledge started in our era. Part of the challenge is to organize, classify, and parse the immense richness of sequence data. This is not just a task of string parsing, for behind the strings of bases or amino acids lies the whole complexity of molecular biology.

8 A major task in computational molecular biology is to "decipher" the information contained in biological sequences. Since the nucleotide sequence of a genome contains all the information necessary to produce a functional organism, we should in theory be able to duplicate this decoding using computers. [Figure: information flow]

9 Review of basic biochemistry
– Central Dogma: DNA makes RNA makes protein
– Sequence determines structure determines function

10 Structure
– DNA is composed of four nucleotides or "bases": A, C, G, T
– RNA is also composed of four: A, C, G, U (T is transcribed as U)
– proteins are composed of amino acids

11 Purpose This class is about methods which are in principle capable of capturing some of the complexity of biology, by integrating diverse sources of biological information into clean, general, and tractable probabilistic models for sequence analysis.

12 However, the most reliable way to determine a biological molecule's structure or function is by direct experimentation. Yet it is far easier to obtain the DNA sequence of the gene corresponding to an RNA or protein than it is to experimentally determine its function or structure.

13 The Human Genome Project It gives us the raw sequences of an estimated 20,000-25,000 human genes, only a small fraction of which have been studied experimentally. The development of computational methods has therefore become more important, drawing on computer science, statistics, and related fields.

14 Basic Information
– New sequences are adapted from pre-existing sequences
– We compare a new sequence with an old sequence of known structure or function
– Two related sequences are called homologous, and we transfer information between them by homology
– This is somewhat similar to determining the similarity between two text strings
– In fact, we will be trying to find a plausible alignment between sequences

15 Definition A sequence is a linear, ordered set of characters (sequence elements) representing nucleotides or amino acids:
– DNA is composed of four nucleotides or "bases": A, C, G, T
– RNA is also composed of four: A, C, G, U (T is transcribed as U)
– proteins are composed of amino acids (20 of them)

16 Character representation of sequences
– DNA or RNA: use 1-letter codes (e.g., A, C, G, T)
– protein: use 1-letter codes; can convert to/from 3-letter codes (e.g., A = Ala = Alanine, C = Cys = Cysteine)
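A minimal Python sketch of the 1-letter/3-letter conversion idea; the mapping below covers only a few residues, and the names THREE_TO_ONE and to_one_letter are illustrative, not from the course.

    # Partial mapping from 3-letter to 1-letter amino acid codes (subset only).
    THREE_TO_ONE = {"Ala": "A", "Cys": "C", "Gly": "G", "Trp": "W"}
    ONE_TO_THREE = {one: three for three, one in THREE_TO_ONE.items()}

    def to_one_letter(residues):
        # e.g., ["Ala", "Cys", "Trp"] -> "ACW"
        return "".join(THREE_TO_ONE[r] for r in residues)

    print(to_one_letter(["Ala", "Cys", "Trp"]))  # ACW
    print(ONE_TO_THREE["C"])                     # Cys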

17 Alignment
– Find the best alignment between two strings under some scoring system, e.g., "+1" for a match and "-1" for a mismatch (see the sketch below)
– Most importantly, we want a scoring system that gives the biologically most likely alignment the highest score
– Note that biological molecules have evolutionary histories, 3D folded structures, and other features
– This is more the realm of statistics than computer science
– A probabilistic modeling approach can be used and extended
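As an illustration of the "+1 match / -1 mismatch" scoring system, here is a small Python sketch that scores one fixed, gapless alignment of two equal-length strings. Finding the best alignment (with gaps) requires dynamic programming, treated later in the book; this only evaluates the score of a given alignment.

    def score_alignment(x, y, match=1, mismatch=-1):
        # Score a gapless alignment: +1 per matching column, -1 per mismatch.
        assert len(x) == len(y), "this sketch assumes equal-length, gapless strings"
        return sum(match if a == b else mismatch for a, b in zip(x, y))

    print(score_alignment("ACGT", "ACGA"))  # 3 matches, 1 mismatch -> 2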

18 Probabilities & Probabilistic Models
– A model is a system that simulates the object under consideration
– A probabilistic model is one that produces different outcomes with different probabilities
– That is, it simulates a whole class of objects and assigns each object an associated probability
– Here the objects will be sequences, and a model might describe a family of related sequences

19 Example: Rolling a six-sided die
– A probability model of rolling a 6-sided die involves 6 parameters p_1, p_2, p_3, p_4, p_5, and p_6
– The probability of rolling i is p_i, where p_i ≥ 0 and Σ p_i = 1
– Rolling the die 3 times independently, P([1,6,3]) = p_1 p_6 p_3
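This die model is easy to state in code. A minimal Python sketch (the parameter values are illustrative):

    import random

    p = [1/6] * 6  # parameters p_1 .. p_6; any non-negative values summing to 1

    def roll(p):
        # Sample one outcome in 1..6 according to the model's probabilities.
        return random.choices(range(1, 7), weights=p, k=1)[0]

    def prob_of_rolls(rolls, p):
        # Independent rolls: multiply the individual probabilities.
        prob = 1.0
        for i in rolls:
            prob *= p[i - 1]
        return prob

    print(prob_of_rolls([1, 6, 3], p))  # p_1 * p_6 * p_3 = (1/6)^3 ≈ 0.00463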

20 Example: Biological Sequence
– Biological sequences are strings from a finite alphabet of residues (4 nucleotides or 20 amino acids)
– A residue a occurs at random with probability q_a, independent of all other residues in the sequence
– If the sequence is denoted x_1 … x_n, the probability of the whole sequence is q_{x_1} q_{x_2} … q_{x_n}
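The same product rule for sequences, as a sketch; the uniform frequencies q_a below are placeholders, not real estimates:

    q = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}  # residue probabilities q_a

    def sequence_probability(x, q):
        # P(x_1 .. x_n) = q_{x_1} * ... * q_{x_n} under the independence model.
        prob = 1.0
        for a in x:
            prob *= q[a]
        return prob  # in practice one sums log q_a to avoid numerical underflow

    print(sequence_probability("ACGT", q))  # 0.25^4 ≈ 0.0039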

21 Maximum Likelihood Estimation
– The parameters of a probability model are estimated from a training set (sample)
– The probability q_a for amino acid a can be estimated as the observed frequency of that residue in a database of known protein sequences (SWISS-PROT)
– This assumes the training sequences are not systematically biased towards a peculiar residue composition
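Estimating the q_a as observed frequencies takes a few lines; the toy training set below stands in for a real database such as SWISS-PROT:

    from collections import Counter

    training = ["ACGT", "AACG", "TTGA"]  # toy stand-in for real training sequences

    counts = Counter("".join(training))
    total = sum(counts.values())
    q_mle = {a: n / total for a, n in counts.items()}  # observed frequency = MLE
    print(q_mle)  # e.g., q_A = 4/12 ≈ 0.333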

22 http://au.expasy.org/sprot/

23 MLE (continued)
– This way of estimating models is called maximum likelihood estimation (MLE)
– The MLE maximizes the total probability of all sequences given the model (the likelihood)
– Given a model with parameters θ and a set of data D, the maximum likelihood estimate for θ is the value that maximizes P(D|θ)

24 Estimation
– When estimating parameters from a limited amount of data, there is a danger of overfitting
– Overfitting: the model becomes very well adapted to the training data, but it will not generalize well to new data
– For example, observing three flips of a coin [tail, tail, tail] would lead to the maximum likelihood estimate that the probability of heads is 0 and that of tails is 1

25 Conditional, Joint, and Marginal
– We have two dice, D_1 and D_2
– The conditional probability of rolling i given die D_1 is written P(i|D_1)
– We pick a die with probability P(D_j), j = 1, 2
– The probability of picking die D_j and rolling an i is the product of the two probabilities, P(i, D_j) = P(D_j) P(i|D_j), the joint probability
– In general, P(X, Y) = P(X|Y) P(Y)
– P(X) = Σ_Y P(X, Y) = Σ_Y P(X|Y) P(Y), the marginal probability
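A quick numeric check of these identities in Python, with two hypothetical dice (one fair, one loaded):

    P_die = {"D1": 0.5, "D2": 0.5}                    # P(D_j): how we pick a die
    P_roll = {"D1": [1/6] * 6,
              "D2": [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]}   # P(i | D_j)

    def joint(i, d):
        # P(i, D_j) = P(D_j) * P(i | D_j)
        return P_die[d] * P_roll[d][i - 1]

    def marginal(i):
        # P(i) = sum over dice of the joint
        return sum(joint(i, d) for d in P_die)

    print(joint(6, "D2"))   # 0.25
    print(marginal(6))      # 1/12 + 1/4 ≈ 0.333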

26 Bayes Theorem Bayes' theorem: P(Y|X) = P(X|Y) P(Y) / P(X). The denominator, P(X) = Σ_Y P(X|Y) P(Y), is the marginal; the numerator, P(X|Y) P(Y) = P(X, Y), is the joint.

27 Example 1 Consider an occasionally dishonest casino that uses two kinds of dice. Of the dice, 99% are fair but 1% are loaded so that a six comes up 50% of the time. Suppose we pick a die at random and roll it three times, getting three consecutive sixes. What is P(loaded | 3 sixes)?

28 Example 1 (Continued) By Bayes' theorem, P(loaded | 3 sixes) = P(3 sixes | loaded) P(loaded) / [P(3 sixes | loaded) P(loaded) + P(3 sixes | fair) P(fair)] = (0.5³ × 0.01) / (0.5³ × 0.01 + (1/6)³ × 0.99) ≈ 0.21

29 So it is still more likely that we picked a fair die, despite seeing three successive sixes.
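The arithmetic above is easy to verify with a few lines of Python (the numbers come directly from the example):

    p_loaded, p_fair = 0.01, 0.99
    lik_loaded = 0.5 ** 3       # P(3 sixes | loaded): a loaded die shows six half the time
    lik_fair = (1 / 6) ** 3     # P(3 sixes | fair)

    posterior = (lik_loaded * p_loaded) / (
        lik_loaded * p_loaded + lik_fair * p_fair)
    print(posterior)  # ≈ 0.21: the fair die remains the more likely explanation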

30 Example 2 Assume that, on average, extracellular proteins have a slightly different amino acid composition than intracellular proteins. For example, cysteine is more common in extracellular than intracellular proteins. Question: is a new protein sequence x = x_1 … x_n intracellular or extracellular?

31 Example 2 (continued)
– We first split our training examples from SWISS-PROT into extracellular and intracellular proteins
– Estimate a set of residue frequencies q_a^int for intracellular proteins, and a corresponding set of extracellular frequencies q_a^ext
– The prior probability that any new sequence is extracellular is p_ext, and the corresponding probability of being intracellular is p_int; note that p_int = 1 − p_ext

32 Example 2 (continued) By Bayes' theorem, the posterior probability that sequence x is extracellular is P(ext | x) = p_ext Π_i q_{x_i}^ext / (p_ext Π_i q_{x_i}^ext + p_int Π_i q_{x_i}^int)
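A sketch of this classifier; all frequencies and the prior below are made-up placeholders (real values would be estimated from SWISS-PROT as on slide 31), and a two-letter alphabet keeps the example short:

    q_ext = {"C": 0.05, "A": 0.95}  # hypothetical extracellular residue frequencies
    q_int = {"C": 0.01, "A": 0.99}  # hypothetical intracellular residue frequencies
    p_ext = 0.5
    p_int = 1 - p_ext

    def likelihood(x, q):
        prob = 1.0
        for a in x:
            prob *= q[a]
        return prob

    def posterior_extracellular(x):
        # P(ext | x) via Bayes' theorem, as in the formula above.
        num = p_ext * likelihood(x, q_ext)
        return num / (num + p_int * likelihood(x, q_int))

    print(posterior_extracellular("CCAC"))  # cysteine-rich -> ≈ 0.99, leans extracellular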

33 Bayesian Model
– θ is the parameter of interest
– Before collecting data, the information regarding θ is called the prior, P(θ)
– After collecting the data, the information regarding θ is called the posterior, P(θ|D)
– If we do not have enough data to reliably estimate the parameters, we can use prior knowledge to constrain the estimates

34 Bayesian and Frequentist
– Suppose D ~ N(θ, 1)
– To frequentists, θ is fixed (but unknown)
– To Bayesians, θ is random
– If θ is random, what should its distribution be?
– Frequentists argue that the determination of the prior distribution of θ is very subjective

35 Prior and Posterior
– Suppose that θ has a probability distribution P(θ) (the prior)
– Assume a model for the data given the parameter, D|θ ~ P(D|θ)
– P(D, θ) is the joint distribution of D and θ
– P(D|θ) is the conditional distribution of D given θ
– P(θ|D) is the conditional distribution of θ given D (the posterior)

36 Prior and Posterior P(D|θ) P(θ) = P(D, θ) = P(θ|D) P(D). Rearranging gives Bayes' theorem: P(θ|D) = P(D|θ) P(θ) / P(D)

37 Posterior Distribution Given the data density p(D|θ) and a prior density p(θ), the posterior density for θ is p(θ|D) = c p(θ) p(D|θ), where c⁻¹ = ∫ p(θ) p(D|θ) dθ (the marginal of D).

38 Example Suppose D ~ N(θ, σ²), where σ² is known, and the prior is P(θ) = N(μ₀, τ₀²). Then the posterior density is normal with mean μ₁ = (μ₀/τ₀² + D/σ²) / (1/τ₀² + 1/σ²) and variance τ₁² = (1/τ₀² + 1/σ²)⁻¹

39 Conjugate Prior
– The likelihood D ~ N(θ, σ²) is a normal distribution
– The prior distribution, P(θ) = N(μ₀, τ₀²), is also a normal distribution
– The posterior distribution, P(θ|D), is then also a normal distribution
– The normal distribution is conjugate to the normal
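The normal-normal update is short enough to check numerically; the prior and data values below are arbitrary, and the formulas are the standard single-observation conjugate update from slide 38:

    mu0, tau0 = 0.0, 1.0   # prior: theta ~ N(mu0, tau0^2)
    sigma = 0.5            # known observation noise: D ~ N(theta, sigma^2)
    D = 2.0                # one observed data point

    precision = 1 / tau0**2 + 1 / sigma**2                 # posterior precision
    mu_post = (mu0 / tau0**2 + D / sigma**2) / precision   # precision-weighted mean
    var_post = 1 / precision

    print(mu_post, var_post)  # posterior is again normal: N(1.6, 0.2)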

40 Specification of the Prior Conjugate priors:
– The beta distribution is conjugate to the binomial
– The normal distribution is conjugate to the normal
– The gamma distribution is conjugate to the Poisson

41 Specification of the Prior Noninformative (uninformative) priors: P(θ) ∝ constant. These are used when we do not have strong prior beliefs, or in public-policy situations where beliefs strongly differ.

42 Specification of the Prior Sometimes, we will wish to use an informative P(θ). We know a priori that the amino acids phenylalanine (Phe, F), tyrosine (Tyr, Y), and tryptophan (Trp, W) are structurally similar and often evolutionarily interchangeable. We would want to use a P(θ) that tends to favor parameter sets that assign them very similar probabilities.

43 Parameter Estimation
– Choose the parameter value for θ that maximizes P(θ|D)
– This is called maximum a posteriori (MAP) estimation
– MAP estimation maximizes the likelihood times the prior
– If the prior is flat (uninformative), the MAP estimate equals the MLE
– Another approach to parameter estimation is to choose the mean of the posterior

44 Maximum A Posteriori (MAP) Estimation
– Example: estimating probabilities for a die
– We roll 1, 3, 4, 2, 4, 6, 2, 1, 2, 2
– MLE: p_5 = 0, an instance of overfitting
– Add 1 to each observed count (a pseudocount): MAP: p_5 = (0 + 1) / (10 + 6) = 1/16
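The MLE-versus-pseudocount comparison in code (a pseudocount of 1 per face corresponds to a flat prior over the six outcomes):

    from collections import Counter

    rolls = [1, 3, 4, 2, 4, 6, 2, 1, 2, 2]
    counts = Counter(rolls)

    mle = {i: counts[i] / len(rolls) for i in range(1, 7)}
    map_est = {i: (counts[i] + 1) / (len(rolls) + 6) for i in range(1, 7)}

    print(mle[5])      # 0.0 -- the overfit estimate: "five is impossible"
    print(map_est[5])  # 0.0625 = 1/16 -- the pseudocount (MAP) estimate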

45 When estimating large parameter sets from small amounts of data, we believe that Bayesian methods provide a consistent formalism for bringing in additional information from previous experience with the same type of data.

