
1 Scores and substitution matrices in sequence alignment Sushmita Roy (sroy@biostat.wisc.edu) BMI/CS 576 www.biostat.wisc.edu/bmi576/ Sep 11th, 2014

2 Key concepts in today’s class Probability distributions Discrete and continuous distributions Joint, conditional and marginal distributions Statistical independence Probabilistic interpretation of scores in alignment algorithms Different substitution matrices Estimating simple substitution matrices Assessing significance of scores

3 RECAP Four issues in sequence alignment – Type of alignment – Algorithm for alignment – Scores for alignment – Significance of alignment scores

4 Guiding principles of scores in alignments We want to know whether the alignment we observed is meaningful or arose by chance By meaningful we mean that the sequences represent a similar biological function To study whether what we observe is by chance, we need to understand some concepts in probability theory

5 PROBABILITY PRIMER

6 Definition of probability Intuitively, we use “probability” to refer to our degree of confidence in an event of an uncertain nature. Always a number in the interval [0,1] 0 means “never occurs” 1 means “always occurs” Frequentist interpretation: the fraction of times an event will be true if we repeated the experiment indefinitely Bayesian interpretation: degree of belief in an event

7 Sample spaces Sample space: a set of possible outcomes for some experiment Examples – Flight to Chicago: {on time, late} – Lottery: {ticket 1 wins, ticket 2 wins,…,ticket n wins} – Weather tomorrow: {rain, not rain} or {sun, rain, snow} or {sun, clouds, rain, snow, sleet} – Roll of a die: {1,2,3,4,5,6} – Coin toss: {Heads, Tail}

8 Random variables Random variable: A variable that represents the outcome of an experiment A random variable can be – Discrete: Outcomes take a fixed set of values Roll of a die, flight to Chicago, weather tomorrow – Continuous: Outcomes take continuous values Height, weight

9 Notation Uppercase letters and words denote random variables – X, Y Lowercase letters and words denote values – x, y Probability that X takes value x: P(X = x) We’ll also use the shorthand form P(x) For Boolean random variables, we’ll use the shorthand P(x) for P(X = true) and P(¬x) for P(X = false)

10 Discrete probability distributions A probability distribution is a mathematical function that specifies the probability of each possible outcome of a random variable We denote this as P(X) for random variable X It specifies the probability of each possible value of X, x Requirements: P(X = x) ≥ 0 for every value x, and the probabilities sum to 1: Σ_x P(X = x) = 1 [Slide shows an example distribution over {sun, clouds, rain, snow, sleet} with probabilities 0.2, 0.3, 0.1, ...]

11 Joint probability distributions Joint probability distribution: the function given by P(X = x, Y = y) Read “X equals x and Y equals y” Example (the first entry is the probability that it’s sunny and my flight is on time):
x, y : P(X = x, Y = y)
sun, on-time : 0.20
rain, on-time : 0.20
snow, on-time : 0.05
sun, late : 0.10
rain, late : 0.30
snow, late : 0.15

12 Marginal probability distributions The marginal distribution of X is defined by P(X = x) = Σ_y P(X = x, Y = y): “the distribution of X ignoring other variables” This definition generalizes to more than two variables, e.g. P(X = x) = Σ_y Σ_z P(X = x, Y = y, Z = z)

13 Marginal distribution example
joint distribution:
x, y : P(X = x, Y = y)
sun, on-time : 0.20
rain, on-time : 0.20
snow, on-time : 0.05
sun, late : 0.10
rain, late : 0.30
snow, late : 0.15
marginal distribution for X:
x : P(X = x)
sun : 0.3
rain : 0.5
snow : 0.2
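As a small illustration (not part of the original slides), the marginalization above can be computed directly from the joint table; the dictionary layout and variable names below are just one way to write it in Python.

```python
# Minimal sketch: compute the marginal of X by summing the joint over Y.
# The joint probabilities are the ones from the slide above.
joint = {
    ("sun", "on-time"): 0.20, ("rain", "on-time"): 0.20, ("snow", "on-time"): 0.05,
    ("sun", "late"): 0.10, ("rain", "late"): 0.30, ("snow", "late"): 0.15,
}

marginal_x = {}
for (x, y), p in joint.items():
    marginal_x[x] = marginal_x.get(x, 0.0) + p  # P(X = x) = sum over y of P(X = x, Y = y)

print(marginal_x)  # {'sun': 0.3, 'rain': 0.5, 'snow': 0.2} (up to floating-point rounding)
```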

14 Conditional distributions The conditional distribution of X given Y is defined as: P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y) Or in short: P(x | y) = P(x, y) / P(y) The distribution of X given that we know the value of Y Intuitively, how much does knowing Y tell us about X?

15 Conditional distribution example
joint distribution:
x, y : P(X = x, Y = y)
sun, on-time : 0.20
rain, on-time : 0.20
snow, on-time : 0.05
sun, late : 0.10
rain, late : 0.30
snow, late : 0.15
conditional distribution for X given Y = on-time:
x : P(X = x | Y = on-time)
sun : 0.20/0.45 = 0.444
rain : 0.20/0.45 = 0.444
snow : 0.05/0.45 = 0.111
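The same example worked in code (an illustrative sketch, not from the slides): condition on Y = on-time by restricting the joint table and renormalizing.

```python
# Minimal sketch: P(X | Y = on-time) from the joint table on the slide above.
joint = {
    ("sun", "on-time"): 0.20, ("rain", "on-time"): 0.20, ("snow", "on-time"): 0.05,
    ("sun", "late"): 0.10, ("rain", "late"): 0.30, ("snow", "late"): 0.15,
}

y_obs = "on-time"
p_y = sum(p for (x, y), p in joint.items() if y == y_obs)  # P(Y = on-time) = 0.45

# P(X = x | Y = on-time) = P(X = x, Y = on-time) / P(Y = on-time)
conditional = {x: p / p_y for (x, y), p in joint.items() if y == y_obs}
print(conditional)  # sun ~0.444, rain ~0.444, snow ~0.111
```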

16 Independence Two random variables, X and Y, are independent if P(X = x, Y = y) = P(X = x) P(Y = y) for all values x and y Another way to think about this: knowing X does not tell us anything about Y

17 Independence example #1
joint distribution:
x, y : P(X = x, Y = y)
sun, on-time : 0.20
rain, on-time : 0.20
snow, on-time : 0.05
sun, late : 0.10
rain, late : 0.30
snow, late : 0.15
marginal distribution for X: sun : 0.3, rain : 0.5, snow : 0.2
marginal distribution for Y: on-time : 0.45, late : 0.55
Are X and Y independent here? NO.

18 Independence example #2
joint distribution:
x, y : P(X = x, Y = y)
sun, fly-United : 0.27
rain, fly-United : 0.45
snow, fly-United : 0.18
sun, fly-Northwest : 0.03
rain, fly-Northwest : 0.05
snow, fly-Northwest : 0.02
marginal distribution for X: sun : 0.3, rain : 0.5, snow : 0.2
marginal distribution for Y: fly-United : 0.9, fly-Northwest : 0.1
Are X and Y independent here? YES.
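A quick way to verify the answer (illustrative sketch, not from the slides) is to check the factorization P(X = x, Y = y) = P(X = x) P(Y = y) for every cell of the table from example #2.

```python
# Minimal sketch: test independence on the joint table from example #2.
joint = {
    ("sun", "fly-United"): 0.27, ("rain", "fly-United"): 0.45, ("snow", "fly-United"): 0.18,
    ("sun", "fly-Northwest"): 0.03, ("rain", "fly-Northwest"): 0.05, ("snow", "fly-Northwest"): 0.02,
}

# Marginals obtained by summing the joint over the other variable.
px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

independent = all(abs(p - px[x] * py[y]) < 1e-9 for (x, y), p in joint.items())
print(independent)  # True for this table; running the same check on example #1 gives False
```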

19 Conditional independence Two random variables X and Y are conditionally independent given Z if P(X = x | Y = y, Z = z) = P(X = x | Z = z) for all values x, y, z: “once you know the value of Z, knowing Y doesn’t tell you anything about X” Alternatively: P(X = x, Y = y | Z = z) = P(X = x | Z = z) P(Y = y | Z = z)

20 Conditional independence example
Flu, Fever, Headache : P
true, true, true : 0.04
true, true, false : 0.04
true, false, true : 0.01
true, false, false : 0.01
false, true, true : 0.009
false, true, false : 0.081
false, false, true : 0.081
false, false, false : 0.729
Are Fever and Headache independent? NO. (For example, P(Fever = true) = 0.17 and P(Headache = true) = 0.14, but P(Fever = true, Headache = true) = 0.049 ≠ 0.17 × 0.14.)

21 Conditional independence example
Flu, Fever, Headache : P
true, true, true : 0.04
true, true, false : 0.04
true, false, true : 0.01
true, false, false : 0.01
false, true, true : 0.009
false, true, false : 0.081
false, false, true : 0.081
false, false, false : 0.729
Are Fever and Headache conditionally independent given Flu? YES.
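To see why the answer flips, one can check the factorization P(Fever, Headache | Flu) = P(Fever | Flu) P(Headache | Flu) for every cell. The sketch below (not from the slides; function and variable names are illustrative) does exactly that with the joint table above.

```python
# Minimal sketch: verify conditional independence of Fever and Headache given Flu.
# Keys are (flu, fever, headache) tuples; values are the joint probabilities from the slide.
joint = {
    (True, True, True): 0.04,   (True, True, False): 0.04,
    (True, False, True): 0.01,  (True, False, False): 0.01,
    (False, True, True): 0.009, (False, True, False): 0.081,
    (False, False, True): 0.081, (False, False, False): 0.729,
}

def p(pred):
    """Sum the joint over all (flu, fever, headache) outcomes satisfying pred."""
    return sum(prob for outcome, prob in joint.items() if pred(*outcome))

for flu in (True, False):
    p_flu = p(lambda fl, fe, h: fl == flu)
    for fever in (True, False):
        for headache in (True, False):
            lhs = p(lambda fl, fe, h: fl == flu and fe == fever and h == headache) / p_flu
            rhs = (p(lambda fl, fe, h: fl == flu and fe == fever) / p_flu) * \
                  (p(lambda fl, fe, h: fl == flu and h == headache) / p_flu)
            assert abs(lhs - rhs) < 1e-9  # holds for every cell of this table

print("Fever and Headache are conditionally independent given Flu")
```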

22 Chain rule of probability For two variables: P(X, Y) = P(X | Y) P(Y) For three variables: P(X, Y, Z) = P(X | Y, Z) P(Y | Z) P(Z) etc. To see that this is true, note that the conditional is defined as P(X | Y) = P(X, Y) / P(Y), so multiplying back by P(Y) recovers the joint

23 Example discrete distributions Binomial distribution Multinomial distribution

24 The binomial distribution Two outcomes per trial of an experiment Distribution over the number of successes in a fixed number n of independent trials (with the same probability of success p in each): P(X = x) = C(n, x) p^x (1 - p)^(n - x) e.g. the probability of x heads in n coin flips [Slide shows plots of P(X = x) versus x for p = 0.5 and p = 0.1]
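A short sketch of the binomial probability mass function (illustrative, not from the slides), using the coin-flip example:

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x): probability of exactly x successes in n independent trials,
    each succeeding with probability p."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# e.g. probability of 5 heads in 10 flips
print(binomial_pmf(5, 10, 0.5))  # ~0.246 for a fair coin (p = 0.5)
print(binomial_pmf(5, 10, 0.1))  # much smaller (~0.0015) for a biased coin with p = 0.1
```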

25 The multinomial distribution k possible outcomes on each trial Probability p_i for outcome i in each trial Distribution over the number of occurrences x_i of each outcome in a fixed number n of independent trials, summarized as a vector of outcome occurrences (x_1, ..., x_k): P(x_1, ..., x_k) = n! / (x_1! ... x_k!) × p_1^x_1 ... p_k^x_k e.g. with k = 6 (a six-sided die) and n = 30
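A corresponding sketch of the multinomial probability mass function (illustrative, not from the slides), evaluated on the six-sided-die example with each face appearing five times in 30 rolls:

```python
from math import factorial

def multinomial_pmf(counts, probs):
    """P(X1 = x1, ..., Xk = xk) for n = sum(counts) independent trials over
    k outcomes with per-trial probabilities probs."""
    n = sum(counts)
    coeff = factorial(n)
    for x in counts:
        coeff //= factorial(x)        # multinomial coefficient n! / (x1! ... xk!)
    prob = 1.0
    for x, p in zip(counts, probs):
        prob *= p**x                  # product of p_i^{x_i}
    return coeff * prob

# k = 6 (fair die), n = 30, each face observed 5 times
print(multinomial_pmf([5] * 6, [1 / 6] * 6))  # ~0.0004
```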

26 Continuous random variables When our outcome is a continuous number we need a continuous random variable Examples: weight, height We specify a density function f(x) for random variable X Probabilities are defined over intervals of values, not single values The probability of taking on any single exact value is 0

27 Continuous random variables contd. To define a probability distribution for a continuous variable, we integrate the density f(x): P(a ≤ X ≤ b) = ∫ f(x) dx over [a, b], with f(x) ≥ 0 and the integral over the whole range equal to 1
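As a rough illustration (not from the slides), the sketch below approximates P(a ≤ X ≤ b) for an exponential density by numerical integration; the choice of the exponential, its rate parameter, and the function names are assumptions made purely for the example.

```python
import math

def exponential_pdf(x, lam=1.0):
    """Density f(x) = lam * exp(-lam * x) for x >= 0, else 0."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def prob_interval(pdf, a, b, steps=100_000):
    """Approximate P(a <= X <= b) = integral of f(x) dx over [a, b] (midpoint rule)."""
    width = (b - a) / steps
    return sum(pdf(a + (i + 0.5) * width) * width for i in range(steps))

print(prob_interval(exponential_pdf, 0.0, 1.0))  # ~0.632 (exact value: 1 - e^-1)
print(exponential_pdf(0.5))  # a density value, not a probability; P(X = 0.5) itself is 0
```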

28 Examples of continuous distributions Gaussian distribution Exponential distribution Extreme Value distribution

29 Gaussian distribution Density with mean μ and standard deviation σ: f(x) = (1 / (σ √(2π))) exp(-(x - μ)^2 / (2σ^2))

30 Extreme value distribution Used for describing the distribution of extreme values of another distribution [Slide shows a histogram of the max values from 1000 sets of 500 samples from a standard normal distribution, with the fitted density f(x)]
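A small sketch (illustrative, not from the slides) reproducing the experiment described in the figure caption: take the maximum of 500 standard-normal draws and repeat 1000 times. The seed and variable names are arbitrary.

```python
import random

random.seed(0)
# 1000 repetitions; each repetition keeps only the maximum of 500 N(0, 1) samples.
maxima = [max(random.gauss(0.0, 1.0) for _ in range(500)) for _ in range(1000)]

mean_max = sum(maxima) / len(maxima)
print(f"mean of the maxima: {mean_max:.2f}")  # roughly 3.0, well above the N(0,1) mean of 0
# A histogram of `maxima` is right-skewed and is modeled by an extreme value distribution.
```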

31 END PROBABILITY PRIMER

32 Guiding principles of scores in alignments We need to assess whether an alignment is biologically meaningful Compute the probability of seeing an alignment under two models – Related model R – Unrelated model U

33 R: related model Assume each pair of aligned positions evolved from a common ancestor Let p_ab be the probability of observing the aligned pair {a, b} The probability of an n-character alignment of x and y is P(x, y | R) = Π_i p_{x_i y_i}

34 U: unrelated model Assume the amino acid at each position is independent of the amino acid at every other position Let q_a be the background probability of amino acid a The probability of an n-character alignment of x and y is P(x, y | U) = Π_i q_{x_i} × Π_i q_{y_i}

35 Determine which model is more likely The score of an alignment is based on the relative likelihood of the alignment under R versus U: P(x, y | R) / P(x, y | U) = Π_i p_{x_i y_i} / (q_{x_i} q_{y_i}) Taking the log we get S = Σ_i log( p_{x_i y_i} / (q_{x_i} q_{y_i}) )

36 Score of an alignment The substitution matrix entry should thus be s(a, b) = log( p_ab / (q_a q_b) ) The score of an alignment is the sum over aligned positions: S = Σ_i s(x_i, y_i)
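To make the log-odds construction concrete, here is a small sketch in Python. The two-letter alphabet, the p_ab and q_a values, the base-2 logarithm, and the function names are all illustrative assumptions, not values from the lecture.

```python
import math

# Toy related-model pair probabilities p_ab and background frequencies q_a (made up).
p = {("A", "A"): 0.15, ("A", "B"): 0.05, ("B", "A"): 0.05, ("B", "B"): 0.75}
q = {"A": 0.3, "B": 0.7}

def s(a, b):
    """Substitution score s(a, b) = log( p_ab / (q_a * q_b) ), here in log base 2."""
    return math.log2(p[(a, b)] / (q[a] * q[b]))

def alignment_score(x, y):
    """Score of an ungapped alignment: sum of per-column substitution scores."""
    return sum(s(a, b) for a, b in zip(x, y))

print(alignment_score("ABBA", "ABBA"))  # positive: identical sequences look 'related'
```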

37 How to estimate the probabilities? Need a good set of confirmed alignments Depends upon what we know about when the two sequences might have diverged – p_ab for closely related species is likely to be low if a ≠ b – p_ab for species that diverged a long time ago is likely close to the background.

38 Some common substitution matrices BLOSUM matrices [Henikoff and Henikoff, 1992] – BLOSUM45 – BLOSUM50 – BLOSUM62 The number is the percent-identity threshold of the sequences used to construct the matrix PAM [Dayhoff et al, 1978] Empirically, BLOSUM62 works the best

39 BLOSUM matrices BLOck SUbstitution Matrix Derived from a set of aligned, ungapped regions from protein families called BLOCKS – Reside in the BLOCKS database Cluster sequences so that those with at least L% similarity fall in the same cluster – BLOSUM50: proteins with >50% similarity are in the same group – BLOSUM62: proteins with >62% similarity are in the same group Calculate substitution counts A_ab of observing a in one cluster aligned with b in another cluster

40 Example substitution scoring matrix (BLOSUM62)

41 Estimating the probabilities in BLOSUM q_a: estimated from the number of occurrences of amino acid a (its background frequency) p_ab: estimated from A_ab, the number of aligned occurrences of the pair a and b, normalized so that the pair frequencies sum to 1

42 Example of BLOSUM matrix calculation A block of seven aligned sequences:
1: A T C K Q
2: A T C R N
3: A S C K N
4: S S C R N
5: S D C E Q
6: S E C E N
7: T E C R Q
Three clusters at 50% similarity; seven clusters at 62% similarity [Slide shows the pair probabilities computed at the 62% similarity level]
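The sketch below illustrates only the counting step on this block, using the seven sequences from the slide; it deliberately skips the clustering by percent identity described earlier, so it is a simplified illustration rather than the full BLOSUM procedure, and all variable names are mine.

```python
from itertools import combinations
from collections import Counter

# The seven aligned sequences from the block on this slide, one string per row.
block = ["ATCKQ", "ATCRN", "ASCKN", "SSCRN", "SDCEQ", "SECEN", "TECRQ"]

pair_counts = Counter()    # A_ab: counts of aligned residue pairs {a, b}
single_counts = Counter()  # counts of individual residues

for column in zip(*block):                 # walk over the 5 alignment columns
    single_counts.update(column)
    for a, b in combinations(column, 2):   # every pair of rows within this column
        pair_counts[tuple(sorted((a, b)))] += 1

total_pairs = sum(pair_counts.values())       # 5 columns * C(7, 2) = 105 pairs
total_singles = sum(single_counts.values())   # 5 columns * 7 rows = 35 residues

p_ab = {pair: n / total_pairs for pair, n in pair_counts.items()}  # pair frequencies
q_a = {a: n / total_singles for a, n in single_counts.items()}     # background frequencies

# Column 3 is C in every sequence, so C-C pairs make up 21/105 = 0.2 of all pairs.
print(q_a["C"], p_ab[("C", "C")])
```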

43 Assessing significance of the alignment score There are two ways to do this – Bayesian framework – Classical approach

44 The classical approach to assessing sequence alignment significance: the extreme value distribution Suppose we have a particular substitution matrix and set of amino-acid frequencies Consider generating random sequences of lengths m and n and finding the best alignment of these sequences This gives us a distribution over alignment scores for random pairs of sequences If the probability of a random score being greater than our alignment score is small, we can consider our score to be significant

45 The extreme value distribution We’re picking the best alignments, so we want to know what the distribution of max scores for alignments against a random set of sequences looks like This is given by an extreme value distribution [Slide shows a plot of the density P(x) versus score x]

46 Assessing significance of sequence alignment scores It can be shown that the mode of the distribution of optimal scores is μ = log(Kmn) / λ – K, λ are estimated from the substitution matrix and background frequencies The probability of observing a score greater than S is P(score > S) = 1 - exp( -Kmn e^(-λS) )
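A small sketch of evaluating this tail probability (illustrative only). The K, λ, m, and n values below are made-up numbers; in practice K and λ would be estimated from the substitution matrix and the background frequencies.

```python
import math

def evd_pvalue(score, m, n, K, lam):
    """P(best random alignment score > score) under the extreme value distribution
    P(score > S) = 1 - exp(-K * m * n * exp(-lambda * S))."""
    return 1.0 - math.exp(-K * m * n * math.exp(-lam * score))

K, lam = 0.1, 0.3       # illustrative parameter values
m, n = 250, 300         # lengths of the two sequences

print(evd_pvalue(60.0, m, n, K, lam))  # ~1e-4: a score of 60 would look significant here
print(evd_pvalue(30.0, m, n, K, lam))  # ~0.6: a score of 30 could easily arise by chance
```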

47 Bayes’ theorem An extremely useful theorem: P(x | y) = P(y | x) P(x) / P(y) There are many cases when it is hard to estimate P(x | y) directly, but it’s not too hard to estimate P(y | x) and P(x)

48 Bayes’ theorem example MDs usually aren’t good at estimating P(Disorder | Symptom) They’re usually better at estimating P(Symptom | Disorder) If we can estimate P(Fever | Flu) and P(Flu) we can use Bayes’ theorem to do diagnosis
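A worked sketch of the diagnosis example (not part of the original slides), using the probabilities implied by the flu/fever/headache table earlier in the lecture; the variable names are mine.

```python
# Quantities implied by the earlier joint table.
p_flu = 0.10               # P(Flu)
p_fever_given_flu = 0.80   # P(Fever | Flu)
p_fever_given_no_flu = 0.10

# P(Fever) via the law of total probability.
p_fever = p_fever_given_flu * p_flu + p_fever_given_no_flu * (1 - p_flu)

# Bayes' theorem: P(Flu | Fever) = P(Fever | Flu) * P(Flu) / P(Fever)
p_flu_given_fever = p_fever_given_flu * p_flu / p_fever
print(p_flu_given_fever)  # ~0.47: fever makes flu far more likely than the 0.10 prior
```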

