Presentation on theme: "Back to basics – Probability, Conditional Probability and Independence" - Presentation transcript:

Slide 1 (1-17-06): Back to basics – Probability, Conditional Probability and Independence
- The probability of an outcome in an experiment is the proportion of times that this particular outcome would occur in a very large ("infinite") number of replicated experiments.
- A random variable is a mapping assigning real numbers to the set of all possible experimental outcomes; it is often equivalent to the experimental outcome itself.
- A probability distribution describes the probability of any outcome, or of any particular value of the corresponding random variable, in an experiment.
- If we have two different experiments, the probability of any combination of outcomes is the joint probability, and the joint probability distribution describes the probabilities of observing any combination of outcomes.
- If the outcome of one experiment does not affect the probability distribution of the other, we say that the outcomes are independent.
- An event is a set of one or more possible outcomes.

Slide 2 (1-17-06): Back to basics – Probability, Conditional Probability and Independence
- Let N be the very large number of trials of an experiment, and n_i the number of times that the i-th outcome (o_i), out of the possibly infinitely many outcomes, has been observed.
- p_i = n_i / N is the probability of the i-th outcome.
- Properties of probabilities following from this definition:
  1) p_i ≥ 0
  2) p_i ≤ 1
  3) Σ_i p_i = 1
  4) For any set of mutually exclusive events (events that don't have any outcomes in common), p(e_1 OR e_2 OR ...) = p(e_1) + p(e_2) + ...
  5) p(NOT e) = 1 - p(e) for any event e
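A minimal R sketch of this frequency definition (the six-sided-die experiment and all names below are illustrative choices, not part of the slides): relative frequencies estimate the p_i and satisfy the properties listed above.

  # Estimate outcome probabilities as relative frequencies (illustrative die example)
  set.seed(1)
  N <- 100000                                  # large number of trials
  outcomes <- sample(1:6, N, replace = TRUE)   # simulated experiment
  p <- table(outcomes) / N                     # p_i = n_i / N

  p                        # each p_i is close to 1/6
  all(p >= 0 & p <= 1)     # properties 1) and 2)
  sum(p)                   # property 3): the p_i sum to 1
  1 - p["1"]               # property 5): p(NOT "outcome 1")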

Slide 3 (1-17-06): Conditional Probabilities and Independence
- Suppose you have a set of N DNA sequences. Let the random variable X denote the identity of the first nucleotide and the random variable Y the identity of the second nucleotide.
- Suppose now that you have randomly selected a DNA sequence from this set and looked at the first nucleotide but not the second.
- Question: what is the probability of a particular second nucleotide y given that you know that the first nucleotide is x*?
- The probability that a randomly selected DNA sequence from this set has the dinucleotide xy at the beginning is equal to the joint probability P(X=x, Y=y).
- P(Y=y | X=x*) = P(X=x*, Y=y) / P(X=x*) is the conditional probability of Y=y given that X=x*.
- X and Y are independent if P(Y=y | X=x) = P(Y=y).
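A short R illustration of the dinucleotide example (the sequence set seqs is made up; in practice it would be the N sequences described above): the conditional probabilities are the joint frequencies divided by the marginal frequency of the first nucleotide.

  # Conditional probability of the second nucleotide given the first
  seqs <- c("ATGCC", "AGTGA", "CTGAA", "ACCGT", "GTACG", "ATGGC")
  x <- substr(seqs, 1, 1)                 # first nucleotide  (X)
  y <- substr(seqs, 2, 2)                 # second nucleotide (Y)

  joint <- table(x, y) / length(seqs)     # P(X = x, Y = y)
  p_x   <- rowSums(joint)                 # P(X = x)
  cond  <- joint / p_x                    # P(Y = y | X = x), row-wise division

  cond["A", "T"]   # probability of T in the second position given A in the first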

Slide 4 (1-17-06): Conditional Probabilities – Another Example
- Measure the difference in expression levels between two experimental conditions for two genes (1 and 2) in many replicated experiments.
- The outcomes of each experiment are:
  X = 1 if the difference for gene 1 is greater than 2, and X = 0 otherwise
  Y = 1 if the difference for gene 2 is greater than 2, and Y = 0 otherwise
- Suppose now that in one experiment we look at gene 1 and know that X = 0.
- Question: what is the probability that Y = 1 given that X = 0?
- The joint probability that the differences for both genes are greater than 2 in any single experiment is P(X=1, Y=1).
- P(Y=1 | X=0) is the conditional probability of Y=1 given that X=0.
- X and Y are independent if P(Y=y | X=x) = P(Y=y) for any x and y.

Slide 5 (1-17-06): Conditional Probabilities and Independence
- If X and Y are independent, then from P(Y=y | X=x) = P(X=x, Y=y) / P(X=x) = P(Y=y) it follows that P(X=x, Y=y) = P(X=x) P(Y=y).
- The probability of two independent events is equal to the product of their probabilities.
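A quick simulation check of the product rule (the indicator variables and the probabilities 0.3 and 0.6 are arbitrary choices, not values from the slides):

  # Checking P(X=x, Y=y) = P(X=x) P(Y=y) on simulated independent indicators
  set.seed(2)
  n <- 100000
  x <- rbinom(n, 1, 0.3)         # X = 1 with probability 0.3
  y <- rbinom(n, 1, 0.6)         # Y = 1 with probability 0.6, independent of X

  mean(x == 1 & y == 1)          # empirical joint probability P(X=1, Y=1)
  mean(x == 1) * mean(y == 1)    # product of the marginals; nearly identical
  mean(y[x == 0] == 1)           # P(Y=1 | X=0) is close to P(Y=1) = 0.6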

Slide 6 (1-17-06): Identifying Differentially Expressed Genes
- Suppose we have T genes whose expression we measured under two experimental conditions (Ctl and Nic) in n replicated experiments.
- t_i* and p_i are the t-statistic and the corresponding p-value for the i-th gene, i = 1,...,T.
- The p-value is the probability of observing a value of the t-statistic as extreme as, or more extreme than, the one calculated from the data (t*) under the "null distribution" (i.e. the distribution assuming that μ_i^Ctl = μ_i^Nic).
- The i-th gene is "differentially expressed" if we can reject the i-th null hypothesis μ_i^Ctl = μ_i^Nic and conclude that μ_i^Ctl ≠ μ_i^Nic at a significance level α (i.e. if p_i < α).
- A Type I error is committed when a null hypothesis is falsely rejected.
- A Type II error is committed when a null hypothesis is not rejected although it is false.
- An experiment-wise (family-wise) Type I error is committed if any of the set of T null hypotheses is falsely rejected.
- If the significance level is chosen prior to conducting the experiment, we know that by following the hypothesis-testing procedure the probability of falsely concluding that any one particular gene is differentially expressed (i.e. falsely rejecting its null hypothesis) is equal to α.
- What is the probability of committing a family-wise Type I error? Assuming that all null hypotheses are true, what is the probability that we would reject at least one of them?
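A sketch in R of the per-gene testing setup (the simulated expression matrices, T_genes = 1000 and n = 5 replicates are illustrative assumptions; here every null hypothesis is true):

  # Per-gene t-tests on a simulated expression matrix
  set.seed(3)
  T_genes <- 1000; n <- 5                               # T genes, n replicates per condition
  ctl <- matrix(rnorm(T_genes * n), nrow = T_genes)     # all null hypotheses true here
  nic <- matrix(rnorm(T_genes * n), nrow = T_genes)

  pvals <- sapply(seq_len(T_genes), function(i)
    t.test(ctl[i, ], nic[i, ])$p.value)                 # p_i for gene i

  alpha <- 0.05
  sum(pvals < alpha)   # about T * alpha genes falsely called "differentially expressed"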

Slide 7 (1-17-06): Experiment-wise error rate
- Assuming that the individual tests of hypothesis are independent and all null hypotheses are true:
  p(Not Committing The Experiment-Wise Error)
    = p(Not Rejecting H_0^1 AND Not Rejecting H_0^2 AND ... AND Not Rejecting H_0^T)
    = (1-α)(1-α)...(1-α) = (1-α)^T
  p(Committing The Experiment-Wise Error) = 1 - (1-α)^T
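A worked computation of this probability for a few values of T (the values of T other than 5000 are arbitrary illustrations):

  # Probability of at least one false positive when all T nulls are true
  alpha <- 0.05
  T <- c(1, 10, 100, 5000)
  fwer <- 1 - (1 - alpha)^T
  data.frame(T, fwer)    # e.g. ~0.40 for T = 10 and essentially 1 for T = 5000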

Slide 8 (1-17-06): Experiment-wise error rate
- If we want to keep the FWER at the α level:
- Sidak's adjustment: α_a = 1 - (1-α)^(1/T)
  FWER = 1 - (1-α_a)^T = 1 - (1 - [1 - (1-α)^(1/T)])^T = 1 - ((1-α)^(1/T))^T = 1 - (1-α) = α
- For FWER = 0.05, α_a = 0.000003
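A numerical check of Sidak's adjustment; the choice T = 5000 is an assumption taken from the parameter values used later in these slides.

  # Sidak's adjustment and a check that it restores the nominal FWER
  alpha <- 0.05
  T <- 5000                           # number of tests (assumed, as used later in the slides)
  alpha_a <- 1 - (1 - alpha)^(1 / T)  # per-test significance level
  alpha_a                             # about 1e-05 for T = 5000
  1 - (1 - alpha_a)^T                 # back to 0.05, as in the algebra above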

Slide 9 (1-17-06): Experiment-wise error rate
- Another adjustment:
  p(Committing The Experiment-Wise Error) = p(Rejecting H_0^1 OR Rejecting H_0^2 OR ... OR Rejecting H_0^T) ≤ Tα
  (Homework: how does this follow from the probability properties?)
- Bonferroni adjustment: α_b = α/T
- Generally α_b < α_a, so the Bonferroni adjustment is more conservative.
- Sidak's adjustment assumes independence, which is likely not to be satisfied. If the tests are not independent, Sidak's adjustment is most likely conservative, but it could be liberal.
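Comparing the two per-test thresholds numerically (the values of T are chosen for illustration):

  # Bonferroni vs Sidak per-test significance levels
  alpha <- 0.05
  T <- c(10, 100, 5000)
  bonferroni <- alpha / T
  sidak      <- 1 - (1 - alpha)^(1 / T)
  data.frame(T, bonferroni, sidak)   # bonferroni is always slightly smaller (more conservative)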

Slide 10 (1-17-06): Adjusting p-values
- Individual hypotheses: H_0^i: μ_i^W = μ_i^C, with p_i = p(t_{n-1} > t_i*), i = 1,...,T
- "Composite" hypothesis: H_0: {μ_i^W = μ_i^C, i = 1,...,T}, with p = min{p_i, i = 1,...,T}
- The composite null hypothesis is rejected if even a single individual hypothesis is rejected; consequently, the p-value for the composite hypothesis is equal to the minimum of the individual p-values.
- If all tests have the same reference distribution, this is equivalent to p = p(t_{n-1} > t*_max).
- We can consider a p-value to be itself the outcome of the experiment.
- What is the "null" probability distribution of the p-value for an individual test of hypothesis?
- What is the "null" probability distribution of the composite p-value?

Slide 11 (1-17-06): Null distribution of the p-value
- Given that the null hypothesis is true, the probability of observing a p-value smaller than a fixed number a between 0 and 1 is
  p(p_i < a) = p(t* > t_a) = a, where t_a is the critical value satisfying p(t > t_a) = a.
- In other words, under the null hypothesis the p-value is uniformly distributed on [0, 1].
- [Figure: the null distribution of t*, with the critical values t_a and -t_a marked, and the corresponding (uniform) null distribution of p_i]
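A simulation sketch of this fact (the group size of 5 and the number of replications are arbitrary assumptions): under the null hypothesis, t-test p-values are uniform.

  # Under the null, p-values are uniform on [0, 1]
  set.seed(4)
  pvals <- replicate(10000,
    t.test(rnorm(5), rnorm(5), var.equal = TRUE)$p.value)  # both groups from the same distribution
  hist(pvals, breaks = 20)   # roughly flat histogram
  mean(pvals < 0.05)         # close to 0.05, i.e. p(p_i < a) = a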

Slide 12 (1-17-06): Null distribution of the composite p-value
  p(p < a) = p(min{p_i, i = 1,...,T} < a)
           = 1 - p(min{p_i, i = 1,...,T} > a)
           = 1 - p(p_1 > a AND p_2 > a AND ... AND p_T > a)
           (assuming independence between the different tests)
           = 1 - [p(p_1 > a) p(p_2 > a) ... p(p_T > a)]
           = 1 - [1 - p(p_1 < a)] [1 - p(p_2 < a)] ... [1 - p(p_T < a)]
           = 1 - [1 - a]^T
- Instead of adjusting the significance level, we can adjust all p-values: p_i^a = 1 - [1 - p_i]^T
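A simulation check of this null distribution, plus the corresponding p-value adjustment (T = 10 and the example p-values are arbitrary assumptions):

  # The minimum of T independent uniform p-values has CDF 1 - (1 - a)^T
  set.seed(5)
  T <- 10
  min_p <- replicate(20000, min(runif(T)))   # composite p-value under the complete null
  a <- 0.05
  mean(min_p < a)                            # empirical probability
  1 - (1 - a)^T                              # theoretical value, ~0.40

  # Adjusting individual p-values instead of the significance level
  p_i <- c(0.001, 0.02, 0.3)
  1 - (1 - p_i)^T                            # adjusted p-values p_i^a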

Slide 13 (1-17-06): Null distribution of the composite p-value
- [Figure: the null distribution of the composite p-value for 1, 10 and 30000 tests]

Slide 14 (1-17-06): Seems simple
- Applying a conservative p-value adjustment will take care of false positives. How about false negatives?
- A Type II error arises when we fail to reject H_0 although it is false.
- Power = p(Rejecting H_0 when μ_W - μ_C ≠ 0) = p(t* > t_α | μ_W - μ_C ≠ 0) = p(p < α | μ_W - μ_C ≠ 0)
- Power depends on various things (α, df, σ, μ_W - μ_C).
- Under the alternative, the probability distribution of t* is a non-central t distribution.
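A hedged power calculation in R using power.t.test and the non-central t distribution the slide refers to; the per-group sample size n = 5 is an assumption, while the effect size, σ and the two significance levels are taken from the next slide.

  # Power of a single two-sample t-test, with and without a multiple-testing adjustment
  n <- 5; delta <- 10; sigma <- 1.5                                           # n is assumed
  power.t.test(n = n, delta = delta, sd = sigma, sig.level = 0.05)$power      # unadjusted alpha
  power.t.test(n = n, delta = delta, sd = sigma, sig.level = 0.0001)$power    # adjusted alpha_a, lower power

  # Power written out via the non-central t distribution (counting both tails)
  df    <- 2 * n - 2
  ncp   <- delta / (sigma * sqrt(2 / n))                # non-centrality parameter
  tcrit <- qt(1 - 0.05 / 2, df)                         # two-sided critical value, alpha = 0.05
  pt(tcrit, df, ncp = ncp, lower.tail = FALSE) + pt(-tcrit, df, ncp = ncp)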

Slide 15 (1-17-06): Effects of multiple comparison adjustments on power
- [Figure: central t densities (t_4, green dashed; t_9, red dashed) and non-central t densities (t_4 with nc = 6.1, green solid; t_9 with nc = 8.6, red solid), with the values 27.6 and 8.8 marked, for T = 5000, α = 0.05, α_a = 0.0001, μ_W - μ_C = 10, σ = 1.5]
- http://homepages.uc.edu/%7Emedvedm/documents/Sample%20Size%20for%20arrays%20experiments.pdf

Slide 16 (1-17-06): This is not good enough
- Traditional statistical approaches to multiple comparison adjustments that strictly control the experiment-wise error rate are not optimal.
- We need a balance between the false positive and false negative rates.
- Benjamini Y and Hochberg Y (1995) Controlling the False Discovery Rate: a Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society B 57:289-300.
- Instead of controlling the probability of generating a single false positive, we control the proportion of false positives.
- The consequence is that some of the implicated genes are likely to be false positives.

Slide 17 (1-17-06): False Discovery Rate
- FDR = E(V/R), where V is the number of falsely rejected null hypotheses (false positives) and R is the total number of rejected null hypotheses (with V/R taken as 0 when R = 0).
- If all null hypotheses are true (the composite null), this is equivalent to the family-wise error rate.
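A simulation sketch of the FDR definition (all quantities below, such as 2000 genes, 200 true alternatives and an effect size of 3, are made-up illustration values): V counts false positives among the R rejections made by an FDR-controlling procedure.

  # Estimating E(V/R) by simulation
  set.seed(6)
  T <- 2000; n_true_alt <- 200
  is_alt <- c(rep(TRUE, n_true_alt), rep(FALSE, T - n_true_alt))

  one_run <- function() {
    z <- rnorm(T, mean = ifelse(is_alt, 3, 0))    # test statistics
    p <- pnorm(z, lower.tail = FALSE)             # one-sided p-values
    rejected <- p.adjust(p, method = "fdr") < 0.05
    V <- sum(rejected & !is_alt)                  # false positives
    R <- max(sum(rejected), 1)                    # total rejections (avoid 0/0)
    V / R
  }
  mean(replicate(200, one_run()))                 # close to (or below) 0.05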

Slide 18 (1-17-06): False Discovery Rate
- Alternatively, adjust the p-values themselves: for the ordered p-values p_(1) ≤ p_(2) ≤ ... ≤ p_(T), the Benjamini-Hochberg adjusted p-values (as computed by p.adjust with method="fdr" on the next slide) are
  p_(i)^FDR = min over j ≥ i of min(T · p_(j) / j, 1)
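A minimal check that this formula matches what p.adjust(..., method = "fdr") computes; the p-values below are made up for illustration.

  # Manual Benjamini-Hochberg adjustment vs p.adjust
  p <- c(0.001, 0.008, 0.039, 0.041, 0.09, 0.2, 0.6)
  T <- length(p)
  p_sorted   <- sort(p)
  adj_sorted <- sapply(seq_len(T), function(i)
    min(1, min(T * p_sorted[i:T] / (i:T))))       # min over j >= i of T * p_(j) / j, capped at 1
  adj <- adj_sorted[rank(p, ties.method = "first")]  # back in the original order

  adj
  p.adjust(p, method = "fdr")                     # identical result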

Slide 19 (1-17-06): Effects
  > FDRpvalue  <- p.adjust(TPvalue, method = "fdr")          # Benjamini-Hochberg adjusted p-values
  > BONFpvalue <- p.adjust(TPvalue, method = "bonferroni")   # Bonferroni adjusted p-values

