1 Lecture 4: Statistics Review II
Date: 9/5/02
• Hypothesis tests: power
• Estimation: likelihood, moment estimation, least squares
• Statistical properties of estimators

2 Types of Errors
• False positive (Type I): the probability (α) that H0 is rejected when it is true.
• False negative (Type II): the probability (β) of accepting H0 when it is false.

            Accept H0       Reject H0
H0 true     1 − α           Type I = α
H0 false    Type II = β     power = 1 − β

3 Example: Error
H0: central chi-square. HA: non-central chi-square with non-centrality parameter E(G).
(Figure: the two densities, with the critical value marking off the areas α, 1 − α, β, and 1 − β.)

4 Power of a Test
• Definition: The statistical power of a test is defined as 1 − β.
• The power is only defined when HA is specified, the experimental conditions (e.g. sample size) are known, and the significance level α has been selected.
• Example: calculate the sample size needed to obtain a particular linkage detection power.
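Where the statistic is chi-square under H0 and non-central chi-square under HA, as in the example above, the power can be computed directly. A minimal sketch in Python; the significance level, degrees of freedom, and non-centrality value are illustrative assumptions:

```python
# Power of a chi-square test whose statistic is non-central under H_A.
from scipy.stats import chi2, ncx2

alpha = 0.05               # significance level (Type I error rate)
df = 1                     # degrees of freedom of the test statistic
ncp = 8.0                  # assumed non-centrality E(G) under H_A

crit = chi2.ppf(1 - alpha, df)    # rejection threshold under H_0
power = ncx2.sf(crit, df, ncp)    # P(reject H_0 | H_A) = 1 - beta
print(f"critical value {crit:.2f}, power {power:.3f}")
```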

5 Estimation
• Hypothesis testing lets us draw a qualitative conclusion: whether or not a statement (H0) is tenable.
• Often we want to make a quantitative inference, e.g. an actual estimate of the recombination fraction, not just evidence that genes are linked.

6 Estimation
• Definition: Point estimation is the process of estimating a specific value of the parameter θ based on observed data X1, X2, …, Xn.
• Definition: Interval estimation is the process of estimating upper and lower limits within which the unknown parameter occurs with a certain probability.

7 Maximum Likelihood Estimation
• Definition: The maximum likelihood estimator is the value of θ that maximizes the likelihood function.
• The MLE is obtained by:
  • analytically solving the score equation S(θ) = 0
  • grid search
  • Newton-Raphson iteration
  • the Expectation and Maximization (EM) algorithm

8 MLE: Grid Search
• Plot the likelihood L or the log likelihood l against the parameter throughout the parameter space.
• Obtain the MLE by visual inspection or by a search algorithm.

9 MLE: Grid Search Algorithm
1. Make an initial estimate θ0 and pick a step size δ.
2. At step n, evaluate L (or l) on both sides of θn.
3. Choose θn+1 = θn + δ if L is increasing to the right; otherwise choose θn+1 = θn − δ.
4. Repeat steps 2 and 3 until the likelihood no longer advances. Then choose a smaller δ and repeat steps 2, 3, and 4 until the desired accuracy is met.
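A minimal sketch of this algorithm for one parameter; the toy binomial log likelihood (30 successes out of 100) and the starting values are illustrative assumptions:

```python
import math

def log_lik(theta):
    # toy log likelihood: Binomial(n=100) with 30 successes; the MLE is 0.3
    return 30 * math.log(theta) + 70 * math.log(1 - theta)

theta, delta = 0.5, 0.1                 # step 1: initial estimate, step size
while delta > 1e-6:
    up, down = theta + delta, theta - delta
    if 0 < up < 1 and log_lik(up) > log_lik(theta):
        theta = up                      # step 3: L increasing to the right
    elif 0 < down < 1 and log_lik(down) > log_lik(theta):
        theta = down                    # ... or increasing to the left
    else:
        delta /= 10                     # step 4: no advance, refine the grid
print(theta)                            # converges to ~0.3
```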

10 MLE: Grid Search Problems
• Multiple peaks can cause the search to miss the global maximum of the likelihood.
• Solving for several parameters simultaneously is computationally intensive, and the result is difficult to inspect visually when there are more than two parameters.

11 Example: 2-Locus with Non-Penetrant Allele
• Let θ be the recombination fraction between marker A and the gene of interest B.
• Let φ be the probability that the allele of the gene of interest (f) fails to be detected at the phenotype level (i.e. 1 − φ is the penetrance).
• Cross +F/+F × −f/−f.
• Score the gametes of an F1 +F/−f individual for the +/− phenotype and the P/p phenotype, where P means F or non-penetrant f, and p means penetrant f.
(Loci: A with alleles + or −; B with alleles F or f.)

12 Experimental Data
• P(+F gamete) = P(−f gamete) = 0.5(1 − θ)
• P(−F gamete) = P(+f gamete) = 0.5θ
• P(+P gamete) = 0.5(1 − θ) + 0.5θφ
• P(−P gamete) = 0.5θ + 0.5(1 − θ)φ
• P(+p gamete) = 0.5θ(1 − φ)
• P(−p gamete) = 0.5(1 − θ)(1 − φ)
• Observe the counts n+P, n−P, n+p, n−p.

13 Experimental Data: Log Likelihood
l(θ, φ) = n+P log[0.5(1 − θ) + 0.5θφ] + n−P log[0.5θ + 0.5(1 − θ)φ] + n+p log[0.5θ(1 − φ)] + n−p log[0.5(1 − θ)(1 − φ)]

14 Experimental Data: Grid Search
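As a sketch, the same search in code over (θ, φ), using the log likelihood from the previous slide and the gamete counts tabulated in the bootstrap example later in this lecture; the grid resolution is an arbitrary choice:

```python
import numpy as np

# counts n(+P), n(-P), n(+p), n(-p) from the bootstrap-example table
counts = {'+P': 168, '-P': 52, '+p': 3, '-p': 163}

def log_lik(theta, phi):
    p = {'+P': 0.5 * (1 - theta) + 0.5 * theta * phi,
         '-P': 0.5 * theta + 0.5 * (1 - theta) * phi,
         '+p': 0.5 * theta * (1 - phi),
         '-p': 0.5 * (1 - theta) * (1 - phi)}
    return sum(counts[g] * np.log(p[g]) for g in counts)

thetas = np.linspace(0.001, 0.499, 250)   # recombination fraction <= 0.5
phis = np.linspace(0.001, 0.999, 250)
ll = np.array([[log_lik(t, f) for f in phis] for t in thetas])
i, j = np.unravel_index(ll.argmax(), ll.shape)
print(f"theta ~ {thetas[i]:.3f}, phi ~ {phis[j]:.3f}")
```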

15 Newton-Raphson: One Parameter
• Let S(θ) be the score, the derivative of the log likelihood.
• The MLE θ̂ is obtained through the equation S(θ̂) = 0.
• A Taylor expansion of S(θ) for θn near the MLE gives S(θ̂) ≈ S(θn) + (θ̂ − θn)S′(θn) = 0, hence the iteration θn+1 = θn − S(θn)/S′(θn).
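A minimal sketch of this iteration, reusing the toy binomial log likelihood from the grid-search sketch; the derivatives are taken numerically, and the starting value and tolerance are arbitrary choices:

```python
import math

def log_lik(t):
    return 30 * math.log(t) + 70 * math.log(1 - t)   # toy binomial, MLE 0.3

def score(t, h=1e-6):
    return (log_lik(t + h) - log_lik(t - h)) / (2 * h)

theta = 0.5
for _ in range(100):
    s = score(theta)
    s_prime = (score(theta + 1e-5) - score(theta - 1e-5)) / 2e-5
    step = -s / s_prime                  # theta_{n+1} = theta_n - S/S'
    theta = min(max(theta + step, 1e-6), 1 - 1e-6)   # bound new estimates
    if abs(step) < 1e-10:
        break
print(theta)                             # converges to 0.3
```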

16 Newton-Raphson: Analysis
• NR fits a parabola to the likelihood function at the current parameter estimate and takes the new estimate at the maximum of that parabola.
• NR may fail when there are multiple peaks.
• NR may fail when the information is zero (when the estimate is at the extremes of the parameter space).
• Recommendations: use multiple initial values; bound each new estimate.

17 Newton-Raphson: Multiple Parameters
• The update is θm+1 = θm + I⁻¹(θm) S(θm).
• N is the total sample size. S(θm) is the score vector evaluated at θm. I⁻¹(θm) is the inverse information matrix evaluated at θm.

18 EM Algorithm: Incomplete Data
• The notion of incomplete data: several different gamete pairs (cells below; rows and columns are gametes) produce the same observed genotype, so the underlying pair is not directly observed.

        AB       ab       Ab       aB
AB      AB/AB    AB/ab    AB/Ab    AB/aB
ab      ab/AB    ab/ab    ab/Ab    ab/aB
Ab      Ab/AB    Ab/ab    Ab/Ab    Ab/aB
aB      aB/AB    aB/ab    aB/Ab    aB/aB

19 EM Algorithm: Example of Incomplete Data
• A +P gamete may result from a nonrecombinant +F or from a recombinant, non-penetrant +f.
• A +p gamete can only result from a penetrant, recombinant +f.
• A −P gamete can result from a recombinant −F or from a nonrecombinant, non-penetrant −f.
• A −p gamete can result only from a nonrecombinant, penetrant −f.

20 EM Algorithm
• Make an initial guess θ0.
• Expectation step: pretend that θn from iteration n is true and estimate the complete data. This usually requires the distribution of the complete data conditional on the observed data, for example P(recombinant | observed phenotype).
• Maximization step: compute the maximum likelihood estimate from the completed data of step n.
• Repeat the E and M steps until the likelihood converges.

21 Example: E Step
• E step: pretend (θn, φn) are true and compute the expected complete data.
• Expected recombinants among the n+P gametes: n+P θnφn / [(1 − θn) + θnφn], since P(recombinant | +P) = θφ / [(1 − θ) + θφ].
• Expected recombinants among the n−P gametes: n−P θn / [θn + (1 − θn)φn].
• All n+p gametes are recombinant; all n−p gametes are nonrecombinant.
• The recombinant +P gametes and the nonrecombinant −P gametes are also the non-penetrant f carriers.

22 Example: M Step
• M step: treat the expected counts as complete data and maximize.
• θn+1 = (expected total number of recombinant gametes) / n
• φn+1 = (expected number of non-penetrant f gametes) / (expected total number of f gametes)
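Putting the two steps together for this example, a sketch using the gamete counts from the bootstrap example; the symbols θ (recombination fraction) and φ (non-penetrance) and the starting values are assumptions carried over from the earlier slides:

```python
# EM for (theta, phi); the conditional expectations follow from the gamete
# probabilities on the "Experimental Data" slide.
n_pP, n_mP, n_pp, n_mp = 168, 52, 3, 163      # +P, -P, +p, -p
n = n_pP + n_mP + n_pp + n_mp

theta, phi = 0.1, 0.1                          # arbitrary initial guesses
for _ in range(500):
    # E step: expected recombinant / non-penetrant counts given (theta, phi)
    r_pP = n_pP * theta * phi / ((1 - theta) + theta * phi)  # +P recombinants
    r_mP = n_mP * theta / (theta + (1 - theta) * phi)        # -P recombinants
    f_pP = r_pP                    # non-penetrant +f hidden in the +P class
    f_mP = n_mP - r_mP             # non-penetrant -f hidden in the -P class
    # M step: complete-data MLEs (+p all recombinant, -p all nonrecombinant)
    theta = (r_pP + r_mP + n_pp) / n
    phi = (f_pP + f_mP) / (f_pP + f_mP + n_pp + n_mp)   # masked f / all f
print(theta, phi)
```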

23 Moment Estimation
• Obtain equations for the population moments in terms of the parameters to be estimated.
• Substitute the sample moments and solve for the parameters.
• For example, for the binomial distribution the first moment is m1 = np, so p̂ = m1/n.
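As a minimal sketch, the binomial case with made-up data:

```python
x = [28, 33, 31, 26, 30]     # observed counts, assumed Binomial(n=100, p)
n = 100
m1 = sum(x) / len(x)         # first sample moment (the mean)
p_hat = m1 / n               # solve m1 = n*p for p
print(p_hat)                 # 0.296
```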

24 Example: Moment Estimation

25 Moment Estimation: Problems
• The large-sample properties of ML estimators are usually better than those of the corresponding moment estimators.
• Sometimes the solution of the moment equations is not unique.

26 Least Squares Estimation
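As a generic sketch of least squares estimation, a straight-line fit via the normal equations, beta = (X'X)⁻¹X'y; the simulated data and true coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 + 3.0 * x + rng.normal(0, 0.1, size=x.size)  # intercept 2, slope 3

X = np.column_stack([np.ones_like(x), x])    # design matrix with intercept
beta = np.linalg.solve(X.T @ X, X.T @ y)     # least squares estimates
print(beta)                                  # close to [2, 3]
```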

27 Variance of an Estimator
• Suppose k independent estimates θ̂1, …, θ̂k are available for θ: the variance can then be estimated by the sample variance s² = Σi (θ̂i − θ̄)² / (k − 1).
• Suppose you have a large sample; then the variance of the MLE is approximately I⁻¹(θ̂), the inverse of the information. This is the Cramér-Rao lower bound for the variance.
• Empirical estimates can also be obtained using resampling techniques.
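A sketch of the large-sample approximation: estimate the observed information numerically as minus the second derivative of the log likelihood at the MLE, and invert it; the binomial log likelihood is again an illustrative assumption:

```python
import math

def log_lik(t):
    return 30 * math.log(t) + 70 * math.log(1 - t)   # binomial: 30 of 100

theta_hat, h = 0.3, 1e-5
info = -(log_lik(theta_hat + h) - 2 * log_lik(theta_hat)
         + log_lik(theta_hat - h)) / h**2            # observed information
print(1 / info)                  # ~0.0021, matching the analytic p(1-p)/n
```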

28 Variance: Linear Estimator
• For a linear estimator T = Σi ai Xi, Var(T) = Σi ai² Var(Xi) + 2 Σi<j ai aj Cov(Xi, Xj).

29 Variance: General Function f(θ1, θ2, …, θk)
• By the delta method, Var(f) ≈ Σi (∂f/∂θi)² Var(θ̂i) + 2 Σi<j (∂f/∂θi)(∂f/∂θj) Cov(θ̂i, θ̂j), with the derivatives evaluated at the estimates.

30 Bias
• The mean squared error (MSE) is defined as MSE(θ̂) = E[(θ̂ − θ)²] = Var(θ̂) + [bias(θ̂)]², where bias(θ̂) = E(θ̂) − θ.
• If an estimator is unbiased, the MSE and the variance are the same.

31 Estimating Bias
• Bootstrap: the bias is estimated as the mean of the bootstrap estimates minus the original estimate, bias ≈ (1/B) Σi θ̂*i − θ̂, where θ̂*i is the estimator for bootstrap trial i and θ̂ is the original estimate.
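A minimal sketch of this bootstrap bias estimate; the statistic (the n-denominator standard deviation, which is biased) and the data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0, 2, size=30)
stat = lambda d: d.std()                   # deliberately biased estimator

theta_hat = stat(data)                     # original estimate
boot = np.array([stat(rng.choice(data, size=data.size, replace=True))
                 for _ in range(2000)])    # bootstrap estimates theta*_i
bias = boot.mean() - theta_hat             # mean(theta*_i) - theta_hat
print(theta_hat - bias)                    # bias-corrected estimate
```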

32 Confidence Interval
• Because of sampling error, the estimate is not exactly the true parameter value θ.
• A confidence interval (θL, θU) is symmetric if P(θ < θL) = P(θ > θU) = α/2.
• A confidence interval is non-symmetric if the two tail probabilities differ (while still summing to α).
• A confidence interval is one-sided if all of α is placed in one tail, so only one limit is estimated.

33 Confidence Interval: Normal Approximation I
• We need a pivotal quantity, i.e. a quantity that depends on the data and the parameters but whose distribution does not.
• If the estimate θ̂ is unbiased and normally distributed with variance σθ̂², then the pivotal quantity is Z = (θ̂ − θ)/σθ̂ ~ N(0, 1), which gives the interval θ̂ ± z σθ̂, where z is the 1 − α/2 quantile of the standard normal.

34 Confidence Interval: Normal Approximation II
• The MLE is asymptotically normally distributed, θ̂ ≈ N(θ, I⁻¹(θ̂)), so an approximate interval is θ̂ ± z √I⁻¹(θ̂).

35 Confidence Interval: Nonparametric Approximation
• Percentile method: generate B bootstrap estimates of the parameter and take their α/2 and 1 − α/2 quantiles as the lower and upper limits of the interval.

36 Bootstrap Example
• Generate a multinomial random sample with the given proportions B times, producing B bootstrap datasets. Estimate the parameters θ and φ from each.

Gamete       +P      +p      −P      −p
Count        168     3       52      163
Proportion   0.44    0.01    0.13    0.42
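A sketch of this bootstrap with the tabulated counts, re-estimating (θ, φ) for each resample by numerical maximization and forming percentile-method intervals as on the previous slide; B, the starting values, and the optimizer are my choices:

```python
import numpy as np
from scipy.optimize import minimize

counts = np.array([168, 3, 52, 163])          # +P, +p, -P, -p
n, B = counts.sum(), 500
rng = np.random.default_rng(2)

def neg_log_lik(par, c):
    theta, phi = par
    p = np.array([0.5 * (1 - theta) + 0.5 * theta * phi,   # P(+P)
                  0.5 * theta * (1 - phi),                 # P(+p)
                  0.5 * theta + 0.5 * (1 - theta) * phi,   # P(-P)
                  0.5 * (1 - theta) * (1 - phi)])          # P(-p)
    return -np.sum(c * np.log(p))

def mle(c):
    return minimize(neg_log_lik, x0=[0.2, 0.2], args=(c,), method="L-BFGS-B",
                    bounds=[(1e-6, 0.5), (1e-6, 1 - 1e-6)]).x

boot = np.array([mle(rng.multinomial(n, counts / n)) for _ in range(B)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("theta CI", (lo[0], hi[0]), "phi CI", (lo[1], hi[1]))
```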

37 Confidence Interval: Likelihood Approach
• Let Lmax be the maximum likelihood for a given model. Find the parameter values θL and θU such that log Lmax − log L(θ) = 2. Then (θL, θU) serves as a confidence interval.
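A sketch of this approach for a single parameter: scan a grid and keep the values whose log likelihood is within 2 units of the maximum; the binomial likelihood is an illustrative assumption:

```python
import numpy as np

x, n = 30, 100                                   # toy binomial data
grid = np.linspace(0.001, 0.999, 9999)
ll = x * np.log(grid) + (n - x) * np.log(1 - grid)
inside = grid[ll >= ll.max() - 2]                # log L_max - log L <= 2
print(inside.min(), inside.max())                # (theta_L, theta_U)
```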

38 LOD Score Support
• The LOD score support for a confidence interval is log10 Lmax − log10 Lθ, where Lθ is the likelihood at the limit values of the parameter.
• In practice, plot the LOD score support for various values of the parameter and choose the upper and lower bounds such that the LOD score support is 1.

39 Choosing Good Confidence Intervals
• The actual coverage probability should be close to the confidence coefficient.
• The interval should be biologically relevant. For example, θ̂ ∈ (0.1, 0.6) is not: a recombination fraction cannot exceed 0.5.

40 Good Estimator: Consistent
• An estimator is mean squared error consistent if the MSE approaches zero as the sample size approaches infinity.
• An estimator is simple consistent if P(|θ̂n − θ| < ε) → 1 as n → ∞ for every ε > 0.

41 Good Estimator: Unbiased
• An unbiased estimator is usually better than a biased one, but this may not always be true: if the variance is larger, what have we gained?
• There are bootstrap techniques for obtaining a bias-corrected estimate. They are computationally more intensive than the plain bootstrap, but sometimes worth it.

42 Good Estimator: Asymptotically Normal
• If the pivotal quantity (θ̂ − θ)/√Var(θ̂) is normal with mean 0 and variance 1 as the sample size goes to infinity, this is a very convenient property of the estimator.

43 Good Estimator: Confidence Interval
• A good estimator should come with a good way to obtain a confidence interval. MLEs are good in this respect if the sample size is large enough.

44 Sample Size for Power
• To reach power 1 − β at significance level α, the expected test statistic must exceed the non-centrality required by α and β: need E(G) large enough.
• Since E(G) = n E(Gunit), solve for the required sample size n.
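A sketch of this calculation for a chi-square statistic: increase n until the non-central power reaches the target; the degrees of freedom and the per-observation non-centrality E(Gunit) are illustrative assumptions:

```python
from scipy.stats import chi2, ncx2

alpha, target_power, df = 0.05, 0.90, 1
e_g_unit = 0.05                       # assumed non-centrality per observation

crit = chi2.ppf(1 - alpha, df)
n = 1
while ncx2.sf(crit, df, n * e_g_unit) < target_power:
    n += 1
print(n)                              # smallest n giving power >= 0.90
```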

45 Sample Size for Target Confidence Interval
• The confidence interval by normal approximation is θ̂ ± z σθ̂, where z is the 1 − α/2 normal quantile.
• The bigger the range, the less precise the confidence interval.
• Suppose we wish to have the full width bounded: 2z σθ̂ ≤ δ.

46 Sample Size for Target Confidence Interval II
• Then, since σθ̂² = σunit²/n (with σunit² the variance contributed by a single observation), the requirement gives n ≥ (2z)² σunit² / δ².
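The same requirement in code; the per-observation variance and the target width δ are illustrative assumptions:

```python
import math
from scipy.stats import norm

alpha, delta = 0.05, 0.02        # confidence level and target full width
var_unit = 0.25                  # assumed per-observation variance

z = norm.ppf(1 - alpha / 2)
n = math.ceil((2 * z) ** 2 * var_unit / delta ** 2)
print(n)                         # smallest n with 2*z*sigma_hat <= delta
```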

47 Summary
• Distributions
• Likelihood and Maximum Likelihood Estimation
• Hypothesis Tests
• Confidence Intervals
• Comparison of estimators
• Sample size calculations

