DATA ANALYSIS Module Code: CA660 Lecture Block 5.

1 DATA ANALYSIS Module Code: CA660 Lecture Block 5

2 Estimator validity – how good? ESTIMATION/H.T. Rationale, Other Features & Alternatives
Basis: statistical properties (variance, bias, distributional form etc.).
Bias(θ̂) = E[θ̂] − θ, where θ̂ is the point estimate and θ the true parameter. Bias can be positive, negative or zero.
Permits calculation of other properties, e.g. the mean squared error M.S.E. = E[(θ̂ − θ)²] = Var(θ̂) + Bias², where this quantity and the variance of the estimator are the same if the estimator is unbiased. Obtained by both analytical and "bootstrap" methods.
Similarly, for continuous variables, the bias is E[θ̂] − θ with the expectation taken as an integral over the density of θ̂; or, for b bootstrap replications, Bias ≈ (1/b) Σᵢ θ̂*ᵢ − θ̂.
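The bootstrap bias estimate mentioned above can be sketched in a few lines of Python. This is an illustration, not the lecture's own code; the estimator (a plug-in variance that divides by n, which is known to be biased downward) and the data are invented for the demonstration.

```python
import random
import statistics

def bootstrap_bias(sample, estimator, b=2000, seed=0):
    """Approximate Bias(theta_hat) as (1/b) * sum of bootstrap estimates
    minus the estimate on the original sample."""
    rng = random.Random(seed)
    theta_hat = estimator(sample)
    n = len(sample)
    boot = [estimator([rng.choice(sample) for _ in range(n)]) for _ in range(b)]
    return statistics.mean(boot) - theta_hat

def var_mle(xs):
    """Plug-in variance (divides by n): biased low, so the bootstrap
    should return a negative bias estimate."""
    m = statistics.mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

data = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6]  # hypothetical sample
print(bootstrap_bias(data, var_mle))
```

For this estimator the true bias is −σ²/n, so the printed value should come out small and negative.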

3 Estimation/H.T. Rationale etc. – contd.
For any estimator, even an unbiased one, there is a difference between the estimator and the true parameter = sampling error. Hence the need for probability statements around θ̂:
C.L. for the estimator = (T₁, T₂), with P{T₁ ≤ θ ≤ T₂} = γ, similarly to before, where γ is the confidence coefficient. If the estimator is unbiased, in other words, γ = P{that the true parameter falls into the interval}.
In general, confidence intervals can be determined using parametric and non-parametric approaches, where parametric construction needs a pivotal quantity = a variable which is a function of the parameter and the data, but whose distribution does not depend on the parameter.

4 Related issues in Hypothesis Testing – POWER of the TEST
Probability of False Positive and False Negative errors, e.g. a false positive if linkage between two genes is declared when they are really independent.

Fact \ Test Result    Accept H₀                               Reject H₀
H₀ True               1 − α                                   False positive = Type I error = α
H₀ False              False negative = Type II error = β      Power of the Test = 1 − β

Power of the Test or Statistical Power = probability of rejecting H₀ when correct to do so. (Relates strictly to the alternative hypothesis and β.)

5 Example on Type II Error and Power
Suppose we have a variable with known population S.D. σ = 3.6. From the population, a random sample of size n = 100 is used to test H₀: μ₀ = 17.5 at α = 0.05. The critical values of the C.I. for a 2-sided test are
x̄ᵢ = μ₀ ± U_{α/2}(σ/√n), where U_{α/2} = 1.96 for α = 0.05, i = upper or lower limit, and μ₀ = mean under H₀.
So substituting our values gives: 17.5 ± 1.96(3.6/√100), i.e. critical values 16.79 and 18.21.
But, if H₀ is false, μ is not 17.5 but some other value … e.g. say 16.5?

6 Example contd.
Want the new distribution with mean μ = 16.5, i.e. the new distribution is shifted w.r.t. the old. Thus the probability of the Type II error – failing to reject a false H₀ – is the area under the curve in the new distribution which overlaps the non-rejection region specified under H₀. So, this is
β = P{16.79 ≤ x̄ ≤ 18.21 | μ = 16.5} = Φ((18.21 − 16.5)/0.36) − Φ((16.79 − 16.5)/0.36) ≈ 1 − 0.79 = 0.21.
Thus, the probability of taking the appropriate action (rejecting H₀ when this is false) is 1 − β ≈ 0.79 = Power.
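The β and power figures in this worked example can be checked numerically. A sketch using Python's `statistics.NormalDist`, with the example's values (σ = 3.6, n = 100, μ₀ = 17.5, shifted mean μ₁ = 16.5):

```python
from statistics import NormalDist

# Known-sigma z-test setup from the worked example
sigma, n, alpha = 3.6, 100, 0.05
mu0, mu1 = 17.5, 16.5            # hypothesised mean vs. actual (shifted) mean
se = sigma / n ** 0.5            # standard error of x-bar = 0.36
z = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical deviate, ~1.96

lower, upper = mu0 - z * se, mu0 + z * se  # non-rejection region for x-bar
shifted = NormalDist(mu1, se)              # sampling distribution under H1

# Type II error = probability the shifted distribution lands in the
# non-rejection region; power is its complement
beta = shifted.cdf(upper) - shifted.cdf(lower)
power = 1 - beta
print(round(beta, 3), round(power, 3))
```

This reproduces the slide's area calculation: most of the shifted distribution still overlaps the non-rejection region, so β is substantial even for a one-unit shift.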

7 Shifting the distributions
(Figure: sampling distributions under H₀ and the shifted alternative, showing the non-rejection region, the α/2 rejection regions, and the overlap area β.)

8 Example contd.
Power under the alternative μ for given α: for each possible value of μ₁, power = 1 − β under H₁ when H₀ is false.
Balancing α and β: β tends to be large compared with α unless the original hypothesis is way off. So a decision based on a rejected H₀ is more conclusive than one based on H₀ not rejected, as the probability of being wrong is larger in the latter case.

9 SAMPLE SIZE DETERMINATION
Example: Suppose we wanted to design a genetic mapping experiment, or a comparative product survey. Conventional experimental design (ANOVA), genetic marker type (or product type) and sample size are considered. Questions might include:
What is the statistical power to detect linkage for a certain progeny size? (or common 'shared' consumer preferences, say)
What is the precision of the estimated R.F. (or grouped preferences) when the sample size is N?
Sample size needed for a specific Statistical Power.
Sample size needed for a specific Confidence Interval.

10 Sample size – calculation based on C.I.
For some parameter θ, where the Normal approximation approach is valid, the C.I. limits are θ̂ ± U(σ/√n), where U = the standardized normal deviate (S.N.D.) and the range is from lower to upper limit; for 95% limits, U = 1.96.
The half-width d = U(σ/√n) is just a precision measurement for the estimator: given a true parameter θ, it bounds |θ̂ − θ| at the stated confidence. So manipulation gives: n = (Uσ/d)².
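The manipulation above, n = (Uσ/d)², is easy to wrap in a small helper. A sketch (the rounding-up to the next integer is our addition, since n must be a whole number of observations):

```python
from math import ceil
from statistics import NormalDist

def sample_size_for_ci(sigma, d, confidence=0.95):
    """Smallest n so a two-sided CI for the mean has half-width <= d,
    assuming known sigma and a Normal approximation: n = (U*sigma/d)^2."""
    u = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # S.N.D. for the level
    return ceil((u * sigma / d) ** 2)

# e.g. the sigma = 3.6 of the earlier example, targeting half-width 0.5
print(sample_size_for_ci(sigma=3.6, d=0.5))
```

Tightening d or raising the confidence level both inflate n quadratically through U/d, which is the practical point of the formula.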

11 Sample size – calculation based on Power (firstly, what affects power?)
Suppose α = 0.05, σ = 3.5, n = 100, testing H₀: μ₀ = 25 when the true μ = 24; assume H₁: μ₁ < 25.
One-tailed test (U = 1.645): critical value = 25 − 1.645(3.5/√100) = 24.42; the shift is small, so the lower limit of the original distribution virtually coincides with the actual sample value.
Under H₁: Power = P{x̄ < 24.42 | μ = 24} = Φ((24.42 − 24)/0.35) = Φ(1.21) = 0.89; correct decision 89% of the time.
Note: A two-sided test at α = 0.05 gives critical values under H₀ of 25 ± 1.96(0.35) = (24.31, 25.69); equivalently, under H₁, U_L = 0.90, U_U = 4.82.
In general: substitute for the limits and then recalculate for the new μ = μ₁.
So, P{do not reject H₀: μ = 25 when the true mean μ = 24} = Φ(4.82) − Φ(0.90) ≈ 0.18 = β (Type II). Thus, Power = 1 − β ≈ 0.82.

12 Sample Size and Power contd.
Suppose n = 25, other values the same. 1-tailed now: critical value = 25 − 1.645(3.5/√25) = 23.85, so Power = Φ((23.85 − 24)/0.7) = Φ(−0.22) ≈ 0.41.
Suppose α = 0.01 (n = 100 again): critical values 2-tailed are 25 ± 2.576(0.35) = (24.10, 25.90), with, equivalently, U_L = 0.28, U_U = 5.43. So, P{do not reject H₀: μ = 25 when the true mean μ = 24} = Φ(5.43) − Φ(0.28) ≈ 0.39, and Power ≈ 0.61.
FACTORS: α, n and type of test (1- or 2-sided), and the true parameter value, where subscripts 0 and 1 refer to null and alternative, and the α value is taken as 'generic' (either all in one tail, 1-sided test/limit, or split between two, 2-sided test/limit).
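The effect of n on power in the one-tailed case can be sketched as a small function, using the slide's parameters (μ₀ = 25, μ₁ = 24, σ = 3.5):

```python
from statistics import NormalDist

def power_one_tailed(mu0, mu1, sigma, n, alpha=0.05):
    """Power of a one-tailed (lower) z-test of H0: mu = mu0
    against H1: mu = mu1 < mu0, with known sigma."""
    se = sigma / n ** 0.5
    # Reject H0 when x-bar falls below this critical value
    crit = mu0 - NormalDist().inv_cdf(1 - alpha) * se
    # Power = P{x-bar < crit} when the true mean is mu1
    return NormalDist(mu1, se).cdf(crit)

print(power_one_tailed(25, 24, 3.5, 100))  # the slide's n = 100 case
print(power_one_tailed(25, 24, 3.5, 25))   # power drops sharply at n = 25
```

Running it shows the slide's factors at work: shrinking n from 100 to 25 doubles the standard error and cuts power from roughly 0.89 to under a half.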

13 'Other' Estimation/Test Methods: NON-PARAMETRICS / DISTRIBUTION-FREE
Standard pdfs cannot be assumed for data, sampling distributions or test statistics – uncertain due to small or unreliable data sets, non-independence etc. Parameter estimation is not the key issue.
Example/empirical basis. Weaker assumptions. Less 'information' used, e.g. the median. Simple hypothesis testing as opposed to estimation. Power and efficiency are issues. Counts – nominal, ordinal (the natural non-parametric data type/measure).
Nonparametric Hypothesis Tests (parallel the parametric case): e.g. H.T. of locus orders requires a complex 'test statistic' distribution, so we need to construct an empirical pdf. Usually, assume the null hypothesis and use re-sampling techniques, e.g. permutation tests, bootstrap, jack-knife.
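A minimal sketch of one of the re-sampling techniques named above, a two-sample permutation test on the difference of means. The data are invented; the point is the mechanics of building the empirical null distribution by shuffling labels under H₀:

```python
import random
import statistics

def permutation_test(x, y, n_perm=5000, seed=1):
    """Two-sided permutation test for a difference in means.

    Under H0 the group labels are exchangeable, so the null distribution
    of the test statistic is built empirically by re-shuffling the pooled
    data and re-splitting it into groups of the original sizes."""
    rng = random.Random(seed)
    observed = statistics.mean(x) - statistics.mean(y)
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:len(x)]) - statistics.mean(pooled[len(x):])
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm  # empirical two-sided p-value

x = [12.1, 11.8, 12.4, 12.9, 11.6]  # hypothetical group A
y = [10.2, 10.8, 10.5, 9.9, 10.7]   # hypothetical group B
print(permutation_test(x, y))
```

No distributional form is assumed for the data, which is exactly the distribution-free setting the slide describes; the cost is computation and, typically, some power relative to a correct parametric test.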

14 LIKELIHOOD METHOD – DEFINITIONS
Suppose X can take a set of values x₁, x₂, … with P{X = x} = f(x; θ), where θ is a vector of parameters affecting the observed x's. So we can say something about P{X} if we know θ. But this is not usually the case, i.e. we observe the x's, knowing nothing of θ.
Assuming the x's are a random sample of size n from a known distribution, then the likelihood for θ is L(θ | x₁, …, xₙ) = f(x₁; θ) f(x₂; θ) ⋯ f(xₙ; θ).
Finding the most likely θ (or θ's) for the given data is equivalent to maximising the likelihood function (where the M.L.E. is θ̂).

15 LIKELIHOOD – SCORE and INFO. CONTENT
The log-likelihood is a support function S(θ), evaluated at a point, θ′ say. The support function for any other point, say θ′′, can also be obtained – the basis for computational iterations for the M.L.E., e.g. using Newton-Raphson.
SCORE = first derivative of the support function w.r.t. the parameter, dS(θ)/dθ, or, numerically/discretely, [S(θ + Δθ) − S(θ)] / Δθ.
INFORMATION CONTENT = −d²S(θ)/dθ², evaluated at (i) an arbitrary point = Observed Info.; (ii) the support function maximum = Expected Info.

16 Example – Binomial variable (e.g. use of Score, Expected Info. Content to determine type of mapping population and sample size for genomics experiments)
Likelihood function: L(p) = C(n, x) pˣ(1 − p)ⁿ⁻ˣ.
Log-likelihood: S(p) = log C(n, x) + x log p + (n − x) log(1 − p). Assume n constant, so the first term can be ignored – invariant for given x.
Maximising w.r.t. p, i.e. setting the derivative of S w.r.t. p to 0:
SCORE = dS/dp = x/p − (n − x)/(1 − p) = 0, so the M.L.E. is p̂ = x/n.
How does it work, why bother?
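"Why bother" with the score when p̂ = x/n is available in closed form? Because the same machinery works when there is no closed form: Newton-Raphson iterates on the score using the second derivative, exactly as the earlier slide suggests. A sketch on the binomial case, where we can check the answer against x/n (the starting value p0 = 0.3 is arbitrary):

```python
def binomial_mle(x, n, p0=0.3, tol=1e-10):
    """Newton-Raphson on the binomial score function.

    Score:   S'(p)  =  x/p - (n - x)/(1 - p)
    Second:  S''(p) = -x/p^2 - (n - x)/(1 - p)^2
    Iterating p <- p - S'(p)/S''(p) converges to the MLE."""
    p = p0
    for _ in range(100):
        score = x / p - (n - x) / (1 - p)
        second = -x / p**2 - (n - x) / (1 - p) ** 2
        step = score / second
        p -= step
        if abs(step) < tol:
            break
    return p

print(binomial_mle(x=37, n=100))  # converges to 0.37 = x/n
```

Note that −S″(p) is the observed information of the previous slide, so each Newton step is score divided by information.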

17 Numerical Examples
See some data sets and test examples:
Basics: ep1&type=pdf
Context: all sections useful, but especially the examples, sections 1–3 and 6.
Also, e.g. for R and for SPSS – see e.g. the tutorial for data sets, or %20Modeling%20in%20SPSS.pdf _Linear%20Mixed%20Effects %20Modeling%20in%20SPSS.pdf (general, for mixed Linear Models).
For SAS – of possible interest also for Newton-Raphson: estimation-in-sasiml/

18 Bayesian Estimation – in context
Parametric estimation – in the "classical approach", f(x; θ) for a r.v. X of density f(x), with θ the unknown parameter → dependency of the distribution on the parameter to be estimated.
Bayesian estimation – θ is a random variable, so we can consider the density as conditional and write f(x | θ).
Given a random sample X₁, X₂, …, Xₙ, the sample random variables are jointly distributed with the parameter r.v. θ. So, the joint pdf is f(x₁, …, xₙ, θ) = f(x₁, …, xₙ | θ) π(θ).
Objective – to form an estimator that gives a value of θ dependent on observations of the sample random variables. Thus the conditional density of θ given X₁, X₂, …, Xₙ also plays a role. This is the posterior density, f(θ | x₁, …, xₙ).

19 Bayes – contd.
Posterior density relationship – prior and posterior:
f(θ | x₁, …, xₙ) = f(x₁, …, xₙ | θ) π(θ) / ∫ f(x₁, …, xₙ | θ) π(θ) dθ, where π(θ) is the prior density of θ.
Value: close to the M.L.E. for large n, or for small n if the sample values are compatible with the prior distribution. Also has a strong sample basis, and is often simpler to calculate than the M.L.E.
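The "close to the M.L.E. for large n" behaviour is easiest to see in a conjugate case. A sketch using a Beta prior with a binomial likelihood (this particular prior/likelihood pair is our choice of illustration, not the lecture's; with conjugacy the posterior integral never has to be computed explicitly):

```python
def beta_binomial_posterior_mean(x, n, a=2.0, b=2.0):
    """Conjugate Bayesian update: Beta(a, b) prior, binomial likelihood
    with x successes in n trials.

    The posterior is Beta(a + x, b + n - x); its mean is a weighted blend
    of the prior mean a/(a+b) and the MLE x/n, so it approaches the MLE
    as n grows."""
    post_a, post_b = a + x, b + n - x
    return post_a / (post_a + post_b)  # posterior mean estimate of p

print(beta_binomial_posterior_mean(x=7, n=10))      # small n: pulled toward prior mean 0.5
print(beta_binomial_posterior_mean(x=700, n=1000))  # large n: close to MLE 0.7
```

With n = 10 the prior still matters; with n = 1000 and the same success fraction, the posterior mean is essentially the MLE, matching the slide's claim.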

20 Estimator Comparison in brief.
Classical: uses objective probabilities, intuitive estimators, additional assumptions for sampling distributions; good properties for some estimators.
Moments: less calculation, less efficient. Despite analytical solutions and low bias, not well-used for large-scale data because of less good asymptotic properties; even simple solutions may not be unique.
Bayesian: subjective prior knowledge plus sample information; close to the M.L.E. under certain conditions – see earlier.
LSE: if the assumptions are met, the β's are unbiased and their variances obtained via (XᵀX)⁻¹. Few assumptions on the response variable distributions – just expectations and the variance-covariance structure (unlike MLE, where we need to specify the joint probability distribution of the variables). Requires additional assumptions for sampling distributions; close to MLE if these are met. Computation easier.
