1 Section IV: Sampling distributions, confidence intervals, hypothesis testing, and p values

2 Population and sample
We wish to make inferences (generalizations) about an entire target population (i.e., generalize to "everyone") even though we study only one sample (have only one study).
Population parameters = summary values for the entire population (e.g., μ, σ, ρ, β)
Sample statistics = summary values for a sample (e.g., Ȳ, S, r, b)

3 Samples drawn from a population
(diagram: a sample drawn from the population) The sample is drawn "at random": everyone in the target population is eligible for sampling.

4 True population distribution of Y (individuals), not Gaussian. Mean of Y = μ = 2.5, SD = σ = 1.12

5 Possible samples & statistics from the population (true mean = 2.5)

sample (n=4)   mean (statistic)
1,1,1,1        1.00
…
2,2,4,3        2.75
…
4,4,4,4        4.00

6 Distribution of the sample means (Ȳs): the sampling distribution, in which each observation is a SAMPLE statistic. Mean of Ȳ = 2.5, SEM = 0.56, n = 4.
SEM = SD/√n (the "square root n law")

7 Central Limit Theorem
For a large enough n, the distribution of any sample statistic (mean, mean difference, OR, RR, hazard, correlation coeff, regr coeff, proportion, …) from sample to sample has a Gaussian ("Normal") distribution centered at the true population value. The standard error is proportional to 1/√n. (Rule of thumb: n > 30 is usually enough; non-parametric methods may be needed for small n.)
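The square root n law on slides 4-6 can be checked with a short simulation. The population here is assumed to be uniform on {1, 2, 3, 4}, which matches slide 4's mean of 2.5 and SD of about 1.12; the slides do not state the distribution explicitly, so this is an assumption.

```python
import random
import statistics

random.seed(1)

# Assumed population (slide 4): uniform on {1, 2, 3, 4},
# with mean 2.5 and SD sqrt(15/12) ~ 1.118, clearly not Gaussian.
N_SAMPLES = 20_000
n = 4  # sample size, as on slide 5

# Draw many samples of size n and keep each sample's mean.
sample_means = [
    statistics.mean(random.randint(1, 4) for _ in range(n))
    for _ in range(N_SAMPLES)
]

# The sampling distribution is centered near mu = 2.5, and its SD
# (the SEM) is near sigma/sqrt(n) = 1.118/2 ~ 0.56, as on slide 6.
print(round(statistics.mean(sample_means), 2))
print(round(statistics.stdev(sample_means), 2))
```
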

8 (figure-only slide; no text to transcribe)

9 Funnel plot: the true difference is δ = 5. Each point is one study (meta-analysis).

10 Resampling estimation ("bootstrap")
One does not repeatedly sample from the same population (the study is carried out only once). But repeated sampling from the population can be simulated by repeatedly sampling from the sample with replacement and computing the statistic from each resample, creating an "estimated" sampling distribution. The SD of the statistic across all resamples is an estimate of the standard error (SE) of the statistic.

11 Samples drawn from the original sample
(diagram: the original sample is drawn from the population; bootstrap resamples are drawn from the original sample) Each resample is drawn "at random" with replacement: everyone in the original sample is eligible for resampling.
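A minimal sketch of the bootstrap idea from slides 10-11, using only the standard library; the sample values below are made up for illustration:

```python
import random
import statistics

random.seed(2)

# One observed sample (hypothetical values on the slides' 1-4 scale).
sample = [2, 2, 4, 3, 1, 3, 2, 4, 1, 2]
B = 5000  # number of bootstrap resamples

boot_means = []
for _ in range(B):
    # Resample WITH replacement from the original sample.
    resample = random.choices(sample, k=len(sample))
    boot_means.append(statistics.mean(resample))

# SD of the statistic across resamples estimates the SE of the mean;
# compare with the analytic formula S/sqrt(n).
boot_se = statistics.stdev(boot_means)
analytic_se = statistics.stdev(sample) / len(sample) ** 0.5

print(round(boot_se, 3), round(analytic_se, 3))
```

The two numbers agree closely; the small gap comes from the bootstrap using the sample itself in place of the population.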

12 Confidence interval (for μ)
We do not know μ from a sample. For a sample mean Ȳ and standard error SE, a confidence interval for the population mean μ is formed by (Ȳ − Z·SE, Ȳ + Z·SE); the sample statistic is in the middle. For a 95% confidence interval we use Z = 1.96 (why?) and compute Ȳ − 1.96 SE (lower bound), Ȳ + 1.96 SE (upper bound).

13 Confidence intervals (CI) and the sampling distribution of Ȳ
The middle 95% of the sampling distribution runs from −1.96(σ/√n) to +1.96(σ/√n) around μ.
95% CI: Ȳ ± 1.96(σ/√n)

14 95% Confidence intervals
95% of the intervals will contain the true population value. But which ones?

15 Z vs t (technical note)
Confidence intervals made with Z assume that the population σ is known. Since σ is usually not known and is estimated with the sample SD, the Gaussian table areas need to be adjusted. The adjusted tables are called "t" tables instead of Gaussian tables (the t distribution). For n > 30, they are about the same.

16 Z distribution vs t distribution, about the same for n > 30

17 t vs Gaussian Z percentiles

Percentile:   85th    90th    95th    97.5th   99.5th
Confidence:   70%     80%     90%     95%      99%
t, n=5:       1.156   1.476   2.015   2.571    4.032
t, n=10:      1.093   1.372   1.812   2.228    3.169
t, n=20:      1.064   1.325   1.725   2.086    2.845
t, n=30:      1.055   1.310   1.697   2.042    2.750
Gaussian:     1.036   1.282   1.645   1.960    2.576

What did the z distribution say to the t distribution? "You may look like me, but you're not normal."

18 Confidence intervals
Sample statistic ± Z_tabled × SE (using known variance)
Sample statistic ± t_tabled × SE (using estimated variance)
Example: CI for the difference between two means:
(Ȳ1 − Ȳ2) ± t_tabled (SE_d)
The tabled t uses degrees of freedom df = n1 + n2 − 2.

19 CI for a proportion: the "law" of small numbers
n = 10, proportion = 3/10 = 30%. What do you think are the 95% confidence bounds? Is it likely that the "real" proportion is more than 50%?

20 CI for a proportion: the "law" of small numbers
n = 10, proportion = 3/10 = 30%. What do you think are the 95% confidence bounds? Is it likely that the "real" proportion is more than 50%?
Answer: 95% CI: 6.7% to 65.3%
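The slide's bounds can be reproduced (up to rounding) with an exact Clopper-Pearson interval, computed here by bisecting the binomial CDF; this is one common exact method, and the slides do not say which method was actually used:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def solve_p(f, target):
    """Bisect for the p in (0, 1) where f(p) = target; f must be decreasing in p."""
    lo, hi = 1e-9, 1 - 1e-9
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x, n = 3, 10  # 3 "successes" out of 10

# Lower bound: P(X >= x | p) = 0.025, i.e. P(X <= x-1 | p) = 0.975
lower = solve_p(lambda p: binom_cdf(x - 1, n, p), 0.975)
# Upper bound: P(X <= x | p) = 0.025
upper = solve_p(lambda p: binom_cdf(x, n, p), 0.025)

print(round(lower, 3), round(upper, 3))  # -> 0.067 0.652
```

Note how wide the interval is at n = 10: a 30% observed proportion is still compatible with a true proportion above 50%.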

21 Standard error for the difference between two means
Ȳ1 has mean μ1 and SE = √(σ1²/n1) = SE1
Ȳ2 has mean μ2 and SE = √(σ2²/n2) = SE2
For the difference between two means (δ = μ1 − μ2):
SE_δ = √(σ1²/n1 + σ2²/n2)
SE_d = √(SE1² + SE2²)

22 Statistics for HbA1c change from baseline to 26 weeks (Pratley et al, Lancet 2010)

Tx            n     Mean    SD     SE
Liraglutide   225   -1.24   0.99   0.066
Sitagliptin   219   -0.90   0.98   0.066

Mean difference = d = 0.34%
Std error of the mean difference = SE_d = √(0.066² + 0.066²) = 0.093%
Using t{df=442} = 1.97 for the 95% confidence interval:
CI: 0.34% ± 1.97 (0.093%) or (0.16%, 0.52%)
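A quick check of slide 22's arithmetic, with all numbers taken from the slide:

```python
from math import sqrt

# Summary numbers from slide 22 (Pratley et al, Lancet 2010):
se1 = se2 = 0.066   # standard errors of the two group means
d = 0.34            # mean difference in HbA1c change (%)
t_crit = 1.97       # tabled t for df = 442 (from the slide)

# SE of the difference combines the two SEs in quadrature.
se_d = sqrt(se1**2 + se2**2)
lower = d - t_crit * se_d
upper = d + t_crit * se_d

print(round(se_d, 3))                    # -> 0.093
print(round(lower, 2), round(upper, 2))  # -> 0.16 0.52
```
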

23 Null hypothesis & p values
Null hypothesis: assume that, in the population, the two treatments give the same average improvement in HbA1c, so the average difference is δ = 0. Under this assumption, how likely is it to observe a sample mean difference of d = 0.34% (or more extreme) in any study? This probability is called the (one-sided) p value. The p value is defined only for a given null hypothesis.

24 Hypothesis testing for a mean difference
d = sample mean difference in HbA1c change = 0.34%, SE_d = 0.093%
95% CI for the true mean difference = (0.16%, 0.52%)
But, under the null hypothesis, the true mean difference (δ) should be zero. How "far" is the observed 0.34% mean difference from zero (in SE units)?
t_obs = (mean difference − hypothesized difference) / SE_d
t_obs = (0.34 − 0) / 0.093 = 3.82 SEs
p value: probability of observing t = 3.82 or larger if the null hypothesis is true.
p value = 0.00008 (one-sided t with df = 442)
p value = 0.00016 (two-sided)

25 Hypothesis test statistics
Z_obs = (sample statistic − null value) / standard error
Here Z (or t) = 3.82

26 (figure-only slide; no text to transcribe)

27 Difference & non-inferiority (equivalence) hypothesis testing
Difference testing: Null hyp: A = B (or A − B = 0); alternative: A ≠ B. Z_obs = (observed stat − 0) / SE
Non-inferiority (within δ) testing: Null hyp: A > B + δ; alternative: A ≤ B + δ. Z_eq = (observed stat − δ) / SE
Must specify δ for non-inferiority testing.

28 Non-inferiority testing: HbA1c data
For the HbA1c data, assume we declare non-inferiority if the true mean difference is δ = 0.40% or less. The observed mean difference is d = 0.34%, which is smaller than 0.40%. However, the null hypothesis is that the true difference is 0.40% or more, versus the alternative that it is 0.40% or less.
So Z_eq = (0.34 − 0.40)/0.093 = −0.643, p = 0.260 (one-sided)
We cannot reject the "null hyp" that the true δ is larger than 0.40%. Our 95% confidence interval of (0.16%, 0.52%) also does NOT exclude 0.40%, even though it excludes zero.
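Slide 28's non-inferiority computation, reproduced with the standard library (using the unrounded SE ≈ 0.0933 from the two 0.066 standard errors):

```python
from math import sqrt
from statistics import NormalDist

d = 0.34      # observed mean difference (%)
delta = 0.40  # non-inferiority margin (%)
se_d = sqrt(0.066**2 + 0.066**2)  # ~0.0933

# Test statistic is the distance from the margin, not from zero.
z_eq = (d - delta) / se_d
# One-sided p: area below z_eq under the standard normal.
p_one_sided = NormalDist().cdf(z_eq)

print(round(z_eq, 3), round(p_one_sided, 3))  # -> -0.643 0.26
```

The same data give p ≈ 0.0002 against δ = 0 but p ≈ 0.26 against δ = 0.40: which hypothesis is tested matters.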

29 Confidence intervals versus hypothesis testing
(figure: 95% confidence intervals for eight studies plotted against −D, 0, and +D on the true-difference axis; equivalence is demonstrated only when the interval lies entirely between −D and +D. Statistically significant: studies 1, 2, 3, 5, 6, 7 yes; studies 4 and 8 no.)
Ref: Statistics Applied to Clinical Trials, Cleophas, Zwinderman, Cleophas, 2000, Kluwer Academic Pub, page 35

30 Non-inferiority (JAMA 2006, Piaggio et al, pp. 1152-1160)

31 Paired mean comparisons
Serum cholesterol in mmol/L; difference between baseline and end of 4 weeks.

Subject   chol (baseline)   chol (4 wks)   difference (d_i)
1         9.0               6.5            2.5
2         7.1               6.3            0.8
3         6.9               5.9            1.0
4         6.9               4.9            2.0
5         5.9               4.0            1.9
6         5.4               4.9            0.5
mean      6.87              5.42           1.45
SD        1.24              0.97           0.79
SE        0.51              0.40           0.32

Difference (baseline − 4 weeks) = amount lowered: d = 1.45 mmol/L, SD = 0.79 mmol/L
SE_d = 0.79/√6 = 0.323 mmol/L, df = 6 − 1 = 5, t_0.975 = 2.571
95% CI: 1.45 ± 2.571 (0.323) = 1.45 ± 0.830, or (0.62 mmol/L, 2.28 mmol/L)
t_obs = 1.45 / 0.323 = 4.49, p < 0.01
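Slide 31's paired analysis can be reproduced from the six differences alone:

```python
from math import sqrt
import statistics

# Differences (baseline - 4 weeks) from slide 31, in mmol/L
d = [2.5, 0.8, 1.0, 2.0, 1.9, 0.5]
n = len(d)

mean_d = statistics.mean(d)   # amount lowered
sd_d = statistics.stdev(d)    # SD of the differences
se_d = sd_d / sqrt(n)         # SE of the mean difference
t_crit = 2.571                # t_0.975 with df = n - 1 = 5 (from the slide)

# 95% CI and the paired t statistic against a null difference of zero.
ci = (mean_d - t_crit * se_d, mean_d + t_crit * se_d)
t_obs = mean_d / se_d

print(round(mean_d, 2), round(sd_d, 2), round(se_d, 3))  # -> 1.45 0.79 0.323
print(round(ci[0], 2), round(ci[1], 2), round(t_obs, 2))  # -> 0.62 2.28 4.49
```
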

32 Confidence intervals & hypothesis tests
Confidence intervals are of the form: sample statistic ± (Z percentile*)(standard error)
Lower bound = sample statistic − (Z percentile)(standard error)
Upper bound = sample statistic + (Z percentile)(standard error)
Hypothesis test statistics (Z_obs*) are of the form: Z_obs = (sample statistic − null value) / standard error
* t percentile or t_obs for continuous data when n is small

33 Sample statistics and their SEs

Sample statistic         Symbol          Standard error (SE)
Mean                     Ȳ               S/√n = √(S²/n) = SEM
Mean difference          Ȳ1 − Ȳ2 = d     √(S1²/n1 + S2²/n2) = SE_d
Proportion               P               √(P(1−P)/n)
Proportion difference    P1 − P2         √(P1(1−P1)/n1 + P2(1−P2)/n2)
Log odds ratio*          log_e OR        √(1/a + 1/b + 1/c + 1/d)
Log risk ratio*          log_e RR        √(1/a − 1/(a+c) + 1/b − 1/(b+d))
Slope (rate)             b               S_error / (S_x √(n−1))
Hazard rate (survival)   h               h/√(number dead)
Transform (z) of the correlation coefficient r*:
                         z = ½ log_e[(1+r)/(1−r)]    SE(z) = 1/√(n−3); r = (e^2z − 1)/(e^2z + 1)

* Form CI bounds on the transformed scale, then take the anti-transform.
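As an illustration of the table's footnote (form the CI on the transformed scale, then anti-transform), here is the log odds ratio SE applied to a hypothetical 2×2 table; the counts a, b, c, d below are invented for illustration:

```python
from math import sqrt, log, exp

# Hypothetical 2x2 table (made-up counts):
#             outcome   no outcome
# exposed       a=20        b=80
# unexposed     c=10        d=90
a, b, c, d = 20, 80, 10, 90

log_or = log((a * d) / (b * c))           # log odds ratio
se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)   # SE formula from the table above

# Build the 95% CI on the log scale, then exponentiate back.
lo = exp(log_or - 1.96 * se_log_or)
hi = exp(log_or + 1.96 * se_log_or)

print(round(exp(log_or), 2), round(lo, 2), round(hi, 2))  # -> 2.25 0.99 5.09
```

Working on the log scale keeps the interval inside the valid range and makes it asymmetric around the OR, as it should be.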

34 Handy guide to testing

Sample statistic & comparison                      Population null hypothesis
Comparing two means                                True population mean difference is zero
Comparing two proportions                          True population difference is zero
Comparing two medians                              True population median difference is zero
Odds ratio (comparing odds)                        True population odds ratio is one
Risk ratio = relative risk (comparing risks)       True population risk ratio is one
Correlation coefficient (compare to zero)          True population correlation coefficient is zero
Slope = rate of change = regression coefficient    True population slope is zero
Comparing two survival curves                      True difference in survival is zero at all times

35 Nomenclature for testing
Delta (δ) = true difference or size of effect
Alpha (α) = Type I error = false positive = probability of rejecting the null hypothesis when it is true (usually α is set to 0.05)
Beta (β) = Type II error = false negative = probability of not rejecting the null hypothesis when delta is not zero (there is a real difference in the population)
Power = 1 − β = probability of getting a p value less than α (i.e., declaring statistical significance) when, in fact, there really is a non-zero delta.
We want small alpha levels and high power.

36 Statistical hypothesis testing

Statistic / type of comparison     Test / analysis procedure
Mean comparison, unpaired          t test (2 groups), ANOVA (3+ groups)
Mean comparison, paired            paired t test, repeated measures ANOVA
Median comparison, unpaired        Wilcoxon rank sum test, Kruskal-Wallis test*
Median comparison, paired          Wilcoxon signed rank test on differences*
Proportion comparison, unpaired    chi-square test (or Fisher's test)
Proportion comparison, paired      McNemar's chi-square test
Odds ratio, risk ratio             chi-square test, Fisher test
Correlation, slope                 regression, t statistic
Survival curves, hazard rates      log rank test*

ANOVA = analysis of variance
* non-parametric: Gaussian distribution theory is not used to get the p value

37 Parametric vs non-parametric
Non-parametric tests compute p values using the ranks of the data and do not assume the statistics follow a Gaussian distribution, particularly in the distribution "tails".

Parametric                        Non-parametric
t test (2 indep means)            Wilcoxon rank sum test = Mann-Whitney (2 indep medians)
ANOVA F test (3+ indep means)     Kruskal-Wallis test (3+ indep medians)
paired t test (paired means)      Wilcoxon signed rank test (paired medians)
Pearson correlation               Spearman correlation

