1 Sample size calculation Ioannis Karagiannis based on previous EPIET material

2 Objectives: sample size
To understand:
–why we estimate sample size
–the principles of sample size calculation
–the ingredients needed to estimate sample size

3 The idea of statistical inference
A sample is drawn from the population; conclusions based on the sample are generalised back to the population in order to test hypotheses.

4 Why bother with sample size?
A study is pointless if its power is too small, and a waste of resources if the sample is larger than needed.

5 Questions in sample size calculation
A national Salmonella outbreak has occurred, with several hundred cases. You plan a case-control study to identify whether consumption of food X is associated with infection. How many cases and controls should you recruit?

6 Questions in sample size calculation
An outbreak of 14 cases of a mysterious disease has occurred in cohort 2012. You suspect exposure to an activity is associated with illness and plan to undertake a cohort study under the kind auspices of the coordinators. With the available cases, how much power will you have to detect an RR of 1.5?

7 Issues in sample size estimation
Estimate the sample needed to measure the factor of interest
Trade-off between study size and available resources
Sample size is determined by various factors:
–significance level (α)
–power (1-β)
–expected prevalence of the factor of interest

8 Which variables should be included in the sample size calculation?
The sample size calculation should relate to the study's primary outcome variable. If the study has secondary outcome variables which are also considered important, the sample size should also be sufficient for the analyses of these variables.

9 Allowing for response rates and other losses to the sample
The sample size calculation should relate to the final, achieved sample, so the initial numbers need to be increased in accordance with:
–the expected response rate
–loss to follow-up
–lack of compliance
The link between the initial numbers approached and the final achieved sample size should be made explicit.
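
A small worked example may help; the numbers below are illustrative (not from the slides), showing how the achieved sample is inflated back to the number of people who must be approached.

```python
# Illustrative sketch: inflating the calculated sample size for expected losses.
# All figures are made-up examples, not values from the presentation.
achieved_needed = 200      # sample size required by the power calculation
response_rate = 0.80       # expected proportion who agree to take part
retention = 0.90           # expected proportion not lost to follow-up

to_approach = achieved_needed / (response_rate * retention)
print(round(to_approach))  # -> 278 people to approach initially
```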

10 Significance testing: null and alternative hypotheses
Null hypothesis (H0): there is no difference; any observed difference is due to chance.
Alternative hypothesis (H1): there is a true difference.

11 Examples of null hypotheses
Case-control study, H0: OR=1 (the odds of exposure among cases are the same as the odds of exposure among controls).
Cohort study, H0: RR=1 (the attack rate (AR) among the exposed is the same as the AR among the unexposed).

12 Significance level (p-value)
The probability of finding a difference (RR≠1, rejecting H0) when no difference exists: α, the type I error, usually set at 5%. The p-value is compared with the significance level to decide whether H0 can be rejected. NB: a hypothesis is never accepted.

13 Type II error and power
β is the type II error: the probability of not finding a difference when a difference really does exist.
Power is (1-β), usually set to 80%: the probability of finding a difference when a difference really does exist (= sensitivity).

14 Significance and power

Decision            H0 true (no difference)            H0 false (difference)
Cannot reject H0    Correct decision                   Type II error (β)
Reject H0           Type I error (α, significance)     Correct decision: power (1-β)
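
The table can be made concrete with a small simulation. The sketch below is my own illustration (with made-up risks, not figures from the slides): it draws repeated cohort samples and shows that the proportion of tests rejecting H0 is roughly α when H0 is true and roughly the power (1-β) when it is false.

```python
# Simulation sketch: type I error rate under H0 and power under H1.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

def rejection_rate(p_exposed, p_unexposed, n_per_group=100, alpha=0.05, sims=2000):
    """Proportion of simulated studies in which H0 is rejected at level alpha."""
    rejections = 0
    for _ in range(sims):
        ill_e = rng.binomial(n_per_group, p_exposed)
        ill_u = rng.binomial(n_per_group, p_unexposed)
        table = [[ill_e, n_per_group - ill_e], [ill_u, n_per_group - ill_u]]
        _, p, _, _ = chi2_contingency(table, correction=False)
        rejections += p < alpha
    return rejections / sims

print("H0 true :", rejection_rate(0.10, 0.10))  # close to alpha (type I error rate)
print("H0 false:", rejection_rate(0.20, 0.10))  # the empirical power (1 - beta)
```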

15 How to increase power
–increase the sample size
–increase the desired difference (effect size) to detect; NB: for an RR/OR this means moving it further away from 1!
–increase the desired significance level (α error)
–narrower confidence intervals

16 The effect of sample size
Consider 3 cohort studies looking at exposure to oysters, with N=10, 100 and 1000. In all 3 studies, 60% of the exposed are ill compared with 40% of the unexposed (RR = 1.5).

17 Table A (N=10)

Ate oysters    Became ill    Total    AR
Yes                 3            5    3/5
No                  2            5    2/5
Total               5           10    5/10

RR=1.5, 95% CI: , p=0.53

18 Table B (N=100)

Ate oysters    Became ill    Total    AR
Yes                30           50    30/50
No                 20           50    20/50
Total              50          100    50/100

RR=1.5, 95% CI: , p=0.046

19 Table C (N=1000)

Ate oysters    Became ill    Total    AR
Yes               300          500    300/500
No                200          500    200/500
Total             500         1000    500/1000

RR=1.5, 95% CI: , p<0.001
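
For reference, the figures in Tables A-C can be reproduced in a few lines; this is my own sketch (a Wald confidence interval on the log risk ratio and an uncorrected chi-square test, which gives p-values in line with those quoted), not necessarily the method used for the original slides.

```python
# Sketch: risk ratio, 95% CI and chi-square p-value for Tables A-C.
from math import exp, log, sqrt
from scipy.stats import chi2_contingency

def summarise(ill_exp, n_exp, ill_unexp, n_unexp, label):
    rr = (ill_exp / n_exp) / (ill_unexp / n_unexp)
    se_log_rr = sqrt(1/ill_exp - 1/n_exp + 1/ill_unexp - 1/n_unexp)
    lo, hi = exp(log(rr) - 1.96 * se_log_rr), exp(log(rr) + 1.96 * se_log_rr)
    table = [[ill_exp, n_exp - ill_exp], [ill_unexp, n_unexp - ill_unexp]]
    _, p, _, _ = chi2_contingency(table, correction=False)
    print(f"{label}: RR={rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}, p={p:.3g}")

summarise(3, 5, 2, 5, "Table A (N=10)")
summarise(30, 50, 20, 50, "Table B (N=100)")
summarise(300, 500, 200, 500, "Table C (N=1000)")
```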

20 Sample size and power
In Table A, with a sample of n=10, the association with oysters was not statistically significant. In Tables B and C, with larger samples, the same RR of 1.5 became statistically significant.

21 Cohort sample size: parameters to consider
–risk ratio worth detecting
–expected frequency of disease in the unexposed population
–ratio of unexposed to exposed
–desired level of significance (α)
–power of the study (1-β)

22 Cohort: Episheet power calculation
–risk of α error: 5%
–population exposed: 100
–expected frequency of disease in unexposed: 5%
–ratio of unexposed to exposed: 1:1
–RR to detect: 1.5
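
These ingredients map onto the usual normal approximation for comparing two proportions. The sketch below is my own and is not necessarily the exact algorithm Episheet implements, but it uses the same inputs as the slide.

```python
# Sketch: approximate power of a cohort study (two-sided normal approximation).
from math import sqrt
from scipy.stats import norm

def cohort_power(risk_unexposed, rr, n_exposed, n_unexposed, alpha=0.05):
    p1 = risk_unexposed * rr                  # expected risk among the exposed
    p0 = risk_unexposed
    p_bar = (n_exposed * p1 + n_unexposed * p0) / (n_exposed + n_unexposed)
    z_a = norm.ppf(1 - alpha / 2)
    num = abs(p1 - p0) - z_a * sqrt(p_bar * (1 - p_bar) * (1/n_exposed + 1/n_unexposed))
    den = sqrt(p1 * (1 - p1) / n_exposed + p0 * (1 - p0) / n_unexposed)
    return norm.cdf(num / den)

# Slide inputs: alpha 5%, 100 exposed, 1:1 ratio, 5% risk in unexposed, RR 1.5
print(cohort_power(0.05, 1.5, 100, 100))      # roughly 0.11: clearly underpowered
```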


25 Case-control sample size: parameters to consider
–number of cases
–number of controls per case
–odds ratio worth detecting
–% of exposed persons in the source population
–desired level of significance (α)
–power of the study (1-β)

26 Case-control: power calculation
–risk of α error: 5%
–number of cases: 200
–proportion of controls exposed: 5%
–OR to detect: 1.5
–controls per case: 1:1
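
Here the odds ratio and the exposure prevalence among controls are first converted into an expected exposure prevalence among cases; power then follows from the same normal approximation as in the cohort sketch above. Again this is illustrative, not the exact routine behind the slide.

```python
# Sketch: approximate power of a case-control study (two-sided test).
from math import sqrt
from scipy.stats import norm

def case_control_power(p_controls_exposed, odds_ratio, n_cases, n_controls, alpha=0.05):
    p0 = p_controls_exposed
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))   # expected exposure among cases
    p_bar = (n_cases * p1 + n_controls * p0) / (n_cases + n_controls)
    z_a = norm.ppf(1 - alpha / 2)
    num = abs(p1 - p0) - z_a * sqrt(p_bar * (1 - p_bar) * (1/n_cases + 1/n_controls))
    den = sqrt(p1 * (1 - p1) / n_cases + p0 * (1 - p0) / n_controls)
    return norm.cdf(num / den)

# Slide inputs: alpha 5%, 200 cases, 1 control per case, 5% of controls exposed, OR 1.5
print(case_control_power(0.05, 1.5, 200, 200))           # roughly 0.16: low power
```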


28 Statistical Power of a Case-Control Study for different control-to-case ratios and odds ratios (50 cases)
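
The pattern behind that figure can be sketched numerically: with the number of cases fixed, recruiting more controls per case increases power, but with diminishing returns. The odds ratio (3.0) and control exposure prevalence (20%) below are assumptions of mine purely for illustration; the slide does not state the values behind its curves.

```python
# Sketch: power of a case-control study with 50 cases and 1-4 controls per case.
# OR and control exposure prevalence are assumed for illustration only.
from math import sqrt
from scipy.stats import norm

n_cases, p0, odds_ratio, alpha = 50, 0.20, 3.0, 0.05
p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))       # expected exposure among cases
z_a = norm.ppf(1 - alpha / 2)

for ratio in (1, 2, 3, 4):
    n_controls = ratio * n_cases
    p_bar = (n_cases * p1 + n_controls * p0) / (n_cases + n_controls)
    num = abs(p1 - p0) - z_a * sqrt(p_bar * (1 - p_bar) * (1/n_cases + 1/n_controls))
    den = sqrt(p1 * (1 - p1) / n_cases + p0 * (1 - p0) / n_controls)
    print(f"{ratio}:1 controls per case -> power {norm.cdf(num / den):.2f}")
```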

29 Statistical Power of a Case-Control Study

30 Sample size for proportions: parameters to consider
–population size
–anticipated prevalence (p)
–α error
–design effect
Easy to calculate on openepi.com
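
The usual formula behind those inputs is n = z² p(1-p) / d², multiplied by the design effect and reduced by a finite population correction when the population is small. The sketch below is one common version of this calculation with illustrative inputs; the exact formula and rounding on openepi.com may differ slightly.

```python
# Sketch: sample size for estimating a proportion with precision d (half-width
# of the CI), a design effect and a finite population correction.
# The inputs below are illustrative, not taken from the slides.
from math import ceil
from scipy.stats import norm

def sample_size_proportion(p, d, population, alpha=0.05, deff=1.0):
    z = norm.ppf(1 - alpha / 2)
    n_inf = deff * z**2 * p * (1 - p) / d**2          # infinite-population size
    n_fpc = n_inf / (1 + (n_inf - 1) / population)    # finite population correction
    return ceil(n_fpc)

print(sample_size_proportion(p=0.10, d=0.05, population=5000, deff=1.5))  # -> 200
```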

31 Conclusions
–Don't forget to undertake sample size/power calculations
–Use all sources of currently available data to inform your estimates
–Try several scenarios
–Adjust for non-response
–Keep it feasible

32 Acknowledgements Nick Andrews, Richard Pebody, Viviane Bremer

