
1 Session 4: Analysis and reporting
– Managing missing data – Rob Coe (CEM, Durham)
– Developing a statistical analysis plan – Hannah Buckley (York Trials Unit)
– Panel on EEF reporting and data archiving – Jonathan Sharples, Camilla Nevill, Steve Higgins and Andrew Bibby

2 Managing missing data Rob Coe EEF Evaluators Conference, York, 2 June 2014

3 The problem
Only if everyone responds to everything is it still a randomised trial
– Any non-response (post-randomisation) → not an RCT
It may not matter (much) if
– Response propensity is unrelated to outcome
– Non-response is low
Lack of 'middle ground' solutions
– Mostly, people either ignore the problem or use very complex statistics

4 What problem are we trying to solve?
We want to estimate the distribution of likely effects of [an intervention] in [a population]
– Typically represented by an effect size and confidence interval (CI) (see the sketch below)
Missing data may introduce bias and uncertainty
– The point estimate of the effect size differs from the observed one
– The probability distribution for the effect size (its CI) widens
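To make "an effect size and CI" concrete, here is a minimal Python sketch (an illustration added here, not part of the slides) that computes Hedges' g and an approximate 95% confidence interval for two independent groups using simulated data.

```python
# Minimal sketch: Hedges' g with an approximate 95% CI (normal approximation).
import numpy as np

def hedges_g_ci(treat, control):
    """Standardised mean difference (Hedges' g) with an approximate 95% CI."""
    t, c = np.asarray(treat, dtype=float), np.asarray(control, dtype=float)
    n1, n2 = len(t), len(c)
    pooled_sd = np.sqrt(((n1 - 1) * t.var(ddof=1) + (n2 - 1) * c.var(ddof=1)) / (n1 + n2 - 2))
    d = (t.mean() - c.mean()) / pooled_sd          # Cohen's d
    g = d * (1 - 3 / (4 * (n1 + n2) - 9))          # small-sample correction
    se = np.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

rng = np.random.default_rng(0)
g, (lo, hi) = hedges_g_ci(rng.normal(0.2, 1, 120), rng.normal(0.0, 1, 115))
print(f"g = {g:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```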

5 What kinds of analysis are feasible to reduce the risk of bias from missing data?

6 Vocabulary
Missing Completely at Random (MCAR)
– Response propensity is unrelated to outcome
– Handling: ignore the missingness
Missing at Random (MAR)
– Missing responses can be perfectly predicted from observed data
– Handling: statistical methods such as inverse probability weighting (IPW) and multiple imputation (MI)
Missing Not at Random (MNAR)
– We can't be sure that either of the above applies
– Handling: ??
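The distinction matters in practice. The sketch below (simulated data, added as an illustration) shows that a naive complete-case mean is roughly unbiased under MCAR but biased under MAR, where the probability of a missing outcome depends on an observed pretest that also predicts the outcome.

```python
# Sketch: complete-case estimates under MCAR vs MAR missingness (simulated data).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
pretest = rng.normal(0, 1, n)
outcome = 0.6 * pretest + rng.normal(0, 1, n)        # outcome depends on pretest

mcar_missing = rng.random(n) < 0.3                   # missingness unrelated to anything
p_miss = 1 / (1 + np.exp(-(-1 + 1.5 * pretest)))     # missingness depends on observed pretest
mar_missing = rng.random(n) < p_miss

print("true mean outcome        :", round(outcome.mean(), 3))
print("complete-case mean (MCAR):", round(outcome[~mcar_missing].mean(), 3))  # ~unbiased
print("complete-case mean (MAR) :", round(outcome[~mar_missing].mean(), 3))   # biased
```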

7 “When data are missing not at random, no method of obtaining unbiased estimates exists that does not incorporate the mechanism of non-random missingness, which is nearly always unknown. Some evidence, however, shows that the use of a method that is valid under missing at random can provide some reduction in bias.” – Bell et al., BMJ 2013

8 Recommendations
1. A plan for dealing with missing data should be in the protocol before the trial starts
2. Where attrition is likely, use randomly allocated differential effort to obtain outcomes
3. The report should clearly state the proportion of outcomes lost to follow-up in each arm
4. The report should explore (with evidence) the reasons for missing data
5. Conduct simple sensitivity analyses for the strength of the relationship between
– Outcome score and missingness
– Treatment/outcome interaction and missingness
(a simple delta-shift example is sketched below)
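One simple way to act on recommendation 5, sketched here on simulated data (an illustration of the general idea, not a prescribed EEF procedure): impute each arm's missing outcomes at the observed arm mean shifted by a sensitivity parameter delta, and report how the estimated effect moves as delta varies.

```python
# Sketch: delta-shift sensitivity analysis for the outcome-missingness relationship.
import numpy as np

rng = np.random.default_rng(2)
n = 200
group = rng.integers(0, 2, n)                                  # 0 = control, 1 = intervention
y = 0.3 * group + rng.normal(0, 1, n)
missing = rng.random(n) < np.where(group == 1, 0.10, 0.25)     # differential attrition
y_obs = np.where(missing, np.nan, y)

for delta in (-0.5, -0.25, 0.0, 0.25, 0.5):
    y_filled = y_obs.copy()
    for g in (0, 1):
        arm_missing = (group == g) & missing
        y_filled[arm_missing] = np.nanmean(y_obs[group == g]) + delta  # shifted imputation
    effect = y_filled[group == 1].mean() - y_filled[group == 0].mean()
    print(f"delta = {delta:+.2f} -> estimated effect {effect:.2f}")
```

If the conclusion survives plausible values of delta, the result is robust to this form of missingness.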

9 If attrition is not low (>5%?)
6. Model outcome response propensity from observed variables
7. Conduct MAR analyses (sketched below)
– Inverse probability weighting
– Multiple imputation
8. Explicitly evaluate the plausibility of the MAR assumptions (with evidence)
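Steps 6 and 7 might look like the following sketch on simulated data (variable names such as pretest and fsm are illustrative assumptions, and this is one possible implementation rather than EEF code): a logistic response-propensity model, then an inverse-probability-weighted (IPW) mean of the observed outcomes. Multiple imputation, e.g. via statsmodels' MICE or scikit-learn's IterativeImputer, is the other MAR-based route.

```python
# Sketch: response-propensity model (step 6) and IPW estimate (step 7) on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1_000
df = pd.DataFrame({
    "pretest": rng.normal(0, 1, n),
    "fsm": rng.integers(0, 2, n),                      # e.g. a free-school-meals flag
})
df["outcome"] = 0.5 * df["pretest"] + rng.normal(0, 1, n)
p_respond = 1 / (1 + np.exp(-(1.0 + 0.8 * df["pretest"] - 0.5 * df["fsm"])))
df["responded"] = (rng.random(n) < p_respond).astype(int)
df.loc[df["responded"] == 0, "outcome"] = np.nan       # outcomes lost to follow-up

# Step 6: model outcome response propensity from observed baseline variables.
propensity = smf.logit("responded ~ pretest + fsm", data=df).fit(disp=0)
df["p_hat"] = propensity.predict(df)

# Step 7 (IPW): responders are weighted by 1 / p_hat to stand in for non-responders.
responders = df[df["responded"] == 1]
ipw_mean = np.average(responders["outcome"], weights=1 / responders["p_hat"])
print("complete-case mean:", round(responders["outcome"].mean(), 3))
print("IPW-adjusted mean :", round(float(ipw_mean), 3))
```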

10 [image slide – no text transcript]

11 [image slide – no text transcript]

12 Useful references
– Bell, M. L., Kenward, M. G., Fairclough, D. L., & Horton, N. J. (2013). Differential dropout and bias in randomised controlled trials: when it matters and when it may not. BMJ, 346: e8668. http://www.bmj.com/content/346/bmj.e8668
– Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549-576.
– National Research Council (2010). The Prevention and Treatment of Missing Data in Clinical Trials. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=12955
– Shadish, W. R., Hu, X., Glaser, R. R., Kownacki, R., & Wong, S. (1998). A method for exploring the effects of attrition in randomized experiments with dichotomous outcomes. Psychological Methods, 3(1), 3.
– www.missingdata.org.uk

13 Developing a statistical analysis plan (SAP) Hannah Buckley York Trials Unit hannah.buckley@york.ac.uk June 2014

14 Overview
– What is a SAP?
– When is a SAP developed?
– Why is a SAP needed?
– What should be included in a SAP?

15 What is a SAP?
– Pre-specifies analyses
– Expands on the analysis section of a protocol
– Provides technical information

16 When is a SAP developed?
– After the protocol is finalised
– Before final data are received
– Written in the future tense

17 Why create a SAP?
– Pre-specify analyses
– Think through potential pitfalls
– Benefit to other analysts

18 ACTIVITY – What should be in a SAP?
– What do you think should be covered in a SAP?
– Sort the cards into two piles

19 ACTIVITY DISCUSSION – What should be in a SAP?
– Which topics do you think do not need to be covered in a SAP?
– Are there any topics which you were unsure about?

20 ACTIVITY
1. Which of the cards cover key background information and which are related to analysis?
2. Which order would you deal with the topics in?

21 The structure of a SAP – Setting the scene
– Restate study objectives
– Study design
– Sample size (illustrative calculation below)
– Randomisation methods
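When the SAP restates the sample size, stating the assumptions behind it helps. A minimal illustration with assumed numbers (and ignoring the clustering adjustment most school-randomised trials would also need):

```python
# Sketch: sample size per arm to detect an effect size of 0.2 with 80% power at alpha = 0.05.
# Assumed values for illustration; a cluster-randomised design needs a design-effect inflation.
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"approximately {n_per_arm:.0f} pupils per arm, before attrition or clustering")
```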

22 The structure of a SAP – Description of outcomes
– Primary outcome
– Secondary outcome(s)
– When outcomes will be measured
– Why outcomes were chosen

23 The structure of a SAP – Analysis overview
– Analysis set (ITT)
– Software package
– Significance levels
– Blanket statements on confidence intervals, effect sizes or similar
– Methods for handling missing data

24 The structure of a SAP – Analysis methods
– Baseline data
– Primary analysis (sketch below)
– Secondary analyses
– Subgroup analyses
– Sensitivity analyses
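As an illustration of how a primary analysis might be pre-specified (an assumed ANCOVA-style model fitted to simulated data, not the unit's actual template): posttest score regressed on allocation, adjusting for the baseline pretest, with the adjusted group difference and its 95% CI reported.

```python
# Sketch: a pre-specified primary analysis as ANCOVA (posttest ~ group + pretest).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({"group": rng.integers(0, 2, n), "pretest": rng.normal(0, 1, n)})
df["posttest"] = 0.25 * df["group"] + 0.6 * df["pretest"] + rng.normal(0, 1, n)

primary = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
estimate = primary.params["C(group)[T.1]"]
ci_low, ci_high = primary.conf_int().loc["C(group)[T.1]"]
print(f"adjusted group difference: {estimate:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```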

25 Conclusions
– Producing a SAP is good practice
– Can help avoid problems in analysis
– Finalised before final data are received
– Fairly detailed
– Flexible, but should cover key points

26 References and resources
References
– ICH E9 'Statistical principles for clinical trials' http://www.ich.org/products/guidelines/efficacy/article/efficacy-guidelines.html
Resources
– PSI 'Guidelines for standard operating procedures for good statistical practice in clinical research' www.psiweb.org/docs/gsop.pdf

27 Thank you! Any questions or discussion points?

28 EEF reporting and data archiving
– Jonathan Sharples (EEF)
– Camilla Nevill (EEF)
– Steve Higgins (Durham) – Chair
– Andrew Bibby (FFT)

29 The reporting process and publication of results on EEF’s website Jonathan Sharples (EEF)

30 Classifying the security of findings from EEF evaluations Camilla Nevill (EEF) www.educationendowmentfoundation.org.uk/evaluation

31 Example Appendix: Chatterbooks

32 Combining the results of evaluations with the meta-analysis in the Teaching and Learning Toolkit Steve Higgins (Durham)

33 Archiving EEF project data – Andrew Bibby (FFT)

34 [image slide – no text transcript]

35 [image slide – no text transcript]

36 [image slide – no text transcript]

37 Prior to archiving…
1. Include permission for linking and archiving in consent forms
2. Retain pupil identifiers
3. Label values and variables (see the sketch below)
4. Save syntax or do-files
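Step 3 could look something like the sketch below in pandas (file and variable names are illustrative, and this is one possible approach rather than an EEF requirement): categorical columns are exported to Stata with value labels, and variable_labels attaches descriptions; saving the script itself covers step 4.

```python
# Sketch: writing a labelled Stata file for archiving with pandas.
import pandas as pd

df = pd.DataFrame({
    "pupil_id": ["A001", "A002"],                          # retained identifier for linking
    "group": pd.Categorical(["control", "intervention"]),  # categories become value labels
    "ks2_maths": [98, 104],
})
df.to_stata(
    "archive_dataset.dta",
    write_index=False,
    variable_labels={
        "pupil_id": "Anonymised pupil identifier",
        "group": "Randomised allocation",
        "ks2_maths": "Key Stage 2 maths score",
    },
)
```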

