Funded through the ESRC’s Researcher Development Initiative
Session 3.1: Revision of Day 2 (Meta-analysis)
Prof. Herb Marsh, Ms. Alison O’Mara, Dr. Lars-Erik Malmberg
Department of Education, University of Oxford

Modelling meta-analytic data

Questions
- Based on the meta20 data used in the practical, how similar are the results of the fixed, random, and multilevel models?
- Which model seems the most appropriate for these data, and why?

The formulae for multilevel models can be simplified to those of the fixed- and random-effects models:
- If the between-study variance = 0, the multilevel model simplifies to the fixed-effects regression model
- If no predictors are included, the model simplifies to the random-effects model
- If the level 2 variance = 0, the model simplifies to the fixed-effects model
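This reduction can be checked numerically: setting the between-study variance (τ²) to zero turns the random-effects inverse-variance weights into the fixed-effect weights. A minimal sketch, using the DerSimonian-Laird τ² estimator and effect sizes invented for illustration (not the meta20 data from the practical):

```python
# Numerical check that the random-effects model reduces to the
# fixed-effect model when the between-study variance is zero.
# Effect sizes and variances are invented for illustration; this is
# NOT the meta20 dataset from the practical.

def pool(effects, variances, tau2=0.0):
    """Inverse-variance weighted mean; tau2 > 0 gives random effects."""
    weights = [1.0 / (v + tau2) for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimate of the between-study variance."""
    w = [1.0 / v for v in variances]
    d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (len(effects) - 1)) / c)

effects = [0.0, 0.2, 0.6, 1.0]         # invented effect sizes
variances = [0.005, 0.01, 0.04, 0.09]  # their sampling variances

tau2 = dersimonian_laird_tau2(effects, variances)
print("fixed effect:  ", round(pool(effects, variances), 3))
print("random effects:", round(pool(effects, variances, tau2), 3))

# With tau2 = 0 the random-effects estimate is exactly the fixed one.
assert pool(effects, variances, 0.0) == pool(effects, variances)
```

For these invented, heterogeneous values the two pooled estimates differ noticeably; the final assertion shows the collapse when τ² = 0.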

Study quality

Questions
- Why is it important to consider study quality? Does it make a difference?
- What are the approaches to evaluating study quality?
- Would you consider excluding “low quality” studies?

Study quality: Does it make a difference? (2 June 2008)
- Meta-analyses should always include subjective and/or objective indicators of study quality.
- In the social sciences, there is some evidence that studies with highly inadequate control for pre-existing differences lead to inflated effect sizes. However, it is surprising that other indicators of study quality make so little difference.
- In medical research, studies are largely limited to RCTs, where there is much more control than in social science research. Here, there is evidence that inadequate concealment of assignment and lack of double-blinding inflate effect sizes, but perhaps only for subjective outcomes.
- These issues are likely to be idiosyncratic to individual discipline areas and research questions.

Evaluation of study quality
- Sometimes this is a global, holistic (subjective) rating. In this case it is important to have multiple raters to establish inter-rater agreement.
- Sometimes study quality is quantified against objective criteria of a good study, e.g. larger sample sizes, more representative samples, better measures, use of random assignment, appropriate control for potential bias, double blinding, and low attrition rates (particularly for longitudinal studies).
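Inter-rater agreement on such holistic ratings is often summarised with Cohen’s kappa, which corrects raw percent agreement for chance. A minimal sketch for two raters, with ratings invented for illustration:

```python
# Cohen's kappa for agreement between two raters' holistic quality
# ratings. The ratings below are invented for illustration.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """kappa = (p_o - p_e) / (1 - p_e), i.e. chance-corrected agreement."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_exp = sum(c1[cat] * c2[cat] for cat in set(rater1) | set(rater2)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

r1 = ["high", "high", "low", "medium", "low", "high"]
r2 = ["high", "medium", "low", "medium", "low", "high"]
print(round(cohens_kappa(r1, r2), 3))  # 0.75 for these ratings
```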

Evaluation of study quality (continued)
- Requires designing the coding materials to include adequate questions about study design and reporting
- May require additional analyses:
  - Quality weighting (Rosenthal, 1991)
  - Use of the kappa statistic in determining the validity of quality filtering for meta-analysis (Sands & Murphy, 1996)
  - Regression with “quality” as a predictor of effect size (see Valentine & Cooper, 2008)

Quality assessment
Uses of information about quality:
- Narrative discussion of the impact of quality on results
- Display study quality and results in tabular format
- Weight the data by quality (not usually recommended, because quality scales are not always consistent; see Juni et al., 1999; Valentine & Cooper, 2008)
- Subgroup analysis by quality
- Include quality as a covariate in meta-regression
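The last option, quality as a covariate in meta-regression, is essentially a weighted regression of effect sizes on a quality score using inverse-variance weights. A minimal fixed-effects sketch (a full random-effects meta-regression would also estimate τ²); all numbers are invented for illustration:

```python
# Meta-regression of effect size on a study-quality score via
# weighted least squares with inverse-variance weights.
# All numbers are invented for illustration.

def wls_fit(x, y, w):
    """Closed-form weighted least-squares fit of y = a + b * x."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    b = sxy / sxx
    return ybar - b * xbar, b

quality = [1, 2, 3, 4, 5]                 # quality score per study
effects = [0.80, 0.65, 0.50, 0.40, 0.30]  # observed effect sizes
weights = [1 / v for v in [0.05, 0.04, 0.03, 0.04, 0.05]]

intercept, slope = wls_fit(quality, effects, weights)
print(f"intercept = {intercept:.3f}, slope = {slope:.3f}")
# A negative slope here would suggest that lower-quality studies
# report larger effects.
```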

Caveats & considerations...
- Quality of reporting: it is often hard to separate quality of reporting from methodological quality, and “not reported” is not always “not done”
- Code “unspecified” as distinct from “criteria not met”
- Consult as many materials as possible when developing coding materials
- There are some good references for systematic reviews that also apply to meta-analysis:
  - Torgerson’s (2003) book
  - Gough’s (2007) framework
  - The Cochrane Collaboration (http://www.cochrane.org/), searching for “assessing quality”

Publication bias

Questions
- What is publication bias?
- Why is it considered to be an issue for meta-analysis?
- Describe the arguments for inclusion and exclusion of unpublished studies
- What are some ways of assessing the impact of potential publication bias?

Exclusion?
- Inclusion of unpublished papers is likely to add considerable “noise” to the analyses.
- Methods typically used to find unpublished papers are ad hoc, so the resulting selection of studies is likely to be less representative of the unknown population of studies than the population of published studies is, and will typically be more homogeneous (White, 1994).
- “Whether bias is reduced or increased by including unpublished studies cannot formally be assessed as it is impossible to be certain that all unpublished studies have been located” (Smith & Egger, 1998).
- Hence, for published papers, there is a more clearly defined population of studies to which to generalise than there would be if unpublished studies were included.

Inclusion?
- A central goal of meta-analysis is to be inclusive.
- Meta-analyses call for a balance between practicality and comprehensiveness (Durlak & Lipsey, 1991).
- A compromise is for meta-analysts to report how they dealt with publication bias.

Methods for assessing publication bias
- Examination of the focus of the included studies
- Fail-safe N
- Trim & fill
- Sensitivity analysis (Vevea & Woods, 2005)
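Of these, the fail-safe N is the simplest to compute: it asks how many unlocated null-result studies would be needed to overturn a significant combined result. A minimal sketch of Rosenthal’s formula, with per-study z-values invented for illustration:

```python
# Rosenthal's fail-safe N: how many unretrieved studies averaging
# z = 0 would be needed to make the combined one-tailed p exceed .05.
# The per-study z-values are invented for illustration.
import math

def failsafe_n(z_values, z_alpha=1.645):
    """N_fs = (sum of z)^2 / z_alpha^2 - k  (Rosenthal, 1979)."""
    k = len(z_values)
    return sum(z_values) ** 2 / z_alpha ** 2 - k

z = [2.1, 1.8, 2.5, 0.9, 1.4]
print(math.floor(failsafe_n(z)))  # 22 hidden null studies for these values
```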

Conclusion: publication bias
- “The author of the meta-analysis, then, is faced with a logically impossible task: to show that publication bias is not a problem for the particular data set at hand. We describe the task as logically impossible because it amounts, in essence, to an attempt at confirming a null hypothesis” (Vevea & Woods, 2005, p. 438).
- Different methods can attempt to address (or at least assess) the issue, but none is perfect.
- At least we can conclude that the fail-safe N is not appropriate!
- Include unpublished studies?

3-level multilevel models

Questions
- What are some examples of data structures that might require 3-level modelling?
- What were the results of the 3-level multilevel model based on the peer review dataset?
- What does this mean in the ‘real world’ for the peer review process (i.e., does there appear to be a gender bias)?
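For reference, one standard way of writing the 3-level meta-analytic model has sampling variation at level 1, variation between effect sizes within a study at level 2, and variation between studies at level 3 (generic notation for effect sizes i nested in studies j, not necessarily the notation used in the session):

```latex
\begin{aligned}
\text{Level 1: } & d_{ij} = \delta_{ij} + e_{ij}, & e_{ij} &\sim N(0, v_{ij}) \\
\text{Level 2: } & \delta_{ij} = \beta_{j} + u_{ij}, & u_{ij} &\sim N(0, \sigma^{2}_{u}) \\
\text{Level 3: } & \beta_{j} = \gamma_{0} + w_{j}, & w_{j} &\sim N(0, \sigma^{2}_{w})
\end{aligned}
```

Moderators (such as application type in the peer review example) can enter as predictors at level 2 or level 3.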

Results of the peer review study
- The mean effect size was very small, but significantly in favour of men.
- However, the results did not generalise across studies (there was study-to-study variation).
- The effect size was significantly moderated by application type: it was almost exactly 0 for grants and in favour of men for fellowship applications. This difference was not moderated or mediated by other moderators.
- There appeared to be some discipline effects (a bias in favour of men in the social sciences) and country effects (a large bias in favour of men for Sweden). However, when all “main” effects were included, the discipline effects disappeared.
- For grant proposals there was no evidence of any effect of gender on outcome.