
Funded through the ESRC’s Researcher Development Initiative Department of Education, University of Oxford Session 2.3: Assessing Quality

 Does it make a difference?
 How do we assess primary study quality?

 The quality of evidence generated by a review depends entirely on the quality of the primary studies that make up the review
 Garbage in, garbage out!
 Quality assessment helps to set apart a meta-analysis or systematic review from a narrative review

 In meta-analysis, quality refers to methodological quality (the internal validity of the primary studies)
 High quality reduces the risk of bias by eliminating or taking into account sources of bias such as:
  Selection bias
  Information bias
  Confounding

 Increasingly, meta-analysts evaluate the quality of each study included in a meta-analysis
 Sometimes this is a global, holistic (subjective) rating. In this case it is important to have multiple raters so that inter-rater agreement can be established (see the sketch below)
 Sometimes study quality is quantified in relation to objective criteria of a good study, e.g.:
  larger sample sizes;
  more representative samples;
  better measures;
  use of random assignment;
  appropriate control for potential bias;
  double blinding; and
  low attrition rates (particularly for longitudinal studies)
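
Where holistic ratings are used, agreement between raters can be quantified before the ratings enter the analysis. Below is a minimal sketch using Cohen's kappa from scikit-learn; all ratings and variable names are invented for illustration.

```python
# Minimal sketch: inter-rater agreement on holistic quality ratings,
# computed with Cohen's kappa. All ratings below are invented examples.
from sklearn.metrics import cohen_kappa_score

# Two raters' global quality ratings for the same ten studies
rater_a = ["high", "low", "medium", "high", "low",
           "high", "medium", "low", "high", "medium"]
rater_b = ["high", "low", "medium", "medium", "low",
           "high", "medium", "low", "high", "high"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # about 0.70 here: substantial agreement
```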

 In a meta-analysis of social science meta-analyses, Wilson and Lipsey (1993) found an average effect size of .50. They evaluated how this was related to study quality:
  For meta-analyses providing a global (subjective) rating of the quality of each study, there was no significant difference between high- and low-quality studies; the average correlation between effect size and quality was almost exactly zero.
  There was almost no difference between effect sizes based on random and non-random assignment (effect sizes were slightly larger for random assignment).
  The only study quality characteristic that made a difference was the one-group pre/post design with no control group at all, which produced positively biased effects.

 Goldring (1990) evaluated the effects of gifted education programs on achievement. She found a positive effect, but emphasised that the findings were questionable because of weak studies:
  21 of the 24 studies were unpublished, and only one used random assignment.
  Effects varied with matching procedures:
  the largest effects for achievement outcomes were in studies in which differences between non-equivalent groups were controlled by only one pretest variable;
  effect sizes decreased as the number of control variables increased; and
  effects disappeared altogether with random assignment.
 Goldring (1990, p. 324) concluded that policy makers need to be aware of the limitations of the gifted-and-talented (GAT) literature.

 Schulz (1995) evaluated study quality in 250 randomized clinical trials (RCTs) from 33 meta-analyses. Poor quality studies led to positively biased estimates:
  inadequate or unclear concealment of allocation: estimates exaggerated by 30-41%;
  lack of double-blinding: estimates exaggerated by 17%;
  participants excluded after randomization: no significant effect.
 Moher et al. (1998) reanalysed 127 RCTs from 11 meta-analyses for study quality.
  Low quality trials resulted in significantly larger effect sizes: a 30-50% exaggeration in estimates of treatment efficacy.
 Wood et al. (2008) evaluated study quality (1,346 RCTs from 146 meta-analyses):
  subjective outcomes: inadequate/unclear concealment and lack of blinding resulted in substantial biases;
  objective outcomes: no significant effects;
  conclusion: systematic reviewers should assess risk of bias.

 Meta-analyses should always include subjective and/or objective indicators of study quality.
 In the social sciences, there is some evidence that studies with highly inadequate control for pre-existing differences lead to inflated effect sizes. However, it is surprising that other indicators of study quality make so little difference.
 In medical research, studies are largely limited to RCTs, where there is MUCH more control than in social science research. Here, there is evidence that inadequate concealment of assignment and lack of double-blinding inflate effect sizes, but perhaps only for subjective outcomes.
 These issues are likely to be idiosyncratic to individual discipline areas and research questions.


 It is important to code study quality characteristics
 Juni, Witschi, Bloch, and Egger (1999):
  evaluated scales designed to assess the quality of randomized trials in medicine;
  used an identical set of 17 studies;
  applied the quality weightings dictated by 25 different scales.
  Seven of the scales indicated that high quality trials showed an effect whereas low quality trials did not.
  Six of the scales found that high quality trials showed no effect whereas low quality trials did (the reverse conclusion).
  For the remaining 12 scales, effect estimates were similar across quality levels.
  Overall summary quality scores were not significantly associated with treatment effects.
 In summary: the scale used to evaluate study quality can determine whether a difference between quality levels is detected

 Requires designing the coding materials to include adequate questions about study design and reporting
 Requires skill and training in identifying quality characteristics
 May require additional analyses:
  quality weighting (Rosenthal, 1991) - see the sketch below;
  use of the kappa statistic in determining the validity of quality filtering for meta-analysis (Sands & Murphy, 1996);
  regression with quality as a predictor of effect size (see Valentine & Cooper, 2008)
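
As an illustration of the first of these analyses, here is a minimal sketch of quality weighting in the spirit of Rosenthal (1991): each study's inverse-variance weight is multiplied by its quality score, so lower-quality studies contribute less to the pooled estimate. Every number below is invented, and quality scores are assumed to be pre-scaled to 0-1.

```python
# Minimal sketch: quality-weighted pooled effect size.
# Effect sizes, variances, and quality scores are invented examples.
import numpy as np

effect_sizes = np.array([0.42, 0.55, 0.10, 0.31])   # standardized mean differences
variances    = np.array([0.02, 0.05, 0.01, 0.03])   # sampling variances
quality      = np.array([0.9, 0.4, 1.0, 0.7])       # quality scores scaled to 0-1

iv_weights = 1.0 / variances          # ordinary fixed-effect (inverse-variance) weights
qw_weights = iv_weights * quality     # quality-adjusted weights

pooled_iv = np.sum(iv_weights * effect_sizes) / np.sum(iv_weights)
pooled_qw = np.sum(qw_weights * effect_sizes) / np.sum(qw_weights)
print(f"inverse-variance pooled d: {pooled_iv:.3f}")
print(f"quality-weighted pooled d: {pooled_qw:.3f}")
```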

 Uses of information about quality:
  narrative discussion of the impact of quality on results;
  displaying study quality and results in a tabular format;
  weighting the data by quality - not usually recommended, because scales are not always consistent (see Juni et al., 1999; Valentine & Cooper, 2008);
  subgroup analysis by quality;
  including quality as a covariate in meta-regression (see the sketch below)
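
A minimal sketch of the last option, using ordinary weighted least squares from statsmodels as a stand-in for a fixed-effect meta-regression. The data are invented, and a dedicated meta-analysis package would handle the variance structure more carefully; this only illustrates the idea of quality as a moderator.

```python
# Minimal sketch: quality as a covariate in a meta-regression,
# approximated with inverse-variance weighted least squares.
# Effect sizes, variances, and quality scores are invented examples.
import numpy as np
import statsmodels.api as sm

effect_sizes = np.array([0.42, 0.55, 0.10, 0.31, 0.25])
variances    = np.array([0.02, 0.05, 0.01, 0.03, 0.02])
quality      = np.array([0.9, 0.4, 1.0, 0.7, 0.6])   # scaled 0-1

X = sm.add_constant(quality)                          # intercept + quality moderator
fit = sm.WLS(effect_sizes, X, weights=1.0 / variances).fit()

print(fit.params)    # slope estimates how effect size shifts with study quality
print(fit.pvalues)
```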

 Valentine and Cooper (2008) developed an instrument for assessing study quality for inclusion in meta-analyses and systematic reviews
 It "focuses on the operational details of studies and results in a profile of scores instead of a single score to represent study quality" (p. 130)
 The Study Design and Implementation Assessment Device (Study DIAD)
 Hierarchical: consists of "global", "composite", and "design and implementation" questions (a rough data-structure sketch follows below)
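
A rough sketch of how such a hierarchy might be represented when building coding materials. The question texts below are invented placeholders, not the instrument's wording; the point is that items roll up into composites, composites into global questions, and scoring yields a profile rather than one number.

```python
# Minimal sketch: representing a hierarchical quality instrument in code.
# All question texts are invented placeholders.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    text: str                     # a design-and-implementation question

@dataclass
class Composite:
    text: str
    items: List[Item] = field(default_factory=list)

@dataclass
class GlobalQuestion:
    text: str
    composites: List[Composite] = field(default_factory=list)

profile = [
    GlobalQuestion(
        text="Did the study use a design that allows a causal inference?",
        composites=[
            Composite(
                text="Was assignment to conditions adequately controlled?",
                items=[Item("Was random assignment used?"),
                       Item("Was attrition low and similar across groups?")],
            )
        ],
    ),
]
# Scoring each global question separately produces a profile of scores,
# not a single summary score.
```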

 [Figure: the instrument's structure, from Valentine & Cooper (2008, p. 139)]

 Multiple questions sit within each global and composite question
 [Table: example excerpt of Table 4 (p. 144)]

 The Critical Appraisal Skills Programme (CASP) provides checklists to help with the process of critically appraising articles:
  Systematic Reviews (!)
  Randomised Controlled Trials (RCTs)
  Qualitative Research
  Economic Evaluation Studies
  Cohort Studies
  Case Control Studies
  Diagnostic Test Studies
 A good start, but not comprehensive
 Download from the CASP website

 [Image: example from the CASP Cohort Studies form]

 Centre for Reviews and Dissemination (CRD), University of York
 Report on "Study quality assessment"
  Development of quality assessment instruments
 Also covers quality assessment of:
  effectiveness studies
  accuracy studies
  qualitative research
  economic evaluations

 Some questions to consider regarding quality in case series designs (can also be used for survey data designs)

 Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre), Institute of Education, University of London
 Some issues to consider when coding are listed in the Guidelines for the REPOrting of primary empirical research Studies in Education (the REPOSE Guidelines)
 These are not as detailed/thorough (prescriptive?) as the medical research guidelines...

 When designing your coding materials, you can look at guidelines for what should be reported and turn those into questions to evaluate quality (see the sketch below)
 For example, Strengthening the Reporting of Observational Studies in Epidemiology (STROBE)
  STROBE Statement: a checklist of items that should be included in reports of cross-sectional studies
  Point 13(b): "Give reasons for non-participation at each stage"
  Could be rephrased in the coding materials as: "Did the study give a reason for non-participation?"
 STROBE checklists are available for cohort, case-control, and cross-sectional studies on the STROBE website
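
A minimal sketch of this checklist-to-coding-question mapping as it might appear in coding materials. The data structure, field names, and study identifier are invented for illustration; only the STROBE item and its rephrasing come from the slide above.

```python
# Minimal sketch: turning a reporting-guideline item into a coding question.
# Field names and the study identifier are invented.
strobe_to_coding = {
    "13b: Give reasons for non-participation at each stage": {
        "coding_question": "Did the study give a reason for non-participation?",
        "options": ["yes", "no", "unspecified"],   # keep 'unspecified' distinct from 'no'
    },
}

def code_study(study_id: str, item: str, answer: str) -> dict:
    """Record one coding decision for a study against a checklist item."""
    entry = strobe_to_coding[item]
    if answer not in entry["options"]:
        raise ValueError(f"answer must be one of {entry['options']}")
    return {"study": study_id, "question": entry["coding_question"], "answer": answer}

print(code_study("Smith2004",
                 "13b: Give reasons for non-participation at each stage",
                 "unspecified"))
```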

 Quality of reporting:
  It is often hard to separate quality of reporting from methodological quality - "not reported" is not always "not done"
  You should therefore code "unspecified" as distinct from "criteria not met" (see the sketch below)
 Consult as many materials as possible when developing coding materials
 There are some good references for systematic reviews that also apply to meta-analysis:
  Torgerson's (2003) book
  Gough's (2007) framework
  Search the Cochrane Collaboration website for "assessing quality"
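
A minimal sketch of that distinction as it might be encoded; the enum and its labels are invented for illustration.

```python
# Minimal sketch: keeping reporting gaps separate from confirmed
# methodological shortcomings when coding study quality.
from enum import Enum

class QualityCode(Enum):
    CRITERION_MET = "criterion met"
    CRITERION_NOT_MET = "criterion not met"   # shortcoming confirmed in the report
    UNSPECIFIED = "unspecified"               # the report is silent: not the same as "not done"

# Coding UNSPECIFIED separately allows a later sensitivity analysis that
# treats silent studies first as "met", then as "not met", to bound the bias.
blinding = QualityCode.UNSPECIFIED
print(blinding.value)
```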

 Gough, D. (2007). Weight of evidence: A framework for the appraisal of the quality and relevance of evidence. In J. Furlong & A. Oancea (Eds.), Applied and practice-based research. Special edition of Research Papers in Education, 22.
 Juni, P., Witschi, A., Bloch, R., & Egger, M. (1999). The hazards of scoring the quality of clinical trials for meta-analysis. JAMA, 282, 1054-1060.
 Rosenthal, R. (1991). Quality-weighting of studies in meta-analytic research. Psychotherapy Research, 1.
 Sands, M. L., & Murphy, J. R. (1996). Use of kappa statistic in determining validity of quality filtering for meta-analysis: A case study of the health effects of electromagnetic radiation. Journal of Clinical Epidemiology, 49.
 Torgerson, C. (2003). Systematic reviews. UK: Continuum International.
 Valentine, J. C., & Cooper, H. M. (2008). A systematic and transparent approach for assessing the methodological quality of intervention effectiveness research: The Study Design and Implementation Assessment Device (Study DIAD). Psychological Methods, 13.