When To Select Observational Studies as Evidence for Comparative Effectiveness Reviews Prepared for: The Agency for Healthcare Research and Quality (AHRQ)

When To Select Observational Studies as Evidence for Comparative Effectiveness Reviews Prepared for: The Agency for Healthcare Research and Quality (AHRQ) Training Modules for Systematic Reviews Methods Guide

Systematic Review Process Overview

Learning Objectives
- To understand why reviewers should consider including observational studies in comparative effectiveness reviews (CERs)
- To understand when to include observational studies in CERs
- To review important considerations for deciding whether to include observational studies to assess benefits and harms

Current Perspective
- Comparative effectiveness reviews should always consider including observational studies.
- Reviewers should explicitly state the rationale for including or excluding observational studies.

Comparative Effectiveness Reviews
- Systematic reviews that compare the relative benefits and harms among a range of available treatments or interventions for a given condition.

Danger of Overreliance on Randomized Controlled Trials
- May be unnecessary, inappropriate, inadequate, or impractical
- May be too short in duration
- May report intermediate outcomes rather than the main health outcomes of interest
- Often not available for vulnerable populations
- Generally report efficacy rather than effectiveness

Observational Studies
- In these studies, investigators do not assign the exposure or intervention. These studies include:
  - All nonexperimental studies
  - Cohort, case-control, and cross-sectional studies
- We present considerations for including observational studies to assess benefits and harms separately.

Using Observational Studies To Assess Benefits (I)
- Reviewers should answer two questions:
  - Are there gaps in trial evidence for the review questions under consideration?
  - Will observational studies provide valid and useful information to fill these gaps and, thereby, answer the review questions?

Using Observational Studies To Assess Benefits (II)
[Flowchart] Begin with the systematic review question (including PICOTS) and always consider controlled trials. Ask: Are there gaps in trial evidence? If no, confine the review to controlled trials. If yes, refocus the review question on the gaps and consider observational studies (OSs), asking: Will OSs provide valid and useful information? Assess whether OSs address the review question, and assess their suitability: the natural history of the disease or exposure, and potential biases.
OS = observational study; PICOTS = population, intervention, comparator, outcome, timing, and setting.
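The two-question decision flow in the flowchart above can be written out as a small sketch. The function name and return strings below are illustrative, not from the AHRQ guide:

```python
# Hypothetical sketch of the two-question decision flow for including
# observational studies (OSs) in a comparative effectiveness review.
# Function and return strings are illustrative, not from the AHRQ guide.

def decide_os_inclusion(gaps_in_trial_evidence: bool,
                        oss_valid_and_useful: bool) -> str:
    """Map the two screening questions to a review strategy."""
    if not gaps_in_trial_evidence:
        # Trial evidence is sufficient for the review questions.
        return "confine review to controlled trials"
    if oss_valid_and_useful:
        # Refocus the review question on the gaps and include OSs.
        return "include observational studies for the gaps"
    # OSs would not provide valid, useful information for the gaps.
    return "confine review to controlled trials; report the gap"

print(decide_os_inclusion(True, True))
# → include observational studies for the gaps
```

Note that both "no gaps" and "gaps, but no suitable OSs" end in a trial-only review; they differ only in whether the gap is reported as such.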

Gaps in Trial Evidence: PICOTS
- Trial data may be insufficient for a number of reasons:
  - Population: may not be available for subpopulations or vulnerable populations
  - Intervention: high-risk interventions may not be possible to assign randomly
  - Comparator: evidence may be insufficient for comparators of interest
  - Outcome: may report intermediate outcomes rather than the main health outcomes of interest
  - Timing: duration of follow-up for outcome assessment may be too short
  - Setting: may not represent typical practice
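One way to make the PICOTS gap check concrete is to record, element by element, whether trial evidence covers it. The class and field names below are hypothetical, not an AHRQ tool:

```python
# Illustrative sketch: flag which PICOTS elements trial evidence fails
# to cover. The class and field names are hypothetical, not an AHRQ tool.
from dataclasses import dataclass

@dataclass
class PicotsCoverage:
    """True means trial evidence adequately covers that element."""
    population: bool
    intervention: bool
    comparator: bool
    outcome: bool
    timing: bool
    setting: bool

    def gaps(self) -> list:
        """Return the PICOTS elements left uncovered by trial evidence."""
        return [name for name, covered in vars(self).items() if not covered]

coverage = PicotsCoverage(population=False, intervention=True,
                          comparator=True, outcome=False,
                          timing=True, setting=True)
print(coverage.gaps())  # → ['population', 'outcome']
```

Elements flagged here are the ones for which observational studies might be sought.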

Are Trial Data Sufficient? (I)
- Risk of bias (internal validity)
  - The degree to which the findings may be attributed to factors other than the intervention under review
- Consistency
  - The extent to which effect size and direction vary within and across studies
  - Inconsistency may be due to heterogeneity across PICOTS
- Directness
  - The degree to which outcomes that are important to users of the comparative effectiveness review (patients, clinicians, or policymakers) are encompassed by trial data
  - Health outcomes are generally most important

Are Trial Data Sufficient? (II)
- Precision
  - Reflects sample size, number of studies, and heterogeneity of effect sizes
- Reporting bias
  - The extent to which trial authors appear to have reported all outcomes examined
- Applicability
  - The extent to which the trial data are likely to be applicable to the populations, interventions, and settings of interest to the user
  - The review questions should reflect the study characteristics (PICOTS) of interest

When To Identify Gaps in Trial Evidence
- Gaps in trial evidence can be identified at several points in the comparative effectiveness review:
  - In scoping the review
  - In consulting with the Technical Expert Panel
  - In reviewing titles and abstracts
  - In reviewing trial data in detail

Iterative Process for Identifying Gaps in Evidence
[Flowchart of the systematic review process] Prepare topic (refine topic; develop analytic framework) → Search for and select studies for inclusion (identify study eligibility criteria; search for relevant studies; select evidence for inclusion) → Extract data from studies → Analyze and synthesize studies (assess the quality of individual studies; assess applicability; synthesize quantitative data; present findings; grade strength of evidence) → Report systematic review.

Gaps in Trial Evidence
- Reviewers may perform initial searches broadly to identify both observational studies and trials.
- Alternatively, they may perform searches sequentially, searching for observational studies only after reviewing trial data in detail to identify gaps in evidence.

Using Observational Studies To Assess Benefits
[Flowchart, repeated] Begin with the systematic review question (including PICOTS) and always consider controlled trials. Ask: Are there gaps in trial evidence? If no, confine the review to controlled trials. If yes, refocus the review question on the gaps and consider observational studies (OSs), asking: Will OSs provide valid and useful information? Assess whether OSs address the review question, and assess their suitability: the natural history of the disease or exposure, and potential biases.
OS = observational study; PICOTS = population, intervention, comparator, outcome, timing, and setting.

Will Observational Studies Provide Valid and Useful Information?
- Refocus the study question on gaps in trial evidence.
- Respecify the PICOTS for gaps in trial evidence.
- Assess whether available observational studies (OSs) may address the review questions.
- Assess the suitability of OSs to answer the review questions.

Assessing the Suitability of Observational Studies To Answer the Review Questions
- After identifying the gaps in evidence that observational studies (OSs) could potentially fill, reviewers should:
  - Consider the clinical context and natural history of the condition under investigation
  - Assess how potential biases may influence the results of OSs

Clinical Context
- Fluctuating or intermittent conditions are more difficult to assess with observational studies (OSs), especially if there is no well-formed comparison group.
- For most chronic conditions, the natural history is for symptoms to wax and wane over time; regression to the mean is an important consideration.
- OSs may be more useful for conditions with steady progression or decline.

Potential Biases That May Limit the Suitability of Including Observational Studies
- Selection bias
- Performance bias
- Detection bias
- Attrition bias

Confounding by Indication
- Is a type of selection bias
- Occurs when different diagnoses, severity of illness, or comorbid conditions are important reasons for physicians to assign different treatments
- Is a common problem in pharmacoepidemiological studies comparing benefits
- Is often difficult to adjust for, making studies with a high degree of this potential bias usually unsuitable for inclusion in a comparative effectiveness review

Using Observational Studies To Assess Benefits
- Observational studies (OSs) without a comparison group are rarely helpful in assessing benefits because of a high risk of bias.
- In general, OSs must have a well-formed comparison group to be useful.
- Establishing treatment benefits from OSs is uncommon; generally, efficacy must first be established in randomized controlled trials.

Harms Assessments
- Assessing harms can be difficult.
- Trials often focus on benefits, with little effort to balance assessment of benefits with assessment of harms.
- Observational studies are almost always necessary to assess harms adequately.
- There are trade-offs between increasing comprehensiveness by reviewing all possible observational studies that present harms and the decreased quality that may result from increased risk of bias.

Using Randomized Controlled Trials To Assess Harms (I)
- Randomized controlled trials (RCTs) are the gold standard for evaluating efficacy.
- Relying solely on RCTs to evaluate harms in comparative effectiveness reviews is problematic:
  - Most RCTs lack prespecified hypotheses for harms because they are designed to evaluate benefits.
  - Assessment of harms is often a secondary consideration.
  - The quality and quantity of harms reporting are frequently inadequate.
  - Few studies have sufficient sample size or duration to adequately assess uncommon or long-term harms.

Using Randomized Controlled Trials To Assess Harms (II)
- Most randomized controlled trials (RCTs) are efficacy trials:
  - They assess benefits and harms in ideal, homogeneous populations and settings.
  - Patients who are more susceptible to harms are often underrepresented.
- Few RCTs directly compare alternative treatment strategies.
- The potential for publication bias and selective outcome reporting bias should be considered.
- RCTs may not be available.

Using Randomized Controlled Trials To Assess Harms (III)
- Nevertheless, head-to-head randomized controlled trials (RCTs) provide the most direct evidence on comparative harms.
- Placebo-controlled RCTs can provide important information.
- Comparative effectiveness reviews (CERs) should include both head-to-head and placebo-controlled RCTs for assessment of harms.
- In lieu of RCTs, CERs may incorporate the findings of well-conducted systematic reviews if they evaluated the specific harms of interest.

Using Data From Unpublished Trials To Assess Harms
- Consider including the results of unpublished completed or terminated randomized controlled trials, as well as unpublished results from published trials.
- The United States Food and Drug Administration Web site is an important source.
- Reviewers must consider whether the risk of bias can be fully assessed.
- When significant numbers of published trials fail to report important harms, reviewers should report this gap in the evidence and consider efforts to obtain unpublished data.

Using Observational Studies To Assess Harms
- Observational studies (OSs) are almost always necessary to assess harms adequately.
  - The exception is when there are sufficient data from randomized controlled trials to estimate harms reliably.
- OSs may provide the best or only data for assessing harms in minority or vulnerable populations who are underrepresented in trials.
- The types of OSs included in a comparative effectiveness review will vary.
  - Different types of OSs might be included or rendered irrelevant by data available from stronger study designs.

Hypothesis Testing Versus Hypothesis Generating
- Determining whether a hypothesis is being tested or generated is an important consideration in deciding which observational studies to include in harms assessments.
  - Case reports and case series are hypothesis generating.
  - Cohort and case-control studies are well suited for testing hypotheses that one intervention is associated with a greater risk for an adverse event than another, and for quantifying that risk.
Source: Chou R, et al. J Clin Epidemiol 2010;63:

Types of Observational Studies That Can Be Used To Assess Harms
- Cohort and case-control studies
  - Routinely search for and include cohort and case-control studies, except when randomized controlled trial data are sufficient and valid
- OSs based on patient registries
- OSs based on analyses of large databases
- Case reports, case series, and postmarketing surveillance studies
  - Include studies of new medications for which sufficient harms data are not available
- Other OSs

Screening Observational Studies for Inclusion in Harms Assessments
- Often there are many more observational studies (OSs) than trials; evaluating a large number of OSs can be impractical when conducting a comparative effectiveness review (CER).
- Criteria commonly used to screen OSs for inclusion in CERs:
  - Minimum duration of followup
  - Minimum sample size
  - Defined threshold for risk of bias
  - Study design restrictions (e.g., cohort and case-control studies only)
  - Specific population of interest
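Screening criteria like these could be applied programmatically to a list of candidate studies. The thresholds, designs, and field names below are hypothetical examples, not AHRQ-recommended values:

```python
# Minimal sketch of screening candidate observational studies with the
# criteria listed above; thresholds, designs, and field names are
# hypothetical examples, not AHRQ-recommended values.

studies = [
    {"id": "OS-1", "design": "cohort", "n": 1200, "followup_weeks": 52},
    {"id": "OS-2", "design": "case series", "n": 40, "followup_weeks": 8},
    {"id": "OS-3", "design": "case-control", "n": 300, "followup_weeks": 26},
]

def passes_screen(study, min_n=100, min_followup_weeks=24,
                  allowed_designs=("cohort", "case-control")):
    """Apply design, sample-size, and followup-duration screens."""
    return (study["design"] in allowed_designs
            and study["n"] >= min_n
            and study["followup_weeks"] >= min_followup_weeks)

included = [s["id"] for s in studies if passes_screen(s)]
print(included)  # → ['OS-1', 'OS-3']
```

In practice a risk-of-bias threshold would require per-study quality ratings rather than the simple numeric fields shown here.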

Key Messages
- Evidence from trials is often insufficient to answer all the key questions to be addressed in comparative effectiveness reviews (CERs).
- The default strategy for CERs should be to consider including observational studies (OSs).
- CERs should explicitly state the rationale for including or excluding OSs.
- To assess benefits, reviewers should consider two questions:
  - Are there gaps in trial evidence for the review questions under consideration?
  - Will observational studies provide valid and useful information to address the key questions?
- To assess harms, reviewers should routinely search for and include comparative cohort studies and case-control studies.

References
- Norris S, Atkins D, Bruening W, et al. Comparative effectiveness reviews and observational studies. In: Agency for Healthcare Research and Quality. Methods Guide for Comparative Effectiveness Reviews. Rockville, MD. In press.
- Chou R, Aronson N, Atkins D, et al. AHRQ series paper 4: assessing harms when comparing medical interventions: AHRQ and the Effective Health Care Program. J Clin Epidemiol 2010;63:

Authors
- This presentation was prepared by Dan Jonas, M.D., M.P.H., and Karen Crotty, Ph.D., M.P.H., members of the Research Triangle Institute–University of North Carolina Evidence-based Practice Center.
- The module is based on Norris S, Atkins D, Bruening W, et al. Comparative effectiveness reviews and observational studies. In: Agency for Healthcare Research and Quality. Methods Guide for Comparative Effectiveness Reviews. Rockville, MD. In press.