Comparator Selection in Observational Comparative Effectiveness Research. Prepared for: Agency for Healthcare Research and Quality (AHRQ), www.ahrq.gov


Comparator Selection in Observational Comparative Effectiveness Research Prepared for: Agency for Healthcare Research and Quality (AHRQ)

Outline of Material

This presentation will:
- Show how to choose concurrent, active comparators from the same source population (or justify the use of no-treatment comparisons, historical comparators, or different data sources)
- Discuss potential bias associated with comparator choice, and methods to minimize it
- Define time 0 for all comparator groups in describing planned analyses

Introduction

- In comparative effectiveness research, the choice of comparator directly affects the clinical implications, interpretation, and validity of study results.
- Treatment decisions are based on factors associated with the underlying disease and its severity, general health status or frailty, quality of life, and patient preferences.
- Because these factors differ between treatment groups, there is potential for confounding by indication or severity, and for selection bias, associated with different comparison groups.
- Internal validity relies on defining an appropriate dose, intensity of treatment, and exposure window for the comparator groups.

Consequences of Comparator Choice (1 of 2)

- Confounding arises when a risk factor for the study outcome of interest directly or indirectly affects exposure (e.g., treatment assignment).
- The magnitude of potential confounding is generally expected to be smaller when the comparator:
  - Has the same indication
  - Has similar contraindications
  - Shares the same treatment modality (e.g., tablet or capsule)
- Conduct sensitivity analyses to quantify the effect of potential unmeasured confounding (see the sketch below).
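One widely used form of such a sensitivity analysis (an option, not one this module prescribes) is the E-value of VanderWeele and Ding: the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both treatment and outcome to fully explain away an observed estimate. A minimal Python sketch:

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio: the minimum strength of
    association an unmeasured confounder would need with both treatment
    and outcome to fully explain away the estimate."""
    if rr < 1:  # protective estimates are handled symmetrically on the log scale
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# Hypothetical example: an observed RR of 1.6 for drug A versus drug B
print(round(e_value(1.6), 2))  # 2.58
```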

Consequences of Comparator Choice (2 of 2)

- Exposure misclassification:
  - Arises when exposure measurement differs between the exposure and comparator groups
  - Is often more complex in comparative effectiveness research, because each group represents an active treatment (nonuse of the exposure treatment does not imply use of the comparator treatment)
  - Can differ in magnitude between the groups, especially if different treatment modalities are used
  - Should be assessed separately for the exposure and comparison groups
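When the sensitivity and specificity of the exposure measurement can be estimated for each group, a simple quantitative bias analysis can back-calculate corrected counts. A minimal sketch, with the sensitivity and specificity values purely hypothetical:

```python
def corrected_exposed(observed_exposed: float, total: float,
                      se: float, sp: float) -> float:
    """Back-calculate the true number exposed from an observed count,
    given an assumed sensitivity (se) and specificity (sp) of the
    exposure measurement."""
    return (observed_exposed - (1 - sp) * total) / (se - (1 - sp))

# Hypothetical margin: 300 of 1,000 cases classified as exposed, assuming
# the claims-based exposure algorithm has se = 0.85 and sp = 0.95
print(corrected_exposed(300, 1000, se=0.85, sp=0.95))  # 312.5
```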

Spectrum of Possible Comparisons (1 of 3)

- Alternative treatments
  - The most common scenario, and typically the least biased
  - More clinically meaningful and methodologically valid
  - Can still result in confounding by severity if that is not adequately controlled through design or analysis
- No treatment/testing
  - Absence of the exposure, or absence of the exposure combined with use of an unrelated treatment (an "active" comparator)
  - The choice of time 0 must be clinically appropriate in order to reduce bias (see the sketch below)
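One common device for assigning a clinically appropriate time 0 to an untreated group (one option among several, and not specified in this module) is to sample index dates from the treated group's distribution of initiation dates. A rough pandas sketch with hypothetical cohorts:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical treated cohort: time 0 is the first dispensing date
treated = pd.DataFrame({
    "patient_id": range(500),
    "index_date": pd.Timestamp("2020-01-01")
                  + pd.to_timedelta(rng.integers(0, 365, size=500), unit="D"),
})
untreated = pd.DataFrame({"patient_id": range(500, 1200)})

# Give each untreated patient a time 0 drawn from the treated group's
# index-date distribution, so follow-up starts comparably in both groups
untreated["index_date"] = rng.choice(treated["index_date"].to_numpy(),
                                     size=len(untreated))
```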

Spectrum of Possible Comparisons (2 of 3)

- Usual or standard care
  - Develop a valid operational definition of the care received (none, a single treatment, or a set of treatment/testing modalities) and of the time of initiation
  - Real-world use must be understood for a proper definition
  - Can vary across geographic regions and treatment settings, or change over time; avoid a "wastebasket" definition
- Historical comparison
  - Useful when there has been a dramatic shift from one treatment to another
  - May be the only choice when selection for a new treatment is strong and uncontrollable and randomization is unethical or unrealistic
  - Vulnerable to confounding by indication/severity when that information is unmeasured; this can sometimes be overcome by instrumental variable analysis using calendar time (see the sketch below)
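As one concrete version of the calendar-time instrument, a binary "era" indicator can feed the simplest instrumental-variable estimator, the Wald estimator (a real analysis would usually use two-stage methods with covariates). A sketch with hypothetical column names:

```python
import pandas as pd

def wald_iv(df: pd.DataFrame) -> float:
    """Wald instrumental-variable estimate of the risk difference, using
    calendar era (era: 0 = before the treatment shift, 1 = after) as the
    instrument for treatment (treated) on a binary outcome (outcome)."""
    m = df.groupby("era")[["treated", "outcome"]].mean()
    return ((m.loc[1, "outcome"] - m.loc[0, "outcome"])
            / (m.loc[1, "treated"] - m.loc[0, "treated"]))
```

The denominator is the jump in treatment uptake across the era boundary; the estimate is only credible if calendar time affects the outcome solely through treatment choice.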

Spectrum of Possible Comparisons (3 of 3)

- Comparison groups from different data sources
  - Multiple data sources can be linked to enhance the validity of observational comparative effectiveness studies
  - Residual confounding might occur because of:
    - Incomparability of the information available for the exposure and comparison groups
    - Differences in observed and unobserved characteristics when the groups are sampled differently or drawn from different source populations
  - Generalizability is also a concern when the exposure and comparison groups come from different databases

Operationalizing the Comparison Group in Comparative Effectiveness Research (1 of 2)

- Indication
  - Another treatment used for the same indication as the exposure treatment is typically used as the comparison group
  - For treatments approved for multiple indications, the appropriate indication must be ensured by explicitly defining the indication and restricting the study population
- Initiation
  - A new-user design prevents underascertainment of early events and avoids selection bias arising from prevalent users (see the sketch below)
  - Inclusion of prevalent users may be justified when outcomes are rare or occur only after long periods of use
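A minimal pandas sketch of new-user identification with a washout window, assuming hypothetical columns patient_id, rx_date, and enroll_start (start of continuous enrollment):

```python
import pandas as pd

def new_users(dispensings: pd.DataFrame, washout_days: int = 365) -> pd.DataFrame:
    """Keep each patient's first observed dispensing, requiring at least
    `washout_days` of enrollment beforehand so that the absence of prior
    use is actually observable (the washout)."""
    first = (dispensings.sort_values("rx_date")
                        .groupby("patient_id", as_index=False)
                        .first())
    observable = (first["rx_date"] - first["enroll_start"]).dt.days >= washout_days
    return first[observable]

# Hypothetical dispensing file
rx = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "rx_date": pd.to_datetime(["2021-06-01", "2021-09-01", "2021-02-01"]),
    "enroll_start": pd.to_datetime(["2020-01-01", "2020-01-01", "2020-12-01"]),
})
print(new_users(rx))  # patient 1 qualifies; patient 2 lacks adequate lookback
```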

Operationalizing the Comparison Group in Comparative Effectiveness Research (2 of 2)

- Exposure time window
  - The period during which a therapeutic benefit and/or risk would plausibly occur
  - Use sensitivity analyses to assess whether results depend on the specification of the exposure window(s) (see the sketch below)
- Nonadherence
  - May differ between the treatment and comparator groups
  - Treatment effects should be compared at the adherence levels observed in clinical practice, rather than adjusting for the difference in adherence
- Dose/intensity of drug comparison
  - Assess and report the dose in each group
  - Make comparisons at clinically equivalent dose levels
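A minimal sketch of such a sensitivity analysis, re-estimating a crude incidence-rate ratio under several risk-window choices (all data hypothetical):

```python
import pandas as pd

# Hypothetical analytic file: one row per patient, follow-up from time 0
cohort = pd.DataFrame({
    "group":         ["exposed"] * 3 + ["comparator"] * 3,
    "days_to_event": [20, 400, 150, 25, 500, 90],  # event or censoring time
    "event":         [1, 0, 1, 1, 0, 0],
})

for window in (30, 90, 180):
    d = cohort.assign(
        hit=(cohort["event"] == 1) & (cohort["days_to_event"] <= window),
        pt=cohort["days_to_event"].clip(upper=window),  # truncate person-time
    )
    agg = d.groupby("group").agg(events=("hit", "sum"), persontime=("pt", "sum"))
    rate = agg["events"] / agg["persontime"]
    print(f"{window}-day window IRR: {rate['exposed'] / rate['comparator']:.2f}")
```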

Considerations for Comparisons Across Different Treatment Modalities (1 of 3)

- Confounding by indication or severity:
  - Medications may be used for patients with milder disease, while surgery may be reserved for those with more severe disease.
- Selection of healthier patients to receive more invasive treatments:
  - Sicker patients are less likely to be considered for invasive procedures.
  - This selection becomes more problematic in comparisons across different treatment modalities (see the sketch below).
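Such selection is commonly addressed with propensity score methods (one standard option; the module does not mandate a specific method). A minimal sketch that simulates healthier patients being selected into surgery and then computes stabilized inverse-probability-of-treatment weights; all variable names are hypothetical:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Hypothetical baseline severity/frailty proxies
df = pd.DataFrame({"severity": rng.normal(size=n),
                   "frailty":  rng.normal(size=n)})

# Simulate selection: sicker, frailer patients are less likely to get surgery
logit = -0.8 * df["severity"] - 0.5 * df["frailty"]
df["surgery"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Propensity score: modeled probability of receiving surgery at baseline
ps = (LogisticRegression()
      .fit(df[["severity", "frailty"]], df["surgery"])
      .predict_proba(df[["severity", "frailty"]])[:, 1])

# Stabilized inverse-probability-of-treatment weights
p_treat = df["surgery"].mean()
df["iptw"] = np.where(df["surgery"], p_treat / ps, (1 - p_treat) / (1 - ps))
```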

Considerations for Comparisons Across Different Treatment Modalities (2 of 3)

- Time from disease onset to treatment:
  - To prevent immortal person-time bias, pay careful attention to the time from initial diagnosis and to the usual sequence of the different treatment modalities (see the worked example below).
- Different magnitudes of misclassification in drug-versus-procedure comparisons:
  - Misclassification of exposure is likely to be greater for drugs than for devices/procedures.
  - Pharmacy records do not provide information on actual intake.
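A minimal worked example, with hypothetical dates, of why the placement of time 0 matters:

```python
import pandas as pd

# Hypothetical treated patient: diagnosed Jan 1, initiated treatment Mar 1,
# event Sep 1 (all 2021)
dx    = pd.Timestamp("2021-01-01")
start = pd.Timestamp("2021-03-01")
event = pd.Timestamp("2021-09-01")

# Biased: counting the treated patient's follow-up from diagnosis adds 59
# event-free "immortal" days, because the patient had to survive untreated
# until initiation
biased_persontime = (event - dx).days       # 243 days

# Correct: time 0 for the treated group is treatment initiation
unbiased_persontime = (event - start).days  # 184 days
print(biased_persontime, unbiased_persontime)
```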

Considerations for Comparisons Across Different Treatment Modalities (3 of 3)

- Provider effects in the use of devices or surgeries:
  - Consider the characteristics of the operating physician and of the institution where the device implantation or surgery was carried out.
  - Be aware of the documented direct relationship between physician experience and better patient outcomes for complex procedures.
- Adherence to drugs versus device failure or removal:
  - Measuring these requires assumptions in most data sources (see the sketch below).
  - It may be appropriate to compare groups without adjusting for adherence, as observed adherence reflects real-world use.
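On the drug side, adherence is often summarized as the proportion of days covered (PDC). A minimal sketch under hypothetical column names, assuming each dispensing covers fill_date through fill_date + days_supply - 1:

```python
import pandas as pd

def pdc(dispensings: pd.DataFrame, start: str, end: str) -> float:
    """Proportion of days covered: the share of days in [start, end)
    covered by at least one dispensing."""
    days = pd.date_range(start, end, inclusive="left")
    covered = pd.Series(False, index=days)
    for _, rx in dispensings.iterrows():
        last = rx["fill_date"] + pd.Timedelta(days=rx["days_supply"] - 1)
        covered.loc[rx["fill_date"]:last] = True  # label slicing clips to window
    return covered.mean()

# Hypothetical dispensings: two 30-day fills over a 90-day window
rxs = pd.DataFrame({
    "fill_date": pd.to_datetime(["2022-01-01", "2022-02-15"]),
    "days_supply": [30, 30],
})
print(round(pdc(rxs, "2022-01-01", "2022-04-01"), 2))  # 60/90 -> 0.67
```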

Conclusions

- Understanding the impact of comparator choice on study design is important.
- Selection of the comparator group should be driven primarily by a comparative effectiveness question prioritized by the stakeholder community.
- An overriding consideration is the generation of evidence that can directly inform decisions about treatments, testing, or health care delivery systems.
- Some study questions cannot be answered validly in observational comparative effectiveness research because of intractable bias.

Summary Checklist

Guidance: Choose concurrent, active comparators from the same source population (or justify the use of no-treatment comparisons, historical comparators, or different data sources).
Key considerations: Comparator choice should be driven primarily by a comparative effectiveness question prioritized by the informational needs of the stakeholder community, and only secondarily as a strategy to minimize bias.

Guidance: Discuss potential bias associated with comparator choice and, when possible, methods to minimize such bias.
Key considerations: Also describe how study design and analytic methods will be used to minimize bias.

Guidance: Define time 0 for all comparator groups in describing planned analyses.
Key considerations: The choice of time 0, particularly for no-treatment or usual-care comparisons, should be considered carefully in light of potential immortal time bias and prevalent-user bias. Employ a new-user design as the default, if possible.