Some comments on the 3 papers
Robert T. O'Neill, Ph.D.

Comments on G. Anderson
- WHISH is a nice example.
- Randomization (Zelen design), but using different sources of data for the outcome.
- Outcome data: self-reported, adjudicated from medical records, Medicare claims (the hybrid design gives the ability to estimate sensitivity (SE) and specificity (SP)).
- Impact of outcome misclassification (see the sketch below).
- Event data are not defined by protocol; you depend on the health care system.
- Claims data DO NOT provide standardized data; see Mini-Sentinel and OMOP.
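To make the misclassification point concrete, here is a minimal Python sketch (my illustration, not from the talk; all numbers invented) of why the hybrid design's ability to estimate SE and SP matters: with those estimates from an adjudicated subsample, a claims-based event rate can be corrected using the Rogan-Gladen estimator.

```python
def corrected_rate(p_obs: float, se: float, sp: float) -> float:
    """Rogan-Gladen correction: p_true = (p_obs + sp - 1) / (se + sp - 1)."""
    return (p_obs + sp - 1.0) / (se + sp - 1.0)

# Illustrative values only: claims flag 6% of patients with the outcome;
# adjudication of a subsample suggests SE = 0.80, SP = 0.98.
p_obs, se, sp = 0.06, 0.80, 0.98
print(f"observed rate:  {p_obs:.3f}")
print(f"corrected rate: {corrected_rate(p_obs, se, sp):.3f}")
# (0.06 + 0.98 - 1) / (0.80 + 0.98 - 1) = 0.04 / 0.78 ≈ 0.051
```

Without the adjudicated subsample there is no way to estimate SE and SP, and hence no way to know how far the observed rate sits from the truth.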

Comments on A. J. Cook
- Key component is randomization at the patient or clinic level and use of the electronic health record for data capture (cluster randomization addresses different issues).
- Missing data, informative censoring, switching, measuring duration of exposure (repeat Rx, gaps); different answers depending upon the definition (see the sketch below).
- Validation of outcomes makes the pragmatic trial less simple.
- Only some outcomes (endpoints), populations, and questions are addressable before complexities of interpretation overwhelm.
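The dependence of measured exposure duration on the gap definition lends itself to a small worked example. A hedged sketch (the `exposure_days` helper and the fill data are hypothetical): prescription fills are merged into exposure episodes, and the same fills yield different total durations under different allowable-gap rules.

```python
def exposure_days(fills: list[tuple[int, int]], max_gap: int) -> int:
    """Total days covered by exposure episodes under a given gap rule.

    fills: (start_day, days_supplied) pairs; episodes are merged when the
    gap between one fill's end and the next fill's start is <= max_gap.
    """
    fills = sorted(fills)
    total = 0
    ep_start, ep_end = fills[0][0], fills[0][0] + fills[0][1]
    for start, supply in fills[1:]:
        if start - ep_end <= max_gap:        # bridge the gap: same episode
            ep_end = max(ep_end, start + supply)
        else:                                # gap too long: close the episode
            total += ep_end - ep_start
            ep_start, ep_end = start, start + supply
    return total + (ep_end - ep_start)

# Three 30-day fills, with refill delays of 10 and 40 days:
fills = [(0, 30), (40, 30), (110, 30)]
print(exposure_days(fills, max_gap=14))   # 100: only the first gap is bridged
print(exposure_days(fills, max_gap=60))   # 140: same data, longer "exposure"
```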

Comments on M. Gaffney
- PRECISION and EAGLE are not large simple trials; they are large difficult trials.
- Outcome adjudication, monitoring strategies.
- Non-inferiority poses significant challenges for pragmatic trials; there is generally no assay sensitivity.
- Margin selection based upon evidence vs. based upon "close enough," without being sure whether both are equally good or bad.

Other comments on NI studies
- Pre-specifying the margins: why, and what is the difference in these two situations?
- What treatment difference is detectable and credible, given the trade-off between bias and huge sample size?
- When pre-specification is not possible because there is no historical information, the width of the confidence interval makes sense, but two conclusions are compatible with it: both treatments the same and comparably effective vs. both the same but both ineffective (see the sketch below).
- What endpoints are eligible? Hard endpoints (yes); patient symptoms (no).
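A minimal sketch of the NI decision rule these bullets allude to (notation and counts are illustrative, not from the papers): non-inferiority is concluded when the upper bound of the two-sided 95% CI for the risk difference (test minus control) lies below a pre-specified margin M. The rule itself cannot distinguish "both comparably effective" from "both comparably ineffective"; that distinction requires assay sensitivity from outside the trial.

```python
from math import sqrt

def ni_conclusion(x_t, n_t, x_c, n_c, margin, z=1.96):
    """Risk-difference NI test: events/patients per arm vs. margin M."""
    p_t, p_c = x_t / n_t, x_c / n_c
    diff = p_t - p_c
    se = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    upper = diff + z * se                 # upper bound of two-sided 95% CI
    return diff, upper, upper < margin

# Invented counts: 120/2000 vs 110/2000 events, margin M = 0.02.
diff, upper, ni = ni_conclusion(120, 2000, 110, 2000, margin=0.02)
print(f"diff={diff:.4f}, upper bound={upper:.4f}, non-inferior={ni}")
```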

Other comments
- Are NI designs appropriate for claims data or EHR without independent all-case adjudication? Poor sensitivity and specificity drive the estimate toward the null; what does a null result mean? (See the sketch below.)
- Experience suggests that exposure (drugs) has better accuracy than diagnoses or procedures (outcomes) in claims databases.
- Duration of exposure depends upon algorithms for repeat prescriptions; different results depending upon definitions of gaps between repeated Rx.
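The "drive the estimate to the null" concern can be shown numerically. A hedged sketch (rates and SE/SP values invented for illustration): nondifferential outcome misclassification shrinks a true risk difference by the factor SE + SP - 1, which is exactly what makes a null NI result hard to interpret, since a truly inferior drug can appear to meet the margin.

```python
def observed_rate(p_true, se, sp):
    """Event rate seen in claims, given the outcome algorithm's SE and SP."""
    return se * p_true + (1 - sp) * (1 - p_true)

se, sp = 0.70, 0.95            # invented claims-algorithm performance
p_c, p_t = 0.05, 0.08          # invented true rates: test drug worse by 0.03
obs_c = observed_rate(p_c, se, sp)
obs_t = observed_rate(p_t, se, sp)
print(f"true difference:     {p_t - p_c:.4f}")      # 0.0300
print(f"observed difference: {obs_t - obs_c:.4f}")  # 0.0195 = 0.03 * (se + sp - 1)
```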

Can randomization overcome lack of blinding and personal choices after randomization?
- Use of observational methods to account for unmeasured confounding of assigned treatment, with time-to-event outcomes subject to censoring.
- Directed Acyclic Graphs to explore the confounding-censoring problem; diagnostics.
- Instrumental variables (see the sketch below).
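For the instrumental-variables bullet, a minimal sketch (my simplification, not the speaker's method): with randomized assignment as the instrument and non-adherence after randomization, the Wald estimator divides the intention-to-treat effect by the between-arm difference in treatment actually received.

```python
def wald_iv(y_assigned, y_control, d_assigned, d_control):
    """Wald IV estimator: ITT effect divided by the uptake difference.

    y_*: mean outcome by assigned arm; d_*: proportion actually treated.
    """
    return (y_assigned - y_control) / (d_assigned - d_control)

# Invented numbers: assignment moves treatment uptake from 10% to 80%
# and the mean outcome from 0.30 to 0.23 (ITT effect = -0.07).
print(f"{wald_iv(0.23, 0.30, 0.80, 0.10):.3f}")   # -0.07 / 0.70 = -0.100
```

The estimate is the effect per unit of treatment actually received, valid only under the usual IV assumptions (assignment affects the outcome solely through treatment received).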

Lessons learned from Mini-Sentinel and the Observational Medical Outcomes Partnership (OMOP)
- Distributed data models.
- Common data models.
- Limits of detectability of effect sizes of two or more competing agents: calibration, interpretation of p-values for non-randomized studies (see the sketch below).
- Not all outcomes and exposures can be dealt with in a similar manner.
- Know the limitations of your database. Is this possible in advance of conducting the study? It should be part of the intensive study planning, the protocol, and the prospective analysis plan.
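The calibration bullet corresponds roughly to OMOP's empirical-calibration idea. A simplified sketch (the negative-control estimates are invented, and a real implementation would also model each estimate's sampling error): negative-control exposure-outcome pairs with no true effect trace out an empirical null distribution, and a new estimate's p-value is computed against that null rather than the theoretical one.

```python
import statistics
from math import erf, sqrt

# Invented log rate-ratio estimates for negative controls (true effect = 0):
neg_control_log_rr = [0.05, 0.22, -0.10, 0.31, 0.15, 0.02, 0.18, 0.27]
mu = statistics.mean(neg_control_log_rr)   # systematic error (bias)
sd = statistics.stdev(neg_control_log_rr)  # spread of the empirical null

def calibrated_p(log_rr: float) -> float:
    """Two-sided p-value against the empirical null N(mu, sd^2)."""
    z = abs(log_rr - mu) / sd
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# A log RR of 0.40 looks large, but against the empirical null:
print(f"calibrated p: {calibrated_p(0.40):.3f}")   # ≈ 0.06, not "significant"
```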

An example of Medicare data use, but not an RCT

Some other views and opinions on CER using the learning health care system

Lessons Learned from OMOP and Mini-Sentinel about observational studies using health care claims data or EHR, but without randomization
- Lessons about the limitations of the databases, outcome capture, ascertainment, and missing data (confounders) are relevant to RCT use of the same data sources.
- Lessons about data models, and challenges for data (outcome standardization).

The Observational Medical Outcomes Partnership: many findings

Some ideas on what to evaluate about a given data source before committing to conducting a study: the focus is on observational studies, but it is also relevant to pragmatic RCTs

How do these presentations relate to pragmatic trials within a health care system?
- Two or more competing therapies on a formulary, never compared with each other.
- Randomize patients under the equipoise principle (see the sketch below). Do you need patient consent, or physician consent, if it is a health plan and there are no data (genuine "I or we don't know, but want to find out")?
- Collect electronic medical record data, including exposures and outcomes, and decide if any additional adjudication is needed.
- Analyze according to best practices, but with some prospective SAPs. Causal inference strategies?
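A toy sketch of the randomization step (all identifiers hypothetical): the same assignment routine applied to patient IDs gives patient-level randomization, and applied to clinic IDs gives cluster randomization, where everyone in a clinic receives the same arm.

```python
import random

def randomize(units: list[str], arms=("drug_A", "drug_B"), seed=42) -> dict:
    """Assign each unit (patient ID or clinic ID) to an arm at random."""
    rng = random.Random(seed)
    return {u: rng.choice(arms) for u in units}

patients = ["pt001", "pt002", "pt003", "pt004"]
clinics = ["clinic_North", "clinic_South"]
print(randomize(patients))   # patient-level randomization
print(randomize(clinics))    # cluster randomization at the clinic level
```

In practice a blocked or stratified scheme would be used to keep arms balanced; simple coin-flip assignment is shown here only to make the unit-of-randomization distinction concrete.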