Department of OUTCOMES RESEARCH

Clinical Research Design
Sources of Error · Types of Clinical Research · Randomized Trials
Daniel I. Sessler, M.D., Professor and Chair, Department of OUTCOMES RESEARCH, The Cleveland Clinic

Sources of Error
- There is no perfect study
- All are limited by practical and ethical considerations
- It is impossible to control all potential confounders
- Multiple studies required to prove a hypothesis
- Good design limits risk of false results
- Statistics at best partially compensate for systematic error
- Major types of error:
  – Selection bias
  – Measurement bias
  – Confounding
  – Reverse causation
  – Chance

Statistical Association

Selection Bias
- Non-random selection for inclusion / treatment, or selective loss
- Subtle forms of disease may be missed
- When treatment is non-random:
  – Newer treatments assigned to patients most likely to benefit
  – "Better" patients seek out the latest treatments
  – "Nice" patients may be given the preferred treatment
- Compliance may vary as a function of treatment
- Patients drop out for lack of efficacy or because of side effects
- Largely prevented by randomization

Confounding
- Association between two factors caused by a third factor
- For example: transfusions are associated with high mortality (see the simulation sketch below)
  – But larger, longer operations require more blood
  – The increased mortality is a consequence of the larger operations
- Another example: mortality is greater in Florida than Alaska
  – But average age is much higher in Florida
  – The increased mortality reflects age rather than the geography of Florida
- Largely prevented by randomization
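
A minimal simulation sketch of the transfusion example, with made-up probabilities: transfusion has no causal effect on mortality, but both transfusion and death are more common after large operations, so the crude comparison suggests harm while stratifying by operation size removes the association.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    large_op = rng.random(n) < 0.3                  # 30% of cases are large operations
    p_transfusion = np.where(large_op, 0.60, 0.10)  # large operations require more blood
    transfused = rng.random(n) < p_transfusion
    p_death = np.where(large_op, 0.08, 0.01)        # mortality driven by operation size only
    died = rng.random(n) < p_death

    def mortality(mask):
        return died[mask].mean()

    # Crude comparison: transfusion looks harmful
    print(f"Crude: transfused {mortality(transfused):.3f} vs not {mortality(~transfused):.3f}")
    # Stratified by operation size: the association largely disappears
    for label, stratum in [("Large ops", large_op), ("Small ops", ~large_op)]:
        print(f"{label}: transfused {mortality(stratum & transfused):.3f} "
              f"vs not {mortality(stratum & ~transfused):.3f}")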

Measurement Bias
- Quality of measurement varies non-randomly
- Quality of records generally poor
  – Not necessarily randomly so
- Patients given new treatments watched more closely
- Subjects with disease may better remember exposures
- When treatment is unblinded:
  – Benefit may be over-estimated
  – Complications may be under-estimated
- Largely prevented by blinding

Example of Measurement Bias
Reported parental history | Arthritis (%) | No arthritis (%)
Neither parent            | 27            | 50
One parent                | 58            | 42
Both parents              | 15            | 8
P = 0.003
From Schull & Cobb, J Chronic Dis, 1969

Reverse Causation
- Factor of interest causes or unmasks disease
- For example: morphine use is common in patients with gall bladder disease
  – But morphine worsens symptoms, which promotes diagnosis
  – The conclusion that morphine causes gall bladder disease is incorrect
- Another example: patients with cancer have frequent bacterial infections
  – However, cancer is immunosuppressive
  – The conclusion that bacteria cause cancer is incorrect
- Largely prevented by randomization

External Threats to Validity
[Diagram: population of interest → eligible subjects → subjects enrolled → conclusion]
- Internal validity: whether the conclusion is correct for the enrolled subjects; threatened by selection bias, measurement bias, confounding, and chance
- External validity: whether the conclusion generalizes from the enrolled subjects to the population of interest

Types of Clinical Research
- Observational
  – Case series: implicit historical control ("the pleural of anecdote is not data")
  – Single cohort (natural history)
  – Retrospective cohort
  – Case-control
- Retrospective versus prospective
  – Prospective data usually of higher quality
- Randomized clinical trial
  – Strongest design; gold standard
  – First major example: use of streptomycin for TB in 1948

Case-Control Studies
- Identify cases & matched controls
- Look back in time and compare on exposure
[Diagram: case and control groups traced backward in time to prior exposure]

Cohort Studies
- Identify exposed & matched unexposed patients
- Look forward in time and compare on disease
[Diagram: exposed and unexposed groups followed forward in time to disease]

Timing of Cohort Studies
[Diagram: timeline from initial exposures to disease onset or diagnosis, comparing prospective, ambidirectional, and retrospective cohort studies]

Randomized Clinical Trials (RCTs)
- A type of prospective cohort study
- Best protection against bias and confounding
  – Randomization reduces selection bias & confounding
  – Blinding reduces measurement error
  – Not subject to reverse causation
- RCTs often "correct" observational results
- Types: parallel group, cross-over, factorial, cluster

Parallel Group
- Randomize participants to treatment groups
[Diagram: enrollment criteria → intervention A / intervention B → outcome A / outcome B]

Cross-over Design
- Randomize individuals to sequential treatments
[Diagram: enrollment criteria → treatment A ± washout → treatment B, or treatment B ± washout → treatment A]

Pros & Cons of Cross-over Design
- Strategy
  – Sequential treatments in each participant
  – Patients act as their own controls
- Advantages
  – Paired statistical analysis markedly increases power (see the sketch below)
  – Good when the treatment effect is small relative to population variability
- Disadvantages
  – Assumes the underlying disease state is static
  – Assumes lack of carry-over effect
  – May require a treatment-free washout period
  – Evaluates markers rather than "hard" outcomes
  – Cannot be used for one-time treatments such as surgery
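
A minimal simulation sketch, with made-up numbers, of why the paired analysis available in a cross-over design increases power: the same measurements are analysed as two independent groups (as in a parallel-group trial) and as within-subject pairs.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 30
    subject_baseline = rng.normal(100, 20, n)   # large between-subject variability
    effect = 5                                  # small treatment effect
    on_a = subject_baseline + rng.normal(0, 5, n)
    on_b = subject_baseline + effect + rng.normal(0, 5, n)

    print("Unpaired t-test p =", stats.ttest_ind(on_a, on_b).pvalue)
    print("Paired   t-test p =", stats.ttest_rel(on_a, on_b).pvalue)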

Factorial Trials
- Simultaneously test 2 or more interventions
- Example (2 × 2): clonidine vs. placebo crossed with ASA vs. placebo

            | Clonidine            | Placebo
ASA         | Clonidine + ASA      | Placebo + ASA
Placebo     | Clonidine + placebo  | Placebo + placebo

Pros & Cons of Factorial Trials
- Advantages
  – More efficient than separate trials
  – Can test for interactions
- Disadvantages
  – Complexity, with potential for reduced compliance
  – Reduces the fraction of eligible subjects and slows enrollment
  – Rarely powered for interactions, yet interactions influence sample size requirements (analysis sketch below)
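
A sketch of the usual factorial analysis, on simulated data with hypothetical effect sizes and no true interaction: one logistic regression estimates both main effects and the interaction term in the same model. Detecting an interaction of realistic size would require a much larger sample than detecting the main effects.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 4000
    clonidine = rng.integers(0, 2, n)
    aspirin = rng.integers(0, 2, n)
    # Hypothetical truth: each drug lowers event risk slightly, no interaction
    logit = -2.0 - 0.3 * clonidine - 0.2 * aspirin
    p = 1 / (1 + np.exp(-logit))
    event = (rng.random(n) < p).astype(int)

    df = pd.DataFrame({"event": event, "clonidine": clonidine, "aspirin": aspirin})
    fit = smf.logit("event ~ clonidine * aspirin", data=df).fit(disp=False)
    print(fit.summary().tables[1])   # main effects plus the clonidine:aspirin interaction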

Factorial Outcome Example
[Figure: factorial outcome results from Apfel et al., NEJM 2004]

Subject Selection
- Tight criteria
  – Reduce variability and sample size
  – Exclude subjects at risk of treatment complications
  – Include subjects most likely to benefit
  – May restrict to advanced disease, compliant patients, etc.
  – Slow enrollment
  – "Best case" results: compliant, low-risk patients with ideal disease stage
- Loose criteria
  – Include more "real world" participants
  – Increase variability and sample size
  – Speed enrollment
  – Enhance generalizability

Randomization and Allocation
- The only reliable protection against selection bias and confounding
- Concealed allocation
  – Independent of investigators
  – Unpredictable
- Methods
  – Computer-controlled, random-block (see the sketch below)
  – Envelopes, web-accessed, or telephone allocation
- Stratification
  – Rarely necessary
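
A minimal sketch of computer-generated permuted-block randomization; the block size, arm labels, and function name are illustrative rather than those of any particular allocation system. In practice the schedule is held by an independent party (web or telephone service) so that allocation remains concealed and unpredictable to investigators.

    import random

    def permuted_block_schedule(n_subjects, block_size=4, arms=("A", "B"), seed=2024):
        rng = random.Random(seed)
        per_arm = block_size // len(arms)
        schedule = []
        while len(schedule) < n_subjects:
            block = list(arms) * per_arm   # balanced allocation within each block
            rng.shuffle(block)             # unpredictable order within the block
            schedule.extend(block)
        return schedule[:n_subjects]

    print(permuted_block_schedule(10))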

Blinding / Masking
- The only reliable prevention for measurement bias
- Essential for subjective responses
  – Use for objective responses whenever possible
- Careful design is required to maintain blinding
- Potential groups to blind: patients, providers, investigators (including data collectors & adjudicators)
- Maintain blinding throughout data analysis
  – Even data-entry errors can be non-random
  – Statisticians are not immune to bias!
- The placebo effect can be enormous

Placebo Effect
[Figure: Kaptchuk et al., PLoS ONE, 2010]

Selection of Outcomes
- Surrogate or intermediate outcomes
  – May not actually relate to the outcomes of interest (e.g., bone density for fractures; intraoperative hypotension for stroke)
  – Usually continuous, which implies a smaller sample size (see the sketch below)
  – Rarely powered for complications
- Major outcomes
  – Severe events (e.g., myocardial infarction, stroke)
  – Usually dichotomous, which implies a larger sample size
- Mortality
- Cost effectiveness / cost utility
- Quality of life
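
A rough sketch, using assumed effect sizes, of why continuous surrogate endpoints usually need far fewer subjects than dichotomous major outcomes (per-group n for 90% power at two-sided alpha 0.05):

    from statsmodels.stats.power import TTestIndPower, NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Continuous surrogate: detect a 0.3 SD difference between groups
    n_continuous = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.90)

    # Dichotomous major outcome: reduce the event rate from 6% to 4%
    h = proportion_effectsize(0.06, 0.04)
    n_dichotomous = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.90)

    print(f"Per group, continuous surrogate: ~{n_continuous:.0f}")
    print(f"Per group, dichotomous outcome:  ~{n_dichotomous:.0f}")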

Composite Outcomes
- Any of ≥2 component outcomes, for example:
  – Cardiac death, myocardial infarction, or non-fatal arrest
  – Wound infection, anastomotic leak, abscess, or sepsis
- Usually permit a smaller sample size
- The incidence of each component should be comparable
  – Otherwise the common outcome(s) dominate the composite
- The severity of each component should be comparable
  – Unreasonable to lump minor and major events
- Death is often included to prevent survivor bias
- Beware of heterogeneous results (scoring sketch below)
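
A minimal sketch, with hypothetical column names and toy data, of scoring a composite outcome as "any component event", with cardiac death included as a component to avoid survivor bias; component rates should still be reported separately so heterogeneous results remain visible.

    import pandas as pd

    df = pd.DataFrame({
        "cardiac_death":   [0, 0, 1, 0],
        "mi":              [0, 1, 0, 0],
        "nonfatal_arrest": [0, 0, 0, 0],
    })
    components = ["cardiac_death", "mi", "nonfatal_arrest"]
    df["composite"] = df[components].any(axis=1).astype(int)  # any component event
    print(df)
    print(df[components + ["composite"]].mean())              # per-component and composite rates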

Outcomes Approaches

Trial Management
- Case-report forms
  – Require careful design and specific field definitions
  – Every field should be completed: missing data cannot be assumed to be zero or "no event"
- Data management (a custom database is best)
  – Evaluate quality and completeness in real time
  – Range and statistical checks (see the sketch below)
  – Trace to source documents
- Independent monitoring team
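
A minimal sketch of real-time range checks on case-report-form data; the field names and limits are invented for illustration. Missing fields are flagged rather than silently treated as zero or "no event".

    import math

    FIELD_LIMITS = {"age_years": (18, 100), "sbp_mmhg": (50, 260), "hemoglobin_g_dl": (3, 22)}

    def check_record(record):
        """Return a list of data-quality problems for one case-report-form record."""
        problems = []
        for field, (low, high) in FIELD_LIMITS.items():
            value = record.get(field)
            if value is None or (isinstance(value, float) and math.isnan(value)):
                problems.append(f"{field}: missing")
            elif not (low <= value <= high):
                problems.append(f"{field}: {value} outside [{low}, {high}]")
        return problems

    # Example: out-of-range blood pressure and a missing hemoglobin are both flagged
    print(check_record({"age_years": 64, "sbp_mmhg": 300}))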

Multiple "Looks"
- Type 1 error = 1 − (1 − alpha)^k, where k is the number of evaluations
- The more "looks", the larger the overall alpha error (worked example below)
- Informal evaluations count
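
The slide's formula in code, assuming the k evaluations are independent (interim looks at accumulating data are correlated, so this is an approximation of the inflation):

    alpha = 0.05
    for k in (1, 2, 3, 5, 10, 20):
        overall = 1 - (1 - alpha) ** k
        print(f"{k:2d} looks at alpha = 0.05 -> overall type 1 error = {overall:.2f}")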

Stopping Rules
[Figure] Corresponds to p < 0.05 at each analysis

Interim Analyses & Stopping Rules
- Reasons trials are stopped early: ethics, money, regulatory issues, drug expiration, personnel, other opportunities
- Pre-defined interim analyses
  – Spend alpha and beta power
  – Avoid a "convenience sample"
  – Avoid "looking" between scheduled analyses
- Pre-defined stopping rules
  – Efficacy versus futility

Potential Problems
- Poor compliance (patients, clinicians)
- Drop-outs
- Crossovers
- Insufficient power
  – Greater-than-expected variability
  – Treatment effect smaller than anticipated

Fragile Results
- Consider two identical trials of a treatment for infarction: n = 200 versus n = 8,000
- Which result do you believe? Which is biologically plausible?
- What happens if you add two events to each treatment group? (illustrative recalculation below)
- Study A: p = 0.13; Study B: p = 0.02
[Table of trial size, treatment events, placebo events, relative risk, and p value not fully recoverable]
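
An illustrative recalculation with invented event counts (the slide's own table is not reproduced here): adding two events to the treatment arm shifts the small trial's p-value dramatically, while the large trial barely moves.

    from scipy.stats import fisher_exact

    def p_value(events_rx, n_rx, events_pbo, n_pbo):
        # Two-sided Fisher exact test on the 2x2 table of events vs non-events
        table = [[events_rx, n_rx - events_rx], [events_pbo, n_pbo - events_pbo]]
        _, p = fisher_exact(table)
        return p

    # Hypothetical trials with the same event rates (4% vs 14%)
    print("Small trial (n=200):              p =", round(p_value(4, 100, 14, 100), 3))
    print("Small trial + 2 treatment events: p =", round(p_value(6, 100, 14, 100), 3))
    print("Large trial (n=8,000):            p =", round(p_value(160, 4000, 560, 4000), 8))
    print("Large trial + 2 treatment events: p =", round(p_value(162, 4000, 560, 4000), 8))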

Four versus Five Rx for CML

Problem Solved?

How About Now?

Small Studies Often Wrong!

Multi-center Trials
- Advantages
  – Necessary when large enrollment is required
  – Diverse populations increase generalizability of results
  – Problems in individual center(s) are balanced by other centers
  – Often required by the Food and Drug Administration
- Disadvantages
  – Difficult to enforce the protocol: subtle protocol differences among centers are inevitable
  – Expensive!
- "Multi-center" does not necessarily mean "better"

Unsupported Conclusions
- Beta error: insufficient detection power confused with a negative result
- Conclusions that flat-out contradict the presented results
  – "Wishful thinking" is evidence of bias
- Inappropriate generalization (internal vs. external validity)
  – To healthier or sicker patients than those studied
  – To alternative care environments
  – Efficacy versus effectiveness
- Failure to acknowledge substantial limitations
- Statistical significance ≠ clinical importance, and the reverse!

Conclusion: Good Clinical Trials…
- Test a specific a priori hypothesis
- Evaluate clinically important outcomes
- Are well designed, with
  – An a priori, adequate sample size
  – Defined stopping rules
- Are randomized and blinded when possible
- Use appropriate statistical analysis
- Make conclusions that follow from the data
  – And acknowledge substantive limitations

Meta-analysis
- A "super analysis" of multiple similar studies (pooling sketch below)
- Often helpful when there are many marginally powered studies
- Many serious limitations
  – Search and selection bias
  – Publication bias (authors, editors, corporate sponsors)
  – Heterogeneity of results
- Good generalizability
[Example: Rajagopalan, Anesthesiology 2008]
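
A minimal sketch, with made-up study estimates, of the inverse-variance fixed-effect pooling at the core of most meta-analyses; real meta-analyses also assess heterogeneity and often use random-effects models.

    import math

    # Hypothetical log odds ratios and standard errors from five small trials
    estimates = [(-0.35, 0.30), (-0.10, 0.25), (-0.45, 0.40), (0.05, 0.35), (-0.25, 0.28)]
    weights = [1 / se ** 2 for _, se in estimates]            # inverse-variance weights
    pooled = sum(w * est for (est, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    print(f"Pooled log OR {pooled:.2f} (SE {pooled_se:.2f}), OR {math.exp(pooled):.2f}")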

Department of OUTCOMES RESEARCH

Design Strategies
- Life is short; do things that matter!
  – Is the question important? Is it worth years of your life?
- Concise hypothesis testing of important outcomes
  – Usually only one or two hypotheses per study
  – Beware of studies without a specified hypothesis
- A priori design
  – Planned comparisons with an identified primary outcome
  – Intention-to-treat design
- General statistical approach (see the sketch below)
  – Superiority, equivalence, or non-inferiority
  – Two-tailed versus one-tailed
- It's not brain surgery, but…
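
A minimal sketch, with assumed event counts and margin, of the usual non-inferiority comparison: the upper limit of the two-sided 95% confidence interval for the risk difference (new minus standard) must lie below the pre-specified margin.

    import math
    from scipy.stats import norm

    events_new, n_new = 52, 500        # hypothetical new-treatment arm
    events_std, n_std = 50, 500        # hypothetical standard-treatment arm
    margin = 0.05                      # pre-specified non-inferiority margin

    p_new, p_std = events_new / n_new, events_std / n_std
    diff = p_new - p_std
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    upper = diff + norm.ppf(0.975) * se   # upper limit of the two-sided 95% CI

    print(f"Risk difference {diff:.3f}, upper 95% limit {upper:.3f}")
    print("Non-inferior" if upper < margin else "Non-inferiority not shown")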