
The Use and Evaluation of Experiments in Health Care Delivery
Amanda Kowalski, Associate Professor of Economics, Department of Economics, Yale University
September 26, 2015

What are the questions that YOU want to answer?

How can I help you to answer them?

A Call to Action 47/6223/720.full

Heard in the Trenches: Barriers to Randomization

“Why randomize?”
- Randomization is a way to mitigate the influence of confounding factors.
- If you just compare a treated group to a non-treated group, there could be other factors that produce different outcomes.
- If you just compare the treated group after the intervention to the treated group before it, there could be other factors that changed over time.
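To make the first problem concrete, here is a minimal simulation, not from the talk (Python; the "sickness" confounder and all parameters are made up): sicker patients select into treatment, so the naive treated-vs-untreated comparison is biased, while a randomized comparison recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 1.0

# Hypothetical confounder: sicker people are more likely to seek treatment
# and have worse outcomes regardless of treatment.
sickness = rng.normal(size=n)

# Observational data: patients choose treatment, so choice correlates with sickness.
treated_obs = (sickness + rng.normal(size=n)) > 0
outcome_obs = true_effect * treated_obs - 2.0 * sickness + rng.normal(size=n)
naive = outcome_obs[treated_obs].mean() - outcome_obs[~treated_obs].mean()

# Experimental data: a coin flip assigns treatment, independent of sickness.
treated_rct = rng.random(n) < 0.5
outcome_rct = true_effect * treated_rct - 2.0 * sickness + rng.normal(size=n)
rct = outcome_rct[treated_rct].mean() - outcome_rct[~treated_rct].mean()

print(f"true effect:          {true_effect:+.2f}")
print(f"naive comparison:     {naive:+.2f}")  # biased: confounded by sickness
print(f"randomized estimate:  {rct:+.2f}")    # close to the true effect
```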

“Can’t you just analyze data collected after the intervention?”
- It IS important to examine existing administrative data.
- But it is hard to learn about causality if you only have data on the treatment group after the intervention.

“Is randomization fair?”
- We only randomize if we don’t know whether the intervention will work (“equipoise”).
- Randomization is standard in clinical trials for medical interventions.
- When resources are scarce, randomization can be a fair way to allocate them.

“Is randomization fair?” Example: the Oregon Health Insurance Experiment
- Oregon had some funds to expand Medicaid coverage, but not enough to expand coverage to all interested parties.
- It held a lottery in 2008.
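A lottery like this is also simple to run. A minimal sketch (the applicant count and number of slots below are hypothetical, not Oregon’s actual figures):

```python
import numpy as np

rng = np.random.default_rng(2008)

# Hypothetical scale: 90,000 people on the waiting list, funding for 10,000 slots.
waiting_list = np.arange(90_000)
n_slots = 10_000

# Every applicant gets the same chance at a scarce slot, and the lottery
# losers form a valid control group for evaluating the coverage expansion.
selected = rng.choice(waiting_list, size=n_slots, replace=False)
control = np.setdiff1d(waiting_list, selected)
print(f"selected {selected.size:,} of {waiting_list.size:,} applicants")
```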

“Is randomization fair?” Example: the Oregon Health Insurance Experiment

Key findings:
- Increased health care utilization
- Emergency room utilization increased
- Decrease in depression
- No changes to physical health
- Reduced financial strain
- No discernible impact on labor market outcomes

“But some large-scale questions can’t be studied with randomization!”
- For example, some initiatives, like bundled payments, might have larger impacts if they are implemented more broadly.
- But one could also randomize the fraction of the population affected across different sites (see the sketch below).
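One way to implement that idea is a randomized saturation design: each site is randomly assigned the share of its population that receives the intervention. A minimal sketch (the site names and candidate fractions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

sites = [f"site_{i}" for i in range(12)]                  # hypothetical sites
fractions = rng.choice([0.0, 0.25, 0.50, 1.0], size=len(sites))

# Comparing outcomes across saturations reveals effects that depend on how
# broadly an initiative (e.g., bundled payments) is rolled out.
for site, f in zip(sites, fractions):
    print(f"{site}: treat {f:.0%} of the population")
```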

“I’m too busy implementing this initiative to think about anything else!”
- Randomization can be a seamless part of implementation.
- After implementation is too late.

“I’m afraid to find out that what I have done does not work!”
- The health care industry is full of altruistic people who want to improve quality, increase access, and decrease cost.
- If your intervention does not work, you can try something else next.
- If your intervention does work, evidence indicating so can be useful to others, broadening the impact of your work.

“Will the implementation be costly?”
- Clinical trials are often costly:
  - Subject recruitment
  - Informed consent
- Randomized experiments in health care delivery avoid some of these costs:
  - Subjects are already in the system
  - Consent waived
- Remaining costs:
  - Designing the experiment
  - Implementing randomization
  - Collecting data (could focus on existing data)
  - Analyzing data

“How will results be disseminated?”
- Results will be published regardless of outcome.
- The institutional partner can opt for anonymity before publication.

“When is a good time to get started?” Before implementing a new intervention:
- Baseline data can be collected
  - Enriches the comparison of treatment to control
  - Ensures that outcomes can be measured
- The program is probably not rolled out to everyone at the same time anyway

“Which interventions are best studied with randomization?”
- Potential for large, detectable outcomes (but outcomes are not known in advance)
- Large potential number of subjects: increases statistical power (see the sketch below)
- Interventions that would be implemented anyway: increases real-world applicability
- Successful implementation paves the way for future randomization of other initiatives
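On statistical power: in a two-arm trial with a 50/50 split, the smallest effect you can reliably detect shrinks with the square root of the sample size. A back-of-the-envelope calculation using the standard power formula (the unit-standard-deviation outcome and 50/50 split are assumptions):

```python
from scipy.stats import norm

def mde(n, sigma=1.0, alpha=0.05, power=0.80):
    """Minimum detectable effect for a two-arm trial with a 50/50 split."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * sigma * (4 / n) ** 0.5  # se of difference in means = 2*sigma/sqrt(n)

for n in (1_000, 10_000, 100_000):
    print(f"n = {n:>7,}: detectable effect ~ {mde(n):.3f} standard deviations")
```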

“What are the advantages of partnering with an economist?”
- Statistical techniques more subtle than comparing the treatment group to the control group (see the sketch below)
- Dissemination: potential to reach a different audience
- Results prepared by an independent entity are potentially more impartial
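One example of a “more subtle” technique is covariate adjustment: even with clean randomization, controlling for baseline characteristics soaks up outcome noise and tightens the confidence interval on the treatment effect. A minimal sketch on simulated data (variable names and parameters are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5_000

treated = (rng.random(n) < 0.5).astype(float)
baseline = rng.normal(size=n)                  # e.g., prior-year utilization
outcome = 0.3 * treated + 1.0 * baseline + rng.normal(size=n)

# Raw difference in means vs. regression adjusting for the baseline covariate:
raw = sm.OLS(outcome, sm.add_constant(treated)).fit()
adj = sm.OLS(outcome, sm.add_constant(np.column_stack([treated, baseline]))).fit()
print(f"raw estimate:      {raw.params[1]:.3f} (se {raw.bse[1]:.3f})")
print(f"adjusted estimate: {adj.params[1]:.3f} (se {adj.bse[1]:.3f})")
```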

“I’m on board with randomization. What’s next?”
- What is the problem to be addressed?
- What administrative data are available, and how can they be accessed?
- Who will be the implementing partners?
  - A C-level advocate to push the project through
  - An administrative contact for day-to-day work

Let’s talk further!

- Slides on my website; video will be posted
- …Tell your friends to examine the slides, watch this presentation, propose an idea, and contact me!

What are the questions that YOU want to answer?