PHSSR IG CyberSeminar Introductory Remarks
Bryan Dowd
Division of Health Policy and Management, School of Public Health, University of Minnesota

Causation versus Association Who Cares? The purpose of public health systems and services research is to examine the impact of the organization, financing, and delivery of public health services at the local, state, and national levels on population health. By “impact,” I assume we mean the causal effect on population health of changing one of those factors.

Causation versus Association Linking “impact” and “causal effect” to change highlights a common distinction between causation and association. “Association” (the weaker term) often refers to relationships among variables whose observed values come from a single observation of each subject (cross-sectional data). “Causation” often refers to relationships among variables whose observed values come from multiple observations of each subject (time-series data).

Causation versus Association But many analytic techniques are designed to draw causal inferences from cross-sectional data. And the fact that two variables change their values over time is no guarantee that the change in one variable caused the values of the other variable to change. Much of our empirical research attempts to distinguish causal relationships from spurious relationships.

But from a practical perspective … In public health, there often comes a time when we must act: choosing a particular course of action or the status quo. Examples:
1. Should we impose a quarantine?
2. Should we inspect restaurants once a month or once a decade?
3. Should we inoculate the population against a particular disease?

Association = status quo? The practical and empirical question is, “Do the data support taking one specific course of action versus an alternative?” In that context, saying the policy variable (that we control) and the outcome variable (that we are trying to influence) are merely associated, but not causally related, is equivalent to answering, “No. We should not take a particular course of action.” So “association” often is synonymous with “stay the course” or “maintain the status quo.”

But that’s illogical! “Staying the course” when the data do not support causal links between the policy variable and the outcome is illogical. If we can’t establish a causal relationship between the policy variable and the outcome of interest, then we have no way of knowing whether “staying the course” will continue to be “associated with” the same value of the outcome variable that it is now.

The bottom line … We may speak of “association,” but we always act as though we have drawn valid causal inferences, even when we choose not to change anything. So the most important question about causal inferences is not how to pretend we don’t draw them, but how to make them as reliable as possible so that we make good decisions.

The Basic Research Question

X → Y

What would happen to Y if we were to change X by one unit (sometimes adding, “… holding the effect of other variables constant”)? X could be continuous (e.g., income) or binary (e.g., treatment versus control group). This quantity sometimes is called the “marginal effect” of X on Y, which we could denote β.

Challenges: Omitted Variable Bias

Enrollment in a smoking cessation program → Health outcome
Family health history (unobserved) → both enrollment and the health outcome

Also called: omitted variable, omitted confounder, spurious correlation.
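The confounding story above can be illustrated with a small simulation. The coefficients and variable names here are illustrative, not from any real dataset; the point is only that omitting the confounder biases the estimated program effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved confounder: family health history raises both the
# chance of enrolling and the health outcome.
family_history = rng.normal(size=n)
enroll = (0.8 * family_history + rng.normal(size=n) > 0).astype(float)
outcome = 1.0 * enroll + 1.5 * family_history + rng.normal(size=n)

def ols(y, *cols):
    # Least-squares fit; returns [intercept, slope1, slope2, ...]
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive_beta = ols(outcome, enroll)[1]                     # omits the confounder
adjusted_beta = ols(outcome, enroll, family_history)[1]  # includes it
```

With the confounder omitted, the enrollment coefficient absorbs part of family history’s effect and lands far above the true value of 1.0; conditioning on family history approximately recovers it.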

Challenges: Mediating Variables

Education → Income → Health outcome
Education → Health outcome (directly)

Controlling for income in a regression of health outcomes on education means that β is the partial (direct) effect of education on health outcomes, not the full effect. Which one do you want to estimate?
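A quick simulation (with made-up coefficients) of the distinction. Because income mediates part of education’s effect, controlling for it changes which quantity β estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

edu = rng.normal(size=n)
income = 0.7 * edu + rng.normal(size=n)                # mediator
health = 0.4 * edu + 0.5 * income + rng.normal(size=n)

def ols(y, *cols):
    # Least-squares fit; returns [intercept, slope1, slope2, ...]
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total_effect = ols(health, edu)[1]            # direct + indirect: 0.4 + 0.5*0.7
partial_effect = ols(health, edu, income)[1]  # direct effect only: 0.4
```

Both answers are “right”; they estimate different quantities. The policy question determines whether the full effect (including the income channel) or the direct effect is the one you want.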

Challenges: Reverse Causality

Health insurance → (β1) → Health status
Health status → (β2) → Health insurance

We hypothesize that having health insurance affects health status (β1), but the relationship we observe could be due, at least in part, to the reverse effect (β2), e.g., medical underwriting.

What Methodologies? What methodologies can be used to assess causal relationships in HSR and PHSSR? What forms of manipulation of the policy variables of interest can produce reliable causal inference? This is the area of great disagreement. When and why did that disagreement occur?

A Brief History

A timeline of methods for causal inference:
- Regression & correlation — experimental data: Galton, Pearson, Fisher; observational data: Wright
- The “big split”: randomized trials versus structural equation modeling
- Sample selection models
- Propensity scores
- DAGs
- Natural experiments
- Instrumental variables
- Multivariate regression & partial correlation
- Panel data (Granger causality, etc.)

A Brief History

Ronald Fisher: randomization. We (the analysts) must manipulate the policy variables. A research design solution.
Philip Wright: instrumental variable estimation. Other types of manipulation can produce valid causal inference. A modeling solution.
Many social scientists still are reluctant to use the word “causal” to describe their causal models.

Today Today we have a broad menu of methods to choose from, but residual resistance to using approaches other than randomized trials. Some estimation approaches: 1. Randomization 2. Instrumental variables 3. Natural experiments 4. Sample selection models

Example

Health department programs (“interventions”) at time t → Population health characteristics or outcomes
Unobserved confounders: past health problems (t−1) and community risk factors, which affect both the programs adopted and the outcomes.

One Solution: Randomization

Randomization → Intervention → Outcome
(unobserved factors v affect the intervention; unobserved factors u affect the outcome)

Randomly assign health departments to interventions. Often not practical, ethical, or cost-effective.
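A sketch of why randomization works (illustrative numbers): when the intervention is assigned by coin flip, it is independent of the unobserved confounder, so even an analysis that omits the confounder recovers the causal effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

risk = rng.normal(size=n)                          # unobserved community risk (u)
treat = rng.integers(0, 2, size=n).astype(float)   # randomly assigned intervention
outcome = 1.0 * treat + 1.5 * risk + rng.normal(size=n)

# Randomization makes treat independent of risk, so a simple
# difference in means is an unbiased estimate of the causal effect.
effect = outcome[treat == 1].mean() - outcome[treat == 0].mean()
```

Compare this with the omitted-variable simulation earlier: the same unobserved confounder is present, but randomization breaks its link to the intervention, so leaving it out no longer biases the estimate.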

Another Solution: Instrumental Variables and Natural Experiments

External event → Intervention → Outcome
(unobserved factors v affect the intervention; unobserved factors u affect the outcome)

Some event external (“exogenous”) to the health department (e.g., legislation, “encouragement”) that, like randomization, results in the intervention being adopted by some departments but not others, but has no direct effect on the outcome.
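A minimal instrumental-variables sketch (simulated data, made-up coefficients): the external event z shifts intervention uptake but has no direct path to the outcome, so the ratio Cov(z, y)/Cov(z, x) recovers β even though ordinary least squares is biased.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

u = rng.normal(size=n)                        # unobserved confounder
z = rng.normal(size=n)                        # external ("exogenous") event
x = 0.6 * z + 0.8 * u + rng.normal(size=n)    # intervention uptake
y = 1.0 * x + 1.0 * u + rng.normal(size=n)    # outcome; true beta = 1.0

naive_beta = np.cov(x, y)[0, 1] / np.var(x)        # biased: x correlated with u
iv_beta = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]  # Wald/IV estimator
```

The IV estimate is valid only under the two assumptions built into the simulation: z moves x (relevance) and z affects y only through x (exclusion). In applications those assumptions are the whole argument.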

Another Solution: Sample Selection Models

External event → Intervention → Outcome
(unobserved factors v and u, correlated with correlation ρ)

Incorporate the correlation (ρ) of the unobserved variables (v and u) into the estimation of the causal parameter β. The data requirements are the same for all of these methods; estimation approaches vary for different types of dependent variables.
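One simple member of this family is a control-function estimator (a sketch with simulated data, not the full Heckman two-step): the first-stage residual stands in for v, so including it in the outcome equation absorbs the part of u that is correlated with v (the ρ term).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
rho = 0.6

z = rng.normal(size=n)                                   # external event
v = rng.normal(size=n)                                   # unobservable in uptake equation
u = rho * v + np.sqrt(1 - rho**2) * rng.normal(size=n)   # corr(u, v) = rho
x = 0.5 * z + v                                          # intervention
y = 1.0 * x + u                                          # outcome; OLS biased since rho != 0

def ols(y, *cols):
    # Least-squares fit; returns [intercept, slope1, slope2, ...]
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive_beta = ols(y, x)[1]        # absorbs the rho correlation; biased
a, b = ols(x, z)                 # first stage: intervention on external event
v_hat = x - a - b * z            # estimated v
cf_beta = ols(y, x, v_hat)[1]    # control function: approximately 1.0
```

This uses the same data as the instrumental-variables setup (consistent with the slide’s point that the data requirements are the same); what differs is that the correlation of the unobservables is modeled explicitly rather than differenced away.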

Two Applications The relationship between local public health spending and measures of public health outcomes. The policy question: What is the effect of changing the level of local public health spending on public health outcomes? Both authors recognize that local public health departments were not randomized to different levels of spending.