
Treatment Evaluation

Identification Graduate and professional economics is mainly concerned with identification in empirical work: understanding what causal relationship lies behind empirical results.

Selection Bias Example 1: Do hospitals make people healthier?

Selection Bias National Health Interview Survey (NHIS) “During the past 12 months, was the respondent a patient in a hospital overnight?” “Would you say your health in general is excellent, very good, good, fair, poor?” (1 is excellent; 5 is poor)

Selection Bias Going to the hospital makes people sicker? It's not impossible: hospitals are full of other sick people who might infect us, and of dangerous machines and chemicals that might hurt us.

Selection Bias People who go to the hospital are probably less healthy to begin with. Even after hospitalization, people who have sought medical care are not, on average, as healthy as those who were never hospitalized, though they may well be better off than they would have been without treatment.

Selection Bias Example 2: Does college education increase wages?

Selection Bias College graduates earn 84% more than high school graduates.

Selection Bias Selection into college: college students tend to have higher ability, be smarter, and work harder. College graduates would have earned more even without a college education. A simple comparison cannot identify the causal impact of college education on wages.

Solution 1: Randomization Random assignment makes treatment status independent of potential outcomes; it eliminates selection bias and reveals the true treatment effect. Treatment effect: compare post-treatment outcomes between those who receive the treatment and those who do not.
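A minimal simulation sketch of this logic (the data-generating process and all numbers are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: latent health drives both the decision to seek
# treatment and the outcome; the true treatment effect is +1.
health = rng.normal(0, 1, n)
true_effect = 1.0

# Self-selection: the sick are more likely to seek treatment.
self_selected = rng.random(n) < 1 / (1 + np.exp(health))
y_self = health + true_effect * self_selected + rng.normal(0, 1, n)
naive = y_self[self_selected].mean() - y_self[~self_selected].mean()

# Random assignment: treatment is independent of health.
randomized = rng.random(n) < 0.5
y_rand = health + true_effect * randomized + rng.normal(0, 1, n)
rct = y_rand[randomized].mean() - y_rand[~randomized].mean()

print(f"naive (self-selected) estimate: {naive:+.2f}")  # biased below +1
print(f"randomized estimate:            {rct:+.2f}")    # close to +1
```

The naive comparison mixes the treatment effect with the pre-existing health gap between the two groups; randomization removes that gap by construction.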

Solution 1: Randomization Example 1: hormone replacement therapy (HRT) Recommended for middle-aged women to reduce menopausal symptoms.

Solution 1: Randomization Nurses Health Study (non-experimental survey of nurses): better health among HRT users. Randomized trial: few benefits and serious side effects (see, e.g., the Women's Health Initiative [WHI]; Hsia et al., 2006).

Solution 1: Randomization Example 2: government-subsidized training programs. These provide a combination of classroom instruction and on-the-job training for groups of disadvantaged workers such as the long-term unemployed, drug addicts, and ex-offenders. Aim: increase employment and earnings.

Solution 1: Randomization Non-experimental studies: trainees earn less than comparison groups (see, e.g., Ashenfelter, 1978; Ashenfelter and Card, 1985; LaLonde, 1995). Randomized evaluations of training programs mostly find positive effects (see, e.g., LaLonde, 1986; Orr et al., 1996).

Solution 1: Randomization Problems of randomization: noncompliance (the treatment is randomly offered, but some people decline to take part); high costs; small sample sizes.

Solution 2: Difference-in-difference Applicable when panel data are available.

Solution 2: Difference-in-difference New problem: a common time trend. Compare the change in outcomes between the treatment group and the control group; the impact is the difference in the two changes: Impact = (Y_t1 - Y_t0) - (Y_c1 - Y_c0), where t and c index the treatment and control groups and 1 and 0 index the post- and pre-treatment periods.
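A sketch of the computation in Python (simulated data; the group gap, time trend, and true effect of 2.0 are assumptions for illustration). The interaction coefficient in a regression of the outcome on group, period, and their product reproduces the formula above:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: two groups observed before and after a program.
rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # group indicator
    "post": rng.integers(0, 2, n),      # period indicator
})
# Outcome: group gap + common time trend + true effect of 2.0 + noise
df["y"] = (1.5 * df.treated + 0.8 * df.post
           + 2.0 * df.treated * df.post + rng.normal(0, 1, n))

# The coefficient on treated:post is the difference-in-difference estimate.
did = smf.ols("y ~ treated * post", data=df).fit()
print(did.params["treated:post"])  # close to 2.0

# Equivalent 2x2 computation: (Y_t1 - Y_t0) - (Y_c1 - Y_c0)
m = df.groupby(["treated", "post"])["y"].mean()
print((m[1, 1] - m[1, 0]) - (m[0, 1] - m[0, 0]))
```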


Solution 2: Difference-in-difference Effect of the program using only pre- and post-intervention data from the treatment group (ignoring the general time trend).

Solution 2: Difference-in-difference Effect of the program using only the treatment-control comparison post-intervention (ignoring pre-existing differences between the T and C groups).

Solution 2: Difference-in-difference Identifying assumption: whatever happened to the control group over time is what would have happened to the treatment group in the absence of the program. Effect of the program: the difference-in-difference, taking into account both pre-existing differences between T and C and the general time trend.

Solution 2: Difference-in-difference Example: "Schooling and Labor Market Consequences of School Construction in Indonesia: Evidence from an Unusual Policy Experiment," Esther Duflo (MIT), American Economic Review, September 2001.

Solution 2: Difference-in-difference Causal chain of interest: does school infrastructure improve educational achievement, and does educational achievement in turn raise salary levels?

Solution 2: Difference-in-difference 1973-1978: the Indonesian government built 61,000 schools, equivalent to one school per 500 children aged 5 to 14. The enrollment rate increased from 69% to 85% between 1973 and 1978. The number of schools built in each region depended on the number of children out of school in that region in 1972, before the start of the program.

Solution 2: Difference-in-difference Two sources of variation in the intensity of the program for a given individual. By region: simplify program intensity to high or low. By age: a young cohort of children who benefited and an older cohort of children who did not.

Solution 2: Difference-in-difference Years of schooling by intensity of the building program (high vs. low region) for the young cohort (aged 2-6 in 1974) and the older cohort (aged 12-17 in 1974). The difference-in-difference estimate is 0.12 (s.e. 0.089).
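A sketch of the arithmetic behind such a 2x2 table (the cell means below are hypothetical placeholders, not Duflo's published figures):

```python
import pandas as pd

# Illustrative mean years of schooling by cohort and region intensity.
means = pd.DataFrame({"High": [8.5, 8.0], "Low": [9.8, 9.4]},
                     index=["young", "old"])

# DD: (young High - young Low) - (old High - old Low)
dd = ((means.loc["young", "High"] - means.loc["young", "Low"])
      - (means.loc["old", "High"] - means.loc["old", "Low"]))
print(f"DD = {dd:.2f}")  # extra schooling gained by the exposed cohort
```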

Solution 2: Difference-in-difference Fundamental assumption: trends (slopes) are the same in the treatment and control groups (sometimes true, sometimes not).

[Figure: outcome over time for the treatment and control groups; when the groups' trends differ, the estimated average treatment effect diverges from the true average treatment effect.]

Solution 2: Difference-in-difference Verifying this assumption and estimating the treatment effect requires a minimum of three points in time (cohort age plays the role of time in the example), two of them pre-intervention.

[Figure: outcome for the treatment and control groups at three points in time; the first and second observations are pre-intervention and pin down the common trend, the third identifies the average treatment effect.]
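One way to use the extra pre-intervention observation is a placebo difference-in-difference estimated on the two pre-periods alone; a sketch with simulated data (all parameters hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical three-period panel: periods 0 and 1 are pre-intervention,
# period 2 is post. The true effect (2.0) appears only in period 2.
rng = np.random.default_rng(2)
rows = []
for g in (0, 1):              # 0 = control, 1 = treatment
    for t in (0, 1, 2):
        effect = 2.0 if (g == 1 and t == 2) else 0.0
        y = 1.5 * g + 0.8 * t + effect + rng.normal(0, 1, 2_000)
        rows.append(pd.DataFrame({"treated": g, "period": t, "y": y}))
df = pd.concat(rows, ignore_index=True)

# Placebo DD on the pre-periods: should be ~0 if trends are common.
pre = df[df.period <= 1]
placebo = smf.ols("y ~ treated * period", data=pre).fit()
print(placebo.params["treated:period"])  # close to 0: no differential pre-trend
```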

Solution 2: Difference-in-difference Placebo check using the intensity of the building program and two cohorts that were both too old to benefit (aged 12-17 and 18-24 in 1974): the difference-in-difference estimate is close to zero and statistically insignificant (s.e. 0.098).

Solution 3: Matching Used when panel data are NOT available. Controls: non-participants with the same characteristics as participants; the matches are selected on the basis of similarity in observed characteristics.

Solution 3: Matching Instead of requiring that the matched control for each participant have exactly the same value of X, the same result can be achieved by matching on the probability of participation (the propensity score).

Solution 3: Matching For each participant, find a sample of non-participants with similar propensity scores (probability of treatment) and compare the outcomes.
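A minimal propensity-score matching sketch (simulated data with selection on observables; the true effect of 2.0 and the score model are assumptions for illustration), using scikit-learn's logistic regression and nearest-neighbor search:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical data: treatment depends only on observed X.
rng = np.random.default_rng(3)
n = 10_000
X = rng.normal(0, 1, (n, 2))
p_treat = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
d = rng.random(n) < p_treat
y = X[:, 0] + 2.0 * d + rng.normal(0, 1, n)  # true effect = 2.0

# Step 1: estimate the propensity score P(D=1 | X).
ps = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]

# Step 2: for each treated unit, find the nearest control by score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~d].reshape(-1, 1))
_, idx = nn.kneighbors(ps[d].reshape(-1, 1))

# Step 3: average treated-minus-matched-control outcome (ATT).
att = (y[d] - y[~d][idx.ravel()]).mean()
print(f"matching estimate of ATT: {att:.2f}")  # close to 2.0
```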

Solution 3: Matching Common support: compare only participants and non-participants whose propensity scores overlap.
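Continuing the sketch above (reusing the `ps` and `d` arrays), a simple common-support check trims units whose scores fall outside the overlap of the two distributions:

```python
# Keep only units inside the overlap of the treated and control
# propensity-score ranges.
lo = max(ps[d].min(), ps[~d].min())
hi = min(ps[d].max(), ps[~d].max())
on_support = (ps >= lo) & (ps <= hi)
print(f"dropped off common support: {(~on_support).sum()} of {len(ps)}")
```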

Solution 3: Matching Key assumption: there is no selection bias based on unobserved characteristics (selection on observables only).