
AFRICA IMPACT EVALUATION INITIATIVE, AFTRL
Africa Program for Education Impact Evaluation
Impact Evaluation Methods: Difference in Difference & Matching
David Evans, Impact Evaluation Cluster, AFTRL
Slides by Paul J. Gertler & Sebastian Martinez

Measuring Impact
► Randomized Experiments
► Quasi-experiments
  • Randomized Promotion – Instrumental Variables
  • Regression Discontinuity
  • Double differences (Diff-in-diff)
  • Matching

Case 5: Diff-in-diff
► Compare the change in outcomes between treatment and non-treatment groups
  • Impact is the difference in the change in outcomes
► Impact = (Y_T^1 - Y_T^0) - (Y_C^1 - Y_C^0), where superscripts 0/1 denote before/after the intervention and subscripts T/C denote treatment/control
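To make the arithmetic concrete, here is a minimal Python sketch of the double difference computed from the four group means. The numbers are hypothetical, not figures from the case study.

```python
# Difference-in-differences from four group means.
# All values below are hypothetical, for illustration only.

y_treat_before, y_treat_after = 60.0, 74.0   # treatment group outcome, pre/post
y_ctrl_before, y_ctrl_after = 58.0, 65.0     # control group outcome, pre/post

change_treat = y_treat_after - y_treat_before  # (Y_T^1 - Y_T^0) = 14.0
change_ctrl = y_ctrl_after - y_ctrl_before     # (Y_C^1 - Y_C^0) = 7.0

impact = change_treat - change_ctrl            # double difference = 7.0
print(f"Diff-in-diff impact estimate: {impact}")
```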

[Figure: outcome over time for treatment and control groups, showing the average treatment effect]

[Figure: outcome over time for treatment and control groups, showing the effect measured without a pre-treatment measurement]

[Figure: outcome over time for treatment and control groups, contrasting the estimated average treatment effect with the true average treatment effect]

Diff-in-diff
► What is the key difference between these two cases?
► Fundamental assumption: trends (slopes) are the same in the treatment and control groups (sometimes true, sometimes not)
► Need a minimum of three points in time, two of them pre-intervention, to verify this and estimate the treatment effect (see the sketch below)
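Two pre-intervention observations allow a simple comparison of group slopes before trusting the double difference. A minimal sketch with hypothetical group means:

```python
# Check the common-trends assumption using two pre-intervention
# observations (t0, t1) before treatment starts at t2.
# Hypothetical group means, for illustration only.

treat_pre = [50.0, 55.0]   # treatment group mean outcome at t0, t1
ctrl_pre = [40.0, 45.2]    # control group mean outcome at t0, t1

slope_treat = treat_pre[1] - treat_pre[0]  # 5.0 per period
slope_ctrl = ctrl_pre[1] - ctrl_pre[0]     # 5.2 per period

# If pre-period slopes are similar, the common-trends assumption is
# more plausible (this is a heuristic check, not a formal test).
print(f"Pre-trend gap: {slope_treat - slope_ctrl:.2f}")
```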

[Figure: outcome over time with first, second, and third observations, two pre-intervention, showing the average treatment effect]

Examples
► Two neighboring school districts
  • School enrollment or test scores are improving at the same rate before the program (even if at different levels)
  • One receives the program, one does not
► Neighboring _______

Case 5: Diff-in-Diff

Mean change in CPC        Not Enrolled: ___   Enrolled: ___   t-stat: ___

Case 5 - Diff-in-Diff
                          Linear Regression   Multivariate Linear Regression
Estimated Impact on CPC   27.66**             25.53**
                          (2.68)              (2.77)
** Significant at 1% level
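Regression estimates like those in the table are conventionally obtained from an interaction specification, in which the coefficient on treat × post is the double difference. A sketch with hypothetical data; the names cpc, treat, and post are placeholders, not variables from the case study:

```python
# Diff-in-diff as a regression: the coefficient on treat:post is the
# double-difference estimate. Data and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "cpc":   [60, 74, 58, 65, 62, 75, 57, 64],  # consumption per capita
    "treat": [1, 1, 0, 0, 1, 1, 0, 0],          # 1 = enrolled household
    "post":  [0, 1, 0, 1, 0, 1, 0, 1],          # 1 = post-intervention round
})

model = smf.ols("cpc ~ treat + post + treat:post", data=df).fit()
print(model.params["treat:post"])  # diff-in-diff impact estimate
```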

Impact Evaluation Example – Summary of Results
(Multivariate linear regression estimates of impact on CPC; standard errors in parentheses)
► Case 1 - Before and After: 34.28** (2.11)
► Case 2 - Enrolled/Not Enrolled: ___ (4.05)
► Case 3 - Randomization: ___** (3.00)
► Case 4 - Regression Discontinuity: 30.58** (5.93)
► Case 5 - Diff-in-Diff: 25.53** (2.77)
** Significant at 1% level

Example
► Old-age pensions and schooling in South Africa
  • Eligible if a household member is over 60
  • Not eligible if under 60
  • Used households with a member near the age-60 cutoff
  • Pensions received by women improved girls' education

Measuring Impact
► Randomized Experiments
► Quasi-experiments
  • Randomized Promotion – Instrumental Variables
  • Regression Discontinuity
  • Double differences (Diff-in-diff)
  • Matching

Matching
► Pick the ideal comparison group that matches the treatment group from a larger survey
► The matches are selected on the basis of similarities in observed characteristics
  • For example?
► This assumes no selection bias based on unobserved characteristics
  • Example: income
  • Example: entrepreneurship
Source: Martin Ravallion

Propensity-Score Matching (PSM)
► Controls: non-participants with the same characteristics as participants
  • In practice this is very hard: the vector X of observed characteristics can be huge
► Match on the basis of the propensity score P(X_i) = Pr(participation_i = 1 | X_i)
  • Instead of requiring that the matched control for each participant have exactly the same value of X, the same result can be achieved by matching on the probability of participation
  • This assumes that participation is independent of outcomes given X (not true if important unobserved variables affect participation)
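A minimal sketch of estimating P(X_i) with a logit. The data are simulated and the names X and d are placeholders for the observed characteristics and the participation indicator:

```python
# Estimate propensity scores P(X) = Pr(participation = 1 | X) with a
# logit model. Data are simulated, for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # observed characteristics (hypothetical)
d = rng.binomial(1, 0.4, size=200)     # participation indicator (hypothetical)

logit = sm.Logit(d, sm.add_constant(X)).fit(disp=0)
pscore = logit.predict(sm.add_constant(X))  # estimated P(X_i) for each person
```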

Steps in Score Matching
1. Representative & highly comparable survey of non-participants and participants
2. Pool the two samples and estimate a logit (or probit) model of program participation: gives the probability of participating for a person with characteristics X
3. Restrict samples to assure common support (an important source of bias in observational studies)
4. For each participant, find a sample of non-participants with similar propensity scores
5. Compare the outcome indicators; the difference is the estimate of the gain due to the program for that observation
6. Calculate the mean of these individual gains to obtain the average overall gain

[Figure: density of propensity scores from 0 to 1 for participants and non-participants, marking the region of common support; participants cluster at high probabilities of participating given X]

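Putting steps 3-6 together, a compact sketch in plain numpy. The propensity scores, participation indicator, and outcomes are simulated stand-ins for real data, e.g. scores estimated with the logit sketch above:

```python
# Steps 3-6 of score matching, with simulated inputs for illustration.
import numpy as np

rng = np.random.default_rng(1)
pscore = rng.uniform(0.05, 0.95, 200)     # propensity scores (simulated)
d = rng.binomial(1, pscore)               # participation indicator
y = 2.0 * d + pscore + rng.normal(size=200)  # outcome; true effect = 2.0

p_t, p_c = pscore[d == 1], pscore[d == 0]
y_t, y_c = y[d == 1], y[d == 0]

# Step 3: restrict to the region of common support (overlap in scores).
lo, hi = max(p_t.min(), p_c.min()), min(p_t.max(), p_c.max())
keep = (p_t >= lo) & (p_t <= hi)

# Steps 4-5: match each participant to the non-participant with the
# closest propensity score, and take the outcome gain.
nn = np.abs(p_t[keep, None] - p_c[None, :]).argmin(axis=1)
gains = y_t[keep] - y_c[nn]

# Step 6: average the individual gains to get the average effect on
# the treated (should be close to the true effect of 2.0 here).
print(f"Matched ATT estimate: {gains.mean():.2f}")
```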

PSM vs. an experiment
► A pure experiment does not require the untestable assumption of independence conditional on observables
► PSM requires large samples and good data

Lessons on Matching Methods
► Typically used for impact evaluation when randomization, RD, and other quasi-experimental options are not possible (i.e., no baseline)
  • Be cautious of ex-post matching: matching on variables that change due to participation (i.e., endogenous variables)
  • What are some variables that won't change?
► Matching helps control for OBSERVABLE differences

More Lessons on Matching Methods
► Matching at baseline can be very useful:
  • Estimation: combine with other techniques (e.g., diff-in-diff); know the assignment rule (match on this rule)
  • Sampling: selecting a non-randomized control sample
► Need good quality data
  • Common support can be a problem

Case 7: Matching

Case 7 - Propensity Score: Pr(treatment = 1)
Variable       Coef.   Std. Err.
Age Head       ___     ___
Educ Head      ___     ___
Age Spouse     ___     ___
Educ Spouse    ___     ___
Ethnicity      ___     ___
Female Head    ___     ___
Constant       ___     ___

Case 7: Matching

Case 7 - Matching
                          Linear Regression   Multivariate Linear Regression
Estimated Impact on CPC   ___                 7.06+
                          (3.59)              (3.65)
** Significant at 1% level, + Significant at 10% level

Impact Evaluation Example – Summary of Results
(Estimated impact on CPC; standard errors in parentheses)
► Case 1 - Before and After (Multivariate Linear Regression): 34.28** (2.11)
► Case 2 - Enrolled/Not Enrolled (Multivariate Linear Regression): ___ (4.05)
► Case 3 - Randomization (Multivariate Linear Regression): ___** (3.00)
► Case 4 - Regression Discontinuity (Multivariate Linear Regression): 30.58** (5.93)
► Case 5 - Diff-in-Diff (Multivariate Linear Regression): 25.53** (2.77)
► Case 6 - IV (TOT) (2SLS): 30.44** (3.07)
► Case 7 - Matching (Multivariate Linear Regression): 7.06+ (3.65)
** Significant at 1% level, + Significant at 10% level

Measuring Impact
► Experimental design/randomization
► Quasi-experiments
  • Regression Discontinuity
  • Double differences (Diff-in-diff)
  • Other options: Instrumental Variables, Matching
  • Combinations of the above

Remember...
► The objective of impact evaluation is to estimate the CAUSAL effect of a program on outcomes of interest
► In designing the program we must understand the data generation process
  • the behavioral process that generates the data
  • how benefits are assigned
► Fit the best evaluation design to the operational context

Design: Randomization
► When to use: whenever possible; when an intervention will not be universally implemented
► Advantages: gold standard; most powerful
► Disadvantages: not always feasible; not always ethical

Design: Random Promotion
► When to use: when an intervention is universally implemented
► Advantages: learn about an intervention
► Disadvantages: only looks at a sub-group of the sample

Design: Regression Discontinuity
► When to use: if an intervention is assigned based on rank
► Advantages: assignment based on rank is common
► Disadvantages: only looks at a sub-group of the sample

Design: Double differences
► When to use: if two groups are growing at similar rates
► Advantages: eliminates fixed differences not related to treatment
► Disadvantages: can be biased if trends change

Design: Matching
► When to use: when other methods are not possible
► Advantages: overcomes observed differences between treatment and comparison
► Disadvantages: assumes no unobserved differences (often implausible)