Quasi-Experimental Methods


Quasi-Experimental Methods
Florence Kondylis (World Bank)
This presentation draws on previous presentations by Markus Goldstein, Leandre Bassole, and Alberto Martini.

Objective: find a plausible counterfactual.
Reality check:
- Every method is associated with an identifying assumption.
- The stronger the assumption, the more we need to worry about whether the estimated effect is truly causal.
- Question your assumptions.

Program to evaluate: Fertilizer Voucher Program (2007-08)
- Main objective: increase maize production
- Intervention: distribution of fertilizer vouchers
- Target group: maize producers owning more than 1 Ha and less than 3 Ha of land
- Indicator: maize yield (T per Ha)

I. Before-after identification strategy
- Counterfactual: yield before the program started
- Effect = yield after minus yield before
- Counterfactual assumption: no factor other than the vouchers affected yield between 2007 and 2008

Year         Number of farmers   Maize production (T per Ha)
2007         5,000               1.3
2008                             2.1
Difference                       +0.8
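A minimal sketch of the before-after calculation using the slide's aggregate figures; with farmer-level data one would average yields over farmers, or regress yield on a post-program indicator:

```python
# Before-after estimator using the slide's aggregate yields (T per Ha).
yield_before = 1.3  # average maize yield in 2007, before the vouchers
yield_after = 2.1   # average maize yield in 2008, after the vouchers

effect_before_after = yield_after - yield_before
print(f"Before-after estimate: {effect_before_after:+.1f} T/Ha")  # +0.8
```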

Questioning the counterfactual assumption: what else might have happened between 2007 and 2008 that could affect maize yield?

Examine the assumption with prior data:

Year   Number of farmers   Maize production (T per Ha)
2006   5,000               1.5
2007                       1.3
2008                       2.1

The assumption of no change over time is not so great! Yield already moved between 2006 and 2007, before the program: there are external factors (rainfall, pests, …).

II. Non-participant identification strategy
- Counterfactual: maize yield among non-participants
- Counterfactual assumption: without the vouchers, participants would be as productive as non-participants in a given year

Group              Number of farmers   Maize production in 2008 (T per Ha)
Participants       5,000               2.1
Non-participants                       1.5
Difference                             +0.6
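The non-participant estimate is computed the same way, as a simple cross-sectional difference in 2008 yields (again using the slide's aggregates):

```python
# Cross-sectional comparison of 2008 yields (T per Ha) from the slide.
participants_2008 = 2.1
non_participants_2008 = 1.5

effect_non_participant = participants_2008 - non_participants_2008
print(f"Non-participant comparison estimate: {effect_non_participant:+.1f} T/Ha")  # +0.6
```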

Questioning the counterfactual assumption: how might participants differ from non-participants?

Test the assumption with pre-program data: participants and non-participants already differed in maize yield before the program (1.5 vs. 0.5 T per Ha in 2006), so we REJECT the counterfactual hypothesis of same productivity.

III. Difference-in-differences identification strategy
Counterfactual:
1. Non-participant maize yield, purging out pre-program differences between participants and non-participants
2. "Before vouchers" maize yield, purging out the before-after change for non-participants (external factors)
1 and 2 are equivalent.

Average maize yield (T per Ha)   2006   2008   Difference (2006 − 2008)
Participants (P)                 1.5    2.1    −0.6
Non-participants (NP)            0.5    1.3    −0.8
Difference (P − NP)              1.0    0.8    0.2
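A minimal sketch of the difference-in-differences calculation from the four cell means above. It computes the estimate both ways (difference of within-group changes, and change in the participant/non-participant gap), which is the equivalence illustrated on the next two slides; with farmer-level panel data one would instead regress yield on a participant dummy, a post dummy, and their interaction:

```python
# Difference-in-differences from the 2x2 table of average maize yields (T per Ha).
means = {
    ("P", 2006): 1.5, ("P", 2008): 2.1,    # participants
    ("NP", 2006): 0.5, ("NP", 2008): 1.3,  # non-participants
}

# (1) Difference of within-group changes over time.
change_p = means[("P", 2008)] - means[("P", 2006)]     # +0.6
change_np = means[("NP", 2008)] - means[("NP", 2006)]  # +0.8
did_1 = change_p - change_np

# (2) Change in the participant / non-participant gap.
gap_2008 = means[("P", 2008)] - means[("NP", 2008)]  # 0.8
gap_2006 = means[("P", 2006)] - means[("NP", 2006)]  # 1.0
did_2 = gap_2008 - gap_2006

assert abs(did_1 - did_2) < 1e-9
# -0.2: the slide's table reports 2006-minus-2008 differences, hence the 0.2 there.
print(f"DiD estimate: {did_1:+.1f} T/Ha")
```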

Computing the DiD effect from within-group changes (these figures are from the teen pregnancy example used on the following slides):
- Participants: 66.37 − 62.90 = 3.47
- Non-participants: 57.50 − 46.37 = 11.13
- Effect = 3.47 − 11.13 = −7.66

Equivalently, from the change in the participant/non-participant gap:
- After: 66.37 − 57.50 = 8.87
- Before: 62.90 − 46.37 = 16.53
- Effect = 8.87 − 16.53 = −7.66

Counterfactual assumption: without the intervention, participants' and non-participants' pregnancy rates would follow the same trend.

[Figure: graphical illustration of the DiD estimate. The counterfactual outcome for participants is 62.90 + 11.13 ≈ 74.0, so the estimated effect is 66.37 − 74.0 ≈ −7.6.]

Questioning the assumption: why might participants' trends differ from those of non-participants?

Examine the assumption with pre-program data:

Average rate of teen pregnancy   2004    2008    Difference (2008 − 2004)
Participants (P)                 54.96   62.90   7.94
Non-participants (NP)            39.96   46.37   6.41
Difference (P − NP)              15.00   16.53   +1.53

Or examine it with other outcomes not affected by the intervention (e.g. household consumption). Here the counterfactual hypothesis of same trends doesn't look so believable.
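The pre-trend check above amounts to a placebo difference-in-differences on two pre-program years: if trends were parallel, it should be close to zero. A minimal sketch with the slide's figures:

```python
# Placebo DiD on two pre-program years (teen pregnancy rates from the slide).
p_2004, p_2008 = 54.96, 62.90     # participants
np_2004, np_2008 = 39.96, 46.37   # non-participants

placebo_did = (p_2008 - p_2004) - (np_2008 - np_2004)
print(f"Placebo DiD over 2004-2008: {placebo_did:+.2f}")  # about +1.53, not zero
```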

IV. Matching with difference-in-differences identification strategy
Counterfactual: a comparison group constructed by pairing each program participant with a "similar" non-participant from a larger dataset, creating a control group of non-participants who are similar in observable ways.

Counterfactual assumption: unobserved characteristics do not affect the outcomes of interest.
- Unobserved = things we cannot measure (e.g. ability) or things left out of the dataset.
Question: how might participants differ from matched non-participants?

[Figure: matching estimate. Participant outcome 66.37 vs. matched non-participant outcome 73.36; Effect = −7.01.]
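A minimal propensity-score matching sketch, assuming a unit-level dataset with a binary treatment indicator `treated`, an outcome `y`, and baseline covariates (these column names are illustrative, not from the slides). Each participant is paired with the non-participant whose estimated propensity score is closest:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_effect(df: pd.DataFrame, covariates: list[str]) -> float:
    """Nearest-neighbour propensity-score matching estimate of the effect on y.

    Assumes df has a binary 'treated' column, an outcome 'y', and baseline covariates.
    """
    # 1. Estimate the propensity score: P(treated = 1 | covariates).
    logit = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
    df = df.assign(pscore=logit.predict_proba(df[covariates])[:, 1])

    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]

    # 2. Match each participant to the non-participant with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = control.iloc[idx.ravel()]

    # 3. Average outcome difference between participants and their matches.
    return float(treated["y"].mean() - matched["y"].mean())
```

This only balances the covariates that are in the dataset, which is exactly why the next slide's warning about unobservables matters.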

The assumption can only be tested with experimental data. Studies that compare both methods (because they have experimental data) find that:
- unobservables often matter!
- the direction of the bias is unpredictable!
Apply with care and think very hard about unobservables.

Summary
- Randomization requires minimal assumptions and yields intuitive estimates (sample means!).
- Non-experimental methods require assumptions that must be carefully assessed, and they are more data-intensive.

Example: irrigation for rice producers + enhanced market access
Impacts of interest:
- Input use and repayment of the irrigation fee
- Rice yield
- (Cash) income from rice
- Non-rice cash income (spillovers to other value chains)
Data: 500 farmers in the project area and a random sample of 500 farmers, before and after treatment.
Irrigation can't be randomized, so what is the counterfactual?

Plausible counterfactuals:
- Difference-in-differences with the random sample: are farmers outside the scheme on the same trajectory?
- Farmers in the vicinity of the scheme but not included in it: the selection of the project area needs to be carefully documented (elevation, …), and proximity implies "just-outside" farmers might also benefit from enhanced market linkages. What do we want to measure?
- Propensity score matching: are there unobservables determining on-farm productivity?
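A hedged sketch of the matching-plus-difference-in-differences option for the irrigation data; the column names (`farmer_id`, `year`, `rice_yield`), the two-period structure, and the assumption that every farmer is observed in both years are illustrative, not details from the slides:

```python
import pandas as pd

def matched_did(panel: pd.DataFrame, matches: dict[int, int],
                before: int, after: int) -> float:
    """DiD on a matched sample of farmers.

    panel   : long data with columns farmer_id, year, rice_yield
    matches : {scheme farmer_id -> matched comparison farmer_id}, e.g. from
              propensity-score matching on baseline covariates
    """
    # Reshape to one row per farmer with one column per year.
    wide = panel.pivot(index="farmer_id", columns="year", values="rice_yield")
    change = wide[after] - wide[before]  # within-farmer change in rice yield

    # DiD for each matched pair, then average over scheme farmers.
    effects = [change[t] - change[c] for t, c in matches.items()]
    return float(pd.Series(effects).mean())
```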

Thank You