Making the Most out of Discontinuities

Making the Most out of Discontinuities
Nandini Krishnan
This presentation is based on work by Erich Battistin, Jean-Louis Arcand, Nandini Krishnan and Florence Kondylis.

Introduction
Setting: we would like to evaluate policy interventions in an observational setting, i.e. when the analyst cannot manipulate the selection process.
In general, individuals, households, villages, or other entities are either exposed or not exposed to a "treatment" or "policy regime", and the two groups are not comparable because of selection.
When randomization is not feasible, how can we exploit implementation features of the program to "measure" its impact?
Answer: quasi-experiments, covered in Florence's presentation, and now the Regression Discontinuity Design.

Regression Discontinuity Designs
RDD is a closer cousin of randomized experiments than other competitors.
It is a major element in the toolkit for empirical research.
It is a "design", not a "method", and thus relies on knowledge of the selection process.
Logic: assignment to "treatment" depends, completely or partly, on a "score", a quantifiable selection criterion.
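As a sketch in generic notation (these symbols are not from the slides), write X_i for unit i's score, c for the cut-off, and D_i for treatment status. A sharp version of this assignment rule is

    D_i = \mathbf{1}\{ X_i < c \}

(or \mathbf{1}\{ X_i \ge c \}, depending on which side of the cut-off the program targets, as in the drinking-age and voucher examples that follow).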

RDD Example
Policy: the US minimum legal drinking age (MLDA). If you are under 21, alcohol consumption is illegal.
Observation: the policy treats people aged 20 years, 11 months and 29 days differently from 21 year olds. However, do we think that these individuals are inherently different? Are those aged 20 years, 11 months and 29 days less wise, or less likely to go to parties, than 21 year olds? Less obedient?
People born a few days apart are treated differently because of the arbitrary age cut-off established by the law. Yet we hardly think that a few days or a month could really make a difference in behaviors and attitudes towards alcohol.

RDD Example (2)
Idea: this policy rule assigns people to treatment and control groups.
Treatment group (cannot legally drink alcohol): those who are 20 years and 11 months old.
Control group (can legally drink alcohol): individuals who just turned 21.
It is as if people were assigned to treatment and control at random: the two groups are similar in terms of the observable and unobservable characteristics that affect outcomes (mortality rates).
We can therefore isolate the causal impact of alcohol consumption on mortality rates among young adults.
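A minimal sketch of this local comparison, on simulated placeholder data rather than the actual MLDA data; the column names (age, died) and the three-month bandwidth are illustrative assumptions:

    import numpy as np
    import pandas as pd

    # Simulated placeholder data: age in years (decimal) and a mortality indicator.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({"age": rng.uniform(19.0, 23.0, 5000),
                       "died": rng.binomial(1, 0.01, 5000)})

    cutoff, bandwidth = 21.0, 0.25            # keep only people within ~3 months of 21
    near = df[(df["age"] - cutoff).abs() < bandwidth]
    just_under = near[near["age"] < cutoff]   # subject to the MLDA ("treatment")
    just_over = near[near["age"] >= cutoff]   # can legally drink ("control")

    # Naive local contrast: difference in mean mortality just above vs. just below 21.
    print(just_over["died"].mean() - just_under["died"].mean())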

RDD Example (3)
Figure: the MLDA (treatment) causes lower alcohol consumption.

RDD Example (4)
Figure: increased alcohol consumption causes higher mortality rates around the age of 21. Series shown: all deaths; deaths associated with injuries, alcohol or drug use; all other deaths.

RDD Logic
General idea: assignment to the treatment depends, either completely or partly, on a continuous "score" or ranking (age in the previous case).
Potential beneficiaries are ordered by their score.
There is a cut-off point for "eligibility": a clearly defined criterion determined ex ante.
The cut-off determines assignment to the treatment or no-treatment group.
These de facto assignments often arise from administrative decisions, where the incentives to participate are partly limited by resource constraints and where transparent rules, rather than discretion, are used to allocate them.
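In generic notation (a sketch; Y is the outcome, X the score, c the cut-off, none of these symbols come from the slides), the sharp-RDD estimand this logic delivers is the jump in the conditional mean of the outcome at the cut-off:

    \tau_{SRD} = \lim_{x \downarrow c} \mathbb{E}[Y \mid X = x] - \lim_{x \uparrow c} \mathbb{E}[Y \mid X = x]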

Example (2): vouchers
The government offers fertilizer vouchers to small farmers.
Eligibility rule based on plot size:
If the plot is less than 2 acres, the farmer receives vouchers.
If the plot is 2 acres or larger, no voucher.
Plot size is not easily manipulable overnight, and is easy to measure and enforce (with administrative data on plot sizes).
Everyone below the eligibility cut-off receives vouchers (a sketch of this rule in code follows).
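A minimal sketch of this eligibility rule applied to administrative data, with hypothetical column names (farmer_id, plot_acres):

    import pandas as pd

    # Hypothetical administrative records of plot sizes, in acres.
    farmers = pd.DataFrame({"farmer_id": [1, 2, 3, 4],
                            "plot_acres": [0.8, 1.9, 2.3, 5.0]})

    # Sharp assignment rule: every farmer below the 2-acre cut-off gets the voucher.
    CUTOFF_ACRES = 2.0
    farmers["gets_voucher"] = (farmers["plot_acres"] < CUTOFF_ACRES).astype(int)
    print(farmers)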

Example: fuzzy design
Now suppose that, for various reasons, not all eligible farmers receive the voucher. Why?
Limited knowledge of the program (they did not know the program was happening).
Voluntary participation (farmers who take up are different from those who don't along several dimensions).
The percentage of participants then changes discontinuously at the cut-off, from zero to less than 100%.

Probability of Participation under Alternative Designs
Figure: under a sharp design, the probability of participation jumps from 0% to 100% at the cut-off; under a fuzzy design, it jumps from 0% to less than 100% (here, 75%).

Sharp and Fuzzy Discontinuities
Ideal setting: sharp discontinuity.
The discontinuity precisely determines treatment status, e.g. ONLY people 21 and older drink alcohol; only small plots receive vouchers.
Fuzzy discontinuity:
The percentage of participants changes discontinuously at the cut-off, but not from zero to 100%.
E.g. the rules determine eligibility, but among the small farmers there is only partial compliance / take-up; some people younger than 21 end up consuming alcohol, and some older than 21 don't consume at all.
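For the fuzzy case, a common way to write the corresponding estimand (again a sketch in generic notation, with D indicating actual take-up) scales the jump in the outcome by the jump in the probability of participation at the cut-off:

    \tau_{FRD} = \frac{\lim_{x \downarrow c} \mathbb{E}[Y \mid X = x] - \lim_{x \uparrow c} \mathbb{E}[Y \mid X = x]}{\lim_{x \downarrow c} \mathbb{E}[D \mid X = x] - \lim_{x \uparrow c} \mathbb{E}[D \mid X = x]}

The sharp case is the special instance in which the denominator equals one.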

Internal Validity
General idea: as a result of the arbitrary cut-off associated with a given policy, individuals to the immediate left and right of the cut-off are similar. Therefore, differences in alcohol consumption and mortality can be thought of as determined by the policy.
Assumption (nothing else is happening): in the absence of the policy, we would not observe a discontinuity in the outcomes around the cut-off. We are assuming that nothing else is going on around the same cut-off that affects our outcome of interest. The assumption would be threatened if, for example:
MLDA: 21 year olds can start drinking, but the moment they turn 21 they must also enroll in a "drinking responsibly" type of seminar.
Vouchers: another policy gives equipment to farmers with plots larger than 2 acres.
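One simple, hedged way to probe this "nothing else is happening" assumption is to look for spurious jumps at placebo cut-offs where no rule changes. A minimal sketch on simulated data, assuming hypothetical columns score and outcome:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated data: a running score and an outcome, with no real policy behind them.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({"score": rng.uniform(0, 4, 4000)})
    df["outcome"] = 2.0 + 0.5 * df["score"] + rng.normal(0, 1, len(df))

    def jump_at(df, cutoff, bandwidth=0.5):
        """Estimated discontinuity in the outcome at a given cut-off (local linear fit)."""
        local = df[(df["score"] - cutoff).abs() < bandwidth].copy()
        local["above"] = (local["score"] >= cutoff).astype(int)
        local["dist"] = local["score"] - cutoff
        fit = smf.ols("outcome ~ above + dist + above:dist", data=local).fit()
        return fit.params["above"]

    # At placebo cut-offs (no policy change), we would like to see jumps close to zero.
    for c in [1.0, 2.0, 3.0]:
        print(c, round(jump_at(df, c), 3))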

Outcome Profile Before and After the Intervention

Outcome Profile Before and After the Intervention (different shape)

External Validity
How general are the results?
Counterfactual: individuals "marginally excluded from benefits" (people just under 21, plots just above 2 acres).
Causal conclusions are limited to individuals, households, or villages at the cut-off.
The effect estimated is for individuals "marginally eligible for benefits"; extrapolation beyond this point needs additional, often unwarranted, assumptions (or multiple cut-offs).
Fuzzy designs exacerbate the problem.

The "nuts and bolts" of implementing RDDs
A major advantage of the RDD over its competitors lies in its transparency: it can be illustrated using graphical methods.
It requires many observations around the cut-off (alternatively, one can down-weight observations that are far from the cut-off).
Why? Because only near the cut-off can we assume that people find themselves to the left or to the right of the cut-off by chance. Think of a farmer who owns a 1-acre plot versus a farmer who owns a 50-acre plot, or compare a 16 year old with a 25 year old (a sketch of a local, kernel-weighted estimate follows).
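A minimal sketch of both points (down-weighting observations away from the cut-off, plus the binned means behind the graphical check), on simulated data with hypothetical column names; this illustrates one common local-linear estimator, not the specific estimator used in the work cited here:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    cutoff, bandwidth = 2.0, 0.75

    # Simulated data with a true jump of 1.0 in the outcome at the cut-off.
    df = pd.DataFrame({"score": rng.uniform(0, 4, 4000)})
    df["outcome"] = (1.0 + 0.3 * df["score"] + 1.0 * (df["score"] >= cutoff)
                     + rng.normal(0, 0.5, len(df)))

    # Local linear regression with triangular kernel weights: observations far from
    # the cut-off receive little weight, observations outside the bandwidth none.
    local = df[(df["score"] - cutoff).abs() < bandwidth].copy()
    local["dist"] = local["score"] - cutoff
    local["above"] = (local["score"] >= cutoff).astype(int)
    local["w"] = 1 - local["dist"].abs() / bandwidth   # triangular kernel
    fit = smf.wls("outcome ~ above + dist + above:dist", data=local,
                  weights=local["w"]).fit()
    print("estimated jump at cut-off:", round(fit.params["above"], 3))

    # Graphical analysis: mean outcome in narrow score bins, to eyeball the discontinuity.
    bins = pd.cut(df["score"], np.arange(0, 4.25, 0.25))
    print(df.groupby(bins, observed=True)["outcome"].mean())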

Graphical Analysis

Moving the goalpost
Natural experiments are "naturally" occurring instances that approximate the properties of an experiment.
RDDs share the same properties as an experiment locally, at the cut-off.
Thus "real-world" discontinuities are a gold mine for those fishing for natural experiments.

Wrap Up
Modern econometrics views RDDs as a powerful tool to identify causal effects.
Pros: as good as experiments (around the cut-off).
Cons: the estimated program effects are representative only of households/villages near the cut-off, which may not reflect the entire population of interest.

Wrap Up (2)
RDD can be used to design a prospective evaluation when randomization is not feasible.
The design applies to all means-tested programs.
Multiple cut-offs can be used to enhance external validity.
RDD can also be used to evaluate interventions ex post, using discontinuities as "natural experiments".

Thank You