
1 Copyright © 2015 Inter-American Development Bank. This work is licensed under a Creative Commons IGO 3.0 Attribution-NonCommercial-NoDerivatives (CC-IGO BY-NC-ND 3.0 IGO) license (http://creativecommons.org/licenses/by-nc-nd/3.0/igo/legalcode) and may be reproduced with attribution to the IDB and for any non-commercial purpose. No derivative work is allowed. Any dispute related to the use of the works of the IDB that cannot be settled amicably shall be submitted to arbitration pursuant to the UNCITRAL rules. The use of the IDB's name for any purpose other than for attribution, and the use of the IDB's logo, shall be subject to a separate written license agreement between the IDB and the user and is not authorized as part of this CC-IGO license. Note that the link provided above includes additional terms and conditions of the license. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the Inter-American Development Bank, its Board of Directors, or the countries they represent.

2 Measuring Impact: Impact Evaluation Methods. Sebastian Martínez, Inter-American Development Bank. Presentation by Sebastian Martínez based on the book "Impact Evaluation in Practice" by Gertler, Martinez, Premand, Rawlings, and Vermeersch (2010). The contents of this presentation reflect the views of the author and not necessarily those of the Inter-American Development Bank. This version: January 2012.

3 What is an Impact Evaluation? We start with a development question…
o How to reduce youth unemployment?
o How to increase school enrollment?
o How to improve child nutrition?
o Others…
We propose a solution…
o Technical-vocational training for youth
o Scholarships
o Nutritional supplements

4 At the end of the day, did the program have an impact?
o Did technical-vocational training reduce youth unemployment?
o Did the scholarships increase school enrollment?
o Did the nutritional supplements improve child nutrition?
Impact evaluations measure the causal relationship between an intervention and a result.

5 Theory of Change. Public policy should be based on a "theory of change" that links an intervention with the results of interest. Impact evaluations measure whether these results were achieved.

6 Chain of Results: Inputs → Activities → Products → Results → Impacts.
Implementation (supply side):
o Inputs: human, financial, and other resources marshalled to carry out activities (budget, personnel, other resources)
o Activities: actions taken or work done to transform inputs into specific products
o Products: goods and services produced and delivered under the implementer's control
Results (supply and demand; not entirely under the control of the executing agency and affected by multiple factors):
o Results: use of the products by the population of interest
o Impacts: final objectives of the program, or long-term goals

7 Key Inputs for an Impact Evaluation:
o Logical framework: how the program works in theory
o Identification strategy: measuring impact
o Data: surveys, administrative systems
o Operating plan: evaluation team, schedule, ToRs, etc.
o Resources: budget ($)

8 Part 1: Causal Inference. Counterfactuals; false counterfactuals: pre-program conditions (pre-post) and self-selection (apples and oranges).

9 Objective of an Impact Evaluation: estimate the causal effect (impact) of an intervention (P) on a result (Y).
o (P) = program or "treatment"
o (Y) = results, indicators of success
Example: What is the impact of a conditional cash transfer program (P) on household consumption (Y)?

10 Evaluation Question: What is the impact of (P) on (Y)? Answer: α = (Y | P=1) − (Y | P=0)

11 The Problem of Incomplete Data. For a program beneficiary: α = (Y | P=1) − (Y | P=0). We observe (Y | P=1): household consumption (Y) while participating in the conditional cash transfer program (P=1). But we DON'T observe (Y | P=0): what that household's consumption (Y) would have been without the program (P=0).

12 Solution: we estimate what would have happened to Y in the absence of P. We call this the counterfactual. Estimating a valid counterfactual is the key to a good impact evaluation!

13 Estimating the impact of P on Y. We observe (Y | P=1), the result under treatment. We estimate (Y | P=0), the counterfactual, using comparison (control) groups. IMPACT = result with treatment − counterfactual: α = (Y | P=1) − (Y | P=0)

14 Example: What is the impact of giving John Doe money (P) on his consumption of candy (Y)?

15 The Perfect "Clone". John Doe eats 6 candies; his clone eats 4 candies. IMPACT = 6 − 4 = 2 candies.

16 In practice, we use insights from statistics… Treatment group: average Y = 6 candies. Comparison group: average Y = 4 candies. IMPACT = 6 − 4 = 2 candies.
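The comparison of group averages on this slide is just a difference in means. A minimal sketch (the individual candy counts are made up, chosen so the group means are 6 and 4):

```python
# Impact as a difference in group means (the candy example).
treatment = [5, 6, 7, 6]   # candies eaten by the treatment group
comparison = [4, 3, 5, 4]  # candies eaten by the comparison group

def mean(xs):
    return sum(xs) / len(xs)

impact = mean(treatment) - mean(comparison)
print(impact)  # 6.0 - 4.0 = 2.0
```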

17 Finding good control groups. We want to find "clones" for the "John Does" in our programs. Treatment and control groups must have identical characteristics, except for the intervention. In practice, we use program eligibility rules to find good controls. With a good comparison group, the only reason for different results between the treatment and control groups is the intervention (P).

18 Case Study: Progresa. National anti-poverty program in Mexico.
o Objective: end the inter-generational transmission of poverty and reduce poverty today
o Started in 1997; by 2004, 5 million beneficiaries
o Program eligibility based on a poverty index
Intervention: cash transfers conditional on participation in school and health services.

19 Impact evaluation with a lot of information:
o 506 communities, 24,000 households
o Baseline in 1997, follow-up in 1998
Many results of interest (education, health, etc.). Here: living standards, measured as per capita consumption. What is the impact of Progresa (P) on per capita consumption (Y)? If the impact is $20 pesos or more per month, we scale up the program.

20 Eligibility and Enrollment. The population divides into: not eligible (not poor); and eligible (poor), who are either enrolled or not enrolled.

21 Part 1: Causal Inference. Two good examples of BAD counterfactuals: pre-program conditions (pre-post) and self-selection (apples and oranges).

22 [Chart: outcome values 0.17, 0.26, 0.45, 0.38 for men and women. Impact of program?]

24 Pre-post comparison does not control for other factors that vary with time! [Chart: counterfactual = men. Impact of program?]

25 Case 1: Pre-program comparison. What is the impact of Progresa (P) on per capita consumption (Y)?
(1) We observe consumption before (April 1997, B = 233) and after (November 1998, A = 268) the program.
(2) α = (Y | P=1) − (Y | P=0): IMPACT = A − B = $35.

26 Case 1: Pre-program comparison. Consumption (Y):
o Result WITH treatment (post): 268.7
o Counterfactual (pre): 233.4
o Impact (Y | P=1) − (Y | P=0): 35.3**
Regression analysis: linear regression 35.27**; multivariate linear regression 34.28**. (** statistically significant at the 1% level)
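The pre-post estimate is a plain subtraction of the slide's rounded numbers, with the pre-program level standing in for the counterfactual:

```python
# Case 1 (pre-post): the "counterfactual" is simply the pre-program level.
y_post = 268.7  # observed consumption with treatment (Nov 1998)
y_pre = 233.4   # pre-program consumption used as the counterfactual (Apr 1997)

impact_pre_post = y_post - y_pre
print(round(impact_pre_post, 1))  # 35.3 -- valid only if nothing else changed over time
```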

27 Case 1: What's the Problem? Consumption rose from B = 233 at T=0 to A = 268 at T=1, but we don't know the counterfactual path.
o Economic boom: "real" impact = A − C, so A − B is an overestimate.
o Recession: "real" impact = A − D, so A − B is an underestimate.
The pre-program comparison doesn't control for other factors that vary with time.

28 Part 1: Causal Inference. False counterfactuals: pre-program conditions (pre-post); self-selection (apples and oranges).

29 Self-selected controls. Generally, groups that meet the following criteria are NOT good controls:
o Those who choose NOT to participate
o Those who are ineligible to participate (with some very important exceptions)
Selection bias: population characteristics that are correlated both with participation in the program and with the results of interest (Y). We can control for observables, but not for unobservables! The estimated impact can be confounded with these characteristics.

30 Case 2: Progresa, post-treatment period (1998). Among the eligible (poor): enrolled Y = 268; not enrolled Y = 290. In what ways, besides their participation in the program, can the "enrolled" and "not enrolled" differ?

31 Case 2: Self-selected controls. Consumption (Y):
o Result WITH treatment (enrolled): 268
o Counterfactual (not enrolled): 290
o Impact (Y | P=1) − (Y | P=0): −22**
Regression analysis: linear regression −22**; multivariate linear regression −4.15. (** statistically significant at the 1% level)

32 Policy Recommendations? Would you recommend scaling up Progresa nationally? "If the impact is $20 pesos or more per month, we scale up the program."
o Pre-program: doesn't consider other factors that change with time
o Self-selected: selection bias; other factors associated with the treatment and control groups affect the results
Impact on consumption (Y) — Case 1 (pre-program): linear regression 35.27**, multivariate linear regression 34.28**. Case 2 (self-selected): linear regression −22**, multivariate linear regression −4.15.

33 Remember!
o Pre-program compares the same observation units before and after receiving P. Problem: other things that affect the results may change over time.
o Self-selected compares a group that participates with a group that chooses not to participate in P. Problem: selection bias.
Both counterfactuals can lead to biased impact estimates!

34 Part 2: Evaluation Methods. Our "toolbox": Random Assignment; Regression Discontinuity Design; Encouragement Design (Instrumental Variables); Difference-in-Differences (Diff-in-Diff); Matching.

35 Random Assignment. Benefits are assigned through a lottery or another random process, generating two groups that are statistically identical.
When to randomize?
o Excess demand: the number of eligible people exceeds available resources
o Innovation: we need rigorous evidence about the effectiveness of a program
Give each eligible unit the same probability of receiving the program, or of receiving the program first, second, third, etc.
Advantages:
o Program selection is ethical, quantitative, and transparent
o Produces the best counterfactual possible and is easy and intuitive to explain

36 Random Assignment. [Diagram: 1. Population → 2. Evaluation sample (external validity) → 3. Random assignment to treatment and comparison groups (internal validity); ineligible units excluded.]

37 Unit of Randomization. Select based on program type:
o Individual/Household
o School/Health Center
o Street/Block
o Town/Community
o District/Municipality/Region
Remember:
o You need a number of units "large enough" to identify a minimum detectable effect: statistical power.
o Consider spillovers/contamination, and operating and surveying costs.
As a rule of thumb, randomize at the smallest implementation unit possible.
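The "statistical power" point above can be made concrete with the standard two-sample sample-size formula. A minimal sketch (the formula is the usual normal approximation for a two-sided test with equal arms; the effect size and variance below are illustrative, not from the deck):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Units per arm to detect a mean difference `delta` when the outcome
    has standard deviation `sigma` (two-sided test, equal-sized arms)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) ** 2) * sigma**2 / delta**2)

# Detecting a 0.2-standard-deviation effect needs roughly 393 units per arm:
print(n_per_arm(delta=0.2, sigma=1.0))
```

Randomizing clusters (communities, schools) instead of individuals inflates these numbers further, which is the trade-off behind "randomize at the smallest feasible unit."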

38 Case 3: Random Assignment. Progresa randomization unit: the community. 506 communities in the evaluation sample; randomization by stages:
o 320 treatment communities (14,446 households): first transfer April 1998
o 186 control communities (9,630 households): first transfer November 1999

39 Case 3: Random Assignment. [Timeline: 320 treatment communities and 186 control communities are compared between T=0 and T=1, before the control communities receive their first transfer.]

40 How can we verify that we have good "clones"? In the absence of P, the treatment and comparison groups must be statistically identical. Let's compare their baseline characteristics (T=0).

41 Case 3: Baseline balance (pre-program). Control vs. treatment means, with t-statistics:
o Consumption ($ monthly per capita): 233.47 vs. 233.4 (t = −0.39)
o Age of head of household (years): 42.3 vs. 41.6 (t = 1.2)
o Age of spouse of head of household (years): 36.8 vs. 36.8 (t = −0.38)
o Education of head of household (years): 2.8 vs. 2.9 (t = −2.16*)
o Education of spouse (years): 2.6 vs. 2.7 (t = −0.006)
(* statistically significant at the 5% level)

42 Case 3: Baseline balance (pre-program), continued:
o Female head of household = 1: 0.07 vs. 0.07 (t = 0.66)
o Indigenous = 1: 0.42 vs. 0.42 (t = 0.21)
o Number of people in household: 5.7 vs. 5.7 (t = −1.21)
o Has bathroom = 1: 0.56 vs. 0.57 (t = −1.04)
o Hectares: 1.71 vs. 1.67 (t = 1.35)
o Distance to nearest hospital (km): 106 vs. 109 (t = −1.02)
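The t-statistics in the balance tables come from a two-sample test of equal means. A minimal version with unequal variances (the toy numbers below are made up, not the Progresa data; |t| below roughly 2 suggests the groups are balanced on that characteristic):

```python
from math import sqrt

def t_stat(treat, control):
    """Two-sample t-statistic (Welch-style, unequal variances)
    for a baseline balance check between treatment and control."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # sample variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    se = sqrt(var(treat) / len(treat) + var(control) / len(control))
    return (mean(treat) - mean(control)) / se

# Toy baseline values for one characteristic in each group:
treat = [10, 12, 11, 13]
control = [11, 12, 10, 12]
print(round(t_stat(treat, control), 2))  # 0.31 -- well under 2, i.e. balanced
```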

43 Case 3: Random Assignment. Consumption (Y), treatment group (randomly assigned to treatment) vs. counterfactual (randomly assigned to control):
o Baseline (T=0): 233.47 vs. 233.40; difference 0.07
o Follow-up (T=1): 268.75 vs. 239.5; impact (Y | P=1) − (Y | P=0) = 29.25**
Regression analysis: linear regression 29.25**; multivariate linear regression 29.75**. (** significant at the 1% level)
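The "linear regression" row above is just a regression of the outcome on a treatment dummy; with random assignment its slope equals the difference in means. A sketch on simulated data (hypothetical numbers loosely mimicking the slide, not the Progresa sample):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated randomized experiment with a true impact of 35 on consumption.
n = 1000
treat = rng.integers(0, 2, n)                 # random assignment (0/1)
y = 233 + 35 * treat + rng.normal(0, 20, n)   # outcome with noise

# OLS of y on a constant and the treatment dummy; the slope is the impact.
X = np.column_stack([np.ones(n), treat])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[1])  # close to the true impact of 35
```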

44 Policy Recommendations? Impact of Progresa on consumption (Y):
o Case 1 (pre-program): multivariate linear regression 34.28**
o Case 2 (self-selection): linear regression −22**; multivariate linear regression −4.15
o Case 3 (random assignment): multivariate linear regression 29.75**
(** significant at the 1% level)

45 Comparing Different Benefits. "Traditional" evaluation question: what is the impact of a program on a result of interest? Other interesting questions:
o How can we optimize a program?
o What is the optimal level of a benefit?
o How does a program work?
o What is the impact of a sub-component of the program?
Random assignment with two levels of benefits: comparison group, low-benefit group, high-benefit group.

46 Random Assignment with 2 Levels of Benefits. [Diagram: 1. Eligible population → 2. Evaluation sample → 3. Random assignment to comparison, low-benefit, and high-benefit groups; ineligible units excluded.]

47 Combined Impact of 2 Benefits. How do the two benefits complement each other? Random assignment of a package of interventions:
o Group A: comparison for both interventions
o Group B: intervention 1 only
o Group C: intervention 2 only
o Group D: both interventions

48 Random Assignment of Multiple Interventions. [Diagram: 1. Eligible population → 2. Evaluation sample → 3. Random assignment for intervention 1 → 4. Random assignment for intervention 2; ineligible units excluded.]

49 Remember! Random assignment of a program to a large population produces two groups that are statistically identical: we have a perfect "clone"! One group is randomly assigned to treatment, the other to control. Feasible in prospective evaluations when there is excess demand and limited supply; many pilot programs meet this requirement.

50 Part 2: Evaluation Methods. Random Assignment; Encouragement Design (Instrumental Variables).

51 What if EVERYONE can participate? For example:
o National programs with universal eligibility
o Programs with voluntary participation
o Programs where you can't exclude anyone
If not everyone is enrolled, can we compare those who participate with those who don't? Selection bias!

52 Offer or promote the program to a random sub-group.
Random offer — if enrollment is voluntary: offer the program to a random sub-sample; some accept, others don't.
Random promotion — if you can't exclude anyone from the offer: offer the program to everyone, but give encouragement or incentives (information, prizes, transportation) to a random sub-sample.

53 Random Offer and Promotion. Necessary conditions:
1. The groups offered/promoted the program and those not offered/not promoted are comparable: being offered (or not) is NOT correlated with population characteristics. Guaranteed by randomization.
2. The offered/promoted group has a higher participation rate in the program. In other words, promoting the program works! We can verify this empirically.
3. Offering/promoting the program doesn't affect results directly. We use theory and intuition to justify this condition.

54 Random Offer and Promotion. Three groups of units or individuals.

55 Random Offering and Promotion. [Diagram: 1. Eligible population → 2. Randomize offering/promotion → 3. Compare enrollment WITH and WITHOUT offering/promotion. Three types of units: those who never enroll, those who enroll only with the offer/promotion, and those who always enroll.]

56 Exercise: estimate the impact. [Slide: the promotion reaches a random 50% of the population; with an outcome difference of 20 between promoted and non-promoted groups, and an enrollment difference of 1/2, the implied impact on those who enroll because of the promotion is 20/(1/2) = 40.]

57 Case 4: Random Offering. Random offering is an instrumental variable.
o Progresa is offered to households in a random group of 320 communities: 92% of households accept it, 8% reject it.
If there's less than 100% enrollment in the program:
o Intention to treat: impact of offering the program
o Treatment on the treated: impact of taking up the program
o We use random assignment of the offer as an instrumental variable

58 Case 4: Random Offering. WITH offer (320 communities): enrolled = 92%, average consumption = 268. NO offer (186 communities): enrolled = 0%, average consumption = 239. ΔEnrolled = 0.92; ΔY = 29. Impact (treatment on the treated) = 29/0.92 ≈ 31. Never-participants and always-participants cancel out of the comparison; the impact is identified for those who participate because of the offer.
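The arithmetic on this slide is the Wald (instrumental variables) estimator: scale the intention-to-treat difference by the difference in take-up. A sketch with the slide's numbers:

```python
# Slide 58's numbers: ITT and treatment-on-the-treated (Wald) estimates.
y_offered = 268.0       # mean consumption in communities offered the program
y_not_offered = 239.0   # mean consumption in communities not offered
takeup_offered = 0.92   # enrollment rate when offered
takeup_not_offered = 0.0

itt = y_offered - y_not_offered                    # intention to treat
tot = itt / (takeup_offered - takeup_not_offered)  # treatment on the treated
print(itt, round(tot, 1))  # 29.0 and 31.5 (the slide rounds this to 31)
```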

59 Case 4: Random Offering. Impact of treatment on consumption of the treated: instrumental variables regression 30.4**. (** statistically significant at the 1% level)

60 Remember! Random Offering/Promotion:
o The offer/promotion must be effective in increasing participation!
o We randomly assign offers (an experimental evaluation) in order to evaluate the impact of the program itself.
o The strategy depends on the validity of the offer/promotion.
o We estimate a local impact, not necessarily generalizable to the whole population.
o We don't exclude anyone, but…

61 Part 2: Evaluation Methods. Random Assignment; Regression Discontinuity Design; Encouragement Design (Instrumental Variables).

62 Regression Discontinuity Design. Many programs target beneficiaries based on a continuous index or score that determines eligibility:
o Anti-poverty programs: households under a certain income level or poverty score
o Pensions: senior citizens
o Education: scholarships for the best students based on a standardized test
o Agriculture: fertilizers for smallholder farmers
Other examples?

63 Example: Agricultural Program.
o Objective: improve productivity among smallholder farmers
o Targeting: farmers with ≤ 50 hectares are eligible; farmers with > 50 hectares are not
o Intervention: subsidy to buy fertilizers

64 Pre-Intervention (Baseline) Not Eligible Eligible

65 Post-Intervention IMPACT

66 Regression Discontinuity Design. We need a continuous eligibility index with a defined eligibility cutoff point. Intuition:
o Units around the cutoff point are similar, so the group not chosen makes a good counterfactual
o The estimated impact is valid for the neighborhood around the cutoff point
To use a regression discontinuity design you need: (1) a continuous eligibility index; (2) a defined cutoff point.

67 Case 5: Regression Discontinuity Design. Eligibility for Progresa is based on a poverty index (means-tested): a household is poor if its score ≤ 750. Eligible = 1 if score ≤ 750; eligible = 0 if score > 750.

68 Case 5: Regression Discontinuity Design, pre-intervention. [Chart: consumption vs. poverty index (estimated targeting score); no jump at the cutoff before the program.]

69 Case 5: Regression Discontinuity Design, post-intervention. [Chart: consumption vs. poverty index (estimated targeting score); discontinuity at the cutoff.] Impact on consumption (Y): multivariate linear regression 30.58**.
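The RDD estimate is the jump in the regression line at the cutoff. A sketch on simulated data (hypothetical numbers mimicking the Progresa setup — score cutoff at 750, true impact of 30 — not the actual sample; the bandwidth of 50 is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated RDD: poverty index with cutoff 750; true program impact = 30.
n = 4000
score = rng.uniform(500, 1000, n)
eligible = (score <= 750).astype(float)
y = 200 + 0.1 * (score - 750) + 30 * eligible + rng.normal(0, 10, n)

# Local linear regression within a bandwidth around the cutoff,
# allowing separate slopes on each side.
h = 50
m = np.abs(score - 750) <= h
x = score[m] - 750
d = eligible[m]
X = np.column_stack([np.ones(x.size), d, x, d * x])
beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
print(beta[1])  # estimated jump at the cutoff, close to 30
```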

70 Remember! Regression Discontinuity Design:
o Needs a continuous eligibility index with a defined cutoff point.
o Observations just on the other side of the eligibility cutoff are good controls.
o You don't need to exclude a group of eligible units from the program.
o It can often be applied in retrospective evaluations if the necessary conditions are met.

71 Remember! Regression Discontinuity Design produces an estimate of local impact:
o The program effect around the cutoff point
o Important for deciding whether to expand or reduce program coverage
o Not necessarily generalizable to other populations
Power: we need many observations around the cutoff point. Be careful with the model: sometimes what seems like a discontinuity in a graph is really something else (a non-linear relationship).

72 Part 2: Evaluation Methods. Random Assignment; Regression Discontinuity Design; Encouragement Design (Instrumental Variables); Difference-in-Differences (Diff-in-Diff).

73 Difference-in-Differences (Diff-in-Diff). Y = school enrollment; P = tutoring program. Diff-in-diff: Impact = (Yt1 − Yt0) − (Yc1 − Yc0).
o With program: post 0.74, pre 0.60, difference +0.14
o Without program: post 0.81, pre 0.78, difference +0.03
Impact = 0.14 − 0.03 = 0.11

74 Difference-in-Differences (Diff-in-Diff). Equivalently: Impact = (Yt1 − Yc1) − (Yt0 − Yc0). Y = school enrollment; P = tutoring program.
o Post: with program 0.74, without 0.81, difference −0.07
o Pre: with program 0.60, without 0.78, difference −0.18
Impact = −0.07 − (−0.18) = 0.11
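The two orderings of the differences give the same number; a sketch with the school-enrollment figures from these slides:

```python
# Difference-in-differences with the slides' school enrollment numbers.
yt1, yt0 = 0.74, 0.60  # with program: post, pre
yc1, yc0 = 0.81, 0.78  # without program: post, pre

did = (yt1 - yt0) - (yc1 - yc0)   # change for treated minus change for controls
print(round(did, 2))  # 0.14 - 0.03 = 0.11

# The other ordering -- post gap minus pre gap -- gives the same estimate:
assert round((yt1 - yc1) - (yt0 - yc0), 2) == round(did, 2)
```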

75 Impact = (A−B) − (C−D) = (A−C) − (B−D). [Chart: school enrollment over time; with program, B = 0.60 at T=0 rising to A = 0.74 at T=1; without program, D = 0.78 rising to C = 0.81. Impact = 0.11.]

76 Impact = (A−B) − (C−D) = (A−C) − (B−D). [Chart: same points, but if the two groups' trends are not parallel, the true impact is less than 0.11.]

77 Case 6: Diff-in-Diff with Progresa. Consumption (Y), enrolled vs. not enrolled:
o Pre-program (T=0): 233.47 vs. 281.74; difference −48.27
o Post-program (T=1): 268.75 vs. 290; difference −21.25
o Difference over time: 35.28 vs. 8.26; diff-in-diff = 27.02
Regression analysis: linear regression 27.06**; multivariate linear regression 25.53**. (** statistically significant at the 1% level)

78 Remember! Difference-in-Differences:
o Compares the pre-post change between a participating and a non-participating group.
o Provides a counterfactual for trends or changes in the indicators.
o Assumes that, in the absence of the program, the participating and non-participating groups would follow the same trends.
o To test for equal trends, we need at least 2 observations before and 1 observation after.

79 Part 2: Evaluation Methods. Random Assignment; Regression Discontinuity Design; Encouragement Design (Instrumental Variables); Difference-in-Differences (Diff-in-Diff); Matching.

80 Matching.
o Idea: for each treated unit, choose its best "match" from another, untreated population.
o How? Matches are chosen based on similarities in observable characteristics.
o Risk? If other, unobserved characteristics affect participation: selection bias!

81 Propensity-Score Matching (PSM). Comparison group: non-participant units with the same observable characteristics as participants. There may be many important characteristics, so use propensity score matching (proposed by Rosenbaum and Rubin):
o For each unit, calculate the probability of participating based on observable characteristics: the propensity score.
o Choose pairs with similar propensity scores.
o See Appendix 2.

82 Density of Propensity Scores. [Chart: densities of the propensity score (0 to 1) for participants and non-participants; matching is restricted to the common support, where the densities overlap.]

83 Case 7: Matching with Progresa. Probit regression, Prob(Enrolled = 1), estimated coefficients on pre-program characteristics:
o Age (in years) of head of household: −0.022**
o Age of spouse of head (in years): −0.017**
o Education of head (years): −0.059**
o Education of spouse (years): −0.03**
o Female head = 1: −0.067
o Indigenous = 1: 0.345**
o Number of household members: 0.216**
o Dirt floor = 1: 0.676**
o Bathroom = 1: −0.197**
o Land hectares: −0.042**
o Distance to hospital (km): 0.001*
o Constant: 0.664**
(** statistically significant at the 1% level; * statistically significant at the 5% level)

84 Case 7: Common Support in Progresa. [Chart: density of Pr(Enrolled) for enrolled and not-enrolled households.]

85 Case 7: Matching. Impact on consumption (Y): multivariate linear regression 7.06+. (+ statistically significant at the 10% level)

86 Remember! Matching:
o Requires a baseline with a large number of observations for participants and non-participants.
o Works best when we know the assignment rule for benefits and use it to find matched pairs, and when combined with other methods like diff-in-diff.
o Matching without a baseline is very risky: matching on endogenous variables that have been affected by the program generates biased counterfactuals.

87 Policy Recommendations? Impact of the program on consumption (Y):
o Case 1 (pre-program): 34.28**
o Case 2 (self-selection): −4.15
o Case 3 (random assignment): 29.75**
o Case 4 (treatment on the treated): 30.4**
o Case 5 (regression discontinuity design): 30.58**
o Case 6 (difference-in-differences): 25.53**
o Case 7 (matching): 7.06+
(** statistically significant at the 1% level; + statistically significant at the 10% level)

88 Policy Recommendations. [Same table as slide 87.]

89 Part 2: Evaluation Methods. Random Assignment; Regression Discontinuity Design; Encouragement Design (Instrumental Variables); Difference-in-Differences (Diff-in-Diff); Matching; Combining Methods.

90 Choosing a Method. Key information when choosing a method:
1. Prospective or retrospective?
2. Eligibility criteria? Targeting? Geographic variation?
3. Is implementation immediate or in stages?
4. Are resources limited relative to potential demand? Budgeting or implementing-capacity limitations? Excess demand for the program?

91 Choosing a Method. Choose the best design given the operating context:
o Best design: the most robust counterfactual with minimum operating risk
o Internal validity: do we control for everything? A good comparison group
o External validity: is the result valid for all the populations of interest? Local or global impacts; do results apply to other relevant populations?

92 Using operating rules to choose a method… Source: Gertler et al., "Impact Evaluation in Practice" (2010). Three dimensions: resources (excess demand vs. enough resources), targeting (continuous targeting index and threshold vs. none), and time (immediate implementation vs. scale-up by stages).
o Excess demand (limited resources), with index and threshold: Random Assignment; RDD
o Excess demand, without index or threshold: Random Assignment; RDD
o No excess demand (enough resources), immediate implementation, with index and threshold: Random Assignment; IV; Diff-in-Diff & Matching
o No excess demand, immediate implementation, without index or threshold: Random Assignment; IV; Diff-in-Diff & Matching
o No excess demand, scale-up by stages, with index and threshold: Random assignment by stages; RDD
o No excess demand, scale-up by stages, without index or threshold: Random assignment by stages; IV; Diff-in-Diff & Matching
If there's no full participation: IV; Diff-in-Diff & Matching.

93 Remember! The objective of an impact evaluation is to estimate the causal effect of a program on the results of interest.

94 Remember! To measure impact, we need to estimate a counterfactual: what would have happened in the absence of the program. We use comparison groups for this.

95 Remember! Our "toolbox" for evaluating impact offers 5 methods to generate comparison groups.

96 Remember! Choose the best possible method given the operating context of the program.

97 www.iadb.org | www.worldbank.org/ieinpractice. Thank you!

98 Appendix 1: Two-Stage Least Squares. Model with endogenous treatment (T): Y = β0 + β1·T + ε, where T is correlated with ε. Stage 1: regress the endogenous variable on the instrumental variable (Z) and the other exogenous regressors, T = γ0 + γ1·Z + ν, and calculate the expected value for each observation: T-hat.

99 Appendix 1: Two-Stage Least Squares. Stage 2: regress the result variable Y on the fitted values (and the other exogenous variables): Y = β0 + β1·T-hat + ε. The standard errors then need to be corrected (they are based on T-hat rather than T). In practice, use Stata's ivreg command. Intuition: T has been "stripped" of its correlation with ε.
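The two stages can be run by hand with ordinary least squares. A sketch on simulated data (the data-generating process — a random offer Z, endogenous take-up T driven partly by an unobservable, and a true impact of 30 — is hypothetical, chosen to make T correlated with the error):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated setting: Z is a random offer, T is actual take-up, and the
# unobservable u drives both take-up and the outcome (true impact = 30).
n = 20000
z = rng.integers(0, 2, n)                                # instrument: random offer
u = rng.normal(0, 1, n)                                  # unobservable "ability"
t = ((z + u + rng.normal(0, 1, n)) > 0.5).astype(float)  # endogenous take-up
y = 200 + 30 * t + 20 * u + rng.normal(0, 10, n)

def ols(X, yy):
    return np.linalg.lstsq(X, yy, rcond=None)[0]

# Stage 1: regress T on Z, keep the fitted values T-hat.
X1 = np.column_stack([np.ones(n), z])
t_hat = X1 @ ols(X1, t)

# Stage 2: regress Y on T-hat (standard errors would still need correcting).
X2 = np.column_stack([np.ones(n), t_hat])
beta = ols(X2, y)
print(beta[1])  # close to the true impact of 30
```

A naive OLS of Y on T would be biased upward here, because households with high u both take up more and consume more; the instrument removes that correlation.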

100 Appendix 2: Steps for Propensity Score Matching.
1. You need representative and highly comparable surveys of participants and non-participants.
2. Merge the two samples and estimate a logit (or probit) regression of a binary program-participation variable on observable characteristics.
3. Restrict the sample to the common support.
4. For each participant, find the non-participant(s) with the most similar propensity score.
5. Take the difference in results between each participant and its matched pair or pairs; this difference is the estimated impact of the program for that observation.
6. Average the individual impacts to estimate the average impact of the program.
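Steps 4-6 of the recipe above can be sketched with nearest-neighbor matching. This assumes the propensity scores were already estimated (step 2) and both samples lie on the common support (step 3); all scores and outcomes below are made-up toy values:

```python
# (propensity_score, outcome) pairs -- toy values, not real survey data.
participants = [(0.30, 120.0), (0.50, 150.0), (0.70, 180.0)]
non_participants = [(0.28, 115.0), (0.52, 140.0), (0.65, 160.0), (0.90, 210.0)]

def nearest(score, pool):
    """Step 4: the non-participant with the closest propensity score."""
    return min(pool, key=lambda pair: abs(pair[0] - score))

# Step 5: outcome difference between each participant and its match.
gaps = [y - nearest(p, non_participants)[1] for p, y in participants]

# Step 6: average the individual impacts.
att = sum(gaps) / len(gaps)
print(gaps, round(att, 2))  # [5.0, 10.0, 20.0] and 11.67
```

Real implementations typically match with replacement, trim off-support units, and bootstrap the standard errors; this sketch only shows the core matching logic.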


