1 Quasi-Experimental Methods I
Global Workshop on Development Impact Evaluation in Finance and Private Sector
Rio de Janeiro, June 6-10, 2011
Mattea Stein

2 What we know so far
Aim: We want to isolate the causal effect of our interventions on our outcomes of interest
• Use rigorous evaluation methods to answer our operational questions
• Randomizing the assignment to treatment is the “gold standard” methodology (simple, precise, cheap)
• What if we really, really (really??) cannot use it?!
>> Where it makes sense, resort to non-experimental methods

3 Non-experimental methods
• Can we find a plausible counterfactual?
• Natural experiment?
• Every non-experimental method is associated with a set of assumptions
• The stronger the assumptions, the more doubtful our measure of the causal effect
• Question our assumptions
▪ Reality check, resort to common sense!

4 Example: Matching Grants Program
• Principal Objective
▪ Increase firm productivity and sales
• Intervention
▪ Matching grants distribution
▪ Non-random assignment
• Target group
▪ SMEs with 1-10 employees
• Main result indicator
▪ Sales

5 Illustration: Matching Grants – Randomization
[chart: (+) impact of the program vs. (+) impact of external factors]

6 Illustration: Matching Grants – Difference-in-Differences
[chart: “before” difference between participants and non-participants; “after” difference between participants and non-participants]
>> What’s the impact of our intervention?

7 Difference-in-Differences Identification Strategy (1)
Counterfactual: 2 formulations that say the same thing
1. Non-participants’ sales after the intervention, accounting for the “before” difference between participants and non-participants (the initial gap between the groups)
2. Participants’ sales before the intervention, accounting for the “before/after” difference for non-participants (the influence of external factors)
• 1 and 2 are equivalent
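In symbols, and purely as a restatement of the slide (the bar-Y notation for group-average sales is mine, with P for participants and NP for non-participants), the two formulations give the same estimator:

```latex
\[
\widehat{DD}
 = \underbrace{\big(\bar{Y}^{P}_{\text{after}} - \bar{Y}^{NP}_{\text{after}}\big)
             - \big(\bar{Y}^{P}_{\text{before}} - \bar{Y}^{NP}_{\text{before}}\big)}_{\text{formulation 1: ``after'' gap net of the initial gap}}
 = \underbrace{\big(\bar{Y}^{P}_{\text{after}} - \bar{Y}^{P}_{\text{before}}\big)
             - \big(\bar{Y}^{NP}_{\text{after}} - \bar{Y}^{NP}_{\text{before}}\big)}_{\text{formulation 2: participants' change net of external factors}}
\]
```

Rearranging the four terms shows the equality is purely algebraic, which is why the two formulations are equivalent.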

8 Data – Example
[table: average sales for participants (P) and non-participants (NP) in 2007 and 2008]

9 “After” difference: P_2008 − NP_2008 = 1.4
“Before” difference: P_2007 − NP_2007 = 1.0
Impact = 0.4
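The same arithmetic as a minimal Python sketch, using only the two gaps reported on the slide (the variable names are mine):

```python
# "Before" (2007) and "after" (2008) gaps between participants (P)
# and non-participants (NP), as reported on the slide.
before_gap = 1.0   # P_2007 - NP_2007
after_gap = 1.4    # P_2008 - NP_2008

# Difference-in-differences: the "after" gap net of the pre-existing gap.
impact = after_gap - before_gap
print(f"Estimated impact: {impact:.1f}")   # -> 0.4
```

With firm-level data, the same number falls out of a regression of sales on a participant dummy, a post-period dummy, and their interaction; the interaction coefficient is the difference-in-differences estimate.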

10 Difference-in-Differences Identification Strategy (2)
Underlying assumption: without the intervention, sales for participants and non-participants would have followed the same trend
>> Graphic intuition coming…

11 [chart: “After” difference: P_2008 − NP_2008 = 1.4; “Before” difference: P_2007 − NP_2007 = 1.0; Impact = 0.4]

12 [chart: when the same-trend assumption fails, Estimated Impact = 0.4 while True Impact = −0.3]

13 Summary
• The assumption of same trend is very strong
• The 2 groups were, in 2007, producing at very different levels
➤ Question the underlying assumption of same trend!
➤ When possible, test the assumption of same trend with data from previous years

14 Questioning the assumption of same trend: use pre-program data
>> Reject the counterfactual assumption of same trend!

15 Questioning the assumption of same trend: use pre-program data
>> Seems reasonable to accept the counterfactual assumption of same trend?!
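One concrete way to run this check, assuming a firm-level panel covering only pre-program years and illustrative column names (firm, year, sales, participant): estimate a placebo difference-in-differences around a fake treatment year. Since no intervention has happened yet, the true effect is zero by construction, so a large interaction coefficient is evidence against the same-trend assumption.

```python
import pandas as pd
import statsmodels.formula.api as smf

def placebo_did(df: pd.DataFrame, fake_treatment_year: int) -> float:
    """Placebo DiD on pre-program data only: with no real intervention,
    the interaction coefficient should be close to zero if trends are parallel."""
    data = df.copy()
    data["post"] = (data["year"] >= fake_treatment_year).astype(int)
    model = smf.ols("sales ~ participant * post", data=data).fit(
        cov_type="cluster", cov_kwds={"groups": data["firm"]}
    )
    return model.params["participant:post"]

# Example (hypothetical): pretend treatment started in 2006, using 2005-2007 data.
# placebo_effect = placebo_did(pre_program_panel, fake_treatment_year=2006)
```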

16 Caveats (1)
• Assuming same trend is often problematic
• No data to test the assumption
• Even if trends are similar the previous year…
▪ Were they always similar (or are we just lucky)?
▪ More importantly, will they always be similar?
▪ Example: another project intervenes in our non-participant firms…

17 Caveats (2)
• What to do? >> Be descriptive!
• Check similarity in observable characteristics
▪ If the groups are not similar along observables, chances are trends will differ in unpredictable ways
>> Still, we cannot check what we cannot see… and unobservable characteristics (ability, motivation, patience, etc.) might matter more than observable ones
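A minimal sketch of such a descriptive check, assuming a baseline dataset with a 0/1 "participant" column and a list of observable covariates (all names are illustrative): compare group means and flag large gaps.

```python
import numpy as np
import pandas as pd

def balance_table(df: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
    """Compare baseline observables between participants and non-participants."""
    treated = df[df["participant"] == 1]
    control = df[df["participant"] == 0]
    rows = []
    for var in covariates:
        m_t, m_c = treated[var].mean(), control[var].mean()
        # Standardized difference: gap in means scaled by the pooled spread.
        pooled_sd = np.sqrt((treated[var].var() + control[var].var()) / 2)
        rows.append({"variable": var,
                     "mean_participants": m_t,
                     "mean_non_participants": m_c,
                     "std_difference": (m_t - m_c) / pooled_sd})
    return pd.DataFrame(rows)
```

A common rule of thumb treats standardized differences much above roughly 0.1-0.2 as a warning that the groups differ substantially on that observable.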

18 Matching Method + Difference-in-Differences (1)
Match participants with non-participants on the basis of observable characteristics
Counterfactual: matched comparison group
• Each program participant is paired with one or more similar non-participant(s) based on observable characteristics
>> On average, matched participants and non-participants share the same observable characteristics (by construction)
• Estimate the effect of our intervention by using difference-in-differences
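A minimal sketch of the combined estimator, under assumed column names (participant, sales_2007, sales_2008, plus baseline covariates): fit a propensity score on observables, pair each participant with its nearest non-participant on that score, then take the difference-in-differences on the matched sample.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def matched_did(df: pd.DataFrame, covariates: list[str]) -> float:
    """Nearest-neighbour propensity-score matching followed by DiD.
    Expects columns: 'participant' (0/1), 'sales_2007', 'sales_2008', covariates."""
    # 1) Propensity score: estimated probability of participating, given observables.
    pscore = LogisticRegression(max_iter=1000).fit(
        df[covariates], df["participant"]
    ).predict_proba(df[covariates])[:, 1]
    df = df.assign(pscore=pscore)

    treated = df[df["participant"] == 1]
    controls = df[df["participant"] == 0]

    # 2) For each participant, find the non-participant with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = controls.iloc[idx.ravel()]

    # 3) DiD on the matched sample: change for participants minus change for matches.
    change_treated = (treated["sales_2008"] - treated["sales_2007"]).mean()
    change_matched = (matched["sales_2008"] - matched["sales_2007"]).mean()
    return change_treated - change_matched
```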

19 Matching Method (2)
Underlying counterfactual assumptions:
• After matching, there are no differences between participants and non-participants in terms of unobservable characteristics
AND/OR
• Unobservable characteristics affect neither the assignment to the treatment nor the outcomes of interest

20 How do we do it?
• Design a control group by establishing close matches in terms of observable characteristics
• Carefully select the variables along which to match participants to their control group
• So that we only retain:
▪ Treatment group: participants that could find a match
▪ Comparison group: non-participants similar enough to the participants
>> We trim out a portion of our treatment group!

21 Implications
• In most cases, we cannot match everyone
• Need to understand who is left out
• Example:
[chart: score vs. wealth for non-participants, participants, and matched individuals, with the portion of the treatment group trimmed out]
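A sketch of the trimming step the figure illustrates, assuming a "pscore" column like the one built in the matching sketch above and an arbitrary caliper of 0.05: participants with no non-participant within the caliper are dropped, and the function reports how many were left out.

```python
import pandas as pd

def trim_to_common_support(df: pd.DataFrame, caliper: float = 0.05) -> pd.DataFrame:
    """Keep non-participants plus the participants who have at least one
    non-participant within `caliper` on the propensity score."""
    control_scores = df.loc[df["participant"] == 0, "pscore"].to_numpy()

    def has_close_control(score: float) -> bool:
        return bool((abs(control_scores - score) <= caliper).any())

    matchable = df["pscore"].apply(has_close_control)
    trimmed = int((df["participant"].eq(1) & ~matchable).sum())
    print(f"Trimmed {trimmed} of {int(df['participant'].sum())} participants "
          "with no comparable non-participant.")
    return df[df["participant"].eq(0) | matchable]
```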

22 Conclusion (1)
• Advantage of the matching method:
▪ Does not require randomization

23 Conclusion (2)
Disadvantages:
• The underlying counterfactual assumption is not plausible in all contexts and is hard to test
▪ Use common sense, be descriptive
• Requires very high-quality data:
▪ Need to control for all factors that influence program placement / the outcome of choice
• Requires a sufficiently large sample size to generate the comparison group
• Cannot always match everyone…

24 Summary
• Randomized controlled trials require minimal assumptions and produce intuitive estimates (sample means!)
• Non-experimental methods require assumptions that must be carefully tested
• More data-intensive
• Not always testable
• Get creative:
▪ Mix and match types of methods!
▪ Address relevant questions with relevant techniques

25 Thank you
Financial support from the Bank Netherlands Partnership Program (BNPP), Bovespa, CVM, the Gender Action Plan (GAP), the Belgium & Luxemburg Poverty Reduction Partnerships (BPRP/LPRP), the Knowledge for Change Program (KCP), the Russia Financial Literacy and Education Trust Fund (RTF), and the Trust Fund for Environmentally & Socially Sustainable Development (TFESSD) is gratefully acknowledged.

