Quasi-Experimental Methods


1 Quasi-Experimental Methods
Jean-Louis Arcand The Graduate Institute | Geneva This presentation draws on previous presentations by Markus Goldstein, Leandre Bassole, and Alberto Martini

2 Objective
Find a plausible counterfactual
Every method is associated with an assumption
The stronger the assumption, the more we need to worry about the causal effect
Question your assumptions: reality check

3 Program to evaluate
Hopetown HIV/AIDS Program (2008-2012)
Objective: Reduce HIV transmission
Intervention: Peer education
Target group: Youth 15-24
Indicator: Pregnancy rate (proxy for unprotected sex)

4 I. Before-after identification strategy (aka reflexive comparison)
Counterfactual: Rate of pregnancy observed before program started EFFECT = After minus Before

5 Teen pregnancy rate (per 1000) in 70 areas
2008: 62.90
2012: 66.37
Difference: +3.47

6 Counterfactual assumption: no change over time
Effect = +3.47
Question: what else might have happened between 2008 and 2012 to affect teen pregnancy?

7 Examine assumption with prior data
Teen pregnancy (per 1000) in 70 areas
2004: 54.96
2008: 62.90
2012: 66.37
The assumption of no change over time looks a bit shaky
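
To make this check concrete, here is a minimal sketch using the rates from the table; the linear extrapolation of the 2004-2008 trend as an alternative counterfactual is an illustration added here, not part of the original analysis.

```python
# Rates from the slide (teen pregnancies per 1000).
r2004, r2008, r2012 = 54.96, 62.90, 66.37

# Naive before-after estimate: assumes no change absent the program.
before_after = r2012 - r2008            # +3.47

# If instead we extrapolate the pre-program trend linearly, the
# counterfactual 2012 rate is well above the observed one, and the
# estimated "effect" even changes sign.
trend_counterfactual = r2008 + (r2008 - r2004)   # 70.84
trend_adjusted = r2012 - trend_counterfactual    # -4.47

print(round(before_after, 2), round(trend_adjusted, 2))  # 3.47 -4.47
```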

8 II. Non-participant identification strategy
Counterfactual: Rate of pregnancy among non-participants
Teen pregnancy rate (per 1000) in 2012
Participants: 66.37
Non-participants: 57.50
Difference: +8.87

9 Counterfactual assumption: Without the intervention, participants would have the same pregnancy rate as non-participants
Effect = +8.87
Question: how might participants differ from non-participants?

10 Test assumption with pre-program data
REJECT the counterfactual hypothesis of same pregnancy rates

11 III. Difference-in-Difference identification strategy
Counterfactual:
1. Nonparticipant rate of pregnancy, purging pre-program differences between participants and nonparticipants
2. "Before" rate of pregnancy, purging the before-after change for nonparticipants
(1 and 2 are equivalent)

12 Average rate of teen pregnancy in 2008 and 2012
                       2008    2012    Difference (2012-2008)
Participants (P)       62.90   66.37    3.47
Non-participants (NP)  46.37   57.50   11.13
Difference (P-NP)      16.53    8.87   -7.66

13 Participants: 66.37 – 62.90 = 3.47
Non-participants: 57.50 – 46.37 = 11.13
Effect = 3.47 – 11.13 = -7.66

14 After: 66.37 – 57.50 = 8.87
Before: 62.90 – 46.37 = 16.53
Effect = 8.87 – 16.53 = -7.66
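
Slides 13 and 14 compute the same number in two ways; a short sketch with the cell means from slide 12 verifies that the two decompositions agree:

```python
# The four cell means from the slides (teen pregnancies per 1000).
p_before, p_after = 62.90, 66.37        # participants, 2008 and 2012
np_before, np_after = 46.37, 57.50      # non-participants, 2008 and 2012

# Decomposition 1: change for participants minus change for non-participants.
did_trends = (p_after - p_before) - (np_after - np_before)

# Decomposition 2: after gap minus before gap.
did_gaps = (p_after - np_after) - (p_before - np_before)

assert round(did_trends, 2) == round(did_gaps, 2) == -7.66
print(round(did_trends, 2))  # -7.66
```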

15 Counterfactual assumption:
Without the intervention, participants' and nonparticipants' pregnancy rates would follow the same trend

16 [Graph: pregnancy-rate trends for participants and nonparticipants; under the common-trends assumption the participants' 2012 counterfactual is 74.0, reflecting the 16.5 pre-program gap]

17 [Graph: same trends; the gap between the observed 66.37 and the 74.0 counterfactual gives the -7.6 effect]

18 Questioning the assumption
Why might participants' trends differ from those of nonparticipants?

19 Examine assumption with pre-program data
Average rate of teen pregnancy
                       2004    2008    Difference (2008-2004)
Participants (P)       54.96   62.90   7.94
Non-participants (NP)  39.96   46.37   6.41
Difference (P-NP)      15.00   16.53   +1.53
Or check with other outcomes not affected by the intervention (e.g. household consumption)
The counterfactual hypothesis of same trends doesn't look so believable

20 IV. Matching with Difference-in-Difference identification strategy
Counterfactual: A comparison group is constructed by pairing each program participant with a "similar" nonparticipant from a larger dataset, creating a control group from non-participants who are similar in observable ways

21 Counterfactual assumption:
Unobserved characteristics do not affect outcomes of interest Unobserved = things we cannot measure (e.g. ability) or things we left out of the dataset Question: how might participants differ from matched nonparticipants?
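
A minimal sketch of the idea on synthetic data; the variables, sample size, and the true effect of -2 are all hypothetical, chosen only to illustrate matching on an observable followed by a within-pair difference-in-difference:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one observable covariate per area (say, a baseline
# poverty measure), pregnancy rates before and after, and participation
# that is selected on the observable.
n = 200
x = rng.normal(size=n)
participates = x + rng.normal(size=n) > 0
before = 50 + 5 * x + rng.normal(size=n)
after = before + 3 - 2 * participates + rng.normal(size=n)

# Match each participant to the nearest non-participant on x, then take
# the difference-in-difference within each matched pair.
treated = np.where(participates)[0]
controls = np.where(~participates)[0]
effects = []
for i in treated:
    j = controls[np.argmin(np.abs(x[controls] - x[i]))]
    effects.append((after[i] - before[i]) - (after[j] - before[j]))

print(round(float(np.mean(effects)), 2))  # close to the true effect of -2
```

The matching removes level differences driven by the observable, and the within-pair DiD removes the common time trend; anything unobserved that drives both participation and the outcome would still bias this estimate, which is the assumption the slide warns about.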

22 Participant: 66.37
Matched nonparticipant: 73.36
Effect = 66.37 – 73.36 = -6.99

23 Can only test assumption with experimental data
Studies that compare both methods (because they have experimental data) find that: unobservables often matter! direction of bias is unpredictable! Apply with care – think very hard about unobservables

24 V. Regression discontinuity identification strategy
Applicability: When strict quantitative criteria determine eligibility Counterfactual: Nonparticipants just below the eligibility cutoff are the comparison for participants just above the eligibility cutoff

25 Counterfactual assumption:
Nonparticipants just below the eligibility cutoff are the same (in observable and unobservable ways) as participants just above it
Question: Is the distribution around the cutoff smooth? Then the assumption might be reasonable
Question: Are unobservables likely to be important (e.g. correlated with the cutoff criteria)? Then the assumption might not be reasonable
However, we can only estimate the impact around the cutoff, not for the whole program

26 Example: Effect of school inputs on test scores
Target a transfer to the poorest schools
Construct a poverty index from 1 to 100
Schools with a score <= 50 are in; schools with a score > 50 are out
Transfer inputs to the poor schools
Measure outcomes (i.e. test scores) before and after the transfer
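
A sketch of how the discontinuity in this example could be estimated; the data, the +4 true effect, and the bandwidth of 15 are hypothetical choices for illustration, and the regression pools a common slope on both sides of the cutoff to keep it short:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical schools: poverty index 1-100, cutoff at 50 (score <= 50
# gets the transfer). Test scores trend smoothly in the index, with an
# assumed true jump of +4 for treated schools.
index = rng.uniform(1, 100, size=500)
treated = index <= 50
scores = 60 - 0.2 * index + 4 * treated + rng.normal(0, 2, size=500)

# Local linear regression within a bandwidth around the cutoff:
#   score ~ a + b*(index - 50) + tau*treated
h = 15
w = np.abs(index - 50) <= h
X = np.column_stack([np.ones(w.sum()), index[w] - 50, treated[w].astype(float)])
tau = np.linalg.lstsq(X, scores[w], rcond=None)[0][2]
print(round(float(tau), 2))  # estimated jump at the cutoff, near the true +4
```

Note that tau is only identified for schools near the cutoff, which is exactly the "local" caveat on the previous slide.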

27-30 [Graphs: test scores plotted against the poverty index, with Non-Poor and Poor schools on either side of the cutoff; the Treatment Effect is the jump in scores at the discontinuity]

31 Applying RDD in practice: lessons from an HIV-nutrition program
Lesson 1: Criteria not applied well
Multiple criteria: household size, income level, months on ART
The nutritionist helps her friends fill out the form with the "right" answers
Now unobservables separate treatment from control…
Lesson 2: Watch out for criteria that can be altered (e.g. land holding size)

32 Summary
The gold standard is randomization: minimal assumptions needed, intuitive estimates
Nonexperimental methods require assumptions. Can you defend them?

33 Different assumptions will give you different results
The program: ART treatment for adult patients
Impact of interest: effect of ART on children of patients (are there spillover and intergenerational effects of treatment?)
Outcomes: child education (attendance) and child nutrition
Data: 250 patient households, 500 random-sample households, before and after treatment
We can't randomize ART, so what is the counterfactual?

34 Possible counterfactual candidates
Random sample, difference-in-difference: are they on the same trajectory?
Orphans (parents died: what would have happened in the absence of treatment?). But when did they die, which orphans do you observe, and which do you not?
Parents who self-report moderate to high risk of HIV. Self-report!
Propensity score matching. Unobservables (so why do people get HIV?)

35 Estimates of treatment effects using alternative comparison groups
Effects are now very large and significant for all kids, particularly in newly treated households
Effects represent a 30-50% increase relative to orphans and high/moderate-risk kids within the first 100 days of treatment (base = 29 hrs)
Boys continue to experience large increases after 100 days
Compare to around 6.4 if we use the simple difference-in-difference with the random sample
We can also estimate the average treatment effect on the treated using a propensity score approach
Standard errors are clustered at the household level in each round. Includes child fixed effects, a round 2 indicator, and month-of-interview indicators.

36 Estimating ATT using propensity score matching
Allows us to define the comparison group using more than one characteristic of children and their households
Propensity scores are defined at the household level, with the most significant variables being single-headed household and HIV risk
As such, we can view the propensity score approach as a hybrid of the prior comparisons
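
The two-step procedure can be sketched as follows on synthetic data; the variables mirror the slide's selection story (single-headedness and HIV risk), but the numbers and the +5 true effect are hypothetical, and a logistic score stands in for the slide's probit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical households: two characteristics drive selection into ART
# treatment, and the outcome is child school hours with an assumed true
# treatment effect of +5.
n = 600
single_head = rng.integers(0, 2, size=n)
hiv_risk = rng.normal(size=n)
p = 1 / (1 + np.exp(-(-1 + 1.5 * single_head + hiv_risk)))
treated = rng.uniform(size=n) < p
hours = 20 + 3 * single_head + 2 * hiv_risk + 5 * treated + rng.normal(size=n)

# Step 1: estimate propensity scores from the observables.
X = np.column_stack([single_head, hiv_risk])
score = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: ATT by nearest-neighbor matching on the estimated score.
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = c_idx[np.abs(score[c_idx][None, :] - score[t_idx][:, None]).argmin(axis=1)]
att = float(np.mean(hours[t_idx] - hours[matches]))
print(round(att, 1))  # near the assumed true effect of +5
```

As on slide 21, this is only unbiased if everything that drives both treatment and the outcome is in the score model; unobservables would bias it.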

37 Probit regression results
Dependent variable: household has an adult ARV recipient
Interesting to note that neither wealth nor travel time were strong predictors of treatment
We use these coefficients to calculate propensity scores for everyone in our sample

38 ATT using propensity score matching
Nearest-neighbor matching is significant at less than the 5% level, and kernel matching nearly so
Two things to note: we are really power-constrained in the matching, so we do not break the sample into newly and veteran treated
Relative to the other comparisons, the numbers are quite similar to those for orphans and for high/moderate-risk households
Now ready to turn attention to nutrition

39 Nutritional impacts of ARV treatment
Very large increase in BMI z-scores of 0.57 standard deviations
Reassuring that we do not see significant changes in height-for-age, which is a longer-term measure of nutrition
Also a significant 11% decrease in wasting, which does not kick in until the parent has been treated for a while
Note: interviewer fixed effects are included because measuring height is tricky and may depend on the skill and experience of the interviewer
What about alternative comparison groups?
Includes child fixed effects, age controls, a round 2 indicator, interviewer fixed effects, and month-of-interview indicators.

40 Nutrition with alternative comparison groups
Comparison to orphans is insignificant, but there are only 7 orphans under 5 in the random sample
Comparison to high/moderate-risk households reveals larger nutritional impacts – increase in z-score of standard deviations (for context: Duflo's (2003) work on pensions in South Africa found a 1.19 improvement in z-score)
Note: not enough power to do the propensity score approach here
So what is the take-away?
Includes child fixed effects, age controls, a round 2 indicator, interviewer fixed effects, and month-of-interview indicators.

41 Summary: choosing among non-experimental methods
At the end of the day, they can give us quite different estimates (or not, in some rare cases) Which assumption can we live with?

42 Thank You

