
1 Rigorous Quasi-Experimental Evaluations: Design Considerations
Sung-Woo Cho, Ph.D.
June 11, 2015
Success from the Start: Round 4 Convening
US Department of Labor, Washington, DC

2 Abt Associates | pg 2
Objectives
- Present fundamental concepts of quasi-experimental designs (QEDs), particularly matched comparison group designs
- Discuss issues with clustered designs
- Discuss issues with using a comparison group from an earlier time period
- Discuss examples from the field

3 How Do You Measure Impact?
- Often, the most important questions you want to answer are whether an intervention is improving student outcomes
- Answering these questions requires more than simply tracking a group of students or participants over time
- An impact analysis is designed to answer questions about the effectiveness of interventions

4 Randomized Controlled Trial (RCT)
- An experimental design in which individuals are randomly assigned either to receive the intervention (treatment group) or not (control group)
- Often considered the “gold standard” of impact evaluations
- Once you collect outcome data on both groups, you can measure the difference in mean outcomes to estimate the effect of the intervention
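The difference-in-means estimate described above can be sketched with simulated data. Everything here (the student records, the true effect size, the noise levels) is invented purely for illustration:

```python
import random
import statistics

random.seed(0)

# Hypothetical student records: each has a baseline ability score.
students = [{"id": i, "ability": random.gauss(50, 10)} for i in range(200)]

# Random assignment: shuffle the list, then split it in half.
random.shuffle(students)
treatment = students[:100]
control = students[100:]

# Simulated outcomes: everyone gets noise; the treatment group also
# gets a boost equal to the (assumed) true effect.
TRUE_EFFECT = 5.0

def outcome(student, treated):
    boost = TRUE_EFFECT if treated else 0.0
    return student["ability"] + boost + random.gauss(0, 5)

t_outcomes = [outcome(s, True) for s in treatment]
c_outcomes = [outcome(s, False) for s in control]

# The impact estimate is the simple difference in mean outcomes.
impact = statistics.mean(t_outcomes) - statistics.mean(c_outcomes)
print(f"Estimated impact: {impact:.2f} (true effect: {TRUE_EFFECT})")
```

Because assignment is random, the treatment and control groups are similar on average at baseline, so the difference in means is an unbiased estimate of the effect (up to sampling noise).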

5 If Randomization Is Not Possible
- Randomization is often infeasible, for a variety of reasons
  – Difficulty in withholding the intervention from one group, because the institution wants all eligible individuals to receive it
  – Costs and time associated with administering randomization and follow-up
- If running an RCT is not an option, you may be able to use a quasi-experimental design (QED) to estimate the impact of an intervention

6 Quasi-Experimental Design (QED)
- The basic idea is to match a treatment group (students, for example) to a comparison group (of similar students)
  – Match students on their characteristics (not their outcomes): gender, age, ethnicity, and other demographic or academic characteristics
- In the end, you have treatment and comparison groups of students that look similar on key characteristics, except that only the treatment group received the intervention

7 QED Using a Matching Strategy: 10 treatment students, 15 comparison students
8 Match Based on Characteristics: 10 treatment students, 15 comparison students
9 Five Comparison Students Are Not Matched (in red): 10 treatment students, 15 comparison students
10 And They Are Left Out of the Sample: 10 treatment students, 10 comparison students
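The matching sequence in slides 7 through 10 can be sketched as a greedy 1:1 nearest-neighbor match on characteristics, where unmatched comparison students are dropped. The students, the two characteristics, and the distance metric below are all hypothetical stand-ins:

```python
import math
import random

random.seed(1)

def make_student(sid):
    # Characteristics only (not outcomes): age and a prior GPA.
    return {"id": sid, "age": random.uniform(18, 30), "gpa": random.uniform(1.0, 4.0)}

treatment = [make_student(f"T{i}") for i in range(10)]
comparison = [make_student(f"C{i}") for i in range(15)]

def distance(a, b):
    # Euclidean distance on roughly rescaled characteristics, so that
    # neither variable dominates purely because of its units.
    return math.hypot((a["age"] - b["age"]) / 12.0, (a["gpa"] - b["gpa"]) / 3.0)

# Greedy 1:1 nearest-neighbor matching without replacement: each
# treatment student takes the closest still-available comparison student.
available = list(comparison)
matches = []
for t in treatment:
    best = min(available, key=lambda c: distance(t, c))
    matches.append((t, best))
    available.remove(best)

matched_comparison = [c for _, c in matches]
print(f"{len(matched_comparison)} comparison students matched, "
      f"{len(available)} left out of the sample")
# → 10 matched, 5 left out, mirroring slides 9 and 10
```

In practice, evaluators often match on a propensity score (the estimated probability of treatment given the characteristics) rather than raw Euclidean distance, but the drop-the-unmatched logic is the same.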

11 Baseline Equivalence
- Once you have treatment and comparison groups that look similar, measure their baseline (that is, pre-intervention) characteristics
  – Ex: Test scores or wages prior to the start of the intervention
- Demonstrate that the treatment and comparison groups are very similar at baseline
- This helps convince your audience that differences in outcomes are due to the intervention's impact on the treatment group
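A common way to demonstrate baseline equivalence is to report the standardized mean difference (SMD) on each baseline characteristic. The scores below are made up, and the 0.25 threshold is only one rule of thumb (thresholds vary across evaluation standards):

```python
import math
import statistics

# Hypothetical baseline test scores for matched treatment and comparison groups.
treat_scores = [72, 68, 75, 70, 66, 74, 71, 69, 73, 67]
comp_scores  = [71, 69, 74, 68, 67, 73, 70, 70, 72, 66]

def standardized_mean_difference(t, c):
    """SMD = (mean_t - mean_c) / pooled standard deviation."""
    pooled_var = (statistics.variance(t) + statistics.variance(c)) / 2
    return (statistics.mean(t) - statistics.mean(c)) / math.sqrt(pooled_var)

smd = standardized_mean_difference(treat_scores, comp_scores)
print(f"Baseline SMD: {smd:.3f}")
# A small |SMD| (e.g., below 0.25) is often read as acceptable balance,
# though some standards require statistical adjustment above ~0.05.
```

Reporting the SMD for every matching variable, in a baseline equivalence table, is the standard way to make the "groups look similar" claim concrete.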

12 Clustered Designs
- Sometimes, whether a person is in the treatment or comparison group depends on whether they attend a certain community college, or live in a certain district, county, etc.
- In these situations, the community college is a cluster: treatment or comparison status depends on which college a student attends
  – As opposed to a design where students can be in either condition within the same community college

13 Clusters and Power
- A clustered design may make it easier to tell which people are in the treatment condition versus the comparison condition
  – Ex: A student in community college X received the treatment, while a student in community college Y did not
- However, a clustered design diminishes power, that is, your ability to detect an impact
  – There is typically less variation between clusters than between individual students
  – A design where the treatment and comparison conditions are assigned at the student level will have greater power
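The power loss from clustering is often summarized with the design effect, 1 + (m − 1) × ICC, where m is the cluster size and the ICC (intraclass correlation) is the share of outcome variance that sits between clusters. A minimal sketch, with cluster counts and an ICC of 0.10 assumed purely for illustration:

```python
# Hypothetical clustered sample: 8 community colleges, 50 students each.
n_clusters = 8
cluster_size = 50
total_n = n_clusters * cluster_size  # 400 students

# Intraclass correlation (ICC): the assumed share of outcome variance
# between colleges. 0.10 is an illustrative value, not an empirical one.
icc = 0.10

# Design effect: how much clustering inflates the variance of the
# impact estimate relative to a simple student-level sample.
design_effect = 1 + (cluster_size - 1) * icc

# Effective sample size: what the clustered sample is "worth".
effective_n = total_n / design_effect
print(f"Design effect: {design_effect:.1f}")
print(f"{total_n} clustered students ~ {effective_n:.0f} independent students")
```

Even a modest ICC shrinks the effective sample dramatically, which is why clustered QEDs need many more students (or, more importantly, many more clusters) to reach the same power as a student-level design.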

14 Clusters and “N of 1 Confounds”
- In certain situations, a treatment or comparison group may consist of only one community college
  – Ex: One community college in the state runs the program, and you want to use three surrounding community colleges as comparison colleges
- In this situation, how would you know whether the program is affecting outcomes, or whether the characteristics of that one community college are affecting them?
- We call this an “N of 1 confound”: the impact of the program cannot be disentangled from the characteristics of the cluster
  – TA guidance: Avoid N of 1 confounds!

15 Timing of Comparison Groups
- One way some evaluators have created comparison groups is to collect information on previous cohorts (pre-intervention)
  – Ex: If treatment starts in Fall 2015, the comparison group might be a cohort that started in Fall 2013, prior to the start of the program
- However, what if a major change occurred between the appearance of the two cohorts?
  – Ex: A major policy change at the community college in Fall 2014 that had nothing to do with the program but may have affected student outcomes

16 Timing of Comparison Groups
- In the previous case, one may argue that a time-related bias affected outcomes for one group and not the other
- Having your treatment and comparison groups start at the same time avoids this type of bias
  – Baseline test scores or wages would be measured right before the start of the program for both groups

17 Concluding Remarks
- Try to compare your treatment students' outcomes against those of similar comparison students using a QED
  – Match students across the treatment and comparison groups using the information you have on them
  – Clustered QEDs have lower power than non-clustered designs
  – Previous cohorts can serve as a comparison group, but keep time-related bias in mind

18 Additional Questions?
Sung-Woo Cho, Ph.D.
sung-woo_cho@abtassoc.com
Office: 301-347-5843
