1 Patricia Gonzalez, OSEP, June 14, 2011

2 The purpose of annual performance reporting is to demonstrate that IDEA funds are being used to improve or benefit children with disabilities and their families. In the case of SPDG Program funds, the theory is that providing effective professional development to personnel implementing special education or early intervention services will ultimately benefit targeted children and families.

3
 In order to show improvement or benefit in a social condition, change or progress must be demonstrated on outcome variables of interest (that is, an outcome evaluation must occur), and…
 In order to “credit” the SPDG Program, some link must be established between SPDG activities and those changes.

4
 Additionally, judgments based on outcome evaluations rely on determining whether outcomes have improved or are better “as compared” to something else.
 Linking program activities to interventions and comparing those outcomes to other groups (or to points in time with the same group) requires an evaluation (research) design.

5
 In special education, evaluation questions involving comparisons often focus on one of the following:
◦ Comparisons with non-disabled students (or special/general education teachers)
◦ Comparisons of students with different types of disabilities, or teachers with different specializations
◦ Cross-unit comparisons (districts, schools, classrooms)
◦ Longitudinal/repeated measures of the same group

6
 Decisions about the type of evaluation design and the appropriate comparison group depend, for example, on:
◦ the evaluation questions
◦ the length of the program or intervention
◦ the amount of resources available
◦ the ability to randomly assign participants to groups

7

8
 Random assignment of participants to groups improves the rigor of the evaluation, but the use of intact groups, such as classrooms and schools, is much more common in practice.
 The use of propensity scores with intact groups reduces threats to internal validity and improves confidence in evaluation results.

9 Examples of Comparison Group Evaluation
Amy Gaumer Erickson, Ph.D., aerickson@ku.edu
University of Kansas, Center for Research on Learning

10 Is random assignment feasible?
 Do you use random assignment of individuals in any of your SPDG activities?
 Yes
 No

11 Comparing Groups (both receiving intervention)
Question: Do teachers and administrators have different perceptions about the level of parental involvement in their schools?

12 Comparing Across Time (intervention group only)
Question: Through multi-year participation in the intervention, do educators feel that their schools improved academic and behavior supports for students?

13 Comparing Groups (baseline with demographic variable)
Question: What is the level of implementation of research-based transition indicators reported by high school special education teachers?

14 Comparing Across Time (intervention group only)
Question: How do students with disabilities in the intervention schools perform on the state communication arts assessment across multiple years of intervention implementation?

15 Comparing Groups (intervention & state average)
Question: How do students with disabilities in the intervention schools perform on state assessments compared to the state average?

16 Comparing Groups (stratified sample)
Question: In the past year, did the percentage of students with disabilities in the intervention schools who met proficiency on state mathematics assessments increase?

17 Comparing Groups (intervention & similar schools)
Question: In the past year, did the percentage of students who met proficiency on state communication arts assessments increase?

18 Developing Stronger Outcome Data
 The research design should be clearly articulated from the beginning
 Intervention groups must be clearly defined
 Multiple measures are necessary
 Outcome variables should be collected across time
 Outcome variables should be compared to something

19 Propensity Score Matching
Chunmei (Rose) Zheng, Graduate Research Assistant
Center for Research on Learning (CRL), University of Kansas
06/08/2011

20 Overview of Presentation
 Example of comparison analysis
 General description of PSM
 Steps of PSM
 Resources and references

21 An Example of Comparison Analysis
Research question: Is there a significant difference in youth income between students with job training and students without job training?
The data set looks like:

Individual | Job Training | Income
1          | 0            | 60
2          | 0            | 80
3          | 0            | 90
4          | 0            | 200
5          | 1            | 100
6          | 1            | 80
7          | 1            | 90
8          | 1            | 70
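As a quick sketch, the naive group comparison on this data set can be computed directly (an illustration using the slide's numbers; the raw mean-difference analysis and variable names are my own choices, not the presenters'):

```python
# Naive comparison: difference in mean income between trained and
# untrained youth, using the 8-person example table from the slide.

data = [
    # (individual, job_training, income)
    (1, 0, 60), (2, 0, 80), (3, 0, 90), (4, 0, 200),
    (5, 1, 100), (6, 1, 80), (7, 1, 90), (8, 1, 70),
]

def mean(xs):
    return sum(xs) / len(xs)

treated = [inc for _, d, inc in data if d == 1]
untreated = [inc for _, d, inc in data if d == 0]

naive_diff = mean(treated) - mean(untreated)
print(mean(treated), mean(untreated), naive_diff)  # 85.0 107.5 -22.5
```

Note that the raw difference is negative mainly because one untrained individual (income 200) pulls up the untreated mean, which is exactly the kind of imbalance the next slides address.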

22 An Example of Comparison Analysis
Years of education might also influence youth income. The data set looks like:

Individual | Job Training | Income | Education
1          | 0            | 60     | 2
2          | 0            | 80     | 3
3          | 0            | 90     | 5
4          | 0            | 200    | 12
5          | 1            | 100    | 5
6          | 1            | 80     | 3
7          | 1            | 90     | 4
8          | 1            | 70     | 2

From Heinrich, C., Maffioli, A., & Vázquez, G. (2010)

23 An Example of Comparison Analysis
After matching on education, the data look like:

I | JT | Income | Education | Match | Y1  | Y0 | Difference
1 | 0  | 60     | 2         | ---   |     |    |
2 | 0  | 80     | 3         | ---   |     |    |
3 | 0  | 90     | 5         | ---   |     |    |
4 | 0  | 200    | 12        | ---   |     |    |
5 | 1  | 100    | 5         | [3]   | 100 | 90 | 10
6 | 1  | 80     | 3         | [2]   | 80  | 80 | 0
7 | 1  | 90     | 4         | [2,3] | 90  | 85 | 5
8 | 1  | 70     | 2         | [1]   | 70  | 60 | 10

But what about adding other covariates: age, gender, and ethnicity? Matching becomes more and more complicated….
From Heinrich, C., Maffioli, A., & Vázquez, G. (2010)
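The matching in the table can be reproduced with a small nearest-neighbour sketch (ties are averaged, as the table does for individual 7; this code is an illustration, not the authors' implementation):

```python
# Nearest-neighbour matching on years of education, reproducing the
# matched table above. Each treated unit is paired with the untreated
# unit(s) whose education is closest; tied matches are averaged.

untreated = {1: (60, 2), 2: (80, 3), 3: (90, 5), 4: (200, 12)}  # id: (income, edu)
treated   = {5: (100, 5), 6: (80, 3), 7: (90, 4), 8: (70, 2)}

diffs = []
for tid, (y1, edu) in treated.items():
    best = min(abs(e - edu) for _, e in untreated.values())
    matches = [y0 for y0, e in untreated.values() if abs(e - edu) == best]
    y0 = sum(matches) / len(matches)   # average income over tied matches
    diffs.append(y1 - y0)

att = sum(diffs) / len(diffs)
print(diffs, att)  # [10.0, 0.0, 5.0, 10.0] 6.25
```

The per-pair differences (10, 0, 5, 10) match the table's Difference column, and their mean (6.25) is the matched estimate of the training effect on the trained.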

24 General Description of PSM
Why Propensity Score Matching (PSM)?
-- A propensity score reduces the entire set of covariates to a single variable.
-- It adjusts for (but does not totally solve the problem of) selection bias.
What is a propensity score?
In statistical terms, propensity scores are the estimated conditional probability that a subject will be assigned to a particular treatment, given a vector of observed covariates (Pasta, D. J., p. 262).

25 General Description of PSM
Average Treatment Effect (ATE): ATE = E(δ) = E(Y1 − Y0)
Average Treatment Effect on the Treated (ATT): ATT = E(Y1 − Y0 | D = 1)
Average Treatment Effect on the Untreated (ATU): ATU = E(Y1 − Y0 | D = 0)
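A toy illustration of the three estimands (the potential-outcome values below are invented purely for illustration; in real data only one of Y1/Y0 is observed for each unit, which is why matching is needed at all):

```python
# ATE, ATT, and ATU on a hypothetical 4-unit example where both
# potential outcomes are (unrealistically) known for every unit.

y1 = [10, 8, 6, 4]   # potential outcome under treatment
y0 = [5, 5, 5, 5]    # potential outcome without treatment
d  = [1, 1, 0, 0]    # actual treatment assignment

effects = [a - b for a, b in zip(y1, y0)]   # unit-level effects: [5, 3, 1, -1]

ate = sum(effects) / len(effects)                                  # over everyone
att = sum(e for e, t in zip(effects, d) if t == 1) / d.count(1)    # over the treated
atu = sum(e for e, t in zip(effects, d) if t == 0) / d.count(0)    # over the untreated
print(ate, att, atu)  # 2.0 4.0 0.0
```

The example shows why the three quantities can differ: here the units that actually got treatment were also the ones who benefited most, so ATT > ATE > ATU.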

26 Steps of PSM
1. Estimate the propensity score.
2. Choose a matching algorithm that will use the estimated propensity scores to match untreated units to treated units.
3. Estimate the impact of the intervention with the matched sample and calculate standard errors.
--- From Heinrich, C., Maffioli, A., & Vázquez, G. (2010)
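The three steps can be sketched end to end on the slides' job-training example (a hedged illustration: the hand-rolled logistic-regression fit, learning rate, and iteration count are my assumptions, and in practice one would use a statistics package from the next slide):

```python
import math

# (individual, treated?, income, years of education) -- from the slides
data = [(1, 0, 60, 2), (2, 0, 80, 3), (3, 0, 90, 5), (4, 0, 200, 12),
        (5, 1, 100, 5), (6, 1, 80, 3), (7, 1, 90, 4), (8, 1, 70, 2)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Step 1: estimate P(D=1 | education) = sigmoid(b0 + b1*education)
# with full-batch gradient descent on the logistic log-likelihood.
b0 = b1 = 0.0
for _ in range(20000):
    g0 = g1 = 0.0
    for _, d, _, edu in data:
        err = sigmoid(b0 + b1 * edu) - d   # prediction error for this unit
        g0 += err
        g1 += err * edu
    b0 -= 0.01 * g0
    b1 -= 0.01 * g1

scores = {i: sigmoid(b0 + b1 * edu) for i, _, _, edu in data}

# Step 2: match each treated unit to the untreated unit with the
# nearest propensity score.
untreated = [(i, inc) for i, d, inc, _ in data if d == 0]
treated = [(i, inc) for i, d, inc, _ in data if d == 1]
diffs = []
for ti, y1 in treated:
    mi, y0 = min(untreated, key=lambda u: abs(scores[u[0]] - scores[ti]))
    diffs.append(y1 - y0)

# Step 3: estimate the impact on the treated from the matched sample.
att = sum(diffs) / len(diffs)
print(round(att, 2))
```

Since education is the only covariate here, matching on the score is equivalent to matching on education; with many covariates the single score is what keeps the matching tractable. (Standard errors, the second part of step 3, are omitted from this sketch.)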

27 Software for PSM
 Stata
 R
 SPSS
 S-Plus
 SAS
 Mplus

28 References
Pasta, D. J. (n.d.). Using propensity scores to adjust for group differences: Examples comparing alternative surgical methods. SUGI Paper 261-25.
Heinrich, C., Maffioli, A., & Vázquez, G. (2010). A primer for applying propensity-score matching. SPD Working Papers.
Guo, S., & Fraser, M. W. (2010). Propensity score analysis: Statistical methods and applications. Sage.

