Patricia Gonzalez, OSEP June 14, 2011

Presentation transcript:

Patricia Gonzalez, OSEP June 14, 2011 1

The purpose of annual performance reporting is to demonstrate that IDEA funds are being used to improve or benefit children with disabilities and their families. In the case of SPDG Program funds, the theory is that providing effective professional development to personnel implementing special education or early intervention services will ultimately benefit targeted children and families. 2

- In order to show improvement or benefit in a social condition, change or progress must be demonstrated on outcome variables of interest (that is, an outcome evaluation must occur), and…
- In order to “credit” the SPDG Program, some link must be established between SPDG activities and those changes. 3

- Additionally, judgments based on outcome evaluations rely on determining whether outcomes have improved or are better “as compared” to something else.
- Linking program activities to interventions and comparing those outcomes to other groups (or points in time with the same group) requires an evaluation (research) design. 4

- In special education, evaluation questions involving comparisons often focus on one of the following:
  ◦ Comparisons with non-disabled students (or special/general education teachers)
  ◦ Comparisons of students with different types of disabilities or teachers with different specializations
  ◦ Cross-unit comparisons (districts, schools, classrooms)
  ◦ Longitudinal/repeated measures of the same group 5

- Decisions about the type of evaluation design and the appropriate comparison group depend, for example, on:
  ◦ the evaluation questions
  ◦ the length of the program or intervention
  ◦ the amount of resources available
  ◦ the ability to randomly assign participants to groups 6


- Random assignment of participants to groups improves the rigor of the evaluation, but the use of intact groups, such as classrooms and schools, is much more common in practice.
- The use of propensity scores with intact groups reduces threats to internal validity and improves confidence in evaluation results. 8

Examples of Comparison Group Evaluation Amy Gaumer Erickson, Ph.D., University of Kansas, Center for Research on Learning 9

Is random assignment feasible?  Do you use random assignment of individuals in any of your SPDG activities?  Yes  No 10

Comparing Groups (both receiving intervention) Question: Do teachers and administrators have different perceptions about the level of parental involvement in their schools? 11

Comparing Across Time (intervention group only) Question: Through multi-year participation in the intervention, do educators feel that their schools improved academic and behavior supports for students? 12

Comparing Groups (baseline with demographic variable) Question: What is the level of implementation of research-based transition indicators reported by high school special education teachers? 13

Comparing Across Time (intervention group only) Question: How do students with disabilities in the intervention schools perform on the state communication arts assessment across multiple years of intervention implementation? 14

Comparing Groups (intervention & state average) Question: How do students with disabilities in the intervention schools perform on state assessments compared to the state average? 15

Comparing Groups (stratified sample) Question: In the past year, did the percentage of students with disabilities in the intervention schools who met proficiency on state mathematics assessments increase? 16

Comparing Groups (Intervention & Similar Schools) Question: In the past year, did the percentage of students who met proficiency on state communication arts assessments increase? 17

Developing Stronger Outcome Data 18
- The research design should be clearly articulated from the beginning
- Intervention groups must be clearly defined
- Multiple measures are necessary
- Outcome variables should be collected across time
- Outcome variables should be compared to something (another group, or the same group at other points in time)

Propensity Score Matching
Chunmei (Rose) Zheng, Graduate Research Assistant, Center for Research on Learning (CRL), University of Kansas, 06/08/

Overview of Presentation
- Example of comparison analysis
- General description of PSM
- Steps of PSM
- Resources and References 20

An Example of Comparison Analysis
Research question: Is there a significant difference in youth income between students with job training and students without job training?
The data set looks like this (columns): Individual | Job Training | Income 21

An Example of Comparison Analysis
Years of education might also influence youth income. The data set now looks like this (columns): Individual | Job Training | Income | Education
From Heinrich, C., Maffioli, A., & Vázquez, G. (2010) 22

An Example of Comparison Analysis
After matching on education, the data look like this (columns): Individual (I) | Job Training (JT) | Income | Education | Match | Y1 | Y0 | Difference, where Match lists the matched comparison individual(s), e.g., [3], [2], [2,3], [1].
But what about adding other covariate variables: age, gender, and ethnicity? Matching becomes more and more complicated….
From Heinrich, C., Maffioli, A., & Vázquez, G. (2010) 23
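To make the matching idea concrete, here is a minimal pandas sketch of exact matching on education. The numbers are made up for illustration; they are not the values from Heinrich, Maffioli, & Vázquez (2010), and the column names are placeholders mirroring the slide.

```python
import pandas as pd

# Hypothetical data laid out like the slide's example (values are invented).
df = pd.DataFrame({
    "individual": [1, 2, 3, 4, 5, 6, 7, 8],
    "job_training": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = received job training
    "income": [60, 65, 70, 72, 55, 62, 68, 70],
    "education": [12, 14, 16, 16, 12, 14, 16, 16],
})

treated = df[df["job_training"] == 1]
control = df[df["job_training"] == 0]

# Exact match on education: compare each trained person with the average
# income of untrained people who have the SAME years of education.
control_means = control.groupby("education")["income"].mean()
matched = treated.assign(matched_income=treated["education"].map(control_means))
matched["difference"] = matched["income"] - matched["matched_income"]

print(matched[["individual", "income", "matched_income", "difference"]])
print("Estimated effect of training (mean difference):", matched["difference"].mean())
```

With a single covariate, exact matches are easy to find. Once age, gender, and ethnicity are added, exact matches become rare or impossible, which is what motivates collapsing all covariates into a single propensity score on the next slides.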

General Description of PSM
Why Propensity Score Matching (PSM)?
- The propensity score can reduce the entire set of covariates to a single variable.
- It adjusts for (but does not totally solve the problem of) selection bias.
What is a propensity score? In statistical terms, propensity scores are the estimated conditional probability that a subject will be assigned to a particular treatment, given a vector of observed covariates (Pasta, D. J., p. 262). 24
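Written out formally (this notation is added for clarity and does not appear on the slide), the propensity score for a binary treatment indicator D and a covariate vector X is:

```latex
% Propensity score: the probability of receiving the treatment (D = 1),
% conditional on the observed covariates X = x.
e(x) = \Pr(D = 1 \mid X = x)
```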

General Description of PSM
- Average Treatment Effect (ATE): ATE = E(δ) = E(Y1 − Y0)
- Average Treatment Effect on the Treated (ATT): ATT = E(Y1 − Y0 | D = 1)
- Average Treatment Effect on the Untreated (ATU): ATU = E(Y1 − Y0 | D = 0) 25
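The same estimands in display notation, together with a generic matched-sample estimator of ATT. The estimator is a standard textbook form added here for illustration; the weights w_{ij} come from whatever matching algorithm is chosen, and it is not a formula from the original slide.

```latex
\begin{align*}
\text{ATE} &= \mathbb{E}[\delta] = \mathbb{E}[Y_1 - Y_0] \\
\text{ATT} &= \mathbb{E}[Y_1 - Y_0 \mid D = 1] \\
\text{ATU} &= \mathbb{E}[Y_1 - Y_0 \mid D = 0] \\
% Matched-sample ATT estimate: each treated unit i is compared with a
% weighted average of its matched untreated units j (weights w_{ij}).
\widehat{\text{ATT}} &= \frac{1}{N_T} \sum_{i:\, D_i = 1} \Bigl( Y_i - \sum_{j:\, D_j = 0} w_{ij}\, Y_j \Bigr)
\end{align*}
```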

Steps of PSM
1. Estimate the propensity score.
2. Choose a matching algorithm that will use the estimated propensity scores to match untreated units to treated units.
3. Estimate the impact of the intervention with the matched sample and calculate standard errors.
From Heinrich, C., Maffioli, A., & Vázquez, G. (2010) 26
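These three steps can be sketched in a few lines of Python with scikit-learn (the next slide lists software commonly used for PSM: Stata, R, SPSS, S-Plus, SAS, Mplus). This is a minimal illustration under stated assumptions, not the authors' implementation: the column names are placeholders, the matching is 1-to-1 nearest neighbor with replacement, and the naive standard error ignores the fact that the propensity scores are themselves estimated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_att(df, treatment="job_training", outcome="income",
            covariates=("education", "age")):
    """Toy propensity-score matching estimate of ATT from a pandas DataFrame."""
    X = df[list(covariates)].to_numpy()
    d = df[treatment].to_numpy()
    y = df[outcome].to_numpy()

    # Step 1: estimate propensity scores with a logistic regression.
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]

    # Step 2: match each treated unit to its nearest untreated unit
    # (1-to-1, with replacement) on the estimated propensity score.
    treated = np.flatnonzero(d == 1)
    control = np.flatnonzero(d == 0)
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched_control = control[idx.ravel()]

    # Step 3: estimate the impact (ATT) as the mean treated-minus-matched
    # difference, with a naive standard error.
    diffs = y[treated] - y[matched_control]
    return diffs.mean(), diffs.std(ddof=1) / np.sqrt(len(diffs))
```

For example, psm_att(df, covariates=("education",)) could be run on the hypothetical job-training DataFrame from the earlier sketch; in practice, checks of covariate balance and common support should accompany any such estimate.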

Software for PSM: Stata, R, SPSS, S-Plus, SAS, Mplus 27

References
Pasta, D. J. (n.d.). Using propensity scores to adjust for group differences: Examples comparing alternative surgical methods. SUGI paper.
Heinrich, C., Maffioli, A., & Vázquez, G. (2010). A primer for applying propensity-score matching. SPD Working Papers.
Guo, S., & Fraser, M. W. (2010). Propensity score analysis: Statistical methods and applications. Sage. 28