Longitudinal Analysis of Effects of Reclassification, Reporting Methods, and Analytical Techniques on Trends in Math Performance of Students with Disabilities.


Longitudinal Analysis of Effects of Reclassification, Reporting Methods, and Analytical Techniques on Trends in Math Performance of Students with Disabilities Yi-Chen Wu, Martha Thurlow, & Sheryl Lazarus National Center on Educational Outcomes University of Minnesota This paper was developed, in part, with support from the U.S. Department of Education, Office of Special Education Programs grants (#H373X070021 and #H326G110002). Opinions expressed herein do not necessarily reflect those of the U.S. Department of Education or Offices within it.

NCEO Web site

Outline  Background  Achievement gap  Explanations  Ysseldyke and Bielinski (2002) study  Questions  Method  Data source  Analytical Techniques  Results  Conclusions

Achievement gap  Research has focused on race/ethnicity or poverty.  Less attention to achievement gaps between SPED and non-SPED students  Research on Achievement Gap (Chudowsky, Chudowsky, & Kober, 2009a, 2009b)  Examined gaps for subgroups by proficiency rate & mean scaled score, but did not compare SPED and non-SPED  Examined achievement over time for SWD, but not the gap between SWD and SWOD over time

Explanations for the gap increasing over time between SWD and SWOD  Lower-performing students with disabilities drop out of school, raising the average achievement of those who remain (McMillen & Kaufman, 1997)  Tests given in higher grades are less valid for SWD (Thurlow & Ysseldyke, 1999; Thurlow, Bielinski, Minnema, & Scott, 2002)  Students with lower performance move into SPED and students with higher performance move out of SPED (Ysseldyke & Bielinski, 2002)

Ysseldyke and Bielinski (2002) study  Explored the extent to which reclassification affects the size of the achievement gap between general education and SPED across grades  Goals: to compare the effects of different reporting methods, and to examine the effects of reclassification  They argued that fair comparisons require clearly defined and consistent comparison groups, and that special education status complicates reporting because status changes over time.

Ysseldyke and Bielinski (2002) study  They used three methods to analyze trends in performance (cross-sectional, cohort-static, and cohort-dynamic), and found that gap trends depended on the method used  They also examined how scaled scores and effect sizes could be used for reporting results.

Purpose  The Ysseldyke and Bielinski (2002) study  did not use proficiency levels to examine reporting results  is now more than a decade old  was completed prior to the implementation of current ESEA accountability requirements  There is a need to take a new look at how achievement gap trends are affected by the method used to calculate them.

Research Questions  Reporting Methods: How does the use of cross-sectional, cohort-static, and cohort-dynamic data analysis methods affect interpretation of trends in the performance of students with disabilities?  Analytical Techniques: How does the score used in the analyses (proficiency level, scaled score, effect size) affect interpretation of trends and achievement gaps?  Reclassification: To what extent do students move in and out of special education each year, and what are the achievement characteristics of those who do and do not move?

Method  Data source  Used math assessment data for grades 3-8 from a midwestern state  Cross-sectional: 305,819 records  Cohort (G3-8): 8,231 students with 6-yr records
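As a rough illustration of how the two files could be assembled, here is a minimal pandas sketch. The file name and column names (student_id, grade, year) are hypothetical; the slide does not show the actual data layout.

```python
import pandas as pd

# Hypothetical long-format file: one row per student per tested year.
records = pd.read_csv("math_assessment.csv")

# Cross-sectional file: keep every grade 3-8 record.
cross_sectional = records[records["grade"].between(3, 8)]

# Cohort file: keep only students tested in all six years (grades 3-8),
# i.e., the 8,231 students with 6-yr records.
years_tested = cross_sectional.groupby("student_id")["year"].nunique()
complete_ids = years_tested[years_tested == 6].index
cohort = cross_sectional[cross_sectional["student_id"].isin(complete_ids)]
```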

Method - Methods Used to Measure Gap  Cross-sectional  Five years of data were used to calculate average performance, reducing the year-to-year variation that could affect results if data from a single year were selected.  Cohort-static  A cohort followed across six years  Group membership stayed the same across years.  Cohort-dynamic  Group membership was redefined every year
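The three methods differ only in how special education membership is fixed when the gap is computed. A minimal sketch of the distinction, reusing the hypothetical cohort and cross_sectional frames above and assuming 0/1 sped and proficient columns (names invented for illustration):

```python
def pct_proficient(df):
    return 100 * df["proficient"].mean()

def gap(df):
    # Percentage-point gap: non-SPED percent proficient minus SPED.
    return pct_proficient(df[df["sped"] == 0]) - pct_proficient(df[df["sped"] == 1])

# Cross-sectional: each grade's gap averaged over the available years;
# different students stand behind each grade.
cross_gap = (cross_sectional.groupby(["grade", "year"])
             .apply(gap).groupby("grade").mean())

# Cohort-static: membership frozen at each student's grade-3 status.
grade3_status = cohort[cohort["grade"] == 3].set_index("student_id")["sped"]
static_gap = (cohort.assign(sped=cohort["student_id"].map(grade3_status))
              .groupby("grade").apply(gap))

# Cohort-dynamic: membership re-read from each year's record.
dynamic_gap = cohort.groupby("grade").apply(gap)
```

The only change between cohort-static and cohort-dynamic is whether the sped column is overwritten with the baseline status, which is exactly the membership choice the slide describes.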

Method - Analytical Techniques  Three scores were compared: proficiency level (percent proficient), scaled score, and effect size
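The body of this slide did not survive transcription; the three techniques are taken from the research questions. A hedged sketch of how the three gap metrics are commonly computed follows; the pooled-SD effect size is an assumption, since the authors' exact formula is not shown.

```python
import numpy as np

def gap_metrics(df):
    """Express the non-SPED vs. SPED gap three ways: percent-proficient
    difference, scaled-score difference, and effect size."""
    non_sped = df[df["sped"] == 0]
    sped = df[df["sped"] == 1]

    pct_gap = 100 * (non_sped["proficient"].mean() - sped["proficient"].mean())
    ss_gap = non_sped["scaled_score"].mean() - sped["scaled_score"].mean()

    # Standardized mean difference with a pooled standard deviation
    # (one common definition; the paper's formula may differ).
    n1, n2 = len(non_sped), len(sped)
    pooled_sd = np.sqrt(((n1 - 1) * non_sped["scaled_score"].var(ddof=1)
                         + (n2 - 1) * sped["scaled_score"].var(ddof=1))
                        / (n1 + n2 - 2))
    return pct_gap, ss_gap, ss_gap / pooled_sd
```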

Results—RQ1  How does the use of cross-sectional, cohort-static, and cohort-dynamic data analysis methods affect interpretation of trends in the performance of students with disabilities?  Using proficiency level (percent proficient) to show the trend over time for each of the three gap-measurement methods

Results—Comparing reporting methods  Figure 1. Cross-sectional method: Percentage of students above proficiency level on math assessment by SPED and non-SPED (gap: 21 → 47)

Results—Comparing reporting methods  Figure 2. Cohort-static method: Percentage of students above proficiency level on math assessment by SPED and non-SPED (gap: 22 → 21)

Results—Comparing reporting methods  Figure 3. Cohort-dynamic method: Percentage of students above proficiency level on math assessment by SPED and non-SPED (gap: 22 → 45)

Results—Comparing reporting methods  Cohort-dynamic: quite different  Cohort-static: quite similar  Cross-sectional: steady

Results—RQ2  How does the score used in the analyses (proficiency level, scaled score, effect size) affect interpretation of trends and achievement gaps?

Results—Comparing Analytical Techniques  Figure 4. Percent proficient: Achievement gap (difference between non-SPED and SPED) in percent proficient on math assessment

Results—Comparing Analytical Techniques  Figure 5. Scaled score: Achievement gap (difference between non-SPED and SPED) in mean scaled score on math assessment

Results—Comparing Analytical Techniques  Figure 6. Effect size: Achievement gap (difference between non-SPED and SPED) in effect size on math assessment

Results—Comparing analytical techniques  Effect size: quite different  Scaled score: quite similar  Proficiency level: steady

Results—RQ3  To what extent do students move in and out of special education each year, and what are the achievement characteristics of those who do and do not move?

Results—Reclassification  Figure 7. Mean math scaled scores by special education status across years  Note: NS1 = Students who remained in non-special education in both of two consecutive years; NS2 = Students who moved from non-special education to special education in the second of two consecutive years; S1 = Students who remained in special education in both of two consecutive years; S2 = Students who moved from special education to non-special education in the second of two consecutive years.
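A sketch of how the four consecutive-year groups in the note could be formed, using the same hypothetical columns as the earlier snippets:

```python
# Pair each year's SPED status with the previous year's for every student.
df = cohort.sort_values(["student_id", "year"]).copy()
df["sped_prev"] = df.groupby("student_id")["sped"].shift(1)
pairs = df.dropna(subset=["sped_prev"])

labels = {(0, 0): "NS1", (0, 1): "NS2", (1, 1): "S1", (1, 0): "S2"}
pairs["group"] = [labels[(p, c)] for p, c in zip(pairs["sped_prev"], pairs["sped"])]

# Figure 7 then tracks each group's mean scaled score across years.
mean_scores = pairs.groupby(["year", "group"])["scaled_score"].mean()
```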

Results—Reclassification  Non-SPED only  Students stayed in non-SPED for six years  Non-SPED to SPED  Students moved from non-SPED to SPED only once over six years  SPED to Non-SPED  Students moved from SPED to non-SPED only once over six years  Back and forth  Students moved between SPED and non-SPED more than once over six years  SPED only  Students stayed in SPED for six years
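These five groups can be read directly off each student's six-year status sequence. A minimal sketch, again with the hypothetical cohort frame and 0/1 sped column:

```python
def classify(statuses):
    """Map six yearly SPED flags (0/1, grades 3-8 in order) to one of
    the five reclassification groups defined above."""
    changes = sum(a != b for a, b in zip(statuses, statuses[1:]))
    if changes == 0:
        return "SPED only" if statuses[0] == 1 else "Non-SPED only"
    if changes > 1:
        return "Back and forth"
    # Exactly one move; its direction follows the starting status.
    return "SPED to non-SPED" if statuses[0] == 1 else "Non-SPED to SPED"

groups = (cohort.sort_values("grade")
                .groupby("student_id")["sped"]
                .apply(lambda s: classify(s.tolist())))
```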

Results—Reclassification  Figure 8. Effect sizes for the different reclassification groups on the math assessment, using the non-SPED only group as the reference group

Discussion and Conclusion  Different methods of reporting data present different pictures of the gap between SPED and non-SPED  This study was undertaken to update the work done more than a decade ago by Ysseldyke and Bielinski (2002)  Replicated the original analyses and added proficiency level  Confirmed the original findings  Suggestions

Discussion and Conclusion  Suggestions  The choice of method affects what the results look like and the possible interpretation of findings.  Tracking individual student performance provides a better indication of how well schools are educating their students than cross-sectional models, where the grade remains constant but the students change.  Cross-sectional models should not be used when examining trends across grades.  Cohort-static and cohort-dynamic methods enable educators to make comparisons based on individual students' performance over time

Discussion and Conclusion  Specific situations for each reporting method  If the goal is to know how well students do each year, without accounting for changing students => cross-sectional  If states and districts want to account with precision for the reclassification of students each year => cohort-dynamic  When the goal is to account for individual student performance over time without regard to the nature of services received => cohort-static