Identifying the Gaps in State Assessment Systems
CCSSO Large-Scale Assessment Conference, Nashville, June 19, 2007
Sue Bechard, Office of Inclusive Educational Assessment; Ken Godin

Research Questions
Of all the students who are not proficient, how can states identify those who are in the assessment gap?
Who are the students in the gaps, what are their attributes, and how do they perform?

Gap identification process
Conduct exploratory interviews with teachers to identify the assessment gaps
Review student assessment data
Review teacher judgment data
Operationalize gap criteria
Conduct focused teacher interviews to confirm gap criteria
Parker and Saxon: Teacher views of students and assessments
Bechard and Godin: Finding the real assessment gaps

Data sources
State assessment data – grade 8 mathematics results from two systems:
  General large-scale test results
  Demographics (special programs, ethnicity, gender)
  Teachers' judgments of students' classroom work
  Student questionnaires completed at time of test
  Accommodations used at time of test
State databases for additional student demographic data:
  Disability classification
  Free/reduced lunch
  Attendance
Student-focused teacher interviews
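As a minimal sketch of how such sources might be linked for analysis, the following uses pandas; all file and column names here are hypothetical, since the study does not describe its data layout at this level of detail:

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
scores = pd.read_csv("grade8_math_results.csv")     # test scores, achievement levels
judgments = pd.read_csv("teacher_judgments.csv")    # teacher ratings of class work
state_db = pd.read_csv("state_demographics.csv")    # disability class, FRL, attendance

# Link the three sources on a shared student identifier.
merged = (
    scores
    .merge(judgments, on="student_id", how="left")
    .merge(state_db, on="student_id", how="left")
)
```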

Why use teacher judgment of students' classroom performance?
Gap 1: the test may not reflect classroom performance. Teachers see students performing proficiently in class, but test results are below proficient.
Gap 2: the test may not be relevant for instructional planning. Teachers rate students' class work as low as possible, and test results are at "chance" level. No information is generated on what students can do.

Teacher judgment instructions
The instructions were clear that this was to be a judgment of the student's demonstrated achievement on GLE-aligned academic material in the classroom, not a prediction of test performance.
NECAP: The teacher judgment field consisted of 12 possibilities – each of the 4 achievement levels had low, medium, and high divisions.
MEA: The teacher judgment field consisted of 4 possibilities – one possibility per achievement level.
(For comparisons across the two systems, we collapsed the NECAP judgments down to the 4 achievement levels.)
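For illustration, the 12-to-4 collapse could look like the following sketch, assuming the NECAP ratings are coded as level–division pairs; the study's actual field format is not specified, so the coding here is hypothetical:

```python
def collapse_necap(judgment: str) -> int:
    """Collapse a 12-way NECAP judgment (e.g., "3-high") to its
    4-level achievement code by dropping the low/medium/high division."""
    level, _division = judgment.split("-")
    return int(level)

# Low, medium, and high divisions of level 3 all collapse to level 3.
assert {collapse_necap(j) for j in ("3-low", "3-medium", "3-high")} == {3}
```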

Research on validity of teacher judgment
While there are some conflicting results, the most accurate judgments were found when:
teachers were given specific evaluation criteria
levels of competency were clearly delineated
criterion-referenced tests in mathematics or reading were the matching measure
criterion-referenced tests reflected the same content as did classroom assessments
judgments were of older students who had no exceptional characteristics, and
teachers were asked to assign ratings to students, not to rank-order them

Validation of teacher judgment data from NECAP and MEA
The data were collected to serve as "Round 1" cutpoints (of 3 rounds) during standard setting. Validation studies asked:
Were there differences between the sample of students with non-missing teacher judgment data and the rest of the population?
Were there suspicious trends in the judgment data suggesting that teachers did not take the task seriously?
How did teacher judgments compare with students' actual test scores?
Results of these investigations were considered supportive of using the teacher judgment data for standard setting.

Teacher judgment vs. test performance (NECAP)

Teacher judgment vs. test performance (MEA)
† Students scoring within error of the bottom of the scale (i.e., at a chance score) are a subset of Achievement Level 1.
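The slides do not define the chance-score band, but for intuition: on a test of k multiple-choice items with m options each, random guessing yields a binomial raw score with

```latex
% Illustrative only: the operational NECAP/MEA tests mix item types,
% so the actual chance band is defined on the scaled-score metric.
\mathbb{E}[X] = \frac{k}{m}, \qquad
\mathrm{SD}(X) = \sqrt{k \cdot \tfrac{1}{m}\left(1 - \tfrac{1}{m}\right)}
```

"Within error of the bottom of the scale" then means a score statistically indistinguishable from this guessing expectation.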

Operationalizing the gap definitions using teacher judgment
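The slide's table is not reproduced in this transcript, but the two criteria from the "Why use teacher judgment" slide suggest a classification rule along these lines. This is a sketch with hypothetical names, using the collapsed 4-level judgment and a chance-score ceiling as inputs:

```python
def classify_gap(judgment_level: int, proficient_on_test: bool,
                 test_score: float, chance_ceiling: float) -> str:
    """Assign a student to gap 1, gap 2, or neither.

    Gap 1: judged proficient in class (level 3 or 4) but below
    proficient on the test.
    Gap 2: lowest possible classroom rating (level 1) and a test
    score no better than chance.
    """
    if judgment_level >= 3 and not proficient_on_test:
        return "gap 1"
    if judgment_level == 1 and test_score <= chance_ceiling:
        return "gap 2"
    return "non-gap"
```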

Student questionnaires (answered after taking the test)
1. How difficult was the mathematics test?
A. harder than my regular mathematics schoolwork
B. about the same as my regular mathematics schoolwork
C. easier than my regular mathematics schoolwork
2. How hard did you try on the mathematics test?
A. I tried harder on this test than I do on my regular mathematics schoolwork.
B. I tried about the same as I do on my regular mathematics schoolwork.
C. I did not try as hard on this test as I do on my regular mathematics schoolwork.

Accommodations (used during the mathematics test)
NECAP: 16 accommodations listed by category:
  Setting
  Scheduling/timing
  Presentation formats
  Response formats
MEA: 21 accommodations listed by category:
  Setting
  Scheduling
  Modality
  Equipment
  Recording

Student-focused teacher interviews
Student profile data:
  math test scores (both overall and on subtests)
  specific responses to released math test items
  student's responses to the questionnaire
  special program status
  accommodations used during testing
Teacher interview questions:
  questions about perceptions of the students in each gap on various aspects of the gap criteria, plus 17 Likert-scale questions on the student's class work and participation in classroom activities

Student-focused teacher interview samples
NECAP sample: 20 8th-grade math and special ed teachers; 7 schools across three states (NH, RI, and VT); 51 students: gap 1 = 19, gap 2 = 18, comparison group = 14.
MEA sample: 7 8th-grade math and special ed teachers; 3 schools; 14 students: gap 1 = 4, non-gap 1 = 3, gap 2 = 2, non-gap 2 = 5, comparison group = 0.

Results: Percentages of students in the gaps (NECAP)
Gap 2 and non-gap 2 percentages differ depending on whether fine-grained or coarse-grained ratings are used.

Results: Percentages of students in the gaps (MEA)

Accommodations use (NECAP) Students in gap 1 were significantly less likely to use accommodations than students in non-gap 1. Only a small percentage of students in gap 1 used any accommodations at all. The majority of students in both gap 2 and non-gap 2 used one or more accommodations.

Accommodations use (MEA) Accommodations use for gap 1 on the MEA followed a pattern similar to that seen in NECAP.

Performance of students in gap 1 compared to non-gap 1 on the NECAP
(+ = statistically higher than expected; − = statistically lower than expected)
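The slides flag cells as statistically higher or lower than expected. One standard way to produce such flags (not necessarily the authors' exact method) is a chi-square test of independence on the group-by-category counts; the counts below are illustrative only:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative counts only: rows = gap 1 / non-gap 1,
# columns = with IEP / without IEP.
observed = np.array([[208, 1792],
                     [860, 3140]])

chi2, p, dof, expected = chi2_contingency(observed)

# Flag each cell "+" (above expected) or "-" (below expected);
# the flags are meaningful only when p falls below the chosen alpha.
flags = np.where(observed > expected, "+", "-")
```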

Performance of students in gap 1 compared to non-gap 1 on the MEA
(+ = statistically higher than expected; − = statistically lower than expected)

Special program status of students in gap 1 (NECAP)
The majority of students in gap 1 were in general education. Students with IEPs were under-represented in gap 1 and over-represented in non-gap 1.
(+ = statistically higher than expected; − = statistically lower than expected)

Special program status of students in gap 1 (MEA)
The gap 1 composition was similar in the MEA.
(+ = statistically higher than expected; − = statistically lower than expected)

Disability designations in gap 1
Learning disabilities (NECAP):
  Gap 1: 57.7% of the IEP gap 1 group (n=208)
  Non-gap 1: 49.7% of the IEP non-gap 1 group (n=860)
  Comparison: 49.2% of the IEP comparison group (n=83)
  Total population: 52% of students with IEPs (N=4,465)
Disability designations seen only in non-gap 1:
  NECAP: students with learning impairments, deafness, multiple disabilities, and traumatic brain injury
  MEA: students with learning impairments and traumatic brain injury

Additional characteristics of students in gap 1 compared to non-gap 1
Gap 1 students:
were more likely to be female and white
had the fewest absences
had higher SES
found the state test to be about the same level of difficulty as class work
exhibited academic and mathematics-appropriate behaviors in class

Performance of students in gap 2 on the test (NECAP and MEA)
By definition, students in both gap 2 and non-gap 2 scored no better than chance on the assessment.

Special program status of students in gap 2 (NECAP) The majority of students in gap 2 and non-gap 2 were students with IEPs.

Special program status of students in gap 2 (MEA) MEA results show the majority of the students in gap 2 had IEPs. The percentages of students in general education in gap 2 and non-gap 2 groups are higher than in NECAP.

Disability designations in gap 2
Learning disabilities: in both systems, fewer than half of the students in the gap 2 groups had learning disabilities.
Other disability designations differed between the two systems:
NECAP: Students who were deaf/blind and those with multiple disabilities were found only in gap 2. Students with hearing impairments, deafness, and traumatic brain injury were found only in non-gap 2.
MEA: Students with hearing impairments were found only in gap 2. Students with visual impairments or blindness were found only in non-gap 2.

Additional characteristics of students in gap 2 compared to non-gap 2
Students in gap 2 were very similar to students in non-gap 2 on most variables:
Students from both groups felt that the test was as hard as or harder than their schoolwork.
They tried as hard on the test as in class, or harder.
They used mathematics tools in the classroom (e.g., calculators).

Summary: How many students are in the gaps?
Between 10.9% and 11.4% of the total student population in the two systems are in gaps 1 & 2.
NECAP: Gap 1 = 8.6%, Gap 2 = 2.3% (total 10.9%)
MEA: Gap 1 = 7.1%, Gap 2 = 4.3% (total 11.4%)

Summary
We found substantial differences between the compositions of the gap 1 and non-gap 1 groups, and these differences held in both systems. Gap 1 students may have characteristics and behaviors that mask their difficulties. Non-gap 1 students are those generally thought to be in the "achievement gap."

Summary (cont.)
Low-performing students in gap 2 and non-gap 2 share many characteristics. Their extremely low performance in both classroom activities and on the test raises questions about the relevance of the general assessment for them.

Conclusions
For students in gap 1, increase the focus on classroom supports and training in how to transfer their knowledge and skills from classroom to assessment environments.
For students in non-gap 1, examine expectations and opportunities to learn. Providing a different test based on modified academic achievement standards is premature.
Students with IEPs in gap 2 and non-gap 2 may benefit from the 2% option for AYP and an alternate assessment based on modified academic achievement standards (AA-MAAS). There will be challenges in designing a test based on MAAS that is strictly aligned with grade-level content.