Focus Schools and Special Education Centers


Focus Schools and Special Education Centers
Presentation to MAASE, October 10, 2012
Venessa A. Keesler, Ph.D., Bureau of Assessment and Accountability

Taking a Step Back: Why Do We Do Accountability?
Three myths; one reality.
- Myth #1: To drive reform
- Myth #2: To create education policy
- Myth #3: Because we are gluttons for punishment
- Reality: Accountability metrics and systems are quantitative articulations of the core policy beliefs of the education system. They help us measure our progress in meeting those core policy goals. They are the measure, not the purpose or the goal.

Accountability Landscape: 2012
A new era of accountability: switching from a purely criterion-based system to a normative system.
- Criterion-based systems set proficiency targets for schools.
- Normative systems identify the "worst" or "best" (the lowest- or highest-performing) schools relative to one another.

Why the Change?
- Policy imperative under NCLB: all students CAN and SHOULD demonstrate proficiency → a criterion-based system with proficiency targets for all schools and subgroups.
- Ten years later: our average achievement is increasing, but we still have students and schools lagging behind.
- New policy imperative (ESEA Flexibility): we must target our lowest-performing schools AND our lowest-performing students more specifically and strategically.

Why Focus Schools?
- A different metric → addresses a different policy goal.
- Policy goal → to shine new light on the lowest-performing students within schools.
- Priority Schools = lowest-performing schools overall.
- Focus Schools = largest within-school gaps.

Intersection with Policy Regarding Students With Disabilities
"All means all."
- Michigan believes all students should have access to high-quality instruction and rigorous content, and that we must have high expectations for all students.
- So the accountability articulation of this core policy belief is to include ALL students and ALL schools in the metrics.

What is a Z-Score? A Quick Reference for Z-Scores

Why Do We Use Z-Scores?
- Z-scores are a standardized measure that lets you compare an individual student's (or school's) data to the state average.
- Z-scores allow us to "level the playing field" across grade levels and subjects.
- Each z-score corresponds to a value in a normal distribution, and describes how far a value deviates from the mean.
- What you need to know: z-scores are used throughout the ranking to compare a school's value on a given component to the average value across all schools.
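As a worked illustration (a minimal sketch, not MDE's production code; the sample scores are made up), a z-score is the raw value minus the comparison group's mean, divided by the group's standard deviation:

```python
from statistics import mean, pstdev

def z_score(value: float, group: list[float]) -> float:
    """Standardize a value against its comparison group:
    z = (value - group mean) / (group standard deviation)."""
    mu = mean(group)
    sigma = pstdev(group)  # population SD; the choice of SD is an assumption
    return (value - mu) / sigma

# Hypothetical scale scores for one test/grade/subject statewide:
statewide_scores = [400, 410, 415, 420, 425, 430, 440]
print(round(z_score(430, statewide_scores), 2))  # 0.82: above the state average
```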

What is a Z-Score?
- Z-scores are centered around zero.
- Positive numbers mean the student or school is above the state average.
- Negative numbers mean the student or school is below the state average.
[Slide schematic: a number line from -3 to +3 with a bell curve overlaid; 0 marks the state average, with "worse than state average" to the left and "better than state average" to the right.]
Note: the share of schools between successive markers is not equal. Between -1 and 1 lie about 68% of schools; between -2 and 2, about 95%; and between -3 and 3, nearly all (about 99.7%). So if you are at zero, you are doing better than 50% of schools, and by the time you move to +1, you are doing better than about 84% of schools (50 + 34). Most schools fall between -1 and 1, so scores there are relatively near the average, although the closer you get to -1 or +1, the further from the average you are.
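The 50 + 34 arithmetic above comes from the normal distribution's cumulative probabilities. A quick check, assuming a standard normal curve:

```python
from statistics import NormalDist

std_normal = NormalDist()  # mean 0, standard deviation 1

for z in (-2, -1, 0, 1, 2):
    pct_below = std_normal.cdf(z) * 100
    print(f"z = {z:+d}: better than about {pct_below:.0f}% of schools")
# z = 0 -> ~50%, z = +1 -> ~84%, z = +2 -> ~98%
```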

Z-Score Examples
Your school has a z-score of 1.5: you are better than the state average.
[Schematic: 1.5 marked on the number line, well to the right of the state average at 0.]

Z-Score Examples
Your school has a z-score of 0.2: you are better than the state average, but not by a lot.
[Schematic: 0.2 marked just to the right of the state average at 0, with the earlier 1.5 farther right.]

Z-Score Examples
Your school has a z-score of -2.0: you are very far below the state average.
[Schematic: -2.0 marked far to the left of the state average at 0, with 0.2 and 1.5 to the right.]

How Do We Get Standardized Scale Scores for Each Student?
Step #1: Take each student's score on the test they took and compare it to the statewide average for students who took that same test in the same grade and year. This creates a student-level z-score for each student in each content area. Compare:
- MEAP to MEAP
- MEAP-Access to MEAP-Access
- MME to MME
- MI-Access Participation to Participation
- MI-Access Supported Independence to Supported Independence
- MI-Access Functional Independence to Functional Independence
Why do we do this?
- It puts all student test scores on a metric that can be combined, regardless of which test they took.
- It allows us to compare similar students to each other before we begin to summarize into schools and compare schools to schools.
- It means we do not rank schools on the "percent of students proficient" but instead on their average standardized achievement.
Key takeaway to share with schools: students are compared with similar students first, before the school is ranked at all. This helps "level the playing field" across test types.
Note on the next slides: only use these with a school if you feel it benefits your conversation. They are placed here primarily for your own reference.
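A minimal sketch of Step #1 under stated assumptions: the record layout, field names, and scores below are hypothetical, not the actual MDE data file. Each student is standardized against the statewide pool for the same test, grade, and year:

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical records: (student, test, grade, year, scale_score)
records = [
    ("Sally", "MEAP", 4, 2012, 425),
    ("Fred",  "MEAP", 4, 2012, 410),
    ("Tommy", "MI-Access Participation", 4, 2012, 61),
    ("Maura", "MI-Access SI", 4, 2012, 55),
    # ... one row per student per content area, statewide
]

# Pool scores by comparison group: same test, same grade, same year.
groups = defaultdict(list)
for student, test, grade, year, score in records:
    groups[(test, grade, year)].append(score)

# Standardize each student against their own group's mean and SD.
z_scores = {}
for student, test, grade, year, score in records:
    pool = groups[(test, grade, year)]
    if len(pool) > 1 and pstdev(pool) > 0:
        z_scores[student] = (score - mean(pool)) / pstdev(pool)

print(z_scores)  # e.g. Sally is compared only with other MEAP grade-4 takers
```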

What Do We Do With Those Standardized Scores?
Step #2: Once each student has a z-score for each content area (based on the test they took), we take all of the students in each school and rank-order the students within the school. The z-scores come from different tests and compare students to the statewide average for that grade, test, and subject, but they can now be combined for the school.
Step #3: Add up all the z-scores and take the average. This is the school's average standardized student scale score.
Step #4: Define the top and bottom 30% subgroups based on that rank ordering.
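A minimal sketch of Steps #2 and #3 with made-up z-scores (Step #4 is illustrated after the example roster below):

```python
# Step #2: one z-score per student in a school (hypothetical values), rank-ordered.
school_z = [2.5, 2.0, 1.9, 1.5, -0.2, -0.5, -1.2, -1.9]
ranked = sorted(school_z, reverse=True)

# Step #3: the school's average standardized student scale score.
avg_standardized_score = sum(ranked) / len(ranked)
print(round(avg_standardized_score, 2))  # 0.51 for these made-up values
```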

Student     Test Taken                  Z-score
Tommy       MI-Access, Participation      2.5
Sally       MEAP                          2.0
Maura       MI-Access, SI                 1.9
Fred                                      1.5
Ichabod     MEAP-Access                   1.0
Freud                                     0.8
Maybelle    MI-Access, FI                 0.7
Destiny                                   0.5
Harold                                   -0.2
Bickford                                 -0.5
Talledaga                                -0.7
Francine                                 -1.2
Joey                                     -1.9
William                                  -2.2

The same roster, with the school's average computed:

Student     Test Taken                  Z-score
Tommy       MI-Access, Participation      2.5
Sally       MEAP                          2.0
Maura       MI-Access, SI                 1.9
Fred                                      1.5
Ichabod     MEAP-Access                   1.0
Freud                                     0.8
Maybelle    MI-Access, FI                 0.7
Destiny                                   0.5
Harold                                   -0.2
Bickford                                 -0.5
Talledaga                                -0.7
Francine                                 -1.2
Joey                                     -1.9
William                                  -2.2

Average z-score (average standardized student scale score): 0.28 (sum all z-scores, divide by 15)
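As a quick check of the slide's arithmetic (the slide divides by 15, though only 14 rows are visible in this transcript):

```python
# Z-scores as listed on the example roster above.
zs = [2.5, 2.0, 1.9, 1.5, 1.0, 0.8, 0.7, 0.5,
      -0.2, -0.5, -0.7, -1.2, -1.9, -2.2]
print(round(sum(zs) / 15, 2))  # 0.28, matching the slide's stated average
```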

Student     Test Taken                  Z-score
Tommy       MI-Access, Participation      2.5
Sally       MEAP                          2.0
Maura       MI-Access, SI                 1.9
Fred                                      1.5
Ichabod     MEAP-Access                   1.0
Freud                                     0.8
Maybelle    MI-Access, FI                 0.7
Destiny                                   0.5
Harold                                   -0.2
Bickford                                 -0.5
Talledaga                                -0.7
Francine                                 -1.2
Joey                                     -1.9
William                                  -2.2

Top 30%: the highest-ranked students at the top of the list.
Bottom 30%: the lowest-ranked students at the bottom of the list.
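Using the 14 roster rows visible above, a sketch of how the Top 30% and Bottom 30% subgroups could be selected; the rounding rule for the 30% cutoff is an assumption, and the actual business rules may differ:

```python
# Roster of (student, z-score) pairs from the example table above.
roster = [
    ("Tommy", 2.5), ("Sally", 2.0), ("Maura", 1.9), ("Fred", 1.5),
    ("Ichabod", 1.0), ("Freud", 0.8), ("Maybelle", 0.7), ("Destiny", 0.5),
    ("Harold", -0.2), ("Bickford", -0.5), ("Talledaga", -0.7),
    ("Francine", -1.2), ("Joey", -1.9), ("William", -2.2),
]
roster.sort(key=lambda pair: pair[1], reverse=True)  # Step #2: rank order

k = round(0.30 * len(roster))  # 30% of 14 students -> 4 (rounding is an assumption)
top_30 = [name for name, _ in roster[:k]]
bottom_30 = [name for name, _ in roster[-k:]]
print("Top 30%:", top_30)        # Tommy, Sally, Maura, Fred
print("Bottom 30%:", bottom_30)  # Talledaga, Francine, Joey, William
```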

Implications for SWDs and Center Programs
- Students are compared only with other students who took the same assessment (Participation to Participation, etc.).
- All schools are treated the same.
- It is not that center programs have a gap; it is that they have some of the largest gaps.
- Don't assume the bottom 30% is only one type of student; you can look at the student data file.

Final Point
- The accountability system will not pick and choose between students and/or schools; it will apply the same rules to all students and schools.
- The accountability system does not decide when to deviate from this; core educational policy does.
- We need to continue working to make sure that the metrics mirror the policy goals.

Contact Information
Venessa A. Keesler, Ph.D.
Evaluation, Research and Accountability
Bureau of Assessment and Accountability
keeslerv@michigan.gov or mde-accountability@michigan.gov