Planning for the New Measurable Objectives: 2 Critical District Issues with Examples & Background to Help You Address Them KASB, Topeka, 4 April 2013 Kelly Spurgeon, KSDE Tony Moss, KSDE
What we hope to cover today:
1. How does the new federal accountability system compare with the old?
2. How can districts best prepare for these changes?
   - An integrated district Measured Objectives strategy
   - Communication strategies for main constituency groups
3. Why do districts need an integrated MO strategy?
4. Nuts & bolts of the Measured Objectives.
Leaving the AYP Mentality

What we had:
- An overemphasis on state assessments and a single measure, percent proficient
- Identifying "bubble kids" and getting them over the standards line
- Sometimes teaching to identified test items, not concepts
- Counting the same kids more than once across multiple subgroups

What we will have:
- 4 Measured Objectives from assessments; 5 areas in the new accreditation model, each counting 20%
- Incentives to move every student to their highest possible proficiency level
- Constructed-response items that will require deeper conceptual mastery
- One gap measure of the lowest 30%, with no duplicate counting
- Overall, an emphasis on improved instruction and instructional support
The big differences: a much more nuanced set of measures (good schools can have different profiles), and College and Career standards that raise the bar so that students are graduating ready for college or a career. Why? So they can better compete in a very competitive world.
2 Critical District-Level Tasks:
1. Designing an integrated strategy for making the Measured Objectives; and
2. Developing a communication strategy for each of your 3 key constituencies:
   - Your staff
   - Your board and political stakeholders
   - Parents, the public, and the press.
Why do districts need a Measured Objectives strategy to coordinate school strategies? Children's intellectual and social-emotional development is integrated and hierarchical: each stage depends on the stage that went before. Only districts can put all the institutional segments together, from early childhood education and care through primary, middle, and high school, and keep them all directed toward the same integrated goals.
For example, consider the holes in children's academic trajectories when measured only by state assessments: all the other necessary parts (early childhood, student engagement, relationships, school climate and culture, professional development) are missing.
An Overly Simple Example of a District's Integrated MO Strategy (slide timeline spans ages 0-5, 5 to 9, and on to 18):
- Collaboration with local pediatricians, early education and child care providers, and social services to improve child development and child-rearing skills before Kindergarten;
- In primary school A, a focus on immediate teacher recognition of any student's failure to master any competence, including behaviors, and immediate one-on-one work to move the child to competence;
- In middle school B, a focus on weekly career and academic coaching, Individual Plans of Study, and career shadowing;
- In high school C, a focus on improved social climate, social inclusion, continued Individual Plans of Study, project-based instruction, pathway completion, and higher API scores;
- District investments in high-quality, focused, 50-hour PD training.
Let's now consider districts' communication needs. A common question you can expect: "Why all the changes?" To better prepare students for college or a career, the bar is being raised. AYP was obsolete and had to be replaced.
In 2015, with the higher standards in the new Kansas College and Career Ready assessments, we can expect scores to go down substantially. We have to explain this so it isn't misinterpreted. (Chart annotation: "New assessment introduced.")
Points that Might Be Included in a Staff Communication Plan: We are in a period of rapid change in assessments and accountability:
- ESEA Waiver replacing AYP
- Career pathways assessments coming
- School readiness measures coming
- A broader set of accreditation measures (state assessment results only 20%; rigor, relevance, responsive culture, and relationships 20% each)
- KEEP (Kansas Educator Evaluation Project)
- Blended assessment in 2014 & Kansas College and Career Ready assessments in 2015
Staff Communication (cont.):
- The Waiver's incentives are different from AYP's: they reward moving every student to the highest proficiency possible;
- Constructed-response items will require greater student mastery of concepts; specific test items won't be identified;
- Constructed-response items will have to be checked by staff;
- Educators will have a view of student proficiency across the grades when they check these items;
- Many MOs will be re-set in 2015 & 2016 because the assessments will be new.
Points that Might Be Included in a Board or Parent Communication Plan:
- The new assessments are raising the bar, so we should expect that scores will go down.
- This decline is a result of the new assessments, not indicative of worse performance by students or teachers.
- Assessments are designed so that all students miss some items.
- Some assessments are more difficult than others.
- Different assessments have different plateaus or ceilings.
Example: NAEP Scores. For 17-year-olds, NAEP reading scores have varied only 5 points in 42 years.
Now the details of the new MOs:
Academic Performance Index
How is the API calculated?
Getting your API data from the Performance by Grade Report:
Let's compare 2 years of Math API (the table's numbers did not survive transcription):

| Level | Points | 2011 # of students | 2012 # of students | Within-category % change |
| --- | --- | --- | --- | --- |
| Exemplary | … | … | … | … |
| Exceeds standard | … | … | … | …% |
| Meets standard | … | … | … | …% |
| Approaching standard | … | … | … | …% |
| Academic warning | … | … | … | …% |

Result: API went from 645 to 680.
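Since the table's point values and counts were lost, here is a minimal sketch of how an index of this kind is typically computed: each performance level carries a point weight, and the API is the average points per tested student. The level weights and the two hypothetical 100-student distributions below are assumptions for illustration (chosen so the two years reproduce the slide's 645 and 680), not official KSDE values.

```python
# Minimal API sketch: average points per tested student.
# LEVEL_POINTS values are illustrative assumptions, not official KSDE weights.
LEVEL_POINTS = {
    "exemplary": 1000,
    "exceeds standard": 750,
    "meets standard": 500,
    "approaching standard": 250,
    "academic warning": 0,
}

def api(counts):
    """counts: performance level -> number of students; returns a 0-1000 index."""
    total = sum(counts.values())
    points = sum(LEVEL_POINTS[level] * n for level, n in counts.items())
    return points / total

# Hypothetical 100-student building across two years:
y2011 = {"exemplary": 20, "exceeds standard": 30, "meets standard": 40,
         "approaching standard": 8, "academic warning": 2}
y2012 = {"exemplary": 23, "exceeds standard": 32, "meets standard": 39,
         "approaching standard": 6, "academic warning": 0}
print(api(y2011), api(y2012))  # 645.0 680.0
```

Note how the API rises even though most of the movement is between upper categories; percent proficient alone would barely register the change.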
Why did Kansas need a new academic performance measure?
- Not many students are left below proficient.
- The API rewards schools for moving all students to higher proficiency.
- It is more accurate than percent proficient.
- It acknowledges the ceiling or plateau, getting us away from the 100%-proficient idea.
- We could use historic rates of improvement to set realistic MO goals for improvement.
To differentiate between schools, we looked at the distribution of all schools' API scores. Then we divided the distribution into quarters.
Title (federal) & Non-Title Categories:
- Making Progress / Not Making Progress
- Title I Schools: 10% Reward Schools, 10% Focus Schools, 5% Priority Schools
Reading AMOs are like the Standard of Excellence:

| School Category | API Range | Expected Rate of Improvement / AMO | Cap on % Below Standard |
| --- | --- | --- | --- |
| Modeling (Level 4): top 25 percent | API ≥ 757 | Below the 90th percentile, a mean advance of 2 points per year; above the 90th percentile, whatever improvement is possible | ≤ 5 percent; if not, next lower level |
| Transitioning (Level 3): 3rd quarter | API ≥ 703 but < 757 | An average yearly advance of 5 points | > 5 but ≤ 10 percent; if not, next lower level |
| Implementing (Level 2): 2nd quarter | API ≥ 635 but < 703 | An average yearly advance of 10 points | > 10 but ≤ 15 percent; if not, next lower level |
| High-Need (Level 1): lowest 25 percent | API < 635 | Increments sufficient to enter Level 2, or a yearly mean API advance of 15 points, whichever is greater | Any school with > 15 percent of its students below proficient is a Level 1 school |
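The table's logic can be sketched as a small lookup: find the highest level whose API floor the school meets and whose below-standard cap it does not exceed. This is an illustration only; it simplifies the 90th-percentile rule for Level 4 and uses the Reading cut scores from the table above.

```python
# Reading AMO level lookup using the cut scores from the table above.
# Rows: (level name, API floor, expected API gain per year, cap on % below standard)
READING_LEVELS = [
    ("Modeling (Level 4)",      757, 2,  5),
    ("Transitioning (Level 3)", 703, 5,  10),
    ("Implementing (Level 2)",  635, 10, 15),
    ("High-Need (Level 1)",       0, 15, 100),
]

def reading_level(api_score, pct_below_standard):
    """Return (level name, expected API gain per year)."""
    for name, floor, gain, cap in READING_LEVELS:
        # A school that exceeds its cap drops to the next lower level.
        if api_score >= floor and pct_below_standard <= cap:
            return name, gain

print(reading_level(760, 4))   # Modeling (Level 4), 2 pts./yr expected
print(reading_level(760, 8))   # same API, but the cap rule drops it a level
```

The second call shows the "if not, next lower level" rule in action: an API of 760 with 8% below standard lands in Transitioning, not Modeling.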
Student Growth Percentiles imitate pediatricians' growth charts: normed percentile bands from the 5th to the 95th. (Chart: girls' length and weight by age.)
Kansas Growth AMO: a relative measure. There are no consequences for not making a building's growth measure.
Advantages of the Student Growth Percentile Model:
- SGPs set realistic yearly goals based on each student's academic peers (students with similar score histories);
- SGPs map a student's progress relative to all assessed students.
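As a rough intuition for the peer idea (the production SGP model uses quantile regression and is far more sophisticated), a growth percentile can be approximated as the percentile rank of a student's current score among students with similar prior scores. The peer-band width here is an arbitrary assumption.

```python
# Toy SGP: percentile rank among students with similar prior scores.
# This is an intuition-building sketch, not the official SGP method.
def growth_percentile(prior, current, cohort, peer_band=10):
    """cohort: list of (prior_score, current_score) for all assessed students."""
    peers = [cur for pri, cur in cohort if abs(pri - prior) <= peer_band]
    below = sum(1 for c in peers if c < current)
    return round(100 * below / len(peers))

# A student scoring 55 now, among peers who all scored about 50 before:
cohort = [(50, c) for c in range(40, 60)]
print(growth_percentile(50, 55, cohort))  # 75
```

A 75th growth percentile says: this student grew more than 75% of students who started from a similar place, regardless of the absolute score.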
How will growth MOs be displayed?
The Gap: How are the lowest-performing 30% doing? (e.g., how are 270 of my 900 students doing?)
- Based on performance categories, not individuals
- No subgroups
- See the Performance by Grade Report
- If the LP30 make an API of 500, the building has also made the gap MO
How is the Gap calculated? For a building with 900 assessed students: 900 × 0.30 = 270 students in the lowest-performing 30%.
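That arithmetic can be sketched end to end: rank the assessed students from lowest performance level up, take the bottom 30% (270 of 900), and compute the API for just that group. The level weights are the same illustrative assumptions used earlier, and the student distribution is hypothetical; per the slide, an LP30 API of 500 would also satisfy the gap MO.

```python
# Gap sketch: API of the lowest-performing 30% (the "LP30").
# LEVEL_POINTS weights are illustrative assumptions, not official.
LEVEL_POINTS = {"academic warning": 0, "approaching standard": 250,
                "meets standard": 500, "exceeds standard": 750,
                "exemplary": 1000}

def lp30_api(levels):
    """levels: one performance-level string per assessed student."""
    ranked = sorted(levels, key=LEVEL_POINTS.get)   # lowest performers first
    n = int(len(levels) * 0.30)                     # e.g. 900 * 0.30 = 270
    return sum(LEVEL_POINTS[l] for l in ranked[:n]) / n

# Hypothetical 900-student building:
students = (["academic warning"] * 100 + ["approaching standard"] * 200 +
            ["meets standard"] * 300 + ["exceeds standard"] * 200 +
            ["exemplary"] * 100)
print(lp30_api(students))  # API of the bottom 270 only
```

Because the LP30 is one group drawn from all students, no child is counted twice, unlike AYP's overlapping subgroups.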
Reducing the Non-Proficient
Reduction of Non-Proficient AMOs:
- The goal: cut the percentage of non-proficient students (RNP) in half
- Custom RNPs at the district, building, and subgroup levels
- Traditional subgroups with an n ≥ 30 will have an AMO determination
- No monitored students in the SWD or ELL subgroups
- Only the All Students group is merged if less than 30 (this is still under consideration)
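One way to turn "cut non-proficient in half" into yearly AMO targets is equal annual steps down from the baseline. The slide's target year did not survive transcription, so the 6-year horizon below is purely an assumption for illustration.

```python
# Equal-step yearly targets that halve a baseline non-proficient rate.
# The 6-year horizon is an assumed default, not from the slide.
def rnp_targets(baseline_pct, years=6):
    step = (baseline_pct / 2) / years
    return [round(baseline_pct - step * y, 1) for y in range(1, years + 1)]

print(rnp_targets(30.0))  # [27.5, 25.0, 22.5, 20.0, 17.5, 15.0]
```

Because each building and subgroup starts from its own baseline, these "custom RNP" targets differ everywhere, unlike AYP's single statewide bar.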
Resources: Video clips and fact sheets for the Achievement, Growth, Reduction of Non-Proficient, and Gap AMOs can be found on the ESEA flexibility waiver webpage. That page can also be reached by clicking the logo on the KSDE home page.
Questions? Contact the Measurable Objectives Help Desk: Phone: (785)
Slides that deal with subtopics follow:
The first big hole: early childhood is where the largest unrealized gains are. The success of the whole educational enterprise is even more dependent on early childhood than was previously known:
- Working memory and self-control (the ability to delay gratification and control impulses)
- Students' social predilections (empathy and considering others)
- Students' behaviors (persistence, engagement, motivation)
- Language skills
- Some disabling conditions (antisocial behaviors, ADHD)
All have their origins in early interactions and environments.
Some Implications for Districts:
- Large improvements in K-12 education depend on overcoming fragmented services in early childhood and on improving the quality of family environments.
- The social intelligence and responsiveness of teachers are important, sometimes as important as subject expertise.
- An overemphasis on academics and test scores, to the exclusion of developmentally important events (e.g., fantasy play in the early school years, and social integration), can be developmentally damaging.
Another consequence of ignoring early childhood: large inefficiencies are built into later district and post-secondary results. (Chart: return per $1 invested.)
Why did Kansas need a new academic performance measure? Relatively few students are available for moving over the proficiency line.
Is the API more accurate than the Percent Proficient? School T has 91% at Standard or Above, but 75% in the top 2 categories, Exceeds & Exemplary. School P has 92% at Standard or above, but only 46% in the top 2 categories.
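The slide's comparison can be made concrete with two hypothetical 100-student distributions matched to the quoted percentages (level weights again assumed, as before): both schools look nearly identical by percent proficient, but the API separates them sharply.

```python
# API vs. percent proficient: two schools, similar proficiency, different APIs.
# LEVEL_POINTS and both distributions are illustrative assumptions.
LEVEL_POINTS = {"exemplary": 1000, "exceeds": 750, "meets": 500,
                "approaching": 250, "warning": 0}

def api(counts):
    return sum(LEVEL_POINTS[k] * n for k, n in counts.items()) / sum(counts.values())

# School T: 91% at standard or above, 75% in the top two categories.
school_t = {"exemplary": 40, "exceeds": 35, "meets": 16, "approaching": 7, "warning": 2}
# School P: 92% at standard or above, only 46% in the top two.
school_p = {"exemplary": 20, "exceeds": 26, "meets": 46, "approaching": 6, "warning": 2}

print(api(school_t), api(school_p))  # 760.0 640.0
```

Percent proficient rates School P a point higher; the API shows School T is moving far more students into the top categories.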
We step away from AYP's 100%-above-standard goal and introduce the concept of a ceiling. (Chart: API average building scores, All Students group, all public schools, 2000 to 2012.)
What is a realistic rate of improvement? Rates of improvement are larger at the beginning of a new testing cycle. They start to plateau when all of the variables in the system (alignment with standards, student and teacher skills, engagement) begin to reach their limits.
(Chart labels: 2 pts./yr; 5 pts./yr; 10 pts./yr; 15 or more; 19 pts./yr.)
Mathematics AMOs:

| School Category | API Range | Expected Rate of Improvement / AMO | Cap on % Below Standard |
| --- | --- | --- | --- |
| Modeling (Level 4): top 25 percent | API ≥ 744 | For Level 4 schools below the 90th percentile, a mean advance of 2 points per year; above the 90th percentile, whatever improvement is possible | ≤ 6 percent; if not, next lower level |
| Transitioning (Level 3): 3rd quarter | API ≥ 679 but < 744 | An average yearly advance of 7 points | > 6 but ≤ 13 percent; if not, next lower level |
| Implementing (Level 2): 2nd quarter | API ≥ 596 but < 679 | An average yearly advance of 13 points | > 13 but ≤ 19 percent; if not, next lower level |
| High-Need (Level 1): lowest 25 percent | API < 596 | Increments sufficient to enter Level 2, or a yearly mean API advance of 15 points, whichever is greater | Any school with > 19 percent of its students below proficient is a Level 1 school |
How can our measures be made more useful? Please tell us what data and charts would help you communicate better with:
- Your board
- Your parents
- The local press, and
- Your teachers and staff.
We need your feedback and suggestions. To prompt your thinking, some examples follow.
Possible Improvements within Current Assessment Reports:
- Population trends
- Cohort views of the API & Growth
- Grade patterns across years
- Adding comparison groups of "schools like mine" or "students like mine"
- Bar graphs that show improvements across performance levels
How about growth trends?
Will there be comparative growth measures?
What measures would better support instructional improvement? (Classroom Assessment Scoring System, Prof. Robert Pianta, et al.)
- Emotional Support: classroom climate; teacher sensitivity; regard for child perspectives
- Classroom Organization: behavior management / guidance; productivity; maximizing engagement
- Instructional Support: concept development; quality of feedback; language modeling
What would be meaningful measures to guide reducing the gap?
- Improved early education and child care measures
- Standardized school climate measures (staff collaboration; student trust of teachers)
- Better tools for diagnostic assessments (emphasis on formative assessments)
- Tools to smoothly monitor the effectiveness of interventions