C R E S S T / CU University of Colorado at Boulder National Center for Research on Evaluation, Standards, and Student Testing Design Principles for Assessment and Accountability Systems Robert L. Linn AERA Symposium: Designing Ideal Assessment and Accountability Systems New Orleans, LA April 25, 2000

C R E S S T / CU Sources of Principles  Experience with Previous Assessment and Accountability Systems  Successes  Unintended Negative Effects  Test Standards  CRESST Model

C R E S S T / CU Validity, Fairness, Credibility, Utility Are Underlying Principles

C R E S S T / CU Identification of Intended Uses  Validity is specific to particular interpretations and uses of assessment results  Standard 1.1: “A rationale should be presented for each recommended interpretation and use of test scores, together with a comprehensive summary of the evidence and theory bearing on the intended use or interpretation” (p. 17).

C R E S S T / CU Education Broader than Assessment and Accountability  Potentially useful tools if supported by  Curriculum  Professional Development  Resources

C R E S S T / CU Symmetry  Educator Accountability  Teachers  Administrators  Student Accountability  Consequences  Opportunity  Policymaker and Public Accountability

C R E S S T / CU Develop standards, then assessments  How standards are formulated matters  How standards are assessed makes a difference.  WYTIWYG (What You Test Is What You Get) premise.

C R E S S T / CU Alignment  Content Standards  Curriculum  Instruction  Assessment

C R E S S T / CU Provide teachers with the resources and professional development required for students to meet the high expectations  Tests should be used for high-stakes decisions about individuals only after students have been taught the knowledge and skills on which they will be tested. (Heubert & Hauser, Eds., High Stakes: Testing for Tracking, Promotion, and Graduation, National Academy Press, 1999)

C R E S S T / CU Frames of Reference  Content and Performance Standards  Status  Progress  Normative Comparisons

C R E S S T / CU Set standards that are high, but obtainable  Educational standards at the national, state, and district levels are often inconsistent.  Standards are being set that seem out of reach, at least in the near term.  Holding all students to the same standards will either lead to a lowering of standards or to untenable retention and failure rates.

C R E S S T / CU Attend to Both Status and Progress  Absolute targets for performance maintain the focus on goals for all students  Acknowledgement of progress allows recognition of improvement and reduces discouraging effects of goals that currently seem unobtainable
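The status-and-progress distinction above can be sketched in code: status measures performance against an absolute target, while progress measures improvement from year to year. This is an illustrative sketch only; the scores, cut score, and function names are hypothetical, not from the presentation.

```python
# Hypothetical sketch of a status metric (percent proficient against an
# absolute cut score) and a progress metric (year-over-year change).
# All scores and the cut score are illustrative assumptions.

def status(scores, cut):
    """Percent of students at or above the proficiency cut score."""
    return 100.0 * sum(s >= cut for s in scores) / len(scores)

def progress(scores_this_year, scores_last_year, cut):
    """Change in percent proficient from last year to this year."""
    return status(scores_this_year, cut) - status(scores_last_year, cut)

last_year = [210, 225, 235, 255, 260, 199, 230, 245]
this_year = [215, 230, 250, 256, 262, 205, 238, 248]
CUT = 240  # illustrative proficiency cut score

print(round(status(this_year, CUT), 1))              # status vs. the absolute target
print(round(progress(this_year, last_year, CUT), 1)) # improvement recognized
```

Reporting both numbers keeps the absolute goal visible while still crediting a school whose proficiency rate rose, even if it remains below the target.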

C R E S S T / CU Place more emphasis on comparisons of performance from year to year than from school to school  Comparisons among schools immediately raise questions of fairness and whether fairness requires taking context into account.  Use new high-quality assessments each year that yield scores comparable to those of previous years.

C R E S S T / CU Track progress for subgroups of students as well as the total group  Title I requirements for disaggregation.  Ensures attention to all students.
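The disaggregation requirement above amounts to reporting the same metric for the total group and for each subgroup. A minimal sketch, with hypothetical subgroup labels and scores:

```python
# Illustrative sketch of disaggregated reporting (Title I-style).
# Subgroup labels, scores, and the cut score are hypothetical.
from collections import defaultdict

def disaggregate(records, cut):
    """Percent proficient for the total group and each subgroup."""
    groups = defaultdict(list)
    for subgroup, score in records:
        groups["All Students"].append(score)  # every student counts in the total
        groups[subgroup].append(score)
    return {g: 100.0 * sum(s >= cut for s in scores) / len(scores)
            for g, scores in groups.items()}

records = [("A", 250), ("A", 230), ("B", 245), ("B", 210), ("B", 260)]
print(disaggregate(records, cut=240))
```

A total-group figure alone can mask a subgroup that is falling behind; computing the same statistic per subgroup is what forces attention to all students.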

C R E S S T / CU Include all students in testing except those with the most severe disabilities  Which students are included in or excluded from testing can change the results, creating accountability incentives to distort them.  Traditionally excluded students are often capable of participating in assessments.  Use accommodated assessments for students who have not yet transitioned into English language programs or whose disabilities require them.

C R E S S T / CU Use multiple measures to make important decisions  Tests have validity only in relation to specific purposes.  Tests are not perfect, and neither are the alternatives.  No high-stakes educational decision about a test taker should be made solely or automatically on the basis of a single test score; other relevant information should also be taken into account. (Heubert & Hauser, Eds., High Stakes: Testing for Tracking, Promotion, and Graduation, National Academy Press, 1999)

C R E S S T / CU Standard 13.7: “In educational settings, a decision or characterization that will have a major impact on a student should not be made on the basis of a single test score. Other relevant information should be taken into account if it will enhance the overall validity of the decision” (p. 146).
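One way to operationalize the multiple-measures principle in Standard 13.7 is a compensatory rule, where a composite of several measures drives the decision so that no single score decides alone. The measures, weights, and threshold below are illustrative assumptions, not a method from the presentation:

```python
# Hypothetical compensatory decision rule: several measures, each scaled
# to 0-1, are combined so a single low test score cannot decide alone.
# The weights and threshold are illustrative assumptions.

def promote(test_score, course_grade, portfolio,
            weights=(0.5, 0.3, 0.2), threshold=0.6):
    """Combine multiple measures into one promotion decision."""
    composite = (weights[0] * test_score
                 + weights[1] * course_grade
                 + weights[2] * portfolio)
    return composite >= threshold

# A weak test score does not, by itself, fail a student whose other
# evidence (grades, portfolio) is strong.
print(promote(test_score=0.45, course_grade=0.85, portfolio=0.80))
```

A conjunctive rule (requiring a minimum on every measure) is the main alternative; the choice between them is itself a validity question tied to the decision's purpose.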

C R E S S T / CU Evaluate and Report on the Accuracy of Results  Student Classification  School Classification  Standard 13.14: “In educational settings, score reports should be accompanied by a clear statement of the degree of measurement error associated with each score or classification level and information on how to interpret the scores” (p. 148).
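The measurement-error reporting that Standard 13.14 calls for can be illustrated with the classical standard error of measurement, SEM = SD × √(1 − reliability), and the resulting confidence band around an observed score. The score, reliability, and cut score below are hypothetical:

```python
# Classical-test-theory sketch: SEM and a confidence band around a score,
# used to flag classifications that fall too close to a cut score.
# The observed score, SD, reliability, and cut score are illustrative.
import math

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def score_band(score, sd, reliability, z=1.96):
    """Approximate 95% confidence band around an observed score."""
    e = z * sem(sd, reliability)
    return (score - e, score + e)

lo, hi = score_band(score=238, sd=15.0, reliability=0.91)
CUT = 240
uncertain = lo < CUT < hi  # the cut falls inside the band
print(round(lo, 1), round(hi, 1), uncertain)
```

When the band straddles the cut score, as here, the proficient/not-proficient classification is statistically uncertain, which is precisely the information score reports should convey alongside each score or classification level.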

C R E S S T / CU Evaluate and Report on the Impact of the Assessment and Accountability System  Intended Effects  Instruction  Student learning  Unintended Effects  Instruction  Effects on Students

C R E S S T / CU Evaluate both the intended positive effects and the unintended negative effects of the assessment and accountability system  Gains in scores do not necessarily signal improved learning and achievement.