Slide 1: ECD as KR (Evidence-Centered Design as Knowledge Representation)
Robert J. Mislevy, University of Maryland
Roy Levy, University of Maryland
Eric G. Hansen, Educational Testing Service
(builds on work with Linda Steinberg and Russell Almond)
March 6, 2003

Slide 2: Knowledge Representations
- A knowledge representation (KR) is a structure for expressing, communicating, and thinking about important entities and relationships in some domain. Examples: maps, wiring diagrams, physics equations, nested lists; object models for business systems and computer systems; evidence-centered design models and structures.
- KRs are surrogates for something else: a real-world situation, a class of situations, or a representation in other KRs. They capture some entities and relationships but ignore others. The included entities, relationships, and processes are the ontology of the KR: what kinds of things you think about, and how.
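One way to read "evidence-centered design models and structures" as an object-model KR is to sketch its main entities as classes. The sketch below is a minimal illustration under our own naming assumptions (the class and attribute names are hypothetical, not the PADI or ETS object model): a student model holds the proficiency variables we want to draw inferences about, an evidence model links observables in student work to those variables, and a task model describes the situations that elicit the work.

```python
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    """Variables describing the knowledge and skill we want to draw inferences about."""
    proficiency_vars: dict[str, list[str]]      # variable name -> possible states

@dataclass
class EvidenceModel:
    """How observables extracted from student work bear on student-model variables."""
    observables: list[str]                      # e.g., ["controls_variables", "justifies_choice"]
    evaluation_rules: dict[str, str]            # observable -> informal scoring rule
    statistical_links: dict[str, list[str]]     # observable -> proficiency variables it updates

@dataclass
class TaskModel:
    """Features of the situations that can evoke the evidence."""
    presentation_material: str
    variable_features: dict[str, str] = field(default_factory=dict)  # e.g., {"context": "ecology"}

@dataclass
class AssessmentDesign:
    """The design bundle: one student model plus the evidence and task models that feed it."""
    student_model: StudentModel
    evidence_models: list[EvidenceModel]
    task_models: list[TaskModel]
```

Re-using or re-purposing an assessment element then amounts to swapping one of these objects while keeping its links to the others intact.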

Slide 3: KRs are useful when they highlight important relationships and make them easier to work with.
- KRs facilitate analogies across problems and domains. In what ways are AP Studio Art, the SAT, Hydrive, and an oral language-proficiency interview alike?
- KRs make it easier to acquire and structure information, e.g., the ECD design process in ETS Teaching & Learning programs.
- KRs can facilitate working together: an ECD object model for sharing, re-using, and re-purposing the elements and processes in assessments.
- KRs are significant in planning. What will a solution have to look like? Which elements in assessments can vary substantially, and which relationships must hold?
- Overlapping KRs coordinate work in complex systems: multiple ECD KRs, with bridges among them, for different, interrelated parts of assessment (substance to argument to specs & models to operation to reporting).

Slide 4: *** Warning: cognitive overload ***

Slide 5: Where you usually start
What are all the kinds of things that are important to know and do, and when and how? What does good work look like? This knowledge is not generally organized according to assessment arguments. (In ECD, this stage is "domain analysis.")
KRs: Idiosyncratic representations from the domains themselves, evolved to suit the domains' own purposes.

Slide 6: Where you usually want to go
An operational assessment system: the pieces and processes that gather, evaluate, and report evidence, to achieve the assessment's purpose. (In ECD, the "assessment delivery system.")
KRs: An object model for the delivery system.
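The ECD literature describes the delivery system in terms of four cooperating processes: activity selection, presentation, evidence identification (evaluating the work), and evidence accumulation (updating the student model). The loop below is a minimal sketch of that cycle, assuming the caller supplies each process as a function; all of the names are hypothetical placeholders rather than any operational system's API.

```python
def run_delivery_cycle(task_pool, select_task, present, identify_evidence, accumulate, student_state):
    """Minimal sketch of a delivery loop in the spirit of ECD's four-process architecture.

    select_task, present, identify_evidence, and accumulate are caller-supplied
    stand-ins for the activity-selection, presentation, evidence-identification,
    and evidence-accumulation processes.
    """
    while task_pool:
        task = select_task(task_pool, student_state)             # activity selection
        if task is None:                                         # e.g., enough evidence gathered
            break
        work_product = present(task)                             # presentation: administer task, capture work
        observables = identify_evidence(task, work_product)      # evidence identification: evaluate the work
        student_state = accumulate(student_state, observables)   # evidence accumulation: update student model
        task_pool.remove(task)
    return student_state                                         # basis for score reporting
```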

Slide 7: How do you get from here to there?
That is, from knowledge about the domain to objects and processes that meet the purposes you had in mind?

Slide 8: What's in between (1)
The assessment argument: What knowledge, skills, accomplishments, etc., of students do you want to draw inferences about? What do you need to see them say, do, or make? What circumstances can evoke this evidence? (Messick, 1994)
KRs: Toulmin and Wigmore diagrams.
[EH: How about "What's in between (part 1)"? Just a thought.]
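A Toulmin diagram can itself be written down as a small data structure, which is one way to see it as a KR that presages later design objects. The sketch below is illustrative only; the field names and example content are our assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    """Sketch of a Toulmin-style assessment argument (hypothetical field names)."""
    claim: str                                  # inference about the student
    data: list[str]                             # observations from the work product
    warrant: str                                # why the data support the claim
    backing: str = ""                           # theory or experience behind the warrant
    alternative_explanations: list[str] = field(default_factory=list)  # rival accounts of the data

example = ToulminArgument(
    claim="Student can control variables in an investigation",
    data=["varied one factor at a time", "held other conditions constant"],
    warrant="Producing this pattern of work requires deliberately controlling variables",
    backing="Research on inquiry skills in science education",
    alternative_explanations=["followed a worked example without understanding it"],
)
```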

Slide 9: What's in between (2)
Organizing the argument in KRs that presage the structure of assessment elements and processes, while remaining substantively meaningful. (In ECD, "domain modeling.")
KRs: ETS "paradigms"; T&L forms; PADI design patterns; Bayes nets for arguments; the BEAR construct-map structure.
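A PADI design pattern is essentially a structured form whose slots organize the argument in substantive terms. The sketch below shows how such a pattern can be written down as plain data; the slot names echo the design-pattern idea (focal KSAs, potential observations, task features), but the particular fields and example values are illustrative assumptions rather than the project's actual schema.

```python
# Illustrative sketch of a design-pattern-like KR; field names and values are assumptions.
design_pattern = {
    "title": "Model-based reasoning in science inquiry",
    "focal_KSAs": ["reason from a model to a prediction"],            # primary knowledge/skills targeted
    "additional_KSAs": ["reading load", "familiarity with context"],  # other demands the task may impose
    "potential_observations": ["prediction is consistent with the model",
                               "justification cites features of the model"],
    "potential_work_products": ["written prediction", "annotated diagram"],
    "characteristic_task_features": ["a model is given or elicited"],
    "variable_task_features": ["content area", "amount of scaffolding"],
}
```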

Slide 10: What's in between (2, continued)
Organizing the argument this way also has implications for what information you need before administering the assessment, and for how you can interpret the results.
[EH: Very good point for disability access.]

Slide 11: What's in between (2, continued)
You also need to establish correspondence between the common assessment KRs and the domain-specific KRs that address key entities and relationships from the domain, as they are organized into the assessment argument and, from there, into assessment structures. E.g., BioKIDS' structure/demand matrices and FOSS's filled-in construct maps.

Slide 12: What's in between (3)
Models and specifications for operational elements and processes. (The ECD "Conceptual Assessment Framework.")
KRs: Student, evidence, and task models; Bayes nets; measurement-model equations; task templates; generalized rubrics and scoring algorithms.
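To make the Bayes-net idea concrete, here is a hand-computed sketch of the kind of update a student-model variable undergoes when one observable arrives. The variable names and probabilities are invented for illustration, not drawn from any operational evidence model.

```python
# Two-node Bayes net sketch: a proficiency variable (theta) and one observable (item response).
# All probabilities below are illustrative assumptions.

prior = {"high": 0.5, "low": 0.5}            # P(theta) before seeing the response
p_correct_given = {"high": 0.8, "low": 0.3}  # evidence-model link: P(correct | theta)

def posterior_given_correct(prior, likelihood):
    """Bayes' rule: P(theta | correct) is proportional to P(correct | theta) * P(theta)."""
    joint = {state: likelihood[state] * prior[state] for state in prior}
    total = sum(joint.values())
    return {state: value / total for state, value in joint.items()}

print(posterior_given_correct(prior, p_correct_given))
# -> {'high': 0.727..., 'low': 0.272...}; a correct response shifts belief toward "high"
```

A measurement-model equation (e.g., an item response model) plays the same role when the proficiency variable is continuous rather than discrete.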

Slide 13: The upshot
Work through these KRs and you get machinery that embodies the substantive assessment argument, built to meet the purposes you had in mind.