Aligning VALUE Rubrics to Institutional Purposes: Sharing Practical Approaches that Promote Consistent Scoring and Program Improvement Linda Siefert

1 Aligning VALUE Rubrics to Institutional Purposes: Sharing Practical Approaches that Promote Consistent Scoring and Program Improvement Linda Siefert (siefertl@uncw.edu) and Anne Pemberton (pembertona@uncw.edu), University of North Carolina Wilmington

2 Discussion Topics
Who is using VALUE rubrics or other meta-rubrics?
How are they being used?
What feedback is being gathered from the rubric users?
What procedures are being used to improve consistency of scoring?
What changes have been made at the institutions using the rubrics?

3 POLL
Are you using AAC&U VALUE Rubrics at your institution? For General Education assessment? For assessment in the majors?
Are you using other meta-rubrics at your institution? For General Education assessment? For assessment in the majors?

4 Which VALUE Rubrics do you use?
Inquiry and analysis
Critical thinking
Creative thinking
Written communication
Oral communication
Reading
Quantitative literacy
Information literacy
Teamwork
Problem solving
Civic knowledge and engagement
Intercultural knowledge
Ethical reasoning
Foundations and skills for life-long learning
Integrative and applied learning
www.aacu.org/VALUE/rubrics/index_p.cfm

5 Implementation Procedures (1) How we are using the rubrics:
A number of VALUE Rubrics are aligned to our UNCW Learning Goals. These rubrics are used to score student work products from general education courses and senior capstone courses.
Courses with student learning outcomes aligned to the Learning Goals are selected to be representative of those taken by most students. Sections are selected by stratified random sampling, and students within sections are selected randomly.
A workshop is held to acquaint or reacquaint instructors with the rubric(s) before the semester begins. Instructors select an assignment that they believe matches most or all dimensions of the rubric.
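The sampling step above (sections chosen by stratified random sampling, students chosen randomly within sections) can be sketched in a few lines of Python. The course names, rosters, and per-section sample size below are hypothetical, purely for illustration; the presentation does not specify these details:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sections grouped by course (the stratum), each with a
# roster of student IDs. Real strata and rosters would come from the
# institution's enrollment data.
sections_by_course = {
    "ENG 101": [["s1", "s2", "s3", "s4"], ["s5", "s6", "s7"]],
    "PHY 101": [["s8", "s9"], ["s10", "s11", "s12"]],
}

sampled = {}
for course, sections in sections_by_course.items():
    section = random.choice(sections)            # one section per stratum
    k = min(2, len(section))                     # hypothetical sample size
    sampled[course] = random.sample(section, k)  # random students within it

print(sampled)
```

Sampling within every course (stratum) guarantees each representative course contributes students, which a simple random draw over all sections would not.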

6 Implementation Procedures (2) Scoring Workshop
A two-hour workshop is held prior to scoring. Scorers report that the training is adequate, yet a few say that they were not as prepared on the day of scoring as they thought they were.
Scoring is performed at an all-day or half-day session. Scorers score the first work product from each packet together in pairs to recalibrate. Additional work products are double-scored to measure interrater reliability (IRR).

7 Implementation Procedures (3) How are you using VALUE or other meta-rubrics?

8 Feedback from Scorers Feedback we've received:
The written communication rubric fits assignments well, requiring few assumptions.
Inquiry is approached differently across disciplines. Most of these differences fall under Design Process: most scoring pairs needed to make assumptions about the process, which had to be inferred from the assignment. At the basic studies level, Topic Selection was often determined to be not applicable.
Critical thinking has been the most difficult rubric to apply; some difficulty comes from the rubric, some from the assignments.
A number of scorers said that the assignments needed to be better matched to the rubrics, and several commented on the need for faculty to provide more in-depth instructions for assignments.
Feedback you've received:

9 Feedback from Instructors Feedback we've received:
All instructors to date have said that the assignment selection process was not difficult, which does not match scorer feedback about some of the assignments.
One instructor said that more training would be beneficial. This is an area we will be working on.
Feedback you've received:

10 Interrater Reliability We measure using percent agreement, Spearman's rho, and Krippendorff's alpha. During the first round, we met our benchmark on 3 of 15 dimensions and were close on 3 more. Meta-rubrics are more difficult to apply than assignment-specific rubrics. How are you measuring, and what are your findings?
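As an illustration of the simplest of these measures, percent agreement between a pair of scorers can be computed directly from their paired scores. The scores below are hypothetical, and the adjacent-agreement variant shown is a common relaxation for 4-level rubrics, not necessarily the benchmark used at UNCW:

```python
# Hypothetical paired rubric scores (1-4) from two scorers on the
# same eight double-scored work products.
rater_a = [3, 2, 4, 1, 3, 2, 4, 3]
rater_b = [3, 2, 3, 1, 3, 2, 4, 2]

n = len(rater_a)

# Exact percent agreement: share of work products scored identically.
exact = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Adjacent agreement: scores within one rubric level of each other.
adjacent = sum(abs(a - b) <= 1 for a, b in zip(rater_a, rater_b)) / n

print(f"Exact agreement: {exact:.2f}")        # 6/8 = 0.75
print(f"Adjacent agreement: {adjacent:.2f}")  # 8/8 = 1.00
```

Percent agreement is easy to interpret but ignores chance agreement, which is why chance-corrected statistics such as Krippendorff's alpha are typically reported alongside it.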

11 Changes to VALUE Rubrics We have made one change, to the Evidence dimension of Critical Thinking: we divided interpretation from questioning the viewpoints of experts. What changes have you made to any of the VALUE rubrics? To fit institutional mission and use? To improve consistent use?

12 Changes Made to Instruction We have started a UNCW Learning Goals series through the Center for Teaching Excellence to begin conversations about each learning goal. Faculty are beginning to grapple with the difference between thinking and critical thinking. What changes are being made at your institutions?

