1 Using Rubrics to Collect Evidence for Decision-Making: What Do Librarians Need to Learn?
Megan Oakleaf, MLS, PhD
School of Information Studies, Syracuse University
4th International Evidence Based Library & Information Practice Conference, May 2007
2 Overview
- Introduction
- Definition & Benefits of Rubrics
- Methodology
- Emergence of an Expert Rubric User Group
- Characteristics of Expert Rubric Users
- Barriers to Expert Use of Rubrics
- The Need for Training
- Directions for Future Research
3 Rubrics Defined
Rubrics:
- describe the (1) parts, indicators, or criteria and (2) levels of performance of a particular task, product, or service
- are formatted on a grid or table
- are employed to judge quality
- are used to translate difficult, unwieldy data into a form that can be used for decision-making
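For readers who think in code, the grid described above maps naturally onto a small data structure. The sketch below is illustrative only; the criteria, level labels, descriptions, and scoring rule are hypothetical, not taken from the presentation.

```python
# A minimal, illustrative rubric: each criterion maps ordered
# performance-level labels to the description a rater matches against.
# Criterion names and descriptions are hypothetical.

LEVELS = ["Beginning", "Developing", "Exemplary"]  # ordered low to high

rubric = {
    "Cites sources": {
        "Beginning": "No sources cited",
        "Developing": "One source cited",
        "Exemplary": "Two or more sources cited and explained",
    },
    "Evaluates authority": {
        "Beginning": "Authority of sources not mentioned",
        "Developing": "Authority mentioned without justification",
        "Exemplary": "Authority assessed with specific evidence",
    },
}

def score(ratings: dict) -> int:
    """Translate a rater's level choices into a numeric score by
    summing each chosen level's position on the ordered scale."""
    return sum(LEVELS.index(level) for level in ratings.values())

# One rater's judgment of a single student response:
print(score({"Cites sources": "Developing", "Evaluates authority": "Exemplary"}))  # 1 + 2 = 3
```

This is the "translation" step the definition mentions: qualitative judgments become point values that can feed decision-making.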
4 Rubrics are often used to inform instructional decisions and evaluations.
5 Potential Rubric Uses in Libraries
To analyze and evaluate:
- Information-seeking behavior
- Employee customer service skills
- Marketing/outreach efforts
- Collection strengths
- Information commons spaces
- Student information literacy skills
6 Rubric for a Library Open House Event for First Year Students
(Rubric created by Katherine Thurston & Jennifer Bibbens)

Indicator | Beginning | Developing | Exemplary | Data Source
Attendance | Attendance rates are similar to the 2006 Open House | Attendance rates increase by 20% from the 2006 Open House | Attendance rates increase by 50% from the 2006 Open House | Staff [committee and volunteer] records
Staff Participation | Staff participation is similar to the 2006 Open House; no volunteers | Increased participation by library staff [librarians and paraprofessionals] and student volunteers | Increased participation by library staff [librarians and paraprofessionals], student volunteers, student workers, and academic faculty |
Budget | Budget is the same as the 2006 Open House ($200) | Budget increases by $100 from the 2006 Open House | Budget increases by $300 from the 2006 Open House | Budget, financial statements
Reference Statistics | Reference statistics are similar to 2006 | Reference statistics increase by 20% from 2006 | Reference statistics increase by 50% from 2006 | Library Reference Department statistics
Student Attitudes | Students are pleased with the Open House | Students enjoy the Open House and are satisfied with the information | Students are excited about the Open House and volunteer to participate in next year's event | Survey
7 Rubric for a Virtual Reference Service
(Rubric created by Ana Guimaraes & Katie Hayduke)

Indicator | Beginning | Developing | Exemplary | Data Source
Transactions | 0–4 reference transactions per week | 5–7 reference transactions per week | 8+ reference transactions per week | Transaction logs
User Satisfaction | Students, faculty, and staff report they are "dissatisfied" or "very dissatisfied" with reference transactions | Students, faculty, and staff report they are "neutral" about reference transactions | Students, faculty, and staff report they are "satisfied" or "very satisfied" with reference transactions | User surveys
Training | Librarians report they are "uncomfortable" or "very uncomfortable" providing virtual reference service | Librarians report they are "neutral" about providing virtual reference service | Librarians report they are "comfortable" or "very comfortable" providing virtual reference service | Post-training surveys
Technology | 75–100% of transactions per week report dropped calls or technical difficulties | 25–74% of transactions per week report dropped calls or technical difficulties | 0–24% of transactions per week report dropped calls or technical difficulties | System transcripts
Electronic Resources | 0–49 hits on electronic resources per week | 50–99 hits on electronic resources per week | 100+ hits on electronic resources per week | Systems analysis logs
9 Benefits
Rubrics:
- provide librarians the opportunity to discuss, determine, and communicate agreed-upon values
- include descriptive, yet easily digestible data
- prevent inaccurate scoring
- prevent bias

When used in student learning contexts, rubrics:
- reveal the expectations of instructors and librarians to students
- offer more meaningful feedback than letter or numerical scores alone
- support not only student learning, but also self-evaluation and metacognition
10 The Research Question
To what extent can librarians use rubrics to make valid and reliable decisions?
- Library service: an information literacy tutorial
- Artifacts: student responses to questions within the tutorial
- Goal: to make decisions about the tutorial and the library instruction program
11 Methodology
- 75 randomly selected student responses to open-ended questions embedded in an information literacy tutorial at NCSU
- 25 raters:
  - 15 internal & trained (NCSU librarians, faculty, students)
  - 10 external & untrained (non-NCSU librarians)
- Raters coded artifacts using rubrics
- Raters' experiences captured on comment sheets
- Reliability statistically analyzed using Cohen's kappa
- Validity statistically analyzed using a "gold standard" approach and Cohen's kappa

This study employed a survey design methodology. The data came from student responses to open-ended questions embedded in an online information literacy tutorial. This textual data was translated into quantitative terms through the use of a rubric: raters coded student answers into pre-set categories, and these categories were assigned point values. The point values assigned to student responses were then subjected to quantitative analysis in order to describe student performance, test for interrater reliability, and explore the validity of the rubric. According to Lincoln, this approach is called "discovery phase" or preliminary experimental design, and it is commonly employed in the development of new rubrics. (Yvonna Lincoln, "Authentic Assessment and Research Methodology," communication to Megan Oakleaf.)
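To make the reliability statistic concrete, the sketch below computes Cohen's kappa for one rater against a "gold standard" set of codes, the same comparison the validity analysis describes. It is a minimal illustration, not the study's code; the ratings and category labels are hypothetical.

```python
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e): observed agreement
    corrected for the agreement expected by chance, where chance
    agreement comes from each rater's marginal category frequencies."""
    n = len(codes_a)
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(freq_a) | set(freq_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric codes for ten student responses: a "gold standard"
# coding versus one rater, mirroring the study's validity check.
gold  = ["Exemplary", "Beginning", "Developing", "Developing", "Exemplary",
         "Beginning", "Beginning", "Developing", "Exemplary", "Beginning"]
rater = ["Exemplary", "Beginning", "Developing", "Beginning", "Exemplary",
         "Beginning", "Developing", "Developing", "Exemplary", "Beginning"]

print(round(cohen_kappa(gold, rater), 2))  # about 0.7 here: "substantial"
```

The same function applied to two raters' codes gives the interrater reliability figure; applied against gold-standard codes, it serves as the validity check.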
13 Kappa Index

Kappa Statistic | Strength of Agreement
0.81–1.00 | Almost Perfect
0.61–0.80 | Substantial
0.41–0.60 | Moderate
0.21–0.40 | Fair
0.00–0.20 | Slight
< 0.00 | Poor

(Landis & Koch, 1977)
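The same index can be expressed as a small lookup, convenient when labeling many raters' kappas at once. A minimal sketch, assuming only the Landis & Koch thresholds above:

```python
def agreement_strength(kappa):
    """Label a kappa value with its Landis & Koch (1977) band."""
    if kappa < 0.00:
        return "Poor"
    for upper, label in [(0.20, "Slight"), (0.40, "Fair"), (0.60, "Moderate"),
                         (0.80, "Substantial"), (1.00, "Almost Perfect")]:
        if kappa <= upper:
            return label

print(agreement_strength(0.72))  # "Substantial" -- the top average kappa below
```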
14 Average Kappa

Rank | Average Kappa | Participant Group | Status
1 | 0.72 | NCSU Librarian | Expert
2 | 0.69 | Instructor | Expert
3 | 0.67 | | Expert
4 | 0.66 | | Expert
5 | 0.62 | | Expert
6 | 0.61 | | Non-Expert
7 | 0.59 | | Non-Expert
8 | 0.58 | Student | Non-Expert
9 | 0.56 | | Non-Expert
10 | 0.55 | | Non-Expert
11 | 0.55 | | Non-Expert
12 | 0.54 | | Non-Expert
13 | 0.52 | | Non-Expert
14 | 0.52 | | Non-Expert
15 | 0.43 | External Instruction Librarian | Non-Expert
16 | 0.32 | External Reference Librarian | Non-Expert
17 | 0.31 | | Non-Expert
18 | 0.31 | | Non-Expert
19 | 0.30 | | Non-Expert
20 | 0.30 | | Non-Expert
21 | 0.27 | | Non-Expert
22 | 0.21 | | Non-Expert
23 | 0.19 | | Non-Expert
24 | 0.14 | | Non-Expert
25 | 0.13 | | Non-Expert

Expert status does not appear to be correlated with educational background, experience, or position within the institution.
17 Expert Characteristics
Expert raters:
- focus on general features of the artifact
- adopt the values of the rubric
- revisit criteria while scoring
- have received training
18 Non-Expert Characteristics
- diverse outlooks or perspectives
- prior knowledge or experiences
- fatigue
- mood
- other barriers
19 Barrier 1: Difficulty Understanding an Outcomes-Based Approach
Many librarians are more familiar with inputs/outputs than outcomes.
Comments from raters:
- Using measurable outcomes to assess student learning focuses too much on specific skills: too much "science" and not enough "art."
- "While the rubric measures the presence of concepts…it doesn't check to see if students understand [the] issues."
- "This rubric tests skills, not…real learning."
20 Barrier 2: Tension between Analytic & Holistic Approaches
Some librarians are unfamiliar with analytic evaluation.
Comments from raters:
- The rubric "was really simple. But I worried that I was being too simplistic…and not rating [student work] holistically."
- "The rubric is a good and a solid way to measure knowledge of a process but it does not allow for raters to assess the response as a whole."
21 Analytic vs. Holistic
Analytic rubrics:
- Better for judging complex artifacts
- Allow separate evaluation of artifacts with multiple facets
- Provide more detailed feedback
- Take more time to create and use
- Bottom line: better for providing formative feedback

Holistic rubrics:
- Better for simple artifacts with few facets
- Good for getting a "snapshot" of quality
- Provide only limited feedback
- Do not offer detailed analysis of strengths/weaknesses
- Bottom line: better for giving summative scores
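The formative/summative distinction is easy to see in code: an analytic score keeps per-criterion detail, while a holistic score collapses to a single judgment. A minimal, hypothetical sketch (the criteria and values are invented for illustration, not from the presentation):

```python
# Hypothetical per-criterion levels from one rater
# (0 = Beginning, 1 = Developing, 2 = Exemplary).
analytic_ratings = {"cites sources": 2,
                    "evaluates authority": 1,
                    "explains reasoning": 0}

def analytic_score(ratings):
    """Analytic: keep per-criterion detail so feedback can name
    the exact facet that needs work (formative use)."""
    return {"total": sum(ratings.values()), "by_criterion": ratings}

def holistic_score(overall_level):
    """Holistic: one overall judgment of the artifact, a quick
    snapshot of quality (summative use) with no facet detail."""
    return overall_level

print(analytic_score(analytic_ratings))  # total 3, plus where points were lost
print(holistic_score(1))                 # a single "Developing" judgment
```

The analytic result can tell a student exactly which facet fell short; the holistic result can only report an overall level.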
22 Barrier 3: Failure to Comprehend the Rubric
Some librarians may not understand all aspects of a rubric.
Comments from raters:
- "I decided to use literally examples, indicators to mean that students needed to provide more than one."
- "The student might cite one example…but not…enough for me to consider it exemplary."
23 Barrier 4: Disagreement with the Assumptions of the Rubric
Some librarians may not agree with all assumptions and values espoused by a rubric.
Comment from a rater:
- The rubric "valued students' ability to use particular words but does not measure their understanding of concepts."
25 Barrier 5: Difficulties with Artifacts
Some librarians may be stymied by atypical artifacts.
Comments from raters:
- I found myself "giving the more cryptic answers the benefit of the doubt."
- "If a student answer consists of a bulleted list of responses to the prompt, but no discussion or elaboration, does that fulfill the requirement?"
- "It's really hard…when students are asked to describe, explain, draw conclusions, etc. and some answer with one word."
26 Barrier 6: Difficulties Understanding Library Context & Culture
Librarians need campus context to use rubrics well.
27 Training Topics
- Value & principles of outcomes-based analysis and evaluation
- Theories that underlie rubrics
- Advantages & disadvantages of rubric models
- Structural issues that limit rubric reliability and validity (too general or too specific, too long, focused on quantity rather than quality, etc.)
- Ways to eliminate disagreement about rubric assumptions
- Methods for handling atypical artifacts
28 Future Research
Investigate:
- attributes of expert raters
- effects of different types and levels of rater training
- non-instruction library artifacts
- impact of diverse settings
29 Conclusion
Are rubrics worth the time and energy? This study confirmed the value of rubrics: nearly all participants stated that they could envision using rubrics to improve library instructional services. Such feedback attests to the merit of rubrics as tools for effective evidence-based decision-making.
30 References

American Library Association. Information Literacy Competency Standards for Higher Education. 2000. 22 April 2005 <http://www.ala.org/ala/acrl/acrlstandards/informationliteracycompetency.htm>.
Arter, Judith, and Jay McTighe. Scoring Rubrics in the Classroom: Using Performance Criteria for Assessing and Improving Student Performance. Thousand Oaks, California: Corwin Press, 2000.
Bernier, Rosemarie. "Making Yourself Indispensable by Helping Teachers Create Rubrics." CSLA Journal 27.2 (2004).
Bresciani, Marilee J., Carrie L. Zelna, and James A. Anderson. Assessing Student Learning and Development: A Handbook for Practitioners. Washington: National Association of Student Personnel Administrators, 2004.
Callison, Daniel. "Rubrics." School Library Media Activities Monthly 17.2 (Oct. 2000): 34.
Colton, Dean A., Xiaohong Gao, Deborah J. Harris, Michael J. Kolen, Dara Martinovich-Barhite, Tianyou Wang, and Catherine J. Welch. Reliability Issues with Performance Assessments: A Collection of Papers. ACT Research Report Series 97-3, 1997.
Gwet, Kilem. Handbook of Inter-Rater Reliability: How to Estimate the Level of Agreement between Two or Multiple Raters. Gaithersburg, Maryland: STATAXIS, 2001.
Hafner, John C. "Quantitative Analysis of the Rubric as an Assessment Tool: An Empirical Study of Student Peer-Group Rating." International Journal of Science Education (2003).
Iannuzzi, Patricia. "We Are Teaching, But Are They Learning: Accountability, Productivity, and Assessment." Journal of Academic Librarianship 25.4 (1999).
Landis, J. Richard, and Gary G. Koch. "The Measurement of Observer Agreement for Categorical Data." Biometrics 33 (1977).
Lichtenstein, Art A. "Informed Instruction: Learning Theory and Information Literacy." Journal of Educational Media and Library Sciences 38.1 (2000).
Mertler, Craig A. "Designing Scoring Rubrics for Your Classroom." Practical Assessment, Research, and Evaluation 7.25 (2001).
Moskal, Barbara M. "Scoring Rubrics: What, When, and How?" Practical Assessment, Research, and Evaluation 7.3 (2000).
Nitko, Anthony J. Educational Assessment of Students. Englewood Cliffs, New Jersey: Prentice Hall, 1996.
Popham, W. James. Test Better, Teach Better: The Instructional Role of Assessment. Alexandria, Virginia: Association for Supervision and Curriculum Development, 2003.
Prus, Joseph, and Reid Johnson. "A Critical Review of Student Assessment Options." New Directions for Community Colleges 88 (1994).
Smith, Kenneth R. New Roles and Responsibilities for the University Library: Advancing Student Learning through Outcomes Assessment. Association of Research Libraries, 2000.
Stevens, Dannielle D., and Antonia Levi. Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning. Sterling, Virginia: Stylus, 2005.
Tierney, Robin, and Marielle Simon. "What's Still Wrong with Rubrics: Focusing on the Consistency of Performance Criteria across Scale Levels." Practical Assessment, Research, and Evaluation 9.2 (2004).
Wiggins, Grant. "Creating Tests Worth Taking." A Handbook for Student Performance in an Era of Restructuring. Eds. R. E. Blum and Judith Arter. Alexandria, Virginia: Association for Supervision and Curriculum Development, 1996.
Wolfe, Edward W., Chi-Wen Kao, and Michael Ranney. "Cognitive Differences in Proficient and Nonproficient Essay Scorers." Written Communication 15.4 (1998).