Rubrics, and Validity, and Reliability: Oh My! Pre-Conference Session, The Committee on Preparation and Professional Accountability, AACTE Annual Meeting 2016



2 Rubrics, and Validity, and Reliability: Oh My! Pre-Conference Session, The Committee on Preparation and Professional Accountability, AACTE Annual Meeting 2016

3 CPPA Members
George Drake, Dean, College of Education and Human Services, Millersville University
Mark Meyers, Educational Administration Program Director, Xavier University
Trish Parrish (Committee chair), Associate Vice President, Academic Affairs, Saint Leo University
Debbie Rickey, Associate Dean, College of Education, Grand Canyon University
Carol Ryan, Associate Dean, College of Education and Human Services, Northern Kentucky University
Jill Shedd, Assistant Dean for Teacher Education, School of Education, Indiana University
Carol Vukelich (Board liaison), Interim Dean, College of Education and Human Development, University of Delaware

4 Agenda
Welcome and Introductions
Rubric Design
BREAK
Table Work: Improving Rubrics
Application: Rubrics for Field Experiences
Debrief and Wrap Up

5 Table Discussion
What type of assessment system does your EPP use?
Do you use locally developed instruments as part of your key assessment of teacher candidates?
Have you followed a formal process to establish reliability and validity of these instruments?

6 Rubric Design
Role of rubrics in assessment:
Formative and Summative Feedback
Transparency of Expectations
Illustrative Value

7 Rubric Design
Criteria for Sound Rubric Development:
Appropriate
Definable
Observable
Diagnostic
Complete
Retrievable

8 Rubric Design
Steps in Writing Rubrics:
Select Criteria
Set the Scale
Label the Ratings
Identify Basic Meaning
Describe Performance
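The five steps above describe a writing process, but it can help to see the structure they produce when a finished rubric is loaded into an assessment system. The Python sketch below is illustrative only: the class names are our own, and the labels and descriptor text are adapted from the tagging example later in this session rather than from any specific EPP instrument.

```python
# Illustrative sketch only: one way to represent the product of the five steps.
from dataclasses import dataclass, field

@dataclass
class Criterion:                       # Step 1: select criteria
    name: str
    descriptors: dict                  # Step 5: describe performance at each level

@dataclass
class Rubric:
    scale: list                        # Step 2: set the scale
    labels: dict                       # Step 3: label the ratings
    meanings: dict                     # Step 4: identify the basic meaning of each label
    criteria: list = field(default_factory=list)

rubric = Rubric(
    scale=[1, 2, 3],
    labels={1: "Ineffective", 2: "Emerging", 3: "Target"},
    meanings={
        1: "Does not yet meet the expectation",
        2: "Partially meets the expectation",
        3: "Fully meets the expectation",
    },
    criteria=[
        Criterion(
            name="Developing objectives",
            descriptors={
                1: "Objectives do not reflect key concepts of the discipline.",
                2: "Objectives reflect key concepts but are not aligned to standards.",
                3: "Objectives reflect key concepts and are aligned to standards.",
            },
        )
    ],
)
```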

9 Resources for Rubric Design
National Postsecondary Education Cooperative: http://nces.ed.gov/pubs2005/2005832.pdf
Rubric Bank at the University of Hawaii: http://www.manoa.hawaii.edu/assessment/resources/rubricbank.htm
University of Minnesota: http://www.carla.umn.edu/assessment/vac/improvement/p_4.html
Penn State Rubric Basics: http://www.schreyerinstitute.psu.edu/pdf/RubricBasics.pdf
AACU VALUE Project: http://www.aacu.org/value
VALUE Rubrics: http://www.aacu.org/value-rubrics
iRubric: http://www.rcampus.com/indexrubric.cfm

10 Introduction to Validity
Construct Validity: How well a rubric measures what it claims to measure
Content Validity: Estimate of how well the rubric aligns with all elements of a construct
Criterion Validity: Correlation with standards
Face Validity: A measure of how representative a rubric is "at face value"

11 Table Discussion
Which of the types of validity would be most helpful for locally developed rubrics?
Construct
Content
Criterion
Face

12 Approaches to Establishing Validity
A locally established methodology, such as tagging, developed by the EPP, with the rationale provided by the EPP
A research-based methodology, such as Lawshe's, which removes the need for the EPP to develop its own rationale

13 Tagging Example
CAEP 1.2: Providers ensure that candidates use research and evidence to develop an understanding of the teaching profession and use both to measure their P-12 students' progress and their own professional practice.

Component Descriptor: Developing objectives
Ineffective: Lists learning objectives that do not reflect key concepts of the discipline.
Emerging: Lists learning objectives that reflect key concepts of the discipline but are not aligned with relevant state or national standards.
Target: Lists learning objectives that reflect key concepts of the discipline and are aligned with state and national standards.

Component Descriptor: Uses pre-assessments
Ineffective: Fails to use pre-assessment data when planning instruction.
Emerging: Considers baseline data from pre-assessments; however, pre-assessments do not align with stated learning targets/objectives.
Target: Uses student baseline data from pre-assessments that are aligned with stated learning targets/objectives when planning instruction.

Component Descriptor: Planning assessments
Ineffective: Plans methods of assessment that do not measure student performance on the stated objectives.
Emerging: Plans methods of assessment that measure student performance on some of the stated objectives.
Target: Plans methods of assessment that measure student performance on each objective.

14 Table Discussion
Why is reliability important in locally developed rubrics?
Which types of reliability are most important for locally developed rubrics?

15 Introduction to Reliability
Inter-Rater: Extent to which different assessors have consistent results
Test-Retest: Consistency of a measure from one time to another
Parallel Forms: Consistency of results of two rubrics designed to measure the same content
Internal Consistency: Consistency of results across items in a rubric

16 Approaches to Establishing Reliability
Reliability can be established through a research-based approach or a locally developed approach.
More on this in the second half of our presentation!

17 Time to Practice
Opportunities to practice two methods of establishing validity:
Criterion (correlation with standards)
Content (estimate of how well the rubric aligns with all elements of a construct)
An opportunity to practice inter-rater reliability (extent to which different assessors have consistent results)

18 Criterion Validity (Correlation with Standards)
As an individual:
Review the Learning Environment section on the blue document
"Tag" each of the elements in that section to the appropriate InTASC and CAEP standards
As a table:
Come to a consensus on the most appropriate "tags" for each element of the Learning Environment section
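One way to record the tagging activity above, offered only as a hypothetical sketch: each rater's tags are logged per element, and the table's consensus is taken as the tags chosen by a majority of raters. The element name, rater names, and standard codes below are placeholders, not the content of the blue document.

```python
# Hypothetical sketch of tallying individual tags into a table consensus.
from collections import Counter

individual_tags = {
    "Manages the learning environment": {
        "Rater A": ["InTASC 3", "CAEP 1.1"],
        "Rater B": ["InTASC 3"],
        "Rater C": ["InTASC 3", "CAEP 1.1"],
    },
}

for element, tags_by_rater in individual_tags.items():
    counts = Counter(tag for tags in tags_by_rater.values() for tag in tags)
    majority = len(tags_by_rater) / 2
    consensus = [tag for tag, n in counts.items() if n > majority]
    print(element, "->", consensus)   # tags chosen by more than half of the raters
```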


20 Content Validity: Using the Lawshe Method
As an individual, review and rate each element of the Designing and Planning Instruction section on the yellow document
Choose a table facilitator
Tally the individual ratings
Calculate the CVR value of each element:
CVR = (n_e - N/2) / (N/2), where n_e is the number of experts who chose "essential" and N is the total number of experts
The closer the CVR is to +1.0, the more essential the element
What CVR value does your group suggest as the minimum score for keeping an element?
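A minimal sketch of that CVR tally, assuming the usual three-level Lawshe rating scale (essential / useful but not essential / not necessary); the ratings below are invented and stand in for the yellow-document tally.

```python
# Minimal sketch of the Lawshe content validity ratio (CVR) for one rubric element.
def cvr(n_essential, n_experts):
    """CVR = (n_e - N/2) / (N/2); ranges from -1.0 to +1.0."""
    half = n_experts / 2
    return (n_essential - half) / half

# Invented ratings from a table of eight "experts" for a single element.
ratings = ["essential", "essential", "useful", "essential",
           "not necessary", "essential", "essential", "useful"]
n_e = ratings.count("essential")              # 6 of 8 rated the element essential
print(round(cvr(n_e, len(ratings)), 2))       # (6 - 4) / 4 = 0.5
```

A table can then compare each element's CVR with whatever minimum value the group agrees on for keeping an element.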


22 Content Validity
Discuss the elements that would be cut:
Why was the element rated as less than essential?
Can or should it be reworded to have a higher CVR value?
If so, how would you reword it?

23 Inter-Rater Reliability (Extent to which different assessors have consistent results)
Two raters observe the same lesson, in person or via a recorded lesson
Raters can be two university clinical educators, two P-12 clinical educators, or one university and one P-12 clinical educator
Each rater independently completes the observation form
Statistics are run to determine the amount of agreement between the two raters (SPSS or Excel)
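As one hedged example of that statistics step, the Python sketch below (rather than SPSS or Excel) computes simple percent agreement and Cohen's kappa for two raters scoring the same lesson on a 1-3 scale; the two rating lists are invented.

```python
# Sketch of two common agreement statistics for a pair of raters.
from collections import Counter

rater_1 = [3, 2, 3, 1, 2, 3, 2, 2, 3, 1]   # invented scores on ten criteria
rater_2 = [3, 2, 2, 1, 2, 3, 3, 2, 3, 1]

n = len(rater_1)
observed = sum(a == b for a, b in zip(rater_1, rater_2)) / n   # percent agreement

# Cohen's kappa discounts the agreement the raters would reach by chance alone.
c1, c2 = Counter(rater_1), Counter(rater_2)
expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2
kappa = (observed - expected) / (1 - expected)

print(f"Percent agreement: {observed:.2f}  Cohen's kappa: {kappa:.2f}")  # 0.80, 0.69
```

Percent agreement is the easier statistic to explain to faculty; kappa is the more conservative of the two because it corrects for chance agreement.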

24 Inter-Rater Reliability
Watch the Elementary Math Teaching video: https://www.youtube.com/watch?v=fZMbCENzaws&feature=youtu.be
Rate the teacher using the Learning Environment criteria on the blue sheet
Identify a partner at the table and compare your ratings
On which criteria were your ratings the same? Different?

25 Inter-Rater Reliability
Watch the High School Science Teaching video: https://youtu.be/tOWYMCmx_0c
Rate the teacher using the Instruction criteria on the pink sheet
Identify a partner at the table and compare your ratings
On which criteria were your ratings the same? Different?

26 Table Discussion
What would you do to increase inter-rater reliability?

27 Summary
Validity and reliability are directed at two basic questions about assessments:
Is the assessment useful? Does it provide constructive feedback to both candidates and the faculty?
Is the assessment fair? Does it provide feedback consistently and as intended?

28 What will you attempt on your own campus?
What additional information do you need?
What questions do you still have?

