
1 Overview of Types of Measures
Margaret Kasimatis, PhD, VP for Academic Planning & Effectiveness

2 Location in Assessment Cycle
Articulate mission/goals → Identify specific outcomes → Determine practices used to achieve outcomes → Gather evidence → Review & interpret results → Recommend actions

3 Measures Within the Context of Program Assessment
For program assessment, measurement tools should capture the student learning that occurs as a result of the program curriculum.
Many measurement tools can be used for multiple levels of assessment.
– But how you use them differs.
Some assessment tools are not appropriate for program-level assessment.

4 Categories of Measures
Direct Measures
– Look at student work products or performances that demonstrate the level of learning.
Indirect Measures
– Capture students’ perceptions of their learning and of the educational environment that supports learning.

5 Categories of Measures
Direct Measures
– Published, standardized tests (e.g., GRE Subject Test, ETS Major Field Test)
– Locally developed tests
– Systematic evaluation of student work (papers, presentations, creative work, performances); may or may not be embedded within courses, and usually involves scoring rubrics
Indirect Measures
– Published surveys
– Locally developed surveys and interviews
– Alumni surveys

6 Properties of Good Assessment Techniques
Reliable – internally consistent; consistent across raters (see the sketch below)
Valid – measures what it is supposed to measure; appropriate for the purpose
Actionable – results point reviewers toward challenges to address (and how to address them)
Efficient and cost-effective – reasonable in time and money
Interesting and meaningful – people care about the results and are willing to act on them
Convergent – multiple lines of evidence point to the same conclusion
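
The two senses of “reliable” on this slide can be checked numerically once scores are in hand. The following minimal Python sketch is not part of the original presentation; it uses two standard formulas, Cronbach’s alpha for internal consistency and simple percent agreement for consistency across raters, and all data and names in it are hypothetical.

import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Internal consistency: rows are students, columns are test or rubric items."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def exact_agreement(rater_a: np.ndarray, rater_b: np.ndarray) -> float:
    """Consistency across raters: share of samples given the same score by both raters."""
    return float(np.mean(rater_a == rater_b))

# Hypothetical data: 5 students x 4 items, and two raters scoring 5 papers on a 1-4 rubric.
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 3],
                   [1, 2, 2, 1],
                   [3, 3, 4, 4]])
print(round(cronbach_alpha(scores), 2))
print(exact_agreement(np.array([3, 2, 4, 1, 3]), np.array([3, 2, 4, 2, 3])))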

7 Evaluating Measures: Direct – Published Tests
Pros:
– Provide direct evidence of student mastery of content
– Some are designed specifically to assess major programs
– Generally highly reliable
– Validity established (within a specific context)
– Usually easy to implement and obtain results
– Norms/comparisons available
Cons:
– May be difficult to motivate students to perform at their best level
– May not align with program outcomes
– Often focus more on content knowledge than on higher-order skills
– Can be expensive

8 Evaluating Measures: Direct – Locally Developed Tests
Pros:
– Provide direct evidence of student mastery of content or skills
– More flexible in terms of content and format; easier to align with program outcomes
– If embedded in courses, student motivation is higher
– Faculty are more likely to be interested in and use the results
Cons:
– Likely to be less reliable
– Validity unknown
– Norms/comparisons not available
– Can take several iterations (and several years) to work out the “bugs”
– Scoring tests and tabulating results can be cumbersome

9 Evaluating Measures: Direct – Evaluation of Student Work
Pros:
– Provides direct evidence of student mastery of content or skills
– If embedded in a course, student motivation is higher
– Faculty are more likely to be interested in and use the results
– Data collection is usually unobtrusive to students
Cons:
– Requires time to develop, conduct training, and implement
– Creating a flexible rubric that is appropriate for program-level assessment can be tricky
– Validity unknown; takes time to establish reliability
– Requires faculty trust that the program, not individual instructors, will be assessed (see the sketch below)
– Norms/comparisons not available
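
To illustrate the last point about assessing the program rather than individual instructors, here is a small Python sketch, not from the presentation, that rolls rubric scores up to the program level: it reports, per learning outcome, the share of sampled work meeting a benchmark, and drops the section identifier from the report. The outcomes, course sections, scale, and benchmark are all invented for illustration.

from collections import defaultdict

# Each record: (learning outcome, course section, rubric score on a 1-4 scale).
# All names and numbers below are hypothetical.
ratings = [
    ("written communication", "ENG 301-01", 3),
    ("written communication", "ENG 301-02", 2),
    ("critical thinking",     "ENG 301-01", 4),
    ("critical thinking",     "ENG 301-02", 3),
    ("written communication", "ENG 301-02", 4),
]

BENCHMARK = 3  # assumed program target: 3 ("meets expectations") or higher

by_outcome = defaultdict(list)
for outcome, _section, score in ratings:  # the section identifier is dropped from the report
    by_outcome[outcome].append(score)

for outcome, outcome_scores in sorted(by_outcome.items()):
    pct = 100 * sum(s >= BENCHMARK for s in outcome_scores) / len(outcome_scores)
    print(f"{outcome}: {pct:.0f}% of sampled work at or above benchmark (n={len(outcome_scores)})")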

10 Evaluating Measures: Indirect – Published Surveys
Pros:
– Minimal effort to implement and tabulate results
– Can be administered to a large group of respondents
– Demonstrated reliability and validity
– Can address a variety of outcomes
– Norms/comparisons available
Cons:
– Provide indirect evidence of student learning
– May not be aligned with program outcomes
– Potential for biased results if the sample is not representative
– Can be expensive

11 Evaluating Measures: Indirect – Locally Developed Student Surveys & Interviews
Pros:
– Flexible in terms of content and format; easy to align with program outcomes
– Usually have face validity
– Can add open-ended questions that allow you to flesh out quantitative results (more actionable)
– Can be administered to a large group of respondents
– Can address a variety of outcomes
– Relatively easy to implement
Cons:
– Provide indirect evidence of student learning
– Validity depends on the quality of the questions and response options
– Potential for biased results if the sample is not representative
– Can be time-consuming to construct, implement, and tabulate results
– Norms/comparisons not available
– Open-ended responses can be difficult and time-consuming to analyze

12 Evaluating Measures: Indirect – Alumni Surveys
Pros:
– Same advantages as student surveys
– Can gather more “direct” evidence than current-student surveys (e.g., employment, enrollment in graduate programs)
– Can ask questions of alumni that are not appropriate for current students (e.g., the extent to which the program prepared them for their career)
Cons:
– Many of the same disadvantages as student surveys; it is particularly difficult to get a good response rate
– The timing can be tricky: alumni should be far enough out to see the impact of the program on their life and career, but not so far out that the program they experienced differs greatly from the current one

13 A Word on Embedded Assessment
Various types of measurement tools can be embedded within courses.
Only carefully constructed measures, used in certain types of courses, are appropriate for program-level assessment:
– They must go beyond individual course content.
– Some should occur in upper-level courses that are taken only after several other courses in the major.

14 QUESTIONS?

