
1 Student Evaluation of Teaching: What Do We Know? 2/26/10, Talley North Gallery
Co-sponsors: University Planning and Analysis; Evaluation of Teaching Committee; Office of Faculty Development
Panel:
– Gary Roberson, EOT Committee Chair, Panel Facilitator
– Karen Helm, Director, UPA
– Paul Umbach, Leadership, Policy & Adult and Higher Education
– Gerald Ponder, College of Education, Associate Dean

2 EOT Committee: Roberson, Gary; Miller-Cochran, Susan; Lemaster, Rick; Ames, Natalie; Franzon, Paul; Rabiei, Afsaneh; Moss, Christina; Bartlett, James; Sannes, Phil; Sremaniak, Laura; Emigh, Ted; Petherbridge, Donna; Bonto-Kane, Maria; Carter, Mike; Ambrose, John; Helm, Karen; Brown, Betsy

3 Current Work of EOT Committee
– Review of the ClassEval instrument
– Strategies for promotion of ClassEval

4 Introduction of UPA Staff: Karen Helm, Director; Kay Stewart Newman; Melissa House; Trey Standish; Lewis Carson; Nancy Whelchel

5 Agenda
– Gary Roberson, facilitator: introductions, agenda, and note card instructions
– Karen Helm: what we know about ClassEval
– Paul Umbach: what the research tells us about student evaluations
– Gerald Ponder: other types of evaluation of instruction; how to respond to issues raised by student evaluations
– All presenters: Q and A

6 Note cards
– Pink cards: questions for the panel. Please pass them to the outer aisle at any time during the presentation and discussion.
– Yellow cards: (1) suggested revisions to ClassEval; (2) suggestions for improving student participation in evaluation of teaching.

8 Myths & Biases in Students’ Evaluations of Teaching. Paul D. Umbach, Associate Professor, Leadership, Policy, and Adult and Higher Education

9 Common myths
– Students cannot consistently and accurately judge their instructor and instruction because they are immature, lack experience, and are capricious.
– Student ratings are based on nothing more than popularity, with friendly, humorous instructors getting the highest ratings.
– Harder courses requiring more effort are rated lower than easier courses.
– Students cannot make accurate judgments until they have distance from the course.

10 Common myths (continued)
– Time and day of the course affect student ratings.
– Students cannot contribute meaningfully to instructional improvement.
– Gender of the student is related to ratings.
– Student ratings are unreliable and invalid.
Based on the following reviews: Abrami, Leventhal, and Perry (1982); Cohen (1980); Feldman (1977, 1978, 1987, 1989a, 1989b, 2007); Levinson-Rose and Menges (1981); Marsh (1984, 1987, 2007); Marsh and Dunkin (1992).

11 In fact, most research suggests that students’ evaluations of teaching are:
– reliable and stable;
– primarily a function of the instructor rather than the course that is taught;
– relatively valid against a variety of indicators of effective teaching;
– relatively unaffected by a variety of variables hypothesized as potential biases.

12 Bias in students’ evaluations of teaching: “In essence, the question is whether a condition or influence actually affects teachers and their instruction, which is accurately reflected in students’ evaluations (a case of nonbias), or whether in some way this condition or influence only affects students’ attitudes toward the course or students’ perceptions of instructors (and their teaching) such that the evaluations do not accurately reflect the instruction that students receive (a case of bias)” (Feldman, 2007, p. 96).

13 In other words, “Bias exists when a student, teacher, or course characteristic affects the evaluations made, either positively or negatively, but is unrelated to any criteria of good teaching, such as increased learning” (Marsh, 2007, p. 350).

14 Potential bias: slightly higher ratings for…
– smaller classes (nonlinear)
– teachers of upper-level courses
– teachers of higher rank
– students in elective courses
– students in major courses
– student interest in the course
This might not indicate bias.

15 Potential bias (continued): modest or small correlations between grades and evaluations
– Usually between .10 and .30, whether the unit of analysis is the individual or the class (Feldman, 1976, 1977, 2007)
– The association need not be bias: a “validity effect” or a “student characteristics effect”
– Or it could reflect bias: attributional and retributional bias, or a grading leniency effect
– Disciplinary differences
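The unit-of-analysis point is easy to see with a quick simulation. The sketch below is illustrative only; the data, effect sizes, and variable names are invented, not drawn from Feldman's studies. It correlates simulated grades and ratings once across individual students and once across class means; aggregating to class means typically strengthens the correlation because it averages away student-level noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 40 classes of 30 students each. Each class has its own
# latent instructor effect, which nudges both grades and ratings upward.
n_classes, n_students = 40, 30
quality = rng.normal(0, 1, n_classes)
grades = quality[:, None] * 0.3 + rng.normal(0, 1, (n_classes, n_students))
ratings = quality[:, None] * 0.5 + rng.normal(0, 1, (n_classes, n_students))

# Individual-level correlation: pool every student observation.
r_individual = np.corrcoef(grades.ravel(), ratings.ravel())[0, 1]

# Class-level correlation: correlate class means instead.
r_class = np.corrcoef(grades.mean(axis=1), ratings.mean(axis=1))[0, 1]

print(f"individual-level r = {r_individual:.2f}")
print(f"class-level r      = {r_class:.2f}")
```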

16 A comment on research on, and the use of, SETs
– Should SETs be multidimensional? (Flaws in some previous research; formative and summative uses)
– Should personnel decisions rely on single global rating items, a single score representing a weighted average, or a profile of multiple components?
– Should institutions offer normative comparisons? Should they control for potential biases? Should they construct a normative comparison group for similar courses?
– Should we be concerned about possible non-response bias?
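To make the three reporting options concrete, here is a minimal sketch; the dimension names, scores, and weights are hypothetical, not taken from ClassEval or any cited study. It contrasts a single global item, one weighted composite score, and a multidimensional profile for one instructor.

```python
# Hypothetical ratings for one instructor on a 5-point scale.
scores = {"organization": 4.2, "rapport": 4.6, "workload": 3.8,
          "feedback": 4.0, "overall": 4.3}

# Option 1: a single global rating item.
global_item = scores["overall"]

# Option 2: one score from a weighted average (weights are invented).
weights = {"organization": 0.3, "rapport": 0.2, "workload": 0.2, "feedback": 0.3}
composite = sum(scores[dim] * w for dim, w in weights.items())

# Option 3: a profile that reports each component separately.
profile = {dim: s for dim, s in scores.items() if dim != "overall"}

print(global_item, round(composite, 2), profile)
```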

17 The total survey error framework (from Groves, Couper, Lepkowski, Singer, & Tourangeau, 2009, p. 49). On the measurement side, a construct is operationalized as a measurement, yields a response, and is edited, introducing validity, measurement error, and processing error. On the representation side, a target population is narrowed to a sampling frame, a sample, and respondents, with postsurvey adjustments, introducing coverage, sampling, nonresponse, and adjustment error. Both sides combine in the final survey statistic.

18 So Your Course Evaluations Aren’t So Hot… So What? And Now What?

19 So What?
– Means? Range/variability? Course history? Item analysis? Before you worry too much about your evals, examine how yours compare with the department and with the history of the course, and look at whether specific items can tell you anything.
– Consultation/mentoring about evaluation results: having a colleague help interpret and assign meaning helps.
– And…?
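As a rough illustration of that comparison (all numbers here are made up, not ClassEval data), a few lines are enough to place one section's mean against other sections of the same course:

```python
import statistics

# Hypothetical section means for the same course across the department.
my_mean = 3.9
dept_means = [4.3, 4.1, 3.7, 4.4, 3.8, 4.0, 4.2, 3.6, 4.5, 3.9]

mu = statistics.mean(dept_means)    # department average
sd = statistics.stdev(dept_means)   # spread across sections
z = (my_mean - mu) / sd             # how far this section sits from average
below = sum(m < my_mean for m in dept_means) / len(dept_means)

print(f"dept mean {mu:.2f} (sd {sd:.2f}); your z = {z:+.2f}; "
      f"{below:.0%} of sections rate below yours")
```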

20 And Now What?
– Relationships
– Respect
– Responsible expectations

21 And Now What? Needs assessment
– Relationships, respect, responsible expectations
– Who are they? What do they know? How do I adjust?

22 And Now What? How’s it going? Formative assessment of the course at 4 weeks (Seldin, 1997). Peter Seldin reports data showing that administering a course evaluation, or even asking students how things are going, at 4 weeks gives a good picture of what end-of-course evaluations will look like. It is also soon enough to correct any big problems in the course so that ratings at the end improve.

23 And Now What? Teach in cycles
– Review/quiz
– SLO (student learning outcome)
– Present in chunks
– Good models and examples
– CFU (check for understanding)/discuss

24 And Now What? Give shorter and more frequent tests/projects/performances. Shorter and more frequent tests give more valid results and seem less daunting.
– Review/quiz
– SLO
– Present in chunks
– Good models and examples
– CFU/discuss

25 And Now What? “Not yet” formative feedback: minute papers, short quizzes, practice assignments, revision to mastery as a course goal. Providing students with formative feedback that does not count toward a grade increases learning and gives students greater satisfaction and engagement in courses.

26 And Now What? Don’t forget active learning. Be “Fox-y”: the “Dr. Fox” studies of some decades ago showed that course instruction and student perceptions benefit greatly from energy, enthusiasm, expressiveness, and apparent organization.

27 Questions and Discussion

28 Send comments to: teach_learn@ncsu.edu

