Whose voice is not heard? Is there a non-response bias?
Richardson (2005) “It is therefore reasonable to assume that students who respond to feedback questionnaires will be systematically different from those who do not respond in their attitudes and experience of higher education.” (p. 406, emphasis added)
Layne et al. (1999) Statistically significant predictors of responding to electronic course evaluations:
–GPA
–class
–subject area
Dommeyer (2002) Statistically significant predictors of responding to electronic course evaluations:
–none!
Variables examined:
–gender
–expected grade
–rating of professor’s teaching
Thorpe (2002) Statistically significant predictors of responding to electronic course evaluations:
–final grade
–gender
–GPA
Avery et al. (2006) Statistically significant predictors of responding to electronic course evaluations:
–anticipated final grade
–gender
–race/ethnicity
–class size
Conclusion There is a fairly consistent, documented history of bias in response rates, resulting in some groups being under-represented
Are paper forms biased? Perhaps, but paper response rates are much higher, so any bias that exists is less problematic than with electronic forms, which yield far lower response rates
Are the averages different with fewer responses? Does an electronic format result in higher or lower overall average ratings?
Conclusion Some studies show that electronic evaluations result in higher overall averages, some lower, and some not statistically different from paper-based forms
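The mechanism behind these mixed results can be shown with a small, purely hypothetical calculation (the numbers below are invented for illustration and do not come from any of the cited studies): when one group of students responds at a higher rate than another, the observed average drifts toward that group's opinion, even though no individual rating has changed.

```python
# Hypothetical illustration of non-response bias (invented numbers,
# not taken from any cited study).
# Two groups of students in a 100-person class rate a course on a 1-5 scale.

n_sat, rating_sat, resp_sat = 60, 4.5, 0.80   # satisfied students
n_dis, rating_dis, resp_dis = 40, 2.0, 0.30   # dissatisfied students

# True average if every student responded:
true_mean = (n_sat * rating_sat + n_dis * rating_dis) / (n_sat + n_dis)

# Observed average when response rates differ between the groups:
responders_sat = n_sat * resp_sat   # 48 responses
responders_dis = n_dis * resp_dis   # 12 responses
observed_mean = (responders_sat * rating_sat + responders_dis * rating_dis) \
                / (responders_sat + responders_dis)

print(f"true mean:     {true_mean:.2f}")      # 3.50
print(f"observed mean: {observed_mean:.2f}")  # 4.00
```

Depending on which group under-responds, the same mechanism can push the observed average up or down, which is consistent with studies finding higher, lower, or statistically indistinguishable averages for electronic forms.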
Responses from Survey of Teaching Faculty February 4 - 13, 2009
Procedure
Wednesday, February 4: Survey opened; e-mail invitation sent to all teaching faculty
Monday, February 9: Reminder announcement in Senate
Wednesday, February 11: E-mail sent to all department chairs
Friday, February 13: Survey closed
Summary of Written Responses Faculty (even some who are in favor of online evaluations) say they are “worried” about the following:
–low response rates
–lack of security
–non-discrimination (all instructors get rated the same)
–biased sample (because of who might not respond)
Summary of Written Responses, cont. One person reported previous positive experience with online evaluations at another institution
Summary of Written Responses, cont. Some faculty who oppose online evaluations have had experience with the pilot project last summer, online course evaluations at previous institutions, or other online aspects of their courses. Faculty speaking from first-hand experience explicitly mentioned their concern about low response rates.
Summary of Written Responses, cont. Faculty are concerned about the emotional/mental state of students when completing evaluations online. They also worry about whether students might be influenced by others around them at the time.
Summary of Written Responses, cont. Overall, the language and tone of faculty opposed to online evaluations were far stronger and more emphatic than the (rather muffled) approval voiced by those in favor
Response Rates and Overall Experience
No summary data available
Anecdotal data (from the survey and personal conversations):
Percentage of faculty who participated in the pilot who are now in favor of online evaluations: 0%
Percentage of faculty who participated in the pilot who are now opposed to online evaluations: 100%
Data Sources
–Survey of teaching faculty
–Published, peer-reviewed literature
–Consultation with Patty Francis and Steve Johnson
–Anecdotal evidence from other institutions
–Local campus experience
Conclusions: Paper Forms Advantages:
–higher response rate, so a lower likelihood of bias in results
–more faculty are confident about obtaining valid results through this method
–controlled setting for administration
–students are familiar with the format
Conclusions: Paper Forms Disadvantages:
–time required to process forms
–delay in receiving results
–use of paper resources
=> Note that none of these disadvantages is related to the validity or accuracy of the data
Conclusions: Digital Forms Advantages:
–results could be delivered to faculty more quickly
–saves paper and some processing time
Conclusions: Digital Forms Disadvantages:
–lower response rate
–no good options for incentives
–greater likelihood of bias in results, with concerns about validity
–a majority of faculty have significant reservations
–concerns among both faculty and students about security/privacy
Conclusions: Digital Forms Disadvantages, cont.:
–questions about faculty being able to opt out
–questions about students being able to opt out
–student responses can be posted online for others to see
One Final Consideration SPI data are currently used to evaluate faculty for:
–merit pay
–contract renewal
–tenure/continuing appointment
–promotion
–performance awards
=> If faculty lack confidence in the integrity and accuracy of course evaluation data, any decisions that are made on the basis of these data are likely to be questioned in a way that we believe is unhealthy for our institution.
Recommendation #1 All course evaluations should be administered using paper forms. We believe the current consensus among faculty and students will shift at some point toward favoring an electronic format. But we are not nearly there yet.
Recommendation #2 Electronic course evaluations should not even be an option. Aggregated results cannot be interpreted meaningfully (especially if differential incentives are offered). EXCEPTION: Distance-learning courses
Recommendation #3 Since significant staff hours are needed to process course evaluation forms for our campus, the College Senate should advocate strongly for allocating additional (seasonal) help to process these forms.
Stay tuned... for Part II of our recommendations regarding changes to the form used for course evaluation.