1
Mark Troy – Data and Research Services – metroy@tamu.edu
2
How low are they? How concerned should faculty and administration be about low rates? Can response rates be improved?
4
PICA average
◦ 50%-60%
Paper average
◦ 70%-80%
100% response rate
◦ Fewer than 10% of courses, whether evaluated with paper forms or PICA, achieve a 100% response rate.
Extremely low response rate
◦ Fewer than 10% of courses, whether evaluated with paper forms or PICA, have response rates lower than 10%.
7
Non-response bias
◦ The bias that arises when students who do not respond have different course experiences than those who do respond.
Random effects
◦ If the response rate is low, the mean is susceptible to the influence of extreme scores, either high or low.
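As a rough illustration of the random-effects point, here is a minimal simulation sketch (the class composition and numbers are hypothetical, not data from the presentation) showing how much sample means can wander as the number of responses shrinks:

```python
# Illustrative sketch: sampling variability of the mean at different
# response counts for one hypothetical 25-student class.
import random

random.seed(1)
ratings = [5]*10 + [4]*8 + [3]*4 + [2]*2 + [1]*1   # hypothetical class

for n in (25, 12, 5):                               # 100%, ~50%, 20% response
    means = [sum(random.sample(ratings, n)) / n for _ in range(1000)]
    print(n, round(min(means), 2), round(max(means), 2))
# With all 25 responses the mean is fixed; with only 5 it ranges widely,
# so a few extreme scores can pull it far in either direction.
```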
8
Common worries about online evaluations take several (mutually contradictory) forms:
"Only those who really like you and those who really hate you will do the evaluations."
"You will get responses only from the students who like you."
"Only the students who hate you will bother to respond."
"The students who really hate you will choose not to play the game at all."
9
If only those who like you and those who hate you respond, the mean might not change (assuming equal numbers in both groups), but the shape of the distribution and the variance of the scores would change.
If only those who like you respond, the mean should increase, and the distribution and variance would change.
If only those who hate you respond, the mean should decrease, and the distribution and variance would change.
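These scenarios are easy to check numerically. A small sketch, using hypothetical scores for the three groups (none of this is the presentation's data):

```python
# Compare mean and variance when everyone responds vs. only selected
# groups, mirroring the three selective-response scenarios above.
from statistics import mean, pvariance

likers = [5, 5, 4, 5, 4]            # students who really like the course
haters = [1, 2, 1, 1, 2]            # students who really dislike it
middle = [3, 4, 3, 4, 3, 3, 4, 3]   # everyone else

scenarios = {
    "everyone":        likers + haters + middle,
    "likers + haters": likers + haters,   # mean similar, variance jumps
    "likers only":     likers,            # mean rises
    "haters only":     haters,            # mean falls
}
for label, scores in scenarios.items():
    print(label, round(mean(scores), 2), round(pvariance(scores), 2))
```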
10
No significant mean differences were observed. An upward trend is visible in the chart, but it began before the switch from paper to PICA.
11
Chart: Paper vs. PICA. No mean differences were observed, and the standard deviations did not change significantly after the switch to PICA.
12
Chart: responses to "Overall, this was a good course." Scale: Strongly agree, Agree, Undecided, Disagree, Strongly disagree; vertical axis shows percent of students responding.
13
Chart: Fall 2011 (paper) vs. Spring 2012 (PICA).
14
A comparison of courses taught by the same instructor in consecutive semesters showed that half of the ratings went up after the switch to PICA and half went down. Overall, there was no significant difference between the means of paper evaluations and PICA evaluations. Significant mean differences were observed in 3 of 70 courses, which is about what would be expected by chance alone. All three significant differences were in courses with PICA response rates below 30%.
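The "expected by chance alone" remark checks out with a quick back-of-the-envelope calculation, assuming roughly independent tests at the conventional alpha = 0.05 (the presentation does not state the alpha level explicitly):

```python
# With 70 tests at alpha = 0.05, how many "significant" differences
# would chance alone produce?
from scipy.stats import binom

n_tests, alpha = 70, 0.05
print(n_tests * alpha)                   # expected false positives: 3.5
print(1 - binom.cdf(2, n_tests, alpha))  # P(3 or more by chance) ~ 0.68
```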
15
The correlation between response rate and class size for paper administration was r = -.161 (Fall 2009).
The correlation between response rate and class size for PICA administration was r = -.056 (Fall 2009).
Conclusion: although the relationship between class size and response rate is weak, the tendency for response rates to decrease as class size increases is greater with paper administration than with PICA.
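For reference, a Pearson correlation like the ones quoted above can be computed directly. This sketch uses made-up course data purely for illustration, not the Fall 2009 figures:

```python
# Pearson correlation between class size and response rate
# (hypothetical data).
import numpy as np

class_size    = np.array([15, 30, 45, 80, 120, 200])
response_rate = np.array([0.82, 0.75, 0.71, 0.66, 0.62, 0.58])

r = np.corrcoef(class_size, response_rate)[0, 1]
print(round(r, 3))   # negative: larger classes, lower response rates
```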
16
Most differences between means can be attributed to chance variation, not to the method of evaluation. Response rate does not appear to have an impact on means.
EXCEPTION: Extreme scores, whether high or low, exert greater influence on the mean when the number of responses is low.
◦ This is equally true for a 100% response rate from a class of ten as for a 10% response rate from a class of 100.
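The exception concerns the count of responses, not the rate; both cases above yield ten responses. A tiny worked example with hypothetical ratings:

```python
# One extreme score moves a 10-response mean far more than a
# 100-response mean, regardless of the underlying response rate.
typical, extreme = 4.3, 1.0

for n in (10, 100):
    scores = [typical] * (n - 1) + [extreme]
    print(n, round(sum(scores) / n, 2))   # 10 -> 3.97, 100 -> 4.27
```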
17
1. Non-response bias is a systematic effect that, if present, would appear as a difference in mean ratings, an increase in the variance of the ratings, or a change in the distributions of the scores. Such differences are not seen (although that does not mean non-response bias does not exist).
2. Random effects: in any evaluation, some ratings will be higher than others and some will be lower. The more responses there are, the less influence the extremes, either high or low, have on the mean. A low response rate allows an unusually high or low rating to pull the average in its direction, so higher response rates are better than lower ones. These random effects are often seen with low response rates.
18
YES. The average response rate per department tends to increase over time as faculty and students become familiar with the system, and there are strategies faculty can use to increase response rates.
19
Motivation
◦ Students need to be motivated to do online evaluations.
◦ Students do not need to be motivated to do paper evaluations.
Attendance
◦ Students need to be in class to do paper evaluations.
◦ Students do not need to be in class to do online evaluations.
20
Research on student evaluations in higher education shows:
◦ Students' most desired outcome from evaluations is improvement in teaching.
◦ Their second most desired outcome is improvement in course content.
◦ Students' motivation to submit evaluations depends heavily on whether they expect faculty and administration to pay attention to the results.
The bottom line is that students submit evaluations if they believe their opinions will be valued and considered.
21
A survey was conducted of a random sample of students in classes that used PICA in Fall 2008: 95 students submitted an appraisal online (Response) and 94 students did not submit one although requested to do so (Non-response).
22
Receiving an invitation from MARS to do the evaluation has no impact on the response rate. By a factor of 3 to 1, students are more likely to submit an evaluation:
◦ If the request comes from their instructor.
◦ If their instructor discusses the importance of the evaluation.
◦ If their instructor explains how the evaluations have been used, or will be used, to improve the course.
Incentives are less effective than discussing the importance and the use of the evaluations.
23
A survey of students who had not submitted evaluations online in PICA, though requested to do so, found the following reasons for not submitting:
◦ Forgot (48%)
◦ Missed the deadline (26%)
◦ No other reason was given by more than 10% of students.
The bottom line is that frequent reminders are important.
24
Texas A&M faculty have the opportunity to conduct mid-term evaluations (actually fifth-week evaluations) in PICA. The purpose of the mid-terms is to provide formative information for improving the course at a point when changes are still possible. An examination of end-of-term response rates shows that courses with a mid-term evaluation average a 10% higher response rate on the end-of-term evaluation than other courses. The likely explanation is that faculty who do mid-terms can demonstrate to students that their opinions are considered.
25
Incentives appear to be a less effective motivator than the intrinsic motivation to help improve teaching and the course. A token incentive, however, can reinforce intrinsic motivation by communicating the importance of the evaluation.
26
The Faculty Senate (Resolution FS 27.122) opposes granting academic credit for completing evaluations because such credit is not related to any learning outcome, so two students who learned the same amount could receive different grades. Some faculty instead offer a token incentive to the entire class if a threshold response rate is reached, communicating the importance of the evaluation without disadvantaging students who choose not to complete one.
27
Use mid-term evaluations to help students see how to give feedback and what feedback is useful.
Show students that their feedback is acted upon in a positive way.
Discuss the importance of the evaluation and how the results are used to improve the course and your teaching.
Remind students frequently; reminders are most effective when they come from the faculty member.