Mark Troy – Data and Research Services –

- How low are they?
- How concerned should faculty and administration be about low rates?
- Can response rates be improved?


- PICA average: 50%-60%
- Paper average: 70%-80%
- 100% response rate: fewer than 10% of courses, whether evaluated with paper forms or PICA, achieve 100% responses.
- Extremely low response rate: fewer than 10% of courses, whether evaluated with paper forms or PICA, have response rates lower than 10%.


- Non-response bias: the bias that arises when students who do not respond have different course experiences than those who do respond.
- Random effects: if the response rate is low, the mean is susceptible to the influence of extreme scores, either high or low.

- Only those who really like you and those who really hate you will do the evaluations.
- You will get responses only from the students who like you.
- Only the students who hate you will bother to respond.
- The students who really hate you will choose not to play the game at all.

- If only those who like you and those who hate you respond, the means might not change (assuming equal numbers in both groups), but the shape of the distribution and the variance of the scores would change.
- If only those who like you respond, the mean should increase, and the distribution and variance would change.
- If only those who hate you respond, the mean should decrease, and the distribution and variance would change.
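These three scenarios can be illustrated with a small sketch using made-up ratings on a 1-5 scale (the numbers are hypothetical, not from the study; they are chosen so that likers and haters are equal in number, matching the assumption above):

```python
import statistics

# Hypothetical full-class ratings on a 1-5 scale (illustrative only).
all_ratings = [1, 2, 2, 3, 3, 3, 3, 3, 4, 4, 5]

likers = [r for r in all_ratings if r >= 4]   # only students who liked the course respond
haters = [r for r in all_ratings if r <= 2]   # only students who disliked it respond
extremes = likers + haters                    # both extremes respond; the middle stays silent

# Full class: mean 3.0, modest spread.
print(statistics.mean(all_ratings), statistics.pstdev(all_ratings))
# Only likers: mean shifts up.
print(statistics.mean(likers))
# Only haters: mean shifts down.
print(statistics.mean(haters))
# Both extremes (equal numbers): mean unchanged, but variance grows.
print(statistics.mean(extremes), statistics.pstdev(extremes))
```

With equal numbers of likers and haters, the mean of the extremes matches the full-class mean, but the standard deviation is larger because the middle of the distribution has dropped out — exactly the signature the first scenario predicts.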

No significant mean differences were observed. An upward trend is visible in the chart, but it began before the switch from paper to PICA.

(Chart: paper vs. PICA.) No mean differences were observed. The standard deviations did not change significantly after the switch to PICA.

(Chart: percent of students responding Strongly agree, Agree, Undecided, Disagree, and Strongly disagree to "Overall, this was a good course.")

(Charts: Fall paper vs. Spring PICA.)

- A comparison of courses taught by the same instructor in consecutive semesters showed that half of the ratings went up after the switch to PICA and half went down.
- Overall, there was no significant difference in means between paper and PICA evaluations.
- Significant mean differences were observed in 3 of 70 courses, which is about what would be expected by chance alone.
- All three significant differences occurred in courses with less than a 30% response rate in PICA.
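The "expected by chance alone" claim is simple arithmetic, assuming the conventional alpha = 0.05 significance level (the slides do not state the threshold used):

```python
# At a significance threshold of alpha, about alpha * (number of tests)
# comparisons will look "significant" even if nothing really changed.
alpha = 0.05   # conventional threshold; an assumption, not stated on the slide
courses = 70

expected_by_chance = alpha * courses
print(expected_by_chance)  # roughly 3.5 false positives expected
```

Observing 3 significant differences out of 70 comparisons is therefore entirely consistent with no real effect.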

- The correlation between response rate and class size for paper administration (Fall 2009):
- The correlation between response rate and class size for PICA administration (Fall 2009):
- Conclusion: although the relationship between class size and response rate is weak, response rates tend to decrease more as class size increases with paper administration than with PICA.

- Most differences between means can be attributed to chance variation, not to the method of evaluation.
- Response rate does not appear to have an impact on means.
- Exception: extreme scores, whether high or low, exert greater influence on the mean when the number of responses is low. This is equally true for a 100% response rate from a class of ten as for a 10% response rate from a class of 100.

1. Non-response bias is a systematic effect that, if present, would appear as a difference in mean ratings, an increase in the variance of the ratings, or a change in the distribution of the scores. Such differences are not seen (although that does not mean non-response bias does not exist).
2. Random effects: in any evaluation, some ratings will be higher and some lower. The more responses there are, the less influence the extremes, high or low, have on the mean. A low response rate allows an unusually high or low rating to pull the average in its direction, so higher response rates are better than lower ones. These random effects are often seen with low response rates.
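The random-effects point can be demonstrated with a short simulation (a sketch using a synthetic ratings population, not the study's data): means of small samples scatter much more widely than means of large samples drawn from the same pool.

```python
import random
import statistics

random.seed(42)

# Hypothetical population of 1-5 ratings (illustrative only).
population = [random.choice([1, 3, 4, 4, 5, 5]) for _ in range(10_000)]

def spread_of_sample_means(n, trials=2_000):
    """Standard deviation of the sample mean across repeated samples of size n."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(trials)]
    return statistics.stdev(means)

small = spread_of_sample_means(10)    # e.g. 10 responses: 10% of a class of 100
large = spread_of_sample_means(100)   # e.g. 100 responses: the whole class
print(small, large)
```

The spread of means for n = 10 comes out several times larger than for n = 100, which is why a single extreme rating can drag a low-response-rate average noticeably in either direction.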

- Yes.
- The average response rate per department tends to increase over time as faculty and students become familiar with the system.
- There are strategies faculty can use to increase response rates.

- Motivation: students need to be motivated to do online evaluations; they do not need to be motivated to do paper evaluations.
- Attendance: students need to be in class to do paper evaluations; they do not need to be in class to do online evaluations.

- Research on student evaluations in higher education shows:
  - Students' most desired outcome from evaluations is improvement in teaching.
  - Their second most desired outcome is improvement in course content.
  - Students' motivation to submit evaluations depends heavily on their expectation that faculty and administration pay attention to the evaluations.
- The bottom line: students submit evaluations if they believe their opinions will be valued and considered.

Survey conducted of a random sample of students in fall classes that used PICA. Students who submitted an appraisal online (Response) were compared with 94 students who did not submit an appraisal although requested to do so (Non-response).

- Receiving an invitation from MARS to do the evaluation has no impact on the response rate.
- By a factor of 3 to 1, students are more likely to submit an evaluation:
  - if the request comes from their instructor,
  - if their instructor discusses the importance of the evaluation, or
  - if their instructor tells how the evaluations have been or will be used to improve the course.
- Incentives are less effective than discussing the importance and the use of the evaluations.

- A survey of students who had not submitted evaluations online in PICA, though requested to do so, found the following reasons for not submitting:
  - Forgot: 48%
  - Missed the deadline: 26%
  - No other reason was given by more than 10%.
- The bottom line: frequent reminders are important.

- Texas A&M faculty have the opportunity to conduct mid-term evaluations (actually fifth-week evaluations) in PICA.
- The purpose of the mid-terms is to provide formative information for improving the course at a point when changes are still possible.
- An examination of end-of-term response rates shows that courses with a mid-term evaluation average a 10% higher response rate on end-of-term evaluations than other courses.
- The likely explanation is that faculty who do mid-terms can demonstrate to students that their opinions are considered.

- Incentives appear to be a less effective motivator than the intrinsic motivation to help improve teaching and the course.
- A token incentive, however, can reinforce intrinsic motivation by communicating the importance of the evaluation.

- The Faculty Senate (Resolution FS) opposes granting academic credit for evaluations: such credit is not tied to any learning outcome, so two students who learned the same amount could receive different grades.
- Some faculty offer a token incentive to the entire class if a response-rate threshold is reached, which communicates the importance of the evaluation without disadvantaging a student who chooses not to do an evaluation.

- Use mid-term evaluations to help students see how to give feedback and which feedback is useful.
- Show students that their feedback is acted upon in a positive way.
- Discuss the importance of the evaluation and the use of the results in improving the course and your teaching.
- Remind students frequently; reminders are most effective when they come from the faculty.