Main components of effective teaching

SRI validity
A measuring instrument is valid if it measures what it is supposed to measure. SRIs measure student opinions about/perceptions of effective teaching and their satisfaction with instruction (CIOS: Course/Instructor Opinion Survey). By that definition, SRIs are valid: they do measure student satisfaction with instruction.

However, in my talk I have shown that student satisfaction with instruction (ratings on the Overall Teaching item) correlates to a moderate-to-high extent with:
1. Student learning
2. The conceptual structure/main general behaviors of effective teaching
3. Other measures of effective teaching (i.e., ratings by alumni, peers, experts, trained observers, and self-ratings)
And that:
4. There are almost no proven factors that bias SRI validity

Factors controllable by the teacher, found not to bias validity:
1. Expressiveness/enthusiasm
2. Course (perceived) difficulty/overload
3. Course grades

Illustration: no relationship between ratings on the course difficulty item and ratings on the Overall Teaching item. TAU, all university undergraduate lecture courses (n = 1,770), Fall Semester 2007: r = -.02.
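
A minimal sketch of how such a correlation check can be computed, using simulated data (the array names and values below are hypothetical stand-ins, not the TAU data set):

import numpy as np

# Hypothetical per-course mean ratings for the two SRI items, simulated
# as independent draws so that, as in the TAU result, no relationship exists.
rng = np.random.default_rng(seed=1)
difficulty = rng.normal(loc=3.0, scale=0.6, size=1770)  # "course difficulty" item
overall = rng.normal(loc=4.0, scale=0.5, size=1770)     # "Overall Teaching" item

# Pearson correlation between the two items; a value near zero
# (like the reported r = -.02) indicates no linear relationship.
r = np.corrcoef(difficulty, overall)[0, 1]
print(f"r = {r:.2f}")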

Risks with factors controllable by the teacher: faculty may manipulate these factors in the hope of getting higher ratings, by entertaining students, lowering course demands, inflating grades, and altogether watering down the course. There is evidence that this already happens.

Factors uncontrollable by the teacher, found not to bias validity:
1. Class size (smaller classes are rated higher)
2. Discipline ("soft" disciplines are rated higher)
3. Student GPA (students with higher GPAs tend to give higher ratings)
4. Student motivation (elective courses tend to be rated higher)
5. Student level of studies (graduate courses tend to be rated higher)
6. A variety of extraneous factors

Extraneous factors (which should not affect teaching or student learning) that have been studied in attempts to discredit SRIs:
Faculty: academic rank, age, gender, years of teaching experience, personal characteristics (other than enthusiasm or caring), physical attractiveness, speaking with an Asian accent, research productivity
Students: age, gender, personality
Course: the time of day it is offered, length of class meetings, number of rows in the classroom

Conclusion (Marsh, 2007): SRIs are valid. They correlate only minimally with extraneous variables but correlate reasonably with effective teaching behaviors and with student learning. SRIs are relatively valid against a variety of indicators of effective teaching and relatively unaffected by a variety of variables hypothesized as potential biases.

SRI reliability
Reliability is how consistently an instrument measures whatever it measures. SRI reliability refers to the stability and consistency of the measurement data.

Consistency: repeated measurements should give almost the same result. The original slide illustrates this by measuring the same item in two semesters.
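
A minimal numeric sketch of such a consistency check (the ratings below are hypothetical; a real analysis would use the actual per-instructor means from the two semesters):

import numpy as np

# Hypothetical mean ratings on the same item for six instructors,
# collected in two consecutive semesters.
fall = np.array([4.2, 3.8, 4.5, 3.1, 4.0, 3.6])
spring = np.array([4.1, 3.9, 4.4, 3.3, 4.0, 3.5])

# A test-retest correlation close to 1 means repeated measurements
# rank the instructors almost identically.
r = np.corrcoef(fall, spring)[0, 1]
print(f"consistency (test-retest) r = {r:.2f}")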

Stability: measuring the same item at different times and with different students yields stable, consistent results. The original slide illustrates this with a teacher's ratings profile.
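
One way to make the profile idea concrete (a hedged sketch with hypothetical numbers; Marsh & Bailey's own analysis was more elaborate): correlate one instructor's per-item ratings across two years, so the correlation runs across items within an instructor rather than across instructors within an item.

import numpy as np

# Hypothetical ratings profile for one instructor: mean rating per SRI item
# (say: organization, enthusiasm, rapport, feedback, workload), collected
# in two different years from different groups of students.
profile_year1 = np.array([4.5, 4.8, 3.9, 3.5, 4.1])
profile_year2 = np.array([4.4, 4.9, 4.0, 3.4, 4.2])

# A high correlation between the two profiles means the instructor's
# relative strengths and weaknesses are stable over time.
r = np.corrcoef(profile_year1, profile_year2)[0, 1]
print(f"profile stability r = {r:.2f}")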

Conclusion by Marsh & Bailey (1993): instructors appear to have distinct profiles of strengths and weaknesses that are highly consistent across sets of ratings obtained for the same instructor over a 13-year period and across undergraduate- and graduate-level courses.

Conclusion: SRIs are valid and reliable to a large extent