
1 Best Practices in Faculty Evaluation and Professional Enrichment A presentation for faculty and administrators at Sinclair Community College October 21, 2011 Michael Theall, Ph.D. M. Theall, SCC 10-21-11 (c)

2 Basic Premise: Evaluation without development is punitive; development without evaluation is guesswork.

3 Basic Premise: Development and evaluation systems will not be complete until they are based on an understanding of the work that faculty are expected to do, and the skills that are required to do that work successfully!

4 Basic Premise: ALL DEVELOPMENT AND EVALUATION ARE LOCAL!

5

6 8 Steps to Develop a Comprehensive Faculty Evaluation System (Arreola, 2007) Determine: the faculty role model; the faculty role model parameter values; the definitions of the roles; the role component weights; the appropriate sources of information; the source weights; the data gathering methods; the design & selection of forms.
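Arreola's weighting scheme (role component weights combined with source weights) reduces to a two-level weighted average. A minimal sketch in Python, with entirely hypothetical weights, sources, and ratings; an actual system would derive all of these through the eight steps above:

```python
# Hypothetical role weights (should sum to 1); illustrative only.
role_weights = {"teaching": 0.60, "scholarship": 0.25, "service": 0.15}

# For each role: source -> (source weight, rating on a 0-4 scale).
# Sources and numbers are invented for the sketch.
sources = {
    "teaching": {"students": (0.50, 3.4), "peers": (0.30, 3.0), "self": (0.20, 3.8)},
    "scholarship": {"peers": (0.70, 2.9), "self": (0.30, 3.2)},
    "service": {"administrator": (1.00, 3.5)},
}

def composite(role_weights, sources):
    """Overall rating = sum over roles of (role weight x source-weighted role score)."""
    total = 0.0
    for role, weight in role_weights.items():
        role_score = sum(w * r for w, r in sources[role].values())
        total += weight * role_score
    return total
```

With these particular numbers the composite works out to about 3.29 on the 0-4 scale; the point of the eight steps is that the weights, not the arithmetic, carry the institutional decisions.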

7 Source – Impact Matrix (Arreola, 2007)

8 8 Steps to Develop a Comprehensive Professional Enrichment System (Theall, 2008) Determine: the needs for enrichment functions; the principal clients of services; the configuration & location of programs; the allocation of resources; the intended impact of programs; the connections to other campus programs; the leadership structure; the processes to create & implement programs.

9 Function – Client Matrix (Theall, 2008)

10 Sources of data: Student Ratings; Peer Evaluation; Administrator Evaluation; Self-Evaluation; External Expert Evaluation; Alumni Ratings; Teaching Awards & SoTL; Media Documentation (videos); Employer Opinions of Graduates; Student Learning.

11 Berk’s “MULTISOURCE” evaluation array: Professor; Colleagues/Peers; Self; Students; Dept. Head; Admin. Asst.; External; Others.

12 What information do stakeholders need? EVALUATION INFORMATION MATRIX: DEVELOPING A SYNERGY FOR IMPROVED PRACTICE

13

14 BASIC ISSUES affecting data decisions: Reliability; Validity; Generalizability; Feasibility; Skulduggery.

15 Evaluation Purposes and Data: FORMATIVE = for improvement & revision; SUMMATIVE = for making decisions about merit & worth; INSTRUMENTAL = process & activities; CONSEQUENTIAL = outcomes & effects.

16 Research Findings: Student ratings research; Research on other sources of evaluation data; Berk’s 13 sources of evidence for evaluating teaching.

17 Ratings are: Multidimensional; Reliable and Stable; Primarily a function of the instructor; Relatively valid as evidence of effective teaching; Relatively unaffected by a number of variables posed as biases; Useful as teaching feedback. (Marsh, 1987, 2007)

18 Uses of Ratings Data: IMPROVING INSTRUCTION; UNDERSTANDING CONTEXT & OUTCOMES; RESEARCH / SoTL; DECISION-MAKING.

19 SEVEN MYTHS ABOUT STUDENT RATINGS 1. Students are not qualified to make judgments about teaching competence. 1 = agree 2 = disagree 3 = unsure

20 SEVEN MYTHS ABOUT STUDENT RATINGS 1. Students are not qualified to make judgments about teaching competence. 1. Students are qualified to rate (report) certain dimensions of teaching.

21 SEVEN MYTHS ABOUT STUDENT RATINGS 2. Student ratings are simply popularity contests. 1 = agree 2 = disagree 3 = unsure

22 SEVEN MYTHS ABOUT STUDENT RATINGS 2. Student ratings are simply popularity contests. 2. Students do discriminate among dimensions of teaching and do not judge solely on the personal popularity of instructors. (PS: Popularity is not necessarily a bad thing)

23 SEVEN MYTHS ABOUT STUDENT RATINGS 3. Students are not able to make accurate judgments until after they have been away from the course for several years. 1 = agree 2 = disagree 3 = unsure

24 SEVEN MYTHS ABOUT STUDENT RATINGS 3. Students are not able to make accurate judgments until after they have been away from the course for several years. 3. Ratings by current students are highly correlated with their later ratings and with ratings of the same instructors by other students.

25 SEVEN MYTHS ABOUT STUDENT RATINGS 4. Student ratings are unreliable. 1 = agree 2 = disagree 3 = unsure

26 SEVEN MYTHS ABOUT STUDENT RATINGS 4. Student ratings are unreliable. 4. Student ratings are reliable in terms of both agreement (similarity among students rating a course and the instructor) and stability (the extent to which the same student rates the course and the instructor similarly at two different times).

27 SEVEN MYTHS ABOUT STUDENT RATINGS 5. Student ratings are invalid. 1 = agree 2 = disagree 3 = unsure

28 SEVEN MYTHS ABOUT STUDENT RATINGS 5. Student ratings are invalid. 5. Student ratings are valid, as measured against a number of criteria, particularly students' learning.

29 SEVEN MYTHS ABOUT STUDENT RATINGS 6. Students rate instructors on the basis of the grades they receive. 1 = agree 2 = disagree 3 = unsure

30 SEVEN MYTHS ABOUT STUDENT RATINGS 6. Students rate instructors on the basis of the grades they receive. 6. Student ratings are not unduly influenced by the grades students receive or expect to receive.

31 SEVEN MYTHS ABOUT STUDENT RATINGS 7. Extraneous variables and conditions affect student ratings. 1 = agree 2 = disagree 3 = unsure

32 SEVEN MYTHS ABOUT STUDENT RATINGS 7. Extraneous variables and conditions affect student ratings. 7. Student ratings are not unduly affected by such factors as student characteristics, course characteristics, and teacher characteristics.

33 Other relationship findings: Class size: slight negative (curvilinear); Prior interest in subject: positive; Elective vs. required courses: more positive for electives; Expected grade: slight positive (r = .20); Work/difficulty: slight positive (curvilinear); Course level: slight positive for upper division & graduate; Rater anonymity: more positive if violated.

34 Other relationship findings: Purpose of eval: more positive if manipulated; Instructor rank: none; Teacher/student gender: none; Teacher ethnicity/race: none; Research productivity: none; Student locus & performance attributions: none; Student personality: none.

35 Research on other sources of data Peer Evaluation Best for teacher knowledge, certain course or curricular issues, assessment issues, and currency/accuracy of content (esp. when used along with student ratings). If focused on teaching, less reliable and higher on average than student ratings. Administrator Evaluation Necessary as part of the process, but same problems as peers on teaching (criteria, process, instruments, validation, etc.)

36 Research on other sources of data Self-Evaluation Provides the most complete picture of teacher thinking & instructional decisions/practices, but difficult to reliably interpret & use. External Expert Evaluation Useful, but requires process cautions and careful use/interpretation; having a purpose is important.

37 Research on other sources of data Alumni Ratings Can be useful, but generally the same as student ratings given the same instrument; can shed light on teaching in terms of content, process, or curricular issues for formative purposes. Media Documentation (videos) Excellent for formative purposes; unambiguous if used carefully to assess low-inference behaviors; guidelines are needed for use by anyone beyond the teacher.

38 Research on other sources of data Teaching Awards & SoTL Awards unreliable due to lack of standard criteria & decision processes; SoTL valid and important IF recognized within dept/college/univ. Employer Opinions of Graduates Limited use; better for program evaluation & curricular issues

39 Research on other sources of data Student Learning Outcomes Useful for formative (individual) or program purposes (if aggregated for assessment); not recommended for summative decisions. Test scores are not as reliable as ratings from a validated instrument. Criteria vary considerably (e.g., What does “All her students got ‘A’s” mean?)

40 Statistical analysis possibilities for reports of results: Descriptive statistics (item distributions in # and %); Central tendency (mean, mode, median): 1 2 3 3 4 5 (3, 3, 3); 1 1 2 3 4 5 5 (3, 1 & 5, 3); 1 2 3 4 5 5 (3.33, 5, 3.5); 1 2 3 4 5 5 5 (3.57, 5, 4); Standard deviations (sampling error); Enrolled/responded #s and ratio.
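The central-tendency examples above can be reproduced with Python's standard statistics module; multimode reports both modes for the bimodal second set:

```python
from statistics import mean, median, multimode

# The four rating distributions from the slide; for each, the slide
# reports (mean, mode, median).
samples = [
    [1, 2, 3, 3, 4, 5],     # (3, 3, 3)
    [1, 1, 2, 3, 4, 5, 5],  # (3, 1 & 5, 3)
    [1, 2, 3, 4, 5, 5],     # (3.33, 5, 3.5)
    [1, 2, 3, 4, 5, 5, 5],  # (3.57, 5, 4)
]
for s in samples:
    print(round(mean(s), 2), multimode(s), median(s))
```

Note how adding a single extra 5 moves the mean and median while the mode stays put, which is exactly why a report should show more than one measure.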

41 SAMPLING CRITERIA
Class Size | Minimum acceptable response*
5-20       | at least 80%
20-30      | at least 75%
30-50      | at least 60% (75% or more recommended)
50-100     | at least 50% (66% or more recommended)
>100       | more than 50%
*providing there is no systematic reason for absence or non-responding which might bias response.
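The table above can be encoded as a simple adequacy check. A sketch with assumptions: the function names are invented, the overlapping range boundaries (a class of exactly 20, 30, 50, or 100 sits in two rows) are resolved toward the stricter row, and the ">100" row is treated as a 50% minimum:

```python
def minimum_response_rate(class_size):
    # Thresholds from the sampling-criteria table; overlapping
    # boundaries (20, 30, 50, 100) resolve to the stricter row.
    if class_size <= 20:
        return 0.80
    if class_size <= 30:
        return 0.75
    if class_size <= 50:
        return 0.60  # 75% or more recommended
    if class_size <= 100:
        return 0.50  # 66% or more recommended
    return 0.50      # ">100: more than 50%"

def sample_adequate(enrolled, responded):
    # True when the response ratio meets the table's minimum.
    return responded / enrolled >= minimum_response_rate(enrolled)
```

For instance, a class with 120 enrolled and 63 responding (52.5%) clears the >100 threshold, which is why such a report can be stamped "sample adequate".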

42 INSTRUCTIONAL REPORT of EDUCATIONAL SATISFACTION (I.R.E.S.)
Universitas pro Omnibus Discipuli et Facultitas in Excelcis
Instructor: U.N. Fortunate  Course #: HIS123  Course name: History of Everything  Term/year: Spring, 1994
                 A   B   C   D   E   F   O
amount learned   3  16  46  21  14   0   1
overall teacher  1  12  40  29  18   0   0
overall course   2   8  49  20  11   0   0
Note: (A) =5= Best; (F)=6=Worst... Enrolled: 120; Responded: 53

43 INSTRUCTIONAL REPORT of EDUCATIONAL SATISFACTION (I.R.E.S.)
Universitas pro Omnibus Discipuli et Facultitas in Excelcis
Instructor: U.N. Fortunate  Course #: HIS123  Course name: History of Everything  Term/year: Spring, 1994
% / # responses:
                  A     B      C      D      E     F    O    mean  s d   T   grp
amount learned   3/2  16/10  46/29  21/13  14/9   0/0  1/1   2.64  0.88  27  low
overall teacher  1/1  12/8   40/25  29/18  18/11  0/0  0/0   2.43  0.96  24  low
overall course   2/1  18/11  49/31  20/13  11/7   0/0  0/0   2.81  0.93  33  low
Raw score: (A) = 5 = Best; (E) = 1 = Worst; F = Not applicable; O = Omitted. Enrolled = 120; Responded = 63 (sample adequate).
T-score: standardized score where 50 is the comparison-group mean and each 10 points in either direction is one standard deviation (so 40-60 spans ±1 SD).
Group score: 0-10% = low; 10-30% = low middle; 30-70% = middle; 70-90% = high middle; 90-100% = high.
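The T-scores in the report standardize each course mean against a comparison group. A minimal sketch; the comparison mean and SD would come from institutional norms, and the values used in the example call are hypothetical:

```python
def t_score(rating, group_mean, group_sd):
    # Standard T-score: 50 = comparison-group mean, 10 points = 1 SD.
    return 50 + 10 * (rating - group_mean) / group_sd

# E.g., a course mean of 2.64 against a hypothetical comparison group
# with mean 4.0 and SD 0.6 lands around 27.3, i.e. more than 2 SD low.
print(t_score(2.64, 4.0, 0.6))
```

The point of the T-score column is exactly this kind of context: a raw mean of 2.64 is uninterpretable until placed against what comparable courses receive.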

44 Two evaluations of HIS 345
                 mean  s d   T   group
amount learned   3.35  0.87  45  low-mid   (term/yr = spring, 1995; instr = U.N. Fortunate;
overall teacher  2.76  0.76  35  low        course = HIS 345; resp/enr = 29/61; % resp = 48)
overall course   2.85  0.90  37  low
amount learned   3.97  1.40  56  hi-mid    (term/yr = fall, 1995; instr = U.N. Fortunate;
overall teacher  3.57  1.30  47  mid        course = HIS 345; resp/enr = 20/42; % resp = 48)
overall course   3.63  1.24  50  mid

45 Enrollment profiles for HIS 345 in two semesters
                  Fr  So  Jn  Sn  Tot
original enr.      6  17  15  23   61   (term/yr = spring, 1995; instr = U.N. Fortunate;
final enr.         5  14  12  20   51    course = HIS 345; resp/enr = 29/51; % resp = 57)
eval respondents   5  13  11   0   29
original enr.      3  11  12  16   42   (term/yr = fall, 1995; instr = U.N. Fortunate;
final enr.         2   7   8  12   29    course = HIS 345; resp/enr = 20/29; % resp = 69)
eval respondents   2   4   5   9   20

46 Two evaluations of HIS 345
                 mean  s d   T   group
amount learned   3.35  0.87  45  low-mid   (term/yr = spring, 1995; instr = U.N. Fortunate;
overall teacher  2.76  0.76  35  low        course = HIS 345; resp/enr = 29/61; % resp = 48)
overall course   2.85  0.90  37  low
amount learned   3.97  1.40  56  hi-mid    (term/yr = fall, 1995; instr = U.N. Fortunate;
overall teacher  3.57  1.30  47  mid        course = HIS 345; resp/enr = 20/42; % resp = 48)
overall course   3.63  1.24  50  mid

47 Graphic display of 95% confidence intervals for individuals vs. comparison groups (chart: Teacher A and Teacher B plotted against personal, department, and institutional ranges on a 1-5 scale)
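The intervals in such a display are straightforward to compute. A sketch using the normal approximation (z = 1.96 for 95%); for very small classes a t-multiplier would give a slightly wider, more honest interval:

```python
from math import sqrt

def ci95(mean_rating, sd, n):
    # 95% confidence interval for a mean rating on the 1-5 scale,
    # using the normal approximation z = 1.96.
    half_width = 1.96 * sd / sqrt(n)
    return (mean_rating - half_width, mean_rating + half_width)

# Hypothetical example: mean 3.5, SD 1.0, 25 respondents.
low, high = ci95(3.5, 1.0, 25)
```

If Teacher A's and Teacher B's intervals overlap, a small difference in their raw means should not drive a summative decision; that is what the graphic is meant to show.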

48 Guidelines for Good Evaluation Practice Guideline #1: (do your homework) Establish the purpose of the evaluation and the uses and users of ratings beforehand; Include all stakeholders in decisions about evaluation process and policy; Keep a balance between individual and institutional needs in mind; Build a real "system" for evaluation, not a haphazard and unsystematic process;

49 Guidelines for Good Evaluation Practice Guideline #2: (establish protection for all) Publicly present clear information about the evaluation criteria, process, and procedures; Establish a legally defensible process and a system for grievances; Establish clear lines of responsibility/reporting for those who administer the system; Produce reports that can be easily and accurately understood.

50 Guidelines for Good Evaluation Practice Guideline #3: (make it positive, not punitive) Absolutely include resources for improvement and support of teaching and teachers; Educate the users of ratings results to avoid misuse and misinterpretation; Keep formative evaluation confidential and separate from summative decision making; In summative decisions, compare teachers on the basis of data from similar situations; Consider the appropriate use of evaluation data for assessment and other purposes.

51 Guidelines for Good Evaluation Practice Guideline #4: (verify & maintain the system) Use, adapt, or develop instrumentation suited to institutional/individual needs; Use multiple sources of information from several situations; Keep ratings data and validate the instruments used; Invest in the evaluation system and evaluate it regularly; Seek expert, outside assistance when necessary or appropriate.

52 Guidelines for Effective Professional Development Faculty development (professional enrichment) programs implemented without links to the faculty evaluation program tend to be met with apathy.

53 Guidelines for Effective Professional Development The focus of the professional enrichment program should be on the development of the additional skills faculty require in carrying out a wide variety of professional responsibilities.

54 Guidelines for Effective Professional Development The program should provide resources and professional enrichment opportunities designed to assist faculty in performing at a level consonant with the values and needs of the college.

55 Guidelines for Effective Professional Development In designing your program, be sure to attend to cultural issues that may support, or prevent, behavior change.

56 Guidelines for Effective Professional Development Make certain that appropriate support is available within the institution to sustain the behavior changes brought about by the professional enrichment program.

57 Guidelines for Effective Professional Development The program should be designed to target intact groups (departments, divisions, college) so that behavior changes become part of the ‘standard operating procedure’.

58 Guidelines for Effective Professional Development The program should stress enhancement of performance (the addition of professional-level skills), not remediation.

59 Guidelines for Effective Professional Development Finally, the college reward system should be configured to reward both high-quality performance and documented efforts to improve.

60 References & Related Resources For the Meta-Profession, see www.cedanet.com/META

61 Basic References Arreola, R. A. (2007) Developing a Comprehensive Faculty Evaluation System. (3rd ed.) Bolton, MA: Anker Publishing Company. [now from Jossey Bass] Arreola, R. A. (2006) “Monster at the foot of the bed: surviving the challenge of marketplace forces on higher education.” In S. Chadwick-Blossey & D. R. Robertson (Eds.) To Improve the Academy 24. Bolton, MA: Anker Publishing Company. Arreola, R. A. (2000b) Higher education’s meta profession. The Department Chair 11(2), 4-5, Fall. Berk, R. A. (2006) Thirteen strategies to measure college teaching. Sterling, VA: Stylus Publishing.

62 Basic References Birnbaum, R. (2001) Management fads in higher education. Where they come from, what they do, and why they fail. San Francisco: Jossey Bass. Birnbaum, R. (1992) How academic leadership works. Understanding success and failure in the college presidency. San Francisco: Jossey Bass. Birnbaum, R. (1988) How colleges work: the cybernetics of academic organization and leadership. San Francisco: Jossey Bass. Boice, R. (1992) The new faculty member. San Francisco: Jossey Bass. Bok, D. (2006) Universities in the Marketplace: The Commercialization of Higher Education. Princeton, NJ: Princeton University Press.

63 Basic References Braskamp, L. A. & Ory, J. C. (1994) Assessing Faculty Work. San Francisco: Jossey Bass. Braxton, J. M., Luckey, W. & Helland, P. (2006) Ideal and actual value patterns toward domains of scholarship in three types of colleges and universities. In J. M. Braxton (Ed.) “Analyzing faculty work and rewards: using Boyer’s four domains of scholarship.” New Directions for Institutional Research # 129. Spring. San Francisco: Jossey Bass. Burgan, M. (2006) Whatever happened to the faculty. Drift and decision in higher education. Baltimore, MD: Johns Hopkins Press. Chism, N. V. (2007) Peer review of teaching (2nd ed.). Bolton, MA: Anker Publishing.

64 Basic References Diamond, R. M. & Adam, B. (2002) (Eds.) A field guide to academic leadership. San Francisco: Jossey Bass. Diamond, R. M. & Adam, B. E. (1995) The disciplines speak. Rewarding the scholarly, professional, and creative work of faculty. Washington, DC: American Association for Higher Education. Diamond, R. M. & Adam, B. E. (2000) The disciplines speak II. More statements on rewarding the scholarly, professional, and creative work of faculty. Washington, DC: American Association for Higher Education. Feldman, K. A. (1997) Identifying exemplary teachers and teaching: evidence from student ratings. In R. P. Perry & J. C. Smart (Eds.) Effective teaching in higher education: research and practice. New York: Agathon Press. [Reprinted in Perry & Smart, 2007]

65 Basic References Cashin, W. E. (1990) Students do rate different academic fields differently. In M. Theall & J. Franklin (Eds.) “Student ratings of instruction: issues for improving practice.” New Directions for Teaching and Learning # 43. San Francisco: Jossey Bass. Centra, J. A. (1993) Reflective Faculty Evaluation: Enhancing Teaching and Determining Faculty Effectiveness. San Francisco: Jossey Bass. Feldman, K. A. (1989) The association between student ratings of specific instructional dimensions and student achievement: Refining and extending the synthesis of data from multisection validity studies. Research in Higher Education, 30, 583-645. Gappa, J. M., Austin, A. E., & Trice, A. G. (2007) Rethinking faculty work: higher education’s strategic imperative. San Francisco: Jossey Bass.

66 Basic References Marsh, H. W. (2007) Students’ evaluations of university teaching: dimensionality, reliability, validity, potential biases, and usefulness. In R. P. Perry & J. C. Smart (Eds.) The scholarship of teaching and learning in higher education: an evidence-based perspective. New York: Springer. Middaugh, M. F. (2001) Understanding faculty productivity. Standards and benchmarks for colleges and universities. San Francisco: Jossey Bass. O’Meara, K. A. & Rice, R. E. (2005) Faculty priorities reconsidered. San Francisco: Jossey Bass.

67 Basic References Pascarella, E. T., Salisbury, M. H., & Blaich, C. (2011) Exposure to effective instruction and college student persistence: A multi-institutional replication and extension. Journal of College Student Development, 52 (1), 4-19. Available at: http://www.education.uiowa.edu/crue/publications/documents/ExposuretoEffectiveInstructionandCollegeStudentPersistence--AMulti-institutionalReplicationa.pdf Pascarella, E. T. & Terenzini, P. T. (2005) How college affects students. Volume 2: A third decade of research. San Francisco: Jossey Bass. Pascarella, E. T. & Terenzini, P. T. (1991) How college affects students. San Francisco: Jossey Bass.

68 Basic References Perry, R. P. & Smart, J. C. (Eds.) (2007) The scholarship of teaching and learning in higher education: An evidence-based perspective. Dordrecht, The Netherlands: Springer. Schuster, J. H., & Finkelstein, M. (2006) The American faculty. The restructuring of academic work and careers. Baltimore: Johns Hopkins University Press. Scriven, M. (1967) “Methodology of evaluation.” In R. Tyler, R. Gagne, & M. Scriven (Eds.) Perspectives of curriculum evaluation. Chicago: Rand McNally & Company. Sorcinelli, M. D., Austin, A., Eddy, P. L. & Beach, A. L. (2006) Creating the future of faculty development: Learning from the past, understanding the present. Bolton, MA: Anker Publishing.

69 Basic References Theall, M. (2006) Leadership in faculty evaluation and development: some thoughts on why and how the meta-profession can control its own destiny. In V. R. K. Prasad (Ed.) Education sector HR perspectives. Nagarjuna Hills, Punjagutta, India: Icfai University Press. Available at: http://www.cedanet.com/meta/meta_leader.pdf Theall, M., Abrami, P. A. & Mets, L. (2001) (Eds.) “The student ratings debate. Are they valid? How can we best use them?” New Directions for Institutional Research No. 109. San Francisco: Jossey Bass. Theall, M. & Arreola, R. A. (2006) “The Meta-Profession of teaching.” NEA Higher Education Advocate, 22 (5), 5-8, June.

70 Basic References Theall, M. & Feldman, K. A. (2007) “Commentary and update on Feldman’s (1997) ‘Identifying exemplary teachers and teaching: evidence from student ratings.’” In R. P. Perry & J. C. Smart (Eds.) The scholarship of teaching and learning in higher education: an evidence-based perspective. New York: Springer. Theall, M., & Franklin, J. L. (Eds.) (1990) Student ratings of instruction: Issues for improving practice. New Directions for Teaching and Learning #43. San Francisco: Jossey Bass. Theall, M., Mullinix, B., & Arreola, R. A. (2009) Promoting dialogue and action on meta-professional skills, roles, and responsibilities. In L. Nilson & J. Miller (Eds.) To improve the academy Vol. 28. San Francisco: Jossey Bass, 115-138. Wergin, J. F. (2007) (Ed.) Leadership in place. Bolton, MA: Anker Publishing.

71 Basic References Other Jossey Bass New Directions volumes on evaluation/ratings: New Directions for Teaching & Learning #s: 83 (2000, K. Ryan, Ed.); 87 (2001, K. G. Lewis, Ed.); 88 (2001, C. Knapper & P. Cranton, Eds.); 96 (2003, D. L. Sorenson & T. Johnson, Eds.). New Directions for Institutional Research #: 114 (2002, C. L. Colbeck, Ed.)

72 For more information, go to: http://www.cedanet.com/meta for documents and materials about the ‘meta-profession’ of the professoriate, and http://ntlf.com/pod/index.html (see the bottom-of-page link to “Faculty evaluation” by M. Theall for a review of the research and an extended/annotated bibliography).

73 For information about student ratings instruments: Student Instructional Report (SIR / SIR II), Educational Testing Service, 609-921-9000, www.ets.org. Instructional Development and Educational Assessment Survey (IDEA), The IDEA Center, Kansas State University, 800-255-2757; idea@ksu.edu; http://www.idea.ksu.edu/

74 For information about student ratings instruments: Course-Instructor Evaluation Questionnaire (CIEQ), Lawrence Aleamoni, University of Arizona, www.cieq.com. Student Evaluation of Educational Quality (SEEQ), Herbert W. Marsh, Oxford University, herb.marsh@edstud.ox.ac.uk

