The Hardest Part of Teaching

1 The Hardest Part of Teaching
March Faculty Development Workshop Sponsored by PETAL

2 A Brief Note For most of us, the hardest part of teaching is not really the grading. It's waking up in time for the 8 AM class.

3 Timeline of Events
She shoots! She scores!
Scooby Doo, who are you?
Consider this, Batman!
I'm a doctor, not a dictionary!
Can we talk?
Wrap up

4 She Shoots! She Scores! Goals for the Workshop
Understand some of the terminology of assessment as a springboard for thinking
Define our goals in creating systems for assessing students here at Fisher
Start the dialogue about grading and assessing students here at Fisher

5 Scooby Doo, Who Are You? Let's Get Acquainted
Who am I? Dr. Kris Green, MST/CS/Mathematics
I hate grading: reducing students to a single symbol
I enjoy providing feedback to my students to help them learn
I think tests, etc. should be a place to continue learning, rather than a proof of learning
I rarely use the exact same anything twice
Who are you? Name, department, ideas about grading
Facilitator note: Get the audience into groups of three or four, ideally with different perspectives on grading. These groups can then discuss throughout the workshop. If there are not enough people or not enough different approaches, use just about any criterion for grouping: number people off, or make sure that different departments are mixed together.

6 Consider This, Batman! Case Studies for Comparison
This is the tale of three students in a high school Latin II class. Each has an 85% average, but each got there differently.
Kris has received, despite his efforts, a score of 85% on every test, homework, and class exercise.
Cindy started off in the 70% range, but has consistently been in the 90% range for the second half of the year.
Mike is the opposite of Cindy. He started in the 90% range, then spent the second half of the semester in the 70% range.
Do all three deserve the same course grade, traditionally a B?
Facilitator note: Get the discussion to a point where people see that it is the term "B" that has to be evaluated, along with the reason for using an average in the first place. Possible fixes: a more complicated weighting process that treats consistency at different points in the semester differently (see the sketch below); redefining the term "B"; or redefining the overall approach to grading so that averaging is not used (rubrics, for example). Consider also the "survival by partial credit" syndrome: does Kris really deserve a "B"? He has never done anything in the course 100% correctly, so in some sense at least 15% of his work is in error!
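One way to make the case study concrete is to compute the three averages under different schemes. The following is a minimal sketch, not part of the workshop materials: the per-unit scores are invented to match the descriptions above (each list averages exactly 85%), and the linear recency weighting is just one illustrative alternative to a plain average.

# Hypothetical illustration: how a plain average and a recency-weighted
# average treat Kris, Cindy, and Mike. Scores are invented to fit the
# case study (each list averages exactly 85%).

def plain_average(scores):
    """Traditional scheme: every graded item counts equally."""
    return sum(scores) / len(scores)

def recency_weighted(scores):
    """Illustrative alternative: later work counts more, so improvement
    over the semester is rewarded. Weights rise linearly from 1 to n."""
    weights = range(1, len(scores) + 1)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

students = {
    "Kris":  [85, 85, 85, 85, 85, 85],   # flat 85% on everything
    "Cindy": [73, 76, 79, 91, 93, 98],   # starts in the 70s, finishes in the 90s
    "Mike":  [98, 93, 91, 79, 76, 73],   # the reverse trajectory
}

for name, scores in students.items():
    print(f"{name:5s} plain average: {plain_average(scores):5.1f}   "
          f"recency-weighted: {recency_weighted(scores):5.1f}")

# Kris stays at 85 either way; Cindy rises to roughly 89.5 and Mike falls
# to roughly 80.5 under the recency-weighted scheme, so the "same B" splits apart.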

7 I'm a doctor, not a dictionary! The Basic Terminology
From Grant Wiggins (Educative Assessment): the aim of assessment is primarily to educate and improve student performance, not merely to audit it. Assessment should be educative in two basic senses:
It should be deliberately designed to teach (not just measure) by revealing to students what worthy work looks like (offering authentic tasks)
It should provide rich and useful feedback to all students and to their teachers

8 The Guiding Light(s) Where should we head?
Assessment reform must center on the purpose, not merely on the techniques or tools, of assessment.
Assessment reform is essentially a moral matter.
Assessment is central, not peripheral, to instruction.
Assessment anchors teaching, and authentic tasks anchor assessment.
Performance improvement is local.
Facilitator note: See pages 17-18 of Wiggins for more details; a paraphrase follows. Summative versus formative assessment: changing the form of the questions does nothing if we still have one-shot, year-end tests. Students are entitled to a more educative and user-friendly system. Teachers are entitled to a system that fosters better teaching. Learning depends on the goals that are being assessed: a backwards design process. Genuine performance is more than drill work. State and national standards are reflected and honed at the local level.

9 What Are Little Grades Made Of? Components of Assessment
Collecting the data
  Consider the sources of the data
  Consider the frequency of the data
  Consider the relevance of the data
Evaluating the data
  Comparison against standards
  Comparison against other work
Providing effective feedback
Assigning a grade-symbol

10 Just the Facts, Ma’am Some Possible Data Sources

11 Caveat Grader Qualitative v. Quantitative
But remember, the data we collect are qualitative: how students are doing with the material, what students have done, what students are having trouble with. Consider the typical math scheme:
Hand work in (qualitative)
Put a percentage grade on the work and average (quantitative)
Assign a letter grade (qualitative)
Multiple translations like this will lose meaning without clearly defined grade standards (not simply percentage-point or total-point requirements). We should provide "grade profiles" to our students: qualitative descriptions of what student performance at each letter grade looks like (good examples from the Foundation for Critical Thinking).
From: Richard Fulkerson, Sent: 2/4/02 4:02 PM, Subject: Re: SOPs for Grade Percentages? [long, excerpted]
I'll support the idea that there is no objectivity at all in equating letter grades to numerical "percentages." And Ed White is certainly correct that you have to ask "percent of what?" In our field it rarely makes any sense to talk about what percent of a task a student got right. (And that's probably the wrong question anyway.) Most of our students are A spellers if that means they get 92% of their words spelled correctly in papers. If we figured out the total number of opportunities for grammar errors in a paper, and then calculated the percentage of errors actually made, the percent correct would also be very high, and equally useless.
In addition, we don't generally "read" papers quantitatively; nor do professors in most other fields. So if I look at a paper and then somehow put a 93 on it, that number is already suspect for implying that I can accurately make distinctions among 100 levels of writing performance (with 100 being perfect). I'm pretty confident that I can make a reliable three-level rating (superior, satisfactory, and unsatisfactory papers within a given course). If necessary, I might be willing to attempt a five-level category system (A-F). But all our research indicates that, without prior careful training in regard to the assignment, my ratings are unlikely to match those of a colleague. We don't improve things by pretending to have measurable percentages. They should be called "magical percentages."
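To illustrate the slide's point with a small sketch of my own (not from the workshop): a percentage cutoff throws away everything but the symbol, while a grade profile keeps a qualitative description attached to the letter. The cutoff values and profile wording below are invented placeholders, loosely in the spirit of the Foundation for Critical Thinking examples mentioned above.

# Hypothetical sketch: the usual quantitative pipeline collapses detail at
# each translation, whereas a "grade profile" keeps a qualitative
# description attached to the letter. Cutoffs and wording are illustrative only.

def percent_to_letter(pct):
    """Typical cutoff scheme: 90.1% and 99.9% both collapse to 'A'."""
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if pct >= cutoff:
            return letter
    return "F"

GRADE_PROFILES = {  # invented wording, for illustration only
    "A": "consistently analyzes problems accurately and explains reasoning clearly",
    "B": "usually sound work; explanations occasionally incomplete or imprecise",
    "C": "grasps core ideas, but reasoning is often undeveloped or error-prone",
    "D": "fragmentary understanding; work rarely stands on its own",
    "F": "work does not yet show engagement with the course's central skills",
}

for pct in (93, 85, 71):
    letter = percent_to_letter(pct)
    print(f"{pct}% -> {letter}: {GRADE_PROFILES[letter]}")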

12 Another Dichotomy Objective v. Subjective
Objective grading measures performance relative to fixed, universal standards.
Subjective grading is based on more relative measures, like the rest of the class's performance or a student's earlier performance.
But all assessment requires judgment. Hiding the judgment in a single letter grade is dishonest and does not really help the student learn from his or her mistakes.
Discussion question: What kinds of judgment are required in assessing student work?
Discussion question: What are the pros and cons of objective and subjective grading? Should we include both types of grading in our courses?

13 Another Dichotomy Summative v. Formative
Summative evaluation is like a final exam: a one-shot sampling of the topics covered, assessing whether you know/understand/can do them at that point only.
Formative evaluation is ongoing and is designed to help the student improve; thus it is part of the learning process: writing and revising a paper, for example.

14 Can We Talk? Questions for Discussion
What do I want the students to know, understand, and be able to do? How does this affect my teaching and planning?
What does an A student look like? What about a B, C, D, or F student?
Are these profiles of A, B, C, D, and F students consistent across the curriculum, or should they change as the level of the coursework changes?
What is the role of standards in assessing students: should we hold them up to a rigid ruler, or should the ruler flex based on the other students?
How can we avoid grade compression and grade inflation?
Facilitator note: Form five large groups, one per question, then have each report to the entire audience what it discussed.

15 The Check(list) is in the mail Checklist of Requirements
Let's come up with 3-5 items in each group that would be necessary components of any system for assessing students here at Fisher. We'll share these and generate a master list with descriptors. I'll send this to everyone and post the information on my website, along with this PowerPoint; look there under teacher resources.

16 Selected Resources for Perusal
Grant Wiggins, Educative Assessment
Tom Bourner and Steve Flowers, Teaching and Learning Methods in Higher Education
Part III of the New York State MST Standards Guide
Office of Academic Planning and Assessment, University of Massachusetts Amherst

17 Thank You

