NHU Assessment Workshop


1 NHU Assessment Workshop
Barbara Wright, Associate Director, WASC. August 22, 2008, San Jose, CA

2 Our Roadmap
Meaning & Goals of Assessment (30-45 minutes)
Best Practices (30-45 minutes)
Break (15 minutes)
Challenges of Scoring with Rubrics (30 minutes)
Closing the Loop (30-45 minutes)

3 I. Meaning and Goals

4 The Assessment Loop
1. Goals, questions → 2. Gathering evidence → 3. Interpretation → 4. Use → back to 1. Goals, questions

5 So what is assessment? A systematic process of 1) setting goals for or asking questions about student learning, 2) gathering evidence, 3) interpreting it, and 4) using it to improve the effects of college on students’ learning and development.

6 Other (subordinate) steps in the assessment process . . .
Planning
Mapping goals onto curriculum
Adding outcomes to syllabi
Offering faculty development
Reporting
Communicating
Adding assessment to program review
Assessing the assessment

7 Mapping outcomes onto curriculum and pedagogy -- it can reveal . . .
where the skill is taught
how it is taught
how consistently it is reinforced
where there are intervention points
(But don’t obsess on syllabi or course descriptions. Ultimately, they’re just inputs, not outcomes.)

8 More definitions . . .
Inputs, outputs
Measurement, evidence, documentation
Direct, indirect
Cross-sectional, longitudinal
Reliability, validity
Authentic, academic
Grades, standards
Outcomes, baselines
Accountability, improvement
Summative, formative
Assessment, evaluation
Criteria, rubrics
Pump, filter

9 Our understanding of assessment has evolved . . .
From isolated facts, skills → to a full range of knowledge, skills, dispositions
From memorization, reproduction → to problem solving, investigating, reasoning, applying, communicating
From comparing performance against other students → to comparing performance to established criteria

10 Shifts in assessment, cont.
From scoring right, wrong answers → to looking at the whole reasoning process
From a single way to demonstrate knowledge (e.g., a multiple-choice or short-answer test) → to multiple methods & opportunities (e.g., open-ended tasks, projects, observations)
From simplified evidence → to complex evidence

11 Shifts in assessment, cont.
From a secret, exclusive & fixed process → to an open, public & participatory process
From reporting only group means, normed scores → to disaggregation, analysis, feedback
From scientific → to educative
From a filter → to a pump

12 Shifts in assessment, cont.
From “teacher-proof” assessment → to respect, support for faculty judgment
From students as objects of measurement → to students as participants, beneficiaries of feedback
From episodic, conclusive → to continual, integrative, developmental
From reliability → to validity

13 II. Best Practices

14 Some typical outcomes . . .
Traditional, e.g.:
Communication
Critical thinking
Quantitative analysis
Knowledge in a range of disciplines

15 More outcomes . . .
Newer, e.g.:
Inquiry, synthesizing skills
Ability to work in diverse teams
Information literacy
Lifelong learning
Self-assessment
Civic engagement

16 How can we possibly measure these things? We . . .
look for qualitative as well as quantitative evidence
balance reliability with validity
align our methods with our outcomes
remember the whole cycle and the role of human judgment
come as close as we can
do no harm

17 Choice of assessment method matters.
Students value and learn what we teach and test. How we teach and test matters as much as what we teach and test. What and how we assess also matters: we get more of what we test or assess, less of what we don’t. At a comprehensive, teaching institution, what students learn is of primary importance. There’s official curriculum and pedagogy, and there’s what students understand as implicit in curriculum and pedagogy. Don’t underestimate the implicit, or the importance of pulling implicit and explicit into alignment for greatest effectiveness.

18 Descriptive data include …
Retention, graduation rates, time to degree
Percentage going to grad school
Percentage employed in their field of study
SAT scores, class rank of incoming students
Percentage completing a capstone, internship, semester abroad, etc.
Problem: Information about learning is inferential at best.

19 Indirect methods include …
Surveys
Interviews
Focus groups
Inventories
“Ethnographic research”
Problem: These are reports about perceptions of learning, not evidence of learning itself.

20 Direct methods include …
Portfolios
Capstones
Performances
Common assignments, secondary readings
Course management programs
Local tests, comps in the major
Commercial tests
Student self-assessment
Advantage: You’re looking directly at evidence of what students know and can do.

21 Good assessment, like good teaching and learning, avoids the disconnects:
Isolated course assignments vs. across-the-curriculum skills
Focus on teaching, testing of facts vs. higher-order thinking
Acceptance, reproduction of answers vs. intellectual curiosity, questions

22 The disconnects, cont.:
Complexity, nuance vs. “right” answers – or total relativism
Going beyond the textbook, the formula vs. fear of overstepping bounds
Confidence, courage, joy in learning vs. insecurity, risk-aversion, anger

23 Higher-order thinking … (adapted from L. Resnick, 1987)
It’s nonalgorithmic, i.e., the path of action is not fully specified in advance.
It’s complex, i.e., the total path is not “visible” from any single vantage point.
It often yields multiple solutions, each with costs and benefits.
It requires nuanced judgment and interpretation.
It involves application of multiple criteria, which may conflict with one another.

24 Higher-order thinking, cont …
It often involves uncertainty; not everything about the task is known or can be.
It requires self-regulation; someone else is not giving directions.
It involves making meaning, discerning patterns in apparent disorder.
It is effortful: the elaborations and judgments required entail considerable mental work and are likely to take time.

25 Bloom’s taxonomy of cognitive processes (1956)
Knowledge – recall of information (tell, list, locate, name)
Comprehension – understanding of information (explain, interpret, distinguish, restate, compare)
Application – use of knowledge, rule, or method in a new situation (solve, construct, complete, classify)

26 Bloom’s taxonomy, cont.
Analysis – breaking information into components to understand and explore relationships (compare/contrast, distinguish, separate, classify)
Synthesis – putting ideas together in a new or unique way (create, invent, compose, predict, design, imagine, devise, formulate)
Evaluation – judging the value of products or ideas based on criteria (judge, justify, verify, debate, recommend, prioritize, rate)

27 Stages of Intellectual Development (William Perry, 1970):
Dualism (Good vs. bad, right vs. wrong, us vs. them. Knowledge, agency resides in external authority.)
Multiplicity (Acceptance of diverse opinions, but w/o pattern or system; “everyone has a right to their own opinion.”)
Relativism (Opinions, values rest on evidence, logic. Some things are unknowable and people may reasonably disagree. Knowledge depends on context.)
Commitment (Choices and decisions made in awareness of relativism but based on an internalized, coherent set of values. Agency resides within the person.)

28 “Surface” vs. “deep” learning (adapted from Noel Entwistle, 2000; see also John Biggs)
Surface: unrelated bits of knowledge vs. deep: relationships, connections
Surface: memorization, formulas vs. deep: patterns, principles, active integration
Surface: difficulty “making sense” vs. deep: logic, evidence, conclusions

29 “Surface” vs. “deep” learning, cont.
Surface: study without reflection, strategy vs. deep: understanding, metacognition
Surface: little meaning, value in course, tasks vs. deep: active interest, engagement
Surface: feelings of pressure, worry, anger vs. deep: pleasure, satisfaction

30 Methods for complex outcomes …
are open-ended
focus on essentials, principles
pose authentic, engaging tasks
require meaning making, judgment
require active expression
are scored for understanding, not just regurgitation
show development over time

31 Methods for complex outcomes …
Portfolios
Capstones
Performances
Common assignments, secondary readings
Course management programs
Local tests
Student self-assessment

32 III. Challenges of Scoring with Rubrics

33 For analysis and interpretation of evidence, you need
Outcomes and standards
Rubrics or another rating tool
An inclusive community of judgment
Time for the task
A structure and institution-wide expectation that this task will be done

34 Four dimensions of learning --
What students learn (cognitive as well as affective, social, civic, professional, spiritual and other dimensions)
How well (thoroughness, complexity, subtlety, agility, transferability)
What happens over time (cumulative, developmental effects)
Is this good enough? (the ED question)

35 Thinking About Standards . . .
Absolute standards: the knowledge/skill level of champions, award winners, top experts
Contextual standards: appropriate expectations for, e.g., a 10-year-old, a college student, an experienced professional
Developmental standards: amount of growth, progress over time, e.g., 2 years of college, 5 years
(Institutional, regional, national standards?)

36 Next steps . . .
Unpack goals; define subgoals and performance indicators (“criteria”)
Describe skill levels (“rubrics”); a small illustrative sketch follows below
Define appropriate standards
Define appropriate development
Be inductive as well as deductive
Train raters, run a pilot, then expand
Keep refining
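To make “criteria” and skill levels concrete, here is a minimal sketch of a rubric written down as a data structure with a rating function over it. The criteria, level descriptors, and the `writing_rubric` and `score_paper` names are hypothetical illustrations, not taken from the workshop.

```python
# A hypothetical rubric: each criterion maps performance levels to descriptors.
from statistics import mean

writing_rubric = {
    "thesis": {
        4: "Clear, arguable thesis sustained throughout",
        3: "Clear thesis, occasionally lost",
        2: "Thesis present but vague",
        1: "No identifiable thesis",
    },
    "use of evidence": {
        4: "Evidence well chosen, analyzed, and cited",
        3: "Relevant evidence, limited analysis",
        2: "Evidence thin or poorly integrated",
        1: "Little or no evidence",
    },
}

def score_paper(ratings, rubric):
    """Check each rating against the rubric's defined levels, return the average level."""
    for criterion, level in ratings.items():
        if level not in rubric[criterion]:
            raise ValueError(f"{criterion}: level {level} is not defined in the rubric")
    return mean(ratings.values())

# One rater's judgment of one student paper.
print(score_paper({"thesis": 3, "use of evidence": 2}, writing_rubric))  # 2.5
```

Writing the descriptors down this explicitly is what lets the level definitions, rather than individual raters’ intuitions, carry the standard.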

37 What does inter-rater reliability training look like?
Develop a draft rubric
Select student work of varying quality
Gather the raters
Have raters use the rubric to rate, then explain their ratings
Establish ground rules (e.g., no inferences)
Select “anchor papers”
Re-calibrate as needed (one common way to check agreement is sketched below)
Revise rubric as needed
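The slides do not prescribe a particular agreement statistic, but a common, simple check during calibration is Cohen’s kappa for two raters. The sketch below is only an illustration: the 4-level scale and the sample ratings are made up.

```python
# A minimal sketch of inter-rater agreement via Cohen's kappa (two raters).
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Agreement between two raters on the same papers, corrected for chance."""
    n = len(ratings_a)
    # Observed agreement: fraction of papers both raters placed at the same level.
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: probability of a match if each rater assigned levels
    # independently at their own observed frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_expected = sum((freq_a[lvl] / n) * (freq_b[lvl] / n)
                     for lvl in set(freq_a) | set(freq_b))
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two raters score the same ten papers on a 4-level rubric.
rater_1 = [4, 3, 3, 2, 4, 1, 2, 3, 4, 2]
rater_2 = [4, 3, 2, 2, 4, 1, 3, 3, 4, 2]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # kappa = 0.72
```

Kappa near 1 means the raters agree far more often than chance would predict; persistently low values during calibration usually signal that the rubric language or the anchor papers need tightening.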

38 IV. Closing the Loop

39 What is Assessment? A systematic process of 1) setting goals for or asking questions about student learning, 2) gathering evidence, 3) interpreting it, and 4) using it to improve the effects of college on students’ learning and development. In other words, it’s the whole process, not just steps 1 and 2.

40 Using Findings Requires
Choice of methods (step 2) that are likely to lead to actionable information
An inclusive “community of judgment” that analyzes and interprets the data (step 3)
A judgment about whether learning results are “good enough” or require improvement
Agreement on actions to follow, with necessary support, resources
Action (“consequential validity”), oversight (step 4)
Communication, synergy throughout

41 Four dimensions of learning --
What students learn (cognitive as well as affective, social, civic, professional, spiritual and other dimensions)
How well (thoroughness, complexity, subtlety, agility, transferability)
What happens over time (cumulative, developmental effects)
Is this good enough? (the ED question)

42 Getting to “Good Enough”
Program/department judgments
Benchmarking
Collaboration, standard-setting with peer institutions
Input from advisory boards, professional associations
Use of external examiners

43 Planning, step 1: Defining goals
Do they include what’s important, not just what’s easy to measure, e.g., knowledge? skills? values and dispositions?
Are they formulated in two/three dimensions: what? how well? what happens over time?
And do they lead to compelling questions?

44 Step 2: Choosing methods . . .
Are they adequate to complex outcomes?
Are some or all local?
Do they have content validity?
Are they cumulative, integrative?
Are they cost-effective?
Will the evidence answer those compelling questions?
Do they have consequential validity?

45 Step 3: Interpretation, or making meaning out of data . . .
Is it included as an explicit step?
Is the “community of interpretation” inclusive?
Is it characterized by courage & candor?
Is this a process of inquiry and understanding, not blaming?

46 Interpretation, cont. . . .
What did we find?
What do the findings mean?
Who can shed light on them?
What do we want to do about them?
Is the plan feasible? What will it cost?
Will it do no harm?

47 Step 4: Using findings . . .
Does the notion of “use” go beyond:
Collecting data?
Writing reports?
Making recommendations?
Planning for changes?
Refining the assessment process?
And does it “close the loop,” i.e.:
Implement recommended changes?
Revisit to determine the effect of changes?

48 Using findings, cont. – do we have
Credible recommendations?
Communication, broad buy-in (faculty, administrators, students)?
Budget? Mechanisms? Oversight?
A schedule for revisiting; moving on?
Record-keeping?

49 Benefits to students . . .
Clarity about what’s expected
Accurate feedback on own performance
Greater engagement, commitment
Enhanced confidence
Enhanced retention, completion

50 Benefits to faculty
Greater collegiality, support
Enhanced efficacy, satisfaction
Shared responsibility for student learning, success
Stronger programs
Meaningful scholarship

51 Benefits to the institution . . .
Great narratives for PR
Happier employers
Happier alums
More successful fundraising, grant-getting
Successful reaccreditation

