Why Assessment Results Are Hard to Use (and what to do about it)
Hi…I’m the Assessment Director
The Theory
What Could Go Wrong?
Look familiar?
Assessment Design
Don’t outsource design (if you can help it)
Choosing an Achievement Scale
Pick a scale that corresponds to the educational progress we expect to see. Example:
- 0 = Remedial work
- 1 = Freshman / sophomore level work
- 2 = Junior / senior level work
- 3 = What we expect of our graduates
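Once ratings are coded on a scale like this, tallying them is trivial. A minimal sketch, assuming a hypothetical list of ratings for one learning outcome:

```python
from collections import Counter

# Hypothetical ratings for one learning outcome, coded on the 0-3 scale:
# 0 = remedial, 1 = fresh/soph, 2 = jr/sr, 3 = graduate-level work
ratings = [2, 3, 1, 2, 2, 3, 0, 2, 1, 3]

counts = Counter(ratings)
total = len(ratings)
for level in range(4):
    share = counts[level] / total
    print(f"level {level}: {counts[level]:2d} ({share:.0%})")
```

Reporting the distribution across levels, rather than a single summary number, keeps the ordinal nature of the scale visible.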
Subjective measurements
- Use authentic data that already exists (sacrifices reliability for validity)
- Create subjective ratings in context (sacrifices reliability for validity)
- Assess what you observe, not just what you teach! (recovers some reliability)
This matters most for complex goals like general education.
Survey Attitudes and Behaviors
- CIRP, BCSSE, Student Satisfaction Inventory
- Link student ID numbers to other indicators
- These can be easily and productively outsourced
Analysis and Reporting
Some Tools
- ANOVA
- Logistic regression and ROC curves
- Pivot tables
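As one concrete example of these tools, the one-way ANOVA F statistic can be computed in a few lines of plain Python. This is a minimal sketch, not tied to any particular package; the group scores below are hypothetical ratings from two cohorts:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of groups (each a list of scores)."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical ratings from two cohorts:
f = one_way_anova_f([[1, 2, 3], [2, 3, 4]])
print(f)  # 1.5
```

A large F suggests the between-group differences are big relative to the noise within groups; in practice you would look up (or compute) the p-value for the corresponding F distribution.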
Longitudinal Analysis
Use data you already have
- Grades
- FAFSA, SAT, ACT items
- Demographic data
- ePortfolio statistics or work
- Library circulation statistics
Analysis: Example
You don’t have to average
Measurement requires:
- Units
- The ability to aggregate
What we actually do is estimation, not measurement; without true measurement, we shouldn’t average.
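To see why averaging an ordinal scale can mislead, compare two hypothetical cohorts with identical means but very different distributions. A proportion at-or-above a threshold aggregates safely; the mean hides the difference:

```python
# Hypothetical ratings on the 0-3 scale for two cohorts
ratings_a = [0, 3, 3, 0, 3, 3, 2, 3, 0, 3]   # polarized
ratings_b = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]   # uniformly junior/senior level

def mean(xs):
    return sum(xs) / len(xs)

def prop_at_or_above(xs, level):
    """Share of students rated at or above the given level."""
    return sum(x >= level for x in xs) / len(xs)

print(mean(ratings_a), mean(ratings_b))                    # 2.0 and 2.0 -- identical
print(prop_at_or_above(ratings_a, 2),
      prop_at_or_above(ratings_b, 2))                      # 0.7 vs 1.0 -- very different
```

The mean treats the distance from 0 to 1 as equal to the distance from 2 to 3, which the scale never promised; the proportion meeting a standard makes no such assumption.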
Averages
Proportions
Min / max
Using comparative scales
Confidence Intervals
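Since proportions are the recommended summary, a confidence interval for a proportion is the natural companion. A minimal sketch using the Wilson score interval (chosen here because it behaves well for the small samples typical of assessment work):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a proportion (95% coverage by default)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Hypothetical: 7 of 10 portfolios rated at or above level 2
lo, hi = wilson_ci(7, 10)
print(f"{lo:.2f} to {hi:.2f}")
```

With only 10 observations, the interval is wide, which is exactly the point: reporting it keeps readers from over-interpreting small year-to-year changes.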
Portfolio analysis: epic FAIL!
Multi-dimensional graphs
Analysis: Example
Proportions
Conclusions
- Design it yourself
- Use a sensible longitudinal scale
- Combine with other data
- Avoid averages
Last Requests?
highered.blogspot.com