
1 Tuned Models of Peer Assessment in MOOCs. Jonathan Huang, Chuong Do, Daphne Koller, Zhenghao Chen, Andrew Ng, Chris Piech (Stanford / Coursera)

2 How can we efficiently grade 10,000 students? A variety of assessments.

3 The Assessment Spectrum. Short response (multiple choice): easy to automate, but limited ability to ask expressive questions or require creativity. Long response (essay questions, coding assignments, proofs): hard to grade automatically, but we can assign complex assignments and provide complex feedback.

4 Stanford/Coursera's HCI course: video lectures + embedded questions, weekly quizzes, open-ended assignments.

5 Student work Slide credit: Chinmay Kulkarni

6 Calibrated peer assessment: 1) Calibration, 2) Assess 5 peers, 3) Self-assess (✓ staff-graded). A similar process is also used in Mathematical Thinking, Programming Python, Listening to World Music, Fantasy and Science Fiction, Sociology, Social Network Analysis, ... Slide credit: Chinmay Kulkarni (http://hci.stanford.edu/research/assess/). Image credit: Debbie Morrison (http://onlinelearninginsights.wordpress.com/). [Russell '05, Kulkarni et al. '13]

7 Largest peer grading network to date. HCI 1, Homework #5: 77 "ground truth" submissions graded by everyone (staff included).
HCI #1: 3,607 students, 5 assignments, 6,702 submissions, 31,067 peer grades.
HCI #2: 3,633 students, 5 assignments, 7,270 submissions, 32,132 peer grades.

8 How well did peer grading do? (Histogram of grading error, with bands at 5 and 10 percentage points from ground truth.) Up to 20% of students get a grade more than 10 percentage points from ground truth (~1,400 students): much room for improvement!


10 Peer Grading Desiderata:
– Highly reliable/accurate assessment
– Reduced workload for both students and course staff
– Scalability (to, say, tens of thousands of students)
Our work:
– Statistical model for estimating and correcting for grader reliability/bias
– A simple method for reducing grader workload
– Scalable estimation algorithm that easily handles MOOC-sized courses

11 How to decide if a grader is good. (Figure: graders linked to the submissions they graded, with observed grades of 100%, 30%, 50%, 55%, 56%, 54%.) Who should we trust? Idea: look at the other submissions graded by these graders! Need to reason with all submissions and peer grades jointly!

12 Statistical Formalization. Variables per student/grader and submission: bias (average grade inflation/deflation), reliability (grading variance), true score, and observed score.

13 Model PG1: modeling grader bias and reliability. Variables: true score of student u, grader bias of student v, grader reliability of student v, and student v's (observed) assessment of student u. Related models in the literature: crowdsourcing [Whitehill et al. ('09), Bachrach et al. ('12), Kamar et al. ('12)], anthropology [Batchelder & Romney ('88)], peer assessment [Goldin & Ashley ('11), Goldin ('12)].
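To make the bias/reliability idea concrete, here is a minimal Python/NumPy sketch of a PG1-style generative process. This is not the authors' code: the Gaussian forms, the Gamma reliability prior, and all names and hyperparameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pg1(n_students=1000, graders_per_submission=4,
                 mu_score=70.0, sigma_score=10.0,
                 sigma_bias=3.0, reliability_shape=5.0, reliability_scale=2.0):
    """Sketch of a PG1-style model: observed grade = true score + grader bias + noise,
    where the noise variance is 1 / (grader reliability). All priors are assumptions."""
    true_score = rng.normal(mu_score, sigma_score, n_students)            # s_u
    bias = rng.normal(0.0, sigma_bias, n_students)                        # b_v
    reliability = rng.gamma(reliability_shape, reliability_scale, n_students)  # tau_v

    observed = {}  # (grader v, submission u) -> observed grade z_u^v
    for u in range(n_students):
        others = np.delete(np.arange(n_students), u)                      # no self-grading
        graders = rng.choice(others, size=graders_per_submission, replace=False)
        for v in graders:
            z = rng.normal(true_score[u] + bias[v], 1.0 / np.sqrt(reliability[v]))
            observed[(v, u)] = z
    return true_score, bias, reliability, observed
```

Under data generated this way, simply averaging a submission's observed grades mixes grader bias and noise into the estimate, which is exactly what reasoning jointly over all grades is meant to correct.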

14 Correlating bias variables across assignments: biases estimated from assignment T are correlated with biases at assignment T+1.

15 Model PG2: temporal coherence. Same variables as PG1 (true score of student u, grader bias and reliability of student v, student v's observed assessment of student u), but grader bias at homework T depends on the bias at homework T-1.
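A one-function sketch of this temporal-coherence idea, assuming a Gaussian random walk on each grader's bias (the drift form and its scale are assumptions, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve_bias(prev_bias, sigma_drift=1.0):
    """PG2-style sketch: a grader's bias on homework T is drawn around
    their bias on homework T-1 (Gaussian drift is an assumption)."""
    return rng.normal(prev_bias, sigma_drift)
```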

16 Model PG3: coupled grader score and reliability; your reliability as a grader depends on your ability! Variables: true score of student u, grader bias of student v, and student v's (observed) assessment of student u. Approximate inference: Gibbs sampling (EM and variational methods were also implemented for a subset of the models). Running time: ~5 minutes for HCI 1. Note: PG3 cannot be Gibbs sampled in "closed form".
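For the simpler models the conditional updates are conjugate, which keeps Gibbs sampling in closed form; PG3's coupling of score and reliability is what breaks this. As an illustration only, here is a sketch of the closed-form draw of a submission's true score under a PG1-style model, with assumed Gaussian priors and illustrative parameter names:

```python
import numpy as np

def gibbs_update_true_score(grades, biases, precisions,
                            mu0=70.0, prec0=1.0 / 100.0,
                            rng=np.random.default_rng(2)):
    """One Gibbs draw of a submission's true score, conditioned on its peer grades
    and the current bias/precision of each grader (standard Gaussian conjugate
    update; the prior mean/precision values are assumptions)."""
    grades = np.asarray(grades, dtype=float)
    biases = np.asarray(biases, dtype=float)
    precisions = np.asarray(precisions, dtype=float)

    post_prec = prec0 + precisions.sum()
    post_mean = (prec0 * mu0 + np.sum(precisions * (grades - biases))) / post_prec
    return rng.normal(post_mean, 1.0 / np.sqrt(post_prec))
```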

17 Incentives: scoring rules can impact student behavior. Model PG3 gives higher homework scores to students who are accurate graders, and gives high-scoring graders more "sway" in computing a submission's final score. This improves prediction accuracy and encourages students to grade better. See [Dasgupta & Ghosh, '13] for a theoretical look at this problem.

18 Prediction accuracy: baseline (median of the peer grades) vs. Model PG3. PG3 achieves a 33% reduction in RMSE, and only 3% of submissions land farther than 10% from ground truth.
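For reference, a sketch of how the two quantities on this slide (RMSE and the fraction of submissions landing far from ground truth) can be computed; the variable names and the reading of "10%" as 10 grade points are illustrative assumptions:

```python
import numpy as np

def rmse(pred, truth):
    """Root-mean-squared error between predicted and ground-truth grades."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return np.sqrt(np.mean((pred - truth) ** 2))

def frac_outside(pred, truth, tol=10.0):
    """Fraction of submissions whose predicted grade is more than `tol`
    grade points from the ground-truth (staff) grade."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return np.mean(np.abs(pred - truth) > tol)
```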

19 Prediction accuracy, all models (HCI 1 and HCI 2). PG3 typically outperforms the other models. An improved rubric made baseline grading in HCI 2 more accurate than in HCI 1, yet even the simplest model (PG1, with just bias) outperforms baseline grading on all metrics. Just modeling bias (with constant reliability) captures ~95% of the improvement in RMSE.

20 Meaningful confidence estimates. In experiments where confidence fell between .90 and .95: when our model is 90% confident that its prediction is within K% of the true grade, then over 90% of the time in experiments we are indeed within K% (i.e., our model is conservative). We can use confidence estimates to tell when a submission needs to be seen by more graders!
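A sketch of how such a coverage check could be run from posterior samples of each submission's true score (the central-interval construction is an assumed mechanism, not necessarily the authors' exact procedure):

```python
import numpy as np

def coverage(samples_per_submission, truth, level=0.90):
    """For each submission, form a central credible interval from its posterior
    samples, then measure how often the staff grade falls inside. A well-calibrated
    (or conservative) model covers at least `level` of the time."""
    lo_q, hi_q = (1 - level) / 2, 1 - (1 - level) / 2
    hits = []
    for samples, t in zip(samples_per_submission, truth):
        lo, hi = np.quantile(samples, [lo_q, hi_q])
        hits.append(lo <= t <= hi)
    return np.mean(hits)
```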

21 How many graders do you need? Some submissions need more graders! Some grader assignments can be reallocated! Note: This is quite an overconservative estimate (as in the last slide)
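One way this reallocation idea could be operationalized, as a sketch: flag submissions whose posterior credible interval is still wide and route extra graders to them (the interval level and width threshold below are illustrative assumptions):

```python
import numpy as np

def needs_more_graders(samples_per_submission, width_threshold=10.0, level=0.90):
    """Flag submissions whose 90% credible interval is wider than `width_threshold`
    grade points, so additional graders can be allocated to them."""
    lo_q, hi_q = (1 - level) / 2, 1 - (1 - level) / 2
    flags = []
    for samples in samples_per_submission:
        lo, hi = np.quantile(samples, [lo_q, hi_q])
        flags.append((hi - lo) > width_threshold)
    return np.array(flags)
```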

22 Understanding graders in the context of the MOOC. Question: what factors influence how well a student will grade? (Figure: mean and standard deviation, contrasting the "easiest" and "harder" submissions to grade.) Better-scoring graders grade better.

23 Residual given grader and gradee scores. (Figure: residual, in standard deviations from the mean, as a function of grader grade and gradee grade, both as z-scores; positive residuals indicate grade inflation, negative residuals grade deflation.) The best students tend to downgrade the worst submissions, while the worst students tend to inflate the best submissions.

24 How much time should you spend on grading? “sweet spot of grading”: ~ 20 minutes

25 What your peers say about you! Best submissions Worst submissions

26 Commenting styles in HCI. (Figure: residual z-score plotted against sentiment polarity and against feedback length in words.) On average, comments range from neutral to positive, with few highly negative comments. Students have more to say about weaknesses than about strong points.
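As an illustration of how such comment features could be computed (the slide does not say which sentiment tool was used; VADER here is purely an assumption):

```python
import numpy as np
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download('vader_lexicon')

def comment_features(comments):
    """Sketch: a sentiment polarity score and a word-count length for each
    peer-feedback comment."""
    sia = SentimentIntensityAnalyzer()
    polarity = np.array([sia.polarity_scores(c)["compound"] for c in comments])
    length = np.array([len(c.split()) for c in comments])
    return polarity, length
```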

27 Student engagement and peer grading. Task: predict whether a student will complete the last homework. (Figure: ROC curves, true positive rate vs. false positive rate, for feature sets "just grade", "just bias", "just reliability", and "all features"; all features achieves AUC = 0.97605.)
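A hedged sketch of this prediction task: a simple classifier over peer-grading-derived features, scored by ROC AUC. The logistic-regression choice, the feature encoding, and the train/test split are assumptions; the slide only reports the curves and the AUC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def predict_completion(features, completed):
    """Predict whether a student finishes the last homework from peer-grading-derived
    features (e.g. estimated bias, reliability, grades) and report the ROC AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, completed,
                                              test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]
    return roc_auc_score(y_te, scores)
```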

28 Takeaways:
– Peer grading is an easy and practical way to grade open-ended assignments at scale
– Reasoning jointly over all submissions and accounting for bias/reliability can significantly improve current peer grading in MOOCs
– Grading performance can tell us about other learning factors such as student engagement or performance
– Real-world deployment: our system was used in HCI 3!

29 The End

30 Gradient descent for linear regression ~40,000 submissions

