Let’s go over some of the solutions you handed in…
I will call on a small number of you.
If I call on you, please come up and discuss:
– What you did (if you’re not the first person I call, please focus on how your solution differed from previous students’)
– How well it worked
If you’re in the audience, please ask questions – but be nice…
Anyone else?
Does anyone else in the audience have:
– Something clever they did and want to share?
– Something clever they didn’t do but want to discuss?
– A concern about how to do this right?
What mattered?
What could you do to get better model performance (without cheating)?
Grain-sizes
Which grain-size(s) were the detection focus for each paper/case study?
Grain-sizes
What are the advantages and disadvantages of working at these different grain-sizes? (See the sketch after this list.)
– Student-level
– Action-level
– Observation-level
– Problem/Activity-level
– Day/Session-level
– Lesson-level
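To make the grain-size trade-off concrete, here is a minimal pandas sketch (hypothetical data and column names, not taken from any of the assigned papers) showing how action-level rows roll up to problem-level and then student-level features; each roll-up gains statistical stability but discards finer-grained detail.

import pandas as pd

# Hypothetical action-level log: one row per student action.
actions = pd.DataFrame({
    "student": ["s1", "s1", "s1", "s2", "s2"],
    "problem": ["p1", "p1", "p2", "p1", "p1"],
    "correct": [0, 1, 1, 1, 0],
    "duration_s": [12.0, 8.0, 30.0, 5.0, 40.0],
})

# Problem/Activity-level grain-size: aggregate each student's actions
# within a problem, losing the order of individual actions.
problem_level = actions.groupby(["student", "problem"]).agg(
    n_actions=("correct", "size"),
    pct_correct=("correct", "mean"),
    total_time_s=("duration_s", "sum"),
).reset_index()

# Student-level grain-size: aggregate again, losing within-problem detail.
student_level = problem_level.groupby("student").agg(
    n_problems=("problem", "nunique"),
    mean_pct_correct=("pct_correct", "mean"),
).reset_index()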
Why…
Should we not expect (or want) detectors with Kappa = 0.75 from models built on training labels whose inter-rater reliability is Kappa = 0.62?
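For reference, Cohen’s kappa corrects raw agreement for the agreement expected by chance:

kappa = (p_o − p_e) / (1 − p_e)

where p_o is the observed proportion of agreement (between two human coders, or between a detector and the labels) and p_e is the proportion of agreement expected by chance. One common way to frame the discussion: if two trained coders agree with each other only at kappa = 0.62, the labels themselves are noisy, so a detector agreeing with one coder’s labels at kappa = 0.75 is likely fitting that coder’s idiosyncrasies rather than the underlying construct.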
Other questions, comments, or concerns about the lectures?
Next Class
Wednesday, September 24: Diagnostic Metrics

Readings:
Baker, R.S. (2014) Big Data and Education. Ch. 2, V1, V2, V3, V4.
Fogarty, J., Baker, R., Hudson, S. (2005) Case Studies in the Use of ROC Curve Analysis for Sensor-Based Estimates in Human Computer Interaction. Proceedings of Graphics Interface (GI 2005), 129-136.
Russell, S., Norvig, P. (2010) Artificial Intelligence: A Modern Approach. Ch. 20: Learning Probabilistic Models.

Basic HW 2 due