
Published by Ally Freebern. Modified over 3 years ago.

1
Next Semester
CSCI 5622 – Machine Learning (Matt Wilder); great text by Hastie, Tibshirani, & Friedman
ECEN 5018 – Game Theory
ECEN 5322 – Analysis of High-Dimensional Datasets, Fall 2014: http://ecee.colorado.edu/~fmeyer/class/ecen5322/

2
Project
Assignments 8 and 9
Your own project or my 'student modeling' project
Individual or team

3
Battleship Game (link to game)

4
Data set
51 students
179 unique problems
4223 total problems
~15 hr of student usage

5
Data set

6
Test set embedded in spreadsheet

7
Bayesian Knowledge Tracing
Students are learning a new skill (knowledge component) with a computerized tutoring system, e.g., manipulation of algebra equations.
Students are given a series of problems to solve; each solution is either correct or incorrect, e.g., 0 0 1 0 0 1 1 0 0 0 0 0 1 0 0 0 1 1 0 1 0 1 1 1
Goal: infer when learning has taken place. (The larger goal is to use this prediction to make inferences about other aspects of student performance, such as retention over time and generalization to other skills.)

8
All-or-Nothing Learning Model (Atkinson, 1960s)
Two-state finite-state machine.
[State diagram: Don't Know and Know states, with "just learned" and "just forgotten" transitions; correct-response probabilities c₀ (don't know) and c₁ (know).]

9
Bayesian Knowledge Tracing Assumes No Forgetting
Very sensible, given that the sequence of problems is all within a single session.
[State diagram: Don't Know → Know via "just learned"; correct-response probabilities ρ₀ (don't know) and ρ₁ (know).]
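As a concrete sketch of the two-state, no-forgetting model (function and parameter names are mine, not from the slides):

```python
import random

def simulate_bkt_sequence(n_trials, p_learn, p_guess, p_know, rng):
    """Simulate the no-forgetting two-state model: before learning, the
    student answers correctly with probability p_guess (rho_0); once the
    don't-know -> know transition fires (probability p_learn per trial),
    the student answers correctly with probability p_know (rho_1)."""
    know = False
    responses, states = [], []
    for _ in range(n_trials):
        if not know and rng.random() < p_learn:
            know = True  # no forgetting: once learned, always learned
        states.append(know)
        p_correct = p_know if know else p_guess
        responses.append(1 if rng.random() < p_correct else 0)
    return responses, states
```

With p_guess = 0 and p_know = 1, the response sequence exactly reveals the latent state trajectory, which makes the model's structure easy to see.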

10
Inference Problem
Given a sequence of trials, infer the probability that the concept was just learned.
T: trial on which the concept was learned (0…∞)
Example sequence: 0 1 0 0 1 1 0 1 1, with candidate interpretations T < 1, T = 2, T = 6, T > 8.

11
Notation
T: trial on which the concept was learned (0…∞)
Xᵢ: response i is correct (X = 1) or incorrect (X = 0)
Goal: compute P(T | X₁, …, Xₙ)
S: latent state (0 = don't know, 1 = know)
ρₛ: probability of a correct response when S = s
L: probability of transitioning from the don't-know to the know state
Example sequence: 0 1 0 0 1 1 0 1 1 (T < 1, T = 2, T = 6, T > 8)
[State diagram: Don't Know → Know via "just learned".]
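Because T is discrete, the posterior over the moment of learning can be computed exactly by enumeration. A minimal sketch (names are mine; I assume a geometric prior on T with rate L, with the "never learned within the session" mass lumped at T = n):

```python
def posterior_over_T(xs, rho0, rho1, L):
    """Exact posterior P(T = t | x_1..x_n) for the moment-of-learning model.
    T = t means trials 0..t-1 are answered in the don't-know state (correct
    w.p. rho0) and trials t..n-1 in the know state (correct w.p. rho1)."""
    n = len(xs)
    def bern(p, x):
        return p if x == 1 else 1.0 - p
    unnorm = []
    for t in range(n + 1):
        # geometric prior; t = n collects all 'learned after the session' mass
        prior = (1 - L) ** t * L if t < n else (1 - L) ** n
        lik = 1.0
        for i, x in enumerate(xs):
            lik *= bern(rho1 if i >= t else rho0, x)
        unnorm.append(prior * lik)
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

For an all-correct sequence, the posterior mode sits at T = 0 (learned before the first trial); for an all-incorrect sequence, it sits at T = n.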

12
What I Did

14
Observation
If you know the point in time at which learning occurred (T), then the order of trials before T doesn't matter, and neither does the order of trials after. What matters is the total count of correct responses on each side, so we can ignore the specific sequences.
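This sufficiency of counts is easy to verify numerically: two sequences with the same number of correct responses before and after T have identical likelihood (sketch; names are mine):

```python
def seq_likelihood(xs, t, rho0, rho1):
    """Likelihood of a full response sequence given learning at trial t."""
    lik = 1.0
    for i, x in enumerate(xs):
        p = rho1 if i >= t else rho0
        lik *= p if x == 1 else 1.0 - p
    return lik

def count_likelihood(c_pre, n_pre, c_post, n_post, rho0, rho1):
    """Same likelihood computed from counts alone: c_pre correct out of
    n_pre pre-learning trials, c_post correct out of n_post post-learning."""
    return (rho0 ** c_pre * (1 - rho0) ** (n_pre - c_pre)
            * rho1 ** c_post * (1 - rho1) ** (n_post - c_post))
```

Both sequences below have 1 correct response in the 3 trials before t = 3 and 3 correct in the 4 trials after, so their likelihoods agree exactly with each other and with the count-based form.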

15
Notation: Simple Model

16
What We Should Be Able To Do
Treat ρ₀, ρ₁, and T as random variables
Do Bayesian inference on these variables
Put hyperpriors on ρ₀, ρ₁, and T, and use the data (over multiple subjects) to inform the posteriors
Loosen the restriction on the transition distribution: Geometric, Uniform, Poisson, or Negative Binomial
Principled handling of the 'didn't learn' situation

17
What CSCI 7222 Did In 2012
[Graphical model over γ, ρ₀, ρ₁, X, α₀, α₁, λ, T, with hyperparameters k₀, θ₀, k₁, θ₁, k₂, θ₂, β; plates over students and trials.]

18
Most General Analog To BKT
[Graphical model over γ, ρ₀, ρ₁, X, λ, T, with parameters α₀,₀, α₀,₁, α₁,₀, α₁,₁ and hyperparameters k₀, θ₀, k₁, θ₁, k₂, θ₂, β; plates over students and trials.]

19
Sampling
Although you might sample {ρ_{0,s}} and {ρ_{1,s}}, it would be preferable (more efficient) to integrate them out, so they are never represented explicitly (as in a topic model). See next slide.
It's also feasible (and likely more efficient) to integrate out T_s because it is discrete. If you wanted to do Gibbs sampling on T_s, see next slide.
How to deal with the remaining variables (λ, γ, α₀, α₁)? See two slides ahead.
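Integrating out a ρ with a Beta(a, b) prior is a standard collapsed computation: the marginal probability of a particular ordered sequence with k correct responses in n trials is B(a + k, b + n − k) / B(a, b). A sketch using log-gammas for numerical stability (function name is mine):

```python
from math import lgamma, exp, comb

def integrated_bernoulli_lik(k, n, a, b):
    """Marginal probability of one particular ordered sequence with k
    successes in n Bernoulli trials, with rho ~ Beta(a, b) integrated out.
    No binomial coefficient: this is one ordering, not the count k."""
    def log_beta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    return exp(log_beta(a + k, b + n - k) - log_beta(a, b))
```

Sanity checks: with a uniform prior (a = b = 1) and n = 2, each sequence with k = 1 has probability 1/6; and summing over all orderings, ∑ₖ C(n, k) · marginal(k) = 1.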

20
Key Inference Problem
If we are going to sample T (either to compute posteriors on hyperparameters, or to make a final guess about the moment-of-learning distribution), we must compute P(T_s | {X_s,i}, λ, γ, α₀, α₁).
Note that T_s is discrete and takes values in {0, 1, …, N}, so normalization is feasible by summing over these N + 1 values.

21
Remaining Variables (λ, γ, α₀, α₁)
Rowan: maximum likelihood estimation. Find the values that maximize P(x | λ, γ, α₀, α₁). There is a possibility of overfitting, but it is not a serious issue given the amount of data and only 4 parameters. Note: the product is over students, marginalizing over T_s, using all data.
Mohammad, Homa: Metropolis-Hastings. Requires analytic evaluation of P(λ | x) etc. only up to a constant, so it doesn't require the normalization constant.
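A minimal Metropolis-Hastings sketch for one such variable, the learning rate λ (here called L), with a flat prior on (0, 1), ρ₀ and ρ₁ fixed for simplicity, and T summed out exactly inside the likelihood. All names and the fixed-ρ simplification are my assumptions, not the original slides' setup:

```python
import random

def marginal_lik(xs, L, rho0=0.2, rho1=0.85):
    """P(x | L): the discrete learning trial T is marginalized out exactly."""
    n = len(xs)
    total = 0.0
    for t in range(n + 1):
        prior = (1 - L) ** t * L if t < n else (1 - L) ** n
        lik = 1.0
        for i, x in enumerate(xs):
            p = rho1 if i >= t else rho0
            lik *= p if x == 1 else 1.0 - p
        total += prior * lik
    return total

def mh_sample_L(data, n_steps, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings on L with a flat prior on (0, 1).
    Proposals outside (0, 1) have zero prior mass and are rejected."""
    rng = random.Random(seed)
    def post(L):  # unnormalized posterior: product over students
        p = 1.0
        for xs in data:
            p *= marginal_lik(xs, L)
        return p
    cur, cur_p = 0.5, None
    cur_p = post(cur)
    samples = []
    for _ in range(n_steps):
        prop = cur + rng.gauss(0.0, step)
        if 0.0 < prop < 1.0:
            prop_p = post(prop)
            if rng.random() < prop_p / cur_p:  # symmetric proposal
                cur, cur_p = prop, prop_p
        samples.append(cur)
    return samples
```

The key point matching the slide: only the unnormalized posterior is ever evaluated; the acceptance ratio cancels the normalization constant.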

22
Remaining Variables (λ, γ, α₀, α₁)
Mike: likelihood weighting. Sample λ, γ, α₀, α₁ from their respective priors. For each student, compute the data likelihood given the sample, marginalizing over T_s, ρ_{s,0}, and ρ_{s,1}. Weight that sample by the data likelihood.
Rob Lindsey: slice sampling.
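Likelihood weighting for a single variable (the learning rate, again with ρ₀ and ρ₁ fixed for brevity) can be sketched as follows; function names and the uniform prior are my illustrative assumptions:

```python
import random

def likelihood_weighting_L(data, n_samples, seed=0):
    """Draw L from its (here: uniform) prior, weight each draw by the data
    likelihood with T summed out, and return the weighted posterior mean."""
    rng = random.Random(seed)
    def marg(xs, L, rho0=0.2, rho1=0.85):
        n, total = len(xs), 0.0
        for t in range(n + 1):
            prior = (1 - L) ** t * L if t < n else (1 - L) ** n
            lik = 1.0
            for i, x in enumerate(xs):
                p = rho1 if i >= t else rho0
                lik *= p if x == 1 else 1.0 - p
            total += prior * lik
        return total
    draws, weights = [], []
    for _ in range(n_samples):
        L = rng.random()          # sample from the prior
        w = 1.0
        for xs in data:
            w *= marg(xs, L)      # weight by the data likelihood
        draws.append(L)
        weights.append(w)
    return sum(d * w for d, w in zip(draws, weights)) / sum(weights)
```

Unlike Metropolis-Hastings, the draws are independent, but weights can degenerate when the likelihood is peaked relative to the prior; that is the usual trade-off with this method.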

23
Latent Factor Models
Item response theory (a.k.a. the Rasch model): the traditional approach to modeling student and item effects in test taking (e.g., SATs), with a parameter for the ability of student s and one for the difficulty of item i.
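The slide's equation did not survive extraction; the standard Rasch (1PL) formulation, in my notation with α_s the ability of student s and δ_i the difficulty of item i, is:

```latex
P(X_{si} = 1) \;=\; \sigma(\alpha_s - \delta_i) \;=\; \frac{1}{1 + e^{-(\alpha_s - \delta_i)}}
```

A student answers an item correctly with probability that grows with the gap between ability and difficulty.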

24
Extending Latent Factor Models Need to consider problem and performance history

25
Bayesian Latent Factor Model
ML approach: search for α and δ values that maximize training-set likelihood.
Bayesian approach: define priors on α and δ, e.g., Gaussian.
Hierarchical Bayesian approach: treat σ_α² and σ_δ² as random variables, e.g., Gamma distributed with hyperpriors.
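One way to write the hierarchy sketched above (my rendering of the standard setup; the slide gives only the distribution families):

```latex
X_{si} \sim \mathrm{Bernoulli}\!\big(\sigma(\alpha_s - \delta_i)\big), \qquad
\alpha_s \sim \mathcal{N}(0, \sigma_\alpha^2), \qquad
\delta_i \sim \mathcal{N}(0, \sigma_\delta^2)
```

with σ_α² and σ_δ² themselves given Gamma hyperpriors in the hierarchical variant, so the data inform how much students and items vary.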

26
Khajah, Wing, Lindsey, & Mozer model (paper)
