
Slide 1: Learning decomposition

Slide 2: Goals
- Understand what learning decomposition is
  – And the basic intuition behind it
- See how it was applied to a variety of problems
- Think about how to apply it to your data

Slide 3: Introduction to the Reading Tutor
- More free-form than most of the cognitive tutors
- Random interventions
- Kids or the tutor can initiate help
- Turn taking
- Never quite sure what the student is trying to do

Slide 4: Project LISTEN's Reading Tutor

Slide 5: What is a practice opportunity? (and are they all equally valuable?)
- Before the story, the tutor teaches 'elephant'
- The student sees the word 'elephant' in a sentence
- The student clicks for help on it
- The student reads it
- 'Elephant' occurs twice in the next sentence
- How many practice opportunities is that? Did the instruction have any benefit? Did seeing the word immediately afterwards help?

Slide 6: Procedure
- Determine (hopefully motivated) learning decompositions
- Find data that reflect learning
- Solve as a non-linear regression model
  – Fit the model to each student
- Interpret the model coefficient

Slide 7: Question: does learner control result in more learning?
- In the Reading Tutor, students pick the story half the time; the tutor picks the other half
- The tutor selects stories much faster than students do
- We suspect a motivational benefit from learner control (students are willing to tolerate the system over a school year)
- Is there a cognitive benefit? Compare learning of words that occur in student- vs. tutor-selected stories.

Slide 8: Find data that reflect learning
- Students perform many actions; we only want those that indicate "real" learning to count
- Assumptions:
  – The first opportunity each day is the purest marker
  – Albert, Ken, and Joe have all observed difficulties with closely spaced trials
  – Don't count stories the student has already read
- Need an outcome measure
  – Fuse accuracy, speed, and help performance
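The "first opportunity each day" filter can be sketched as below. The record schema (day, order, word fields) is hypothetical, not Project LISTEN's actual log format:

```python
def first_opportunity_per_day(records):
    """Keep only the first encounter of each word on each day."""
    seen = set()
    kept = []
    for r in sorted(records, key=lambda r: (r["day"], r["order"])):
        key = (r["day"], r["word"])
        if key not in seen:
            seen.add(key)
            kept.append(r)
    return kept

# Hypothetical log: three encounters with 'elephant' across two days.
records = [
    {"day": 1, "order": 0, "word": "elephant", "asked_help": True,  "reading_time": 0.4},
    {"day": 1, "order": 1, "word": "elephant", "asked_help": False, "reading_time": 0.5},
    {"day": 2, "order": 0, "word": "elephant", "asked_help": False, "reading_time": 0.4},
]
print(len(first_opportunity_per_day(records)))  # 2 (one per day)
```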

Slide 9: Approach

| Day | Asked for help? | Reading time (sec) | Prior opps (student-selected) | Prior opps (tutor-selected) | Outcome |
|-----|-----------------|--------------------|-------------------------------|-----------------------------|---------|
| 1   | Yes             | 0.4                | 0                             | 0                           | 3.0     |
| 1   | No              | 0.5                | 1                             | 0                           | –       |
| 1   | No              | 0.5                | 2                             | 0                           | –       |
| 2   | No              | –                  | 3                             | 0                           | 3.0     |
| 2   | No              | 0.4                | 3                             | 1                           | –       |
| 3   | No              | 0.3                | 3                             | 2                           | –       |

Slide 10: Procedure
- Determine (hopefully motivated) learning decompositions
- Find data that reflect learning
- Solve as a non-linear regression model
  – Fit the model to each student
- Interpret the model coefficient (B)

Slide 11: Learning curves
- Performance = A·e^(−b·t)
- Input: number of prior trials (t)
- Output: expected performance
(Figure: family of learning curves, labeled from better to worse.)
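The curve can be written directly; the parameter values below are illustrative only (e.g. performance measured as reading time in seconds):

```python
import math

def expected_performance(A, b, t):
    """Exponential learning curve: performance = A * e^(-b * t)."""
    return A * math.exp(-b * t)

# With illustrative parameters A = 2.0 and b = 0.5:
print(round(expected_performance(2.0, 0.5, 0), 2))  # 2.0 (no prior trials)
print(round(expected_performance(2.0, 0.5, 4), 2))  # 0.27 (after 4 trials)
```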

Slide 12: What if all trials aren't equal?
- Normal model: Performance = A·e^(−b·t)
- Think about student- vs. tutor-chosen stories:
  – t1 = trials where the student chose the story
  – t2 = trials where the tutor chose the story
- Learning decomposition model: Performance = A·e^(−b·(t1 + B·t2))
  – B determines the relative efficacy of trials of type t2 compared to type t1
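A minimal sketch of fitting the decomposition model to made-up trial data. A grid search over B stands in for the nonlinear regression (in practice a solver such as SciPy's `curve_fit` would estimate A, b, and B jointly):

```python
import math

def decomposed_performance(A, b, B, t1, t2):
    """Learning decomposition model: A * e^(-b * (t1 + B * t2))."""
    return A * math.exp(-b * (t1 + B * t2))

def fit_B(trials, A, b):
    """Least-squares grid search for B, holding A and b fixed."""
    best_B, best_err = None, float("inf")
    for i in range(201):                 # B in 0.00 .. 2.00
        B = i / 100.0
        err = sum((decomposed_performance(A, b, B, t1, t2) - y) ** 2
                  for t1, t2, y in trials)
        if err < best_err:
            best_B, best_err = B, err
    return best_B

# Synthetic student whose tutor-chosen trials are worth 80% of a
# student-chosen trial (true B = 0.8):
trials = [(t1, t2, decomposed_performance(2.0, 0.3, 0.8, t1, t2))
          for t1 in range(4) for t2 in range(4)]
print(fit_B(trials, A=2.0, b=0.3))  # 0.8
```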

Slide 13: Use regression to find the relative weight of tutor-selected prior opportunities

| Day | Asked for help? | Reading time (sec) | Prior opps (student-selected) | Prior opps (tutor-selected) | Outcome |
|-----|-----------------|--------------------|-------------------------------|-----------------------------|---------|
| 1   | Yes             | 0.4                | 0                             | 0                           | 3.0     |
| 1   | No              | 0.5                | 1                             | 0                           | –       |
| 1   | No              | 0.5                | 2                             | 0                           | –       |
| 2   | No              | –                  | 2                             | 0                           | 3.0     |
| 2   | No              | 0.4                | 2                             | 1                           | –       |
| 3   | No              | 0.3                | 2                             | 2                           | –       |

Slide 14: Fit the model to each student's data

| Student        | B   |
|----------------|-----|
| Chris Smith    | 0.3 |
| Pat Johnson    | 1.2 |
| Sam Jackson    | 0.5 |
| Jessie Stevens | 0.9 |
| Reagan Ronald  | 0.7 |

Slide 15: Interpret the B parameter
- B is a scaling parameter:
  – B > 1: students benefit from tutor control
  – B ≈ 1: no benefit either way
  – B < 1: student control is better
- B ≈ 0.8 for tutor-chosen stories (median)
  – Students learn more from student-chosen stories (not my H0)
- What could be other causes of this result?

Slide 16: Which students benefit? Top-down approach
- Think of plausible subgroups and see how/if B varies among them
  – E.g. 1st graders had B = 0.98, 2nd graders 0.89, and 3rd graders 0.49
  – Suggests older kids benefit more from learner control (getting pickier?)
- Many possibilities; want to avoid a fishing expedition

Slide 17: Which students benefit? Bottom-up approach

| Student        | B   | Benefits from tutor control? |
|----------------|-----|------------------------------|
| Chris Smith    | 0.3 | No                           |
| Pat Johnson    | 1.2 | Yes                          |
| Sam Jackson    | 0.5 | No                           |
| Jessie Stevens | 0.9 | ?                            |
| Reagan Ronald  | 0.7 | No                           |

- Use the regression results as training labels for a classifier
- Predictors: gender, grade, test score (grade normed), disability status
- Finding: boys benefit from learner control
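A sketch of the bottom-up step, using the slide's B values: fitted Bs become training labels ("benefits from tutor control" iff B > 1), and a one-predictor summary stands in for a real classifier. The gender values attached to each student are invented for illustration:

```python
# B values from the slide; gender is an assumed, illustrative trait.
students = [
    {"name": "Chris Smith",    "B": 0.3, "gender": "M"},
    {"name": "Pat Johnson",    "B": 1.2, "gender": "F"},
    {"name": "Sam Jackson",    "B": 0.5, "gender": "M"},
    {"name": "Jessie Stevens", "B": 0.9, "gender": "F"},  # borderline case
    {"name": "Reagan Ronald",  "B": 0.7, "gender": "M"},
]

# Label each student from their fitted B.
for s in students:
    s["label"] = s["B"] > 1.0   # benefits from tutor control?

# Simplest possible "classifier": rate of positive labels per gender value.
by_gender = {}
for s in students:
    by_gender.setdefault(s["gender"], []).append(s["label"])
rates = {g: sum(v) / len(v) for g, v in by_gender.items()}
print(rates)
```

A real analysis would feed all four predictors into, say, a decision tree; this stump only illustrates how the regression output becomes classifier input.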

Slide 18: Other learning decompositions: practice effects
- Open debate whether more learning comes from rereading stories or reading new stories
- Generally believed that spaced practice is better for long-term retention (but not short-term)
- Results:
  – Reading new material is better than rereading old stories (B = 0.5)
  – Later practice opportunities on the same day are ineffective (B = 0.2)

Slide 19: Other learning decompositions: impact of instruction
- The Reading Tutor has a bunch of random bits of instruction; do they do anything?
  – Solution: model instruction as an encounter and give it a weight
- Impact of instruction (in progress):
  – A spelling intervention is worth 0.75 exposures
  – A word ID intervention is worth 0.36 exposures
  – Neither is particularly effective
  – (but this is the first analytic approach to find any effect)
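One way to realize "model instruction as an encounter and give it a weight", using the weights reported on the slide; the encounter-log format is hypothetical:

```python
# Each encounter type contributes a fractional exposure. The 0.75 and 0.36
# weights are the slide's reported estimates; a plain reading counts as 1.
WEIGHTS = {"reading": 1.0, "spelling": 0.75, "word_id": 0.36}

def effective_trials(encounters):
    """Weighted count of prior exposures to a word."""
    return sum(WEIGHTS[e] for e in encounters)

# Hypothetical history for one word:
history = ["reading", "spelling", "reading", "word_id"]
print(round(effective_trials(history), 2))  # 3.11 instead of a naive 4
```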

Slide 20: Using learning decomposition to model transfer (Xiaonan Zhang)
- How do students represent words?
  – Naïve model: words are independent
  – What about "cat" vs. "cats"?
- Alternate models:
  – Word roots (cats, cat → CAT)
  – Rimes (bat, cat → AT)
- t1 = # prior times the student has read the word
- t2 = # prior times the student has read the root
- t3 = # prior times the student has read the rime
- Substantial transfer at the level of word roots
  – 55% as good as seeing the word itself
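Extending the decomposition to three trial types gives a sketch like the following; the 0.55 root weight is the reported result, while the rime weight is an arbitrary placeholder:

```python
import math

def transfer_performance(A, b, t1, t2, t3, B_root=0.55, B_rime=0.1):
    """Three-way decomposition over word, root, and rime exposures.

    B_root = 0.55 is the reported root-transfer weight; B_rime = 0.1
    is a made-up placeholder, not a reported result.
    """
    return A * math.exp(-b * (t1 + B_root * t2 + B_rime * t3))

# One direct exposure to a word plus two exposures to its root:
print(round(transfer_performance(2.0, 0.3, t1=1, t2=2, t3=0), 3))
```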

Slide 21: Hopefully
- Understand the approach:
  – Think of two types of learning that may have unequal impact
  – Divide up the trials
  – Perform curve fitting
- See that it applies to a variety of problems
- But…

Slide 22: Concerns
- We say things like "rereading is not as effective as reading different stories"
  – Is it safe to make causal inferences from observational data?
- Wide- vs. re-reading: troublesome
  – What if lower proficiency is the true cause?
- Massed vs. distributed practice: ok (?)
- Student vs. tutor control: ok
- Interventions: ok
- What about student-initiated help?

Slide 23: Interesting view (Jack Mostow)
- Each student has a B parameter
  – E.g. Chris Smith has B = 0.3 for rereading: Chris learns 30% as much from rereading as from wide reading
  – Impossible for traits of Chris Smith (proficiency, disability, etc.) to be a confound
  – But states could still be a problem: e.g. Chris only rereads after sleeping poorly

Slide 24: Compare LFA and learning decomposition
- Similar:
  – Both use learning curves and performance data
  – Insight: a model that better predicts student performance is a better model of the student's mental processes (modulo complexity)
- Different:
  – Bottom-up vs. top-down
  – Each manipulates a different aspect of the representation

Slide 25: Bottom-up vs. top-down
- Learning decomposition:
  – Start with a theory-driven idea
  – Estimate the effect (if any)
  – No search
- LFA:
  – Start with a variety of factors
  – Perform a search
  – Might not correspond to a higher-level construct (not necessarily a bad thing)

Slide 26: Consider transfer at the level of word roots
- Learning decomposition:
  – Student exposure to words with the same root is 55% as good as seeing the word itself
  – i.e. cats, cats, cat ≈ cat, cat
  – i.e. accepts, accepts, accept ≈ accept, accept
- LFA must choose:
  – Either "cats" and "cat" are the same skill (perfect transfer): cats, cats, cat > cat, cat
  – Or "accepts" and "accept" are different skills (no transfer): accepts, accepts, accept < accept, accept

Slide 29: Student learning history

| Skill  | Prior practice opportunities |
|--------|------------------------------|
| Skill1 | 0                            |
| Skill1 | 1                            |
| Skill1 | 2                            |
| Skill2 | 0                            |
| Skill3 | 0                            |
| Skill1 | 3                            |
| Skill2 | 1                            |

Slide 30: Learning factors analysis
(Same learning-history table as Slide 29.)
- Did the student utilize Skill1 here?
- Is it better to think of it as Skill1'?

Slide 31: Learning decomposition
(Same learning-history table as Slide 29.)
- Did the student really have 3 prior practice opportunities?
- 1 + 1 + 1 = 3, but is there a better way of counting?
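The counting question is the whole trick in miniature. Using B = 0.5 for rereading (the earlier practice-effects result), three prior trials of which two were rereadings count as only two effective opportunities:

```python
B = 0.5                 # rereading weight from the practice-effects analysis
plain = 1 + 1 + 1       # naive count: every prior trial is worth 1
weighted = 1 + B + B    # decomposed count: two of the priors were rereadings
print(plain, weighted)  # 3 2.0
```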

Slide 32: Wrap-up
- Why model individual points
- Scope of learning decomposition
- How learning decomposition differs from LFA
