
1 Prediction and Change Detection
Mark Steyvers, Scott Brown, Mike Yi
University of California, Irvine
This work is supported by a grant from the US Air Force Office of Scientific Research (AFOSR grant number FA9550-04-1-0317).

2 Perception of Random Sequences
People perceive too much structure:
– Coin tosses: gambler's fallacy
– Sports scoring sequences: hot-hand belief
Sequences are (mostly) stationary, but people perceive non-stationarity → a bias to detect too much change?

3 Our Approach
Non-stationary random sequences: the parameters change over time.
How well can people make inferences about underlying changes?
How well can people make predictions about future outcomes?
Compare data to:
– Bayesian (ideal observer) models
– Descriptive models

4 Two Tasks
Inference task: what caused the latest observation?
Prediction task: what is the next most likely outcome?
[Diagram: an unobserved internal state generates both the observed data and the future data]

5 Sequence Generation
Start with one of four normal distributions (pipes A, B, C, D).
Draw samples from this distribution.
With probability alpha, switch to a new generating distribution (uniformly chosen).
Alpha determines the number of change points.
[Figure: example trial sequence A A A A A B B B D D D D D D A A, with change points marked]
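A minimal sketch of this generative process (assuming, for illustration, four equally spaced means and a fixed noise standard deviation; the specific parameter values are not taken from the experiment):

```python
import numpy as np

def generate_sequence(n_trials, alpha=0.1, means=(-3.0, -1.0, 1.0, 3.0), sigma=1.0, seed=None):
    """Non-stationary sequence: each trial samples from the current normal
    distribution; with probability alpha the generator switches to a new
    distribution, chosen uniformly at random."""
    rng = np.random.default_rng(seed)
    states, observations, change_points = [], [], []
    state = int(rng.integers(len(means)))        # start with one of the four distributions
    for t in range(n_trials):
        if t > 0 and rng.random() < alpha:       # switch with probability alpha
            state = int(rng.integers(len(means)))
            change_points.append(t)
        states.append(state)
        observations.append(rng.normal(means[state], sigma))
    return states, observations, change_points
```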

6 Tomato Cans Experiment
Cans roll out of pipes A, B, C, or D.
A machine perturbs the position of each can (normal noise).
(The real experiment has response buttons and is subject-paced.)
[Figure: the four pipes A–D feeding the conveyor]

7 Tomato Cans Experiment
Cans roll out of pipes A, B, C, or D.
A machine perturbs the position of each can (normal noise).
A curtain obscures the sequence of pipes.
(The real experiment has response buttons and is subject-paced.)
[Figure: the same apparatus with the pipes hidden behind a curtain]

8 Tasks
Inference: which pipe produced the last can? A, B, C, or D?
Prediction: in which region will the next can arrive? 1, 2, 3, or 4?
[Figure: pipes A–D above response regions 1–4]

9 Experiment 1
63 subjects
12 blocks:
– 6 blocks of 50 trials for the inference task
– 6 blocks of 50 trials for the prediction task
– Identical trials for inference and prediction
Alpha = 0.1

10 Accuracy vs. Number of Perceived Changes
[Figure: scatter plots for the inference and prediction tasks; each dot is a subject, with the ideal observer marked in each panel]

11 [Figure: an example sequence of pipes A–D across trials, shown for the inference and prediction tasks]

12 [Figure: the same sequence with the ideal observer's responses overlaid, for the inference and prediction tasks]

13–15 [Figures: the sequence, the ideal observer's responses, and individual subjects' responses across trials, for the inference and prediction tasks]

16 Experiment 1b
Alpha = .08, .16, .32
136 subjects
Inference judgments only
→ Subjects track changes in alpha
[Figure: number of perceived changes by alpha condition, compared with the ideal observer]

17 Experiment 2: Plinko
[Figure: the Plinko device, shown open and closed]

18 Familiarization Trials
The input pipe changes at each trial with probability alpha.
(View full screen to see the animation.)

19 Observed Distributions Match Theory
Note: the mode of the output distribution is centered on the input bin.

20 Decision Phase
The main phase of the experiment uses the closed device.
Inference task: which input pipe was used, A, B, C, or D?
Prediction task: where will the next ball arrive, A, B, C, or D?
(View full screen to see the animation.)

21 Accuracy vs. Number of Perceived Changes
44 subjects
[Figure: scatter plots for the inference and prediction tasks]

22 Main Finding
Ideal observer: # changes in prediction = # changes in inference
Subjects: # changes in prediction >> # changes in inference
Explanation?

23 Variability Matching
Example output sequence: A B A A B C
Strategy: match the observed variability in the prediction sequence.
Suboptimal! Part of the observed variability is due to noise and is useless for prediction.
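One way to operationalize this strategy, as a hedged sketch (predictions are resampled from recent outcomes so that the prediction sequence reproduces the observed variability; the window size is an arbitrary illustrative choice, not a value from the slides):

```python
import numpy as np

def variability_matching_prediction(observed, window=6, rng=None):
    """Predict by resampling recent outcomes, so predictions vary as much as
    the observations do, including the noise that is useless for prediction."""
    rng = rng or np.random.default_rng()
    recent = observed[-window:] if len(observed) >= window else observed
    return rng.choice(recent)
```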

24 Conclusion
Subjects are able to track changes in dynamic decision environments.
Individual differences:
– Over-reaction: perceiving too much change
– Under-reaction: perceiving too little change
More over-reaction in the prediction task.

25 Do the experiments yourself: http://psiexp.ss.uci.edu

26 LEFT OVER SLIDES

27 Digital Plinko – open curtain

28 Digital Plinko – closed curtain

29 Analogy to Hot Hand Belief
Inference task: does a player have a hot hand?
Prediction task: will a player make the next shot?

30 Process Model
Keep a memory buffer of the last K samples.
Calculate the probability of a new sample under a normal distribution fit to the buffer.
If that probability < τ:
– Assume a change
– Flush the buffer
– Put the new sample in the buffer
Inference responses are based on the buffer mean.
Prediction responses are the same, except the model tries to anticipate changes by making a purely random response on some fraction X of trials.
[Figure: model responses overlaid on a subject's responses]
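A runnable sketch of this process model, assuming the "probability" of a new sample is its density under a normal fit to the buffer and treating responses as continuous position estimates; K, τ, and X are free parameters with purely illustrative values:

```python
import numpy as np

def process_model(observations, K=8, tau=0.05, X=0.2, rng=None):
    """Buffer-based change-detection model: flush the buffer when a new sample
    is too unlikely under the buffer's normal fit; respond with the buffer mean."""
    rng = rng or np.random.default_rng()
    buffer, inferences, predictions = [], [], []
    lo, hi = min(observations), max(observations)
    for y in observations:
        if len(buffer) >= 2:
            mu, sd = np.mean(buffer), np.std(buffer) + 1e-6
            density = np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
            if density < tau:                    # unlikely sample: assume a change
                buffer = []                      # flush the buffer
        buffer.append(y)                         # put the new sample in the buffer
        buffer = buffer[-K:]                     # keep only the last K samples
        inferences.append(np.mean(buffer))       # inference response: buffer mean
        if rng.random() < X:                     # prediction occasionally guesses at random
            predictions.append(rng.uniform(lo, hi))
        else:                                    # otherwise it matches the inference
            predictions.append(np.mean(buffer))
    return inferences, predictions
```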

31 Sweeping Alpha and Sigma in the Bayesian Model
[Figure: model behavior across alpha and sigma values, for the inference and prediction tasks]

32 Optimal Prediction Strategy
Best prediction = last inference.
Example subject:
inference: A A B B B D …
prediction: A B A B D C …
shifted inference used as prediction: A A B B B D …
Using shifted inference judgments for prediction, 70% of subjects improve in prediction performance.
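A small sketch of this re-scoring (hypothetical helper names; it shifts each subject's own inference judgments forward by one trial and uses them as predictions):

```python
def shifted_inference_predictions(inferences):
    """Use the inference from trial t as the prediction for trial t+1
    ('best prediction = last inference'); trial 1 has no prior inference."""
    return [None] + list(inferences[:-1])

def prediction_accuracy(predictions, outcomes):
    """Proportion of scored trials on which the prediction matches the outcome."""
    scored = [(p, o) for p, o in zip(predictions, outcomes) if p is not None]
    return sum(p == o for p, o in scored) / len(scored)
```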

33 Locus of the Gambler's Fallacy?
[Diagram: the unobserved internal state generates both the observed data and the future data; inference targets the internal state, prediction targets the future data]

34 Generating Model
[Graphical model: change probability → change points → distribution parameters → observed data]
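In symbols, this generative model can be written as follows (a reconstruction from the sequence-generation description on slide 5, using x_t for the change indicator, z_t for the generating distribution, and y_t for the observation):

```latex
x_t \sim \mathrm{Bernoulli}(\alpha), \qquad
z_t =
\begin{cases}
z_{t-1}, & x_t = 0,\\[2pt]
z' \sim \mathrm{Uniform}\{1,\dots,P\}, & x_t = 1,
\end{cases}
\qquad
y_t \mid z_t \sim \mathcal{N}\!\left(\mu_{z_t}, \sigma^2\right).
```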

35 Bayesian Inference
Given the observed sequence y, what are the latent states z and change points x?
This complex posterior distribution cannot be calculated directly.
Use posterior simulation instead: MCMC with Gibbs sampling.
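The target posterior, written out under the generative model above (a sketch in the slides' notation):

```latex
p(z_{1:T}, x_{1:T} \mid y_{1:T}) \;\propto\;
\prod_{t=1}^{T} p(y_t \mid z_t)\, p(z_t, x_t \mid z_{t-1}, \alpha).
```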

36 Gibbs Sampling
Simulate the high-dimensional distribution by sampling lower-dimensional subsets of variables, with each subset conditioned on the values of all the others. The sampling is done sequentially and proceeds until the sampled values approximate the target distribution.
Use the subset {z_t, x_t, x_{t+1}}.
Why include x_{t+1}? To preserve consistency. For example, suppose that before sampling z_{t+1} ≠ z_t, and therefore x_{t+1} = 1. If the new sample sets z_t = z_{t+1}, then x_{t+1} needs to be updated.

37 Gibbs Sampling
Assume α is a constant (for now).
The set of variables {z_t, x_t, x_{t+1}} is conditionally dependent only on {y_t, z_{t-1}, z_{t+1}}.
Sample values of {z_t, x_t, x_{t+1}} from this conditional distribution.
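The conditional itself appears as an image on the slide; a plausible reconstruction under the generative model above (an assumption, not the slide's exact expression) is:

```latex
p(z_t, x_t, x_{t+1} \mid y_t, z_{t-1}, z_{t+1}, \alpha)
\;\propto\;
p(y_t \mid z_t)\; p(z_t, x_t \mid z_{t-1}, \alpha)\; p(z_{t+1}, x_{t+1} \mid z_t, \alpha).
```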

38 Gibbs Sampling
For the tomato cans experiment: [likelihood expression shown on slide]
For the Plinko experiment: look up from a table (P = number of input pipes)
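The expressions themselves are images on the slide; a hedged guess at their form, based on the task descriptions (normal output noise for the cans, an empirically measured output table for Plinko):

```latex
\text{Tomato cans: } p(y_t \mid z_t) = \mathcal{N}\!\left(y_t;\, \mu_{z_t}, \sigma^2\right),
\qquad
\text{Plinko: } p(y_t \mid z_t) = T[z_t, y_t],
```

where T is the observed table of output-bin frequencies for each of the P input pipes (an assumed notation).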

39 Plinko as a Hidden Markov Model
[Figure: trellis of hidden input-pipe states over time, from Start to End, with the output pipe sequence as observations]

40 Example comparing HMM Viterbi algorithm to Gibbs sampling algorithm
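For concreteness, a minimal Viterbi decoder for the HMM sketched on slide 39 (the transition structure assumes "stay with probability 1 − α, otherwise switch to a uniformly chosen pipe", and the emission table is the Plinko output distribution; the parameterization is illustrative, not taken from the slides):

```python
import numpy as np

def viterbi(obs, emission, alpha):
    """Most likely input-pipe sequence given observed output bins.
    obs: list of output-bin indices; emission[i, j] = P(output j | input pipe i)."""
    P, T = emission.shape[0], len(obs)
    trans = np.full((P, P), alpha / P) + np.eye(P) * (1 - alpha)   # stay, or switch uniformly
    log_trans, log_emit = np.log(trans), np.log(emission)
    delta = np.zeros((T, P))                  # best log-probability of paths ending in each state
    back = np.zeros((T, P), dtype=int)        # back-pointers for reconstructing the best path
    delta[0] = np.log(1.0 / P) + log_emit[:, obs[0]]               # uniform start distribution
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans                 # scores[i, j]: end in i, move to j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```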

