
1 Summary of part I: prediction and RL
– Prediction is important for action selection
– The problem: prediction of future reward
– The algorithm: temporal difference learning
– Neural implementation: dopamine-dependent learning in the basal ganglia
– A precise computational model of learning allows one to look in the brain for "hidden variables" postulated by the model
– Precise (normative!) theory for the generation of dopamine firing patterns
– Explains anticipatory dopaminergic responding and second-order conditioning
– Compelling account of the role of dopamine in classical conditioning: the prediction error acts as the signal driving learning in prediction areas

2 Prediction error hypothesis of dopamine
Bayer & Glimcher (2005): measured firing rate regressed against the model prediction error
at the end of the trial: δt = rt - Vt (just like Rescorla-Wagner)
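A minimal sketch of the end-of-trial prediction error and Rescorla-Wagner-style update described above; the learning rate and the Bernoulli reward schedule are illustrative assumptions, not taken from the slides.

```python
# Sketch of the end-of-trial prediction error delta_t = r_t - V_t and a
# Rescorla-Wagner-style value update.  alpha and p_reward are assumptions.
import random

alpha = 0.1          # learning rate (assumed)
p_reward = 0.5       # probability of reward on each trial (assumed)
V = 0.0              # current reward prediction

for trial in range(1000):
    r = 1.0 if random.random() < p_reward else 0.0
    delta = r - V            # prediction error at the end of the trial
    V += alpha * delta       # update the prediction

print(f"learned prediction V ~ {V:.2f} (should approach {p_reward})")
```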

3 Global plan
Reinforcement learning I:
– prediction
– classical conditioning
– dopamine
Reinforcement learning II:
– dynamic programming; action selection
– Pavlovian misbehaviour
– vigour
Chapter 9 of Theoretical Neuroscience

4 Action Selection
Evolutionary specification
Immediate reinforcement:
– leg flexion
– Thorndike puzzle box
– pigeon; rat; human matching
Delayed reinforcement:
– these tasks
– mazes
– chess
Bandler; Blanchard

5 Immediate Reinforcement
stochastic policy, based on action values (see the sketch below)
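The slide's equations are not preserved in the transcript; as an assumption, the sketch below uses the common softmax form of a stochastic policy over action values, with an inverse temperature `beta`.

```python
# Hedged sketch: a softmax stochastic policy over action values.
# beta and the two-action setup are assumptions, not the slide's own equations.
import math
import random

def softmax_policy(action_values, beta=2.0):
    """Return choice probabilities proportional to exp(beta * Q(a))."""
    exps = [math.exp(beta * q) for q in action_values]
    z = sum(exps)
    return [e / z for e in exps]

def choose(action_values, beta=2.0):
    probs = softmax_policy(action_values, beta)
    return random.choices(range(len(action_values)), weights=probs)[0]

# example: two levers with values 0.4 and 0.1
print(softmax_policy([0.4, 0.1]))
```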

6 Indirect Actor
use the RW rule; switch every 100 trials (see the sketch below)
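A minimal sketch of the indirect actor described above: action values are learned with the Rescorla-Wagner (delta) rule and actions are chosen by softmax, with the reward probabilities swapping every 100 trials. The learning rate, inverse temperature, and 0.8/0.2 reward probabilities are illustrative assumptions.

```python
# Indirect actor sketch: learn action values with the Rescorla-Wagner rule,
# choose by softmax, and swap the reward probabilities every 100 trials.
# All numeric parameters are assumptions.
import math
import random

alpha, beta = 0.1, 3.0
Q = [0.5, 0.5]                      # action values for the two levers
p = [0.8, 0.2]                      # reward probabilities (assumed)

for t in range(1000):
    if t > 0 and t % 100 == 0:
        p.reverse()                 # contingencies switch every 100 trials
    w = [math.exp(beta * q) for q in Q]
    a = random.choices([0, 1], weights=w)[0]
    r = 1.0 if random.random() < p[a] else 0.0
    Q[a] += alpha * (r - Q[a])      # RW / delta rule on the chosen action

print("final action values:", [round(q, 2) for q in Q])
```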

7 Direct Actor
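The slide's equations are not preserved; as an assumption, the sketch below uses a common form of a direct actor, in which action propensities are adjusted in proportion to the reward (relative to a baseline) times the gradient of the log choice probability, so actions that do better than average become more probable.

```python
# Hedged sketch of a direct actor: adjust action propensities m[a] in
# proportion to (r - baseline) * d log pi(a) / d m.  The running-average
# baseline and all numeric parameters are illustrative assumptions.
import math
import random

eta = 0.1                  # policy learning rate (assumed)
m = [0.0, 0.0]             # action propensities (policy parameters)
r_bar = 0.0                # running average reward, used as a baseline
p_reward = [0.8, 0.2]      # true reward probabilities (assumed)

def policy(m):
    w = [math.exp(x) for x in m]
    return [x / sum(w) for x in w]

for t in range(2000):
    pi = policy(m)
    a = random.choices([0, 1], weights=pi)[0]
    r = 1.0 if random.random() < p_reward[a] else 0.0
    r_bar += 0.05 * (r - r_bar)
    for b in range(2):
        grad = (1.0 if b == a else 0.0) - pi[b]   # d log pi(a) / d m[b]
        m[b] += eta * (r - r_bar) * grad

print("learned choice probabilities:", [round(p, 2) for p in policy(m)])
```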

8

9 Could we Tell?
correlate past rewards and actions with the present choice
– indirect actor (separate clocks):
– direct actor (single clock):
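The slide's regression forms are not preserved; as an assumption, one standard version of this analysis is a lagged regression of the current choice on past rewards and past choices, whose fitted kernels can distinguish the two actor models.

```python
# Hedged sketch of the analysis described above: regress the current choice
# on lagged rewards and lagged choices.  The lag structure and the simple
# least-squares fit are illustrative assumptions.
import numpy as np

def choice_kernel(choices, rewards, n_lags=5):
    """Fit the present choice (0/1) on lagged rewards and lagged choices."""
    X, y = [], []
    for t in range(n_lags, len(choices)):
        past_r = [rewards[t - k] for k in range(1, n_lags + 1)]
        past_c = [choices[t - k] for k in range(1, n_lags + 1)]
        X.append(past_r + past_c + [1.0])      # include an intercept
        y.append(choices[t])
    coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return coef[:n_lags], coef[n_lags:2 * n_lags]   # reward, choice kernels

# usage: reward_kernel, choice_kern = choice_kernel(choices, rewards)
```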

10 Matching: Concurrent VI-VI Lau, Glimcher, Corrado, Sugrue, Newsome

11 Matching
– income, not return
– approximately exponential in r
– alternation: choice kernel

12 Action at a (Temporal) Distance
learning an appropriate action at x=1:
– depends on the actions at x=2 and x=3
– gains no immediate feedback
idea: use prediction as surrogate feedback
(figure: three-state chain x=1, x=2, x=3)

13 Action Selection
start with a policy; evaluate it; improve it
thus choose R more frequently than L or C
(figure: evaluated values 0.025, -0.175, -0.125, 0.125 over the states x=1, x=2, x=3)
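A minimal sketch of the evaluate/improve loop (policy iteration) named above. The slide's three-state maze is not preserved, so the transitions and rewards below are assumptions standing in for it.

```python
# Hedged policy-iteration sketch on a toy MDP.  The transition structure and
# rewards are assumptions, not the slide's own maze.
import random

actions = ["L", "C", "R"]
reward = {(3, "L"): 0.0, (3, "C"): 1.0, (3, "R"): 0.0,
          (2, "L"): 0.0, (2, "C"): 0.0, (2, "R"): 0.5}
next_state = {(1, "L"): 2, (1, "C"): 2, (1, "R"): 3}   # x=1 has no reward

def evaluate(policy):
    """Value of each state under the (deterministic) policy."""
    V = {s: reward[(s, policy[s])] for s in (2, 3)}
    V[1] = V[next_state[(1, policy[1])]]     # no immediate feedback at x=1
    return V

def improve(V):
    """Greedy policy with respect to the current values."""
    pi = {s: max(actions, key=lambda a: reward[(s, a)]) for s in (2, 3)}
    pi[1] = max(actions, key=lambda a: V[next_state[(1, a)]])
    return pi

policy = {s: random.choice(actions) for s in (1, 2, 3)}
for _ in range(5):                            # a few sweeps suffice here
    policy = improve(evaluate(policy))
print("improved policy:", policy)
```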

14 Policy
– value is too pessimistic
– action is better than average
(figure: states x=1, x=2, x=3)

15 Actor/critic
(figure: action propensities m1, m2, m3, …, mn)
dopamine signals to both motivational & motor striatum appear, surprisingly, the same
suggestion: the same signal trains both values & policies
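A minimal sketch of the actor/critic idea above: a single TD prediction error, computed by the critic, trains both the state values (critic) and the action propensities (actor). The tiny chain task and every numeric parameter are illustrative assumptions.

```python
# Hedged actor-critic sketch: one TD error trains both critic and actor.
# The chain task and all parameters are assumptions.
import math
import random

n_states = 3
V = [0.0] * (n_states + 1)                 # critic; V[3] stays 0 (terminal)
m = [[0.0, 0.0] for _ in range(n_states)]  # actor propensities (two actions)
alpha_v, alpha_m, gamma = 0.1, 0.1, 1.0

def softmax(ms):
    w = [math.exp(x) for x in ms]
    return [x / sum(w) for x in w]

for episode in range(2000):
    s = 0
    while s < n_states:
        pi = softmax(m[s])
        a = random.choices([0, 1], weights=pi)[0]
        s_next = s + 1
        r = 1.0 if (s == n_states - 1 and a == 1) else 0.0  # assumed reward
        delta = r + gamma * V[s_next] - V[s]   # one TD error...
        V[s] += alpha_v * delta                # ...trains the critic (values)
        for b in range(2):                     # ...and the actor (policy)
            m[s][b] += alpha_m * delta * ((1.0 if b == a else 0.0) - pi[b])
        s = s_next

print("critic values:", [round(v, 2) for v in V[:n_states]])
```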

16 Formally: Dynamic Programming
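The slide's equations are not preserved; as a sketch, the standard forms it presumably refers to are the Bellman evaluation equation (for a fixed policy) and the Bellman optimality equation, written here with an assumed discount factor γ.

```latex
% Bellman evaluation equation for a fixed policy \pi:
V^{\pi}(x) = \sum_{a} \pi(a \mid x)\Big[ r(x,a) + \gamma \sum_{x'} P(x' \mid x,a)\, V^{\pi}(x') \Big]

% Bellman optimality equation:
V^{*}(x) = \max_{a}\Big[ r(x,a) + \gamma \sum_{x'} P(x' \mid x,a)\, V^{*}(x') \Big]
```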

17 Variants: SARSA (Morris et al., 2006)

18 Variants: Q-learning (Roesch et al., 2007)
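Neither variant's update rule survives in the transcript; as a sketch, the standard forms are below: SARSA bootstraps from the action actually taken next (on-policy), while Q-learning bootstraps from the best available next action (off-policy). The learning rate and discount are assumptions.

```python
# Hedged sketch of the two variants named on the slides above.  alpha and
# gamma are assumed; Q is a dict mapping (state, action) -> value.

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.95):
    """On-policy: bootstrap from the action actually taken next."""
    target = r + gamma * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """Off-policy: bootstrap from the best available next action."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    target = r + gamma * best_next
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
```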

19 Summary
prediction learning
– Bellman evaluation
actor-critic
– asynchronous policy iteration
indirect method (Q-learning)
– asynchronous value iteration

20 Impulsivity & Hyperbolic Discounting
humans (and animals) show impulsivity in:
– diets
– addiction
– spending, …
intertemporal conflict between short- and long-term choices
often explained via hyperbolic discount functions
an alternative is a Pavlovian imperative towards an immediate reinforcer
framing, trolley dilemmas, etc.

21 Direct/Indirect Pathways
direct (D1): GO; learn from DA increase
indirect (D2): noGO; learn from DA decrease
hyperdirect (STN): delay actions given strongly attractive choices
Frank

22 DARPP-32: D1 effect; DRD2: D2 effect

23 Three Decision Makers
– tree search
– position evaluation
– situation memory

24 Multiple Systems in RL
model-based RL
– build a forward model of the task and its outcomes
– search in the forward model (online DP)
– optimal use of information, but computationally ruinous
cache-based RL
– learn Q values, which summarize future worth
– computationally trivial
– bootstrap-based, so statistically inefficient
learn both; select between them according to uncertainty
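A minimal sketch of the contrast drawn above, under an assumed known forward model: the model-based value is computed by explicit lookahead in that model, while the model-free value is simply read out of a cached Q table.

```python
# Hedged sketch of the model-based vs cache-based contrast above.  The
# forward model (transition and reward functions) and the depth-limited
# search are illustrative assumptions.

def model_based_value(state, actions, transition, reward, depth, gamma=0.95):
    """Compute a value by explicit lookahead (search) in the forward model."""
    if depth == 0:
        return 0.0
    best = float("-inf")
    for a in actions(state):
        s_next = transition(state, a)
        v = reward(state, a) + gamma * model_based_value(
            s_next, actions, transition, reward, depth - 1, gamma)
        best = max(best, v)
    return best

def model_free_value(state, actions, Q):
    """Read a cached value: no model, no search, statistically less efficient."""
    return max(Q.get((state, a), 0.0) for a in actions(state))
```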

25 Animal Canary
OFC; dlPFC; dorsomedial striatum; BLA?
dorsolateral striatum, amygdala

26 Two Systems:

27 Behavioural Effects

28 Effects of Learning
– distributional value iteration (Bayesian Q-learning)
– fixed additional uncertainty per step

29 One Outcome shallow tree implies goal-directed control wins

30 Human Canary…
if a → c and c → £££, then do more of a or b?
– MB: b
– MF: a (or even no effect)
(figure: states a, b, c)

31 Behaviour
action values depend on both systems:
expect that the weighting between them will vary by subject (but be fixed)

32 Neural Prediction Errors (1-2)
note that MB RL does not use this prediction error
– training signal?
(figure: right ventral striatum, anatomical definition)

33 Neural Prediction Errors (1)
right nucleus accumbens
behaviour: 1-2, not 1

34 Pavlovian Control

35 The 6-State Task (Huys et al., 2012)

36 Evaluation

37 Results
models compared: full lookahead; unbiased termination; value-based pruning; pigeon effect
the full model is the best model

38 Parameter Values
BDI (mean 3.7; range 0-15)
more reliance on pruning means more 'prone to depression'

39 Direct Evidence of Pruning
no weird loss aversion: +140 vs -X

40 Vigour
Two components to choice:
– what: lever pressing; direction to run; meal to choose
– when / how fast / how vigorous
free-operant tasks
real-valued DP

41 The model
(figure: states S0, S1, S2; at each state choose an (action, latency) pair such as (LP, τ1), then (LP, τ2); each choice incurs costs, a vigour cost that grows as the latency τ shrinks plus a unit cost, and can yield rewards (UR, PR); candidate actions are LP, NP, Other; the question is how fast to act to reach the goal)

42 The model
(figure: the same S0, S1, S2 sequence of (action, τ) choices with their costs and rewards)
Goal: choose actions and latencies to maximize the average rate of return (rewards minus costs per unit time): average-reward RL (ARL)

43 Average Reward RL: compute differential values of actions
differential value of taking action L with latency τ when in state x:
Q_{L,τ}(x) = Rewards - Costs + Future Returns
ρ = average rewards minus costs, per unit time
steady-state behaviour (not learning dynamics); extension of Schwartz 1993

44 Average Reward Cost/Benefit Tradeoffs
1. Which action to take?
– choose the action with the largest expected reward minus cost
2. How fast to perform it?
– slow → less costly (vigour cost)
– slow → delays (all) rewards; the net rate of rewards is the cost of delay (opportunity cost of time)
– choose the rate that balances the vigour and opportunity costs
explains faster (irrelevant) actions under hunger, etc.; masochism
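A sketch of the tradeoff just described, under the common assumption that the vigour cost scales as C_v/τ; the exact cost parameterization is an assumption and not taken from the slides. Minimizing the combined vigour and opportunity costs over the latency τ balances the two terms.

```latex
% Assumed vigour cost C_v/\tau plus opportunity cost \rho\,\tau of delaying everything:
\text{cost of acting with latency } \tau \;\approx\; \frac{C_v}{\tau} + \rho\,\tau
% Minimizing over \tau gives
\tau^{*} = \sqrt{C_v / \rho}
% so a higher average reward rate \rho (e.g. under hunger) shortens all latencies.
```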

45 Optimal response rates
(figure: experimental data, Niv, Dayan, Joel, unpublished, vs model simulation; distribution of the 1st nose poke as a function of seconds since reinforcement)

46 Optimal response rates
(figure: model simulation of % responses on lever A vs % reinforcements on lever A, near-perfect matching, alongside Herrnstein 1961's data for pigeons A and B with the perfect-matching line)
More: # responses, interval length, amount of reward, ratio vs. interval, breaking point, temporal structure, etc.

47 Effects of motivation (in the model)
(figure: RR25 schedule; mean latency of LP and Other under low vs high utility: the energizing effect)

48 Effects of motivation (in the model)
(figures: 1, the energizing effect, mean latency of LP and Other under low vs high utility, RR25; 2, the directing effect, response rate per minute vs seconds from reinforcement, with UR at 50%)

49 Relation to Dopamine
phasic dopamine firing = reward prediction error
what about tonic dopamine?

50 Tonic dopamine = average reward rate
NB: the phasic signal remains the RPE for choice/value learning
(figure: Aberman and Salamone 1999, # LPs in 30 minutes for control vs DA-depleted rats across ratio requirements, alongside a matching model simulation)
1. explains pharmacological manipulations
2. dopamine control of vigour through BG pathways
caveats: eating-time confound; context/state dependence (motivation & drugs?); less switching = perseveration

51 Tonic dopamine hypothesis
(figures: Satoh and Kimura 2003, reaction time; Ljungberg, Apicella and Schultz 1992, firing rate)
…also explains effects of phasic dopamine on response times

52 Sensory Decisions as Optimal Stopping
consider listening to:
decision: choose, or sample

53 Optimal Stopping

54 Key Quantities

55 Optimal Stopping

56 equivalent of state u=1 is …, and of states u=2, 3 is …

57 Transition Probabilities
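The derivations on slides 53-57 are not preserved; as a sketch under assumed costs and likelihoods, optimal stopping compares the expected value of choosing now, given the current belief, with the expected value of paying a sampling cost and continuing, computed by dynamic programming over belief states.

```python
# Hedged sketch of optimal stopping for a binary sensory decision.  The
# sampling cost, reward for a correct choice, and evidence likelihoods are
# all assumptions; the slides' own quantities are not preserved.

def stop_or_sample(belief, sample_cost=0.05, reward=1.0,
                   p_obs_given_h1=0.7, depth=10):
    """Return ('choose', value) or ('sample', value) for the current belief
    that hypothesis H1 is true, by finite-horizon dynamic programming."""
    value_choose = reward * max(belief, 1.0 - belief)   # pick the likelier H
    if depth == 0:
        return "choose", value_choose
    # predictive probability of observing evidence favouring H1
    p_obs = belief * p_obs_given_h1 + (1.0 - belief) * (1.0 - p_obs_given_h1)
    # Bayesian belief updates for the two possible observations
    b_up = belief * p_obs_given_h1 / p_obs
    b_down = belief * (1.0 - p_obs_given_h1) / (1.0 - p_obs)
    value_sample = (-sample_cost
                    + p_obs * stop_or_sample(b_up, sample_cost, reward,
                                             p_obs_given_h1, depth - 1)[1]
                    + (1.0 - p_obs) * stop_or_sample(b_down, sample_cost, reward,
                                                     p_obs_given_h1, depth - 1)[1])
    if value_choose >= value_sample:
        return "choose", value_choose
    return "sample", value_sample

print(stop_or_sample(0.55))   # near-ambiguous belief: often worth sampling
```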

58 Computational Neuromodulation
dopamine
– phasic: prediction error for reward
– tonic: average reward (vigour)
serotonin
– phasic: prediction error for punishment?
acetylcholine
– expected uncertainty?
norepinephrine
– unexpected uncertainty; neural interrupt?

59 Conditioning
prediction: of important events; control: in the light of those predictions
Ethology: optimality; appropriateness
Psychology: classical/operant conditioning
Computation: dynamic programming; Kalman filtering
Algorithm: TD/delta rules; simple weights
Neurobiology: neuromodulators; amygdala; OFC; nucleus accumbens; dorsal striatum

