
1 Summary of part I: prediction and RL
- Prediction is important for action selection
- The problem: prediction of future reward
- The algorithm: temporal difference learning
- Neural implementation: dopamine-dependent learning in the basal ganglia
- A precise computational model of learning allows one to look in the brain for "hidden variables" postulated by the model
- A precise (normative!) theory for the generation of dopamine firing patterns
- Explains anticipatory dopaminergic responding and second-order conditioning
- A compelling account of the role of dopamine in classical conditioning: the prediction error acts as the signal driving learning in prediction areas

2 Prediction error hypothesis of dopamine
- Measured firing rate tracks the model prediction error (Bayer & Glimcher, 2005)
- At the end of the trial: δ_t = r_t − V_t (just like Rescorla-Wagner)
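A minimal sketch of the end-of-trial prediction error and Rescorla-Wagner-style update implied by δ_t = r_t − V_t; the learning rate, reward sequence and function name are illustrative, not taken from the slides.

```python
# Minimal sketch of the end-of-trial prediction error and RW-style update;
# learning rate and reward sequence are illustrative, not from the slides.
def run_trials(rewards, alpha=0.1):
    V = 0.0                      # current prediction of the trial's reward
    deltas = []
    for r in rewards:
        delta = r - V            # end-of-trial prediction error: delta_t = r_t - V_t
        V += alpha * delta       # Rescorla-Wagner style update of the prediction
        deltas.append(delta)
    return deltas

# The error shrinks across trials as the reward becomes predicted.
print(run_trials([1.0] * 10))
```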

3 Global plan
- Reinforcement learning I: prediction; classical conditioning; dopamine
- Reinforcement learning II: dynamic programming and action selection; Pavlovian misbehaviour; vigour
- Reading: Chapter 9 of Theoretical Neuroscience

4 Action Selection
- Evolutionary specification (Bandler; Blanchard)
- Immediate reinforcement: leg flexion; Thorndike puzzle box; pigeon, rat and human matching
- Delayed reinforcement: these tasks; mazes; chess

5 Immediate Reinforcement
- Stochastic policy, based on action values (a standard form is sketched below)
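The slide's own equation is not preserved in this transcript; a standard softmax form for a stochastic policy in the action values m_a (the inverse temperature β and the two-action labels L, R are assumptions) is:

```latex
P(a) \;=\; \frac{\exp(\beta\, m_a)}{\sum_{a'} \exp(\beta\, m_{a'})}
\qquad\Longrightarrow\qquad
P(L) \;=\; \frac{1}{1 + \exp\!\big(-\beta\,(m_L - m_R)\big)} \quad\text{for two actions } L, R.
```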

6 Indirect Actor
- Use the Rescorla-Wagner rule to learn action values (in the simulation, the reward contingencies switch every 100 trials)
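A hedged sketch of the indirect actor: action values are learned with the Rescorla-Wagner (delta) rule and actions are drawn from a softmax in those values. The reward probabilities, learning rate, inverse temperature and the 100-trial switching schedule are illustrative; the slide itself shows only a simulation figure.

```python
import numpy as np

rng = np.random.default_rng(0)
m = np.zeros(2)                           # learned action values
p_reward = np.array([0.8, 0.2])           # true (hidden) reward probabilities
alpha, beta = 0.1, 2.0                    # illustrative learning rate, inverse temperature

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

for t in range(200):
    if t == 100:                          # contingencies switch every 100 trials
        p_reward = p_reward[::-1]
    a = rng.choice(2, p=softmax(beta * m))
    r = float(rng.random() < p_reward[a])
    m[a] += alpha * (r - m[a])            # RW update of the chosen action's value

print(m)                                  # values track the current reward rates
```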

7 Direct Actor
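A hedged sketch of the direct actor: the action propensities m_a are adjusted directly, so that actions doing better than the running average reward become more probable. The update m_a <- m_a + eps * (r - rbar) * (1[a = chosen] - P(a)) is the standard textbook form; the slide's own equations are not preserved, and all constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m = np.zeros(2)                           # action propensities (policy parameters)
rbar = 0.0                                # running estimate of the average reward
p_reward = np.array([0.8, 0.2])           # illustrative reward probabilities
eps, beta, alpha_bar = 0.1, 1.0, 0.05

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

for t in range(500):
    probs = softmax(beta * m)
    a = rng.choice(2, p=probs)
    r = float(rng.random() < p_reward[a])
    m += eps * (r - rbar) * (np.eye(2)[a] - probs)   # favour better-than-average actions
    rbar += alpha_bar * (r - rbar)                   # track the average reward

print(softmax(beta * m))                  # the better action ends up more probable
```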

8 (figure only)

9 Could we tell?
- Correlate past rewards and actions with the present choice
- Indirect actor (separate clocks) vs. direct actor (single clock)

10 Matching: concurrent VI-VI
- Lau & Glimcher; Corrado, Sugrue, Newsome

11 Matching
- Income, not return
- Weighting of past rewards is approximately exponential in r
- An alternation choice kernel

12 Action at a (Temporal) Distance
- Learning an appropriate action at u=1: depends on the actions at u=2 and u=3; gains no immediate feedback
- Idea: use prediction as surrogate feedback

13 Action Selection
- Start with a policy; evaluate it; improve it
- Thus choose R more frequently than L or C
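A hedged sketch of the evaluate/improve loop on a small random MDP; the maze from the slide is not reproduced here, and a discount factor gamma is used purely to keep the evaluation step well defined.

```python
import numpy as np

n_states, n_actions, gamma = 3, 3, 0.9
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
R = rng.normal(size=(n_actions, n_states))                        # expected reward R[a, s]

policy = np.zeros(n_states, dtype=int)          # start with an arbitrary policy
for _ in range(20):
    # evaluate it: solve V = R_pi + gamma * P_pi V for the current policy
    P_pi = np.array([P[policy[s], s] for s in range(n_states)])
    R_pi = np.array([R[policy[s], s] for s in range(n_states)])
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
    # improve it: act greedily with respect to the evaluated values
    Q = R + gamma * np.einsum('ast,t->as', P, V)
    policy = Q.argmax(axis=0)

print(policy, V)
```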

14 Policy improvement
- The policy's value is too pessimistic: the sampled action is better than average, so it should be chosen more often

15 Actor/critic
- Policy parameters m_1, m_2, m_3, ..., m_n
- Dopamine signals to both motivational and motor striatum appear, surprisingly, the same
- Suggestion: the same signal trains both values and policies

16 Variants: SARSA (Morris et al., 2006)

17 Variants: Q learning (Roesch et al., 2007)
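A hedged sketch contrasting the two prediction errors named on these slides (SARSA vs. Q-learning) in a tabular setting; variable names, learning rate and discount factor are illustrative.

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.95):
    # SARSA (on-policy): bootstrap from the action actually taken next
    delta = r + gamma * Q[s_next][a_next] - Q[s][a]
    Q[s][a] += alpha * delta
    return delta

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    # Q-learning (off-policy): bootstrap from the best available next action
    delta = r + gamma * max(Q[s_next].values()) - Q[s][a]
    Q[s][a] += alpha * delta
    return delta

# Example with a two-state, two-action table:
Q = {0: {0: 0.0, 1: 0.0}, 1: {0: 0.0, 1: 0.0}}
print(sarsa_update(Q, s=0, a=1, r=1.0, s_next=1, a_next=0))
print(q_learning_update(Q, s=0, a=1, r=1.0, s_next=1))
```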

18 Summary
- Prediction learning: Bellman evaluation
- Actor-critic: asynchronous policy iteration
- Indirect method (Q learning): asynchronous value iteration

19 Direct/Indirect Pathways (Frank)
- Direct pathway: D1; GO; learns from dopamine increases
- Indirect pathway: D2; noGO; learns from dopamine decreases
- Hyperdirect pathway (STN): delays actions given strongly attractive choices

20 DARPP-32: D1 effect; DRD2: D2 effect

21 Current Topics
- Vigour and tonic dopamine
- Priors over decision problems (LH)
- Pavlovian-instrumental interactions: impulsivity; behavioural inhibition; framing
- Model-based, model-free and episodic control
- Exploration vs. exploitation
- Game-theoretic interactions (inequity aversion)

22 Vigour
- Two components to choice:
  - what: which lever to press, which direction to run, which meal to choose
  - when / how fast / how vigorously
- Free-operant tasks; real-valued dynamic programming

23 The model
- Choose (action, τ) pairs, e.g. (LP, τ1) then (LP, τ2): which action, and how fast
- Each choice incurs costs over the chosen time τ (a vigour cost plus a unit cost) and can yield rewards
- Actions: lever press (LP), nose poke (NP), Other; states S_0, S_1, S_2; at the goal, a reward of utility U_R is delivered with probability P_R

24 The model (continued)
- Same structure: choose (action, τ) pairs through states S_0, S_1, S_2, accruing costs and rewards over time
- Goal: choose actions and latencies to maximize the average rate of return (rewards minus costs per unit time)
- This is average-reward RL (ARL)

25 Average-reward RL: compute differential values of actions
- Differential value of taking action L with latency τ when in state x:
  Q_{L,τ}(x) = Rewards − Costs + Future Returns
- ρ = average rewards minus costs, per unit time
- Steady-state behaviour (not learning dynamics); an extension of Schwartz (1993)
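The full expression on the slide is not preserved; in the standard average-reward formulation that this model extends, the differential value decomposes roughly as below, where the vigour cost C_v/τ, the unit cost C_u and the opportunity-cost term ρτ are assumptions based on that formulation rather than a transcription of the slide:

```latex
Q_{L,\tau}(x) \;\approx\;
\underbrace{P_R\,U_R}_{\text{expected reward}}
\;-\; \underbrace{\left(\frac{C_v}{\tau} + C_u\right)}_{\text{vigour cost + unit cost}}
\;-\; \underbrace{\rho\,\tau}_{\text{opportunity cost of the time spent}}
\;+\; \underbrace{\mathbb{E}\left[V(x')\right]}_{\text{future (differential) returns}}
```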

26 Average-reward cost/benefit trade-offs
- 1. Which action to take? Choose the action with the largest expected reward minus cost.
- 2. How fast to perform it? Going slowly is less costly (vigour cost) but delays all rewards; the net rate of rewards is the cost of delay (the opportunity cost of time). Choose the rate that balances vigour and opportunity costs.
- Explains faster (even irrelevant) actions under hunger, etc.; masochism

27 Optimal response rates
- Experimental data vs. model simulation: time to first nose poke (seconds since reinforcement); Niv, Dayan, Joel (unpublished)

28 Optimal response rates: matching
- Model simulation (% responses on lever A vs. % reinforcements on lever A) reproduces perfect matching, as in Herrnstein's (1961) pigeon data (% responses on key A vs. % reinforcements on key A)
- The model also captures: number of responses; interval length; amount of reward; ratio vs. interval schedules; breaking point; temporal structure; etc.

29 Effects of motivation (in the model)
- Energizing effect: mean latency of lever pressing and Other under low vs. high utility (RR25 schedule)

30 Effects of motivation (in the model)
- 1. Directing effect: response rate per minute as a function of seconds from reinforcement (U_R at 50%, RR25)
- 2. Energizing effect: mean latency of lever pressing and Other under low vs. high utility

31 Relation to dopamine: what about tonic dopamine?
- Phasic dopamine firing = reward prediction error (more firing for positive errors, less for negative)

32 Tonic dopamine = average reward rate
- NB: the phasic signal remains the RPE for choice/value learning
- Aberman and Salamone (1999): lever presses in 30 minutes, control vs. DA-depleted, across ratio requirements of 1, 4, 16 and 64; reproduced by the model simulation
- 1. Explains pharmacological manipulations
- 2. Dopamine control of vigour through BG pathways
- Caveats: eating-time confound; context/state dependence (motivation and drugs?); less switching = perseveration

33 Tonic dopamine hypothesis
- Satoh and Kimura (2003); Ljungberg, Apicella and Schultz (1992): reaction-time and firing-rate data
- ...also explains effects of phasic dopamine on response times

34 Three Decision Makers
- Tree search
- Position evaluation
- Situation memory

35 Multiple Systems in RL
- Model-based RL: build a forward model of the task and its outcomes; search in the forward model (online DP); makes optimal use of information but is computationally ruinous
- Cache-based RL: learn Q values, which summarize future worth; computationally trivial, but bootstrap-based and so statistically inefficient
- Learn both, and select between them according to uncertainty (see the sketch below)
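A hedged sketch of selecting between the two controllers according to uncertainty, in the spirit of this slide; the uncertainty estimates and numbers are illustrative, not the authors' implementation.

```python
def arbitrate(q_model_based, var_model_based, q_cache, var_cache):
    """Use the controller whose value estimate is currently more certain."""
    if var_model_based < var_cache:
        return "model-based", q_model_based
    return "cache", q_cache

# Early in training the cache is unreliable, so the planner is used ...
print(arbitrate(q_model_based=0.8, var_model_based=0.05, q_cache=0.3, var_cache=0.4))
# ... after extensive training the cache becomes reliable and takes over.
print(arbitrate(q_model_based=0.8, var_model_based=0.05, q_cache=0.75, var_cache=0.01))
```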

36 Two Systems (figure only)

37 Behavioural Effects

38 Effects of Learning
- Distributional value iteration (Bayesian Q learning)
- Fixed additional uncertainty per step

39 One Outcome: a shallow tree implies that goal-directed control wins

40 Pavlovian & Instrumental Conditioning
- Pavlovian: learning values and predictions, using the TD error
- Instrumental: learning actions, by reinforcement (leg flexion) or by a (TD) critic
  - (actually different forms: goal-directed and habitual)

41 Pavlovian-Instrumental Interactions
- Synergistic: conditioned reinforcement; Pavlovian-instrumental transfer (Pavlovian cue predicts the instrumental outcome); behavioural inhibition to avoid aversive outcomes
- Neutral: Pavlovian-instrumental transfer (Pavlovian cue predicts an outcome with the same motivational valence)
- Opponent: Pavlovian-instrumental transfer (Pavlovian cue predicts the opposite motivational valence); negative automaintenance

42 Negative automaintenance in autoshaping
- Simple choice task: N (nogo) gives reward r=1; G (go) gives reward r=0
- Learn three quantities: the average value, the Q value for N, and the Q value for G
- The instrumental propensity is defined in terms of these quantities (one consistent form is sketched below)
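The propensity equation itself is not preserved in this transcript; a form consistent with the setup above (a softmax over the two learned Q values, with an assumed inverse temperature β) would be:

```latex
p^{\mathrm{inst}}(G) \;=\; \frac{\exp\big(\beta\, Q(G)\big)}{\exp\big(\beta\, Q(G)\big) + \exp\big(\beta\, Q(N)\big)}
```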

43 Negative automaintenance in autoshaping: Pavlovian action
- Assert: the Pavlovian impetus towards G is v(t)
- Weight the Pavlovian and instrumental advantages by ω, the competitive reliability of the Pavlovian system
- This gives new propensities and a new action choice (illustrated below)
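Purely as an illustration of the ω-weighting described above (the actual functional form on the slide is not preserved), the combined propensities might be written as the following, with the new action choice obtained by normalizing them:

```latex
\tilde{p}(G) \;\propto\; \omega\, v(t) \;+\; (1-\omega)\, p^{\mathrm{inst}}(G),
\qquad
\tilde{p}(N) \;\propto\; (1-\omega)\, p^{\mathrm{inst}}(N)
```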

44 Negative automaintenance in autoshaping: results
- Basic negative automaintenance effect (μ=5); lines are theoretical asymptotes
- Equilibrium probabilities of action

45 Impulsivity & Hyperbolic Discounting
- Humans (and animals) show impulsivity in diets, addiction, spending, ...
- Intertemporal conflict between short- and long-term choices is often explained via hyperbolic discount functions
- An alternative: a Pavlovian imperative toward an immediate reinforcer
- Framing, trolley dilemmas, etc.

46 Sensory Decisions as Optimal Stopping
- Consider listening to a noisy stimulus
- Decision at each step: choose now, or sample again

47 Optimal Stopping
- The task has an equivalent of state u=1 (sampling), and of states u=2 and 3 (choosing), whose values determine when to stop

48 Transition Probabilities

49 Computational Neuromodulation
- Dopamine: phasic, prediction error for reward; tonic, average reward (vigour)
- Serotonin: phasic, prediction error for punishment?
- Acetylcholine: expected uncertainty?
- Norepinephrine: unexpected uncertainty; neural interrupt?

50 Conditioning
- Ethology: optimality; appropriateness
- Psychology: classical/operant conditioning
- Computation: dynamic programming; Kalman filtering
- Algorithm: TD/delta rules; simple weights
- Neurobiology: neuromodulators; amygdala; OFC; nucleus accumbens; dorsal striatum
- Prediction: of important events. Control: in the light of those predictions

51 Markov Decision Process
- A class of stylized tasks with states, actions and rewards
- At each timestep t the world takes on state s_t and delivers reward r_t, and the agent chooses an action a_t

52 Markov Decision Process
World: You are in state 34. Your immediate reward is 3. You have 3 actions.
Robot: I'll take action 2.
World: You are in state 77. Your immediate reward is -7. You have 2 actions.
Robot: I'll take action 1.
World: You're in state 34 (again). Your immediate reward is 3. You have 3 actions.

53 Markov Decision Process
- Stochastic process defined by:
  - reward function: r_t ~ P(r_t | s_t)
  - transition function: s_{t+1} ~ P(s_{t+1} | s_t, a_t)

54 Markov Decision Process
- Stochastic process defined by:
  - reward function: r_t ~ P(r_t | s_t)
  - transition function: s_{t+1} ~ P(s_{t+1} | s_t, a_t)
- Markov property: the future is conditionally independent of the past, given s_t
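A minimal sketch of the MDP interface described in the World/Robot dialogue above; the states, rewards and transition probabilities below are made up for illustration.

```python
import numpy as np

class MDP:
    def __init__(self, P, R, seed=0):
        self.P = P                        # P[s][a] = {next_state: probability}
        self.R = R                        # R[s] = immediate reward in state s
        self.rng = np.random.default_rng(seed)

    def step(self, s, a):
        nxt = self.P[s][a]
        s_next = int(self.rng.choice(list(nxt.keys()), p=list(nxt.values())))
        return s_next, self.R[s_next]

# Two states; rewards depend only on the state, as in the dialogue.
P = {0: {0: {0: 0.9, 1: 0.1}, 1: {1: 1.0}},
     1: {0: {0: 1.0},          1: {0: 0.5, 1: 0.5}}}
R = {0: 3.0, 1: -7.0}
world = MDP(P, R)
print(world.step(0, 1))                   # e.g. (1, -7.0): "you are in state 1, reward -7"
```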

55 The optimal policy
- Definition: a policy such that at every state, its expected value is better than (or equal to) that of all other policies
- Theorem: for every MDP there exists (at least) one deterministic optimal policy
- By the way: why is the optimal policy just a mapping from states to actions? Couldn't you earn more reward by choosing a different action depending on the last 2 states?

