
1 Reinforcement Learning: A Tutorial Rich Sutton AT&T Labs -- Research www.research.att.com/~sutton

2 Outline Core RL Theory –Value functions –Markov decision processes –Temporal-Difference Learning –Policy Iteration The Space of RL Methods –Function Approximation –Traces –Kinds of Backups Lessons for Artificial Intelligence Concluding Remarks

3 RL is Learning from Interaction. [Diagram: the agent sends actions to the environment and receives back states and rewards.] A complete agent; temporally situated; continual learning & planning; the object is to affect the environment; the environment is stochastic & uncertain.

4 Ball-Balancing Demo. RL applied to a real physical system; illustrates online learning. An easy problem... but it learns as you watch! Same learning algorithm as the early pole-balancer. Courtesy of GTE Labs: Hamid Benbrahim, Judy Franklin, John Doleac, Oliver Selfridge.

5 Markov Decision Processes (MDPs). In RL, the environment is modeled as an MDP, defined by: S, the set of states of the environment; A(s), the set of actions possible in state s ∈ S; P(s,s',a), the probability of transition from s to s' given a; R(s,s',a), the expected reward on the transition from s to s' given a; γ, the discount rate for delayed reward. Time is discrete, t = 0, 1, 2, ..., giving the trajectory $s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}, r_{t+2}, s_{t+2}, a_{t+2}, r_{t+3}, s_{t+3}, \ldots$
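As a concrete illustration of this definition, here is a minimal Python sketch of a finite MDP. The state and action names and the transition table are hypothetical, not from the slides; transitions are stored as (next state, probability, expected reward) triples.

```python
import random

GAMMA = 0.9  # discount rate for delayed reward

# Hypothetical finite MDP: (state, action) -> list of (next_state, probability, expected_reward)
MDP = {
    ("s0", "go"):   [("s1", 0.8, 1.0), ("s0", 0.2, 0.0)],
    ("s0", "stay"): [("s0", 1.0, 0.0)],
    ("s1", "go"):   [("s0", 1.0, 0.0)],
    ("s1", "stay"): [("s1", 1.0, 2.0)],
}

def step(state, action):
    """Sample a transition from P(s, s', a) and return (next_state, reward)."""
    outcomes = MDP[(state, action)]
    r = random.random()
    cum = 0.0
    for next_state, prob, reward in outcomes:
        cum += prob
        if r <= cum:
            return next_state, reward
    return outcomes[-1][0], outcomes[-1][2]
```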

6 The Objective is to Maximize Long-term Total Discounted Reward. Find a policy π : s ∈ S → a ∈ A(s) (could be stochastic) that maximizes the value (expected future reward) of each state s, $V^\pi(s) = E\{\, r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots \mid s_t = s, \pi \,\}$, and of each state-action pair, $Q^\pi(s,a) = E\{\, r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots \mid s_t = s, a_t = a, \pi \,\}$. These are called value functions (cf. evaluation functions).

7 Optimal Value Functions and Policies. There exist optimal value functions, V* and Q*, and corresponding optimal policies π*; π* is the greedy policy with respect to Q*.
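The optimal value functions and policies on this slide appear only as images in the transcript; they are the standard definitions and Bellman optimality relations, written out here for reference:

$$ V^*(s) = \max_\pi V^\pi(s), \qquad Q^*(s,a) = \max_\pi Q^\pi(s,a), $$

$$ Q^*(s,a) = \sum_{s'} P(s,s',a)\,\big[\, R(s,s',a) + \gamma \max_{a'} Q^*(s',a') \,\big], \qquad \pi^*(s) = \arg\max_a Q^*(s,a). $$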

8 1-Step Tabular Q-Learning (Watkins, 1989). On each state transition, update a single table entry: $Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha\,[\, r_{t+1} + \gamma \max_a Q(s_{t+1},a) - Q(s_t,a_t) \,]$, where the bracketed quantity is the TD error. Assumes a finite MDP. Optimal behavior is found without a model of the environment!
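A minimal sketch of this update in Python. The table is a dictionary keyed by state-action pairs; the `actions(s)` helper, returning the actions available in s, is an assumption for illustration, not something from the slides.

```python
from collections import defaultdict

Q = defaultdict(float)   # table entries Q[(s, a)], implicitly initialized to zero
alpha, gamma = 0.1, 0.9  # step size and discount rate

def q_learning_update(s, a, r, s_next, actions):
    """One-step tabular Q-learning (Watkins, 1989) applied to a single transition."""
    best_next = max(Q[(s_next, a2)] for a2 in actions(s_next))
    td_error = r + gamma * best_next - Q[(s, a)]   # TD error
    Q[(s, a)] += alpha * td_error
```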

9 Example of Value Functions

10 Gridworld Example

11 Policy Iteration
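The slide shows policy iteration only as a figure; below is a hedged Python sketch of the standard algorithm for a finite MDP, alternating policy evaluation with greedy policy improvement. The transition-table format `P[(s, a)] -> list of (next_state, prob, reward)` and the `actions(s)` helper are assumptions for illustration.

```python
def policy_iteration(states, actions, P, gamma=0.9, theta=1e-8):
    """Return (V, policy) for a finite MDP given as P[(s, a)] -> [(s_next, prob, reward), ...]."""
    policy = {s: actions(s)[0] for s in states}
    V = {s: 0.0 for s in states}
    while True:
        # Policy evaluation: sweep the Bellman expectation backup until values stop changing.
        while True:
            delta = 0.0
            for s in states:
                v = sum(p * (r + gamma * V[s2]) for s2, p, r in P[(s, policy[s])])
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < theta:
                break
        # Policy improvement: greedify with respect to the current value function.
        stable = True
        for s in states:
            best = max(actions(s),
                       key=lambda a: sum(p * (r + gamma * V[s2]) for s2, p, r in P[(s, a)]))
            if best != policy[s]:
                policy[s], stable = best, False
        if stable:
            return V, policy
```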

12 Summary of RL Theory: Generalized Policy Iteration. [Diagram: the policy and the value function drive each other in a loop; policy evaluation (value learning) moves the value function toward the policy's values, and policy improvement (greedification) makes the policy greedy with respect to the value function.]

13 Outline Core RL Theory –Value functions –Markov decision processes –Temporal-Difference Learning –Policy Iteration The Space of RL Methods –Function Approximation –Traces –Kinds of Backups Lessons for Artificial Intelligence Concluding Remarks

14 Sarsa Backup. On the transition $(s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1})$, the value of the pair $(s_t, a_t)$ is backed up from the value of the next pair: $Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha\,[\, r_{t+1} + \gamma\, Q(s_{t+1},a_{t+1}) - Q(s_t,a_t) \,]$, where the bracketed term is the TD error. (Rummery & Niranjan; Holland; Sutton.)
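A minimal sketch of that backup in Python, assuming the same kind of dictionary-based table as the Q-learning sketch above:

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """Sarsa: update the value of (s, a) from the value of the next pair (s', a')."""
    td_error = r + gamma * Q.get((s_next, a_next), 0.0) - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error
    return td_error
```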

15 Dimensions of RL algorithms: 1. Action Selection and Exploration 2. Function Approximation 3. Eligibility Traces 4. On-line Experience vs Simulated Experience 5. Kind of Backups...

16 1. Action Selection (Exploration). OK, you've learned a value function; how do you pick actions? Greedy action selection: always select the action that looks best. ε-greedy action selection: be greedy most of the time, but occasionally take a random action. Surprisingly, this is the state of the art. Exploitation vs. exploration!
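A sketch of ε-greedy selection in Python; the `Q` table and the `actions` list are assumed to come from the learning code above:

```python
import random

def epsilon_greedy(Q, s, actions, epsilon=0.1):
    """Be greedy most of the time; occasionally take a random action."""
    if random.random() < epsilon:
        return random.choice(actions)                       # explore
    return max(actions, key=lambda a: Q.get((s, a), 0.0))   # exploit: act greedily
```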

17 2. Function Approximation. The function approximator is trained from targets or errors, typically by gradient-descent methods. It could be: a table, a backprop neural network, a radial-basis-function network, tile coding (CMAC), a nearest-neighbor / memory-based method, or a decision tree.

18 Adjusting Network Weights. E.g., gradient-descent Sarsa: move the weight vector so the estimated value approaches the target value, using the standard backprop gradient when the approximator is a neural network (the update is written out below).
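The update equation on this slide is an image in the transcript; for a parametric approximator $Q_\theta$, the gradient-descent Sarsa rule it describes has the standard form

$$ \theta \leftarrow \theta + \alpha\,\big[\, r_{t+1} + \gamma\, Q_\theta(s_{t+1}, a_{t+1}) - Q_\theta(s_t, a_t) \,\big]\, \nabla_\theta Q_\theta(s_t, a_t), $$

where the bracketed term is the TD error between the target value and the estimated value, and the gradient is computed by standard backprop when $Q_\theta$ is a neural network.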

19 Sparse Coarse Coding. A fixed, expansive re-representation of the state, followed by a linear last layer. Coarse: large receptive fields. Sparse: few features present at any one time.

20 The Mountain Car Problem (Moore, 1990). A minimum-time-to-goal problem: gravity wins, so the car must back away from the goal to gain enough momentum to reach it. SITUATIONS: the car's position and velocity. ACTIONS: three thrusts: forward, reverse, none. REWARDS: always -1 until the car reaches the goal. No discounting.
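For concreteness, here is a sketch of the commonly used mountain-car dynamics. The specific constants follow the version popularized in Sutton & Barto's textbook and should be treated as an assumption, not a transcription of the slide:

```python
import math

def mountain_car_step(position, velocity, action):
    """action in {-1, 0, +1} = reverse, none, forward thrust. Reward is -1 per step."""
    velocity += 0.001 * action - 0.0025 * math.cos(3 * position)
    velocity = max(-0.07, min(0.07, velocity))
    position += velocity
    if position < -1.2:                 # hit the left wall: stop the car
        position, velocity = -1.2, 0.0
    done = position >= 0.5              # reached the goal at the top of the right hill
    return position, velocity, -1.0, done
```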

21 RL Applied to the Mountain Car. 1. TD method: Sarsa backup. 2. Action selection: greedy with optimistic initial values. 3. Function approximator: CMAC. 4. Eligibility traces: replacing (Sutton, 1995). [Video demo]

22 Tile Coding applied to Mountain Car a.k.a. CMACs (Albus, 1980)
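A minimal sketch of tile coding over the two mountain-car state variables; the number of tilings, the grid resolution, and the offsets are illustrative assumptions:

```python
def tile_features(position, velocity, n_tilings=8, bins=9):
    """Return the indices of the active tiles, one per offset tiling.

    Each tiling partitions the 2-D state space into a coarse grid; the tilings are
    offset from one another, so a state activates one tile per tiling (sparse)
    while each tile covers a large region of the space (coarse).
    """
    pos_scale = (position - (-1.2)) / (0.5 - (-1.2))      # scale position to [0, 1]
    vel_scale = (velocity - (-0.07)) / (0.07 - (-0.07))   # scale velocity to [0, 1]
    active = []
    for t in range(n_tilings):
        offset = t / (n_tilings * bins)                   # shift each tiling slightly
        i = min(bins - 1, int((pos_scale + offset) * bins))
        j = min(bins - 1, int((vel_scale + offset) * bins))
        active.append(t * bins * bins + i * bins + j)     # flat feature index
    return active
```

With a linear last layer, the approximate Q(s, a) for each action is then just the sum of one learned weight per active tile.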

23 The Acrobot Problem (e.g., DeJong & Spong, 1994; Sutton, 1995). Another minimum-time-to-goal problem. 4 state variables: 2 joint angles and 2 angular velocities. CMAC of 48 layers. The RL setup is the same as for the Mountain Car.

24 3. Eligibility Traces (*). The λ in TD(λ), Q(λ), Sarsa(λ)... A more primitive way of dealing with delayed reward, and the link to Monte Carlo methods: credit is passed back farther than one action. The general form is: the change in an estimate at time t is proportional to (the global TD error at time t) × (the eligibility of that estimate at time t), where the eligibility is specific to the estimate, e.g., was its state-action pair taken at t, or shortly before t?

25 Eligibility Traces. [Figure: the trace of a state plotted over time; an accumulating trace increments on each visit to the state, while a replacing trace resets to 1 on each visit, and both decay between visits.]
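A sketch of the two trace-update rules in a tabular setting; the trace table `e` (keyed like `Q`) and the `kind` switch are assumptions for illustration:

```python
def update_traces(e, s, a, gamma, lam, kind="replacing"):
    """Decay all traces, then bump the trace of the visited pair (s, a)."""
    for key in e:
        e[key] *= gamma * lam                        # decay every eligibility
    if kind == "accumulating":
        e[(s, a)] = e.get((s, a), 0.0) + 1.0         # accumulating trace: add 1 on each visit
    else:
        e[(s, a)] = 1.0                              # replacing trace: reset to 1 on each visit
    return e
```

A Sarsa(λ) step then applies the TD error to every traced pair in proportion to its trace, e.g. `Q[key] = Q.get(key, 0.0) + alpha * td_error * e[key]` for each key in `e`.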

26 Complex TD Backups (*). TD(λ) incrementally computes a weighted mixture of the primitive n-step TD backups of a trial as states are visited, with weights (1−λ), (1−λ)λ, (1−λ)λ², ..., which sum to 1. λ = 0 gives simple TD; λ = 1 gives simple Monte Carlo.
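Written out, the mixture this slide describes is the λ-return:

$$ G_t^{\lambda} \;=\; (1-\lambda) \sum_{n=1}^{\infty} \lambda^{\,n-1}\, G_t^{(n)}, $$

where $G_t^{(n)}$ is the n-step backup target; λ = 0 recovers simple TD and λ = 1 recovers the simple Monte Carlo return.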

27 Kinds of Backups. The methods differ along two dimensions: full backups vs. sample backups, and shallow (bootstrapping) backups vs. deep backups. Dynamic programming uses full, shallow backups; temporal-difference learning uses sample, shallow backups; Monte Carlo uses sample, deep backups; exhaustive search uses full, deep backups.

28 Outline Core RL Theory –Value functions –Markov decision processes –Temporal-Difference Learning –Policy Iteration The Space of RL Methods –Function Approximation –Traces –Kinds of Backups Lessons for Artificial Intelligence Concluding Remarks

29 World-Class Applications of RL. TD-Gammon and Jellyfish (Tesauro; Dahl): the world's best backgammon player. Elevator control (Crites & Barto): the world's best down-peak elevator controller. Inventory management (Van Roy, Bertsekas, Lee & Tsitsiklis): 10-15% improvement over industry-standard methods. Dynamic channel assignment (Singh & Bertsekas; Nie & Haykin): the world's best assigner of radio channels to mobile telephone calls.

30 Lesson #1: Approximate the Solution, Not the Problem. All these applications are large, stochastic optimal control problems: too hard for conventional methods to solve exactly, which require the problem to be simplified. RL just finds an approximate solution... which can be much better!

31 Backgammon (Tesauro, 1992, 1995). SITUATIONS: configurations of the playing board (about 10^20). ACTIONS: moves. REWARDS: +1 for a win, -1 for a loss, 0 otherwise. Pure delayed reward.

32 TD-Gammon (Tesauro, 1992-1995). Start with a random network. Play millions of games against itself. Learn a value function from this simulated experience, updating the network's value estimates by the TD error. Action selection is by 2-3 ply search. This produces arguably the best player in the world.

33 Lesson #2: The Power of Learning from Experience. Neurogammon: the same network, but trained from 15,000 expert-labeled examples. [Figure: performance against Gammontool (50% to 70%) vs. number of hidden units, comparing TD-Gammon self-play with Neurogammon; Tesauro, 1992.] Expert examples are expensive and scarce. Experience is cheap and plentiful! And it teaches the real solution.

34 Lesson #3: The Central Role of Value Functions...of modifiable moment-by-moment estimates of how well things are going All RL methods learn value functions All state-space planners compute value functions Both are based on "backing up" value Recognizing and reacting to the ups and downs of life is an important part of intelligence

35 Lesson #4: Learning and Planning Can Be Radically Similar. Historically, planning and trial-and-error learning have been seen as opposites. But RL treats both as processing of experience: experience from on-line interaction with the environment drives learning, experience from simulation with a model drives planning, and both update the value function and policy.

36 1-Step Tabular Q-Planning. 1. Generate a state, s, and an action, a. 2. Consult the model for the next state, s', and reward, r. 3. Learn from the experience (s, a, r, s') with the one-step tabular Q-learning update (sketched below). 4. Go to 1. With function approximation and cleverness in search control (step 1), this is a viable, perhaps even superior, approach to state-space planning.
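A hedged Python sketch of one planning iteration; `model`, `sample_state_action`, and `actions` are assumed callables for illustration, not names from the slides:

```python
def q_planning_step(Q, model, sample_state_action, actions, alpha=0.1, gamma=0.9):
    """One iteration of 1-step tabular Q-planning on simulated experience."""
    s, a = sample_state_action()                  # 1. generate a state and action
    s_next, r = model(s, a)                       # 2. consult the model for s' and r
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions(s_next))
    td_error = r + gamma * best_next - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error   # 3. Q-learning backup
```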

37 Lesson #5: Accretive Computation "Solves" Planning Dilemmas. The processes (acting, direct RL, model learning, and planning, all operating on the value/policy, the model, and experience) are only loosely coupled and can proceed in parallel, asynchronously. The quality of the solution accumulates in the value and model memories. The reactivity/deliberation dilemma is "solved" simply by not opposing search and memory; the intractability of planning is "solved" by anytime improvement of the solution.

38 Dyna Algorithm. 1. s ← current state. 2. Choose an action, a, and take it. 3. Receive the next state, s', and reward, r. 4. Apply an RL backup to (s, a, s', r), e.g., the Q-learning update. 5. Update Model(s, a) with s', r. 6. Repeat k times: select a previously seen state-action pair (s, a); obtain s', r ← Model(s, a); apply the RL backup to (s, a, s', r). 7. Go to 1.
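A hedged Python sketch of one Dyna cycle, assuming a deterministic model stored as a dictionary and externally supplied `choose_action`, `env_step`, and `actions` helpers (all assumptions for illustration):

```python
import random

def q_backup(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """The RL backup of step 4: one-step tabular Q-learning."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions(s_next))
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))

def dyna_q_step(Q, Model, s, choose_action, env_step, actions, k=10):
    """One pass through steps 2-7 of the Dyna algorithm above."""
    a = choose_action(s)                              # 2. choose an action and take it
    s_next, r = env_step(s, a)                        # 3. receive next state and reward
    q_backup(Q, s, a, r, s_next, actions)             # 4. direct RL from real experience
    Model[(s, a)] = (s_next, r)                       # 5. update the (deterministic) model
    for _ in range(k):                                # 6. planning from simulated experience
        ps, pa = random.choice(list(Model.keys()))    #    a previously seen state-action pair
        ms, mr = Model[(ps, pa)]
        q_backup(Q, ps, pa, mr, ms, actions)
    return s_next                                     # 7. go to 1 from the new state
```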

39 Actions → Behaviors. MDPs can seem too flat, too low-level. We need to learn/plan/design agents at a higher level: at the level of behaviors rather than just low-level actions, e.g., open-the-door rather than twitch-muscle-17, walk-to-work rather than 1-step-forward. Behaviors are based on entire policies directed toward subgoals; they enable abstraction, hierarchy, modularity, transfer...

40 Rooms Example. [Diagram: a gridworld of 4 rooms connected by 4 hallways.] 8 multi-step options (to each room's 2 hallways). 4 unreliable primitive actions (up, down, left, right) that fail 33% of the time. All rewards are zero; goal states are given a terminal value of 1; γ = 0.9. Given a goal location, the task is to quickly plan the shortest route.

41 Planning by Value Iteration

42 Lesson #6: Generality Is No Impediment to Working with Higher-level Knowledge. Behaviors are much like primitive actions: they define a higher-level semi-MDP overlaid on the original MDP, which can be used with the same planning methods, giving faster planning with guaranteed correct results. Behaviors are interchangeable with primitive actions, and their models and policies can be learned by TD methods. This provides a clear semantics for multi-level, re-usable knowledge in the general MDP context.

43 Summary of Lessons 1. Approximate the Solution, Not the Problem 2. The Power of Learning from Experience 3. The Central Role of Value Functions in finding optimal sequential behavior 4. Learning and Planning can be Radically Similar 5. Accretive Computation "Solves Dilemmas" 6. A General Approach need not be Flat, Low-level All stem from trying to solve MDPs

44 Outline Core RL Theory –Value functions –Markov decision processes –Temporal-Difference Learning –Policy Iteration The Space of RL Methods –Function Approximation –Traces –Kinds of Backups Lessons for Artificial Intelligence Concluding Remarks

45 Strands of History of RL. Three strands: trial-and-error learning, temporal-difference learning, and optimal control / value functions. [Timeline on the slide: Thorndike (1911), Minsky, Klopf, Barto et al.; secondary reinforcement, Samuel, Witten, Sutton; Hamilton (physics, 1800s), Shannon, Bellman/Howard (OR), Werbos, Watkins, Holland.]

46 Radical Generality of the RL/MDP Problem. Classical assumptions: determinism, goals of achievement, complete knowledge, a closed world, unlimited deliberation. The RL/MDP problem instead allows general stochastic dynamics, general goals, uncertainty, and reactive decision-making.

47 Getting the Degree of Abstraction Right. Actions can be low level (e.g., voltages to motors), high level (e.g., accept a job offer), or "mental" (e.g., a shift in the focus of attention). Situations can be direct sensor readings, symbolic descriptions, or "mental" situations (e.g., the situation of being lost). An RL agent is not like a whole animal or robot, which may consist of many RL agents as well as other components. The environment is not necessarily unknown to the RL agent, only incompletely controllable. Reward computation is in the RL agent's environment, because the RL agent cannot change it arbitrarily.

48 Some of the Open Problems Incomplete state information Exploration Structured states and actions Incorporating prior knowledge Using teachers Theory of RL with function approximators Modular and hierarchical architectures Integration with other problem–solving and planning methods

49 Dimensions of RL Algorithms. TD vs. Monte Carlo (λ); model-based vs. model-free; kind of function approximator; exploration method; discounted vs. undiscounted; computed policy vs. stored policy; on-policy vs. off-policy; action values vs. state values; sample model vs. distribution model; discrete actions vs. continuous actions.

50 A Unified View. There is a space of methods, with tradeoffs and mixtures, rather than discrete choices. Value functions are key: all the methods are about learning, computing, or estimating values, and all the methods store a value function. All methods are based on backups; different methods just have backups of different shapes and sizes, sources and mixtures.

51 Psychology and RL. PAVLOVIAN CONDITIONING: the single most common and important kind of learning, ubiquitous from people to paramecia; learning to predict and anticipate, learning affect. TD methods are a compelling match to the data (Hawkins & Kandel; Hopfield & Tank; Klopf; Moore et al.; Byrne et al.; Sutton & Barto; Malaka), and most modern, real-time methods are based on temporal-difference learning. INSTRUMENTAL LEARNING (OPERANT CONDITIONING): "actions followed by good outcomes are repeated"; goal-directed learning, the Law of Effect; the biological version of reinforcement learning. But careful comparisons to data have not yet been made.

52 Neuroscience and RL. RL interpretations of brain structures are being explored in several areas: the basal ganglia, dopamine neurons, and value systems.

53 Final Summary RL is about a problem: - goal-directed, interactive, real-time decision making - all hail the problem! Value functions seem to be the key to solving it, and TD methods the key new algorithmic idea There are a host of immediate applications - large-scale stochastic optimal control and scheduling There are also a host of scientific problems An exciting time

54 Bibliography. Sutton, R.S. and Barto, A.G., Reinforcement Learning: An Introduction. MIT Press, 1998. Kaelbling, L.P., Littman, M.L., and Moore, A.W., "Reinforcement learning: A survey". Journal of Artificial Intelligence Research, 4, 1996. Special issues on reinforcement learning: Machine Learning, 8(3/4), 1992, and 22(1/2/3), 1996. Neuroscience: Schultz, W., Dayan, P., and Montague, P.R., "A neural substrate of prediction and reward". Science, 275:1593-1598, 1997. Animal learning: Sutton, R.S. and Barto, A.G., "Time-derivative models of Pavlovian reinforcement". In Gabriel, M. and Moore, J., Eds., Learning and Computational Neuroscience, pp. 497-537. MIT Press, 1990. See also web pages, e.g., starting from http://www.cs.umass.edu/~rich.

