Chapter 6: Temporal Difference Learning


Chapter 6: Temporal Difference Learning
Objectives of this chapter:
- Introduce temporal-difference (TD) learning
- Focus first on policy evaluation (prediction) methods
- Then extend to control methods
Slides adapted from R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction

TD Prediction
Policy evaluation (the prediction problem): for a given policy π, compute the state-value function $v_\pi$.
Recall the constant-α Monte Carlo update:
$V(S_t) \leftarrow V(S_t) + \alpha\,[\,G_t - V(S_t)\,]$
where the target $G_t$ is the actual return following time t.
The simplest TD method, TD(0), instead uses an estimate of the return as its target:
$V(S_t) \leftarrow V(S_t) + \alpha\,[\,R_{t+1} + \gamma V(S_{t+1}) - V(S_t)\,]$
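To make the TD(0) update concrete, here is a minimal tabular sketch in Python. The environment interface (`env.reset()`, `env.step(action)`) and the `policy` function are assumptions for illustration, not part of the original slides.

```python
from collections import defaultdict

def td0_prediction(env, policy, num_episodes=100, alpha=0.1, gamma=1.0):
    """Tabular TD(0) policy evaluation: estimate V(s) for a fixed policy.

    Assumes a gym-style environment with reset() -> state and
    step(action) -> (next_state, reward, done, info).
    """
    V = defaultdict(float)  # value estimates, default 0
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done, _ = env.step(action)
            # TD(0) update: move V(s) toward the bootstrapped target
            target = reward + (0.0 if done else gamma * V[next_state])
            V[state] += alpha * (target - V[state])
            state = next_state
    return V
```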

Simple Monte Carlo
[Backup diagram: the update for a visited state uses the complete return observed over the rest of the episode, down to the terminal state.]

Simplest TD Method
[Backup diagram: the update for a visited state uses the immediate reward plus the estimated value of the next state, rather than the complete return.]

cf. Dynamic Programming
[Backup diagram: the DP update takes an expectation over all possible next states under the model, rather than sampling a single transition.]

TD Bootstraps and Samples
Bootstrapping: the update involves an existing estimate
- MC does not bootstrap
- DP bootstraps
- TD bootstraps
Sampling: the update does not involve an expected value
- MC samples
- DP does not sample
- TD samples

Example: Driving Home

Driving Home
[Figure: changes recommended by Monte Carlo methods (α = 1) vs. changes recommended by TD methods (α = 1).]

Advantages of TD Learning
- TD methods do not require a model of the environment, only experience
- TD, but not MC, methods can be fully incremental
  - You can learn before knowing the final outcome: less memory, less peak computation
  - You can learn without the final outcome, i.e., from incomplete sequences
- Both MC and TD converge (under certain assumptions to be detailed later), but which is faster?

Random Walk Example
[Figure: values learned by TD(0) after various numbers of episodes.]

TD and MC on the Random Walk
[Figure: learning curves, data averaged over 100 sequences of episodes.]

Optimality of TD(0)
Batch updating: train completely on a finite amount of data, e.g., train repeatedly on 10 episodes until convergence.
Compute updates according to TD(0), but only update the estimates after each complete pass through the data.
For any finite Markov prediction task, under batch updating, TD(0) converges for sufficiently small α.
Constant-α MC also converges under these conditions, but to a different answer!
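A minimal sketch of batch updating with TD(0), assuming each episode is stored as a list of (state, reward, next_state, done) transitions; the episode format, step size, and convergence threshold are assumptions made here for illustration.

```python
def batch_td0(episodes, alpha=0.01, gamma=1.0, tol=1e-6, max_sweeps=10_000):
    """Batch TD(0): sweep the fixed data repeatedly, accumulating the
    TD(0) increments and applying them only after each full pass."""
    V = {}
    for _ in range(max_sweeps):
        increments = {}
        for episode in episodes:
            for state, reward, next_state, done in episode:
                v_next = 0.0 if done else V.get(next_state, 0.0)
                delta = reward + gamma * v_next - V.get(state, 0.0)
                increments[state] = increments.get(state, 0.0) + alpha * delta
        for state, inc in increments.items():
            V[state] = V.get(state, 0.0) + inc
        # stop when a full pass no longer changes the estimates appreciably
        if max(abs(inc) for inc in increments.values()) < tol:
            break
    return V
```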

Random Walk under Batch Updating
After each new episode, all previous episodes were treated as a batch, and the algorithm was trained until convergence. The whole experiment was repeated 100 times.

You are the Predictor
Suppose you observe the following 8 episodes:
A, 0, B, 0
B, 1 (six such episodes)
B, 0
What are the best value estimates, V(A) and V(B)?

You are the Predictor
[Figure: the Markov model implied by the data — A transitions to B with reward 0; from B, reward 1 with probability 0.75 and reward 0 with probability 0.25.]

You are the Predictor
- The prediction that best matches the training data is V(A) = 0
  - This minimizes the mean-squared error on the training set
  - This is what a batch Monte Carlo method gets
- If we consider the sequentiality of the problem, then we would set V(A) = 0.75
  - This is correct for the maximum-likelihood estimate of a Markov model generating the data
  - i.e., if we fit the best Markov model, assume it is exactly correct, and then compute what it predicts (how?)
  - This is called the certainty-equivalence estimate
  - This is what batch TD(0) gets
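As a worked answer to the "how?" above (not part of the original slides): fit the maximum-likelihood Markov model to the 8 episodes and compute what it predicts. State B is visited 8 times (once in the A episode and once in each of the 7 B episodes), yielding reward 1 in 6 of those visits:

\[
V(B) = \frac{6}{8}\cdot 1 + \frac{2}{8}\cdot 0 = 0.75,
\qquad
V(A) = 0 + \gamma\, V(B) = 0.75 \quad (\gamma = 1).
\]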

Learning an Action-Value Function
Estimate $q_\pi$ for the current policy π. After every transition from a nonterminal state $S_t$, update:
$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\,[\,R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t)\,]$
If $S_{t+1}$ is terminal, then $Q(S_{t+1}, A_{t+1}) = 0$.

Sarsa: On-Policy TD Control
Turn this into a control method by always updating the policy to be greedy with respect to the current estimate of Q (a minimal sketch follows).
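A minimal tabular Sarsa sketch, assuming a gym-style environment and an ε-greedy behavior policy; the environment interface and hyperparameters are illustrative assumptions, not from the slides.

```python
import random
from collections import defaultdict

def sarsa(env, num_episodes=500, alpha=0.1, gamma=1.0, epsilon=0.1):
    """Tabular Sarsa: on-policy TD control with an epsilon-greedy policy."""
    Q = defaultdict(float)  # Q[(state, action)], default 0
    n_actions = env.action_space.n

    def eps_greedy(state):
        if random.random() < epsilon:
            return random.randrange(n_actions)
        return max(range(n_actions), key=lambda a: Q[(state, a)])

    for _ in range(num_episodes):
        state = env.reset()
        action = eps_greedy(state)
        done = False
        while not done:
            next_state, reward, done, _ = env.step(action)
            next_action = eps_greedy(next_state)
            # Sarsa bootstraps from the action actually taken in the next state
            target = reward + (0.0 if done else gamma * Q[(next_state, next_action)])
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state, action = next_state, next_action
    return Q
```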

Windy Gridworld
Undiscounted, episodic task; reward = –1 on every step until the goal is reached.

Results of Sarsa on the Windy Gridworld

Q-Learning: Off-Policy TD Control
One-step Q-learning:
$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\,[\,R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t)\,]$
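A tabular Q-learning sketch under the same gym-style assumptions as the Sarsa sketch above; the difference is that the update bootstraps from the greedy (max) action value, regardless of which action the ε-greedy behavior policy actually takes next.

```python
import random
from collections import defaultdict

def q_learning(env, num_episodes=500, alpha=0.1, gamma=1.0, epsilon=0.1):
    """Tabular Q-learning: off-policy TD control with epsilon-greedy behavior."""
    Q = defaultdict(float)  # Q[(state, action)], default 0
    n_actions = env.action_space.n

    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[(state, a)])
            next_state, reward, done, _ = env.step(action)
            # bootstrap from the best action in the next state (off-policy target)
            best_next = max(Q[(next_state, a)] for a in range(n_actions))
            target = reward + (0.0 if done else gamma * best_next)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state
    return Q
```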

Cliff Walking
ε-greedy behavior policy, ε = 0.1

Actor-Critic Methods
- Explicit representation of the policy as well as the value function
- Minimal computation needed to select actions
- Can learn an explicitly stochastic policy
- Can put constraints on policies
- Appealing as psychological and neural models

Actor-Critic Details
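The equations on this slide did not survive the transcript; what follows is a sketch of the standard actor-critic updates from this chapter of the book, with β the actor step size and p(s, a) the actor's action preferences (the specific symbols are this reconstruction's, not the slide's).

\[
\delta_t = R_{t+1} + \gamma V(S_{t+1}) - V(S_t),
\qquad
V(S_t) \leftarrow V(S_t) + \alpha\,\delta_t .
\]

\[
p(S_t, A_t) \leftarrow p(S_t, A_t) + \beta\,\delta_t,
\qquad
\pi(a \mid s) = \frac{e^{p(s,a)}}{\sum_b e^{p(s,b)}} .
\]

The critic's TD error δ both improves the value estimate and, through the actor update, strengthens or weakens the preference for the action just taken.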

Dopamine Neurons and TD Error
W. Schultz et al., Université de Fribourg

Average Reward Per Time Step
The average reward per time step under a policy; it is the same for every starting state if the MDP is ergodic.
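The defining equations were on the slide image; a sketch of the standard average-reward definitions follows, with ρ^π the average reward and differential (relative) action values defined with respect to it.

\[
\rho^{\pi} = \lim_{n \to \infty} \frac{1}{n} \sum_{t=1}^{n} \mathbb{E}_{\pi}\!\left[ R_t \right]
\]

\[
\tilde{q}_{\pi}(s, a) = \sum_{k=1}^{\infty} \mathbb{E}_{\pi}\!\left[ R_{t+k} - \rho^{\pi} \,\middle|\, S_t = s,\, A_t = a \right]
\]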

R-Learning
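The R-learning update rules were also lost with the slide image; a sketch of the standard rules from this chapter follows. After each transition (s, a) → (s′, r), with step sizes α and β:

\[
Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r - \rho + \max_{a'} Q(s',a') - Q(s,a) \right]
\]

and, if the action taken was greedy (i.e., Q(s, a) = max_b Q(s, b)), also update the average-reward estimate:

\[
\rho \leftarrow \rho + \beta \left[ r - \rho + \max_{a'} Q(s',a') - \max_{b} Q(s,b) \right]
\]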

Access-Control Queuing Task
Apply R-learning.
- n servers
- Customers have four different priorities, which pay a reward of 1, 2, 3, or 4 if served
- At each time step, the customer at the head of the queue is either accepted (assigned to a server) or removed from the queue
- The proportion of (randomly distributed) high-priority customers in the queue is h
- A busy server becomes free with probability p on each time step
- Statistics of arrivals and departures are unknown
- Parameters: n = 10, h = 0.5, p = 0.06
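A rough sketch of one decision step of this task, purely to make the dynamics concrete. The state encoding (number of free servers, priority of the head-of-queue customer), the way h enters the priority distribution, and all names are assumptions made here for illustration, not details from the slides.

```python
import random

def queuing_step(free_servers, priority, action, n=10, h=0.5, p=0.06):
    """One step of the access-control queuing task (illustrative sketch).

    State: (free_servers, priority of customer at head of queue).
    Action: 1 = accept the customer (if a server is free), 0 = reject.
    Returns (reward, next_state).
    """
    reward = 0
    if action == 1 and free_servers > 0:
        reward = priority      # accepted customer pays its priority
        free_servers -= 1      # one server becomes busy
    # each busy server becomes free with probability p
    freed = sum(random.random() < p for _ in range(n - free_servers))
    free_servers = min(n, free_servers + freed)
    # next customer's priority: high priority with probability h (assumed),
    # otherwise drawn uniformly from the remaining priorities
    next_priority = 4 if random.random() < h else random.choice([1, 2, 3])
    return reward, (free_servers, next_priority)
```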

Afterstates
Usually, a state-value function evaluates states in which the agent can take an action. But sometimes it is useful to evaluate states after the agent has acted, as in tic-tac-toe.
Why is this useful? What is this in general?

Summary
- TD prediction
- Introduced one-step, tabular, model-free TD methods
- Extend prediction to control by employing some form of GPI (generalized policy iteration)
  - On-policy control: Sarsa
  - Off-policy control: Q-learning and R-learning
- These methods bootstrap and sample, combining aspects of DP and MC methods