Reinforcement learning (Chapter 21)

Reinforcement learning
Regular MDP
  Given: transition model P(s' | s, a), reward function R(s)
  Find: policy π(s)
Reinforcement learning
  Transition model and reward function initially unknown
  Still need to find the right policy
  "Learn by doing"

Offline (MDPs) vs. Online (RL)
[Figure: offline solution vs. online learning. Source: Berkeley CS188]

Reinforcement learning: Basic scheme
In each time step:
  Take some action
  Observe the outcome of the action: successor state and reward
  Update some internal representation of the environment and policy
If you reach a terminal state, just start over (each pass through the environment is called a trial)
Why is this called reinforcement learning? See http://en.wikipedia.org/wiki/Reinforcement for discussion of reinforcement in psychology
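
To make the loop concrete, here is a minimal sketch in Python; the env and agent objects and their method names (reset, step, act, update) are hypothetical placeholders, not an API from the slides:

```python
# A generic reinforcement-learning interaction loop (illustrative only).
# `env` and `agent` are hypothetical objects with the listed methods.
def run_trial(env, agent):
    state = env.reset()                                   # start a new trial
    done = False
    while not done:
        action = agent.act(state)                         # take some action
        next_state, reward, done = env.step(action)       # observe successor state and reward
        agent.update(state, action, reward, next_state)   # update internal representation / policy
        state = next_state

def train(env, agent, num_trials=1000):
    for _ in range(num_trials):                           # start over after each terminal state
        run_trial(env, agent)
```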

Applications of reinforcement learning
Backgammon
  What is the state representation?
  What is the transition model?
  What is the reward function?
http://www.research.ibm.com/massive/tdl.html
http://en.wikipedia.org/wiki/TD-Gammon

Applications of reinforcement learning
AlphaGo
  What is the state representation?
  What is the transition model?
  What is the reward function?
https://deepmind.com/research/alphago/

Applications of reinforcement learning
Learning a fast gait for Aibos (videos: initial gait vs. learned gait)
Policy Gradient Reinforcement Learning for Fast Quadrupedal Locomotion. Nate Kohl and Peter Stone. IEEE International Conference on Robotics and Automation, 2004.

Applications of reinforcement learning
Stanford autonomous helicopter
Pieter Abbeel et al.

Applications of reinforcement learning
Playing Atari with deep reinforcement learning (video)
V. Mnih et al., Nature, February 2015

Applications of reinforcement learning
End-to-end training of deep visuomotor policies (video)
Sergey Levine et al., Berkeley

Applications of reinforcement learning
Object detection (video)
J. Caicedo and S. Lazebnik, Active Object Localization with Deep Reinforcement Learning, ICCV 2015

OpenAI Gym https://gym.openai.com/
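
As a concrete starting point, a minimal random-agent episode using the classic Gym interface (newer Gym/Gymnasium releases return slightly different tuples from reset and step):

```python
import gym

env = gym.make("CartPole-v1")
obs = env.reset()                        # initial observation
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()   # random action; a learned policy would go here
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("Episode return:", total_reward)
env.close()
```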

Reinforcement learning strategies
Model-based
  Learn the model of the MDP (transition probabilities and rewards) and try to solve the MDP concurrently
Model-free
  Learn how to act without explicitly learning the transition probabilities P(s' | s, a)
  Q-learning: learn an action-utility function Q(s,a) that tells us the value of doing action a in state s

Model-based reinforcement learning
Learning the model:
  Keep track of how many times state s' follows state s when you take action a, and update the transition probability P(s' | s, a) according to the relative frequencies
  Keep track of the rewards R(s)
Learning how to act:
  Estimate the utilities U(s) using Bellman's equations
  Choose the action that maximizes expected future utility: π(s) = argmax_a Σ_s' P(s' | s, a) U(s')
Is there any problem with this "greedy" approach?
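
A sketch of the model-learning bookkeeping described above, estimating P(s' | s, a) by relative frequencies; the data structures and function names are my own illustrative choices:

```python
from collections import defaultdict

# Relative-frequency model estimates for model-based RL (illustrative).
transition_counts = defaultdict(lambda: defaultdict(int))   # (s, a) -> {s': count}
reward_estimate = {}                                        # s -> observed R(s)

def record_transition(s, a, s_next, r_next):
    """Update counts after observing s --a--> s' with reward r_next at s'."""
    transition_counts[(s, a)][s_next] += 1
    reward_estimate[s_next] = r_next

def estimated_P(s, a, s_next):
    """P(s' | s, a) estimated from relative frequencies."""
    counts = transition_counts[(s, a)]
    total = sum(counts.values())
    return counts[s_next] / total if total else 0.0
```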

Exploration vs. exploitation
[Figure: exploration vs. exploitation illustration. Source: Berkeley CS188]

Exploration vs. exploitation
Exploration: take a new action with unknown consequences
  Pros:
    Get a more accurate model of the environment
    Discover higher-reward states than the ones found so far
  Cons:
    When you're exploring, you're not maximizing your utility
    Something bad might happen
Exploitation: go with the best strategy found so far
  Maximize reward as reflected in the current utility estimates
  Avoid bad stuff
  Might also prevent you from discovering the true optimal strategy

Exploration strategies
Idea: explore more in the beginning, become more and more greedy over time
ε-greedy: with probability 1–ε, follow the greedy policy; with probability ε, take a random action
  Possibly decrease ε over time
More complex exploration functions bias the agent towards less-visited state-action pairs
  E.g., keep track of how many times each state-action pair has been seen, and return an over-optimistic utility estimate if a given pair has not been seen enough times
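
A minimal sketch of ε-greedy selection with a decaying ε, assuming a Q-table stored as a dictionary keyed by (state, action):

```python
import random

def epsilon_greedy(Q, state, actions, epsilon):
    """With probability epsilon take a random action, otherwise the greedy one."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def decayed_epsilon(t, eps_start=1.0, eps_min=0.05, decay=0.999):
    """Explore a lot at first, become more and more greedy over time."""
    return max(eps_min, eps_start * decay ** t)
```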

Model-free reinforcement learning
Idea: learn how to act without explicitly learning the transition probabilities P(s' | s, a)
Q-learning: learn an action-utility function Q(s,a) that tells us the value of doing action a in state s
Relationship between Q-values and utilities: U(s) = max_a Q(s,a)
With Q-values, you don't need the transition model to select the next action: π(s) = argmax_a Q(s,a)
Compare with: π(s) = argmax_a Σ_s' P(s' | s, a) U(s')
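
The practical difference in sketch form: acting from a Q-table is a plain argmax, while acting from U(s) also needs the transition model. Here Q is assumed to be a dict keyed by (state, action) and P[(s, a)] a dict mapping successor states to probabilities:

```python
def policy_from_Q(Q, state, actions):
    # pi(s) = argmax_a Q(s, a): no transition model needed
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def policy_from_U(U, state, actions, P):
    # pi(s) = argmax_a sum_s' P(s' | s, a) U(s'): requires the transition model P
    return max(actions,
               key=lambda a: sum(p * U[s_next] for s_next, p in P[(state, a)].items()))
```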

Model-free reinforcement learning
Q-learning: learn an action-utility function Q(s,a) that tells us the value of doing action a in state s
Bellman equation for Q-values (eq. 1): Q(s,a) = R(s) + γ Σ_s' P(s' | s, a) max_a' Q(s',a')
Compare to the Bellman equation for utilities (eq. 2): U(s) = R(s) + γ max_a Σ_s' P(s' | s, a) U(s')
The expression for U(s) has max_a on the right-hand side because U(s) = max_a Q(s,a). In other words, to get from the expression for Q(s,a) (eq. 1) to the one for U(s) (eq. 2), add max_a to both sides of eq. 1 and replace max_a' Q(s',a') in eq. 1 by U(s') in eq. 2.

Model-free reinforcement learning
Q-learning: learn an action-utility function Q(s,a) that tells us the value of doing action a in state s
Bellman equation for Q-values: Q(s,a) = R(s) + γ Σ_s' P(s' | s, a) max_a' Q(s',a')
Problem: we don't know (and don't want to learn) P(s' | s, a)
Solution: build up estimates of Q(s,a) over time by making small updates based on observed transitions
What is the relationship between the constraint on the Q-values and the Bellman equation?

TD learning
Motivation: the mean of a sequence x_1, x_2, … can be computed incrementally: μ_k = μ_(k-1) + (1/k)(x_k – μ_(k-1))
By analogy, temporal difference (TD) updates to Q(s,a) have the form Q(s,a) ← Q(s,a) + α(target – Q(s,a))
Source: D. Silver
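
A quick numeric check of the incremental-mean identity (step size 1/k), just to make the analogy concrete:

```python
xs = [4.0, 8.0, 15.0, 16.0, 23.0, 42.0]

mu = 0.0
for k, x in enumerate(xs, start=1):
    mu = mu + (x - mu) / k            # incremental update with step size 1/k
print(mu, sum(xs) / len(xs))          # both ~18.0: the running value matches the batch mean
```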

TD learning
TD update: Q(s,a) ← Q(s,a) + α(target – Q(s,a))
Suppose we have observed the transition (s,a,s')
The target is the return if (s,a,s') was the only possible transition: target = R(s) + γ max_a' Q(s',a')

TD learning
TD update: suppose we have observed the transition (s,a,s')
Full update equation: Q(s,a) ← Q(s,a) + α(R(s) + γ max_a' Q(s',a') – Q(s,a))
"Updating a guess towards a guess"

TD algorithm outline
At each time step t:
  From the current state s, select an action a given the exploration policy
  Get the successor state s'
  Perform the TD update: Q(s,a) ← Q(s,a) + α(t)(R(s) + γ max_a' Q(s',a') – Q(s,a))
The learning rate α(t) should start at 1 and decay as O(1/t), e.g., α(t) = c / (c – 1 + t)
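
Putting the pieces together, a sketch of the tabular TD Q-learning loop; the environment interface (reset/step returning a 3-tuple), hashable states, and the hyperparameter values are assumptions for illustration, not from the slides:

```python
import random
from collections import defaultdict

def q_learning(env, actions, num_episodes=5000, gamma=0.99, epsilon=0.1, c=60.0):
    """Tabular TD Q-learning with epsilon-greedy exploration and a decaying learning rate."""
    Q = defaultdict(float)                      # Q[(s, a)], initialized to 0
    visits = defaultdict(int)                   # N(s, a), used for the learning-rate schedule
    for _ in range(num_episodes):
        s, done = env.reset(), False
        while not done:
            # Exploration policy: epsilon-greedy
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(s, x)])
            s_next, r, done = env.step(a)       # assumed to return (successor, reward, done)
            visits[(s, a)] += 1
            alpha = c / (c - 1 + visits[(s, a)])                # decays as O(1/t)
            target = r if done else r + gamma * max(Q[(s_next, x)] for x in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])           # TD update
            s = s_next
    return Q
```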

Exploration policies
Standard ("greedy") selection of the optimal action: a = argmax_a' Q(s,a')
ε-greedy: with probability ε, take a random action
Policy recommended by the textbook: exploration function
  a = argmax_a' f(Q(s,a'), N(s,a')), where N(s,a') is the number of times we've taken action a' in state s
  f(u,n) returns an optimistic reward estimate while n is small, and u otherwise
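
A sketch of such an exploration function in the optimistic style described above; the threshold N_e and the optimistic estimate R_plus are placeholder values:

```python
def exploration_f(u, n, R_plus=2.0, N_e=5):
    """Optimistic estimate R_plus until (s, a) has been tried N_e times, then the estimate u."""
    return R_plus if n < N_e else u

def exploratory_action(Q, N, state, actions):
    # a = argmax_a' f(Q(s, a'), N(s, a'))
    return max(actions, key=lambda a: exploration_f(Q.get((state, a), 0.0),
                                                    N.get((state, a), 0)))
```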

SARSA
In TD Q-learning, we're learning about the optimal policy while following the exploration policy
Alternative (SARSA): also select the next action a' according to the exploration policy, and use that a' in the update
SARSA vs. Q-learning example
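
The on-policy update in sketch form, assuming the same Q-table representation as before; note the bootstrap uses the action actually selected next rather than a max over actions:

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy TD update: bootstrap from the action a_next chosen by the exploration policy."""
    target = r + gamma * Q[(s_next, a_next)]      # Q-learning would use max_a' Q[(s_next, a')] here
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```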

TD Q-learning demos Andrej Karpathy’s demo Older Java-based demo

Function approximation
So far, we've assumed a lookup table representation for the utility function U(s) or the action-utility function Q(s,a)
But what if the state space is really large or continuous?
Alternative idea: approximate the utility function, e.g., as a weighted linear combination of features: U(s) ≈ w_1 f_1(s) + w_2 f_2(s) + … + w_n f_n(s)
  RL algorithms can be modified to estimate these weights
  More generally, functions can be nonlinear (e.g., neural networks)
Recall: features for designing evaluation functions in games
Benefits:
  Can handle very large state spaces (games) and continuous state spaces (robot control)
  Can generalize to previously unseen states
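
A sketch of the linear case for Q(s,a): a dot product of weights with a feature vector, with the weights nudged by a gradient-style TD update (the feature vector feats_sa is assumed to be a NumPy array produced by some feature function, which is not specified in the slides):

```python
import numpy as np

def q_value(w, features):
    """Q_w(s, a) = w . f(s, a): a weighted linear combination of features."""
    return float(np.dot(w, features))

def linear_td_update(w, feats_sa, r, best_next_q, alpha=0.01, gamma=0.99):
    """Adjust the weights to reduce the TD error on the observed transition."""
    td_error = (r + gamma * best_next_q) - q_value(w, feats_sa)
    return w + alpha * td_error * feats_sa        # gradient of Q_w with respect to w is f(s, a)
```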

Other techniques
Policy search: instead of getting the Q-values right, you simply need to get their ordering right
  Write down the policy as a function of some parameters and adjust the parameters to improve the expected reward
Learning from imitation: instead of an explicit reward function, you have expert demonstrations of the task to learn from