Presentation transcript:

gradientDescent.py:

import numpy as np

def gradientDescent(x, y, theta, alpha, m, numIterations):
    xTrans = x.transpose()
    replaceMe = .0001  # NOTE: replaceMe marks the expressions to fill in
    for i in range(0, numIterations):
        hypothesis = np.dot(x, theta)
        loss = hypothesis - y
        # Want the mean squared error here. Only used
        # for debugging, to make sure we're progressing
        cost = np.sum(replaceMe) / m
        if i % 9999 == 0:
            print(i, cost)
        # avg gradient per example
        gradient = np.dot(xTrans, replaceMe) / m
        # update
        theta = theta * replaceMe
    return theta
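One plausible completion of the replaceMe placeholders, as a hedged sketch rather than the slide's official answer: it assumes the intended cost is the (halved) mean squared error and the update is a step of size alpha against the average gradient.

import numpy as np

def gradientDescentCompleted(x, y, theta, alpha, m, numIterations):
    xTrans = x.transpose()
    for i in range(numIterations):
        hypothesis = np.dot(x, theta)
        loss = hypothesis - y
        # mean squared error (halved), used only to check progress
        cost = np.sum(loss ** 2) / (2 * m)
        if i % 9999 == 0:
            print(i, cost)
        # average gradient per example
        gradient = np.dot(xTrans, loss) / m
        # step downhill along the gradient
        theta = theta - alpha * gradient
    return theta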

Genetic algorithms

Fitness function
Population, lots of parameters
Types of problems it can solve
– Co-evolution
– h/neatdemo.html
– h/robotmovies/clip7.gif
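The slide lists only the ingredients, so here is a minimal sketch (a toy setup of our own, not from the slides) showing how a population and a fitness function fit together; it evolves bit strings whose fitness is the number of 1s:

import random

def fitness(individual):
    # Toy fitness: count the 1s in a bit string.
    return sum(individual)

def evolve(pop_size=20, length=10, generations=50, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)  # one-point crossover
            children.append([1 - bit if random.random() < mutation_rate else bit
                             for bit in a[:cut] + b[cut:]])
        pop = parents + children
    return max(pop, key=fitness)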

CptS 580: Reinforcement Learning
Graduate or Undergraduate && (CptS 350 || Matt's permission)
How can an agent learn to act in an environment?
– No absolute truth
– Trial and error

Basic idea:
– Receive feedback in the form of rewards
– Agent's utility is defined by the reward function
– Must (learn to) act so as to maximize expected rewards

Sequential tasks
How could we learn to play tic-tac-toe?
– What do we want to optimize (i.e., what's the reward)?
– What are the actions?
– How would we represent state?
– How would we figure out how to act (to maximize the reward)?

Reinforcement Learning (RL)
RL algorithms attempt to find a policy
– maximizing cumulative reward for the agent
– over the course of the problem
Typically represented by a Markov Decision Process
RL differs from supervised learning:
– correct input/output pairs are never presented
– sub-optimal actions are never explicitly corrected

Grid World
– The agent lives in a grid
– Walls block the agent's path
– The agent's actions do not always go as planned:
  - 80% of the time, the action North takes the agent North (if there is no wall there)
  - 10% of the time, North takes the agent West; 10% East
  - If there is a wall in the direction the agent would have been taken, the agent stays put
– Small "living" reward each step
– Big rewards come at the end
– Goal: maximize sum of rewards

Grid Futures
[Figure: a deterministic grid world, where each action (N, S, E, W) leads to a single successor square, next to a stochastic grid world, where the successor is uncertain]

Markov Decision Processes
An MDP is defined by:
– A set of states s ∈ S
– A set of actions a ∈ A
– A transition function T(s,a,s'): the probability that a from s leads to s', i.e., P(s' | s,a); also called the model
– A reward function R(s, a, s'); sometimes just R(s) or R(s')
– A start state (or distribution)
– Maybe a terminal state
– A discount factor γ
MDPs are a family of non-deterministic search problems
– Reinforcement learning: MDPs where we don't know the transition or reward functions
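To make the definition concrete, here is a minimal sketch of one way a small MDP could be represented in Python; the states, numbers, and dictionary layout are illustrative assumptions, not part of the slides:

# T[(s, a)] maps successor state s' -> P(s' | s, a)
states = ["A", "B", "terminal"]
actions = ["left", "right"]
gamma = 0.9  # discount factor

T = {
    ("A", "right"): {"B": 0.8, "A": 0.2},
    ("A", "left"):  {"A": 1.0},
    ("B", "right"): {"terminal": 0.9, "B": 0.1},
    ("B", "left"):  {"A": 1.0},
}

def R(s, a, s2):
    # Small "living" reward each step, big reward at the end.
    return 10.0 if s2 == "terminal" else -0.1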

What is Markov about MDPs?
Andrey Markov (1856-1922)
"Markov" generally means that given the present state, the future and the past are independent
For Markov decision processes, "Markov" means:
P(s_{t+1} = s' | s_t, a_t, s_{t-1}, a_{t-1}, …, s_0) = P(s_{t+1} = s' | s_t, a_t)

Solving MDPs
In deterministic single-agent search problems, we want an optimal plan, or sequence of actions, from start to a goal
In an MDP, we want an optimal policy π*: S → A
– A policy π gives an action for each state
– An optimal policy maximizes expected utility if followed
– Defines a reflex agent
[Figure: the optimal policy when R(s, a, s') is one fixed living reward for all non-terminal s]

Example Optimal Policies
[Figure: example optimal policies for R(s) = -2.0, R(s) = -0.4, R(s) = -0.03, and a fourth reward setting]

Recap: Defining MDPs
Markov decision processes:
– States S
– Start state s0
– Actions A
– Transitions P(s'|s,a) (or T(s,a,s'))
– Rewards R(s,a,s') (and discount γ)
MDP quantities so far:
– Policy = choice of action for each state
– Utility (or return) = sum of discounted rewards

Optimal Utilities
Fundamental operation: compute the values (optimal expectimax utilities) of states s
Why? Optimal values define optimal policies!
Define the value of a state s:
– V*(s) = expected utility starting in s and acting optimally
Define the value of a q-state (s,a):
– Q*(s,a) = expected utility starting in s, taking action a and thereafter acting optimally
Define the optimal policy:
– π*(s) = optimal action from state s

Value Iteration
Idea:
– Start with V0*(s) = 0, which we know is right (why?)
– Given Vi*, calculate the values for all states for depth i+1:
  V_{i+1}(s) ← max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V_i(s') ]
– This is called a value update or Bellman update
– Repeat until convergence
Theorem: will converge to unique optimal values
– Basic idea: approximations get refined towards optimal values
– Policy may converge long before the values do
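Continuing the toy MDP representation sketched earlier, here is a compact sketch of value iteration; the convergence threshold eps is our own choice:

def value_iteration(states, actions, T, R, gamma, eps=1e-6):
    V = {s: 0.0 for s in states}  # V_0(s) = 0 for all s
    while True:
        V_new = {}
        for s in states:
            q_values = [sum(p * (R(s, a, s2) + gamma * V[s2])
                            for s2, p in T[(s, a)].items())
                        for a in actions if (s, a) in T]
            # States with no available actions (e.g. terminal) stay at 0.
            V_new[s] = max(q_values) if q_values else 0.0
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new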

Example: Value Iteration
Information propagates outward from terminal states, and eventually all states have correct value estimates
[Figure: value estimates V1 and V2]

Example: Value Iteration
Information propagates outward from terminal states, and eventually all states have correct value estimates
[Figure: value estimates V2 and V3]

Discounted Rewards
Rewards in the future are worth less than an immediate reward
– Because of uncertainty: who knows if/when you're going to get to that reward state?

Discounted Rewards
Discount factor γ (often 0.9 to 1.0)
Assume a reward n steps in the future is only worth γ^n of the value of an immediate reward
For each state, calculate a utility value equal to the sum of future discounted rewards
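As a worked example with numbers of our own: with γ = 0.9, a reward of 1 received now, in one step, and in two steps is worth 1 + 0.9 + 0.81 = 2.71 in total.

gamma = 0.9
rewards = [1.0, 1.0, 1.0]  # rewards received at t = 0, 1, 2
utility = sum(gamma ** t * r for t, r in enumerate(rewards))
print(utility)  # 1 + 0.9 + 0.81 = 2.71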

Reinforcement Learning
Reinforcement learning:
– Still assume an MDP:
  - A set of states s ∈ S
  - A set of actions (per state) A
  - A model T(s,a,s')
  - A reward function R(s,a,s')
  - A discount factor γ (could be 1)
– Still looking for a policy π(s)
– New twist: don't know T or R
  - i.e., don't know which states are good or what the actions do
  - Must actually try actions and states out to learn

Passive Learning
Simplified task
– You don't know the transitions T(s,a,s')
– You don't know the rewards R(s,a,s')
– You are given a policy π(s)
– Goal: learn the state values
– … what policy evaluation did
In this case:
– Learner "along for the ride"
– No choice about what actions to take
– Just execute the policy and learn from experience
– We'll get to the active case soon
– This is NOT offline planning! You actually take actions in the world and see what happens…

Model-Based Learning
Idea:
– Learn the model empirically through experience
– Solve for values as if the learned model were correct
Simple empirical model learning
– Count outcomes for each (s,a)
– Normalize to give an estimate of T(s,a,s')
– Discover R(s,a,s') when we experience (s,a,s')
Solving the MDP with the learned model
– Iterative policy evaluation, for example
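A minimal sketch of the counting scheme just described, assuming experience arrives as (s, a, s', r) tuples; the names and structure are ours:

from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))  # counts[(s, a)][s'] = visit count
rewards = {}                                    # rewards[(s, a, s')] = observed r

def record(s, a, s2, r):
    counts[(s, a)][s2] += 1
    rewards[(s, a, s2)] = r   # R(s,a,s') is discovered when experienced

def T_hat(s, a, s2):
    # Normalize the counts to estimate T(s, a, s').
    total = sum(counts[(s, a)].values())
    return counts[(s, a)][s2] / total if total else 0.0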

Sample-Based Policy Evaluation?
Who needs T and R? Approximate the expectation with samples (drawn from T!)
Almost! But we only actually make progress when we move to i+1.

Temporal-Difference Learning
Big idea: learn from every experience!
– Update V(s) each time we experience (s,a,s',r)
– Likely s' will contribute updates more often
Temporal difference learning
– Policy still fixed!
– Move values toward the value of whatever successor occurs: running average!
Sample of V(s): sample = R(s,π(s),s') + γ V(s')
Update to V(s): V(s) ← (1-α) V(s) + α · sample
Same update: V(s) ← V(s) + α (sample - V(s))
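In code, the TD(0) update reads as a single running-average step; the function name and argument layout are our own framing:

def td_update(V, s, s2, r, alpha, gamma):
    # Move V(s) toward the sampled value of the successor that occurred.
    sample = r + gamma * V[s2]
    V[s] = (1 - alpha) * V[s] + alpha * sample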

Exponential Moving Average
Exponential moving average: x̄_n = (1-α) x̄_{n-1} + α x_n
– Makes recent samples more important
– Forgets about the past (distant past values were wrong anyway)
– Easy to compute from the running average
Decreasing the learning rate can give converging averages
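The same recurrence as a one-line helper (illustrative):

def exp_moving_average(old_avg, x, alpha):
    # Recent sample x gets weight alpha; the past decays geometrically.
    return (1 - alpha) * old_avg + alpha * x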

Problems with TD Value Learning
TD value learning is a way to do policy evaluation
However, if we want to turn values into a (new) policy, we're sunk:
– π(s) = argmax_a Q(s,a), but computing Q(s,a) = Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V(s') ] requires the T and R we don't know
Idea: learn Q-values directly

Active Learning
Full reinforcement learning
– You don't know the transitions T(s,a,s')
– You don't know the rewards R(s,a,s')
– You can choose any actions you like
– Goal: learn the optimal policy
– … what value iteration did!
In this case:
– Learner makes choices!
– Fundamental tradeoff: exploration vs. exploitation
– This is NOT offline planning! You actually take actions in the world and find out what happens…

Q-Learning
Q-learning: sample-based Q-value iteration
Learn Q*(s,a) values
– Receive a sample (s,a,s',r)
– Consider your old estimate: Q(s,a)
– Consider your new sample estimate: sample = r + γ max_a' Q(s',a')
– Incorporate the new estimate into a running average: Q(s,a) ← (1-α) Q(s,a) + α · sample
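A sketch of the update in code, assuming a q-table keyed by (state, action) pairs and the same action set in every state; these details are ours, not the slide's:

from collections import defaultdict

Q = defaultdict(float)  # Q[(s, a)], initially 0

def q_update(s, a, s2, r, actions, alpha, gamma):
    # Sample-based Q-value iteration: running average of sampled returns.
    sample = r + gamma * max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample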

Q-Learning Properties
Amazing result: Q-learning converges to an optimal policy
– If you explore enough
– If you make the learning rate small enough
– … but don't decrease it too quickly!
– Basically doesn't matter how you select actions (!)
Neat property: off-policy learning
– learn the optimal policy without following it (some caveats)

Q-Learning
Q-learning produces tables of q-values:
[Figure: a learned table of q-values]

Exploration / Exploitation
Several schemes for forcing exploration
– Simplest: random actions (ε-greedy)
  - Every time step, flip a coin
  - With probability ε, act randomly
  - With probability 1-ε, act according to the current policy
– Problems with random actions?
  - You do explore the space, but keep thrashing around once learning is done
  - One solution: lower ε over time
  - Another solution: exploration functions
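The ε-greedy scheme is short enough to write out; this sketch assumes the q-table from the earlier Q-learning snippet:

import random

def epsilon_greedy(s, actions, Q, epsilon):
    # Explore with probability epsilon, otherwise exploit current q-values.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])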

The Story So Far: MDPs and RL
Things we know how to do, and the techniques that do them:
If we know the MDP
– Compute V*, Q*, π* exactly (model-based DPs: value and policy iteration)
– Evaluate a fixed policy π (policy evaluation)
If we don't know the MDP
– We can estimate the MDP, then solve it (model-based RL)
– We can estimate V for a fixed policy π (model-free RL: value learning)
– We can estimate Q*(s,a) for the optimal policy while executing an exploration policy (model-free RL: Q-learning)

Q-Learning
In realistic situations, we cannot possibly learn about every single state!
– Too many states to visit them all in training
– Too many states to hold the q-tables in memory
Instead, we want to generalize:
– Learn about some small number of training states from experience
– Generalize that experience to new, similar states
– This is a fundamental idea in machine learning, and we'll see it over and over again

Example: Pacman
Let's say we discover through experience that this state is bad:
In naïve q-learning, we know nothing about this state or its q-states:
Or even this one!
[Figure: similar Pacman states]

Feature-Based Representations
Solution: describe a state using a vector of features
– Features are functions from states to real numbers (often 0/1) that capture important properties of the state
– Example features:
  - Distance to closest ghost
  - Distance to closest dot
  - Number of ghosts
  - 1 / (dist to dot)^2
  - Is Pacman in a tunnel? (0/1)
  - … etc.
– Can also describe a q-state (s, a) with features (e.g., action moves closer to food)

Linear Feature Functions
Using a feature representation, we can write a q function (or value function) for any state using a few weights:
– V(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s)
– Q(s,a) = w1 f1(s,a) + w2 f2(s,a) + … + wn fn(s,a)
Advantage: our experience is summed up in a few powerful numbers
Disadvantage: states may share features but be very different in value!

Function Approximation
Q-learning with linear q-functions:
– transition = (s, a, r, s')
– difference = [ r + γ max_a' Q(s',a') ] - Q(s,a)
– Exact q-update: Q(s,a) ← Q(s,a) + α · difference
– Approximate q-update: w_i ← w_i + α · difference · f_i(s,a)
Intuitive interpretation:
– Adjust weights of active features
– E.g., if something unexpectedly bad happens, disprefer all states with that state's features
Formal justification: online least squares
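A sketch of the approximate update, assuming features(s, a) returns the list of feature values f_i(s, a); the function shape is our own:

def approx_q_update(w, features, s, a, r, s2, actions, alpha, gamma):
    def Q(state, action):
        # Q is linear in the weights: sum_i w_i * f_i(state, action)
        return sum(wi * fi for wi, fi in zip(w, features(state, action)))
    difference = (r + gamma * max(Q(s2, a2) for a2 in actions)) - Q(s, a)
    # Adjust the weights of the active features.
    for i, fi in enumerate(features(s, a)):
        w[i] += alpha * difference * fi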

Example: Q-Pacman

Linear regression
Given examples (x_i, y_i) for i = 1, …, n
Predict the value ŷ for a new point x

Linear regression
Prediction with one feature: ŷ = w_0 + w_1 x
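As a concrete sketch with made-up data, fitting the weights by least squares with numpy and predicting at a new point:

import numpy as np

X = np.c_[np.ones(5), np.array([0.0, 1.0, 2.0, 3.0, 4.0])]  # bias column + one feature
y = np.array([1.0, 3.1, 4.9, 7.2, 9.0])
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimize ||Xw - y||^2
x_new = 5.0
y_hat = w[0] + w[1] * x_new  # prediction at the new point
print(y_hat)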