
1 Reinforcement Learning (1) Lirong Xia, Tue, March 18, 2014

2 Reminder: Midterm –mean 82/99 –Problem 5(b)2 now has 8 points –your grade is on LMS. Project 2 is due and Project 3 goes out this Friday.

3 Last time: Markov decision processes; computing the optimal policy –value iteration –policy iteration

4 Grid World: The agent lives in a grid. Walls block the agent's path. The agent's actions do not always go as planned: –80% of the time, the action North takes the agent North (if there is no wall there) –10% of the time, North takes the agent West; 10% East –If there is a wall in the direction the agent would have taken, the agent stays put for this turn. There is a small living reward each step; big rewards come at the end. Goal: maximize the sum of rewards.

5 Markov Decision Processes: An MDP is defined by: –A set of states s ∈ S –A set of actions a ∈ A –A transition function T(s, a, s'): the probability that action a taken from s leads to s', i.e., p(s' | s, a); sometimes called the model –A reward function R(s, a, s'); sometimes just R(s) or R(s') –A start state (or distribution) –Maybe a terminal state. MDPs are a family of nondeterministic search problems. –Reinforcement learning (next class): MDPs where we don't know the transition or reward functions.

6 Recap: Defining MDPs: Markov decision processes: –States S –Start state s0 –Actions A –Transitions p(s'|s,a) (or T(s,a,s')) –Rewards R(s,a,s') (and discount γ). MDP quantities so far: –Policy = choice of action for each (MAX) state –Utility (or return) = sum of discounted rewards
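
In symbols (a standard formulation; the notation r_t for the reward received at step t is my own, not the slide's):

U = \sum_{t=0}^{\infty} \gamma^t r_t, \qquad 0 \le \gamma \le 1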

7 Optimal Utilities: Fundamental operation: compute the values (optimal expectimax utilities) of states s. Define the value of a state s: –V*(s) = expected utility starting in s and acting optimally. Define the value of a Q-state (s,a): –Q*(s,a) = expected utility starting in s, taking action a, and thereafter acting optimally. Define the optimal policy: –π*(s) = optimal action from state s

8 The Bellman Equations: One-step lookahead relationship among optimal utility values:
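
Written out in the T, R, γ notation above, the standard Bellman optimality equations are:

V^*(s) = \max_a Q^*(s,a)

Q^*(s,a) = \sum_{s'} T(s,a,s') \left[ R(s,a,s') + \gamma V^*(s') \right]

so that V^*(s) = \max_a \sum_{s'} T(s,a,s') \left[ R(s,a,s') + \gamma V^*(s') \right].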

9 Solving MDPs: We want to find the optimal policy. Proposal 1: modified expectimax search, starting from each state s:

10 Value Iteration: Idea: –Start with V_1(s) = 0 –Given V_i, calculate the values for all states for depth i+1 –Repeat until convergence –Use V_i as the evaluation function when computing V_{i+1}
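
The depth-(i+1) update, in its standard form:

V_{i+1}(s) \leftarrow \max_a \sum_{s'} T(s,a,s') \left[ R(s,a,s') + \gamma V_i(s') \right]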

11 Example: Value Iteration: Information propagates outward from terminal states, and eventually all states have correct value estimates.

12 Policy Iteration: Alternative approach: –Step 1, policy evaluation: calculate utilities for some fixed policy (not optimal utilities!) –Step 2, policy improvement: update the policy using one-step look-ahead with the resulting converged (but not optimal!) utilities as future values –Repeat both steps until the policy converges
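
In equations (standard form, with \pi_k the policy at iteration k):

Policy evaluation:  V^{\pi_k}(s) = \sum_{s'} T(s,\pi_k(s),s') \left[ R(s,\pi_k(s),s') + \gamma V^{\pi_k}(s') \right]

Policy improvement:  \pi_{k+1}(s) = \arg\max_a \sum_{s'} T(s,a,s') \left[ R(s,a,s') + \gamma V^{\pi_k}(s') \right]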

13 Today: Reinforcement learning. Still have an MDP: –A set of states S –A set of actions (per state) A –A model T(s,a,s') –A reward function R(s,a,s'). Still looking for an optimal policy π*(s). New twist: we don't know T and/or R, but we can observe rewards –Learn by doing –Can have multiple episodes (trials)

14 Example: animal learning. Studied experimentally for more than 60 years in psychology –Rewards: food, pain, hunger, drugs, etc. Example: foraging –Bees learn a near-optimal foraging plan in a field of artificial flowers with controlled nectar supplies

15 What can you do with RL? Stanford autonomous helicopter: http://heli.stanford.edu/

16 Reinforcement learning methods: Model-based learning –learn the model of the MDP (transition probabilities and rewards) –compute the optimal policy as if the learned model were correct. Model-free learning –learn the optimal policy without explicitly learning the transition probabilities –Q-learning: learn the Q-values Q(s,a) directly

17 Model-Based Learning: Idea: –Learn the model empirically from episodes –Solve for values as if the learned model were correct. Simple empirical model learning –Count outcomes for each (s,a) –Normalize to give an estimate of T(s,a,s') –Discover R(s,a,s') when we experience (s,a,s'). Solving the MDP with the learned model –Iterative policy evaluation, for example
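
A minimal Python sketch of the count-and-normalize idea (the data layout and function name are illustrative assumptions, not from the slides):

from collections import defaultdict

def estimate_model(episodes):
    """Estimate T(s,a,s') and R(s,a,s') from observed transitions.

    episodes: list of episodes, each a list of (s, a, s_next, r) tuples.
    Returns (T_hat, R_hat): T_hat[(s, a)][s_next] is an estimated probability,
    R_hat[(s, a, s_next)] is the observed reward.
    """
    counts = defaultdict(lambda: defaultdict(int))   # (s, a) -> {s_next: count}
    R_hat = {}                                       # (s, a, s_next) -> reward
    for episode in episodes:
        for s, a, s_next, r in episode:
            counts[(s, a)][s_next] += 1
            R_hat[(s, a, s_next)] = r
    T_hat = {
        sa: {s_next: n / sum(outcomes.values()) for s_next, n in outcomes.items()}
        for sa, outcomes in counts.items()
    }
    return T_hat, R_hat

# usage sketch: one short grid-world episode
T_hat, R_hat = estimate_model([[((1, 1), "up", (1, 2), -1), ((1, 2), "up", (1, 3), -1)]])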

18 Example: Model-Based Learning: Episodes: (1,1) up -1, (1,2) up -1, (1,2) up -1, (1,2) up -1, (1,3) right -1, (1,3) right -1, (2,3) right -1, (2,3) right -1, (3,3) right -1, (3,3) right -1, (3,2) up -1, (3,2) up -1, (4,2) exit -100, (3,3) right -1 (done), (4,3) exit +100 (done). γ = 1. T(, right, ) = 1/3, T(, right, ) = 2/2

19 Model-Based vs. Model-Free: We want to compute an expectation weighted by a probability p(x) (e.g., an expected utility). Model-based: estimate p(x) from samples, then compute the expectation. Model-free: estimate the expectation directly from samples. Why does this work? Because samples appear with the right frequencies!
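
In symbols, for samples x_1, ..., x_N drawn from p(x) and a target quantity E[f(x)] (standard form; the notation is mine, not the slide's):

Model-based:  \hat{p}(x) = \frac{\#\{i : x_i = x\}}{N}, \qquad E[f(x)] \approx \sum_x \hat{p}(x)\, f(x)

Model-free:   E[f(x)] \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i)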

20 Example: Flip a biased coin –if heads, you get $10 –if tails, you get $1 –you don't know the probability of heads/tails –What is your expected gain? 8 episodes: h, t, t, t, h, t, t, h. Model-based: p(h) = 3/8, so E(gain) = 10·3/8 + 1·5/8 = 35/8. Model-free: (10+1+1+1+10+1+1+10)/8 = 35/8

21 Sample-Based Policy Evaluation? Approximate the expectation with samples (drawn from an unknown T!). Almost! But we cannot rewind time to get several samples from s in the same episode.
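
The idea in standard form: replace the expectation over s' in the fixed-policy Bellman update with an average over observed samples s'_1, ..., s'_k:

V^{\pi}_{i+1}(s) \approx \frac{1}{k} \sum_{j=1}^{k} \left[ R(s,\pi(s),s'_j) + \gamma V^{\pi}_i(s'_j) \right]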

22 Temporal-Difference Learning: Big idea: learn from every experience! –Update V(s) each time we experience a transition (s, a, s', R) –Likely outcomes s' will contribute updates more often. Temporal-difference learning –Policy still fixed
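
The TD update for a fixed policy \pi, with learning rate \alpha (standard form):

V^{\pi}(s) \leftarrow (1-\alpha)\, V^{\pi}(s) + \alpha \left[ R(s,\pi(s),s') + \gamma V^{\pi}(s') \right]

or equivalently V^{\pi}(s) \leftarrow V^{\pi}(s) + \alpha \left[ R(s,\pi(s),s') + \gamma V^{\pi}(s') - V^{\pi}(s) \right].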

23 Exponential Moving Average: Exponential moving average –Makes recent samples more important –Forgets about the past (distant-past values were wrong anyway) –Easy to compute as a running average. A decreasing learning rate can give converging averages.
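
With samples x_1, x_2, ... and weight \alpha, the running form is (standard):

\bar{x}_n = (1-\alpha)\, \bar{x}_{n-1} + \alpha\, x_n

so the sample from k steps back is weighted by \alpha (1-\alpha)^k.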

24 Problems with TD Value Learning: TD value learning is a model-free way to do policy evaluation. However, if we want to turn values into a (new) policy, we're sunk: Idea: learn Q-values directly. This makes action selection model-free too!
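
The reason, in standard form: extracting a greedy policy from state values requires the model, whereas Q-values need only an argmax:

\pi(s) = \arg\max_a \sum_{s'} T(s,a,s') \left[ R(s,a,s') + \gamma V(s') \right] \quad \text{vs.} \quad \pi(s) = \arg\max_a Q(s,a)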

25 Active Learning: Full reinforcement learning –You don't know the transitions T(s,a,s') –You don't know the rewards R(s,a,s') –You can choose any actions you like –Goal: learn the optimal policy. In this case: –The learner makes choices! –Fundamental tradeoff: exploration vs. exploitation; exploration: try new actions; exploitation: focus on the actions that look best under the current estimates –This is NOT offline planning! You actually take actions in the world and see what happens…

26 Detour: Q-Value Iteration: Value iteration: find successive approximations of the optimal values –Start with V_0*(s) = 0 –Given V_i*, calculate the values for all states for depth i+1. But Q-values are more useful –Start with Q_0*(s,a) = 0 –Given Q_i*, calculate the Q-values for all Q-states for depth i+1.
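
Both updates in standard form:

V^*_{i+1}(s) \leftarrow \max_a \sum_{s'} T(s,a,s') \left[ R(s,a,s') + \gamma V^*_i(s') \right]

Q^*_{i+1}(s,a) \leftarrow \sum_{s'} T(s,a,s') \left[ R(s,a,s') + \gamma \max_{a'} Q^*_i(s',a') \right]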

27 Q-Learning: Q-learning is sample-based Q-value iteration. Learn Q*(s,a) values –Receive a sample (s, a, s', R) –Consider your old estimate Q(s,a) –Consider your new sample estimate –Incorporate the new estimate into a running average
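
The running-average update, in standard form:

\text{sample} = R(s,a,s') + \gamma \max_{a'} Q(s',a'), \qquad Q(s,a) \leftarrow (1-\alpha)\, Q(s,a) + \alpha \cdot \text{sample}

A minimal Python sketch of this update (the dictionary-based Q-table and function name are illustrative assumptions, not from the slides):

from collections import defaultdict

def q_learning_update(Q, s, a, s_next, r, next_actions, alpha=0.1, gamma=0.9):
    """One Q-learning update from a single observed transition (s, a, s_next, r).

    Q: dict mapping (state, action) -> estimated Q-value.
    next_actions: actions available in s_next (empty if s_next is terminal).
    """
    best_next = max((Q[(s_next, a2)] for a2 in next_actions), default=0.0)
    sample = r + gamma * best_next
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample

# usage sketch on a grid-world-style transition:
Q = defaultdict(float)
q_learning_update(Q, s=(1, 1), a="up", s_next=(1, 2), r=-1,
                  next_actions=["up", "down", "left", "right"])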

28 Q-Learning Properties: Amazing result: Q-learning converges to the optimal policy –If you explore enough –If you make the learning rate small enough –…but do not decrease it too quickly! –Basically, it doesn't matter how you select actions (!) Neat property: off-policy learning –Learn the optimal policy without following it (some caveats)

29 Q-Learning: Q-learning produces tables of Q-values.

30 Exploration / Exploitation: Several schemes for forcing exploration –Simplest: random actions (ε-greedy): every time step, flip a coin; with probability ε, act randomly; with probability 1-ε, act according to the current policy –Problems with random actions? You do explore the space, but you keep thrashing around once learning is done; one solution: lower ε over time; another solution: exploration functions
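
A minimal ε-greedy action selection sketch in Python (names are illustrative, not from the slides):

import random

def epsilon_greedy(Q, s, actions, epsilon=0.1):
    """With probability epsilon act randomly, otherwise act greedily w.r.t. Q."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

# usage sketch:
action = epsilon_greedy({((1, 1), "up"): 0.5}, s=(1, 1), actions=["up", "down"], epsilon=0.1)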

31 Exploration Functions: When to explore –Random actions: explore a fixed amount –Better idea: explore areas whose badness is not (yet) established. Exploration function –Takes a value estimate and a visit count, and returns an optimistic utility, e.g. (exact form not important): sample =
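
One common choice, shown here as an assumption since the slide leaves the exact form unspecified: take f(u,n) = u + k/n for some constant k and visit count n, and build the Q-update sample from it:

f(u,n) = u + \frac{k}{n}, \qquad \text{sample} = R(s,a,s') + \gamma \max_{a'} f\!\left( Q(s',a'),\, N(s',a') \right)

where N(s',a') counts how many times the Q-state (s',a') has been tried.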

