
1 Reinforcement Learning: Learning to get what you want...
Sutton & Barto, Reinforcement Learning: An Introduction, MIT Press, 1998. http://www.cs.ualberta.ca/~sutton/book/the-book.html
Kaelbling, Littman, & Moore, "Reinforcement Learning: A Survey," Journal of Artificial Intelligence Research, Volume 4, 1996. http://people.csail.mit.edu/u/l/lpk/public_html/papers/rl-survey.ps

2 Meet Mack the Mouse*
Mack lives a hard life as a psychology test subject: he has to run around mazes all day, finding food and avoiding electric shocks, and he needs to know how to find cheese quickly while getting shocked as little as possible.
Q: How can Mack learn to find his way around?
* Mickey is still copyrighted.

3 Start with an easy case
A very simple maze: whenever Mack goes left, he gets cheese; whenever he goes right, he gets shocked. After the reward or punishment, he's reset back to the start of the maze.
Q: How can Mack learn to act well in this world?

4 Learning in the easy case
Say there are two labels: "cheese" and "shock". Mack tries a bunch of trials in the world -- that generates a bunch of (action, outcome) experiences.
Now what?
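
A minimal sketch of what learning from these trials could look like in code (the trial data and the tallying approach are illustrative assumptions, not from the slides): count the outcomes observed for each action and predict the most common one.

    from collections import Counter, defaultdict

    # Made-up trials in the simple maze: (action, outcome) pairs.
    trials = [("left", "cheese"), ("right", "shock"),
              ("left", "cheese"), ("right", "shock")]

    # Tally outcomes per action to learn the action -> outcome mapping.
    outcome_counts = defaultdict(Counter)
    for action, outcome in trials:
        outcome_counts[action][outcome] += 1

    # Predicted outcome for each action = most common outcome seen so far.
    predicted_outcome = {a: c.most_common(1)[0][0] for a, c in outcome_counts.items()}
    print(predicted_outcome)  # {'left': 'cheese', 'right': 'shock'}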

5 But what to do?
So we know that Mack can learn a mapping from actions to outcomes. But what should Mack do in any given situation? What action should he take at any given time?
Suppose Mack is the subject of a psychotropic drug study and has actually come to like shocks and hate cheese -- how does he act now?

6 Reward functions
In general, we think of a reward function: R() tells us whether Mack thinks a particular outcome is good or bad.
Mack before drugs: R(cheese) = +1, R(shock) = -1
Mack after drugs: R(cheese) = -1, R(shock) = +1
Behavior always depends on rewards (utilities).
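
As a rough illustration (the Python representation is an assumption, though the values come from the slide), the two reward functions can be written down directly:

    # Reward functions for Mack before and after the drug study.
    R_before_drugs = {"cheese": +1, "shock": -1}
    R_after_drugs  = {"cheese": -1, "shock": +1}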

7 Maximizing reward
So Mack wants to get the maximum possible reward (whatever that means to him). For the one-shot case like this, this is fairly easy.
Now what about a harder case?
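
One way to make "fairly easy" concrete, as a sketch that reuses the representations sketched above (all names and values here are illustrative, not the slides' own): pick the action whose predicted outcome carries the highest reward.

    # Predicted outcomes (from the trials) and reward functions (values from the slide).
    predicted_outcome = {"left": "cheese", "right": "shock"}
    R_before_drugs = {"cheese": +1, "shock": -1}
    R_after_drugs = {"cheese": -1, "shock": +1}

    # Choose the action whose predicted outcome Mack values most.
    def best_action(predicted, R):
        return max(predicted, key=lambda a: R[predicted[a]])

    print(best_action(predicted_outcome, R_before_drugs))  # 'left'  (go for cheese)
    print(best_action(predicted_outcome, R_after_drugs))   # 'right' (go for shocks)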

8 Reward over time
In general, the agent can be in a state s_i at any time t and can choose an action a_j to take in that state. A reward can be associated with a state, R(s_i), or with a state/action transition, R(s_i, a_j).
A series of actions leads to a series of rewards: (s_1, a_1) → s_3: R(s_3); (s_3, a_7) → s_14: R(s_14); ...

9 Reward over time
[Diagram: a tree of possible state sequences branching out from the start state s_1 through states s_2, ..., s_11.]

10 Reward over time
[Same diagram, with one path through the tree picked out.] Following the sequence s_1 → s_4 → s_11 → s_10 → ... gives value V(s_1) = R(s_1) + R(s_4) + R(s_11) + R(s_10) + ...

11 Reward over time
[Same diagram, with a different path picked out.] Following the sequence s_1 → s_2 → s_6 → ... gives value V(s_1) = R(s_1) + R(s_2) + R(s_6) + ...
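
A small sketch of how two different state sequences from the same start state accumulate different total reward (the per-state rewards are made up for illustration):

    # Hypothetical per-state rewards for the states in the diagram.
    R = {"s1": 0, "s2": 0, "s4": 1, "s6": -1, "s10": 5, "s11": 1}

    def total_reward(path):
        # Sum the reward of every state visited along the path.
        return sum(R[s] for s in path)

    print(total_reward(["s1", "s4", "s11", "s10"]))  # 0 + 1 + 1 + 5 = 7
    print(total_reward(["s1", "s2", "s6"]))          # 0 + 0 - 1 = -1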

12 Where can you go?
Definition: the complete set of all states the agent could be in is called the state space, S. It could be discrete or continuous; we'll usually work with discrete state spaces. The size of the state space is |S|.
S = {s_1, s_2, ..., s_|S|}

13 What can you do?
Definition: the complete set of actions an agent could take is called the action space, A. Again, it can be discrete or continuous; again, we work with discrete; again, its size is |A|.
A = {a_1, ..., a_|A|}
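
For a discrete problem like Mack's maze, both spaces can simply be enumerated (a made-up toy instance, not from the slides):

    # A toy discrete state space and action space for a small maze.
    S = ["start", "corridor", "cheese_room", "shock_room"]
    A = ["left", "right", "forward"]
    print(len(S), len(A))  # |S| = 4, |A| = 3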

14 Experience & histories
In supervised learning, the "fundamental unit of experience" is a feature vector + label. The fundamental unit of experience in RL: at time t, in some state s_i, take action a_j, get reward r_t, and end up in state s_k. This is called an experience tuple or SARSA tuple.

15 The value of history...
The set of all experience during a single episode up to time t is a history (a.k.a. a trace or trajectory).
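
A sketch of an experience tuple and a history as plain data structures (the states, actions, and rewards are invented for illustration):

    from collections import namedtuple

    # One unit of RL experience: state, action, reward, next state.
    Experience = namedtuple("Experience", ["s", "a", "r", "s_next"])

    # A history (trace / trajectory) is the sequence of experiences
    # collected during one episode.
    history = [
        Experience("start", "forward", 0, "corridor"),
        Experience("corridor", "left", +1, "cheese_room"),
    ]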

16 Policies
Total accumulated reward (value, V) depends on where the agent starts (the initial s) and on what the agent does at each step (duh), a. The plan of action is called a policy, π. A policy defines what action to take in every state of the system: π : S → A. A.k.a. a controller, control law, decision rule, etc.
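
A rough sketch of a deterministic policy over the toy state and action spaces above (an assumed example, not the slides' own):

    # A policy: one action prescribed for every state of the system.
    policy = {
        "start": "forward",
        "corridor": "left",
        "cheese_room": "forward",
        "shock_room": "left",
    }

    def act(state):
        # The agent simply looks up what the policy prescribes.
        return policy[state]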

17 Policies
Value is a function of the start state and the policy. It is useful to think about finite-horizon and infinite-horizon values.
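
The formulas this slide refers to are not in the transcript; in the standard notation (an assumption, following Sutton & Barto), the value of following policy π from start state s is written V^π(s), with finite-horizon and infinite-horizon versions:

    \[
      V^{\pi}_{T}(s) \quad \text{(value over a horizon of } T \text{ steps)}, \qquad
      V^{\pi}_{\infty}(s) \quad \text{(value over an infinite horizon)}.
    \]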

18 Finite horizon reward
Assume that an episode is finite: the agent acts in the world for a finite number of time steps, T, and experiences history h_T. What should the total aggregate value be?

19 Finite horizon reward
Assume that an episode is finite: the agent acts in the world for a finite number of time steps, T, and experiences history h_T. What should the total aggregate value be?
One answer is the total accumulated reward; it is occasionally useful to use the average reward instead.
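
The two formulas referred to above are missing from the transcript; in their usual form (the exact indexing over t is an assumption):

    \[
      V(h_T) \;=\; \sum_{t=0}^{T} R(s_t) \qquad \text{(total accumulated reward)}
    \]
    \[
      V(h_T) \;=\; \frac{1}{T} \sum_{t=0}^{T} R(s_t) \qquad \text{(average reward)}
    \]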

20 Gonna live forever...
Often, we want to model a process that is indefinite: infinitely long, of unknown length (we don't know in advance when it will end), or running until it is stopped (randomly). So we have to consider infinitely long histories.
Q: What does value mean over an infinite history?

21 Reaaally long-term reward
Let h = ⟨s_0, s_1, s_2, ...⟩ be an infinite history. We define the infinite-horizon discounted value to be V(h) = Σ_{t=0..∞} γ^t R(s_t), where γ (0 ≤ γ < 1) is the discount factor.
Q1: Why does this work?
Q2: If R_max is the maximum possible reward attainable in the environment, what is V_max?
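
A short sketch of the discounted sum in code (the reward sequence and γ are made up), which also hints at the two questions: γ^t shrinks geometrically, so the infinite sum stays finite, and if every reward were R_max the geometric series would sum to R_max / (1 - γ).

    # Discounted return of a (prefix of a) reward sequence.
    def discounted_value(rewards, gamma=0.9):
        return sum((gamma ** t) * r for t, r in enumerate(rewards))

    print(discounted_value([1, 1, 1, 1]))  # 1 + 0.9 + 0.81 + 0.729 = 3.439

    # With reward R_max at every step, the geometric series gives
    # V_max = R_max / (1 - gamma).
    R_max, gamma = 1, 0.9
    print(R_max / (1 - gamma))  # 10.0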

