1 Policies and exploration and eligibility, oh my!

2 Administrivia Reminder: final project. Written report: due Thu, May 10, by noon -- hard deadline! No late days... Oral reports: May 1 & May 3. 7 people registered ⇒ 4 (3) pres/session ⇒ 18-20 min/pres (incl. questions, setup, etc.). Volunteers?

3 River of time Last time: the Q function; the Q-learning algorithm; Q-learning in action. Today: notes on writing & presenting; Q-learning cont'd; action selection & exploration; the off-policy property; use of experience & eligibility traces; radioactive breadcrumbs.

4 Final project writing FAQ Q: How formal a document should this be? A: Very formal. This should be as close in style to the papers we have read as possible. Pay attention to the sections that they have -- introduction, background, approach, experiments, etc. Try to establish a narrative -- “tell the story” As always, use correct grammar, spelling, etc.

5 Final project writing FAQ Q: How long should the final report be? A: As long as necessary, but no longer. A’: I would guess that it would take ~10-15 pages (5000-7500 words) to describe your work well.

6 Final project writing FAQ Q: Any particular document format? A: I prefer: 8.5”x11” paper, 1” margins, 12pt font, double-spaced. In LaTeX: \renewcommand{\baselinestretch}{1.6} Stapled!

7 Final project writing FAQ Q: Any other tips? A: Yes: DON’T BE VAGUE -- be as specific and concrete as possible about what you did/what other people did/etc.

8 The Q-learning algorithm
Algorithm: Q_learn
Inputs: State space S; Action space A; Discount γ (0 <= γ < 1); Learning rate α (0 <= α < 1)
Outputs: Q
Q = random(|S|, |A|)  // Initialize
Repeat {
  s = get_current_world_state()
  a = pick_next_action(Q, s)
  (r, s') = act_in_world(a)
  Q(s,a) = Q(s,a) + α*(r + γ*max_a'(Q(s',a')) - Q(s,a))
} Until (bored)
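
A minimal runnable sketch of this loop in Python, assuming a hypothetical environment object with current_state() and step(a) methods and using the ε-greedy rule discussed later as the stand-in for pick_next_action; these names are illustrative, not from the slides:

```python
import numpy as np

def q_learn(env, n_states, n_actions, gamma=0.9, alpha=0.1,
            epsilon=0.1, n_steps=10_000):
    """Tabular Q-learning following the slide's pseudocode."""
    rng = np.random.default_rng(0)
    Q = rng.random((n_states, n_actions))      # Q = random(|S|, |A|)

    for _ in range(n_steps):                   # "Repeat ... Until (bored)"
        s = env.current_state()
        # epsilon-greedy stand-in for pick_next_action(Q, s)
        if rng.random() < epsilon:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        r, s_next = env.step(a)                # act_in_world(a)
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    return Q
```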

9 Why does this work? Multiple ways to think of it. The (more nearly) intuitive way: look at the key update step in the Q-learning alg -- it is a weighted avg between the current Q(s,a) and a sampled estimate of it (reconstructed below).
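
The update step, reconstructed from the pseudocode on slide 8 and rearranged to make the weighted average explicit (the slide's own equation image is not in the transcript):

```latex
Q(s,a) \;\leftarrow\; Q(s,a) + \alpha\Bigl( r + \gamma \max_{a'} Q(s',a') - Q(s,a) \Bigr)
       \;=\; (1-\alpha)\,Q(s,a) \;+\; \alpha\Bigl( r + \gamma \max_{a'} Q(s',a') \Bigr)
```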

10 Why does this work? Still... Why should that weighted avg be the right thing? Compare the update eqn w/ the Bellman eqn (see the reconstruction after slide 12).

11 Why does this work? Still... Why should that weighted avg be the right thing? Compare w/ the Bellman eqn (reconstruction below).

12 Why does this work? Still... Why should that weighted avg be the right thing? Compare w/ the Bellman eqn: the update is based on a sample from the true transition distribution, T, rather than the full expectation used in the Bellman eqn/policy iteration alg. The first time the agent finds a rewarding state, s_r, a fraction of that reward is propagated back one step via the Q update to s_{r-1}, a state one step away from s_r. The next time, the state two steps away from s_r is updated, and so on...
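
The comparison these slides gesture at, reconstructed in standard notation (the equation images are not in the transcript; the reward is written R(s,a,s') here, which may differ from the course's exact notation):

```latex
% Bellman optimality eqn: full expectation over the transition distribution T
Q^*(s,a) \;=\; \sum_{s'} T(s,a,s')\,\Bigl[\, R(s,a,s') + \gamma \max_{a'} Q^*(s',a') \,\Bigr]

% Q-learning update: a single sampled s' (and observed r) stands in for that expectation
Q(s,a) \;\leftarrow\; (1-\alpha)\,Q(s,a) \;+\; \alpha\,\Bigl[\, r + \gamma \max_{a'} Q(s',a') \,\Bigr]
```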

13 Picking the action One critical step underspecified in Q learn alg: a =pick_next_action( Q, s ) How should you pick an action at each step?

14 Picking the action One critical step underspecified in Q learn alg: a =pick_next_action( Q, s ) How should you pick an action at each step? Could pick greedily according to Q Might tend to keep doing the same thing and not explore at all. Need to force exploration.

15 Picking the action One critical step underspecified in Q learn alg: a =pick_next_action( Q, s ) How should you pick an action at each step? Could pick greedily according to Q Might tend to keep doing the same thing and not explore at all. Need to force exploration. Could pick an action at random Ignores everything you’ve learned about Q so far Would you still converge?

16 Off-policy learning Exploit a critical property of the Q-learning alg. Lemma (w/o proof): the Q-learning algorithm will converge to the correct Q* independently of the policy being executed, so long as: every (s,a) pair is visited infinitely often in the limit, and α is chosen to be small enough (usually decayed).
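
For reference, the "small enough (usually decayed)" requirement on α is usually stated as the standard stochastic-approximation conditions on the per-visit step sizes α_t; this exact form is a gloss, since the slide's own statement is not in the transcript:

```latex
\sum_{t=1}^{\infty} \alpha_t = \infty,
\qquad
\sum_{t=1}^{\infty} \alpha_t^2 < \infty
\qquad\text{(e.g., } \alpha_t = 1/t \text{ on the } t\text{-th visit to } (s,a)\text{).}
```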

17 Off-policy learning I.e., Q-learning doesn't care what policy is being executed -- it will still converge. Called an off-policy method: the policy being learned can be different from the policy being executed. The off-policy property tells us we're free to pick any policy we like to explore, so long as we guarantee infinite visits to each (s,a) pair. Might as well choose one that does (mostly) as well as we know how to do at each step.

18 “Almost greedy” exploring Can't be just greedy w.r.t. Q (why?). Typical answers: ε-greedy: execute argmax_a {Q(s,a)} w/ prob (1-ε) and a random action w/ prob ε. Boltzmann (softmax) exploration: pick action a w/ prob P(a|s) = exp(Q(s,a)/τ) / Σ_a' exp(Q(s,a')/τ), where τ is a temperature controlling how nearly greedy the choice is. (See the sketch below.)
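
A small Python sketch of both rules, operating on one row of a Q-table; the rng, tau, and function names are illustrative:

```python
import numpy as np

rng = np.random.default_rng()

def epsilon_greedy(q_row, epsilon=0.1):
    """Greedy w/ prob (1 - epsilon), uniformly random w/ prob epsilon."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))

def boltzmann(q_row, tau=1.0):
    """Sample a with prob exp(Q(s,a)/tau) / sum_a' exp(Q(s,a')/tau)."""
    prefs = np.asarray(q_row, dtype=float) / tau
    prefs -= prefs.max()                       # subtract max for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(rng.choice(len(q_row), p=probs))
```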

19 The value of experience We observed that Q-learning converges slooooooowly... The same is true of many other RL algs, but we can do better (sometimes by orders of magnitude). What're the biggest hurdles to Q convergence?

20 The value of experience What're the biggest hurdles to Q convergence? Well, there are many. The big one, though, is poor use of experience: each timestep only changes one Q(s,a) value, so it takes many steps to “back up” experience very far.

21 That eligible state Basic problem: every step, Q-learning only does a one-step backup. It forgets where it was before that -- no sense of the sequence of states/actions that got it where it is now. Want a long-term memory of where the agent has been, and to update the Q values for all of those state/action pairs.

22 That eligible state Want a long-term memory of where the agent has been, and to update the Q values for all of those state/action pairs. The idea is called eligibility traces: keep a memory cell for each state/action pair, set it whenever that state/action is visited, and at each step update all eligible pairs (see the sketch below).
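
A minimal sketch of one common realization of this idea: accumulating traces e(s,a) that decay by γλ each step, with every eligible pair updated by the same TD error. It is written here in the SARSA style introduced on the next slides; the λ parameter and function names are illustrative, not taken from the slides.

```python
import numpy as np

def trace_update(Q, e, s, a, r, s_next, a_next,
                 alpha=0.1, gamma=0.9, lam=0.9):
    """One step of eligibility-trace learning on tabular Q and trace array e."""
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]   # TD error for the current step
    e[s, a] += 1.0                                    # mark (s,a) as eligible
    Q += alpha * delta * e                            # update ALL eligible pairs at once
    e *= gamma * lam                                  # decay every trace
    return Q, e
```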

23 Retrenching from Q Can integrate eligibility traces w/ Q-learning, but it's a bit of a pain: need to track when the agent is “on policy” or “off policy”, etc. Good discussion in Sutton & Barto.

24 Retrenching from Q We'll focus on a (slightly) simpler learning alg: SARSA learning. Very similar to Q-learning, but strictly on-policy: it only learns about the policy it's actually executing. E.g., it learns Q^π (the value of the policy it is following) instead of Q* (the value of the optimal policy).

25 The Q-learning algorithm
Algorithm: Q_learn
Inputs: State space S; Action space A; Discount γ (0 <= γ < 1); Learning rate α (0 <= α < 1)
Outputs: Q
Q = random(|S|, |A|)  // Initialize
Repeat {
  s = get_current_world_state()
  a = pick_next_action(Q, s)
  (r, s') = act_in_world(a)
  Q(s,a) = Q(s,a) + α*(r + γ*max_a'(Q(s',a')) - Q(s,a))
} Until (bored)

26 SARSA-learning algorithm
Algorithm: SARSA_learn
Inputs: State space S; Action space A; Discount γ (0 <= γ < 1); Learning rate α (0 <= α < 1)
Outputs: Q
Q = random(|S|, |A|)  // Initialize
s = get_current_world_state()
a = pick_next_action(Q, s)
Repeat {
  (r, s') = act_in_world(a)
  a' = pick_next_action(Q, s')
  Q(s,a) = Q(s,a) + α*(r + γ*Q(s',a') - Q(s,a))
  a = a'; s = s';
} Until (bored)

28 SARSA vs. Q SARSA and Q-learning are very similar. SARSA updates Q(s,a) for the policy it's actually executing: it lets the pick_next_action() function choose the action a' used in the update. Q-learning updates Q(s,a) for the greedy policy w.r.t. the current Q: it uses max_a' to choose the action used in the update, which might be different from the action it actually executes at s'.
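
The entire difference shows up in the update target; a minimal sketch of the two update rules side by side (variable and function names are illustrative):

```python
import numpy as np

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """On-policy: bootstrap from the action actually chosen at s'."""
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Off-policy: bootstrap from the greedy action at s',
    regardless of which action will actually be executed there."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
```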

29 SARSA vs. Q In practice: Q-learning will learn the “true” π*, but SARSA will learn about what it's actually doing. Exploration can get Q-learning in trouble...

30 Getting Q in trouble... “Cliff walking” example (Sutton & Barto, Sec 6.5)

