Presentation on theme: "Policies and exploration and eligibility, oh my!" — Presentation transcript:

1 Policies and exploration and eligibility, oh my!

2 Administrivia Reminder: R3 due Thurs. Anybody not have a group? Reminder: final project. Written report: due Thu, May 11, by noon. Oral reports: Apr 27, May 2, May 4. 15 people registered ⇒ 5 pres/session ⇒ 15 min/pres. Volunteers?

3 River of time Last time: the Q function; the Q-learning algorithm; Q-learning in action. Today: notes on writing & presenting; action selection & exploration; the off-policy property; use of experience and eligibility traces; radioactive breadcrumbs.

4 Final project writing FAQ Q: How formal a document should this be? A: Very formal. This should be as close in style to the papers we have read as possible. Pay attention to the sections that they have -- introduction, background, approach, experiments, etc. Try to establish a narrative -- “tell the story.” As always, use correct grammar, spelling, etc.

5 Final project writing FAQ Q: How long should the final report be? A: As long as necessary, but no longer. A’: I would guess that it would take ~10-15 pages to describe your work well.

6 Final project writing FAQ Q: Any particular document format? A: I prefer: 8.5”x11” paper, 1” margins, 12pt font, double-spaced (in LaTeX: \renewcommand{\baselinestretch}{1.6}), and stapled!
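A minimal LaTeX preamble that produces these settings, assuming the standard geometry and setspace packages are acceptable; the slide itself only specifies the baselinestretch line:

\documentclass[12pt]{article}
\usepackage[letterpaper,margin=1in]{geometry}  % 8.5"x11" paper, 1" margins
\usepackage{setspace}
\doublespacing  % roughly equivalent to \renewcommand{\baselinestretch}{1.6}

\begin{document}
% Introduction, background, approach, experiments, conclusions, ...
\end{document}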

7 Final project writing FAQ Q: Any other tips? A: Yes: DON’T BE VAGUE -- be as specific and concrete as possible about what you did/what other people did/etc.

8 The Q-learning algorithm
Algorithm: Q_learn
Inputs: state space S; action space A; discount γ (0 <= γ < 1); learning rate α (0 <= α < 1)
Outputs: Q
Repeat {
  s = get_current_world_state()
  a = pick_next_action(Q, s)
  (r, s') = act_in_world(a)
  Q(s, a) = Q(s, a) + α*(r + γ*max_a'(Q(s', a')) - Q(s, a))
} Until (bored)
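A minimal tabular Q-learning sketch in Python, following the pseudocode above; the environment functions get_current_world_state() and act_in_world() are passed in (their implementations are not given on the slides), and the ε-greedy pick_next_action used here anticipates the exploration discussion on later slides:

import random
from collections import defaultdict

def q_learn(actions, get_current_world_state, act_in_world,
            gamma=0.9, alpha=0.1, epsilon=0.1, num_steps=10000):
    """Tabular Q-learning, following the slide's pseudocode."""
    Q = defaultdict(float)  # Q[(s, a)], initialized to 0

    def pick_next_action(Q, s):
        # epsilon-greedy action selection (see the exploration slides)
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])

    for _ in range(num_steps):  # "Repeat ... Until (bored)"
        s = get_current_world_state()
        a = pick_next_action(Q, s)
        r, s_next = act_in_world(a)
        td_target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
        Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    return Q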

9 Why does this work? Still... why should that weighted avg be the right thing? Compare w/ the Bellman eqn (see the equations after slide 10).

10 Why does this work? Still... why should that weighted avg be the right thing? Compare w/ the Bellman eqn (see the equations below). I.e., the update is based on a single sample from the true transition distribution, T, rather than the full expectation over T used in the Bellman eqn / policy iteration alg. The first time the agent finds a rewarding state, s_r, part of that reward gets propagated back by one step via the Q update to s_{r-1}, a state one step away from s_r. The next time, the state two steps away from s_r is updated, and so on...
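A sketch of the comparison in standard notation, with the reward written as R(s, a, s'); the slide's equation images may have used slightly different symbols:

% Bellman equation for Q*: a full expectation over the transition model T
Q^*(s,a) = \sum_{s'} T(s,a,s') \Bigl[ R(s,a,s') + \gamma \max_{a'} Q^*(s',a') \Bigr]

% Q-learning update: the same backup, but using a single sampled (r, s')
Q(s,a) \leftarrow Q(s,a) + \alpha \Bigl[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \Bigr]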

11 Picking the action One critical step underspecified in the Q_learn alg: a = pick_next_action(Q, s). How should you pick an action at each step?

12 Picking the action One critical step underspecified in the Q_learn alg: a = pick_next_action(Q, s). How should you pick an action at each step? Could pick greedily according to Q. Might tend to keep doing the same thing and not explore at all. Need to force exploration.

13 Picking the action One critical step underspecified in the Q_learn alg: a = pick_next_action(Q, s). How should you pick an action at each step? Could pick greedily according to Q. Might tend to keep doing the same thing and not explore at all. Need to force exploration. Could pick an action at random. Ignores everything you’ve learned about Q so far. Would you still converge?

14 Off-policy learning Exploit a critical property of the Q_learn alg. Lemma (w/o proof): the Q-learning algorithm will converge to the correct Q* independently of the policy being executed, so long as (1) every (s,a) pair is visited infinitely often in the infinite limit, and (2) the learning rate α is chosen to be small enough (usually decayed).
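The usual precise form of the step-size condition, stated here as the standard stochastic-approximation requirement (the slide only says “small enough, usually decayed”); α_t(s,a) denotes the learning rate used at the t-th update of the pair (s,a):

\sum_{t=1}^{\infty} \alpha_t(s,a) = \infty
\qquad \text{and} \qquad
\sum_{t=1}^{\infty} \alpha_t(s,a)^2 < \infty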

15 Off-policy learning I.e., Q-learning doesn’t care what policy is being executed -- it will still converge. Called an off-policy method: the policy being learned can be different from the policy being executed. The off-policy property tells us: we’re free to pick any policy we like to explore, so long as we guarantee infinite visits to each (s,a) pair. Might as well choose one that does (mostly) as well as we know how to do at each step.

16 “Almost greedy” exploring Can’t be just greedy w.r.t. Q (why?). Typical answers: ε-greedy: execute argmax_a{Q(s,a)} w/ prob (1-ε) and a random action w/ prob ε. Boltzmann exploration: pick action a w/ prob proportional to exp(Q(s,a)/τ), where τ is a temperature parameter (see the sketch below).
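A short Python sketch of both selection rules; the temperature parameter tau and the function names are illustrative choices, since the slide only names the two strategies:

import math
import random

def epsilon_greedy(Q, s, actions, epsilon=0.1):
    # With probability epsilon take a random action, otherwise the greedy one.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

def boltzmann(Q, s, actions, tau=1.0):
    # P(a) proportional to exp(Q(s,a)/tau); small tau approaches greedy,
    # large tau approaches uniform random. Subtract the max for stability.
    qmax = max(Q[(s, a)] for a in actions)
    weights = [math.exp((Q[(s, a)] - qmax) / tau) for a in actions]
    total = sum(weights)
    return random.choices(actions, weights=[w / total for w in weights])[0]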

17 The value of experience We observed that Q-learning converges slooooooowly... The same is true of many other RL algs. But we can do better (sometimes by orders of magnitude). What’re the biggest hurdles to Q convergence?

18 The value of experience We observed that Q-learning converges slooooooowly... The same is true of many other RL algs. But we can do better (sometimes by orders of magnitude). What’re the biggest hurdles to Q convergence? Well, there are many. The big one, though, is poor use of experience: each timestep only changes one Q(s,a) value, so it takes many steps to “back up” experience very far.

19 That eligible state Basic problem: every step, Q only does a one-step backup. It forgets where it was before that -- no sense of the sequence of states/actions that got it where it is now. Want to have a long-term memory of where the agent has been, and to update the Q values for all of those states.

20 That eligible state Want to have a long-term memory of where the agent has been, and to update the Q values for all of them. The idea is called eligibility traces: have a memory cell for each state/action pair, set the memory when that state/action is visited, and at each step update all eligible states.

21 Retrenching from Q Can integrate eligibility traces w/ Q-learning, but it’s a bit of a pain: need to track when the agent is “on policy” or “off policy”, etc. Good discussion in Sutton & Barto.

22 Retrenching from Q We’ll focus on a (slightly) simpler learning alg: SARSA learning. Very similar to Q-learning, but strictly on-policy: it only learns about the policy it’s actually executing. E.g., it learns Q^π instead of Q* (see the sketch below).
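For reference, a sketch of the two quantities in standard notation (the slide showed them as equation images, so the exact notation here is an assumption):

% Value of the policy pi actually being executed:
Q^{\pi}(s,a) = \mathbb{E}\Bigl[ \sum_{t=0}^{\infty} \gamma^{t} r_t \;\Big|\; s_0 = s,\; a_0 = a,\; a_t \sim \pi \text{ for } t \ge 1 \Bigr]

% Value of the optimal policy:
Q^{*}(s,a) = \max_{\pi} Q^{\pi}(s,a)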

23 The Q-learning algorithm
Algorithm: Q_learn
Inputs: state space S; action space A; discount γ (0 <= γ < 1); learning rate α (0 <= α < 1)
Outputs: Q
Repeat {
  s = get_current_world_state()
  a = pick_next_action(Q, s)
  (r, s') = act_in_world(a)
  Q(s, a) = Q(s, a) + α*(r + γ*max_a'(Q(s', a')) - Q(s, a))
} Until (bored)

24 SARSA-learning algorithm
Algorithm: SARSA_learn
Inputs: state space S; action space A; discount γ (0 <= γ < 1); learning rate α (0 <= α < 1)
Outputs: Q
s = get_current_world_state()
a = pick_next_action(Q, s)
Repeat {
  (r, s') = act_in_world(a)
  a' = pick_next_action(Q, s')
  Q(s, a) = Q(s, a) + α*(r + γ*Q(s', a') - Q(s, a))
  a = a'; s = s'
} Until (bored)
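A parallel SARSA sketch in Python; as with the Q-learning sketch above, the environment interface and the exploration policy pick_next_action are assumed, not specified by the slide:

import random
from collections import defaultdict

def sarsa_learn(actions, get_current_world_state, act_in_world,
                pick_next_action, gamma=0.9, alpha=0.1, num_steps=10000):
    """Tabular SARSA: the update uses the action a' actually chosen at s',
    not the greedy max -- that is the on-policy difference from Q-learning."""
    Q = defaultdict(float)
    s = get_current_world_state()
    a = pick_next_action(Q, s)
    for _ in range(num_steps):
        r, s_next = act_in_world(a)
        a_next = pick_next_action(Q, s_next)
        Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])
        s, a = s_next, a_next
    return Q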

25 SARSA vs. Q SARSA and Q-learning are very similar. SARSA updates Q(s,a) for the policy it’s actually executing: it lets the pick_next_action() function pick the action whose value is backed up. Q-learning updates Q(s,a) for the greedy policy w.r.t. the current Q: it uses max_a' to pick the action whose value is backed up, which might be different from the action it actually executes at s'. In practice: Q-learning will learn the “true” π*, but SARSA will learn about what it’s actually doing. Exploration can get Q-learning in trouble...

26 Getting Q in trouble... “Cliff walking” example (Sutton & Barto, Sec. 6.5): a gridworld where the agent must travel from a start state to a goal state along the edge of a cliff; each step costs -1, and stepping off the cliff costs -100 and sends the agent back to the start.

27 Getting Q in trouble... In the cliff-walking example, Q-learning learns the optimal path right along the cliff edge, but with ε-greedy exploration it occasionally steps off the cliff while executing; SARSA learns the longer, safer path and therefore earns more reward online.
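A minimal sketch of the cliff-walking gridworld in Python, using the layout from Sutton & Barto's example (4 rows by 12 columns); the class and method names here are my own, not from the slides:

class CliffWalk:
    """4x12 gridworld: start at bottom-left, goal at bottom-right,
    and the bottom-row cells between them are the cliff."""
    ROWS, COLS = 4, 12
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def __init__(self):
        self.pos = (self.ROWS - 1, 0)  # start state

    def step(self, a):
        """Take action index a; return (reward, next_state, done)."""
        dr, dc = self.ACTIONS[a]
        r = min(max(self.pos[0] + dr, 0), self.ROWS - 1)
        c = min(max(self.pos[1] + dc, 0), self.COLS - 1)
        if r == self.ROWS - 1 and 0 < c < self.COLS - 1:
            self.pos = (self.ROWS - 1, 0)  # fell off the cliff: back to start
            return -100, self.pos, False
        self.pos = (r, c)
        done = (r == self.ROWS - 1 and c == self.COLS - 1)  # reached the goal
        return -1, self.pos, done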

28 Radioactive breadcrumbs Can now define eligibility traces for SARSA. In addition to the Q(s,a) table, keep an e(s,a) table that records the “eligibility” (a real number) of each state/action pair. At every step (i.e., each (s,a,r,s',a') tuple): increment e(s,a) for the current (s,a) pair by 1; update all Q(s'',a'') vals in proportion to their e(s'',a''); decay all e(s'',a'') by a factor of γλ. Leslie Kaelbling calls this the “radioactive breadcrumbs” form of RL.

29 SARSA(λ)-learning alg.
Algorithm: SARSA(λ)_learn
Inputs: S, A, γ (0 <= γ < 1), α (0 <= α < 1), λ (0 <= λ < 1)
Outputs: Q
e(s, a) = 0 // for all s, a
s = get_curr_world_st(); a = pick_nxt_act(Q, s)
Repeat {
  (r, s') = act_in_world(a)
  a' = pick_next_action(Q, s')
  δ = r + γ*Q(s', a') - Q(s, a)
  e(s, a) += 1
  foreach (s'', a'') pair in (S X A) {
    Q(s'', a'') = Q(s'', a'') + α*e(s'', a'')*δ
    e(s'', a'') *= γ*λ
  }
  a = a'; s = s'
} Until (bored)
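A Python sketch of SARSA(λ) with accumulating traces, mirroring the pseudocode above; the environment interface and pick_next_action are assumed as before, and the γλ trace decay follows the standard Sutton & Barto formulation:

import random
from collections import defaultdict

def sarsa_lambda_learn(actions, get_current_world_state, act_in_world,
                       pick_next_action, gamma=0.9, alpha=0.1, lam=0.8,
                       num_steps=10000):
    """Tabular SARSA(lambda) with accumulating eligibility traces."""
    Q = defaultdict(float)  # Q[(s, a)]
    e = defaultdict(float)  # eligibility trace e[(s, a)]
    s = get_current_world_state()
    a = pick_next_action(Q, s)
    for _ in range(num_steps):
        r, s_next = act_in_world(a)
        a_next = pick_next_action(Q, s_next)
        delta = r + gamma * Q[(s_next, a_next)] - Q[(s, a)]
        e[(s, a)] += 1.0                  # drop a "radioactive breadcrumb"
        for sa in list(e.keys()):         # update every eligible pair
            Q[sa] += alpha * e[sa] * delta
            e[sa] *= gamma * lam          # decay the trace
        s, a = s_next, a_next
    return Q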

