Presentation on theme: "R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction: Formulating MDPs, Rewards, Returns, Values, Escalator, Elevators."— Presentation transcript:

1 Formulating MDPs
- Formulating MDPs: rewards, returns, values.
- Example domains: escalator, elevators, parking, object recognition, parallel parking, ship steering, bioreactor, helicopter, Go, protein sequences, football, baseball, traveling salesman, Quake, scene interpretation, robot walking, portfolio management, stock index prediction.

2 Chapter 7: Eligibility Traces

3 N-step TD Prediction
- Idea: look farther into the future when you do a TD backup (1, 2, 3, …, n steps).

4 Mathematics of N-step TD Prediction
- Monte Carlo: R_t = r_{t+1} + γ r_{t+2} + γ^2 r_{t+3} + … + γ^{T-t-1} r_T
- TD: R_t^{(1)} = r_{t+1} + γ V_t(s_{t+1}) (use V to estimate the remaining return)
- n-step TD:
  2-step return: R_t^{(2)} = r_{t+1} + γ r_{t+2} + γ^2 V_t(s_{t+2})
  n-step return: R_t^{(n)} = r_{t+1} + γ r_{t+2} + … + γ^{n-1} r_{t+n} + γ^n V_t(s_{t+n})
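As a concrete check of the n-step return above, here is a minimal Python sketch; the function name, the list-of-rewards episode format, and the tabular value array V are illustrative assumptions, not code from the book or the slides:

```python
def n_step_return(rewards, states, V, t, n, gamma):
    """R_t^(n): n discounted rewards starting at step t, plus the discounted
    value estimate of the state reached n steps later (truncated at episode end).
    rewards[k] plays the role of r_{k+1} in the slide's notation."""
    T = len(rewards)                         # number of steps in the episode
    end = min(t + n, T)
    G = sum(gamma ** (k - t) * rewards[k] for k in range(t, end))
    if t + n < T:                            # bootstrap only if the episode is still running
        G += gamma ** n * V[states[t + n]]
    return G
```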

5 Learning with N-step Backups
- Backup (on-line or off-line): ΔV_t(s_t) = α [R_t^{(n)} - V_t(s_t)]
- Error reduction property of n-step returns: the maximum error using the n-step return is at most γ^n times the maximum error using V.
- Using this, you can show that n-step methods converge.
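Spelled out in the chapter's notation, the error reduction property referred to here is the following bound (a reconstruction, since the slide's formula is an image):

```latex
\max_s \left| \mathbb{E}_\pi\!\left[ R_t^{(n)} \mid s_t = s \right] - V^{\pi}(s) \right|
\;\le\; \gamma^{n} \, \max_s \left| V_t(s) - V^{\pi}(s) \right|
```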

6 Random Walk Examples
- How does 2-step TD work here?
- How about 3-step TD?

7 A Larger Example
- Task: 19-state random walk.
- Do you think there is an optimal n? For everything?

8 Averaging N-step Returns
- n-step methods were introduced to help with understanding TD(λ).
- Idea: back up an average of several returns, e.g. half of the 2-step return and half of the 4-step return: R_t^{avg} = ½ R_t^{(2)} + ½ R_t^{(4)}
- This is called a complex backup. To draw its backup diagram: draw each component, label each with its weight, and show the whole thing as one backup.

9 Forward View of TD(λ)
- TD(λ) is a method for averaging all n-step backups, weighting each n-step return by λ^{n-1} (time since visitation).
- λ-return: R_t^λ = (1 - λ) Σ_{n=1}^{∞} λ^{n-1} R_t^{(n)}
- Backup using the λ-return: ΔV_t(s_t) = α [R_t^λ - V_t(s_t)]
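Building on the hypothetical n_step_return helper sketched earlier, the forward-view λ-return can be computed directly as a weighted average of n-step returns:

```python
def lambda_return(rewards, states, V, t, gamma, lam):
    """Forward-view λ-return: a (1 - λ)-weighted average of the n-step returns,
    with the leftover weight λ^(T-t-1) going to the complete return."""
    T = len(rewards)
    G = 0.0
    for n in range(1, T - t):                # n-step returns that bootstrap before the end
        G += (1 - lam) * lam ** (n - 1) * n_step_return(rewards, states, V, t, n, gamma)
    G += lam ** (T - t - 1) * n_step_return(rewards, states, V, t, T - t, gamma)
    return G
```

With lam = 0 only the one-step return keeps any weight, and with lam = 1 everything collapses onto the complete (Monte Carlo) return, matching the "Relation to TD(0) and MC" slide below.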

10 λ-return Weighting Function
- Until termination, each n-step return is weighted by (1 - λ)λ^{n-1}; after termination, all remaining weight, λ^{T-t-1}, goes to the complete return:
  R_t^λ = (1 - λ) Σ_{n=1}^{T-t-1} λ^{n-1} R_t^{(n)} + λ^{T-t-1} R_t

11 Relation to TD(0) and MC
- The λ-return can be rewritten as: R_t^λ = (1 - λ) Σ_{n=1}^{T-t-1} λ^{n-1} R_t^{(n)} + λ^{T-t-1} R_t
- If λ = 1, you get MC: R_t^λ = R_t
- If λ = 0, you get TD(0): R_t^λ = R_t^{(1)} = r_{t+1} + γ V_t(s_{t+1})

12 Forward View of TD(λ) II
- Look forward from each state to determine its update from future states and rewards.

13 λ-return on the Random Walk
- Same 19-state random walk as before.
- Why do you think intermediate values of λ are best?

14 Backward View
- Shout δ_t backwards over time.
- The strength of your voice decreases with temporal distance by γλ.

15 Backward View of TD(λ)
- The forward view was for theory; the backward view is for mechanism.
- New variable called the eligibility trace, e_t(s) ≥ 0.
- On each step, decay all traces by γλ and increment the trace for the current state by 1 (an accumulating trace):
  e_t(s) = γλ e_{t-1}(s) + 1 if s = s_t, and e_t(s) = γλ e_{t-1}(s) otherwise.

16 On-line Tabular TD(λ)
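The algorithm box on this slide is an image in the original deck. A minimal sketch of on-line tabular TD(λ) with accumulating traces might look as follows; the env.reset()/env.step() interface and the policy callable are assumptions for illustration:

```python
import numpy as np

def td_lambda(env, policy, n_states, episodes, alpha=0.1, gamma=1.0, lam=0.9):
    """On-line tabular TD(lambda) prediction with accumulating traces."""
    V = np.zeros(n_states)
    for _ in range(episodes):
        e = np.zeros(n_states)              # eligibility traces, reset each episode
        s = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, done = env.step(a)   # assumed to return (next state, reward, done)
            delta = r + (0.0 if done else gamma * V[s_next]) - V[s]
            e[s] += 1.0                     # accumulating trace for the visited state
            V += alpha * delta * e          # back up every state in proportion to its trace
            e *= gamma * lam                # decay all traces toward zero
            s = s_next
    return V
```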

17 Relation of Backward View to MC & TD(0)
- Using the update rule ΔV_t(s) = α δ_t e_t(s):
- As before, if you set λ to 0, you get TD(0).
- If you set λ to 1, you get MC, but in a better way:
  - it can be applied to continuing tasks;
  - it works incrementally and on-line (instead of waiting until the end of the episode).

18 Forward View = Backward View
- The forward (theoretical) view of TD(λ) is equivalent to the backward (mechanistic) view for off-line updating.
- The book shows that the sum of the backward-view updates over an episode equals the sum of the forward-view updates (algebra shown in the book).
- On-line updating with small α is similar.

19 On-line versus Off-line on the Random Walk
- Same 19-state random walk.
- On-line performs better over a broader range of parameters.

20 Control: Sarsa(λ)
- Save an eligibility trace for each state-action pair instead of just for each state.

21 Sarsa(λ) Algorithm
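Again, the algorithm box itself is an image in the original; a sketch of tabular Sarsa(λ) with accumulating traces and an ε-greedy policy, under the same assumed environment interface:

```python
import numpy as np

def sarsa_lambda(env, n_states, n_actions, episodes,
                 alpha=0.05, gamma=0.9, lam=0.9, epsilon=0.05):
    """Tabular Sarsa(lambda) with accumulating traces and an epsilon-greedy policy."""
    Q = np.zeros((n_states, n_actions))

    def eps_greedy(s):
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)
        return int(np.argmax(Q[s]))

    for _ in range(episodes):
        e = np.zeros_like(Q)                # one trace per state-action pair
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = eps_greedy(s_next)
            target = r if done else r + gamma * Q[s_next, a_next]
            delta = target - Q[s, a]
            e[s, a] += 1.0                  # accumulating trace
            Q += alpha * delta * e          # update all pairs in proportion to their traces
            e *= gamma * lam                # decay all traces
            s, a = s_next, a_next
    return Q
```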

22 Sarsa(λ) Gridworld Example
- With one trial, the agent has much more information about how to get to the goal (though not necessarily the best way).
- Traces can considerably accelerate learning.

23 Three Approaches to Q(λ)
- How can we extend this to Q-learning?
- If you mark every state-action pair as eligible, you back up over the non-greedy policy.
- Watkins: zero out the eligibility trace after a non-greedy action; do the max when backing up at the first non-greedy choice.

24 Watkins's Q(λ)
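This algorithm box is also an image; a sketch of Watkins's Q(λ) under the same assumed interface, where the distinctive step is cutting the traces whenever the action actually taken is not greedy:

```python
import numpy as np

def watkins_q_lambda(env, n_states, n_actions, episodes,
                     alpha=0.05, gamma=0.9, lam=0.9, epsilon=0.05):
    """Watkins's Q(lambda): traces are cut (zeroed) after any non-greedy action."""
    Q = np.zeros((n_states, n_actions))

    def eps_greedy(s):
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)
        return int(np.argmax(Q[s]))

    for _ in range(episodes):
        e = np.zeros_like(Q)
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = eps_greedy(s_next)
            a_star = int(np.argmax(Q[s_next]))          # greedy action in the next state
            target = r if done else r + gamma * Q[s_next, a_star]
            delta = target - Q[s, a]                    # Q-learning-style TD error
            e[s, a] += 1.0
            Q += alpha * delta * e
            if Q[s_next, a_next] == Q[s_next, a_star]:  # next action is (tied-)greedy
                e *= gamma * lam                        # keep decaying the traces
            else:
                e[:] = 0.0                              # cut traces after exploration
            s, a = s_next, a_next
    return Q
```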

25 Peng's Q(λ)
- Disadvantage of Watkins's method: early in learning, the eligibility trace will be "cut" (zeroed out) frequently, resulting in little advantage from traces.
- Peng: back up the max action except at the end; never cut traces.
- Disadvantage: complicated to implement.

26 Naïve Q(λ)
- Idea: is it really a problem to back up exploratory actions?
- Never zero the traces.
- Always back up the max at the current action (unlike Peng's or Watkins's).
- Is this truly naïve?
- Works well in preliminary empirical studies.
- What is the backup diagram?

27 Comparison Task
- Compared Watkins's, Peng's, and Naïve (called McGovern's here) Q(λ) on several tasks. See McGovern and Sutton (1997), "Towards a Better Q(λ)", for other tasks and results (stochastic tasks, continuing tasks, etc.).
- Deterministic gridworld with obstacles: 10x10 gridworld, 25 randomly generated obstacles, 30 runs.
- Parameters: α = 0.05, γ = 0.9, λ = 0.9, ε = 0.05, accumulating traces.

28 Comparison Results
- From McGovern and Sutton (1997), "Towards a Better Q(λ)".

29 Convergence of the Q(λ)'s
- None of the methods is proven to converge; much extra credit if you can prove any of them.
- Watkins's is thought to converge to Q*.
- Peng's is thought to converge to a mixture of Q^π and Q*.
- Naïve: to Q*?

30 Eligibility Traces for Actor-Critic Methods
- Critic: on-policy learning of V^π; use TD(λ) as described before.
- Actor: needs eligibility traces for each state-action pair.
- We change the actor's update equation, and the other actor-critic update, to use these traces (see the equations sketched below).
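The update equations on this slide are images in the original; the following is a reconstruction in the chapter's actor-critic notation, where p is the actor's action-preference table and δ_t is the critic's TD error, not a verbatim copy of the slide:

```latex
% Actor update, applied to all state-action pairs:
p_{t+1}(s,a) = p_t(s,a) + \alpha \, \delta_t \, e_t(s,a)

% Accumulating trace for the basic actor update:
e_t(s,a) =
\begin{cases}
\gamma \lambda \, e_{t-1}(s,a) + 1 & \text{if } s = s_t \text{ and } a = a_t \\
\gamma \lambda \, e_{t-1}(s,a) & \text{otherwise}
\end{cases}

% For the second actor-critic update, the increment is weighted by 1 - \pi_t(s_t, a_t):
e_t(s,a) =
\begin{cases}
\gamma \lambda \, e_{t-1}(s,a) + 1 - \pi_t(s_t, a_t) & \text{if } s = s_t \text{ and } a = a_t \\
\gamma \lambda \, e_{t-1}(s,a) & \text{otherwise}
\end{cases}
```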

31 Replacing Traces
- With accumulating traces, frequently visited states can have eligibilities greater than 1; this can be a problem for convergence.
- Replacing traces: instead of adding 1 when you visit a state, set that state's trace to 1.
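In code the two trace styles differ by a single line; a tiny self-contained illustration (the array size and constants are arbitrary placeholders):

```python
import numpy as np

e = np.zeros(21)             # traces for a small tabular problem
gamma, lam, s = 1.0, 0.9, 3  # s = the state just visited

# accumulating trace: add 1 on top of the decayed trace of the visited state
e *= gamma * lam
e[s] += 1.0

# replacing trace: reset the visited state's trace to exactly 1 instead
e *= gamma * lam
e[s] = 1.0
```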

32 Replacing Traces Example
- Same 19-state random walk task as before.
- Replacing traces perform better than accumulating traces over more values of λ.

33 Why Replacing Traces?
- Replacing traces can significantly speed learning.
- They can make the system perform well for a broader set of parameters.
- Accumulating traces can do poorly on certain types of tasks. Why is this task particularly onerous for accumulating traces?

34 More Replacing Traces
- Off-line TD(1) with replacing traces is identical to first-visit MC.
- Extension to action values: when you revisit a state, what should you do with the traces for the other actions? Singh and Sutton say to set them to zero.
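Written out, the Singh-and-Sutton choice the slide describes corresponds to the following trace (a reconstruction in the chapter's notation, since the slide's equation is an image):

```latex
e_t(s,a) =
\begin{cases}
1 & \text{if } s = s_t \text{ and } a = a_t \\
0 & \text{if } s = s_t \text{ and } a \neq a_t \\
\gamma \lambda \, e_{t-1}(s,a) & \text{if } s \neq s_t
\end{cases}
```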

35 Implementation Issues with Traces
- Traces could require much more computation, but most eligibility traces are VERY close to zero.
- If you implement it in Matlab, the backup is only one line of code and is very fast (Matlab is optimized for matrix operations).
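The same one-line vectorized backup the slide attributes to Matlab is equally natural in NumPy; a toy, self-contained illustration in which all values are placeholders:

```python
import numpy as np

V = np.zeros(21)             # state values
e = np.zeros(21)             # eligibility traces (mostly near zero in practice)
alpha, gamma, lam = 0.1, 1.0, 0.9
delta = 0.5                  # whatever TD error the current step produced

V += alpha * delta * e       # the entire backup: one vectorized line
e *= gamma * lam             # and one more line to decay the traces
```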

36 Variable λ
- Can generalize to a variable λ, where λ_t is a function of time.
- Could define, for example, λ_t = λ(s_t), a different λ for each state.

37 Conclusions
- Eligibility traces provide an efficient, incremental way to combine MC and TD:
  - they include the advantages of MC (can deal with a lack of the Markov property);
  - they include the advantages of TD (use the TD error, bootstrap).
- They can significantly speed learning.
- They do have a cost in computation.

38 The Two Views

