Presentation transcript — R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, Chapter 5: Monte Carlo Methods

1 Chapter 5: Monte Carlo Methods
- Monte Carlo methods learn from complete sample returns
  - Only defined for episodic tasks
- Monte Carlo methods learn directly from experience
  - On-line: No model necessary and still attains optimality
  - Simulated: No need for a full model

2 Monte Carlo Policy Evaluation
- Goal: learn V^π(s)
- Given: some number of episodes under π which contain s
- Idea: Average returns observed after visits to s
- Every-Visit MC: average returns for every time s is visited in an episode
- First-Visit MC: average returns only for the first time s is visited in an episode
- Both converge asymptotically

3 First-visit Monte Carlo policy evaluation
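This slide shows the boxed algorithm from the book, which the transcript does not reproduce. As a stand-in, here is a minimal Python sketch of first-visit MC prediction for a fixed policy; the `generate_episode` helper, the episode format, and the function names are illustrative assumptions, not taken from the slides.

```python
# First-visit Monte Carlo prediction for a fixed policy (sketch).
# generate_episode(policy) is an assumed helper returning [(s0, r1), (s1, r2), ...],
# where r_{t+1} is the reward received after visiting s_t.
from collections import defaultdict

def first_visit_mc_prediction(policy, generate_episode, num_episodes, gamma=1.0):
    returns_sum = defaultdict(float)
    returns_count = defaultdict(int)
    V = defaultdict(float)

    for _ in range(num_episodes):
        episode = generate_episode(policy)
        states = [s for s, _ in episode]
        G = 0.0
        for t in reversed(range(len(episode))):    # walk backwards, accumulating the return G
            s, r = episode[t]
            G = gamma * G + r
            if s not in states[:t]:                # first visit of s in this episode
                returns_sum[s] += G
                returns_count[s] += 1
                V[s] = returns_sum[s] / returns_count[s]
    return V
```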

4 Blackjack example
- Object: Have your card sum be greater than the dealer's without exceeding 21.
- States (200 of them):
  - current sum (12-21)
  - dealer's showing card (ace-10)
  - do I have a usable ace?
- Reward: +1 for winning, 0 for a draw, -1 for losing
- Actions: stick (stop receiving cards), hit (receive another card)
- Policy: Stick if my sum is 20 or 21, else hit
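The state encoding and the fixed policy above can be written down directly. A small sketch (the tuple layout, the ace-as-1 coding, and the constant names are illustrative assumptions):

```python
# Blackjack state as listed on the slide: (player_sum, dealer_showing, usable_ace),
# with player_sum in 12..21, dealer_showing in 1..10 (ace coded as 1, an assumption),
# and usable_ace a bool. Action constants are illustrative.
HIT, STICK = 0, 1

def threshold_policy(state):
    """The slide's fixed policy: stick only on 20 or 21, otherwise hit."""
    player_sum, dealer_showing, usable_ace = state
    return STICK if player_sum >= 20 else HIT
```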

5 Blackjack value functions

6 Backup diagram for Monte Carlo
- Entire episode included
- Only one choice at each state (unlike DP)
- MC does not bootstrap
- Time required to estimate one state does not depend on the total number of states

7 The Power of Monte Carlo
- e.g., Elastic Membrane (Dirichlet Problem)
- How do we compute the shape of the membrane or bubble?

8 Two Approaches
- Relaxation
- Kakutani's algorithm, 1945
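Kakutani's algorithm estimates the membrane height at an interior point as the average boundary value reached by random walks started from that point. A rough sketch of that idea on a square grid; `is_boundary` and `boundary_value` are assumed, problem-specific callables, not part of the slides.

```python
# Kakutani-style Monte Carlo estimate of the membrane height at one interior point:
# average the boundary values reached by independent random walks started there.
import random

def mc_dirichlet_value(start, is_boundary, boundary_value, n_walks=1000):
    total = 0.0
    for _ in range(n_walks):
        x, y = start
        while not is_boundary((x, y)):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])  # unbiased step
            x, y = x + dx, y + dy
        total += boundary_value((x, y))
    return total / n_walks
```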

9 Monte Carlo Estimation of Action Values (Q)
- Monte Carlo is most useful when a model is not available
  - We want to learn Q*
- Q^π(s,a): average return starting from state s and action a, then following π
- Also converges asymptotically if every state-action pair is visited
- Exploring starts: Every state-action pair has a non-zero probability of being the starting pair

10 Monte Carlo Control
- MC policy iteration: Policy evaluation using MC methods followed by policy improvement
- Policy improvement step: greedify with respect to value (or action-value) function

11 Convergence of MC Control
- Greedified policy meets the conditions for policy improvement: Q^{π_k}(s, π_{k+1}(s)) = max_a Q^{π_k}(s,a) ≥ Q^{π_k}(s, π_k(s)) = V^{π_k}(s)
- And thus must be ≥ π_k by the policy improvement theorem
- This assumes exploring starts and an infinite number of episodes for MC policy evaluation
- To solve the latter:
  - update only to a given level of performance
  - alternate between evaluation and improvement per episode

12 Monte Carlo Exploring Starts
- Fixed point is the optimal policy π*
- Now proven (almost)
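The Monte Carlo ES algorithm shown on this slide is a boxed figure in the book and is omitted from the transcript. Below is a condensed Python sketch of the same idea: first-visit sample averages for Q plus greedification at every visited state. The `generate_episode_es` helper and the data layout are illustrative assumptions.

```python
# Condensed Monte Carlo ES sketch. generate_episode_es(policy) is assumed to start
# from a uniformly random (state, action) pair (exploring starts) and return
# [(s0, a0, r1), (s1, a1, r2), ...].
from collections import defaultdict

def mc_exploring_starts(generate_episode_es, actions, num_episodes, gamma=1.0):
    Q = defaultdict(float)
    counts = defaultdict(int)
    policy = {}                                        # greedy policy, filled in lazily

    for _ in range(num_episodes):
        episode = generate_episode_es(policy)
        sa_pairs = [(s, a) for s, a, _ in episode]
        G = 0.0
        for t in reversed(range(len(episode))):
            s, a, r = episode[t]
            G = gamma * G + r
            if (s, a) not in sa_pairs[:t]:             # first-visit check
                counts[(s, a)] += 1
                Q[(s, a)] += (G - Q[(s, a)]) / counts[(s, a)]             # sample average
                policy[s] = max(actions, key=lambda act: Q[(s, act)])     # greedify at s
    return Q, policy
```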

13 Blackjack example continued
- Exploring starts
- Initial policy as described before

14 On-policy Monte Carlo Control
(figure: ε-soft action probabilities for the greedy and non-max actions)
- On-policy: learn about the policy currently executing
- How do we get rid of exploring starts?
  - Need soft policies: π(s,a) > 0 for all s and a
  - e.g., ε-soft policy
- Similar to GPI: move policy towards greedy policy (i.e., ε-soft)
- Converges to best ε-soft policy

15 On-policy MC Control
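The boxed on-policy algorithm is not reproduced in the transcript. The piece that replaces exploring starts is the ε-soft (ε-greedy) action distribution: the greedy action gets probability 1 - ε + ε/|A(s)| and every other action gets ε/|A(s)|. A small sketch of just that piece (function names are illustrative; Q is any mapping from (state, action) to estimated value, e.g. a defaultdict(float)):

```python
# ε-soft action selection used by on-policy MC control (sketch; names illustrative).
import random

def epsilon_greedy_probs(Q, state, actions, epsilon):
    """Greedy action gets 1 - ε + ε/|A(s)|; every other action gets ε/|A(s)|."""
    greedy = max(actions, key=lambda a: Q[(state, a)])
    probs = {a: epsilon / len(actions) for a in actions}
    probs[greedy] += 1.0 - epsilon
    return probs

def sample_action(Q, state, actions, epsilon=0.1):
    probs = epsilon_greedy_probs(Q, state, actions, epsilon)
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]
```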

16 Off-policy Monte Carlo control
- Behavior policy generates behavior in the environment
- Estimation policy is the policy being learned about
- Weight returns from the behavior policy by the relative probability of the trajectory under the estimation and behavior policies (importance sampling)

17 Learning about π while following π′

18 Off-policy MC control
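The off-policy algorithm box is likewise omitted from the transcript. Here is a sketch of one weighted importance-sampling pass over an episode generated by the behavior policy, processed backwards; the slides' exact algorithm may differ (the first edition truncates at the last non-greedy action), and the probability helpers and names are assumptions.

```python
# One weighted importance-sampling pass over an episode from the behavior policy.
# target_prob(s, a) and behavior_prob(s, a) return the action probabilities under
# the estimation (target) and behavior policies; Q and C are defaultdict(float)
# accumulators shared across episodes. Sketch only.
def off_policy_mc_update(Q, C, episode, target_prob, behavior_prob, gamma=1.0):
    G, W = 0.0, 1.0
    for s, a, r in reversed(episode):                  # episode: [(s0, a0, r1), ...]
        G = gamma * G + r
        C[(s, a)] += W
        Q[(s, a)] += (W / C[(s, a)]) * (G - Q[(s, a)])   # weighted-average update
        W *= target_prob(s, a) / behavior_prob(s, a)     # importance-sampling ratio
        if W == 0.0:
            break      # target policy never takes this action; earlier steps get zero weight
    return Q, C
```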

19 Incremental Implementation
- MC can be implemented incrementally
  - saves memory
- Compute the weighted average of each return (non-incremental form vs. incremental equivalent, shown below)
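The two formulas the slide refers to appear as images in the original and are not in the transcript. A reconstruction in the book's standard form (subscript conventions are mine, not copied from the slide):

```latex
% Non-incremental weighted average of returns G_1..G_n with weights W_1..W_n:
V_n \;=\; \frac{\sum_{k=1}^{n} W_k G_k}{\sum_{k=1}^{n} W_k}

% Incremental equivalent (saves memory: keep only V_n and the cumulative weight C_n):
V_{n+1} \;=\; V_n + \frac{W_{n+1}}{C_{n+1}}\,\bigl(G_{n+1} - V_n\bigr),
\qquad C_{n+1} = C_n + W_{n+1}, \quad C_0 = 0
```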

20 Racetrack Exercise
- States: grid squares, horizontal and vertical velocity
- Rewards: -1 on track, -5 off track
- Actions: +1, -1, 0 to each velocity component
- 0 < velocity < 5
- Stochastic: 50% of the time the car moves 1 extra square up or right (sketched below)
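The dynamics above can be sketched as a one-step transition function. This is a sketch under stated assumptions: the exact noise model, the off-track handling, and the `on_track` callable are illustrative, not a definitive specification of the exercise.

```python
# One transition of the racetrack as described on the slide. The action is
# (dvx, dvy) with each component in {-1, 0, +1}; velocity components are clipped
# to 1..4 (0 < velocity < 5).
import random

def racetrack_step(pos, vel, action, on_track):
    vx = min(4, max(1, vel[0] + action[0]))
    vy = min(4, max(1, vel[1] + action[1]))
    x, y = pos[0] + vx, pos[1] + vy
    if random.random() < 0.5:                      # 50% of the time: one extra square
        if random.random() < 0.5:
            x += 1                                 # extra square right
        else:
            y += 1                                 # extra square up
    reward = -1 if on_track((x, y)) else -5        # slide's rewards: -1 on, -5 off track
    return (x, y), (vx, vy), reward
```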

21 Summary
- MC has several advantages over DP:
  - Can learn directly from interaction with the environment
  - No need for full models
  - No need to learn about ALL states
  - Less harm from violations of the Markov property (later in the book)
- MC methods provide an alternate policy evaluation process
- One issue to watch for: maintaining sufficient exploration
  - exploring starts, soft policies
- Introduced distinction between on-policy and off-policy methods
- No bootstrapping (as opposed to DP)

