Presentation transcript: R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. Chapter 2: Evaluative Feedback.

Slide 1: Chapter 2: Evaluative Feedback
- Evaluating actions vs. instructing by giving correct actions.
- Pure evaluative feedback depends totally on the action taken; pure instructive feedback depends not at all on the action taken.
- Supervised learning is instructive; optimization is evaluative.
- Associative vs. nonassociative:
  - Associative: inputs are mapped to outputs; learn the best output for each input.
  - Nonassociative: "learn" (find) one best output.
- The n-armed bandit (at least as we treat it) is nonassociative, with evaluative feedback.

Slide 2: The n-Armed Bandit Problem
- Choose repeatedly from one of n actions; each choice is called a play.
- After each play a_t, you get a reward r_t, where E[r_t | a_t] = Q*(a_t). These Q*(a) are the unknown action values; the distribution of r_t depends only on a_t.
- The objective is to maximize the reward in the long term, e.g., over 1000 plays.
- To solve the n-armed bandit problem, you must explore a variety of actions and then exploit the best of them.
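As a concrete illustration (not part of the slides), a minimal Python sketch of such a bandit, assuming Gaussian rewards as in the 10-armed testbed introduced below; the class and method names are my own:

```python
import random

class Bandit:
    """n-armed bandit: each arm a has an unknown true value Q*(a);
    playing arm a returns a noisy reward whose distribution depends only on a."""
    def __init__(self, n=10):
        self.q_star = [random.gauss(0, 1) for _ in range(n)]  # unknown action values

    def play(self, a):
        return random.gauss(self.q_star[a], 1)  # reward ~ N(Q*(a), 1)
```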

Slide 3: The Exploration/Exploitation Dilemma
- Suppose you form action-value estimates Q_t(a) ≈ Q*(a).
- The greedy action at t is a*_t = argmax_a Q_t(a). Choosing a_t = a*_t is exploitation; choosing a_t ≠ a*_t is exploration.
- You can't exploit all the time; you can't explore all the time.
- You can never stop exploring, but you should always reduce exploring.

Slide 4: Action-Value Methods
- Methods that adapt action-value estimates and nothing else.
- For example, the "sample average": suppose that by the t-th play, action a had been chosen k_a times, producing rewards r_1, r_2, ..., r_{k_a}; then Q_t(a) = (r_1 + r_2 + ... + r_{k_a}) / k_a.
- As k_a → ∞, Q_t(a) converges to Q*(a).
- Here Q*(a) is the true value of action a, and Q_t(a) is its estimate at time t.
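A small sketch of the reward-storing sample-average estimate (an incremental version that avoids storing rewards appears on a later slide); the helper names are my own:

```python
from collections import defaultdict

rewards = defaultdict(list)  # rewards observed so far, keyed by action

def record(action, reward):
    rewards[action].append(reward)

def Q(action):
    r = rewards[action]
    return sum(r) / len(r) if r else 0.0  # sample average; 0.0 before the action is tried
```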

Slide 5: ε-Greedy Action Selection
- Greedy action selection: a_t = a*_t = argmax_a Q_t(a).
- ε-greedy: with probability 1 - ε choose the greedy action a*_t; with probability ε choose an action at random.
- This is the simplest way to try to balance exploration and exploitation.
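A minimal ε-greedy selector in Python; the default ε = 0.1 is only an illustrative choice:

```python
import random

def epsilon_greedy(Q, epsilon=0.1):
    """With probability epsilon pick a uniformly random action (explore),
    otherwise pick the action with the highest estimated value (exploit)."""
    n = len(Q)
    if random.random() < epsilon:
        return random.randrange(n)
    return max(range(n), key=lambda a: Q[a])
```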

Slide 6: 10-Armed Testbed
- n = 10 possible actions.
- Each true value Q*(a) is chosen randomly from a normal distribution N(0, 1).
- Each reward r_t is also normal: N(Q*(a_t), 1).
- 1000 plays per run; repeat the whole thing for 2000 runs and average the results.
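A compact sketch of the whole testbed experiment, combining the Gaussian bandit, ε-greedy selection, and incremental sample-average estimates; function and variable names are my own:

```python
import random

def run_testbed(epsilon, n=10, plays=1000, runs=2000):
    """Return the average reward at each play, averaged over independent bandit tasks."""
    avg_reward = [0.0] * plays
    for _ in range(runs):
        q_star = [random.gauss(0, 1) for _ in range(n)]      # true values ~ N(0, 1)
        Q, N = [0.0] * n, [0] * n
        for t in range(plays):
            if random.random() < epsilon:
                a = random.randrange(n)                       # explore
            else:
                a = max(range(n), key=lambda i: Q[i])         # exploit
            r = random.gauss(q_star[a], 1)                    # reward ~ N(Q*(a), 1)
            N[a] += 1
            Q[a] += (r - Q[a]) / N[a]                         # incremental sample average
            avg_reward[t] += r / runs
    return avg_reward
```

Plotting the curves returned for a few values of ε should reproduce the kind of comparison shown on the next slide.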

Slide 7: ε-Greedy Methods on the 10-Armed Testbed (figure)

Slide 8: Softmax Action Selection
- Softmax action selection methods grade action probabilities by estimated values.
- The most common softmax uses a Gibbs, or Boltzmann, distribution: choose action a on play t with probability e^{Q_t(a)/τ} / Σ_{b=1..n} e^{Q_t(b)/τ}, where τ is the "computational temperature".
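A sketch of Gibbs/Boltzmann (softmax) selection; subtracting the maximum before exponentiating is a standard numerical-stability trick, not something the slide mentions:

```python
import math
import random

def softmax_action(Q, tau=0.5):
    """Choose action a with probability proportional to exp(Q[a] / tau)."""
    m = max(Q)
    weights = [math.exp((q - m) / tau) for q in Q]  # subtract max for numerical stability
    return random.choices(range(len(Q)), weights=weights, k=1)[0]
```

As τ grows, selection becomes nearly uniform; as τ shrinks toward zero, it approaches greedy selection.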

Slide 9: Binary Bandit Tasks
- Suppose you have just two actions (a_t = 1 or a_t = 2) and just two rewards (r_t = success or r_t = failure).
- Then you might infer a target or desired action: the action taken if it produced success, the other action if it produced failure; and then always play the action that was most often the target.
- Call this the supervised algorithm. It works fine on deterministic tasks...
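A sketch of the supervised algorithm as described above, for the two-action case; variable names are my own:

```python
target_counts = [0, 0]  # how often each action has been the inferred target

def choose():
    # always play the action that has most often been the target
    return 0 if target_counts[0] >= target_counts[1] else 1

def learn(action, success):
    # infer the desired action: the one taken if it succeeded, otherwise the other one
    target = action if success else 1 - action
    target_counts[target] += 1
```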

Slide 10: Contingency Space
- The space of all possible binary bandit tasks (figure).
- If the probability of success for action 1 is 0.1 and for action 2 is 0.2, then the supervised algorithm will fail, since in almost all cases it will switch to the other action.

Slide 11: Linear Learning Automata
- Let π_t(a) be the probability of selecting action a at play t. Linear learning automata are stochastic, incremental versions of the supervised algorithm.
- L_R-I (linear, reward-inaction): on success, π_{t+1}(a_t) = π_t(a_t) + α(1 - π_t(a_t)), with the other action's probability decremented to keep the sum at 1; on failure, no change.
- L_R-P (linear, reward-penalty): the same on success; on failure, π_{t+1}(a_t) = π_t(a_t) + α(0 - π_t(a_t)).
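A sketch of the two automata under the forms given above (the slide's own equations are not recoverable from the transcript, so these follow the standard textbook definitions); α = 0.1 is only an illustrative step size:

```python
alpha = 0.1
pi = [0.5, 0.5]  # probability of selecting each of the two actions

def update(action, success, reward_penalty=False):
    """L_R-I: on success move pi(action) toward 1; on failure do nothing.
    L_R-P: additionally, on failure move pi(action) toward 0."""
    if success:
        pi[action] += alpha * (1 - pi[action])
    elif reward_penalty:
        pi[action] += alpha * (0 - pi[action])
    pi[1 - action] = 1 - pi[action]  # with two actions the probabilities must sum to 1
```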

Slide 12: Performance on Binary Bandit Tasks A and B (figure)

Slide 13: Incremental Implementation
- Recall the sample-average estimation method. The average of the first k rewards is (dropping the dependence on a): Q_k = (r_1 + r_2 + ... + r_k) / k.
- Can we do this incrementally (without storing all the rewards)? We could keep a running sum and count, or, equivalently: Q_{k+1} = Q_k + (1/(k+1)) [r_{k+1} - Q_k].
- This is a common form for update rules: NewEstimate = OldEstimate + StepSize [Target - OldEstimate].
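The incremental update written out as code, with a tiny worked check that it really computes the running mean:

```python
def incremental_mean(Q_k, k, r_next):
    """Q_{k+1} = Q_k + (1 / (k + 1)) * (r_{k+1} - Q_k): the mean of k+1 rewards,
    computed from the mean of the first k without storing any of them."""
    return Q_k + (r_next - Q_k) / (k + 1)

Q, k = 0.0, 0
for r in (1.0, 0.0, 2.0):
    Q = incremental_mean(Q, k, r)
    k += 1
# Q is now 1.0, the ordinary average of 1, 0, 2
```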

Slide 14: Tracking a Nonstationary Problem
- Choosing Q_k to be a sample average is appropriate in a stationary problem, i.e., when none of the Q*(a) change over time.
- But not in a nonstationary problem. Better in the nonstationary case is: Q_{k+1} = Q_k + α [r_{k+1} - Q_k], for a constant step size α, 0 < α ≤ 1.
- Then Q_k = (1 - α)^k Q_0 + Σ_{i=1..k} α (1 - α)^{k-i} r_i: an exponential, recency-weighted average.
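The constant-step-size version is a one-line change; the snippet below just contrasts the two updates (α = 0.1 is an arbitrary illustrative value):

```python
def sample_average_update(Q, r, k):
    return Q + (r - Q) / (k + 1)   # shrinking step size: all past rewards weighted equally

def constant_alpha_update(Q, r, alpha=0.1):
    # exponential, recency-weighted average: recent rewards count more,
    # so the estimate keeps tracking a drifting Q*(a)
    return Q + alpha * (r - Q)
```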

Slide 15: Optimistic Initial Values
- All methods so far depend on the initial estimates Q_0(a), i.e., they are biased.
- Suppose instead we initialize the action values optimistically, e.g., on the 10-armed testbed use Q_0(a) = +5 for all a.
- This initial value is much larger than the mean of the reward distributions, so it encourages exploration.
- This approach may not work well in nonstationary problems, because its drive for exploration dies out after the initial period.
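A sketch of optimistic initialization with purely greedy selection on the testbed; `reward_fn` is a hypothetical callback standing in for one pull of the bandit:

```python
n = 10
Q = [5.0] * n   # optimistic initial estimates (true values ~ N(0, 1), so +5 is very optimistic)
N = [0] * n

def greedy_step(reward_fn):
    """Even pure greedy selection explores early: every untried arm still looks best."""
    a = max(range(n), key=lambda i: Q[i])
    r = reward_fn(a)
    N[a] += 1
    Q[a] += (r - Q[a]) / N[a]   # sample-average update pulls the optimistic estimate down
    return a, r
```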

Slide 16: Reinforcement Comparison
- Compare rewards to a reference reward r̄_t, e.g., an average of observed rewards.
- Strengthen or weaken the action taken depending on r_t - r̄_t.
- Let p_t(a) denote the preference for action a.
- Preferences determine action probabilities, e.g., by a Gibbs distribution: π_t(a) = e^{p_t(a)} / Σ_b e^{p_t(b)}.
- Then: p_{t+1}(a_t) = p_t(a_t) + β [r_t - r̄_t], and r̄_{t+1} = r̄_t + α [r_t - r̄_t].
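A sketch of the reinforcement-comparison update with Gibbs action selection; the step sizes α and β are illustrative choices:

```python
import math
import random

n, alpha, beta = 10, 0.1, 0.1
p = [0.0] * n    # action preferences
r_bar = 0.0      # reference reward: running average of observed rewards

def select():
    m = max(p)
    weights = [math.exp(x - m) for x in p]   # Gibbs distribution over preferences
    return random.choices(range(n), weights=weights, k=1)[0]

def learn(action, reward):
    global r_bar
    p[action] += beta * (reward - r_bar)     # strengthen/weaken the action just taken
    r_bar += alpha * (reward - r_bar)        # then update the reference reward
```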

Slide 17: Performance of a Reinforcement Comparison Method (figure)

Slide 18: Pursuit Methods
- Maintain both action-value estimates and action preferences.
- Always "pursue" the greedy action, i.e., make the greedy action more likely to be selected.
- After the t-th play, update the action values to get Q_{t+1}(a).
- The new greedy action is a*_{t+1} = argmax_a Q_{t+1}(a).
- Then: π_{t+1}(a*_{t+1}) = π_t(a*_{t+1}) + β [1 - π_t(a*_{t+1})], and the probabilities of the other actions are decremented to keep the sum at 1.
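A sketch of a pursuit update using sample-average value estimates; β = 0.01 is an illustrative pursuit step size:

```python
n, beta = 10, 0.01
Q = [0.0] * n
N = [0] * n
pi = [1.0 / n] * n   # action-selection probabilities

def learn(action, reward):
    # update the action-value estimate (sample average)
    N[action] += 1
    Q[action] += (reward - Q[action]) / N[action]
    # pursue the new greedy action: push its probability toward 1
    # and every other probability toward 0; the total stays at 1
    greedy = max(range(n), key=lambda a: Q[a])
    for a in range(n):
        target = 1.0 if a == greedy else 0.0
        pi[a] += beta * (target - pi[a])
```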

Slide 19: Performance of a Pursuit Method (figure)

Slide 20: Associative Search
- (figure: several bandits and their actions)
- Imagine switching bandits at each play, with a clue signal (red, yellow, green) indicating which bandit you currently face; associate the policy with that signal.
- This is similar to the n-armed bandit methods in that each action affects only its immediate reward, but also similar to the full reinforcement learning problem in that a policy must be learned.
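A sketch of associative search as one ε-greedy learner per clue signal; the signal names follow the slide, while n = 10 and ε = 0.1 are my own illustrative assumptions:

```python
import random

signals = ("red", "yellow", "green")   # clue telling you which bandit you currently face
n = 10                                 # assumed number of actions per bandit
Q = {s: [0.0] * n for s in signals}    # separate action-value estimates for each signal
N = {s: [0] * n for s in signals}

def act(signal, epsilon=0.1):
    if random.random() < epsilon:
        return random.randrange(n)
    return max(range(n), key=lambda a: Q[signal][a])

def learn(signal, action, reward):
    N[signal][action] += 1
    Q[signal][action] += (reward - Q[signal][action]) / N[signal][action]
```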

Slide 21: Conclusions
- These are all very simple methods, but they are complicated enough; we will build on them.
- Ideas for improvements: estimating uncertainties (interval estimation); approximating Bayes-optimal solutions; Gittins indices (a specialized algorithm).
- The full RL problem offers some ideas for solution...

