ONLINE Q-LEARNER USING MOVING PROTOTYPES by Miguel Ángel Soto Santibáñez
Reinforcement Learning
What does it do? Tackles the problem of learning control strategies for autonomous agents.
What is the goal? The goal of the agent is to learn an action policy that maximizes the total reward it will receive from any starting state.
Reinforcement Learning
What does it need? This method assumes that training information is available in the form of a real-valued reward signal given for each state-action transition, i.e. (s, a, r).
What problems? Very often, reinforcement learning fits a problem setting known as a Markov decision process (MDP).
Reinforcement Learning vs. Dynamic Programming
Reward function: r(s, a) → r
State transition function: δ(s, a) → s'
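To make the two functions concrete, here is a minimal sketch of a deterministic MDP that exposes them; the 1-D chain, its rewards, and all names are illustrative assumptions, not taken from the slides:

    # Minimal deterministic MDP sketch: the two functions the slides contrast.
    N_STATES = 5          # states 0..4; state 4 is the goal (invented toy example)
    ACTIONS = (-1, +1)    # move left or right

    def delta(s, a):
        """State transition function: delta(s, a) -> s'."""
        return max(0, min(N_STATES - 1, s + a))

    def reward(s, a):
        """Reward function: r(s, a) -> r; +8 for stepping onto the goal."""
        return 8 if delta(s, a) == N_STATES - 1 else 0

Dynamic programming assumes delta and reward are known in advance; a reinforcement learner only observes (s, a, r) samples while interacting with the environment.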
Q-learning
An off-policy control algorithm.
Advantage: Converges to an optimal policy in both deterministic and nondeterministic MDPs.
Disadvantage: Practical for only a small number of problems.
Q-learning Algorithm
Initialize Q(s, a) arbitrarily
Repeat (for each episode):
    Initialize s
    Repeat (for each step of the episode):
        Choose a from s using an exploratory policy
        Take action a, observe r, s'
        Q(s, a) ← Q(s, a) + α[r + γ max_a' Q(s', a') − Q(s, a)]
        s ← s'
Introduction to the Q-learning Algorithm
An episode: { (s1, a1, r1), (s2, a2, r2), …, (sn, an, rn) }
s': the next state, given by the transition function δ(s, a) → s'
Q(s, a): the learned estimate of the value of taking action a in state s
γ, α: the discount factor and the learning rate
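A minimal runnable sketch of the algorithm above, reusing the toy delta, reward, N_STATES, and ACTIONS defined in the earlier MDP sketch and assuming an ε-greedy exploratory policy; the constants are illustrative, not from the slides:

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount factor, exploration rate
    Q = defaultdict(float)                   # Q[(s, a)], "arbitrarily" initialized to 0

    def choose_action(s):
        """Exploratory (epsilon-greedy) policy."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(s, a)])

    for episode in range(500):
        s = 0                                        # Initialize s
        while s != N_STATES - 1:                     # Repeat for each step of the episode
            a = choose_action(s)                     # Choose a from s using an exploratory policy
            s_next, r = delta(s, a), reward(s, a)    # Take action a, observe r, s'
            best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s_next                               # s <- s'

After enough episodes, the greedy policy max_a Q(s, a) walks straight toward the rewarding state.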
A Sample Problem
[Figure: a grid world with locations A and B; the rewards shown are r = 8, r = 0, and r = −8.]
States and actions
States: the positions in the grid (see figure).
Actions: N, S, E, W.
The Q(s, a) function
[Table: one row per state, one column per action (N, S, W, E).]
Q-learning Algorithm
Initialize Q(s, a) arbitrarily
Repeat (for each episode):
    Initialize s
    Repeat (for each step of the episode):
        Choose a from s using an exploratory policy
        Take action a, observe r, s'
        Q(s, a) ← Q(s, a) + α[r + γ max_a' Q(s', a') − Q(s, a)]
        s ← s'
Initializing the Q(s, a) function
[Table: one row per state, one column per action (N, S, W, E); all entries set to their arbitrary initial values.]
Q-learning Algorithm
Initialize Q(s, a) arbitrarily
Repeat (for each episode):
    Initialize s
    Repeat (for each step of the episode):
        Choose a from s using an exploratory policy
        Take action a, observe r, s'
        Q(s, a) ← Q(s, a) + α[r + γ max_a' Q(s', a') − Q(s, a)]
        s ← s'
An episode
Q-learning Algorithm
Initialize Q(s, a) arbitrarily
Repeat (for each episode):
    Initialize s
    Repeat (for each step of the episode):
        Choose a from s using an exploratory policy
        Take action a, observe r, s'
        Q(s, a) ← Q(s, a) + α[r + γ max_a' Q(s', a') − Q(s, a)]
        s ← s'
Calculating new Q(s, a) values
1st step, 2nd step, 3rd step, 4th step: the update rule is applied once for each step of the episode.
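A worked instance of one such step, assuming a deterministic update with α = 1, γ = 0.9, all Q-values initially 0, and a final step that enters the r = 8 goal state (the particular numbers are illustrative):

    Q(s4, a4) ← Q(s4, a4) + α[r + γ max_a' Q(s', a') − Q(s4, a4)]
              = 0 + 1 · [8 + 0.9 · 0 − 0] = 8

Under these assumptions, only the last state-action pair of the first episode picks up a non-zero value; earlier pairs are filled in during later episodes as the value propagates backwards.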
The Q(s, a) function after the first episode
[Table: one row per state, one column per action (N, S, W, E), with the updated values.]
A second episode
Calculating new Q(s, a) values
1st step, 2nd step, 3rd step, 4th step: the update rule is applied once for each step of the second episode.
The Q(s, a) function after the second episode
[Table: one row per state, one column per action (N, S, W, E), with the updated values.]
The Q(s, a) function after a few episodes
[Table: one row per state, one column per action (N, S, W, E), with the converged values.]
One of the optimal policies
[Table: for each state, the greedy action (N, S, W, or E) under the learned Q-values.]
An optimal policy graphically
Another of the optimal policies
[Table: for each state, a different but equally good greedy action (N, S, W, or E).]
Another optimal policy graphically
The problem with tabular Q-learning
What is the problem? It is practical in only a small number of problems because:
a) Q-learning can require many thousands of training iterations to converge in even modest-sized problems.
b) Very often, the memory resources required by this method become too large (see the sketch below).
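A quick illustration of point b): if a continuous state is discretized into k bins per dimension, a table over d state dimensions and |A| actions needs k^d · |A| entries, so memory grows exponentially with dimensionality. The figures below are purely illustrative:

    # Illustrative only: size of a tabular Q-function over a discretized state space.
    def table_entries(bins_per_dim, n_dims, n_actions):
        return bins_per_dim ** n_dims * n_actions

    print(table_entries(10, 2, 4))   # 400 entries: easily stored
    print(table_entries(10, 8, 4))   # 400,000,000 entries: already impractical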
Solution
What can we do about it? Use generalization.
What are some examples? Tile coding, Radial Basis Functions, fuzzy function approximation, hashing, Artificial Neural Networks, LSPI, regression trees, Kanerva coding, etc.
Shortcomings
Tile coding: curse of dimensionality.
Kanerva coding: static prototypes.
LSPI: requires a priori knowledge of the Q-function.
ANN: requires a large number of learning experiences.
Batch + regression trees: slow and requires lots of memory.
Needed properties
1) Memory requirements should not explode exponentially with the dimensionality of the problem.
2) It should tackle the pitfalls caused by the usage of "static prototypes".
3) It should try to reduce the number of learning experiences required to generate an acceptable policy.
NOTE: All this without requiring a priori knowledge of the Q-function.
Overview of the proposed method
1) The proposed method limits the number of prototypes available to describe the Q-function (as in Kanerva coding).
2) The Q-function is modeled using a regression tree (as in the batch method proposed by Sridharan and Tesauro).
3) Unlike Kanerva coding, however, the prototypes are not static but dynamic.
4) The proposed method can update the Q-function once for every available learning experience, so it can be an online learner (see the sketch below).
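A hypothetical outline of the online update step described in point 4, assuming a Q-function approximator object that exposes value(s, a) and update(s, a, target) methods; these names and constants are assumptions for illustration, not the thesis' API:

    # Hypothetical outline: one Q-function update per observed learning experience.
    def online_q_update(qfunc, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
        """qfunc is assumed to expose value(s, a) and update(s, a, target)."""
        best_next = max(qfunc.value(s_next, a2) for a2 in actions)
        target = qfunc.value(s, a) + alpha * (r + gamma * best_next - qfunc.value(s, a))
        qfunc.update(s, a, target)   # inside, the tree may split or merge prototypes

Because every (s, a, r, s') tuple triggers exactly one such update, the learner does not have to accumulate batches of experiences before improving its Q-function.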
Changes to the normal regression tree
Basic operations in the regression tree
Rupture
Merging
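The sketch below illustrates both operations on a toy binary regression tree over a one-dimensional state interval; the class, the midpoint split, and the averaging merge rule are illustrative assumptions, not the exact rules used by Moving Prototypes:

    # Toy regression tree: each leaf is one prototype storing a constant value.
    class Node:
        def __init__(self, lo, hi, value=0.0):
            self.lo, self.hi, self.value = lo, hi, value
            self.left = self.right = None

        def is_leaf(self):
            return self.left is None

        def leaf_for(self, x):
            """Descend to the leaf (prototype) whose interval contains x."""
            if self.is_leaf():
                return self
            mid = (self.lo + self.hi) / 2
            return (self.left if x < mid else self.right).leaf_for(x)

    def rupture(leaf):
        """Split one leaf into two children that inherit its value."""
        mid = (leaf.lo + leaf.hi) / 2
        leaf.left = Node(leaf.lo, mid, leaf.value)
        leaf.right = Node(mid, leaf.hi, leaf.value)

    def merge(parent):
        """Collapse two leaf children back into their parent (one prototype again)."""
        assert parent.left.is_leaf() and parent.right.is_leaf()
        parent.value = (parent.left.value + parent.right.value) / 2
        parent.left = parent.right = None

    root = Node(0.0, 1.0)
    rupture(root)                        # add resolution: 1 prototype -> 2
    root.leaf_for(0.8).value = 5.0       # refine the right half
    merge(root)                          # free a prototype again: value becomes 2.5

Rupture adds resolution where the Q-function needs it; merging frees prototypes elsewhere so the total count stays within the fixed budget.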
Impossible Merging
Rules for a sound tree
[Figure: a parent node and its children.]
Impossible Merging
Sample Merging The “smallest predecessor”
Sample Merging List 1
Sample Merging The node to be inserted
Sample Merging List 1 List 1.1 List 1.2
Sample Merging
The agent
[Figure: the agent receives the detectors' signals and a reward, and sends back the actuators' signals.]
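A minimal sketch of the interaction pictured above, assuming an environment object that delivers detector signals and a reward and accepts actuator signals; all of these names are illustrative:

    # Illustrative agent-environment loop matching the diagram.
    def run_agent(agent, environment, n_steps):
        signals = environment.detector_signals()        # what the agent senses
        for _ in range(n_steps):
            action = agent.act(signals)                 # actuators' signals
            reward, signals = environment.step(action)  # reward + new detector signals
            agent.learn(reward, signals)                # e.g., the online Q-update above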
Applications
Book store
Results, first application

                           Tabular Q-learning    Moving Prototypes    Batch Method
Policy quality             Best                  Best                 Worst
Computational complexity   O(n)                  O(n log n)           O(n²) to O(n³)
Memory usage               Bad                   Best                 Worst
Results, first application (details)

                           Tabular Q-learning    Moving Prototypes    Batch Method
Policy quality             $2,423,355            $2,423,355           $2,297,100
Memory usage               10,202 prototypes     413 prototypes       11,975 prototypes
Results, second application

                           Moving Prototypes    LSPI (least-squares policy iteration)
Policy quality             Best                 Worst
Computational complexity   O(n log n)           O(n²), O(n)
Memory usage               Worst                Best
Results, second application (details)

                                Moving Prototypes          LSPI (least-squares policy iteration)
Policy quality                  forever (succeeded)        26 time steps (failed)
                                170 time steps (failed)    forever (succeeded)
Required learning experiences   …,902,621                  183,…
Memory usage                    about 170 prototypes       2 weight parameters
Results, third application
Reason for this experiment: evaluate the performance of the proposed method in a scenario we consider ideal for it, namely one for which no application-specific knowledge is available.
What it took to learn a good policy:
Less than 2 minutes of CPU time.
Less than 25,000 learning experiences.
Less than 900 state-action-value tuples.
Swimmer first movie
Swimmer second movie
Swimmer third movie
Future Work
Different types of splits.
Continue the characterization of the Moving Prototypes method.
Moving Prototypes + LSPI.
Moving Prototypes + eligibility traces.