
1 Optimal Tuning of Continual Online Exploration in Reinforcement Learning. Youssef Achbany, Francois Fouss, Luh Yen, Alain Pirotte & Marco Saerens, Information Systems Research Unit (ISYS), Université de Louvain, Belgium.

2 Outline: Introduction; Mathematical concepts; Modelling exploration by entropy; Optimal policy; Preliminary experiments; Conclusion and further work.

3 Introduction. One of the challenges of reinforcement learning is to manage the tradeoff between exploration and exploitation. Exploitation aims to capitalize on already well-established solutions. Exploration aims to continually try new ways of solving the problem and is relevant when the environment is changing.

4 Introduction. Simple routing problem: the goal is to reach a destination node (13) from an initial node (1) while minimizing costs. Each node has a set of admissible actions, each with an associated weight (cost). We define a probability distribution on the set of admissible actions.

5 Mathematical concepts. We have a set of states, S = {1, 2, …, n}; s_t = k means that the system is in state k at time t. In each state s = k, we have a set of admissible control actions, U(k), so that u(k) ∈ U(k) is a control action available at state k.

6 Mathematical concepts. When we choose action u(s_t) at state s_t, a bounded cost C(u(s_t)|s_t) < ∞ is incurred and the system jumps to state s_{t+1} = f(u(s_t)|s_t), where f is a function. We suppose that the network of states does not contain any negative cycle.

7 Mathematical concepts. For each state s, we define a probability distribution on the set of admissible actions, P(u(s)|s), meaning that the choice is randomized. This introduces exploration, not only exploitation; it is the main contribution of our work.

8 Mathematical concepts. For instance, if in state s = k there are three admissible actions u_k^1, u_k^2, u_k^3 (shown as a small diagram on the slide), the probability distribution P(u(k)|s=k) involves three values: P(u_k^1|k), P(u_k^2|k) and P(u_k^3|k).
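Implicit on the slide: these three values form a probability distribution over the admissible actions of state k, so that

P(u_k^1 \mid k) + P(u_k^2 \mid k) + P(u_k^3 \mid k) = 1, \qquad P(u_k^i \mid k) \ge 0.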

9 Mathematical concepts. The policy π is defined as the set of all probability distributions for all states.

10 Mathematical concepts. The goal is to reach a destination state, s = d, from an initial state, s_0 = k_0, while minimizing the total expected cost. The expectation is taken on the policy, that is, on all the random variables u(k) associated to the states.
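The total expected cost itself appears on the slide as an equation image; one formulation consistent with the notation of the preceding slides is

V^{\pi}(k_0) = \mathrm{E}_{\pi}\!\left[ \sum_{t=0}^{T-1} C\big(u(s_t) \mid s_t\big) \,\middle|\, s_0 = k_0 \right],

where T denotes the (random) time at which the destination state d is first reached.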

11 Mathematical concepts. In other words, we have to determine the best policy π that minimizes V^π(k_0), that is, the best probability distributions. This is standard, except for the fact that we introduce choice randomisation.

12 Mathematical concepts. We now introduce a way to control exploration: the degree of exploration, E_k, defined on each state k, which is the entropy of the probability distribution of actions in this state k.

13 Modelling exploration by entropy. The degree of exploration, E_k, is defined as the entropy at state k. The minimum is 0 (no exploration) and the maximum is log(n_k), where n_k is the number of admissible actions in state k (full exploration).
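In Shannon form, writing P(i|k) for the probability of choosing action i in state k, this degree of exploration is

E_k = -\sum_{i=1}^{n_k} P(i \mid k)\, \log P(i \mid k).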

14 Modelling exploration by entropy. The exploration rate is defined as the entropy normalized by its maximum, E_k / log(n_k), and takes its value between 0 (no exploration) and 1 (full exploration).

15 Modelling exploration by entropy. The goal now is to determine the optimal policy under exploration constraints, that is, to seek the policy π* for which the expected cost V^π(k_0) is minimal while guaranteeing a given degree of exploration (entropy) in each state k.

16 Modelling exploration by entropy. In other words, we minimize the expected cost subject to these entropy constraints (see the formulation below), where the E_k are provided and fixed by the user or designer; they control the degree of exploration at each node k.
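The constrained problem itself is shown on the slide as an equation image; a statement consistent with the definitions above (the slide's exact notation may differ) is

\pi^{*} = \arg\min_{\pi} V^{\pi}(k_0) \quad \text{subject to} \quad -\sum_{i=1}^{n_k} P(i \mid k)\, \log P(i \mid k) = E_k \ \text{ for each state } k.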

17 Modelling exploration by entropy. Thus, we route the agents as fast as possible while exploring the network.

18 Optimal policy. Here are the necessary optimality conditions (for a local minimum), very similar to Bellman's equations: V^{π*}(k) is the optimal expected cost from state k, and P(i|k) is the probability of choosing action i, satisfying the entropy constraint through the parameter θ_k.
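The conditions themselves appear on the slide as equation images. A form consistent with the description above, writing C(i|k) and f(i|k) for the cost and the successor state of action i taken in state k, and with θ_k adjusted so that the entropy of P(·|k) equals E_k, would be

P(i \mid k) = \frac{\exp\!\big[-\theta_k \big(C(i \mid k) + V^{\pi^{*}}(f(i \mid k))\big)\big]}{\sum_{j \in U(k)} \exp\!\big[-\theta_k \big(C(j \mid k) + V^{\pi^{*}}(f(j \mid k))\big)\big]}, \qquad V^{\pi^{*}}(k) = \sum_{i \in U(k)} P(i \mid k)\,\big[C(i \mid k) + V^{\pi^{*}}(f(i \mid k))\big],

with V^{\pi^{*}}(d) = 0 at the destination.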

19 Optimal policy. These conditions lead to the following updating rules, applied iteratively (a sketch is given below). Convergence has been proved in a stationary environment.
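As an illustration only (not part of the original slides): a minimal Python sketch of updating rules of this kind, assuming a deterministic routing graph stored as a dictionary and enforcing each node's entropy target by bisection on θ_k. The example graph, node names and exploration rate are hypothetical.

```python
import math

# Hypothetical routing graph: node -> list of (successor, cost) pairs; 'd' is the destination.
graph = {
    'a': [('b', 1.0), ('c', 4.0)],
    'b': [('c', 1.0), ('d', 5.0)],
    'c': [('d', 1.0)],
    'd': [],
}
destination = 'd'
exploration_rate = 0.3  # desired exploration rate in [0, 1], identical for every node here

def boltzmann(values, theta):
    """Distribution proportional to exp(-theta * value); theta = 0 gives the uniform choice."""
    m = min(values)  # shift by the minimum for numerical stability
    weights = [math.exp(-theta * (v - m)) for v in values]
    total = sum(weights)
    return [w / total for w in weights]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def theta_for_entropy(values, target, lo=0.0, hi=1e3, iters=60):
    """Bisection on theta: the entropy of the Boltzmann distribution decreases as theta grows."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if entropy(boltzmann(values, mid)) > target:
            lo = mid  # still too much entropy, increase theta
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Iterate: P(.|k) is a Boltzmann distribution over the augmented costs,
# and V(k) is their average under P(.|k).
V = {k: 0.0 for k in graph}
for _ in range(200):  # fixed-point iteration; the slides report convergence in a stationary environment
    for k, actions in graph.items():
        if k == destination or not actions:
            V[k] = 0.0
            continue
        q = [cost + V[succ] for succ, cost in actions]       # augmented costs
        target = exploration_rate * math.log(len(actions))   # entropy target E_k
        p = boltzmann(q, theta_for_entropy(q, target))
        V[k] = sum(pi * qi for pi, qi in zip(p, q))

print({k: round(v, 3) for k, v in V.items()})
```

Bisection is used here because the entropy of a Boltzmann distribution is monotone in θ, so the entropy constraint pins down a unique θ_k per node; the authors' actual procedure may differ.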

20 Optimal policy. This updating rule has a nice interpretation: route the agents preferentially (with probability P(i|k)) to the state from which the expected cost is minimal, including the direct cost for reaching this state.

21 Optimal policy. If θ_k is large (zero entropy: no exploration), we obtain the common value iteration algorithm, that is, Bellman's equation for finding the shortest path.
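In that limit the updating rule reduces to the standard shortest-path recursion

V^{*}(k) = \min_{i \in U(k)} \big[ C(i \mid k) + V^{*}(f(i \mid k)) \big], \qquad V^{*}(d) = 0.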

22 Optimal policy. If θ_k is zero (maximum entropy: full exploration), we perform a blind exploration and estimate the « average first passage time », without taking the costs into consideration: each action is chosen with probability 1/n_k, where n_k is the number of admissible actions in state k.
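The corresponding equation is an image on the slide; one reading consistent with the text (a uniform, cost-blind choice, whose expected number of steps to the destination is the average first-passage time) would be

P(i \mid k) = \frac{1}{n_k}, \qquad V(k) = 1 + \frac{1}{n_k} \sum_{i \in U(k)} V\big(f(i \mid k)\big), \qquad V(d) = 0.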

23 Advantages of our algorithm. Our strategy could be interesting if the environment is changing and there is a need for continuous exploration. Indeed, if no exploration is performed, the agent will not notice the changes unless they occur on the shortest path, so the policy will not be adjusted. In other words, we propose an optimal exploration/exploitation trade-off.

24 Preliminary experiments. Simple network routing; dynamic; uncertain.

25 Preliminary experiments. Exploration rate of 0% for all nodes (no exploration).

26 Preliminary experiments. Entropy rate of 30% for all nodes.

27 Preliminary experiments. Entropy rate of 60% for all nodes.

28 Preliminary experiments. Entropy rate of 90% for all nodes.

29 Preliminary experiments. Other experimental simulations are provided in: Tuning continual exploration in reinforcement learning (technical report submitted for publication), http://www.isys.ucl.ac.be/staff/francois/Articles/Achbany2005a.pdf

30 Conclusion. In this work, we presented a model integrating both exploration and exploitation in a common framework. The exploration rate is controlled by the entropy of the choice probability distribution defined on the states of the system. When no exploration is performed (zero entropy on each node), the model reduces to the common value iteration algorithm computing the minimum-cost policy. On the other hand, when full exploration is performed (maximum entropy on each node), the model reduces to a "blind" exploration, without considering the costs.

31 Further work. This model has been extended to stochastic shortest-path problems, discounted problems, acyclic graphs, and edit-distances between strings; we are also developing links with Q-learning.

32 Thank you!

