
1 Exploration and Apprenticeship Learning in Reinforcement Learning Pieter Abbeel and Andrew Y. Ng Stanford University

2 Overview. Reinforcement learning in systems with unknown dynamics. Algorithms such as E^3 (Kearns and Singh, 2002) learn the dynamics by using exploration policies. Aggressive exploration is dangerous for many systems. We show that in apprenticeship learning, when we have a teacher demonstration of the task, this explicit exploration step is unnecessary and instead we can just use exploitation policies.

3 Reinforcement learning formalism. Markov Decision Process (MDP): (S, A, P_sa, H, s_0, R). Policy π : S → A. Utility of a policy π: U(π) = E[ Σ_{t=0}^{H} R(s_t) | π ]. Goal: find the policy π that maximizes U(π).
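The utility above is just an expected sum of rewards over the H-step horizon, so it can be estimated by rolling out the policy. A minimal Monte Carlo sketch in Python; the callables `policy`, `sample_next_state` (one draw from P_sa), and `reward` are placeholders, not names from the slides:

```python
import numpy as np

def estimate_utility(policy, sample_next_state, reward, s0, H, n_rollouts=1000):
    """Monte Carlo estimate of U(pi) = E[ sum_{t=0}^{H} R(s_t) | pi ]."""
    returns = []
    for _ in range(n_rollouts):
        s, total = s0, reward(s0)                # reward at t = 0
        for _ in range(H):                       # states s_1 .. s_H
            s = sample_next_state(s, policy(s))  # one draw from P_sa
            total += reward(s)
        returns.append(total)
    return float(np.mean(returns))
```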

4 Motivating example. Two routes to an accurate dynamics model P_sa: from a specification via a textbook model, or by collecting flight data and learning the model from the data. Open questions for the data-driven route: How to fly the helicopter for data collection? How to ensure that the entire flight envelope is covered by the data collection process?

5 Learning the dynamical model. State of the art: the E^3 algorithm (Kearns and Singh, 2002) and its variants/extensions (Kearns and Koller, 1999; Kakade, Kearns and Langford, 2003; Brafman and Tennenholtz, 2002). At each step: do we have a good model of the dynamics? If NO, "explore"; if YES, "exploit".

6 Aggressive manual exploration.

7 Learning the dynamical model. State of the art: the E^3 algorithm (Kearns and Singh, 2002) and its variants/extensions (Kearns and Koller, 1999; Kakade, Kearns and Langford, 2003; Brafman and Tennenholtz, 2002). At each step: do we have a good model of the dynamics? If NO, "explore"; if YES, "exploit". But exploration policies are impractical: they do not even try to perform well. Can we avoid explicit exploration and just exploit?

8 Apprenticeship learning of the model. Expert human pilot flight produces data (a_1, s_1, a_2, s_2, a_3, s_3, …); learn P_sa from it to get a dynamics model. Reinforcement learning in the model (max E[R(s_0) + … + R(s_H)]) produces a control policy π. Autonomous flight with π produces new data (a_1, s_1, a_2, s_2, a_3, s_3, …); learn P_sa again and repeat. Questions: Duration? Performance? Number of iterations? (A sketch of this loop follows.)
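A minimal sketch of the loop above, assuming user-supplied functions `learn_model`, `solve_mdp`, and `run_policy`; these names are placeholders for model fitting, planning/RL in the learned model, and a real-world trial, and are not from the paper:

```python
def apprenticeship_rl(teacher_data, learn_model, solve_mdp, run_policy, n_iterations=10):
    """Apprenticeship learning of the model: only exploit, never explicitly explore.

    teacher_data : list of (s, a, s_next) samples from the expert demonstration
    learn_model  : fits the dynamics model P_sa from samples
    solve_mdp    : returns a policy maximizing E[R(s_0) + ... + R(s_H)] in the model
    run_policy   : flies the real system with a policy and returns the samples visited
    """
    data = list(teacher_data)
    model = learn_model(data)
    policies = []
    for _ in range(n_iterations):
        policy = solve_mdp(model)      # current exploitation policy
        data += run_policy(policy)     # autonomous flight; whatever it visits becomes data
        model = learn_model(data)      # refit P_sa and repeat
        policies.append(policy)
    return policies
```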

9 Typical scenario. Initially, all state-action pairs are inaccurately modeled. (The figures on this and the following slides show the state-action space, with a legend distinguishing accurately modeled from inaccurately modeled pairs.)

10 Typical scenario (2). Teacher demonstration: the state-action pairs frequently visited by the teacher's policy become accurately modeled; those not frequently visited by the teacher's policy remain inaccurately modeled.

11 Typical scenario (3). First exploitation policy: it frequently visits some state-action pairs that are not frequently visited by the teacher's policy and are therefore still inaccurately modeled.

12 Typical scenario (4). Second exploitation policy: the pairs frequently visited by the first exploitation policy are now accurately modeled, and the second exploitation policy in turn frequently visits other, still inaccurately modeled pairs.

13 Typical scenario (5). Third exploitation policy: eventually an exploitation policy frequently visits only accurately modeled state-action pairs. The model is then accurate for the exploitation policy and accurate for the teacher's policy; the exploitation policy is better than the teacher in the model, and therefore also better than the teacher in the real world. Done.

14 Two dynamics models. Discrete dynamics: finite S and A; the dynamics P_sa are described by state transition probabilities P(s'|s,a); learn the dynamics from data using maximum likelihood. Continuous, linear dynamics: continuous-valued states and actions (S = R^{n_S}, A = R^{n_A}); s_{t+1} = G φ(s_t) + H a_t + w_t; estimate G, H from data using linear regression.
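In the continuous case, estimating G and H is a single least-squares problem. A sketch in Python; the array shapes and the feature map `phi` are assumptions (use `phi = lambda s: s` for plain linear dynamics):

```python
import numpy as np

def fit_linear_dynamics(states, actions, phi=lambda s: s):
    """Least-squares fit of s_{t+1} ~ G phi(s_t) + H a_t.

    states  : array of shape (T+1, n_S)
    actions : array of shape (T, n_A)
    Returns the estimates (G, H).
    """
    feats = np.array([phi(s) for s in states[:-1]])   # phi(s_t), shape (T, k)
    X = np.hstack([feats, np.asarray(actions)])       # regressors [phi(s_t), a_t]
    Y = np.asarray(states)[1:]                        # targets s_{t+1}
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)         # min_W sum_t ||Y_t - X_t W||^2
    k = feats.shape[1]
    return W[:k].T, W[k:].T                           # split the solution into G and H
```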

15 Performance guarantees. Let any ε, δ > 0 be given. Theorem: for U(π) ≥ U(π_T) − ε to hold within N = O(poly(1/ε, 1/δ, H, R_max, ·)) iterations with probability 1 − δ, it suffices that N_teacher = Ω(poly(1/ε, 1/δ, H, R_max, ·)) and N_exploit = Ω(poly(1/ε, 1/δ, H, R_max, ·)). Here "·" stands for the problem-size parameters: |S|, |A| in the discrete case and n_S, n_A, ||G||_Fro, ||H||_Fro in the continuous case. In words: to perform as well as the teacher, a polynomial number of iterations, a polynomial number of teacher demonstrations, and a polynomial number of trials with each exploitation policy suffice. Take-home message: so long as a demonstration is available, it is not necessary to explicitly explore; it suffices to only exploit.
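The same guarantee in display form (the final argument "·" of each polynomial stands for the problem-size parameters listed above):

```latex
\[
  \Pr\!\Big[\, U(\pi) \ge U(\pi_T) - \varepsilon
      \ \text{within}\ N = O\!\big(\mathrm{poly}(1/\varepsilon,\, 1/\delta,\, H,\, R_{\max},\, \cdot)\big)
      \ \text{iterations} \Big] \;\ge\; 1 - \delta,
\]
\[
  \text{provided}\quad
  N_{\text{teacher}} = \Omega\!\big(\mathrm{poly}(1/\varepsilon, 1/\delta, H, R_{\max}, \cdot)\big),
  \qquad
  N_{\text{exploit}} = \Omega\!\big(\mathrm{poly}(1/\varepsilon, 1/\delta, H, R_{\max}, \cdot)\big).
\]
```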

16 Proof idea. From the initial pilot demonstrations, our model/simulator P_sa will be accurate for the part of the state space (s,a) visited by the pilot. Our model/simulator will correctly predict the helicopter's behavior under the pilot's policy π_T. Consequently, there is at least one policy (namely π_T) that looks capable of flying the helicopter well in our simulation. Thus, each time we solve the MDP using the current model/simulator P_sa, we will find a policy that successfully flies the helicopter according to P_sa. If, on the actual helicopter, this policy fails to fly the helicopter, despite the model P_sa predicting that it should, then it must be visiting parts of the state space that are inaccurately modeled. Hence, we get useful training data to improve the model. This can happen only a small number of times.

17 Learning with non-IID samples. (IID = independent and identically distributed.) In our algorithm, all future states depend on the current state; exploitation policies depend on the states visited; states visited depend on past exploitation policies; so exploitation policies depend on past exploitation policies. This is a very complicated non-IID sample-generating process, and standard learning-theory convergence bounds (e.g., Hoeffding inequalities) cannot be used in our setting. Instead we use martingales, Azuma's inequality, and the optional stopping theorem.

18 Related Work. Schaal & Atkeson, 1994: open-loop policy as a starting point for devil-sticking, with slow exploration of the state space. Smart & Kaelbling, 2000: model-free Q-learning, with initial updates based on a teacher. Supervised learning of a policy from demonstration, e.g., Sammut et al. (1992); Pomerleau (1989); Kuniyoshi et al. (1994); Amit & Mataric (2002), … Apprenticeship learning for an unknown reward function (Abbeel & Ng, 2004).

19 Conclusion. Reinforcement learning in systems with unknown dynamics: algorithms such as E^3 (Kearns and Singh, 2002) learn the dynamics by using exploration policies, which are dangerous/impractical for many systems. We show that this explicit exploration step is unnecessary in apprenticeship learning, when we have an initial teacher demonstration of the task. We attain near-optimal performance (compared to the teacher) simply by repeatedly executing "exploitation policies" that try to maximize rewards. In finite-state MDPs, our algorithm scales polynomially in the number of states; in continuous-state linearly parameterized dynamical systems, it scales polynomially in the dimension of the state space.

20 End of talk; additional slides for the poster follow.

21 Samples from teacher. Dynamics model: s_{t+1} = G φ(s_t) + H a_t + w_t. Parameter estimates after k samples: (G^(k), H^(k)) = arg min_{G,H} loss^(k)(G,H) = arg min_{G,H} Σ_{t=0}^{k} || s_{t+1} − (G φ(s_t) + H a_t) ||^2. Consider Z^(k) = loss^(k)(G,H) − E[loss^(k)(G,H)]. Then E[Z^(k) | history up to time k−1] = Z^(k−1), so Z^(0), Z^(1), … is a martingale sequence. Using Azuma's inequality (a standard martingale result) we prove convergence.
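For reference, Azuma's inequality in the bounded-difference form used for such martingales (the constants c_i are not given on the slide; they come from bounding how much one additional sample can change the loss):

```latex
\[
  \Pr\big[\, |Z^{(k)} - Z^{(0)}| \ge t \,\big]
  \;\le\; 2\exp\!\left( \frac{-t^2}{2\sum_{i=1}^{k} c_i^2} \right)
  \qquad \text{whenever } |Z^{(i)} - Z^{(i-1)}| \le c_i \ \text{for all } i.
\]
```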

22 Samples from exploitation policies. Consider Z^(k) = exp( loss^(k)(G*, H*) − loss^(k)(G, H) ). Then E[Z^(k) | history up to time k−1] = Z^(k−1), so Z^(0), Z^(1), … is a martingale sequence. Using the optional stopping theorem (a standard martingale result) we prove that the true parameters G*, H* outperform G, H with high probability, for all k = 0, 1, …
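One standard consequence of the optional stopping theorem that yields bounds holding simultaneously for all k (how the paper instantiates it is not shown on the slide): for a nonnegative martingale Z^(0), Z^(1), …,

```latex
\[
  \Pr\Big[\, \exists\, k \ge 0 :\ Z^{(k)} \ge a \,\Big]
  \;\le\; \frac{\mathbb{E}\big[Z^{(0)}\big]}{a}
  \qquad \text{for every } a > 0,
\]
```

obtained by stopping the martingale at the first time it reaches the level a.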

