Apprenticeship Learning for Robotic Control, with Applications to Quadruped Locomotion and Autonomous Helicopter Flight Pieter Abbeel Stanford University.

1 Apprenticeship Learning for Robotic Control, with Applications to Quadruped Locomotion and Autonomous Helicopter Flight Pieter Abbeel Stanford University (Variation hereof) presented at Cornell/USC/UCSD/Michigan/UNC/Duke/UCLA/UW/EPFL/Berkeley/CMU Winter/Spring 2008 In collaboration with: Andrew Y. Ng, Adam Coates, J. Zico Kolter, Morgan Quigley, Dmitri Dolgov, Sebastian Thrun.

2

3 Big picture and key challenges. Dynamics model P_sa: a probability distribution over next states given the current state and action. Reward function R: describes the desirability of being in a state. Reinforcement learning / optimal control combines these to produce a controller/policy π, which prescribes the action to take in each state. Key challenges: providing a formal specification of the control task, building a good dynamics model, and finding closed-loop controllers.
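
In symbols, the diagram corresponds to the standard discounted optimal-control objective (a textbook formulation; the discount factor γ is implicit on the slide):

```latex
% Standard discounted objective implied by the diagram; gamma is assumed.
\[
  \pi^{\star} \;=\; \arg\max_{\pi}\;
  \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t} R(s_t) \;\middle|\; \pi\right],
  \qquad s_{t+1} \sim P_{s_t a_t}(\cdot), \quad a_t = \pi(s_t).
\]
```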

4 Overview. Apprenticeship learning algorithms leverage expert demonstrations to learn to perform a desired task, with formal guarantees on running time, sample complexity, and the performance of the resulting controller. They enabled us to solve highly challenging, previously unsolved, real-world control problems in quadruped locomotion and autonomous helicopter flight.

5 Example task: driving

6 Problem setup. Input: a dynamics model / simulator P_sa(s_{t+1} | s_t, a_t), no reward function, and a teacher's demonstration s_0, a_0, s_1, a_1, s_2, a_2, … (a trace of the teacher's policy π*). Desired output: a policy π which (ideally) has performance guarantees, i.e., E[∑_t R*(s_t) | π] ≥ E[∑_t R*(s_t) | π*] − ε. Note: R* is unknown.

7 Prior work: behavioral cloning. Formulate imitation as a standard machine learning problem: fix a policy class (e.g., support vector machine, neural network, decision tree, deep belief net, …) and estimate a policy from the training examples (s_0, a_0), (s_1, a_1), (s_2, a_2), … E.g., Pomerleau, 1989; Sammut et al., 1992; Kuniyoshi et al., 1994; Demiris & Hayes, 1994; Amit & Mataric, 2002.
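
As a concrete illustration, a minimal behavioral-cloning sketch in Python; the classifier choice, toy data, and shapes are assumptions for illustration, not from the talk:

```python
# Minimal behavioral-cloning sketch: fit a classifier that maps states to the
# teacher's actions. Toy data; any of the policy classes on the slide would do.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
states = rng.normal(size=(500, 4))            # s_0, s_1, ... (feature vectors)
actions = (states[:, 0] > 0).astype(int)      # a_0, a_1, ... (teacher's actions)

policy = SVC(kernel="rbf").fit(states, actions)   # the fixed policy class
print(policy.predict(states[:5]))                 # imitate on new states
```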

8 Limitations of behavioral cloning: it fails to provide strong performance guarantees, and it rests on an underlying assumption of policy simplicity.

9 Problem structure. In the diagram (dynamics model P_sa, reward function R, reinforcement learning / optimal control, controller/policy π), the policy, which prescribes the action to take for each state, is typically very complex, while the reward function is often fairly succinct.

10 Apprenticeship learning [Abbeel & Ng, 2004]. Key idea: learn through reward functions rather than directly learning policies. Assume the reward is linear in known features, R_w(s) = w^T φ(s). Initialize: pick some controller π_0. Iterate for i = 1, 2, …: "Guess" the reward function: find a reward function such that the teacher maximally outperforms all previously found controllers. Find the optimal control policy π_i for the current guess of the reward function R_w. If the teacher's margin over the policies found so far falls below a tolerance, i.e., there is no reward function for which the teacher significantly outperforms the thus-far found policies, exit the algorithm.
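
A runnable sketch of the projection variant of this algorithm (Abbeel & Ng, 2004) on a toy random MDP; the dynamics, one-hot features, and all constants are illustrative assumptions, not the paper's experiments:

```python
# Apprenticeship learning via inverse RL, projection variant, on a toy MDP.
import numpy as np

n_states, n_actions, gamma = 16, 4, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
phi = np.eye(n_states)                                            # features phi(s)

def solve_mdp(w, iters=300):
    """Value iteration for R_w(s) = w . phi(s); returns a greedy policy."""
    R, V = phi @ w, np.zeros(n_states)
    for _ in range(iters):
        Q = R[:, None] + gamma * np.einsum('asn,n->sa', P, V)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def feature_expectations(policy):
    """Discounted expected feature counts mu(pi), uniform start distribution."""
    P_pi = P[policy, np.arange(n_states), :]        # rows: P(s' | s, pi(s))
    d0 = np.full(n_states, 1.0 / n_states)
    d = np.linalg.solve(np.eye(n_states) - gamma * P_pi.T, d0)
    return phi.T @ d

# Stand-in teacher: optimal for a hidden reward; we observe only its mu_E.
mu_E = feature_expectations(solve_mdp(rng.standard_normal(n_states)))

policy = solve_mdp(rng.standard_normal(n_states))   # pick some controller pi_0
mu_bar = feature_expectations(policy)
for i in range(50):
    w = mu_E - mu_bar       # reward under which the teacher most outperforms us
    if np.linalg.norm(w) < 1e-4:  # teacher no longer significantly better: exit
        break
    policy = solve_mdp(w)         # optimal policy pi_i for current guess R_w
    mu = feature_expectations(policy)
    a = mu - mu_bar               # projection step toward the teacher's mu_E
    if a @ a < 1e-12:
        break
    mu_bar = mu_bar + ((a @ (mu_E - mu_bar)) / (a @ a)) * a
print(f"stopped after {i + 1} iterations")
```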

11 Theoretical guarantees. Guarantee w.r.t. the unrecoverable reward function of the teacher. Sample complexity does not depend on the complexity of the teacher's policy π*.

12 Related work. Prior work: behavioral cloning (covered earlier); utility elicitation / inverse reinforcement learning (Ng & Russell, 2000), which lacks strong performance guarantees. Closely related later work: Ratliff et al., 2006, 2007; Neu & Szepesvari, 2007; Syed & Schapire, 2008. Work on the specialized reward-function case of trajectories: e.g., Atkeson & Schaal, 1997.

13 Highway driving. Videos: the teacher in the training world; the learned policy in the testing world. Input: a dynamics model / simulator P_sa(s_{t+1} | s_t, a_t) and the teacher's demonstration: 1 minute in the "training world". Note: R* is unknown. Reward features: 5 features corresponding to lanes/shoulders, and 10 features corresponding to the presence of another car in the current lane at different distances.
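
One plausible encoding of those 15 reward features (a hypothetical numpy sketch; the exact feature definitions and bin sizes were not given in the talk):

```python
# Hypothetical encoding of the slide's 15 driving reward features:
# 5 indicators for which lane/shoulder the car occupies, plus 10 indicators
# for the nearest car in our lane falling into one of 10 distance bins.
import numpy as np

def driving_features(lane_index, dist_to_car_ahead,
                     n_lanes=5, n_bins=10, bin_size=10.0):
    phi = np.zeros(n_lanes + n_bins)
    phi[lane_index] = 1.0                     # lane/shoulder indicator
    b = int(dist_to_car_ahead // bin_size)
    if b < n_bins:                            # another car present within range
        phi[n_lanes + b] = 1.0
    return phi

print(driving_features(lane_index=2, dist_to_car_ahead=37.0))
```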

14 More driving examples. In each video, the left sub-panel ("Driving demonstration") shows a demonstration of a different driving "style", and the right sub-panel ("Learned behavior") shows the behavior learned from watching the demonstration.

15 Parking lot navigation [Abbeel et al., submitted]. The reward function trades off curvature, smoothness, distance to obstacles, and alignment with the principal directions.
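
Spelled out, a natural reading of that trade-off is a weighted sum of the named terms along the trajectory (a hedged reconstruction; the exact functional form was not shown):

```latex
% Assumed linear trade-off over the cost terms named on the slide.
\[
  J(\text{trajectory}) \;=\; \sum_{t} \Big(
      w_{1}\,c_{\text{curvature}}(s_t)
    + w_{2}\,c_{\text{smoothness}}(s_t)
    + w_{3}\,c_{\text{obstacles}}(s_t)
    + w_{4}\,c_{\text{alignment}}(s_t) \Big),
  \quad w \text{ learned from the demonstration.}
\]
```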

16 Experimental setup. Demonstrate parking lot navigation on the "train parking lot". Run our apprenticeship learning algorithm to find a set of learned reward weights w. Receive the "test parking lot" map plus a starting point and destination. Find a policy for navigating the test parking lot.

17 Learned controller

18 Quadruped [Kolter, Abbeel & Ng, 2008]. The reward function trades off 25 features.

19 Experimental setup. Demonstrate a path across the "training terrain". Run our apprenticeship learning algorithm to find a set of learned reward weights w. Receive the "testing terrain" height map. Find a policy for crossing the testing terrain.

20 Without learning

21 With learned reward function

22 Apprenticeship learning: learn R. The teacher's flight (s_0, a_0, s_1, a_1, …) is used to learn the reward function R; reinforcement learning / optimal control with the dynamics model P_sa then produces the controller π.

23 Learn R (continued): given the reward function R learned from the teacher's flight (s_0, a_0, s_1, a_1, …) and the dynamics model P_sa, the reinforcement learning / optimal control step produces the controller π.

24 Motivating example: obtaining an accurate dynamics model P_sa. One route: a textbook model built from the specification. Another: collect flight data and learn the model from the data. But then: how to fly for data collection? And how to ensure that the entire flight envelope is covered?

25 Aggressive (manual) exploration

26 Desired properties. Never any explicit exploration (neither manual nor autonomous). Near-optimal performance (compared to the teacher). Small number of teacher demonstrations. Small number of autonomous trials.

27 Apprenticeship learning of the model: learn P_sa. Both the teacher's flight (s_0, a_0, s_1, a_1, …) and the autonomous flights (s_0, a_0, s_1, a_1, …) provide data for learning the dynamics model P_sa; reinforcement learning / optimal control with the reward function R then yields the controller π.

28 Theoretical guarantees [Abbeel & Ng, 2005]: no explicit exploration is required.
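
A runnable sketch of this loop on a toy linear-quadratic problem; the true dynamics, noise level, LQR setting, and thresholds are all stand-in assumptions (the actual work uses learned helicopter models):

```python
# Sketch of apprenticeship learning of the model (Abbeel & Ng, 2005): fit the
# dynamics to the teacher's data, fly the controller that is optimal under the
# fitted model, and add our own flight data whenever we underperform.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])     # real dynamics (unknown to us)
B_true = np.array([[0.0], [0.1]])
Qc, Rc = np.eye(2), np.eye(1)                   # LQR cost matrices

def lqr_gain(A, B):
    S = solve_discrete_are(A, B, Qc, Rc)
    return np.linalg.solve(Rc + B.T @ S @ B, B.T @ S @ A)   # u = -K x

def rollout(K, T=50):
    x, X, U, Xn = np.array([1.0, 0.0]), [], [], []
    for _ in range(T):
        u = -K @ x
        xn = A_true @ x + B_true @ u + 0.01 * rng.normal(size=2)
        X.append(x); U.append(u); Xn.append(xn); x = xn
    return np.array(X), np.array(U), np.array(Xn)

def fit_model(X, U, Xn):
    Z = np.hstack([X, U])                       # least-squares [A B] fit
    W, *_ = np.linalg.lstsq(Z, Xn, rcond=None)
    return W.T[:, :2], W.T[:, 2:]

cost = lambda X, U: np.sum(X * X) + np.sum(U * U)

K_star = lqr_gain(A_true, B_true)               # stand-in for the human pilot
X, U, Xn = rollout(K_star)                      # teacher demonstrations only
teacher_cost = cost(X, U)

for trial in range(10):
    K = lqr_gain(*fit_model(X, U, Xn))          # optimal under current model
    Xa, Ua, Xna = rollout(K)                    # autonomous flight
    if cost(Xa, Ua) <= 1.1 * teacher_cost:      # near-teacher performance: done
        print(f"trial {trial}: near-optimal, no explicit exploration used")
        break
    # We underperformed, so we visited poorly modeled states; this new data
    # improves the model exactly where it was wrong (the proof idea, slide 50).
    X, U, Xn = np.vstack([X, Xa]), np.vstack([U, Ua]), np.vstack([Xn, Xna])
```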

29 Apprenticeship learning summary: learn R from the teacher's flight (s_0, a_0, s_1, a_1, …) and learn P_sa from both the teacher's flight and the autonomous flights; reinforcement learning / optimal control then produces the controller π.

30 Other relevant parts of the story [Abbeel et al. 2005, 2006a, 2006b, 2007]. Learning the dynamics model from data: locally weighted models; exploiting structure from physics; simulation accuracy at the time-scales relevant for control. Reinforcement learning / optimal control: model predictive control; receding-horizon differential dynamic programming.
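
For the "locally weighted models" item, a minimal locally weighted linear regression sketch (the Gaussian kernel, bandwidth, and toy data are assumptions, not the helicopter implementation):

```python
# Locally weighted linear regression: fit a separate weighted least-squares
# model around each query point, so the model adapts to the local regime.
import numpy as np

def lwr_predict(X, y, x_query, tau=0.5):
    Xb = np.hstack([X, np.ones((len(X), 1))])        # add bias column
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))
    WX = Xb * w[:, None]                             # kernel-weighted rows
    theta = np.linalg.solve(Xb.T @ WX + 1e-8 * np.eye(Xb.shape[1]), WX.T @ y)
    return np.append(x_query, 1.0) @ theta

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
print(lwr_predict(X, y, np.array([1.0])))            # approx sin(1.0) = 0.84
```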

31 Related work Bagnell & Schneider, 2001; LaCivita, Papageorgiou, Messner & Kanade, 2002; Ng, Kim, Jordan & Sastry 2004a (2001); Roberts, Corke & Buskey, 2003; Saripalli, Montgomery & Sukhatme, 2003; Shim, Chung, Kim & Sastry, 2003; Doherty et al., 2004. Gavrilets, Martinos, Mettler and Feron, 2002; Ng et al., 2004b. Maneuvers presented here are significantly more challenging than those flown by any other autonomous helicopter.

32 Autonomous aerobatic flips (attempt) before apprenticeship learning. Task description: meticulously hand-engineered. Model: learned from (commonly used) frequency-sweep data.

33 Experimental setup for helicopter. 1. Our expert pilot demonstrates the airshow several times.

34 Demonstrations

35 Experimental setup for helicopter. 1. Our expert pilot demonstrates the airshow several times. 2. Learn a reward function (a target trajectory). 3. Learn a dynamics model.

36 Learned reward (trajectory)

37 Experimental setup for helicopter. 1. Our expert pilot demonstrates the airshow several times. 2. Learn a reward function (a target trajectory). 3. Learn a dynamics model. 4. Find the optimal control policy for the learned reward and dynamics model. 5. Autonomously fly the airshow. 6. Learn an improved dynamics model; go back to step 4.

38

39 Accuracy. White: target trajectory. Black: autonomously flown trajectory.

40 Summary. Apprenticeship learning algorithms learn to perform a task from observing expert demonstrations of the task, with formal guarantees on running time, sample complexity, and the performance of the resulting controller. They enabled us to solve highly challenging, previously unsolved, real-world control problems.

41 Current and future work. Applications: autonomous helicopters to assist in wildland fire fighting; fixed-wing formation flight, with estimated fuel savings of 20% for a three-aircraft formation. Learning from demonstrations only scratches the surface of the potential impact of work at the intersection of machine learning and control on robotics. Safe autonomous learning. More general advice taking.

42

43 Thank you.

44 Chaos (short)

45 Chaos (long)

46 Flips

47 Auto-rotation descent

48 Tic-toc

49 Autonomous nose-in funnel

50 Model learning: proof idea. From initial pilot demonstrations, our model/simulator P_sa will be accurate for the part of the state space (s, a) visited by the pilot. Our model/simulator will correctly predict the helicopter's behavior under the pilot's controller π*. Consequently, there is at least one controller (namely π*) that looks capable of flying the helicopter well in our simulation. Thus, each time we solve for the optimal controller using the current model/simulator P_sa, we will find a controller that successfully flies the helicopter according to P_sa. If, on the actual helicopter, this controller fails to fly the helicopter, despite the model P_sa predicting that it should, then it must be visiting parts of the state space that are inaccurately modeled. Hence, we get useful training data to improve the model. This can happen only a small number of times.

51 Apprenticeship Learning via Inverse Reinforcement Learning, Pieter Abbeel and Andrew Y. Ng. In Proc. ICML, 2004.
Learning First Order Markov Models for Control, Pieter Abbeel and Andrew Y. Ng. In NIPS 17, 2005.
Exploration and Apprenticeship Learning in Reinforcement Learning, Pieter Abbeel and Andrew Y. Ng. In Proc. ICML, 2005.
Modeling Vehicular Dynamics, with Application to Modeling Helicopters, Pieter Abbeel, Varun Ganapathi and Andrew Y. Ng. In NIPS 18, 2006.
Using Inaccurate Models in Reinforcement Learning, Pieter Abbeel, Morgan Quigley and Andrew Y. Ng. In Proc. ICML, 2006.
An Application of Reinforcement Learning to Aerobatic Helicopter Flight, Pieter Abbeel, Adam Coates, Morgan Quigley and Andrew Y. Ng. In NIPS 19, 2007.
Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion, J. Zico Kolter, Pieter Abbeel and Andrew Y. Ng. In NIPS 20, 2008.

52 Helicopter dynamics model in auto

53 Performance guarantee intuition. Intuition by example: let R*(s) = w_1 φ_1(s) + w_2 φ_2(s). If the returned controller π matches the teacher's feature expectations, E[∑_t φ_1(s_t) | π] = E[∑_t φ_1(s_t) | π*] and E[∑_t φ_2(s_t) | π] = E[∑_t φ_2(s_t) | π*], then no matter what the values of w_1 and w_2 are, the controller π performs as well as the teacher's controller π*.

54 Probabilistic graphical model for multiple demonstrations

55 Full model

56 Learning algorithm. Step 1: find the time-warping and the distributional parameters. We use EM, with dynamic time warping, to alternately optimize over the different parameters. Step 2: find the intended trajectory.
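
A minimal dynamic time warping sketch for the alignment part of step 1 (the EM updates for the distributional parameters are omitted; the toy signals stand in for two demonstrations of the same maneuver):

```python
# Dynamic time warping: align two demonstrations executed at different tempos.
import numpy as np

def dtw(a, b):
    """Return the DTW cost table; D[-1, -1] is the alignment cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D

demo1 = np.sin(np.linspace(0, 2 * np.pi, 60))   # one demonstration
demo2 = np.sin(np.linspace(0, 2 * np.pi, 80))   # same maneuver, slower tempo
print(dtw(demo1, demo2)[-1, -1])                # small cost despite the warp
```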

57 Teacher demonstration for quadruped. A full teacher demonstration is a sequence of footsteps. It is much simpler to "teach hierarchically": specify a body path, and specify the best footstep within a small area.

58 Hierarchical inverse RL. Formulated as a quadratic programming problem (QP): quadratic objective, linear constraints, with constraint generation for the path constraints.
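
A small sketch of the footstep-ranking QP in cvxpy; the random stand-in features, margin, and slack penalty are assumptions, and the constraint-generation loop for path constraints is only indicated in a comment:

```python
# Sketch of the hierarchical inverse-RL QP: find reward weights w under which
# each labeled best footstep scores higher than its alternatives by a margin.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n_labels, n_alt, d = 20, 8, 6                  # ~20 labels, as on slide 59
phi_best = rng.normal(size=(n_labels, d))      # features of labeled footsteps
phi_alt = rng.normal(size=(n_labels, n_alt, d))

w = cp.Variable(d)
xi = cp.Variable(n_labels, nonneg=True)        # slack for noisy labels
constraints = [
    phi_best[i] @ w >= phi_alt[i, j] @ w + 1 - xi[i]
    for i in range(n_labels) for j in range(n_alt)
]
# (Constraint generation: plan with the current w, then append any violated
#  path constraints to the same list and re-solve.)
prob = cp.Problem(cp.Minimize(cp.sum_squares(w) + 10.0 * cp.sum(xi)), constraints)
prob.solve()
print(w.value)
```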

59 Experimental setup. Training: have the quadruped walk straight across a fairly simple board with fixed-spaced foot placements. Around each foot placement, label the best foot placement (about 20 labels). Label the best body path for the training board. Use our hierarchical inverse RL algorithm to learn a reward function from the footstep and path labels. Test on held-out terrains: plan a path across the test board.

