
2 Control and Decision Making in Uncertain Multi-agent Hierarchical Systems: A Case Study in Learning and Approximate Dynamic Programming. PI Meeting, August 1st, 2002. Shankar Sastry, University of California, Berkeley

3 1 Outline  Hierarchical architecture for multiagent operations  Confronting uncertainty  Partial observation Markov games (POMgame)  Model predictive techniques for dynamic replanning

4 2 Partial-observation Probabilistic Pursuit-Evasion Game (PEG) with 4 UGVs and 1 UAV Fully autonomous operation

5 3 Uncertainty pervades every layer! Hierarchy in Berkeley Platform [Block diagram: Strategy Planner and Map Builder (positions of agents, obstacles, and targets) exchange desired agent actions and detections over the Communications Network with the per-vehicle Tactical Planner & Regulation (tactical planner, trajectory planner, regulation) and Vehicle-level sensor fusion (INS, GPS, vision, ultrasonic altimeter, encoders); control signals drive the UAV/UGV dynamics, which interact with terrain and targets under exogenous disturbances.]

6 4 Human Interface [Ground station display: operator commands, current position and vehicle stats, and the evader location detected by the vision system]

7 5 Representing and Managing Uncertainty  Uncertainty is introduced in various channels –Sensing -> unable to determine the current state of the world –Prediction -> unable to infer the future state of the world –Actuation -> unable to make the desired action to properly affect the state of the world  Different types of uncertainty can be addressed by different approaches –Nondeterministic uncertainty : Robust Control –Probabilistic uncertainty : (Partially Observable) Markov Decision Processes –Adversarial uncertainty : Game Theory  POMGame

8 6 Markov Games  Framework for sequential multiagent interaction in a Markov environment

9 7 Policy for Markov Games  The policy of agent i at time t is a mapping from the current state to a probability distribution over its action set.  Agent i wants to maximize –the expected discounted infinite sum of rewards that the agent will gain by executing the optimal policy starting from that state –where gamma is the discount factor and r_t is the reward received at time t  Performance measure: V(s) = E[ sum_t gamma^t r_t ]  Every discounted Markov game has at least one stationary optimal policy, but not necessarily a deterministic one.  Special case : Markov decision processes (MDP) –Can be solved by dynamic programming
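As a concrete illustration of the dynamic-programming special case, here is a minimal value-iteration sketch (illustrative only; the transition tensor P and reward matrix R are hypothetical inputs, not part of the Berkeley platform):

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Dynamic programming for a finite MDP (the special case noted above).

    P[a][s, s'] : probability of moving from s to s' under action a
    R[s, a]     : expected immediate reward
    Returns the optimal value function and a greedy deterministic policy.
    """
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s' P[a][s, s'] * V[s']
        Q = R + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```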

10 8 Partial Observation Markov Games (POMGame)

11 9 Policy for POMGames  Agent i wants to receive at least …  Poorly understood: analysis exists only for very specially structured games, such as a game with complete information on one side  Special case : partially observable Markov decision processes (POMDP)

12 10 Acting under Partial Observations  Memory-free policies (mapping from observation to action or to a probability distribution over the action set) are not satisfactory.  In order to behave truly effectively we need to use memory of previous actions and observations to disambiguate the current state.  The state estimate, or belief state –Posterior probability distribution over states = the likelihood that the world is actually in state x at time t, given the agent's past experience (i.e., action and observation histories). –A priori human input can supply the initial belief on the state of the world.

13 11 Updating Belief State –The belief state can be updated recursively using the estimated world model and Bayes' rule: the observation brings new information on the state of the world, and the action model brings new information for the prediction.
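A minimal sketch of this recursive Bayes update for a finite state space; the transition model P and observation model O are assumed to be known arrays (hypothetical names, not the project's code):

```python
import numpy as np

def belief_update(belief, action, observation, P, O):
    """One step of recursive Bayesian filtering over a finite state space.

    belief[s]        : prior probability that the world is in state s
    P[action][s, s'] : estimated transition model
    O[action][s', o] : probability of observing o after `action` lands the world in s'
    """
    predicted = P[action].T @ belief                    # prediction step (new info on prediction)
    posterior = O[action][:, observation] * predicted   # correction step (new info from observation)
    return posterior / posterior.sum()                  # normalize to a probability distribution
```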

14 12 Pursuit-Evasion Games  Consider approach in Hespanha, Kim and Sastry –Multiple pursuers catching one single evader –Pursuers can only move to adjacent empty cells –Pursuers have perfect knowledge of current location –Sensor model: false positives (p) and negatives (q) for evader detection –Evader moves randomly to adjacent cells  Extensions in Rashid and Kim –Multiple evaders: assuming each one is recognized individually –Supervisory agents: can “fly” over obstacles and evaders, cannot capture –Sensor model for obstacle detection as well

15 13 BEAR Pursuit-Evasion Scenario Evade!

16 14 Problem Formulation

17 15 Optimal Pursuit Policy  Performance measure : capture time  The optimal policy minimizes this cost

18 16 Optimal Pursuit Policy  Cost-to-go for a policy, when the pursuers start with Y_t = Y and a conditional distribution for the state x(t)  Cost of a policy

19 17 Persistent pursuit policies  Optimization using dynamic programming is computationally intensive.  Persistent pursuit policy g

20 18 Persistent pursuit policies  Persistent pursuit policy g with a period T

21 19 Pursuit Policies Greedy Policy –Pursuer moves to the cell with the highest probability of containing an evader at the next instant –Assigns more importance to local or immediate considerations than the strategic planner does –u(v) : list of cells that are reachable from the current pursuer position v in a single time step.
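A rough sketch of the greedy rule on a grid, assuming a belief map over evader positions (evader_map) is already maintained; the 4-connected move set and obstacle handling are illustrative assumptions:

```python
import numpy as np

def greedy_move(pursuer_pos, evader_map, obstacles, grid_shape):
    """Greedy pursuit: move to the cell in u(v) (cells reachable in one step,
    including staying put in this sketch) with the highest probability of
    containing the evader at the next instant."""
    x, y = pursuer_pos
    candidates = [(x, y)]
    for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        nx, ny = x + dx, y + dy
        inside = 0 <= nx < grid_shape[0] and 0 <= ny < grid_shape[1]
        if inside and (nx, ny) not in obstacles:
            candidates.append((nx, ny))
    return max(candidates, key=lambda c: evader_map[c])
```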

22 20 Persistent Pursuit Policies for unconstrained motion Theorem 1, for unconstrained motion  The greedy policy is persistent. ->The probability of the capture time being finite is equal to one ->The expected value of the capture time is finite

23 21 Persistent Pursuit Policies for constrained motion  Assumptions: 1. For any … 2. …  Theorem 2, for constrained motion  There is an admissible pursuit policy that is persistent on the average with period …

24 22 Experimental Results: Pursuit Evasion Games with 4 UGVs (Spring '01)

25 23 Experimental Results: Pursuit Evasion Games with 4 UGVs and 1 UAV (Spring '01)

26 24 Pursuit-Evasion Game Experiment  PEG with four UGVs  Global-max pursuit policy  Simulated camera view (radius 7.5 m with 50-degree conic view)  Pursuer = 0.3 m/s, Evader = 0.5 m/s

27 25 Pursuit-Evasion Game Experiment  PEG with four UGVs  Global-max pursuit policy  Simulated camera view (radius 7.5 m with 50-degree conic view)  Pursuer = 0.3 m/s, Evader = 0.5 m/s

28 26 Experimental Results: Evaluation of Policies for Different Visibility  The global-max policy performs better than greedy, since the greedy policy selects movements based only on local considerations.  Both policies perform better with the trapezoidal view, since the camera rotates fast enough to compensate for the narrow field of view. [Figure: capture time of greedy and global-max for different regions of visibility of the pursuers; 3 pursuers with trapezoidal or omni-directional view; randomly moving evader]

29 27 Experimental Results: Evader's Speed vs. Intelligence  A more intelligent evader increases the capture time.  It is harder to capture an intelligent evader at a higher speed.  The capture time of a fast random evader is shorter than that of a slower random evader when the evader's speed is only slightly higher than the pursuers'. [Figure: capture time for different speeds and levels of intelligence of the evader; 3 pursuers with trapezoidal view and global-max policy; max pursuer speed 0.3 m/s]

30 28 Game-theoretic Policy Search Paradigm  Solving very small games with partial information, or games with full information, is sometimes computationally tractable  Many interesting games, including pursuit-evasion, are large games with partial information, and finding optimal solutions is well outside the capability of current algorithms  An approximate solution is not necessarily bad; there might be simple policies with satisfactory performance -> Choose a good policy from a restricted class of policies!  We can find approximately optimal solutions from restricted classes, using sparse sampling and a provably convergent policy search algorithm

31 29 Constructing A Policy Class  Given a mission with specific goals, we –decompose the problem in terms of the functions that need to be achieved for success and the means that are available –analyze how a human team would solve the problem –determine a list of important factors that complicate task performance such as safety or physical constraints  Maximize aerial coverage,  Stay within a communications range,  Penalize actions that lead an agent to a danger zone,  Maximize the explored region,  Minimize fuel usage, …

32 30 Policy Representation  Quantify the above features and define a feature vector that consists of the estimates of these quantities for each action, given the agents' history  Estimate the 'goodness' of each action as a weighted score of its features, where the weighting vector is to be learned  Choose the action that maximizes this score, or choose a randomized action according to a distribution over the scores whose spread sets the degree of exploration
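A minimal sketch of this representation, assuming the 'goodness' score is the inner product of a feature vector with the learned weight vector and that the randomized choice is a Boltzmann (softmax) distribution whose temperature acts as the degree of exploration; the names are illustrative:

```python
import numpy as np

def softmax_policy(features, w, temperature=1.0, rng=None):
    """features[a] : feature vector for action a, given the agents' history
    w            : weighting vector to be learned
    Samples an action from the softmax distribution over scores; as the
    temperature goes to zero this approaches the greedy (argmax) choice."""
    scores = features @ w                      # 'goodness' of each action
    z = scores / temperature
    z = z - z.max()                            # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    rng = rng or np.random.default_rng()
    return rng.choice(len(probs), p=probs), probs
```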

33 31 Policy Search Paradigm  Searching for optimal policies is very difficult, even though there might be simple policies with satisfactory performance.  Choose a good policy from a restricted class of policies!  Policy Search Problem

34 32 PEGASUS (Ng & Jordan, 00)  Given a POMDP, and assuming a deterministic simulator, we can construct an equivalent POMDP with deterministic transitions.  For each policy π ∈ Π for the original POMDP we can construct an equivalent policy π′ ∈ Π′ for the transformed POMDP such that they have the same value function, i.e. V(π) = V′(π′).  It therefore suffices to find a good policy for the transformed POMDP.  The value function can be approximated by a deterministic function: m_s samples are drawn once and reused to compute the value function for each candidate policy. -> Then we can use standard optimization techniques to search for an approximately optimal policy.
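A sketch of the scenario-reuse idea, with a hypothetical simulate(weights, seed, horizon, gamma) interface standing in for the deterministic simulator and a simple random local search standing in for "standard optimization techniques"; none of this is the actual PEGASUS implementation:

```python
import numpy as np

def pegasus_value(weights, simulate, seeds, horizon, gamma):
    """Evaluate a candidate policy (here just a parameter vector) on the same
    m_s fixed scenarios (random seeds). Because the scenarios are reused, the
    estimate is a deterministic function of the policy parameters."""
    return np.mean([simulate(weights, s, horizon, gamma) for s in seeds])

def local_search(init_w, simulate, seeds, horizon=50, gamma=0.95, iters=200, step=0.1):
    """Any standard optimizer could be used on the deterministic objective;
    random local search is shown only to keep the sketch self-contained."""
    rng = np.random.default_rng(0)
    w = init_w.copy()
    best = pegasus_value(w, simulate, seeds, horizon, gamma)
    for _ in range(iters):
        cand = w + step * rng.standard_normal(w.shape)
        val = pegasus_value(cand, simulate, seeds, horizon, gamma)
        if val > best:
            w, best = cand, val
    return w, best
```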

35 33 Performance Guarantee & Scalability  Theorem  We are guaranteed to have a policy with value close enough to the optimal value in the policy class

36 34 Acting under Partial Observations  Computing the value function is very difficult under partial observations.  Naïve approaches for dealing with partial observations: –State-free deterministic policy : mapping from observation to action  Ignores partial observability (i.e., treats observations as if they were the states of the environment)  Finding an optimal mapping is NP-hard, and even the best such policy can have very poor performance or lead the agent into a trap. –State-free stochastic policy : mapping from observation to a probability distribution over actions  Finding an optimal mapping is still NP-hard.  Agents still cannot learn from the reward or penalty received in the past.

37 35 Example: Abstraction of Pursuit-Evasion Game  Consider a partial-observation stochastic pursuit-evasion game in a 2-D grid world, between (heterogeneous) teams of n_e evaders and n_p pursuers.  At each time t, each evader and pursuer, from its current location, –takes an observation over its visibility region –updates its belief state –chooses an action from its action set  Goal: capture of the evader, or survival

38 36 Example: Policy Feature  Maximize collective aerial coverage -> maximize the distance between agents, measured at the locations the pursuers would reach by taking the candidate actions from their current positions  Try to visit an unexplored region with a high possibility of detecting an evader, using the position reached by the action that maximizes the evader-map value along the frontier

39 37 Example: Policy Feature (Continued)  Prioritize actions that are more compatible with the dynamics of the agents  Policy representation

40 38 Benchmarking Experiments  Performance of two pursuit policies compared in terms of capture time  Experiment 1 : two pursuers against an evader that moves greedily with respect to the pursuers' locations

Grid size    1-Greedy pursuers    Optimized pursuers
10 by 10     (7.3, 4.8)           (5.1, 2.7)
20 by 20     (42.3, 19.2)         (12.3, 4.3)

 Experiment 2 : when the position of the evader at each step is detected by the sensor network with only 10% accuracy, two optimized pursuers took 24.1 steps, while the one-step greedy pursuers took over 146 steps on average, to capture the evader in a 30 by 30 grid.

41 39 Modeling RUAV Dynamics [Block diagram relating servo inputs (throttle, longitudinal flapping, lateral flapping, main rotor collective pitch, tail rotor collective pitch), augmented servodynamics, aerodynamic analysis, coordinate transformation, body velocities and angular rates, and the resulting states (position, spatial velocities, angles, angular rates), yielding a tractable nonlinear model]

42 40 Benchmarking Trajectory: PD Controller  An example PD controller fails to achieve nose-in-circle type trajectories.  Nonlinear, coupled dynamics are intrinsic characteristics of pirouette and nose-in-circle trajectories.

43 41 Reinforcement Learning Policy Search Control Design 1. Aerodynamic/kinematic analysis generates a model to identify. 2. Locally weighted Bayesian regression is used for nonlinear stochastic identification: we get the posterior distribution of the parameters, and can easily simulate the posterior predictive distribution to check the fit and robustness. 3. A controller class is defined from the identification process and physical insights, and we apply a policy search algorithm. 4. We obtain approximately optimal controller parameters by reinforcement learning, i.e., training using the flight data and the reward function. 5. Considering the controller performance over a confidence interval of the identification process, we measure the safety and robustness of the control system.
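A minimal sketch of step 2, locally weighted Bayesian linear regression, assuming a Gaussian kernel around the query point and a Gaussian prior on the local parameters; the variable names and hyperparameters are illustrative, not the identification code actually used on the helicopter:

```python
import numpy as np

def lwbr_posterior(X, y, x_query, bandwidth=1.0, prior_var=10.0, noise_var=0.1):
    """Locally weighted Bayesian linear regression at one query point.
    Data near x_query receive higher weight; returns the Gaussian posterior
    (mean, covariance) over the local linear parameters."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2.0 * bandwidth ** 2))
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])          # local linear model with bias term
    W = np.diag(w)
    precision = Xb.T @ W @ Xb / noise_var + np.eye(Xb.shape[1]) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ Xb.T @ W @ y / noise_var
    return mean, cov

def predictive_samples(mean, cov, x_query, n=100, noise_var=0.1, rng=None):
    """Simulate the posterior predictive distribution to check fit and robustness."""
    rng = rng or np.random.default_rng()
    xq = np.append(x_query, 1.0)
    thetas = rng.multivariate_normal(mean, cov, size=n)
    return thetas @ xq + rng.normal(0.0, np.sqrt(noise_var), size=n)
```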

44 42 Performance of RL Controller  Manual vs. Autonomous Hover  Ascent & 360° ×2 pirouette

45 43 Toughest Maneuvers for Rotorcraft UAVs [Diagram of three maneuvers: pirouette, nose-in during circling, heading kept the same; plus any variation of these maneuvers in the x-y direction and any combination of these maneuvers]

46 44 Demo of RL controller doing acrobatic maneuvers (Spring 02)

47 45 More Acrobatic Maneuvers (Spring 02)

48 46 From PEG to More Realistic Battlefield Scenarios  Adversarial attack –Reds do not just evade, but also attack -> Blues cannot blindly pursue reds.  Unknown number/capability of adversaries -> Dynamic selection of the relevant red model from unstructured observation  Deconfliction between layers and teams  Increase the number of features -> Diversify possible solutions when the uncertainty is high

49 47 Why General-sum Games? "All too often in OR dealing with military problems, war is viewed as a zero-sum two-person game with perfect information. Here I must state as forcibly as I know that war is not a zero-sum two-person game with perfect information. Anybody who sincerely believes it is a fool. Anybody who reaches conclusions based on such an assumption and then tries to peddle these conclusions without revealing the quicksand they are constructed on is a charlatan. ... There is, in short, an urgent need to develop positive-sum game theory and to urge the acceptance of its precepts upon our leaders throughout the world." Joseph H. Engel, Retiring Presidential Address to the Operations Research Society of America, October 1969

50 48 General-sum Games  Depending on the cooperation between the players: –Noncooperative –Cooperative  Depending on the least expected payoff that a player is willing to accept: Nash's special/general bargaining solution  By restricting the blue and red policy classes to finite size, we reduce the POMGame to a bimatrix game.

51 49 From POMGame To Bimatrix Game  A bimatrix game usually has multiple Nash equilibria, with different values.
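For illustration only, a sketch that enumerates pure-strategy Nash equilibria of a small bimatrix game; the payoff matrices are made up, and in general equilibria may be mixed, which this sketch does not cover:

```python
import numpy as np

def pure_nash_equilibria(A, B):
    """A[i, j] : payoff to blue (row player), B[i, j] : payoff to red (column player).
    (i, j) is a pure Nash equilibrium if neither player gains by deviating unilaterally."""
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
    return eqs

# Hypothetical 2x2 game with two equilibria of different values, as noted above.
A = np.array([[2, 0], [0, 1]])
B = np.array([[1, 0], [0, 2]])
print(pure_nash_equilibria(A, B))   # [(0, 0), (1, 1)]
```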

52 50 Elucidating Adversarial Intention  The model posterior distribution can be used to predict future observations, or to select the model.  The blue team can then employ the policy such that …  Example implemented : tracking an unknown number of evaders with unknown dynamics using noisy sensors

53 51 Dynamic Bayesian Model Selection  Dynamic Bayesian model selection (DBMS) is a generalized model-selection approach for time series in which the number of components can vary over time.  If K is the number of components at any instant and T is the length of the time series, there are O(2^{KT}) possible models, which demands an efficient algorithm.  The problem is formulated using Bayesian hierarchical modeling and solved using suitably adapted reversible-jump MCMC methods.

54 52 DBMS

55 53 DBMS: Graphical Representation  … – Dirichlet prior  A – transition matrix for m_t  …_t – Dirichlet prior  w_t – component weights  z_t – allocation variable  F – transition dynamics

56 54 DBMS

57 55 DBMS: Multi-target Tracking Example

58 56 [Figure: estimated target positions (+), true target trajectory, and observations]

59 57 [Figure: estimated target positions (+), true target trajectory, and observations]

60 58 Vision-based Landing of an Unmanned Aerial Vehicle Berkeley Researchers: Rene Vidal, Omid Shakernia, Shankar Sastry

61 59 What we have accomplished  Real-time motion estimation algorithms –Algorithms: Linear & Nonlinear two-view, Multi-view  Fully autonomous vision-based control/landing

62 60 Image Processing

63 61 Vision Monitoring Station

64 62 Vision System Hardware  Ampro embedded Little Board PC –Pentium 233 MHz running Linux –Motion estimation, UAV high-level control –Pan/tilt/zoom camera tracks target  Motion estimation algorithms –Written in C++ using LAPACK –Estimate relative position and orientation at 30 Hz –Send control to navigation computer at 10 Hz [Photos: UAV, pan/tilt camera, onboard computer]

65 63 Flight Control System Experiments  Position + heading lock (Dec 1999)  Position + heading lock (May 2000)  Landing scenario with SAS (Dec 1999)  Attitude control with mu-synthesis (July 2000)

66 64 Semi-autonomous Landing (8/01)

67 65 Autonomous Landing (3/02)

68 66 Autonomous Landing (3/02)

69 67 Multi-body Motion Estimation and Segmentation Vidal, Soatto, Sastry

70 68 Multi-body Motion Estimation  Motivation –Conflict detection + resolution + formation flight –Target tracking  Given a set of image points and their flows, obtain: –Number of independently moving objects –Segmentation: object to which each point belongs –Motion: rotation and translation of each object –Structure: depth of each point  Previous work –Orthographic projection camera (Costeira-Kanade '95) –Multiple points moving in a straight line (Shashua-Levin '01)  This work considers full perspective projection, with multiple objects undergoing general motion  Motion is not fooled by camouflage, unlike other segmentation cues (texture, color, etc.)

71 69 Image Measurements  Form optical flow matrices  n = number of feature points, m = number of frames  Optical flow measurements live in a six-dimensional space

72 70 Factorization  For one object, one can factorize the optical-flow matrix into motion and structure components  One can solve linearly for A and Z

73 71 Multiple Moving Objects  For multiple independently moving objects  Obtain the number of independent motions
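A loose illustration of reading the number of independent motions off the rank structure of the stacked flow measurements; the per-object rank (taken from the six-dimensional space mentioned above) and the threshold are assumptions, and this is not the actual multibody factorization algorithm:

```python
import numpy as np

def count_independent_motions(W, rank_per_object=6, rel_tol=1e-3):
    """W stacks the optical-flow measurements of all features over all frames.
    Each independently moving object contributes a low-rank block, so a crude
    estimate of the number of motions is the effective rank of W divided by
    an assumed per-object rank."""
    s = np.linalg.svd(W, compute_uv=False)
    effective_rank = int(np.sum(s > rel_tol * s[0]))
    return max(1, effective_rank // rank_per_object)
```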

74 72 Segmentation  Segmentation of the image points

75 73 Experimental Results

76 74 Experimental Results

77 75 Experimental Results

78 76 A Roadmap for Cooperative Operation of Autonomous Vehicles  John Koo, Shannon Zelinski, Shankar Sastry  Department of EECS, UC Berkeley

79 77 Motivation  Multiple Autonomous Vehicle Applications –Unmanned aerial vehicles perform missions collectively –Satellites for distributed sensing –Autonomous underwater vehicles performing exploration –Autonomous cars forming platoons on roads  Enabling Technologies –Hierarchical control of multi-agents –Distributed Sensing and Actuation –Computation –Communication –Embedded Software

80 78 Formation Flight of Aerial Vehicles  Group Level –Formation Control –Conflict Resolution –Collision Avoidance  Vehicle Level –Vehicle Navigation –Envelope Protection  Design Challenges –Different Levels of Centralization –Multiple Modes of Operation –Organization of Information Flow

81 79 Possible Formations for a UAV mission Line Formation Diamond Formation Loose Formation

82 80 Components of Formation Flight  Formation Generation –Generate a set of feasible formations where each formation satisfies multiple constraints including vehicle dynamics, communication, and sensing capabilities.  Formation Initialization –Given an initial and a final formation for a group of autonomous vehicles, the formation initialization problem is to generate collision-free and feasible trajectories and to derive control laws for the vehicles to track the given trajectories simultaneously in finite time.  Formation Control –Formation control of multiple autonomous vehicles focuses on the control of individual agents to keep them in a formation, while satisfying their dynamic equations and inter-agent formation constraints, for an underlying communication protocol being deployed.

83 81 Components of Formation Flight  Formation Generation –Generate a set of feasible formations where each formation satisfies multiple constraints including vehicle dynamics, communication, and sensing capabilities. [Figure: leader trajectory with formation constraints + dynamic constraints]

84 82 Components of Formation Flight  Formation Initialization –Given an initial and a final formation for a group of autonomous vehicles, the formation initialization problem is to generate collision-free and feasible trajectories and to derive control laws for the vehicles to track the given trajectories simultaneously in finite time. [Figure: transition from line formation to diamond formation]

85 83 Components of Formation Flight  Formation Control –Formation control of multiple autonomous vehicles focuses on the control of individual agents to keep them in a formation, while satisfying their dynamic equations and inter-agent formation constraints, for an underlying communication protocol being deployed.

86 84 Formation Initialization Virtual vehicles Actual vehicles

87 85 Elements Of Formation Flight  Information Resources –Wireless network –Global Positioning System –Inertial Navigation System –Radar System (Local and Active) –Vision System (Local and Passive)

88 86 Loose Formation Flight  GPS provides global positioning information to the vehicles  The wireless network is used to distribute information between vehicles  The navigation computer on each vehicle calculates relative orientation, distance, and velocities [Figure: GPS signals and wireless network links between vehicles]

89 87 Tight Formation Flight  A vision system equipped with an omni-directional camera can track neighboring vehicles  Structure-from-motion algorithms running on the vision system provide estimates of relative orientation, distance, and velocities to the navigation computer

90 88 Hybrid Control Design for Formation Flight –Construct a Formation Mode Graph by considering dynamic and formation constraints. –For each formation, information about the formation is computed offline and stored in each node of the graph. Feasible transitions between formations are specified by edges. –Given an initial formation, any feasible formation can be searched for efficiently on the graph.
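A small sketch of searching the formation mode graph, assuming it has already been built offline as an adjacency map; the formation names are hypothetical:

```python
from collections import deque

def feasible_formation_sequence(mode_graph, start, goal):
    """Nodes are precomputed formations, edges are feasible transitions.
    Breadth-first search returns a shortest sequence of feasible formation
    switches from start to goal, or None if the goal is unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in mode_graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical graph built offline from dynamic and formation constraints.
graph = {"line": ["loose"], "loose": ["line", "diamond"], "diamond": ["loose"]}
print(feasible_formation_sequence(graph, "line", "diamond"))  # ['line', 'loose', 'diamond']
```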

91 89 Back Up Slides

92 90 Deconfliction between Layers  Each UAV is given a waypoint by the high-level planner  Shortest trajectories to the waypoints may lead to collisions  How can the trajectories for the UAVs be dynamically replanned subject to input saturation and state constraints?

93 91 (Nonlinear) Model Predictive Control  Find the control sequence that minimizes a finite-horizon cost  Common choice: …

94 92 Planning of Feasible Trajectories  State saturation  Collision avoidance  Magnitude of each cost element represents the priority of tasks/functionality, or the authority of layers

95 93 Hierarchy in Berkeley Platform [Block diagram, repeated from slide 3: Strategy Planner and Map Builder exchange desired agent actions and detections over the Communications Network with the per-vehicle Tactical Planner & Regulation and Vehicle-level sensor fusion (INS, GPS, vision, ultrasonic altimeter, encoders); control signals drive the UAV/UGV dynamics, which interact with terrain and targets under exogenous disturbances.]

96 94 Cooperative Path Planning & Control  Example: three UAVs (H0, H1, H2) are given straight-line trajectories that will lead to collision.  Constraints supported: |lin. vel.| < 16.7 ft/s, |ang| < pi/6 rad, |control inputs| < 1  NMPPC dynamically replans and tracks the safe trajectories of H1 and H2 under input/state constraints, with coordination based on priority. [Figure: trajectories followed by the 3 UAVs]

97 95 Unifying Trajectory Generation and Tracking Control  Nonlinear Model Predictive Planning & Control (NMPPC) combines trajectory planning and control into a single problem, using ideas from –Potential-field based navigation (real-time path planning) –Nonlinear model predictive control (optimal control of nonlinear multi-input, multi-output systems with input/state constraints)  We incorporate tracking performance, a potential function, and state constraints into the cost function to minimize, and use gradient descent for on-line optimization.  Removes feasibility issues by considering the UAV dynamics during trajectory planning  Robust to parameter uncertainties  Optimization can be done in real time
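A toy sketch of this combined cost and gradient-descent optimization, using a planar unicycle model in place of the UAV dynamics and a numerical gradient; all dynamics, weights, and limits here are illustrative assumptions, not the NMPPC implementation:

```python
import numpy as np

def rollout(x0, U, dt=0.1):
    """Stand-in dynamics: state (x, y, heading), controls (speed, turn rate)."""
    xs, x = [np.array(x0, float)], np.array(x0, float)
    for v, w in U:
        x = x + dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
        xs.append(x.copy())
    return np.array(xs)

def cost(U_flat, x0, ref, obstacles, v_max=16.7):
    """Tracking term + potential-field term (collision avoidance) + input-limit penalty."""
    U = U_flat.reshape(-1, 2)
    xs = rollout(x0, U)
    track = np.sum((xs[1:, :2] - ref) ** 2)
    potential = sum(np.sum(1.0 / (1e-3 + np.sum((xs[:, :2] - o) ** 2, axis=1)))
                    for o in obstacles)
    saturation = np.sum(np.maximum(np.abs(U[:, 0]) - v_max, 0.0) ** 2)
    return track + 0.5 * potential + 100.0 * saturation

def nmpc_step(x0, ref, obstacles, horizon=10, iters=50, lr=0.05, eps=1e-4):
    """Gradient descent with a finite-difference gradient on the control sequence;
    only the first control is applied, then the horizon recedes."""
    U = np.zeros(2 * horizon)
    for _ in range(iters):
        base = cost(U, x0, ref, obstacles)
        grad = np.zeros_like(U)
        for k in range(U.size):
            Up = U.copy()
            Up[k] += eps
            grad[k] = (cost(Up, x0, ref, obstacles) - base) / eps
        U -= lr * grad
    return U.reshape(-1, 2)[0]
```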

98 96 Modeling and Control of UAVs  A single, computationally tractable model cannot capture nonlinear UAV dynamics throughout the large flight envelope.  Real control systems are partially observed (noise, hidden variables).  It is impossible to have data for all parts of the high-dimensional state space. -> The model and control algorithm must be robust to unmodeled dynamics and noise and handle MIMO nonlinearity.  Observation: linear analysis and deterministic robust control techniques fail to do so.

