Slide 1: A Hierarchical Approach to Probabilistic Pursuit-Evasion Games with Unmanned Ground and Aerial Vehicles

Jin Kim, René Vidal, David Shim, Omid Shakernia, Shankar Sastry (UC Berkeley)
Slide 2: Outline

- Pursuit-evasion game scenario
- Previous work
- Hierarchical control architecture
- Implementation on ground/air vehicles
- Experiment/simulation platform
- Evaluation of game strategies: speed, sensing, intelligence
- Experimental and simulation results
- Conclusions and current research

I will begin the talk by describing the pursuit-evasion scenario and the previous work that has led up to our current contribution. Next I'll describe a hierarchical control architecture that we proposed to implement the PEG on real UAVs and UGVs. I'll briefly describe our fleet of robots and the control architecture running on them. I'll also explain a novel experiment/simulation platform that allows us to perform PEG experiments with real robots, pure-software simulations, and hardware-in-the-loop simulations of the PEG. This platform enabled us to evaluate the performance of different pursuit policies, and how the performance of the high-level strategies varies with the speed, intelligence, and sensing capabilities of the players in the game. Finally, I will present conclusions and directions for future research.

When we implemented the original pursuit strategies on real robots with nontrivial sensing (vision) and complex dynamics (a helicopter, nonholonomic robots, etc.), we found many theoretical issues that were not considered in the original theoretical formulation, which used a simplified discrete-jump model of the robots and a simplified sensing model.
Slide 3: Scenario

The scenario: an open outdoor field with unknown terrain and an unknown number of trees and other obstacles. A group of unmanned ground vehicles and a group of unmanned aerial vehicles can communicate with each other and form a team of pursuers. The pursuers' mission is to build a map of the environment and to capture a team of evaders. The evaders may actively try to avoid capture, for example by hiding behind obstacles or trees.

This is the way we solve the problem:
- Divide the game arena into a set of disjoint cells, i.e., an occupancy grid.
- Compute a probabilistic map of the arena, in which each cell has an associated probability of containing an evader or an obstacle.
- Given this probabilistic map, compute a pursuit strategy that guides the pursuers to the locations in the arena that maximize the probability of capturing an evader.
Slide 4: Probabilistic Map Building

- Measurement step: sensor model
- Prediction step: evader motion model
- Hespanha et al. [CDC '99, CDC '00]
- Optimal pursuit policies are computationally infeasible
- Greedy pursuit / random evader

A few more details on probabilistic map building on the occupancy grid. It is a recursive Bayesian approach, which builds up the probability of an evader or obstacle occupying each cell of the grid, based on possibly noisy measurements from the pursuers. Given the probabilistic map at the previous time instant:
- Use the measurements and the sensor model to compute the probabilities of evaders occupying each cell at the current time instant.
- Then, based on an evader motion model, predict the locations of the evaders at the next time instant.

Hespanha et al. [CDC '99] were the first to propose combining probabilistic map building with pursuit policies. That was abstract theoretical work, which considered a discrete world with discrete-jump robot dynamics and a highly simplified sensing model. It was shown that the optimal pursuit policy, which minimizes the expected capture time of the evaders, is in general infeasible to compute in real time. They instead proposed a greedy pursuit policy, in which each pursuer maximizes the probability of capturing an evader at the next time instant, and obtained very nice theoretical results for this greedy policy: the probability of a finite capture time is 1, and the expected capture time is finite. Hespanha et al. [CDC '00] extended this with a one-step Nash equilibrium game-theoretic approach.
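The two-step recursion described above (measurement update, then motion prediction) can be sketched in a few lines. This is an illustrative sketch, not the actual implementation: the sensor model, the random-walk motion model with its `move_prob` parameter, and the wrap-around boundary handling are all assumptions made for brevity.

```python
import numpy as np

def measurement_update(prior, detected, p_fp, p_fn):
    """Bayes measurement step on an occupancy grid.

    prior    : (H, W) array, P(evader in cell) before the measurement
    detected : (H, W) boolean array, True where a pursuer's sensor
               reported an evader
    p_fp     : probability of a false positive detection
    p_fn     : probability of a false negative (missed detection)
    """
    # Per-cell likelihoods P(measurement | occupied), P(measurement | empty)
    like_occ = np.where(detected, 1.0 - p_fn, p_fn)
    like_emp = np.where(detected, p_fp, 1.0 - p_fp)
    # Bayes rule, cell by cell
    return like_occ * prior / (like_occ * prior + like_emp * (1.0 - prior))

def prediction_step(post, move_prob=0.2):
    """Predict the next map under a random-walk evader motion model:
    with probability move_prob the evader hops to one of the four
    neighbouring cells, chosen uniformly. np.roll wraps at the grid
    edges (a toroidal simplification)."""
    stay = (1.0 - move_prob) * post
    hop = (move_prob / 4.0) * (
        np.roll(post, 1, 0) + np.roll(post, -1, 0) +
        np.roll(post, 1, 1) + np.roll(post, -1, 1))
    return stay + hop
```

With an uninformative prior of 0.5 everywhere and one reported detection, the detected cell's probability rises toward 1 - p_fn while unseen cells fall toward p_fn, and the prediction step then diffuses that mass to neighbouring cells.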
Slide 5: PEG on UAVs and UGVs

- Vidal et al. [ICRA '01]: hierarchical architecture, regulation-layer control
- Kim et al. [CDC '01]: high-level strategies, global-max pursuit, intelligent evader, pursuit-policy evaluation

All this theory is nice, but it is at a very abstract level, and many issues remain when implementing it on real mobile robots: the motion of the evader is discrete, the dynamics of the agents are not included, and the sensor model is oversimplified (false positives/negatives). When you implement on real systems, new theoretical issues appear: modeling different cameras, the continuous dynamics of the agents, and communication issues.

So, for the last year and a half our research thrust has been to implement PEGs on real UAVs and UGVs. In Vidal et al. [ICRA '01] we began to implement the PEG on mobile robots: we built the UAV/UGV test-bed, proposed a hierarchical approach, implemented low-level regulation on the UAVs and UGVs, and integrated the sensing elements (INS, GPS, computer vision, etc.). In Kim et al. [CDC '01] we implemented the high-level strategy planner, performed the full probabilistic PEG on real UAVs and UGVs, and evaluated the performance of the pursuit policies in terms of the dynamics of the mobile robots (speed, maneuverability, etc.) and the sensing capabilities of the pursuers (type of vision sensor, range, field of view, etc.).

In particular, we obtained some interesting results on the performance of pursuit policies with respect to different types of vision systems, results that agree with the vision systems we see in predators and prey in the animal kingdom: predators tend to have narrow-field-of-view, forward-looking eyes, while prey have wide-field-of-view, omnidirectional vision.
Slide 6: Hierarchical Architecture

[Architecture diagram: strategy planner and map builder at the top, exchanging pursuer/evader/obstacle positions; a communications network; per-vehicle tactical planner, trajectory regulation, and vehicle-level sensor fusion; agent dynamics driven by actuators and measured by encoders, INS, GPS, ultrasonic altimeter, and vision, subject to exogenous disturbances.]

The proposed hierarchical architecture for the PEG comes from the theory of hierarchical hybrid systems. This architecture has been successfully applied to controlling platoons of cars in automated highway systems, to conflict resolution in air traffic management systems, and to flight vehicle management systems for controlling UAVs. The idea is to partition a large and complex control problem into various layers of abstraction:
- Strategy planner / map builder: pursuit-policy computation
- Communication layer
- Tactical planner and sensor fusion: path planning, obstacle avoidance, position estimation of the evaders
- Regulation layer: real-time control, GPS, vision system
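The layered flow above can be sketched as a chain of small interfaces, where each layer only consumes the abstraction produced by the layer above it. This is a minimal illustrative sketch: the class and method names, the global-max cell choice, and the proportional controller are assumptions for exposition, not the interfaces of the actual system.

```python
import numpy as np

class StrategyPlanner:
    """Strategy layer: pick the grid cell a pursuer should head for
    (here, the most likely evader cell, as in a global-max policy)."""
    def desired_cell(self, prob_map):
        return np.unravel_index(np.argmax(prob_map), prob_map.shape)

class TacticalPlanner:
    """Tactical layer: turn a grid cell into a metric waypoint.
    Path planning and obstacle avoidance would also live here."""
    def waypoint(self, cell, cell_size=1.0):
        row, col = cell
        return (col * cell_size, row * cell_size)  # (x, y) in metres

class RegulationLayer:
    """Regulation layer: close the loop on the vehicle state. Here
    just a proportional velocity command toward the waypoint; the
    real layer tracks trajectories using INS/GPS state estimates."""
    def track(self, pose, waypoint, gain=0.5):
        return (gain * (waypoint[0] - pose[0]),
                gain * (waypoint[1] - pose[1]))
```

The payoff of this separation, as the talk emphasizes, is that the strategy layer never sees vehicle dynamics: swapping a UGV for a UAV only changes the regulation layer.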
Slide 7: Experiment/Simulation Platform

[Platform diagram: the strategic planner and map builder run in MATLAB/Simulink and connect to the UAV (navigation computer: real-time control, vision, communication, path planning, tactical planner) and to the UGV (robot controller, tactical planner).]

Now I'll describe the unified experiment/simulation platform we built to run the pursuit-evasion game.

Real-time control: at the lowest level of the hierarchy is the navigation and real-time control of our fleet of UAVs and UGVs. The real-time control of the UAV was part of the PhD work of David Shim. The UAV regulation layer provides services such as hover, lateral motion, and pirouette, and it manages the inertial navigation system and GPS so that the UAV knows its position precisely. The UGV is a Pioneer 2-AT robot, which comes with software for simple motion control; we have integrated a GPS system and compass similar to the UAV's.

Tactical planning and sensor fusion: the UAV and UGVs have integrated vision systems that they use for sensing the positions of obstacles and evaders. Currently the vision-based evader position estimation is based on color tracking, and the UGVs use simple sonar-based obstacle-avoidance behaviors.

Strategy and map building: these are implemented in MATLAB and Simulink and communicate with the UAVs and UGVs through TCP sockets. The strategy layer is sheltered from the details of the robot dynamics: we can apply a greedy policy designed in the abstract theory directly to any number of UAVs and UGVs.
Slide 8: Experiment/Simulation Platform (continued)

[Simulation diagram: the same strategic planner and map builder in MATLAB/Simulink connect to a UAV simulator (system-identified model, camera model, INS/GPS model) and a Pioneer simulator (robot model, camera model, dead reckoning).]

A further benefit is that we can use the same strategy planner and map builder in simulation, simply by replacing the actual robots with software models. We have a UAV model obtained by system identification (the same model used to build the real-time controller), and the Pioneers come with a simulation model that performs dead reckoning. We also built a simple camera model: we compute each pursuer's region of visibility from its position and its camera's field of view, and in simulation, if an evader or obstacle is within the region of visibility, it is detected subject to given probabilities of false positives and false negatives. This makes for a very flexible platform on which we can perform full hardware experiments, full software simulations, and any mixture of hardware-in-the-loop simulations.
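The simulated camera just described (a visibility region plus false-positive/false-negative noise) can be sketched as below. This is an illustrative sketch on the grid abstraction: the function names, the sector-shaped stand-in for the trapezoidal field of view, and the parameter choices are assumptions, not the model used in the actual simulator.

```python
import numpy as np

def visible_cells_forward(pose, heading, fov_deg, radius, grid_shape):
    """Cells inside a forward-looking sector approximating a narrow
    (trapezoidal) field of view. pose = (row, col); heading in radians."""
    rows, cols = np.indices(grid_shape)
    dr, dc = rows - pose[0], cols - pose[1]
    dist = np.hypot(dr, dc)
    ang = np.arctan2(dr, dc)
    # angular offset from the heading, wrapped to (-pi, pi]
    dang = np.abs(np.angle(np.exp(1j * (ang - heading))))
    return (dist <= radius) & (dang <= np.radians(fov_deg) / 2)

def simulate_detection(visible, true_occupancy, p_fp, p_fn, rng):
    """Noisy detections inside the visible region: an occupied cell is
    missed with probability p_fn, and an empty cell fires a false
    detection with probability p_fp."""
    u = rng.random(true_occupancy.shape)
    hit = true_occupancy & (u >= p_fn)       # true positives
    ghost = (~true_occupancy) & (u < p_fp)   # false positives
    return visible & (hit | ghost)
```

An omnidirectional sensor is the same model with `fov_deg=360` and a smaller radius chosen so that both regions cover the same number of cells, which is the comparison made in the experiments.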
Slide 9: PEG Experiment

- PEG with four UGVs
- Global-max pursuit policy
- Simulated camera view (radius 7.5 m with 50° field of view)
- Pursuer speed: 0.3 m/s; evader speed: 0.1 m/s
Slide 10: Pursuit Policy: Sensing, Intelligence, Speed

- Pursuit policy: greedy vs. global-max
- Visibility region: forward view vs. omnidirectional view
- Evasion policy: random vs. global-min
- Evader speed

We evaluated the policies against different vision capabilities: a trapezoidal (narrow-FOV) region vs. an omnidirectional (wide-FOV) region, with both vision systems covering the same number of cells. A narrow field of view can see farther into the distance and can sweep a larger area simply by rotating. An omnidirectional (wide-angle) system can see in all directions, but rotation does not help it see more.
Slide 11: Pursuit Policy vs. Vision System

Why does global-max outperform greedy? Before the real implementation, both policies performed about the same; with the real dynamics and sensing, global-max is roughly 3 times better. The reason: the greedy policy changes direction more often, so the pursuers spend more time rotating, which effectively reduces their translational speed (in practice you cannot rotate and move at full speed at the same time). The global-max goal changes less frequently, so the pursuers spend less time changing direction and are effectively faster.

Why is the trapezoidal view better than the omnidirectional one? The omnidirectional and trapezoidal cameras cover the same number of cells, but with the trapezoidal view the pursuers can effectively see a larger area by rotating in place, and they see farther into the distance. This agrees with the predator/prey situations we find in nature.
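The behavioral difference between the two policies comes through even in a one-line-per-policy grid sketch: greedy re-optimizes over the immediate neighbourhood every step, while global-max commits to a distant goal. This is an illustrative sketch on the grid abstraction (function names and the 4-connected motion model are assumptions), not the code run on the vehicles.

```python
import numpy as np

def greedy_move(prob_map, pos):
    """Greedy policy: step to the neighbouring cell (or stay put) with
    the highest evader probability. This maximizes the capture chance
    at the very next instant, but the chosen direction can flip every
    step, which is what costs rotation time on a real vehicle."""
    best, best_p = pos, prob_map[pos]
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = pos[0] + dr, pos[1] + dc
        if 0 <= r < prob_map.shape[0] and 0 <= c < prob_map.shape[1]:
            if prob_map[r, c] > best_p:
                best, best_p = (r, c), prob_map[r, c]
    return best

def global_max_move(prob_map, pos):
    """Global-max policy: take one step toward the globally most
    likely cell. The goal changes rarely, so the pursuer spends less
    time turning and is effectively faster."""
    goal = np.unravel_index(np.argmax(prob_map), prob_map.shape)
    return tuple(p + np.sign(g - p) for p, g in zip(pos, goal))
```

On a map with a modest probability bump nearby and a large peak far away, greedy heads for the bump while global-max heads for the peak, which is exactly the divergence the experiments measured.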
Slide 12: Evader Speed vs. Policy

Next we took the best-performing pursuit policy (global-max) and the best vision system (forward view), and studied how the speed and intelligence of the evader affect capture time. An intelligent evader also builds a probabilistic map of the pursuers and follows a global-min policy: it moves toward the location in the map with the minimum probability of being captured. We kept the pursuer speed constant at 0.3 m/s and ran experiments with slow (0.1 m/s) and fast (0.5 m/s) evaders.

It is intuitive that a fast intelligent evader takes longer to capture than a slow intelligent evader. What is interesting to notice is that a fast random evader takes less time to capture than a slow random evader. This was actually predicted in the original work of Hespanha: in the extreme case of an evader that stays in place, the probability of capture in finite time is less than 1. One important point is that the total capture time is a combination of exploration time, during which the pursuers build a map of the environment, and pursuit time. There should be a U-shaped curve with an optimal speed.
Slide 13: PEG: 4 UGVs and 1 UAV
Slide 14: Conclusions and Current Research

Conclusions:
- A hierarchical architecture was applied to control multiple agents in a pursuit-evasion scenario.
- Strategies were evaluated against speed, sensing, and intelligence.
- Global-max outperforms greedy in a real scenario.
- Forward-view vision outperforms omnidirectional vision, which agrees with biological predator/prey vision systems.

Current research:
- Multi-body structure from motion for pursuit-evasion games [submitted to IFAC '02]
- Collision avoidance and UAV path planning
- Monte Carlo based learning of pursuit policies

In practice, color tracking is not feasible; we need a vision system that can identify and track each evader using features other than color. There is a whole body of computer vision literature for a single moving object, and we are currently generalizing that theory to multiple moving objects.
Slide 15: The End