Probabilistic Pursuit-Evasion Games with UGVs and UAVs


1 Probabilistic Pursuit-Evasion Games with UGVs and UAVs
René Vidal, C. Sharp, D. Shim, O. Shakernia, J. Hespanha, J. Kim, S. Rashid, S. Sastry
University of California at Berkeley, 04/05/01

2 Outline
Introduction
Pursuit-Evasion Games: Map Building, Pursuit Policies
Hierarchical Control Architecture: Strategic Planner, Tactical Planner, Regulation, Sensing, Control System, Agent and Communication Architectures
Architecture Implementation
Tactical Layer: UGVs, UAVs, Hardware, Software, Sensor Fusion
Strategic Layer: Map Building, Pursuit Policies, Visual Interface
Experimental Results: Evaluation of Pursuit Policies, Pursuit-Evasion Games with UGVs and UAVs
Conclusions and Current Research

3 Introduction: The Pursuit-Evasion Scenario
Evade!

4 Introduction: Theoretical Issues
Probabilistic map building
Coordinated multi-agent operation
Networking and intelligent data sharing
Path planning
Identification of vehicle dynamics and control
Sensor integration
Vision system

5 Pursuit-Evasion Games
Approach of Hespanha, Kim and Sastry:
Multiple pursuers catch a single evader
Pursuers can only move to adjacent empty cells
Pursuers have perfect knowledge of their current location
Sensor model: false positives (p) and false negatives (q) for evader detection
Evader moves randomly to adjacent cells
Extensions by Rashid and Kim:
Multiple evaders, each recognized individually
Supervisory agents: can "fly" over obstacles and evaders, but cannot capture
Sensor model extended to obstacle detection as well

6 Map Building: Map of Obstacles
Sensor model: p = probability of a false positive, q = probability of a false negative
For a map M, with M(x,y,t) the probability that cell (x,y) contains an obstacle at time t:
If the sensor makes a positive reading:
M(x,y,t) = (1-q)*M(x,y,t-1) / ((1-q)*M(x,y,t-1) + p*(1-M(x,y,t-1)))
If the sensor makes a negative reading:
M(x,y,t) = q*M(x,y,t-1) / (q*M(x,y,t-1) + (1-p)*(1-M(x,y,t-1)))
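A minimal sketch of this per-cell Bayes update; the function name and map representation are illustrative, not part of the original system:

```python
def update_obstacle_cell(prior, positive_reading, p, q):
    """Bayes update of one cell's obstacle probability from a single reading.

    prior            : M(x,y,t-1), probability that the cell holds an obstacle
    positive_reading : True if the sensor reported an obstacle in the cell
    p, q             : false-positive and false-negative probabilities
    """
    if positive_reading:
        num = (1.0 - q) * prior            # P(positive | obstacle) * prior
        den = num + p * (1.0 - prior)      # + P(positive | empty) * (1 - prior)
    else:
        num = q * prior                    # P(negative | obstacle) * prior
        den = num + (1.0 - p) * (1.0 - prior)
    return num / den if den > 0 else prior   # M(x,y,t)
```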

7 Map Building: Map of Evaders
At each time t:
1. Measurement step: incorporate the observation y(t) = {v(t), e(t), o(t)} (pursuer positions, evader detections, obstacle detections) using the sensor model
2. Prediction step: propagate the evader map using a model of the evader's motion
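A hedged sketch of the prediction step under the random-walk evader model; the grid representation and neighbor function are assumptions for illustration, and the measurement step reuses a Bayes update like the one on the previous slide:

```python
def predict_evader_map(emap, neighbors):
    """Spread each cell's evader probability over the cells the evader could move to.

    emap      : dict mapping (x, y) cells to P(evader in cell) at time t-1
    neighbors : function returning the cells reachable from a cell in one step
                (adjacent free cells, possibly including the cell itself)
    """
    predicted = {cell: 0.0 for cell in emap}
    for cell, prob in emap.items():
        reachable = neighbors(cell)
        share = prob / len(reachable)            # uniform random-walk motion model
        for target in reachable:
            predicted[target] = predicted.get(target, 0.0) + share
    return predicted
```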

8 Pursuit Policies
Greedy Policy
Pursuer moves to the adjacent cell with the highest probability of containing an evader, over all evader maps
Strategic planner assigns more importance to local measurements
Global-Max Policy
Pursuer moves toward the cell with the highest probability of containing an evader over the whole map
May not take advantage of multiple pursuers (they may all move to the same place)
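The two policies can be contrasted with a short sketch; the map representation and the `adjacent` and `next_step_toward` helpers are assumptions for illustration:

```python
def greedy_move(pursuer_cell, emap, adjacent):
    """Greedy: move to the adjacent cell with the highest evader probability."""
    return max(adjacent(pursuer_cell), key=lambda c: emap.get(c, 0.0))

def global_max_move(pursuer_cell, emap, next_step_toward):
    """Global-Max: take one step toward the most likely evader cell in the whole map."""
    target = max(emap, key=emap.get)
    return next_step_toward(pursuer_cell, target)   # e.g. one step along a shortest path
```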

9 Pursuit Policies Theorem 1 (Hespanha, Kim, Sastry):
For the greedy policy:
The probability that the capture time is finite is equal to one
The expected value of the capture time is finite
Theorem 2 (Hespanha, Kim, Sastry): For the stay-in-place policy:
The expected capture time increases as the speed of the evader decreases
If the speed of the evader is zero, the probability that the capture time is finite is less than one

10 Hierarchical System Architecture
[Block diagram: the map builder fuses detected evaders, obstacles, and pursuer positions and passes the positions of evaders, obstacles, and pursuers to the strategy planner over the communications network; the strategy planner sends desired pursuer positions to each agent's tactical planner and regulation layer; vehicle-level sensor fusion combines actuator encoders, INS, GPS, ultrasonic altimeter, and vision to report the state of the helicopter, height over terrain, and detected obstacles and evaders; the regulation layer sends control signals to the agent dynamics, which are subject to exogenous disturbances from the terrain and the evader.]

11 Agent Architecture
Segments the control of each agent into different layers of abstraction, so the same high-level control strategies can be applied to all agents
Strategic Planner: mission planning, high-level control, communication
Tactical Planner: trajectory planning, obstacle avoidance, regulation
Regulation: low-level control and sensing

12 Communication Architecture
Map building and the strategic planner can be:
Centralized: one agent receives all sensor information, builds the map, and broadcasts it
Decentralized: each agent builds its own map and shares its readings with the rest of the team
The communication network can be:
Perfect: no packet loss, no transmission time, no network delay; all pursuers have identical maps
Imperfect: each agent updates its map and makes decisions with the information available to it
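A rough sketch of the decentralized option, assuming each agent broadcasts its raw readings and folds whatever it has received into its own map (names are illustrative):

```python
def decentralized_map_update(local_map, own_readings, received_readings, apply_reading):
    """Each agent keeps its own map, built from its readings plus those it has received."""
    for reading in list(own_readings) + list(received_readings):
        local_map = apply_reading(local_map, reading)   # e.g. the Bayes update of slide 6
    return local_map
```

Under an imperfect network some broadcast readings never arrive, so the maps held by different agents can diverge.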

13 Architecture Implementation: Part I
Common platform for UGVs and UAVs
On-board computer: tactical planner and sensor fusion
GPS: positioning
Vision system: obstacle and evader detection
WaveLAN and Ayllu: communication
Specific UGV platform
Pioneer robot: sonars, dead reckoning, compass
Micro-controller: regulation and low-level control
Saphira or Ayllu: tactical planning
Specific UAV platform
Yamaha R-50: INS, ultrasonic sensors, inertial sensors, compass
Navigation computer: regulation and low-level control (David Shim's control system)

14 Vision System: PTZ & ACTS
Hardware
On-board computer running Linux
Sony pan/tilt/zoom (PTZ) camera
PXC200 frame grabber
Camera control software in Linux
Send PTZ commands
Receive camera state
ACTS system
Captures and processes video: 32 color channels, 10 blobs per channel
Extracts color-blob information and sends it to a TCP socket: number of blobs, size and position of each blob
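A hedged sketch of a client reading blob data from such a TCP socket; the port number and the per-blob record layout here are placeholders for illustration, not the actual ACTS protocol:

```python
import socket
import struct

def recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def read_blobs(host="127.0.0.1", port=5001):
    """Read one frame's worth of blob data from the color-tracking server."""
    with socket.create_connection((host, port)) as sock:
        (num_blobs,) = struct.unpack("!I", recv_exact(sock, 4))          # assumed: blob count
        blobs = []
        for _ in range(num_blobs):
            x, y, size = struct.unpack("!III", recv_exact(sock, 12))     # assumed per-blob record
            blobs.append({"x": x, "y": y, "size": size})
        return blobs
```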

15 Visual based position estimation
Inputs to the estimator:
Motion model and image model
Camera position and orientation: helicopter orientation relative to the ground, camera orientation relative to the helicopter
Camera calibration: width, height, zoom
Output: robot position estimate
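A minimal sketch of how such an estimate can be formed, assuming a pinhole camera model, a flat ground plane, and a known camera pose from GPS/INS and the pan/tilt mount; all symbols here are illustrative:

```python
import numpy as np

def estimate_robot_position(pixel, K, R_cw, cam_center):
    """Intersect the camera ray through a detected blob with the ground plane z = 0.

    pixel      : (u, v) image coordinates of the detected robot
    K          : 3x3 camera intrinsics (from the calibration: width, height, zoom)
    R_cw       : camera-to-world rotation (helicopter attitude composed with pan/tilt)
    cam_center : camera center in world coordinates (numpy array of length 3)
    """
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_world = R_cw @ ray_cam                    # viewing ray expressed in the world frame
    scale = -cam_center[2] / ray_world[2]         # distance along the ray to the plane z = 0
    return cam_center + scale * ray_world         # (x, y, 0): robot position estimate
```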

16 Communication Hardware
Lucent WaveLAN wireless cards (11 Mbps)
Network is set up in ad-hoc mode
TCP/IP sockets
TBRPF: SRI mobile routing scheme
Ayllu
Set of behaviors for distributed control of multiple mobile robots
Messages can be passed among behaviors
The output of a behavior can be connected to a local or remote input of another behavior

17 Pioneer Ground Robots
Hardware
Micro-controller: motion control
On-board computer: communication, video processing, camera control
Sensors
Sonars: obstacle avoidance, map building
GPS & compass: positioning
Video camera: map building, navigation, tracking
Communication
Serial WaveLAN: communication between robots and base station
Radio modem: GPS communication

18 Yamaha Aerial Robots
Yamaha R-50 helicopter
Navigation computer
Pentium 233 running QNX
Low-level control and sensing: GPS, INS
UAV controller: David Shim's controller, Vehicle Control Language
Vision computer
Serial communication to receive the state of the helicopter
We do not send commands yet

19 Architecture Implementation: Part II
[Block diagram of the strategic layer and the agents:
Strategic planner: map building, pursuit policies, communication; runs in Simulink and is the same for simulation and experiments; connected to the agents via TCP/IP
UAV pursuer: navigation computer (helicopter control, GPS position, INS orientation) linked by serial to the vision computer (camera control, color tracking, UGV position estimation, communication)
UGV pursuers and UGV evader: robot micro-controller (robot control, dead-reckoning position, compass heading) linked by serial to the robot computer (camera control, color tracking, GPS position, communication)]

20 Pursuit-Evasion Game Experiment using Simulink
PEG with four UGVs
Global-Max pursuit policy
Simulated camera view: radius 7.5 m with a 50-degree conic field of view
Pursuer speed: 0.3 m/s, evader speed: 0.1 m/s
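A small sketch of the simulated sensor footprint described above; the helper name and the angle wrap-around handling are illustrative:

```python
import math

def cell_visible(pursuer_xy, heading_rad, cell_xy, radius=7.5, fov_deg=50.0):
    """True if a cell lies within the simulated camera's range and conic field of view."""
    dx = cell_xy[0] - pursuer_xy[0]
    dy = cell_xy[1] - pursuer_xy[1]
    if math.hypot(dx, dy) > radius:
        return False
    bearing = math.atan2(dy, dx) - heading_rad
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi]
    return abs(bearing) <= math.radians(fov_deg) / 2.0
```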

21 Experimental Results: Evaluation of Policies

22 Experimental Results: Evaluation of Policies

23 Experimental Results: Pursuit-Evasion Games with 1 UAV and 2 UGVs (Summer '00)

24 Experimental Results: Pursuit-Evasion Games with 4 UGVs and 1 UAV (Spring '01)

25 Experimental Results: Pursuit-Evasion Games with 4 UGVs and 1 UAV (Spring '01)

26 Conclusions and Current Research
The proposed architecture has been successfully applied to the control of multiple agents in the pursuit-evasion scenario
Experimental results confirm the theoretical results
Global-Max outperforms greedy in a real scenario and is robust to changes in the evader's motion
What's missing: the vision computer controlling the helicopter
Current research
Collision avoidance and UAV path planning
Monte Carlo based learning of pursuit policies
Communication

