Probabilistic Pursuit-Evasion Games with UGVs and UAVs

Presentation transcript:

Probabilistic Pursuit-Evasion Games with UGVs and UAVs
René Vidal
C. Sharp, D. Shim, O. Shakernia, J. Hespanha, J. Kim, S. Rashid, S. Sastry
University of California at Berkeley
04/05/01

Outline
- Introduction
- Pursuit Evasion Games
- Map Building
- Pursuit Policies
- Hierarchical Control Architecture: Strategic Planner, Tactical Planner, Regulation, Sensing; Control System, Agent and Communication Architectures
- Architecture Implementation
  - Tactical Layer: UGVs, UAVs, Hardware, Software, Sensor Fusion
  - Strategic Layer: Map Building, Pursuit Policies, Visual Interface
- Experimental Results
  - Evaluation of Pursuit Policies
  - Pursuit Evasion Games with UGVs and UAVs
- Conclusions and Current Research

Introduction: The Pursuit-Evasion Scenario
(Scenario illustration: "Evade!")

Introduction: Theoretical Issues
- Probabilistic map building
- Coordinated multi-agent operation
- Networking and intelligent data sharing
- Path planning
- Identification of vehicle dynamics and control
- Sensor integration
- Vision system

Pursuit-Evasion Games
Consider the approach of Hespanha, Kim and Sastry:
- Multiple pursuers catching one single evader
- Pursuers can only move to adjacent empty cells
- Pursuers have perfect knowledge of their current location
- Sensor model: false positives (p) and false negatives (q) for evader detection
- Evader moves randomly to adjacent cells (a minimal sketch of this motion model follows below)
Extensions in Rashid and Kim:
- Multiple evaders, each one recognized individually
- Supervisory agents: can "fly" over obstacles and evaders, but cannot capture
- Sensor model for obstacle detection as well
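The evader motion referred to above is a random walk over free grid cells; a minimal Python sketch under that assumption (an illustration, not the authors' implementation):

```python
import random

# Toy version of the evader's randomized motion: the grid is a set of free
# cells and the evader moves uniformly at random to an adjacent free cell
# (staying put if none is free). The 5x5 grid and single obstacle below are
# arbitrary example values.

def adjacent_free_cells(cell, free_cells):
    x, y = cell
    neighbors = [(x + dx, y + dy)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)]
    return [c for c in neighbors if c in free_cells]

def evader_step(cell, free_cells):
    options = adjacent_free_cells(cell, free_cells)
    return random.choice(options) if options else cell

free_cells = {(x, y) for x in range(5) for y in range(5)} - {(2, 2)}
print(evader_step((1, 1), free_cells))
```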

Map Building: Map of Obstacles
Sensor model: p = probability of a false positive, q = probability of a false negative.
For a map M (the update is sketched in code below):
If the sensor makes a positive reading:
  M(x,y,t) = (1-q)*M(x,y,t-1) / [(1-q)*M(x,y,t-1) + p*(1-M(x,y,t-1))]
If the sensor makes a negative reading:
  M(x,y,t) = q*M(x,y,t-1) / [q*M(x,y,t-1) + (1-p)*(1-M(x,y,t-1))]
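The update above is a per-cell Bayes rule; a minimal sketch, assuming p and q are known constants and M stores one obstacle probability per cell:

```python
# Per-cell Bayes update for the obstacle map (a sketch of the formulas above,
# not the authors' code). p = probability of a false positive,
# q = probability of a false negative.

def update_obstacle_cell(prior, positive_reading, p, q):
    """Return the posterior obstacle probability of one cell after a reading."""
    if positive_reading:
        # P(positive | obstacle) = 1 - q,  P(positive | no obstacle) = p
        num = (1.0 - q) * prior
        den = (1.0 - q) * prior + p * (1.0 - prior)
    else:
        # P(negative | obstacle) = q,  P(negative | no obstacle) = 1 - p
        num = q * prior
        den = q * prior + (1.0 - p) * (1.0 - prior)
    return num / den

# Example: a cell starting at 0.5 after one positive reading with p=0.1, q=0.2
print(update_obstacle_cell(0.5, True, p=0.1, q=0.2))   # about 0.89
```

A positive reading pushes a cell towards 1 and a negative reading towards 0, at a rate set by how reliable the sensor is (small p and q).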

Map Building: Map of Evaders
At each time t:
1. Measurement step: update the map with the sensor reading y(t) = {v(t), e(t), o(t)} using the sensor model
2. Prediction step: propagate the map using the model for the evader's motion
(Both steps are sketched in code below.)
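A hedged sketch of the two-step recursion, assuming the same detection probabilities p and q as for the obstacle map, a single evader (so the map is renormalized to sum to one), and a uniform random-walk motion model; the argument names and the 3x3 diffusion kernel are illustrative assumptions:

```python
import numpy as np

def measurement_step(evader_map, visible, detected, p, q):
    """Bayes-update the cells currently in view; `visible` and `detected`
    are boolean arrays over the grid."""
    m = evader_map.copy()
    pos = visible & detected
    neg = visible & ~detected
    m[pos] = (1 - q) * m[pos] / ((1 - q) * m[pos] + p * (1 - m[pos]))
    m[neg] = q * m[neg] / (q * m[neg] + (1 - p) * (1 - m[neg]))
    return m / m.sum()            # one evader somewhere: renormalize

def prediction_step(evader_map):
    """Spread each cell's probability uniformly over itself and its 8
    neighbours (random-walk evader model), then renormalize."""
    h, w = evader_map.shape
    padded = np.pad(evader_map, 1)
    spread = sum(padded[1 + dx:1 + dx + h, 1 + dy:1 + dy + w]
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)) / 9.0
    return spread / spread.sum()
```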

Pursuit Policies
Greedy Policy
- Pursuer moves to the adjacent cell with the highest probability of having an evader over all maps
- The strategic planner assigns more importance to local measurements
Global-Max Policy
- Pursuer moves towards the cell with the highest probability of having an evader in the whole map
- May not take advantage of multiple pursuers (they may all move to the same place)
(Both policies are sketched in code below.)
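A minimal sketch contrasting the two policies on a shared probability map (illustrative, not the fielded planner):

```python
import numpy as np

def greedy_move(evader_map, pos):
    """Greedy: step to the adjacent cell with the highest evader probability."""
    x, y = pos
    candidates = [(x + dx, y + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)
                  and 0 <= x + dx < evader_map.shape[0]
                  and 0 <= y + dy < evader_map.shape[1]]
    return max(candidates, key=lambda c: evader_map[c])

def global_max_goal(evader_map):
    """Global-Max: head toward the most likely evader cell in the whole map."""
    return np.unravel_index(np.argmax(evader_map), evader_map.shape)

evader_map = np.random.rand(10, 10)
evader_map /= evader_map.sum()
print(greedy_move(evader_map, (5, 5)), global_max_goal(evader_map))
```

Because `global_max_goal` ignores the other pursuers, several pursuers fed the same map may converge on the same cell, which is the coordination weakness noted above.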

Pursuit Policies
Theorem 1 (Hespanha, Kim, Sastry): For the greedy policy,
- The probability that the capture time is finite is equal to one
- The expected value of the capture time is finite
Theorem 2 (Hespanha, Kim, Sastry): For a stay-in-place policy,
- The expected capture time increases as the speed of the evader decreases
- If the speed of the evader is zero, then the probability that the capture time is finite is less than one

Hierarchical System Architecture
(Block diagram.) The strategic planner and map builder exchange the positions of evaders, obstacles and pursuers over the communications network and send desired pursuer positions to each vehicle's tactical planner and trajectory regulation. Vehicle-level sensor fusion combines actuator encoders, INS, GPS, the ultrasonic altimeter and vision to report the vehicle state, height over terrain, and detected obstacles and evaders, while the agent dynamics are driven by the control signals and by exogenous disturbances (terrain, evader).

Agent Architecture
Segments the control of each agent into different layers of abstraction. The same high-level control strategies can be applied to all agents.
- Strategic Planner: mission planning, high-level control, communication
- Tactical Planner: trajectory planning, obstacle avoidance, regulation
- Regulation: low-level control and sensing

Communication Architecture
Map building and the strategic planner can be:
- Centralized: one agent receives all sensor information, then builds and broadcasts the map
- Decentralized: each agent builds its own map and shares its readings with the rest of the team (sketched below)
The communication network can be:
- Perfect: no packet loss, no transmission time, no network delay; all pursuers then hold identical maps
- Imperfect: each agent updates its map and makes decisions with whatever information is available to it
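A small sketch of the decentralized option, reusing `update_obstacle_cell` from the obstacle-map sketch above; the reading format (cell, positive/negative) is an illustrative assumption:

```python
class DecentralizedMapper:
    """Each agent keeps its own map and folds in local readings plus whatever
    readings arrive over the (possibly lossy) network."""

    def __init__(self, width, height, p, q, prior=0.5):
        self.map = {(x, y): prior for x in range(width) for y in range(height)}
        self.p, self.q = p, q

    def apply_reading(self, cell, positive):
        self.map[cell] = update_obstacle_cell(self.map[cell], positive,
                                              self.p, self.q)

    def step(self, local_readings, received_readings):
        # With a perfect network every agent sees every reading and all maps
        # stay identical; with packet loss `received_readings` is only a
        # subset, so maps (and hence decisions) can diverge across agents.
        for cell, positive in local_readings + received_readings:
            self.apply_reading(cell, positive)
```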

Architecture Implementation: Part I
Common platform for UGVs and UAVs:
- Onboard computer: tactical planner and sensor fusion
- GPS: positioning
- Vision system: obstacle and evader detection
- WaveLAN and Ayllu: communication
UGV-specific platform:
- Pioneer robot: sonars, dead reckoning, compass
- Micro-controller: regulation and low-level control
- Saphira or Ayllu: tactical planning
UAV-specific platform:
- Yamaha R-50: INS, ultrasonic sensors, inertial sensors, compass
- Navigation computer: regulation and low-level control (David Shim's control system)

Vision System: PTZ & ACTS
Hardware:
- Onboard computer running Linux
- Sony pan/tilt/zoom camera
- PXC200 frame grabber
Camera control software (Linux):
- Sends PTZ commands
- Receives the camera state
ACTS system:
- Captures and processes video: 32 color channels, 10 blobs per channel
- Extracts color information and sends it to a TCP socket: number of blobs, size and position of each blob (a hedged client sketch follows below)
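A hedged sketch of a client pulling blob records from the tracking server over TCP; the host, port, and packed record layout below are hypothetical placeholders, not ACTS's actual wire format (which should be taken from its documentation):

```python
import socket
import struct

HOST, PORT = "127.0.0.1", 5001     # assumed address of the color-tracking server

def read_blobs(sock, num_blobs):
    """Read `num_blobs` records of (channel, size, cx, cy), each packed as
    four big-endian uint16 values (hypothetical format)."""
    record = struct.Struct("!4H")
    data = sock.recv(record.size * num_blobs, socket.MSG_WAITALL)
    return [record.unpack_from(data, i * record.size) for i in range(num_blobs)]

with socket.create_connection((HOST, PORT)) as sock:
    for channel, size, cx, cy in read_blobs(sock, num_blobs=10):
        print(f"channel {channel}: blob of {size} px at ({cx}, {cy})")
```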

Vision-Based Position Estimation
Combines a motion model and an image model, using:
- Camera position and orientation: helicopter orientation relative to the ground, camera orientation relative to the helicopter
- Camera calibration: width, height, zoom
to produce the robot position estimate (the geometry is sketched below).
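The core of the geometry is back-projecting the tracked pixel through the calibrated camera and intersecting the ray with a flat ground plane; a hedged sketch, where the intrinsics K, the world-from-camera rotation R (composed from the helicopter attitude and the pan/tilt angles), and the camera position c stand in for the quantities listed above:

```python
import numpy as np

def estimate_ground_position(pixel, K, R, c):
    """Return the (x, y) world point where the pixel's viewing ray meets z = 0."""
    u, v = pixel
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction, camera frame
    ray_world = R @ ray_cam                              # ray direction, world frame
    t = -c[2] / ray_world[2]                             # scale to reach the ground plane
    return (c + t * ray_world)[:2]

# Illustrative numbers only: a nadir-pointing camera 30 m above the ground.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])
c = np.array([0.0, 0.0, 30.0])
print(estimate_ground_position((400, 300), K, R, c))     # ~ (3.0, -2.25)
```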

Communication
Hardware:
- Lucent WaveLAN wireless cards (11 Mbps)
- Network set up in ad-hoc mode
- TCP/IP sockets
- TBTRF: SRI's mobile routing scheme
Ayllu:
- A set of behaviors for distributed control of multiple mobile robots
- Messages can be passed among behaviors
- The output of a behavior can be connected to a local or remote input of another behavior

Pioneer Ground Robots
Hardware:
- Micro-controller: motion control
- Onboard computer: communication, video processing, camera control
Sensors:
- Sonars: obstacle avoidance, map building
- GPS and compass: positioning
- Video camera: map building, navigation, tracking
Communication:
- Serial
- WaveLAN: communication between robots and base station
- Radio modem: GPS communication

Yamaha Aerial Robots
Yamaha R-50 helicopter
Navigation computer:
- Pentium 233 running QNX
- Low-level control and sensing: GPS, INS
- UAV controller: David Shim's controller, Vehicle Control Language
Vision computer:
- Serial communication to receive the state of the helicopter
- We do not send commands yet

Architecture Implementation: Part II
(Block diagram.) The strategic planner (map building, pursuit policies, communication) runs in Simulink and is the same for simulation and experiments. It communicates over TCP/IP with the UAV pursuer, whose navigation computer (helicopter control, GPS for position, INS for orientation) is linked by serial to the vision computer (camera control, color tracking, UGV position estimation, communication), and with the UGV pursuers and UGV evader, whose robot micro-controller (robot control, dead reckoning for position, compass for heading) connects by serial to the robot computer (camera control, color tracking, GPS for position, communication).

Pursuit-Evasion Game Experiment using Simulink
- PEG with four UGVs
- Global-Max pursuit policy
- Simulated camera view: radius 7.5 m with a 50-degree conic field of view (detection test sketched below)
- Pursuer speed 0.3 m/s, evader speed 0.1 m/s
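A small sketch of the simulated sensor footprint quoted above: a detection test for a sector of radius 7.5 m and 50-degree opening centred on the pursuer's heading (an illustration of the stated numbers, not the actual Simulink block):

```python
import math

RADIUS = 7.5                            # metres
HALF_ANGLE = math.radians(50.0 / 2.0)   # half of the 50-degree cone

def in_camera_view(pursuer_xy, pursuer_heading, target_xy):
    """True if the target lies inside the pursuer's conic camera footprint."""
    dx = target_xy[0] - pursuer_xy[0]
    dy = target_xy[1] - pursuer_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - pursuer_heading
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))   # wrap to [-pi, pi]
    return distance <= RADIUS and abs(bearing) <= HALF_ANGLE

print(in_camera_view((0.0, 0.0), 0.0, (5.0, 1.0)))   # True: ~5.1 m away, ~11 deg off-axis
```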

Experimental Results: Evaluation of Policies

Experimental Results: Evaluation of Policies

Experimental Results: Pursuit Evasion Games with 1 UAV and 2 UGVs (Summer '00)

Experimental Results: Pursuit Evasion Games with 4 UGVs and 1 UAV (Spring '01)

Experimental Results: Pursuit Evasion Games with 4 UGVs and 1 UAV (Spring '01), continued

Conclusions and Current Research
Conclusions:
- The proposed architecture has been successfully applied to the control of multiple agents in the pursuit-evasion scenario
- Experimental results confirm the theoretical results: Global-Max outperforms Greedy in a real scenario and is robust to changes in evader motion
- What's missing: the vision computer does not yet control the helicopter
Current research:
- Collision avoidance and UAV path planning
- Monte Carlo-based learning of pursuit policies
- Communication