Mission Plan Recognition: Developing Smart Automated Opposing Forces for Battlefield Simulations

Presentation transcript:

Slide 1: Mission Plan Recognition: Developing Smart Automated Opposing Forces for Battlefield Simulations and Intelligence Analyses
Georgiy Levchuk, PhD (Aptima); Darby Grande, PhD (Aptima); Krishna Pattipati, PhD (UCONN); Yuri Levchuk, PhD (Aptima); Alex Kott, PhD (DARPA)
Presented at the 2008 CCRTS
Woburn, MA ▪ Washington, DC. © 2008, Aptima, Inc.

Slide 2: Overview
- The problem
- Solution concept
- Model description
- Use case and simulation results
- Future plans

Slide 3: The Problem
- Adversaries constantly adapt to the actions of U.S. forces
  - Adaptation needs to be embedded in predictions of adversarial behavior
- Current training is standardized, slow, and mismatched to the asymmetric requirements of current wars and conflicts
  - Current models of OPFOR in training simulations are hard to construct and are not adaptive

Slide 4: Proposed Solution
Solution
- Develop OPFOR agents that can intelligently learn what BLUEFOR is doing and adapt their behaviors
Benefits
- Better prediction of an adaptive enemy
- Faster and more efficient training
- Collaborative war-gaming and mission rehearsal
- Use of the same models for developing BLUE decision alternatives

Slide 5: The "Big Picture"
- Phase I effort:
  - Models for all components
  - BLUEFOR ID prototype
  - OPFOR elementary actions based on partial BLUEFOR vulnerability assessment
- Phase II effort:
  - Integrate intelligence collection planning and action planning with identification and vulnerability assessment
  - Integrate with the C2 virtual environment (RealWorld)
[Figure: closed-loop architecture. The C2 simulator (RealWorld) produces events observed by OPFOR; OPFOR intelligence collection planning feeds BLUEFOR ID and BLUEFOR vulnerability assessment, which produce BLUEFOR predictions and high-value targets; these drive OPFOR action planning and OPFOR action execution back into the simulation.]
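A minimal sketch of how the components on this slide could compose into an OPFOR agent loop. The class and method names are illustrative assumptions; the briefing does not specify an implementation or API.

```python
# Illustrative sketch of the closed-loop OPFOR agent described on this slide.
# All names are hypothetical; the paper does not define an interface.

class OpforAgent:
    def __init__(self, mission_library):
        self.mission_library = mission_library   # known BLUEFOR mission templates
        self.observations = []                    # events detected by RED sensors

    def step(self, new_events):
        """One pass through the collect -> identify -> assess -> plan -> act loop."""
        self.observations.extend(new_events)                       # OPFOR intelligence collection
        mission, state = self.identify_bluefor(self.observations)  # BLUEFOR ID (focus of this paper)
        targets = self.assess_vulnerabilities(mission, state)      # vulnerability assessment -> high-value targets
        return self.plan_actions(targets)                          # OPFOR action planning, executed in the C2 simulator

    def identify_bluefor(self, observations):
        raise NotImplementedError  # task and mission-plan recognition (slides 13-16)

    def assess_vulnerabilities(self, mission, state):
        raise NotImplementedError

    def plan_actions(self, targets):
        raise NotImplementedError
```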

Slide 6: The Focus of the Paper
Identification of BLUEFOR operations and plans

Slide 7: Modeling Details: Quantitative Concepts
- Asset
  - An object in the simulation environment
  - Ex: BLUE recon squad, Humvee, MP squad, etc.
- Task
  - Coordinated actions that actors want to execute
  - Often requires multiple actions to succeed
  - Ex: set up a checkpoint, secure a site, attack RED positions with fire, etc.
- Mission Plan
  - A collection of tasks with precedence constraints
  - Ex: reconnaissance and patrolling
- Organization
  - Connections and interactions between actors
  - Ex: a pattern of interactions, actions, and meetings of an enemy terrorist cell
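A minimal data-model sketch of these concepts. The field names and types are assumptions for illustration; the briefing defines the concepts but not a representation.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str                                         # e.g., "BLUE Recon Squad"
    capabilities: set = field(default_factory=set)    # e.g., {"recon", "patrol"}

@dataclass
class Task:
    name: str                                         # e.g., "Setup checkpoint"
    required_capabilities: set = field(default_factory=set)
    location: tuple = None                            # (x, y) task area, used for spatial reasoning

@dataclass
class MissionPlan:
    name: str                                         # e.g., "Reconnaissance and patrolling"
    tasks: list = field(default_factory=list)         # Task objects
    precedence: list = field(default_factory=list)    # (before_task, after_task) name pairs
```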

Slide 8: Mission and Mission State
- The mission state is defined as a labeled graph
  - Labels = states of tasks
- Mission states can be feasible or infeasible
- The BLUEFOR mission is partially observable
  - The BLUEFOR mission is hidden from OPFOR
  - RED observes the BLUEFOR mission via detected actions
  - The observed mission state needs to be matched against possible feasible states
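A small sketch of the labeled-graph state and a feasibility test. The specific feasibility rule (no task starts before its predecessors finish) is an assumption consistent with the precedence-based reasoning on this slide, not a rule quoted from the paper.

```python
# Hypothetical encoding: nodes are tasks, edges are precedence constraints,
# and each node carries a state label.

TASK_STATES = ("not_started", "in_progress", "completed")

def is_feasible(task_states, precedence):
    """task_states: dict task -> label; precedence: list of (before, after) pairs."""
    for before, after in precedence:
        if task_states[after] != "not_started" and task_states[before] != "completed":
            return False
    return True

# Example: patrolling cannot already be under way before site reconnaissance is completed.
states = {"site_recon": "in_progress", "patrolling": "in_progress"}
print(is_feasible(states, [("site_recon", "patrolling")]))  # False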

Slide 9: Task-Mission Identification
[Figure: identification pipeline. Inputs are observations (tracked entities, elementary actions/moves, properties); Task ID produces the observed mission state; Mission Plan ID then matches that state against knowledge of BLUEFOR missions (candidate plans M1-M5) to output the most likely mission and its state.]

Slide 10: Example of Representing Missions and Tasks
Mission: Reconnaissance and patrolling
- Tasks: conduct site reconnaissance, conduct site security, conduct patrolling-1, conduct patrolling-2, conduct patrolling-3
[Tables (cell values not recoverable from the transcript): "Tasks and their requirements" maps the tasks (site recon, site security, patrolling) to required capability types; "Units and their capabilities" maps the BLUE units (RFL squad, tank, rec squad, MP, Humvee) to the capability types they provide.]
- Task assignments: Rec Sqd (2), Rec Sqd OR RFL Sqd (3), MP Sqd (2)
- Each BLUE unit conducts maneuvers and performs its assigned actions
- Observations of the actions are made by OPFOR
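A small sketch of the matching implied by the two tables: a unit can be assigned to a task if its capabilities cover the task's requirements. The capability codes and unit-to-capability mapping below are illustrative guesses, not values from the slide.

```python
# Hypothetical capability matching in the spirit of this slide's tables.

def can_perform(task_requirements, unit_capabilities):
    """True if the unit's capabilities cover all required capability types."""
    return set(task_requirements) <= set(unit_capabilities)

units = {
    "Rec Squad": {"recon", "patrol"},       # illustrative capability sets
    "MP Squad": {"patrol", "detain"},
    "RFL Squad": {"fire", "patrol"},
}
site_recon_requires = {"recon"}

eligible = [name for name, caps in units.items() if can_perform(site_recon_requires, caps)]
print(eligible)  # ['Rec Squad']
```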

Slide 11: Model Property
- The model allows both temporal and spatial reasoning
- Temporal:
  - The mission state changes over time
  - The actual evolution is limited significantly by precedence constraints and available resources
  - Decision-based transitions: selecting which tasks to do and which resources to use
- Spatial:
  - Tasks have locations
[Figure: hypothetical mission]

Slide 12: Example of Observable Data
RED sensors observe environment activities over time:
1. Geo-spatial information: location of actions and movements
2. Temporal information: time of activities
3. Feature information: who is acting; type/features of the actors and of the action (capability applied)
Example observation:
- Action: search
- Duration: 20 min
- Location: village residential area
- Actors: rifle squad
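A minimal record combining the three information types above, populated with the example from this slide. The field names and the coordinate value are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    action_type: str      # feature information: capability applied, e.g., "search"
    actor: str            # feature information: who, e.g., "Rifle Squad"
    location: tuple       # geo-spatial information, e.g., (x, y) in scenario coordinates
    start_time: float     # temporal information (minutes into the scenario)
    duration: float       # temporal information, e.g., 20.0 minutes

# Example observation from this slide (coordinates and start time are placeholders).
obs = Observation("search", "Rifle Squad", (34.2, 45.1), start_time=130.0, duration=20.0)
```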

Slide 13: Task ID Solution (1)
1. Cluster tasks according to their locations (from the hypothetical mission)
2. Cluster observations according to the areas of those tasks
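A sketch of the spatial step: each observation is assigned to the nearest task area. The nearest-centroid rule is an assumption; the slide only states that observations are clustered by task area.

```python
import math

def assign_to_task_areas(observations, task_locations):
    """observations: list of (x, y) points; task_locations: dict task -> (x, y) area center."""
    assignment = {}
    for i, (ox, oy) in enumerate(observations):
        assignment[i] = min(
            task_locations,
            key=lambda t: math.hypot(ox - task_locations[t][0], oy - task_locations[t][1]),
        )
    return assignment

areas = {"site_recon": (0.0, 0.0), "patrolling": (5.0, 5.0)}
print(assign_to_task_areas([(0.5, 0.2), (4.8, 5.1)], areas))  # {0: 'site_recon', 1: 'patrolling'}
```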

Slide 14: Task ID Solution (2)
3a. Prioritize tasks according to precedence constraints
3b. Cluster observations over time
4. Associate tasks with observations in the order of task priority
5. Perform maximum-likelihood estimation of each task's state
Output: the observed mission state (maximum-likelihood task state estimates over times t1-t5)
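A sketch of step 5. The likelihood model (observations conditionally independent given the task state, with per-state action likelihoods) is an assumption; the slide only states that the task state is estimated by maximum likelihood.

```python
import math

def ml_task_state(task_observations, likelihoods,
                  states=("not_started", "in_progress", "completed")):
    """likelihoods[state][action] = P(observing this action | task in this state)."""
    def log_lik(state):
        return sum(math.log(likelihoods[state].get(obs, 1e-6)) for obs in task_observations)
    return max(states, key=log_lik)

# Illustrative per-state action likelihoods (not from the paper).
lik = {
    "not_started": {"search": 0.05, "movement": 0.10},
    "in_progress": {"search": 0.60, "movement": 0.30},
    "completed":   {"search": 0.10, "movement": 0.05},
}
print(ml_task_state(["search", "search", "movement"], lik))  # in_progress
```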

Slide 15: Mission Plan Recognition Model
- Finding the mission state: maximum a-posteriori (MAP) estimator
- Finding the mission: maximum-likelihood (ML) estimator
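The slide's formulas are not reproduced in the transcript. A plausible formalization consistent with the two bullets, with notation assumed rather than taken from the paper, is:

```latex
% Assumed notation: m = mission plan, s = mission state, O = the set of OPFOR observations.
\hat{s} = \arg\max_{s} \; P(s \mid O, m)
  \quad \text{(MAP estimate of the mission state)}
\hat{m} = \arg\max_{m} \; P(O \mid m)
        = \arg\max_{m} \; \sum_{s} P(O \mid s, m)\, P(s \mid m)
  \quad \text{(ML estimate of the mission)}
```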

Slide 16: Mission ID Solution
1. Conduct maximum a-posteriori mission state estimation, treating the task states as uncertain observations
2. Select the mission plan with the highest likelihood score
The estimation can be solved by dynamic programming, hidden Markov random fields, etc.
[Figure: the observed mission state over times t1-t5 is compared against the max-posterior mission states of the candidate plans; RED's predicted BLUE mission and its state is the best-scoring candidate.]
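An illustrative dynamic-programming (Viterbi-style) scorer for one candidate mission, under an HMM-like assumption of hidden mission states and noisy task-state observations. The slide only notes that dynamic programming or hidden Markov random fields can be used; this particular model, and all names in it, are assumptions.

```python
import math

def best_state_sequence_score(obs_seq, states, trans, emit, prior):
    """Max log-probability of any hidden state sequence explaining obs_seq (Viterbi recursion)."""
    V = {s: math.log(prior[s]) + math.log(emit[s][obs_seq[0]]) for s in states}
    for obs in obs_seq[1:]:
        V = {
            s: max(V[p] + math.log(trans[p][s]) for p in states) + math.log(emit[s][obs])
            for s in states
        }
    return max(V.values())

# Toy two-state example (probabilities are illustrative, not from the paper).
states = ("early_phase", "late_phase")
prior = {"early_phase": 0.8, "late_phase": 0.2}
trans = {"early_phase": {"early_phase": 0.6, "late_phase": 0.4},
         "late_phase":  {"early_phase": 0.1, "late_phase": 0.9}}
emit = {"early_phase": {"recon": 0.7, "fire": 0.1},
        "late_phase":  {"recon": 0.2, "fire": 0.7}}
print(best_state_sequence_score(["recon", "fire"], states, trans, emit, prior))

# Step 2 then amounts to choosing the mission whose model gives the highest score.
```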

Slide 17: Experiment Summary
- Measures
  - % of correct predictions
  - Power of the prediction: entropy (the inverse of ambiguity)
- Runs
  - For each ground-truth BLUE mission
  - Vary the probability of action/event detection by OPFOR elements
  - Vary the time at which predictions are made
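A sketch of one way the "power of the prediction" can be computed, as the entropy of the posterior over candidate missions (lower entropy means less ambiguity). The exact definition used in the experiments is not given in the transcript.

```python
import math

def entropy(posterior):
    """posterior: dict mission -> probability (assumed to sum to 1). Returns bits."""
    return -sum(p * math.log2(p) for p in posterior.values() if p > 0)

print(entropy({"M1": 0.2, "M2": 0.2, "M3": 0.2, "M4": 0.2, "M5": 0.2}))  # ~2.32 bits: maximally ambiguous
print(entropy({"M2": 0.9, "M1": 0.05, "M3": 0.05}))                      # ~0.57 bits: confident prediction
```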

Slide 18: Use-case Details: BLUEFOR Missions
Tasks (operations): site reconnaissance, patrolling, site security, attack OPFOR positions, detain OPFOR members, building search, resupply ops, checkpoint setup
Mission library:
- M1: Reconnaissance and patrolling: site reconnaissance (1), patrolling (3), site security (1)
- M2: Ground stability and defensive ops: site reconnaissance (1), attack OPFOR positions (3), detain OPFOR members (3), site security (1)
- M3: Search: site reconnaissance (1), building search (3), site security (1)
- M4: Security and supplies: site security (3), resupply ops (3), checkpoint setup (1)
- M5: Area and site security: site security (3), checkpoint setup (3), patrolling (1)
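The same mission library encoded as data, as it might feed the recognition sketches above. Interpreting the parenthesized numbers as task instance counts is an assumption.

```python
# The five BLUEFOR mission templates from this slide.
MISSION_LIBRARY = {
    "M1: Reconnaissance and patrolling": {
        "site reconnaissance": 1, "patrolling": 3, "site security": 1},
    "M2: Ground stability and defensive ops": {
        "site reconnaissance": 1, "attack OPFOR positions": 3,
        "detain OPFOR members": 3, "site security": 1},
    "M3: Search": {
        "site reconnaissance": 1, "building search": 3, "site security": 1},
    "M4: Security and supplies": {
        "site security": 3, "resupply ops": 3, "checkpoint setup": 1},
    "M5: Area and site security": {
        "site security": 3, "checkpoint setup": 3, "patrolling": 1},
}
```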

Slide 19: Validation Prototype Architecture
[Figure: validation prototype architecture]

Slide 20: Inference over Time-1
- Prediction improves with time as more events are collected
- P(detection) = 0.5; ground-truth mission: M2, Ground Stability and Defensive Ops (site reconnaissance (1), attack OPFOR positions (3), detain OPFOR members (3), site security (1))
[Figure: ground truth vs. predictions at time1, time2, time3, and END; at time1 the predicted mission plan (M1) is incorrect, and at time2 the mission plan is correct but the state is wrong.]

Slide 21: Inference over Time-2
- P(detection) = 1; ground truth: BLUE mission = Ground Stability and Defensive Ops
[Figure: likelihood estimates over time (x-axis: % of mission completed); once the attack (FIRE actions) task is completed, the hypotheses start to become distinguishable.]

Slide 22: Simulation Results-2: Recognition over Time
- High detection accuracy (>= 75%) in the middle of the mission
- The detection accuracy results are supported by the entropy (the power of the estimator)
- Detection improves towards the end of the mission as more actions are detected
[Figures: % of correct detections over time (averaged over different p(detection) values); entropy over time at P(detection) = 0.5.]

Slide 23: Simulation Results-3: Effect of Detection Probability
- Improving the accuracy of event detection increases the accuracy of predictions
- Sensitivity to specific mission plan classes is observed
[Figures: % of correct detections (averaged over different sample times) and entropy over time as functions of detection probability; sample = time2.]

Slide 24: Using Mission Estimates for RED Planning
Action plans are based on their impact on the BLUE organization:
- Effects on organizational resources: enemy attacks in geographic or functional areas that create resource misutilization and, correspondingly, resource shortages at the commander level
- Effects on organizational interactions: events and task requirements that force BLUEFOR commanders to increasingly coordinate and synchronize their activities, resulting in inefficient coordination patterns, loss of information, and correspondingly delayed actions and erroneous decision making by BLUEFOR
- Effects on mission/objectives: vignettes and obstacles that prevent BLUE C2 from performing specific individual and mission tasks, requiring it to change or abort the mission

Slide 25: Future Plans: Ideas for Phase II
- Automated model building and updating
  - RED discovers over time the patterns of BLUEFOR missions/TTPs
- RED adaptation for plan generation
  - RED learns over time the impact of its actions and the responses of BLUEFOR
- Integration of predictions with OPFOR intelligence collection and sensor planning
  - RED designs actions to collect the data that yield the highest situation awareness