
Learning Agents Center, George Mason University. Symposium on Reasoning and Learning in Cognitive Systems, Stanford, CA, 20-21 May 2004. Gheorghe Tecuci with Mihai Boicu, Dorin Marcu, Bogdan Stanescu, Cristina Boicu, Marcel Barbulescu.


1 Learning Agents Center, George Mason University. Symposium on Reasoning and Learning in Cognitive Systems, Stanford, CA, 20-21 May 2004. Gheorghe Tecuci with Mihai Boicu, Dorin Marcu, Bogdan Stanescu, Cristina Boicu, Marcel Barbulescu

2 Overview: Research Problem, Approach, and Application; Problem Solving Method: Task Reduction; Learnable Knowledge Representation: Plausible Version Spaces; Multistrategy Learning during Problem Solving; Teaching and Learning Demo; Agent Development Experiments; Future Directions: Life-long Continuous Learning; Acknowledgements

3 Research Problem and Approach
Research Problem: Elaborate a theory, methodology and family of tools for the development of knowledge-based agents by subject matter experts, with limited assistance from knowledge engineers.
Approach: Develop a learning agent that can be taught directly by a subject matter expert while solving problems in cooperation. The expert teaches the agent to perform various tasks in a way that resembles how the expert would teach a person. The agent learns from the expert, building, verifying and improving its knowledge base.
Agent architecture: Interface, Problem Solving, Learning, Ontology + Rules.
Modes of operation: 1. Mixed-initiative problem solving; 2. Teaching and learning; 3. Multistrategy learning.

4 Sample Domain: Center of Gravity Analysis
"The center of gravity of an entity (state, alliance, coalition, or group) is the foundation of capability, the hub of all power and movement, upon which everything depends, the point against which all the energies should be directed." (Carl von Clausewitz, On War, 1832)
"If a combatant eliminates or influences the enemy's strategic center of gravity, then the enemy will lose control of its power and resources and will eventually fall to defeat. If the combatant fails to adequately protect his own strategic center of gravity, he invites disaster." (Giles and Galvin, USAWC 1996)
"The center of gravity of an entity is its primary source of moral or physical strength, power or resistance." (Joe Strange, Centers of Gravity & Critical Vulnerabilities, 1996)

5 First computational approach to COG analysis
The approach to center of gravity analysis is based on the concepts of critical capabilities, critical requirements and critical vulnerabilities, which have been recently adopted into the joint military doctrine. It has been applied to current war scenarios (e.g. War on Terror 2003, Iraq 2003) with state and non-state actors (e.g. Al Qaeda).
Identify COG candidates: identify potential primary sources of moral or physical strength, power and resistance from the government, military, people, economy, alliances, etc.
Test COG candidates: test each identified COG candidate to determine whether it has all the necessary critical capabilities. Which are the critical capabilities? Are the critical requirements of these capabilities satisfied? If not, eliminate the candidate. If yes, do these capabilities have any vulnerability?

6 Overview: Research Problem, Approach, and Application; Problem Solving Method: Task Reduction; Learnable Knowledge Representation: Plausible Version Spaces; Multistrategy Learning during Problem Solving; Teaching and Learning Demo; Agent Development Experiments; Future Directions: Life-long Continuous Learning; Acknowledgements

7 Problem Solving: Task Reduction
A complex problem solving task is performed by: successively reducing it to simpler tasks; finding the solutions of the simplest tasks; and successively composing these solutions until the solution to the initial task is obtained.
Let T1 be the problem solving task to be performed. Finding a solution is an iterative process where, at each step, we consider some relevant information that leads us to reduce the current task to a simpler task or to several simpler tasks. The question Q associated with the current task identifies the type of information to be considered. The answer A identifies that piece of information and leads us to the reduction of the current task.
[Figure: a reduction tree in which task T1 is reduced, via question Q1 and answers A11 ... A1n, to subtasks T11 ... T1n, whose solutions S11 ... S1n are composed into the solution S1 of T1.]
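The reduce-solve-compose process described above can be sketched as a short recursive procedure. This is a minimal illustration, not Disciple's implementation; the function names and the toy reduction table are assumptions made for the example.

```python
# Illustrative sketch of the task-reduction paradigm: a task is reduced,
# via a question and its answer, to simpler subtasks until directly
# solvable tasks are reached, and the subtask solutions are composed
# back into a solution of the original task.

def solve(task, reductions, solve_leaf, compose):
    """Solve `task` by successive reduction.

    `reductions(task)` returns None for a directly solvable task, or a
    (question, answer, subtasks) triple describing one reduction step.
    `solve_leaf` solves a leaf task; `compose` combines the subtask
    solutions into a solution of the parent task.
    """
    step = reductions(task)
    if step is None:
        return solve_leaf(task)           # simplest task: solve directly
    question, answer, subtasks = step     # the answer drives the reduction
    sub_solutions = [solve(t, reductions, solve_leaf, compose)
                     for t in subtasks]
    return compose(task, sub_solutions)

# Toy reduction table echoing the Sicily 1943 reasoning chain.
TABLE = {
    "COG(Sicily_1943)": ("Which is an opposing_force?", "Allied_Forces_1943",
                         ["COG(Allied_Forces_1943)"]),
    "COG(Allied_Forces_1943)": ("Which is a member?", "US_1943",
                                ["COG(US_1943)"]),
}

# Here the "solution" is simply the trace of tasks visited.
trace = solve("COG(Sicily_1943)", TABLE.get,
              solve_leaf=lambda t: [t],
              compose=lambda t, subs: [t] + [s for sub in subs for s in sub])
```

Running this yields the trace `["COG(Sicily_1943)", "COG(Allied_Forces_1943)", "COG(US_1943)"]`, mirroring the reduction chain on the next slide.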

8 COG Analysis: World War II at the time of Sicily 1943
We need to: Identify and test a strategic COG candidate for Sicily_1943.
Which is an opposing_force in the Sicily_1943 scenario? Allied_Forces_1943. Therefore we need to: Identify and test a strategic COG candidate for Allied_Forces_1943.
Is Allied_Forces_1943 a single_member_force or a multi_member_force? Allied_Forces_1943 is a multi_member_force. Therefore we need to: Identify and test a strategic COG candidate for Allied_Forces_1943 which is a multi_member_force.
What type of strategic COG candidate should I consider for this multi_member_force? I consider a candidate corresponding to a member of the multi_member_force. Therefore we need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943.
Which is a member of Allied_Forces_1943? US_1943. Therefore we need to: Identify and test a strategic COG candidate for US_1943.

9 Overview: Research Problem, Approach, and Application; Problem Solving Method: Task Reduction; Learnable Knowledge Representation: Plausible Version Spaces; Multistrategy Learning during Problem Solving; Teaching and Learning Demo; Agent Development Experiments; Future Directions: Life-long Continuous Learning; Acknowledgements

10 Knowledge Base: Object Ontology + Rules
Object Ontology: a hierarchical representation of the objects and types of objects, and a hierarchical representation of the types of features.

11 Knowledge Base: Object Ontology + Rules
EXAMPLE OF REASONING STEP: We need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Which is a member of Allied_Forces_1943? US_1943. Therefore we need to: Identify and test a strategic COG candidate for US_1943.
LEARNED RULE, INFORMAL STRUCTURE:
IF Identify and test a strategic COG candidate corresponding to a member of the ?O1
Question: Which is a member of ?O1? Answer: ?O2
THEN Identify and test a strategic COG candidate for ?O2
LEARNED RULE, FORMAL STRUCTURE:
IF Identify and test a strategic COG candidate corresponding to a member of a force; the force is ?O1
THEN Identify and test a strategic COG candidate for a force; the force is ?O2
Plausible Upper Bound Condition: ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force
Plausible Lower Bound Condition: ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force

12 Learnable knowledge representation
Use of the object ontology as an incomplete and evolving generalization hierarchy.
Plausible version space (PVS): in the universe of instances, the concept to be learned lies between a plausible lower bound and a plausible upper bound.
Use of plausible version spaces to represent and use partially learned knowledge: rules with PVS conditions; tasks with PVS conditions; object features with PVS concepts; task features with PVS concepts; feature domains and ranges as PVS concepts.
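The bracketing of a partially learned concept between the two bounds can be sketched as follows. The dictionary-based ontology fragment and the function names are illustrative assumptions, not Disciple's knowledge base or API.

```python
# Illustrative sketch of a plausible version space: an instance covered
# by the plausible lower bound is certainly a concept member; one covered
# only by the plausible upper bound is plausibly a member.

PARENT = {  # toy fragment of an object ontology: child -> parent
    "multi_member_force": "force",
    "single_state_force": "force",
    "equal_partners_multi_state_alliance": "multi_member_force",
}

def is_a(concept, ancestor):
    """True if `concept` equals `ancestor` or lies below it in the hierarchy."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = PARENT.get(concept)
    return False

def classify(instance_type, lower_bound, upper_bound):
    """Classify an instance against a PVS concept."""
    if is_a(instance_type, lower_bound):
        return "member"            # inside the lower bound: certain
    if is_a(instance_type, upper_bound):
        return "plausible member"  # only inside the upper bound: uncertain
    return "not member"
```

With the bounds of the rule from the previous slide, `classify("multi_member_force", "equal_partners_multi_state_alliance", "multi_member_force")` returns `"plausible member"`, while `"single_state_force"` falls outside both bounds.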

13 Overview: Research Problem, Approach, and Application; Problem Solving Method: Task Reduction; Learnable Knowledge Representation: Plausible Version Spaces; Multistrategy Learning during Problem Solving; Teaching and Learning Demo; Agent Development Experiments; Future Directions: Life-long Continuous Learning; Acknowledgements

14 Integrated modeling, learning and problem solving
[Figure: an input task enters mixed-initiative problem solving over the ontology + rules; a generated reduction that the expert accepts triggers rule generalization, one that the expert rejects triggers rule specialization, and a reduction specified through modeling triggers rule learning.]

15 Modeling, Learning, Problem Solving, Refining
1. The expert provides an example: We need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Which is a member of Allied_Forces_1943? US_1943. Therefore we need to: Identify and test a strategic COG candidate for US_1943.
2. Disciple learns Rule_15 from the example.
3. Disciple applies Rule_15 to a new case: We need to: Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943. Which is a member of European_Axis_1943? Germany_1943. Therefore we need to: Identify and test a strategic COG candidate for Germany_1943.
4. The expert accepts the example.
5. Disciple refines Rule_15.
Disciple uses the learned rules in problem solving, and refines them based on the expert's feedback.

16 Rule learning method
From an example of a task reduction step, the agent produces a plausible version space rule with a plausible lower bound (PLB) and a plausible upper bound (PUB), using the knowledge base: hint-guided explanation of the example (which may remain an incomplete explanation), analogy, and analogy-based generalization.

17 Find an explanation of why the example is correct
Example: We need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Which is a member of Allied_Forces_1943? US_1943. Therefore we need to: Identify and test a strategic COG candidate for US_1943.
Explanation: Allied_Forces_1943 has_as_member US_1943.
The explanation is the best possible approximation of the question and the answer, in the object ontology.
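The search for an explanation that approximates the question and answer can be sketched as a lookup over the ontology's facts. The fact list and function name here are illustrative assumptions (Britain_1943 and Roosevelt are made-up distractor facts), not Disciple's actual knowledge base.

```python
# Illustrative sketch: among the ontology's facts, the candidate
# explanations of the example are the facts that directly connect the
# objects mentioned in the question and the answer.

FACTS = [
    ("Allied_Forces_1943", "has_as_member", "US_1943"),
    ("Allied_Forces_1943", "has_as_member", "Britain_1943"),
    ("US_1943", "has_as_commander", "Roosevelt"),
]

def find_explanations(facts, o1, o2):
    """Return the facts linking o1 and o2, as candidate explanations."""
    return [(s, f, v) for (s, f, v) in facts if {s, v} == {o1, o2}]

candidates = find_explanations(FACTS, "Allied_Forces_1943", "US_1943")
```

Here the single candidate is the fact `("Allied_Forces_1943", "has_as_member", "US_1943")`; in general, hints from the expert help select among multiple candidates.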

18 Generate the PVS rule
Example, rewritten with task features: We need to: Identify and test a strategic COG candidate corresponding to a member of a force; the force is Allied_Forces_1943. Therefore we need to: Identify and test a strategic COG candidate for a force; the force is US_1943.
Explanation: Allied_Forces_1943 has_as_member US_1943. Rewritten as the condition: ?O1 is Allied_Forces_1943, has_as_member ?O2; ?O2 is US_1943.
The most specific generalization of this condition becomes the plausible lower bound; the most general generalization, obtained from the feature signature (has_as_member domain: multi_member_force, range: force), becomes the plausible upper bound.
GENERATED RULE:
IF Identify and test a strategic COG candidate corresponding to a member of a force; the force is ?O1
THEN Identify and test a strategic COG candidate for a force; the force is ?O2
Plausible Upper Bound Condition: ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force
Plausible Lower Bound Condition: ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force
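The derivation of the two bounds from the explanation can be sketched as follows. The type and signature dictionaries are illustrative stand-ins for the object ontology, and the function name is an assumption, not Disciple's API.

```python
# Illustrative sketch of generating a PVS condition from an explanation
# (subject, feature, value): the lower bound uses the direct types of the
# instances (most specific generalization), the upper bound uses the
# feature's domain and range (most general generalization).

TYPE_OF = {  # direct type of each instance in the toy ontology
    "Allied_Forces_1943": "equal_partners_multi_state_alliance",
    "US_1943": "single_state_force",
}
FEATURE_SIGNATURE = {
    "has_as_member": {"domain": "multi_member_force", "range": "force"},
}

def generate_pvs_condition(explanation):
    """Build the plausible bounds of a rule condition from an explanation."""
    subject, feature, value = explanation
    sig = FEATURE_SIGNATURE[feature]
    return {
        "lower": {"?O1": TYPE_OF[subject], "?O2": TYPE_OF[value]},
        "upper": {"?O1": sig["domain"], "?O2": sig["range"]},
    }

cond = generate_pvs_condition(("Allied_Forces_1943", "has_as_member", "US_1943"))
```

The result reproduces the bounds of the rule above: lower bound `equal_partners_multi_state_alliance` / `single_state_force`, upper bound `multi_member_force` / `force`.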

19 Rule refinement method
The agent generates examples of task reductions with the PVS rule, through learning by analogy and experimentation against the knowledge base. A correct example triggers learning from examples, generalizing the rule's condition; an incorrect example triggers a failure explanation and learning from explanations, specializing the condition or adding a PVS Except-When condition. The refined rule has the form: IF ... THEN, with a PVS Condition and zero or more PVS Except-When Conditions.
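The two refinement moves can be sketched over the same toy generalization hierarchy used earlier. The function names and the concept dominant_partner_multi_state_alliance are illustrative assumptions, not Disciple's implementation.

```python
# Illustrative sketch of rule refinement: a correct example generalizes
# the plausible lower bound just enough to cover it; an incorrect example
# is ruled out by recording an Except-When condition.

PARENT = {  # toy fragment of an object ontology: child -> parent
    "multi_member_force": "force",
    "single_state_force": "force",
    "equal_partners_multi_state_alliance": "multi_member_force",
    "dominant_partner_multi_state_alliance": "multi_member_force",
}

def ancestors(concept):
    """The concept and all its generalizations, most specific first."""
    chain = []
    while concept is not None:
        chain.append(concept)
        concept = PARENT.get(concept)
    return chain

def minimal_generalization(a, b):
    """Lowest concept in the hierarchy covering both a and b."""
    up = ancestors(a)
    return next(c for c in ancestors(b) if c in up)

def refine(rule, example_type, correct):
    if correct:
        # generalize the lower bound minimally to cover the new example
        rule["lower"] = minimal_generalization(rule["lower"], example_type)
    else:
        # keep the bounds, but exclude this case explicitly
        rule.setdefault("except_when", []).append(example_type)
    return rule

rule = {"lower": "equal_partners_multi_state_alliance",
        "upper": "multi_member_force"}
refine(rule, "dominant_partner_multi_state_alliance", correct=True)
```

After the correct example, the lower bound climbs to `multi_member_force`, the minimal concept covering both kinds of alliance; the bound never climbs above the plausible upper bound.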

20 Overview: Research Problem, Approach, and Application; Problem Solving Method: Task Reduction; Learnable Knowledge Representation: Plausible Version Spaces; Multistrategy Learning during Problem Solving; Teaching and Learning Demo; Agent Development Experiments; Acknowledgements

21 Agent Development Methodology
1. Modeling the problem solving process of the subject matter expert and development of the object ontology of the agent.
2. Teaching of the agent by the subject matter expert.

22 Use of Disciple at the US Army War College: 319jw Case Studies in Center of Gravity Analysis
Disciple was taught based on the expertise of Prof. Comello in center of gravity analysis. Disciple helps the students to perform a center of gravity analysis of an assigned war scenario.
Global evaluations of Disciple by officers from the Spring 03 course:
- Disciple helped me to learn to perform a strategic COG analysis of a scenario.
- The use of Disciple is an assignment that is well suited to the course's learning objectives.
- Disciple should be used in future versions of this course.

23 Use of Disciple at the US Army War College: 589jw Military Applications of Artificial Intelligence course
Students teach Disciple their COG analysis expertise, using sample scenarios (Iraq 2003, War on Terror 2003, Arab-Israeli 1973). Students then test the trained Disciple agent on a new scenario (North Korea 2003).
Global evaluations of Disciple by officers during three experiments (Spring 2001: COG identification; Spring 2002: COG identification and testing; Spring 2003: COG testing based on critical capabilities):
- I think that a subject matter expert can use Disciple to build an agent, with limited assistance from a knowledge engineer.

24 Parallel development and merging of KBs (DISCIPLE-COG)
Domain analysis and ontology development (knowledge engineer (KE) + all subject matter experts (SME)) produced the initial KB for COG identification for leaders: 432 concepts and features, 29 tasks, 18 rules. The KB was then extended with 37 acquired concepts and features for COG testing of leaders (have support, be protected, stay informed, communicate, be influential, be driving force, be irreplaceable), yielding the integrated KB for COG identification and testing.
Parallel KB development (SME assisted by KE), on the training scenarios Iraq 2003, Arab-Israeli 1973 and War on Terror 2003, produced the following learned features, tasks and rules:
- Team 1: 5 features, 10 tasks, 10 rules
- Team 2: 14 tasks, 14 rules
- Team 3: 2 features, 19 tasks, 19 rules
- Team 4: 35 tasks, 33 rules
- Team 5: 3 features, 24 tasks, 23 rules
KB merging (KE): unified 2 features, deleted 4 rules, refined 12 rules.
Final KB: +9 features (478 concepts and features), +105 tasks (134 tasks), +95 rules (113 rules).
Testing scenario: North Korea 2003. Correctness = 98.15%; 5h 28min average training time per team; 3.53 average rule learning rate per team.

25 Other Disciple agents
Disciple-WA (1997-1998): Estimates the best plan of working around damage to a transportation infrastructure, such as a damaged bridge or road. Demonstrated that a knowledge engineer can use Disciple to rapidly build and update a knowledge base capturing knowledge from military engineering manuals and a set of sample solutions provided by a subject matter expert.
Disciple-COA (1998-1999): Identifies strengths and weaknesses in a Course of Action, based on the principles of war and the tenets of Army operations. Demonstrated the generality of its learning methods, which used an object ontology created by another group (TFS/Cycorp), and that a knowledge engineer and a subject matter expert can jointly teach Disciple.

26 Overview: Research Problem, Approach, and Application; Problem Solving Method: Task Reduction; Learnable Knowledge Representation: Plausible Version Spaces; Multistrategy Learning during Problem Solving; Teaching and Learning Demo; Agent Development Experiments; Future Directions: Life-long Continuous Learning; Acknowledgements

27 Future Directions: Life-Long Continuous Agent Learning
The learning agent moves from the implicit reasoning of the human expert to explicit reasoning in natural language through: modeling with non-disruptive learning; ontology elicitation (ontology extensions); rule & ontology learning (learned rules and ontology); rule & ontology refining (refined rules and ontology); user model learning (user model); exception handling (cases, rules); and KB maintenance (rules without exceptions).
1. Multistrategy teaching and learning: plausible version spaces; learning from instruction, from examples, from explanations, and by analogy; analogy-based, explanation-based, natural-language-based and abstraction-based methods.
2. Mixed-initiative problem solving and learning: mixed-initiative learning; routine, innovative, inventive, and creative reasoning.
3. Autonomous (and interactive) multistrategy learning: automatic inductive learning; case-based learning; abductive learning; ontology discovery.
4. KB maintenance and optimization.

28 Overview: Research Problem, Approach, and Application; Problem Solving Method: Task Reduction; Learnable Knowledge Representation: Plausible Version Spaces; Multistrategy Learning during Problem Solving; Teaching and Learning Demo; Agent Development Experiments; Future Directions: Life-long Continuous Learning; Acknowledgements

29 This research was sponsored by the Defense Advanced Research Projects Agency, Air Force Research Laboratory, Air Force Material Command, USAF under agreement number F30602-00-2-0546, by the Air Force Office of Scientific Research under grant number F49620-00-1-0072 and by the US Army War College. Acknowledgements
