
1 Lecture 11: Learning and problem solving agents. Prof. Gheorghe Tecuci, Learning Agents Laboratory, Computer Science Department, George Mason University. © 2003, G. Tecuci, Learning Agents Laboratory.

2 Overview: Learning and problem solving agents: Disciple; An agent for center of gravity analysis; Modeling of problem solving through task reduction; Knowledge base: object ontology + rules; Rule-based problem solving; Control of the problem solving process; Control of modeling, learning and problem solving; Multistrategy rule learning; Multistrategy rule refinement.

3 What are intelligent agents. [Diagram: an intelligent agent receives input from the user/environment through sensors and acts on it through effectors, producing output.] An intelligent agent is a system that: perceives its environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or other complex environment); reasons to interpret perceptions, draw inferences, solve problems, and determine actions; and acts upon that environment to realize a set of goals or tasks for which it was designed.

4 The architecture of an intelligent agent. [Diagram: the intelligent agent exchanges input and output with the user/environment through sensors and effectors; inside, a problem solving engine and a learning engine both operate on the knowledge base.] The problem solving engine implements a general problem solving method that uses the knowledge from the knowledge base to interpret the input and provide an appropriate output. The learning engine implements learning methods for extending and refining the knowledge in the knowledge base. The knowledge base (an ontology plus rules/cases/…) consists of data structures that represent the objects from the application domain, general laws governing them, actions that can be performed with them, etc.
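
As a minimal illustration of this architecture (the class and method names below are illustrative assumptions, not Disciple's actual API), the two engines can be sketched as components sharing one knowledge base:

    # Minimal sketch of the architecture above; all names are illustrative,
    # not Disciple's actual API.

    class KnowledgeBase:
        def __init__(self):
            self.ontology = {}   # objects of the application domain
            self.rules = []      # general laws / problem solving rules

    class ProblemSolvingEngine:
        """Uses the knowledge base to interpret the input and produce an output."""
        def __init__(self, kb):
            self.kb = kb
        def solve(self, task):
            for rule in self.kb.rules:           # general problem solving method
                if rule["applies_to"](task, self.kb):
                    return rule["apply"](task, self.kb)
            return None                          # no applicable knowledge

    class LearningEngine:
        """Extends and refines the knowledge base from training examples."""
        def __init__(self, kb):
            self.kb = kb
        def learn(self, rule):
            self.kb.rules.append(rule)           # e.g. a rule generalized from an example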

5 How are agents built and why it is hard. [Diagram: the domain expert and the knowledge engineer interact through dialog; the knowledge engineer programs the intelligent agent (knowledge base plus inference engine), which produces results.] The knowledge engineer attempts to understand how the subject matter expert reasons and solves problems and then encodes the acquired expertise into the agent's knowledge base. This modeling and representation of the expert's knowledge is long, painful and inefficient (known as the "knowledge acquisition bottleneck").

6 We are using the term agent as a metaphor for a modern Artificial Intelligence system. Let us consider the problem of developing an agent that exhibits the problem solving expertise of a subject matter expert. This diagram illustrates the current practice of building such an agent. It involves a subject matter expert and a knowledge engineer. The knowledge engineer attempts to understand how the subject matter expert reasons and solves problems and then encodes the acquired expertise into the agent's knowledge base. This modeling and representation of the expert's knowledge is long, painful and inefficient, and is well known as the "knowledge acquisition bottleneck" of agent development. Moreover, any future adaptation and development of the agent requires an even greater effort on the part of the subject matter expert and the knowledge engineer. One of the main causes of this situation is that the knowledge engineer, who is not an expert in the application domain, needs to play a critical role in every step of the creation and maintenance of the agent's knowledge. A solution to this problem is to develop more capable intelligent agents that can be instructed directly by the subject matter experts.

7 Another approach to agent building. [Diagram: in the conventional approach, the domain expert talks to the knowledge engineer, who programs the intelligent agent (knowledge base plus inference engine); in the new approach, the domain expert trains an instructable agent (knowledge base, inference engine and learning engine) directly through dialog.] Agent training directly by the subject matter expert.

8 Disciple approach to agent building. Disciple approach: develop learning and problem solving agents that can be taught by subject matter experts to become knowledge-based assistants. The expert teaches the agent to perform various tasks in a way that resembles how the expert would teach a person. The agent learns from the expert, building, verifying and improving its knowledge base. Key elements: rapid agent development and maintenance; mixed-initiative problem solving; teaching and learning; multistrategy learning. LALAB has developed a theory, a methodology, and a family of tools for the rapid building of knowledge bases and agents by subject matter experts, with limited assistance from knowledge engineers, to overcome the knowledge acquisition bottleneck.

9 LALAB has developed a theory, a methodology, and a family of tools for the rapid building of knowledge bases and agents by subject matter experts, with limited assistance from knowledge engineers, to overcome the knowledge acquisition bottleneck. The main idea of this approach, called Disciple, consists in developing an instructable (learning) agent that can be taught directly by a subject matter expert to become a knowledge-based assistant. The expert will teach the agent how to perform problem solving tasks in a way that is similar to how the expert would teach a person. That is, the expert will teach the agent by providing it with examples of how to solve specific problems, helping it to understand the solutions, and supervising and correcting the problem solving behavior of the agent. The agent will learn from the expert by generalizing the examples and building its knowledge base. In essence, the goal is to create a synergism between the expert, who has the knowledge to be formalized, and the agent, which knows how to formalize it. This is based on methods for mixed-initiative problem solving, integrated teaching and learning, and multistrategy learning. The end result is an approach for rapid agent development and maintenance. The Disciple approach has been successfully applied to several military domains (workaround planning, course of action critiquing, and center of gravity analysis), proving to be superior to competing agent development approaches, as discussed in the following viewgraphs.

10 Disciple's vision on the future of software development. Mainframe computers: software systems developed and used by computer experts. Personal computers: software systems developed by computer experts and used by persons who are not computer experts. Learning agents: software systems developed and used by persons who are not computer experts.

11 First let us introduce two long-term research visions for Disciple that guide our work: one related to the future of software development, and the other related to education. We think that the Disciple approach contributes directly to a new age in the software systems development process, as illustrated in this viewgraph. In the mainframe computers age, the software systems were both built and used by computer science experts. In the current age of personal computers, these systems are still being built by computer science experts, but many of them (such as text processors, email programs, or Internet browsers) are now used by persons who have no formal computer education. Continuing this trend, we think that the next age will be that of the personal agents, where typical computer users will be able to both develop and use special types of software agents. The Disciple approach attempts to change the way intelligent agents are built, from "being programmed" by a knowledge engineer to "being taught" by a user who does not have prior knowledge engineering or computer science experience. This approach would allow a typical computer user, who is not a trained knowledge engineer, to build an intelligent assistant by himself as easily as he now uses a word processor to write a paper. One need only consider how personal computers have changed our work and lives to imagine how much greater the impact of learning agents will be.

12 Vision on the use of Disciple in Education. [Diagram: an expert/teacher teaches a Disciple agent, whose knowledge base then supports several copies of the agent, each tutoring a student.] The expert/teacher teaches Disciple through examples and explanations, in a way that is similar to how the expert would teach a student. Disciple then tutors each student in a way that is similar to how the expert/teacher has taught it.

13 The Disciple approach is particularly relevant to education; this viewgraph illustrates our long-term research vision in this area. As shown in the left-hand side, a teacher teaches a Disciple agent through examples and explanations, in a way that is similar to how the teacher would teach a student. After that, the Disciple agent can be used as a personal tutor, teaching the students in a way that is similar to how it was taught by the teacher.

14 DARPA's HPKB program: evaluation. Disciple-WA (1997-1998): estimates the best plan of working around damage to a transportation infrastructure, such as a damaged bridge or road. Disciple-WA demonstrated that a knowledge engineer can use Disciple to rapidly build and update a knowledge base capturing knowledge from military engineering manuals and a set of sample solutions provided by a subject matter expert. Disciple-WA features: high knowledge acquisition rate (a 72% increase of KB size in 17 days); high problem solving performance (including unanticipated solutions); demonstrated at EFX'98 as part of an integrated application led by Alphatech. [Charts: development of Disciple's KB during evaluation; evolution of KB coverage and performance from the pre-repair phase to the post-repair phase.]

15 An earlier version of the Disciple approach was developed as part of DARPA's High Performance Knowledge Bases program and was evaluated in the context of the workaround challenge problem. This problem consists of the rapid development and use of a planning agent that can estimate the enemy's best way of working around damage to a transportation infrastructure, such as a damaged bridge or road. The knowledge base of the Disciple-WA agent was developed by a knowledge engineer who captured knowledge from military engineering manuals and from sample solutions provided by a subject matter expert. The graph on the left-hand side shows the high rate of knowledge acquisition achieved by the George Mason University (GMU) team during the DARPA evaluation. The graph on the right-hand side, elaborated by Alphatech, the evaluator for this challenge problem, shows that the GMU team achieved the highest problem solving performance and the second highest coverage of the problem space. Disciple-WA surprised the evaluators by finding several workaround plans that were not anticipated. Disciple-WA was the system selected by DARPA to be integrated into a larger system for target selection, which was demonstrated by Alphatech at EFX'98, the Air Force's showcase of the most promising new technologies.

16 DARPA's HPKB program: evaluation. Disciple-COA (1998-1999): identifies strengths and weaknesses in a course of action, based on the principles of war and the tenets of army operations. Disciple-COA demonstrated the generality of its learning methods, which used an object ontology created by another group (TFS/Cycorp). It also demonstrated that a knowledge engineer and a subject matter expert can jointly teach Disciple. Disciple-COA features: high knowledge acquisition rate (a 46% increase of KB size in 8 days); better performance than the other evaluated systems; better performance than the evaluating experts (many unanticipated solutions, with some scores above 100%). [Charts: development of Disciple's KB during evaluation; evolution of KB coverage and performance from the pre-repair phase to the post-repair phase for the final 3 evaluation items.]

17 The second challenge problem in DARPA's HPKB program was to rapidly develop a course of action critiquer. The GMU team developed an agent that identifies strengths and weaknesses in a course of action, based on the principles of war and the tenets of Army operations, by answering questions such as: "To what extent does this course of action conform to the principle of surprise?" The knowledge base of the Disciple-COA agent was developed by a knowledge engineer and a subject matter expert who jointly taught Disciple by critiquing specific courses of action. The graphs on the right-hand side show the results of the evaluations performed by Alphatech. The GMU team achieved both the highest performance and the highest coverage of the problem space. A remarkable feature is that many of the scores of the GMU team were above 100%, because Disciple-COA identified many strengths and weaknesses that were missed by the evaluating experts.

18 Overview: Learning and problem solving agents: Disciple; An agent for center of gravity analysis; Modeling of problem solving through task reduction; Knowledge base: object ontology + rules; Rule-based problem solving; Control of the problem solving process; Control of modeling, learning and problem solving; Multistrategy rule learning; Multistrategy rule refinement.

19 Center of gravity challenge problem. Develop an intelligent agent that is able to identify and test strategic Center of Gravity candidates for a war scenario. "The center of gravity of an entity (state, alliance, coalition, or group) is the foundation of capability, the hub of all power and movement, upon which everything depends, the point against which all the energies should be directed." Carl von Clausewitz, "On War," 1832. "If a combatant eliminates or influences the enemy's strategic center of gravity, then the enemy will lose control of its power and resources and will eventually fall to defeat. If the combatant fails to adequately protect his own strategic center of gravity, he invites disaster." Giles and Galvin, 1996.

20 Approach to center of gravity analysis. Joe Strange, Centers of Gravity & Critical Vulnerabilities, 1996. Centers of Gravity: primary sources of moral or physical strength, power or resistance. Critical Capabilities: primary abilities which merit a Center of Gravity to be identified as such in the context of a given scenario, situation or mission. Critical Requirements: essential conditions, resources and means for a Critical Capability to be fully operative. Critical Vulnerabilities: Critical Requirements or components thereof which are deficient, or vulnerable to neutralization, interdiction or attack (moral/physical harm) in a manner achieving decisive results – the smaller the resources and effort applied and the smaller the risk and cost, the better.

21 First computational approach to COG analysis: an approach to center of gravity analysis based on the concepts of critical capabilities, critical requirements and critical vulnerabilities, which have been recently adopted into the joint military doctrine, applied to current war scenarios (e.g. War on Terror 2003, Iraq 2003) with state and non-state actors (e.g. Al Qaeda). Identification of COG candidates: identify potential primary sources of moral or physical strength, power and resistance from the government, military, people, economy, alliances, etc. Testing of COG candidates: test each identified COG candidate to determine whether it has all the necessary critical capabilities: Which are the critical capabilities? Are the critical requirements of these capabilities satisfied? If not, eliminate the candidate. If yes, do these capabilities have any vulnerability?

22 Working with subject matter experts from the US Army War College, and building on previous work done there (e.g. Giles, P.K., and Galvin, T.P. 1996. Center of Gravity: Determination, Analysis and Application. CSL, US Army War College, Carlisle Barracks, PA) and at the US Marine Corps War College (Strange, J. 1996. Centers of Gravity & Critical Vulnerabilities: Building on the Clausewitzian Foundation So That We Can All Speak the Same Language. Quantico, VA: Marine Corps University Foundation), we have developed an advanced approach to center of gravity determination, based on the concepts of critical capabilities, critical requirements and critical vulnerabilities, which have been recently adopted into the joint doctrine. We have developed knowledge bases and agents that are used by high-ranking officers at the US Army War College to analyze historic, current and hypothetical war scenarios with state and non-state actors.

23 Critical capabilities needed to be a COG. [Diagram mapping candidate types to required critical capabilities. For a leader: be protected, stay informed, communicate, have support, be a driving force, be influential, be irreplaceable. For the people and the military, capabilities shown include: support the goal, communicate desires to the highest level leadership, receive communication from the highest level leadership, have a positive impact, support the highest level leadership, be influential, be indispensable, exert power, be deployable. Other candidate sources shown: industrial capacity, financial capacity, ideology, external support, will of a multi-member force.]

24 Illustration: Saddam Hussein (Iraq 2003). Critical capability: to be protected. Corresponding critical requirement: have means to be protected from all threats. Means and their vulnerabilities: Republican Guard Protection Unit – loyalty not based on conviction and can be influenced by the US-led coalition. Complex of Iraqi Bunkers – location known to the US-led coalition; design known to the US-led coalition; can be destroyed by the US-led coalition. Iraqi Military – loyalty can be influenced by the US-led coalition; can be destroyed by the US-led coalition. System of Saddam Doubles – loyalty of the Saddam Doubles to Saddam can be influenced by the US-led coalition.

25 Use of Disciple at the US Army War College, in "319jw Case Studies in Center of Gravity Analysis." Teaching and learning: Disciple was taught based on the expertise of Prof. Comello in center of gravity analysis. Problem solving: Disciple helps the students to perform a center of gravity analysis of an assigned war scenario. Global evaluations of Disciple by officers from the Spring 03 course: "The use of Disciple is an assignment that is well suited to the course's learning objectives"; "Disciple helped me to learn to perform a strategic COG analysis of a scenario"; "Disciple should be used in future versions of this course."

26 This viewgraph shows the use of Disciple in "319jw Case Studies in Center of Gravity Analysis" (the COG course) at the US Army War College. First, Disciple was taught how to analyze a scenario, based on the expertise of Prof. Jerome Comello, the course's instructor. Then, the students used Disciple as an intelligent assistant that helped them develop a center of gravity analysis of a war scenario. In the last three years, Disciple was used in six successive sessions of this course (attended by a total of 55 high-ranking US officers, national reserve and international fellows), becoming part of the regular syllabus. The scenarios analyzed during Spring 2003 were: War on Terror 2003, Iraq 2003, Israel–PLO 2003, and North Korea 2003. On average, a scenario was represented by around 900 facts. The knowledge base of the Disciple agent contained 586 objects and features, 409 tasks and 419 task reduction and solution composition rules. The bottom part of this viewgraph shows the results of some of the global evaluations of the last version of Disciple, performed by the officers from the Spring 2003 COG course. The officers were asked to express their disagreement or agreement with the statements from the viewgraph. These results demonstrate that the Disciple approach can be used to develop agents that are useful for a complex military task.

27 Student – Disciple collaboration. The student: is guided by Disciple to describe the relevant aspects of a strategic environment; studies the logic behind COG identification and testing; critiques Disciple's analysis and finalizes the analysis report. Disciple: develops a formal representation of the scenario; identifies and tests strategic COG candidates; generates a COG analysis report.

28 The student is guided by Disciple to describe the relevant aspects of a strategic scenario. Disciple represents the scenario as instances in its object ontology.

29 This is a typical screen of the Scenario Elicitation tool. The left-hand side is a tree of titles and subtitles, similar to a table of contents. Each title (or node) corresponds to a certain type of information. When the user clicks on such a node, the agent displays relevant questions about that node in the right-hand side of the screen. If the user has previously provided answers to some of those questions, these answers are also displayed and the user can revise them. Otherwise the user is asked to provide the corresponding answers. Some answers provided by the user may generate additional nodes in the left-hand side. For instance, if the user has indicated that an opposing force is a multi-state alliance, the agent will ask the user to indicate the member states. Then, for each such state, the agent will create additional nodes in the left-hand side. Clicking on such a node will initiate the dialog for providing relevant information about that state. This interaction may continue until the user has answered all the agent's questions.

30 Execution of the elicitation scripts. [Diagram: the scenario Sicily_1943, an instance of Scenario, linked by Has_as_opposing_force to Allied_Forces_1943 and European_Axis_1943, both instances of Opposing_force (a subconcept of Force).] Script type: elicit the feature Has_as_opposing_force for an instance. Controls – question: "Name the opposing forces in <instance>"; answer variable: …; control type: multiple-line, height 4. Ontology actions: create an instance of Opposing_force and the Has_as_opposing_force relationship. Script calls: elicit the properties of the instance in a new window.

31 This slide illustrates the scenario specification process. The objects and the features from the object ontology have elicitation scripts associated with them, which have been defined by a knowledge engineer. The left-hand side of the slide shows an example of a script associated with a feature. This script specifies that, in order to elicit the feature "has_as_opposing_force" for an instance of any concept, the agent should ask the expert to provide the names of the opposing forces, create an instance for each given opposing force, and create the relationship "has_as_opposing_force" between the initial instance and each opposing force. After that, the agent should elicit the properties of each opposing force. When this script is executed, a pane is created on the screen where the expert can type the names of the opposing forces. When the expert enters a name, for instance Allied_Forces_1943, the agent creates the instance Allied_Forces_1943 and its relationship with the instance Sicily_1943. When the expert enters European_Axis_1943, the agent creates the instance European_Axis_1943 and its relationship with Sicily_1943.
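
A rough sketch of what executing such a script amounts to (the dictionary encoding and the function name are illustrative assumptions, not Disciple's actual script format):

    # Sketch of executing the elicitation script from the slide.

    def elicit_opposing_forces(ontology, instance, answers):
        """Ask for the opposing forces, create an instance for each answer,
        and link it to `instance` via has_as_opposing_force."""
        print("Name the opposing forces in", instance)
        links = ontology.setdefault(instance, {}).setdefault(
            "has_as_opposing_force", [])
        for name in answers:                 # names typed by the expert
            ontology[name] = {"instance_of": "Opposing_force"}
            links.append(name)
            # the script then calls: elicit properties of `name` in a new window

    ontology = {"Sicily_1943": {"instance_of": "Scenario"}}
    elicit_opposing_forces(ontology, "Sicily_1943",
                           ["Allied_Forces_1943", "European_Axis_1943"])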

32 Sample object ontology. [Diagram: a fragment of the object ontology for Sicily_1943 ("WWII Allied invasion of Sicily in 1943"). War_scenario is a subconcept of Scenario; Sicily_1943 is an instance of War_scenario with opposing forces such as Allied_Forces_1943, an instance of Equal_partners_multi_state_alliance (a subconcept of Multi_state_alliance and Multi_state_force, alongside Single_state_force, Single_group_force and Multi_group_force under Opposing_force and Force). Allied_Forces_1943 has as component states US_1943 and Britain_1943, and has as primary force element Allied_forces_operation_Husky, a Group with type of operations "combined and joint operations" and subgroups US_7th_Army_(Force_343), Br_8th_Army_(Force_545), Western_Naval_TF, Eastern_Naval_TF, US_9th_Air_Force and Northwest_Africa_Air_Force. On the economic side, industrial_capacity_of_US_1943 (an instance of Industrial_capacity, an Industrial_factor and Economic_factor under Strategic_COG_relevant_factor) is a major generator of War_materiel_and_transports_of_US_1943, an instance of War_materiel_and_transports, under Strategically_essential_goods_or_materiel, Product and Resource_or_infrastructure_element.]

33 Disciple identifies and tests COG candidates. The students study the logic behind COG identification and testing.

34 Disciple can be asked, at any time, to identify and test the strategic center of gravity candidates for the current specification of the scenario. The previous viewgraph shows the COG solution viewer. Its left-hand side contains the list of the center of gravity candidates identified by Disciple for each of the opposing forces in the Sicily_1943 scenario. For US_1943 they are: the will of the people of US, President Roosevelt, the military of US, and the industrial capacity of US. When a candidate is selected in the left-hand side of the viewer, its justification for identification and/or testing is displayed in the right-hand side of the viewer. Disciple uses the task reduction paradigm to generate these justifications, as will be presented later.

35 Disciple generates a COG analysis report for the student to finalize.

36 As illustrated before, Disciple guides the student to identify, study and describe the relevant aspects of the opposing forces in a particular scenario. Then Disciple identifies and tests the strategic center of gravity candidates. After that, Disciple generates a draft analysis report, a fragment of which is shown in the previous viewgraph. The first part of this report contains a description of the scenario, generated by Disciple based on the information elicited from the student. The second part of the report includes all the center of gravity candidates identified by Disciple, together with their justifications for identification and testing. The student must now finalize this report by examining each of the center of gravity candidates and their justifications, completing, correcting, or even rejecting Disciple's reasoning, and providing an alternative line of reasoning. This is productive for several reasons. First, the agent generates its proposed solutions by applying general reasoning rules and heuristics, learned previously from the course's instructor, to a new scenario described by the student. Second, center of gravity analysis is influenced by personal experiences and subjective judgments, and the student (who has unique military experience and biases) may have a different interpretation of certain facts. This requirement to critically analyze the solutions generated by the agent is an important educational component for military commanders that mimics military practice: commanders have to critically investigate several courses of action proposed by their staff and make the final decision on which one to use.

37 Demonstration: Disciple as a strategic leader assistant.

38 Overview: Learning and problem solving agents: Disciple; An agent for center of gravity analysis; Modeling of problem solving through task reduction; Knowledge base: object ontology + rules; Rule-based problem solving; Control of the problem solving process; Control of modeling, learning and problem solving; Multistrategy rule learning; Multistrategy rule refinement.

39 Problem solving approach: task reduction. A complex problem solving task is performed by: successively reducing it to simpler tasks; finding the solutions of the simplest tasks; and successively composing these solutions until the solution of the initial task is obtained. [Diagram: a task T1 is reduced to subtasks T11 ... T1n; T11 is reduced further to T111 ... T11m; the elementary solutions S111 ... S11m are composed into S11, and S11 ... S1n into S1.] Disciple uses the task-reduction paradigm.

40 Question-answering guided task reduction. Let T1 be the problem solving task to be performed. Finding a solution is an iterative process where, at each step, we consider some relevant information that leads us to reduce the current task to several simpler tasks. The question Q associated with the current task identifies the type of information to be considered. The answer A identifies that piece of information and leads us to the reduction of the current task. [Diagram: the reduction tree of slide 39, with a question Q1 attached to T1; each answer (e.g. A11, A1n) selects a reduction, and each subtask (e.g. T11b with question Q11b and answers A11b1 ... A11bm) is reduced in the same way; solutions are composed bottom-up.]
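
In pseudocode terms, the paradigm is a recursive reduce-and-compose loop; the following sketch (all function names are illustrative assumptions) captures it:

    # Sketch of question-answering guided task reduction.

    def solve(task, is_elementary, solve_elementary, reduce_task, compose):
        """Reduce `task` until elementary tasks are reached, then compose
        the subtask solutions bottom-up into the solution of `task`."""
        if is_elementary(task):
            return solve_elementary(task)
        # the question Q identifies the relevant information;
        # the answer A selects the reduction into subtasks
        question, answer, subtasks = reduce_task(task)
        subsolutions = [solve(t, is_elementary, solve_elementary,
                              reduce_task, compose) for t in subtasks]
        return compose(task, subsolutions)       # S1 from S11 ... S1n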

41 Modeling COG analysis through task reduction (and solution composition). Task: Identify and test a strategic COG candidate for Sicily_1943. Question: What kind of scenario is Sicily_1943? Answer: Sicily_1943 is a war scenario. Task reduction: Identify and test a strategic COG candidate for Sicily_1943 which is a war scenario. Solution composition (bottom-up): The will_of_Allied_Forces_1943 is a COG candidate with respect to the cohesion of Allied_Forces_1943. The will of Allied Forces 1943 is a strategic COG candidate that cannot be eliminated because it has all the necessary critical capabilities...

42 Task: Identify and test a strategic COG candidate for Sicily_1943 which is a war scenario. Question: Which is an opposing force in the Sicily_1943 scenario? Answer: Allied_Forces_1943. Reduced task: Identify and test a strategic COG candidate for Allied_Forces_1943. Question: Is Allied_Forces_1943 a single-member force or a multi-member force? Answer: Allied_Forces_1943 is a multi-member force. Reduced task: Identify and test a strategic COG candidate for Allied_Forces_1943 which is a multi-member force. Composed solutions: The will_of_Allied_Forces_1943 is a COG candidate with respect to the cohesion of Allied_Forces_1943. The will of Allied Forces 1943 is a strategic COG candidate that cannot be eliminated because it has all the necessary critical capabilities...

43 Task: Identify and test a strategic COG candidate for Allied_Forces_1943 which is a multi-member force. Question: What type of strategic COG candidate should I consider for this multi-member force? Answer: I consider a candidate corresponding to the multi-member nature of the force → Reduced task: Identify and test a strategic COG candidate corresponding to the multi-member nature of Allied_Forces_1943. Answer: I consider a candidate corresponding to a member of the multi-member force → Reduced task: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Composed solutions: The will_of_Allied_Forces_1943 is a COG candidate with respect to the cohesion of Allied_Forces_1943. The will of Allied Forces 1943 is a strategic COG candidate that cannot be eliminated because it has all the necessary critical capabilities...

44 Task: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Question: Which is a member of Allied_Forces_1943? Answer: US_1943. Reduced task: Identify and test a strategic COG candidate for US_1943. Question: What kind of force is US 1943? Answer: US_1943 is a single-member force. Reduced task: Identify and test a strategic COG candidate for US_1943 which is a single-member force... Composed solutions: The will_of_the_people_of_US_1943 is a strategic COG candidate with respect to the people_of_US_1943. The will of people of US 1943 is a strategic COG candidate that cannot be eliminated because it has all the necessary critical capabilities...

45 Task: Identify and test a strategic COG candidate for US 1943 which is a single-member force. Question: What type of strategic COG candidate should I consider for this single-member force? Answers and reduced tasks: I consider a strategic COG candidate with respect to the people of US 1943 → Identify and test a strategic COG candidate with respect to the people of US 1943. I consider a strategic COG candidate with respect to the government of US 1943 → Identify and test a strategic COG candidate with respect to the government of US 1943. I consider a strategic COG candidate with respect to the armed forces of US 1943 → Identify and test a strategic COG candidate with respect to the armed forces of US 1943. I consider a candidate corresponding to the economy of US 1943 → Identify and test a strategic COG candidate corresponding to the economy of US 1943. I consider a candidate corresponding to other sources of moral or physical strength, power and resistance of US 1943 → Identify and test a strategic COG candidate with respect to other sources of moral or physical strength, power and resistance of US 1943.

46 Task: Identify and test a strategic COG candidate with respect to the government of US 1943. Question: Who or what is a main controlling element of the government_of_US_1943? Answer: President Roosevelt, who has a critical role in setting objectives and making decisions. Reduced tasks: Identify President Roosevelt as a strategic COG candidate with respect to the government_of_US_1943; Test whether President Roosevelt is a viable strategic COG candidate. Composed solutions: President Roosevelt is a strategic COG candidate with respect to the government_of_US_1943. President Roosevelt is a strategic COG candidate that can be eliminated because it does not have all the necessary critical capabilities.

47 Task: Test whether President Roosevelt is a viable strategic COG candidate. Question: Which are the critical capabilities that President Roosevelt should have to be a COG candidate? Answer: The necessary critical capabilities are: be protected, stay informed, communicate, be influential, be a driving force, have support and be irreplaceable. Subtasks and their results: Test whether President Roosevelt has the critical capability to be protected → President Roosevelt has the critical capability to be protected: he is protected by US Service 1943, which has no significant vulnerability. Test whether President Roosevelt has the critical capability to stay informed → President Roosevelt has the critical capability to stay informed: he receives essential intelligence from intelligence agencies which have no significant vulnerability. Test whether President Roosevelt has the critical capability to communicate → President Roosevelt has the critical capability to communicate through executive orders, through military orders, and through the Mass Media of US 1943; these communication means have no significant vulnerabilities. Test whether President Roosevelt has the critical capability to be influential → President Roosevelt has the critical capability to be influential because he is the head of the government of US 1943, the commander in chief of the military of US 1943, and is a trusted leader who can use the Mass Media of US 1943; these influence means have no significant vulnerabilities. Test whether President Roosevelt has the critical capability to have support → President Roosevelt has the critical capability to have support because he is the head of a democratic government with a history of good decisions, a trusted commander in chief of the military, and the people are willing to make sacrifices for the unconditional surrender of the European Axis; the means to secure continuous support have no significant vulnerability. Test whether President Roosevelt has the critical capability to be a driving force → President Roosevelt has the critical capability to be a driving force: the main reason for President Roosevelt to pursue the goal of unconditional surrender of the European Axis is "preventing separate peace by the members of the Allied Forces," and "the western democratic values" provide President Roosevelt with determination to persevere in this goal; there is no significant vulnerability in the reason and determination. Test whether President Roosevelt has the critical capability to be irreplaceable → President Roosevelt does not have the critical capability to be irreplaceable: US 1943 would maintain the goal of unconditional surrender of the European Axis irrespective of its leader, because "the goal was established and the country was committed to it"; there is no significant vulnerability resulting from the replacement of President Roosevelt. Question: Does President Roosevelt have all the necessary critical capabilities? Answer: No. Solution: President Roosevelt is a strategic COG candidate that can be eliminated.

48 Task: Test whether President Roosevelt has the critical capability to be influential. Question: Which are the critical requirements for President Roosevelt to be influential? Answer: President Roosevelt needs means to influence the government, means to influence the military and means to influence the people. Subtasks: Test whether President Roosevelt has means to influence the government; Test whether President Roosevelt has means to influence the military; Test whether President Roosevelt has means to influence the people. Question: Does President Roosevelt satisfy the critical requirements to be influential? Answer: Yes. President Roosevelt can influence the government of US 1943 because he is the head of the government of US 1943; these influence means have no significant vulnerability. President Roosevelt can influence the military of US 1943 because he is the commander in chief of the military of US 1943; these influence means have no significant vulnerability. The influence of President Roosevelt over the people of US 1943, as a trusted leader using the Mass Media of US 1943, has no significant vulnerability. Composed solution: President Roosevelt has the critical capability to be influential because he is the head of the government of US 1943, the commander in chief of the military of US 1943, and is a trusted leader who can use the Mass Media of US 1943; these influence means have no significant vulnerabilities.

49 Task: Test whether President Roosevelt has means to influence the government. Question: What is a means for President Roosevelt to influence the government of US 1943? Answer: President Roosevelt is the head of the government of US 1943. Reduced task: Test whether the influence of President Roosevelt over the government of US 1943, as the head of the government of US 1943, has any significant vulnerability. Question: Does the influence of President Roosevelt over the government of US 1943 have any significant vulnerability? Answer: No. Solutions: The influence of President Roosevelt over the government of US 1943, as the head of the government of US 1943, has no significant vulnerability. President Roosevelt can influence the government of US 1943 because he is the head of the government of US 1943; the influence means have no significant vulnerability.

50 Task: Test whether President Roosevelt has means to influence the military. Question: What is a means for President Roosevelt to influence the military of US 1943? Answer: President Roosevelt is the commander in chief of the military of US 1943. Reduced task: Test whether the influence of President Roosevelt over the military of US 1943, as the commander in chief of the military of US 1943, has any significant vulnerability. Question: Does the influence of President Roosevelt over the military of US 1943 have any significant vulnerability? Answer: No. Solution: The influence of President Roosevelt over the military of US 1943, as the commander in chief of the military of US 1943, has no significant vulnerability.

51 Task: Test whether President Roosevelt has means to influence the people. Question: What is a means for President Roosevelt to influence the people of US 1943? Answer: President Roosevelt is trusted by the people of US 1943 and can use the Mass Media of US 1943 to influence them. Reduced task: Test whether the influence of President Roosevelt over the people of US 1943, as a trusted leader using the Mass Media of US 1943, has any significant vulnerability. Question: Does the influence of President Roosevelt over the people of US 1943 have any significant vulnerability? Answer: No. Solution: The influence of President Roosevelt over the people of US 1943, as a trusted leader using the Mass Media of US 1943, has no significant vulnerability.

52 Overview: Learning and problem solving agents: Disciple; An agent for center of gravity analysis; Modeling of problem solving through task reduction; Knowledge base: object ontology + rules; Rule-based problem solving; Control of the problem solving process; Control of modeling, learning and problem solving; Multistrategy rule learning; Multistrategy rule refinement.

53 The structure of the knowledge base. Knowledge Base = Object ontology + Task reduction rules. The object ontology is a hierarchical description of the objects from the domain, specifying their properties and relationships. It includes both descriptions of types of objects (called concepts) and descriptions of specific objects (called instances). The task reduction rules specify generic problem solving steps for reducing complex tasks to simpler tasks. They are described using the objects from the ontology.
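
The two parts of the knowledge base can be pictured with a minimal data-structure sketch (the field names are assumptions, not Disciple's internal format):

    # Data-structure sketch of Knowledge Base = object ontology + task
    # reduction rules; all field names are assumed for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Concept:                    # also used for specific instances
        name: str
        parents: list = field(default_factory=list)   # subconcept-of / instance-of
        features: dict = field(default_factory=dict)  # properties and relationships

    @dataclass
    class TaskReductionRule:
        if_task: str                  # task pattern with variables such as ?O1
        question: str                 # the question attached to the IF task
        answer: str                   # answer pattern that justifies the reduction
        condition: list               # ontology constraints on the variables
        then_tasks: list              # the simpler tasks

    @dataclass
    class KnowledgeBase:
        ontology: dict = field(default_factory=dict)  # name -> Concept
        rules: list = field(default_factory=list)     # TaskReductionRule list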

54 Fragment of the object ontology. [Diagram: a generalization hierarchy of governing bodies. governing_body subsumes state_government and group_governing_body, as well as ad_hoc_governing_body, established_governing_body and other_type_of_governing_body. Under state_government: totalitarian_government (with military_dictatorship, police_state, religious_dictatorship, fascist_state and communist_dictatorship), democratic_government (with representative_democracy, parliamentary_democracy and theocratic_democracy), theocratic_government, feudal_god_king_government, monarchy and other_state_government. Under group_governing_body: dictator, deity_figure, chief_and_tribal_council, autocratic_leader, democratic_council_or_board and other_group_governing_body. Instances include government_of_US_1943, government_of_Britain_1943, government_of_Germany_1943, government_of_Italy_1943 and government_of_USSR_1943.]

55 The instances and the concepts are organized into generalization hierarchies like this hierarchy of governing bodies. Notice, however, that the generalization hierarchies are not always as strict as this one, where each concept is a subconcept of only one concept. For instance, the concept "strategic_raw_material" is both a subconcept of "raw_material" and a subconcept of "strategically_essential_resource_or_infrastructure_element".
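
Because a concept may therefore have several parents, a subsumption test must follow all subconcept-of links; a small sketch (the dictionary encoding is an assumption):

    # Sketch of a transitive is-a test over a hierarchy with multiple parents.

    def is_subconcept(ontology, concept, ancestor):
        """True if `ancestor` is reachable from `concept` via subconcept_of."""
        if concept == ancestor:
            return True
        return any(is_subconcept(ontology, parent, ancestor)
                   for parent in ontology.get(concept, {}).get("subconcept_of", []))

    ontology = {
        "strategic_raw_material": {"subconcept_of": [
            "raw_material",
            "strategically_essential_resource_or_infrastructure_element"]},
        "raw_material": {"subconcept_of": ["resource_or_infrastructure_element"]},
    }
    assert is_subconcept(ontology, "strategic_raw_material",
                         "resource_or_infrastructure_element")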

56 Fragment of the feature ontology. has_as_controlling_leader (D: agent, R: person), with subfeatures: has_as_monarch (D: governing_body, R: person); has_as_god_king (D: governing_body, R: person); has_as_military_leader (D: governing_body, R: person); has_as_political_leader (D: governing_body, R: person); has_as_religious_leader (D: governing_body, R: person); has_as_commander_in_chief (D: force, R: person); has_as_head_of_government (D: governing_body, R: person); has_as_head_of_state (D: governing_body, R: person). (D = domain, R = range.)

57 An object feature is itself defined as a subconcept of another object feature, as illustrated in the previous slide. Therefore, the object features are also hierarchically organized. Notice that if feature1 is a subconcept of feature2, then the domain of feature1 should be less general than, or at most as general as, the domain of feature2. The same condition should hold between the ranges of the two features. For instance, "has_as_political_leader" is a subconcept of "has_as_controlling_leader". The domain of the first feature is "governing_body", which is less general than the domain of the second feature, which is "agent". Also, the range of "has_as_political_leader" is the same as the range of "has_as_controlling_leader".
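
This constraint can be checked mechanically; a sketch under the assumption that governing_body is a direct subconcept of agent, with an illustrative feature-table encoding:

    # Sketch: a subfeature's domain and range must each be subconcepts of
    # (or equal to) its parent feature's domain and range.

    def is_subconcept(ontology, concept, ancestor):
        if concept == ancestor:
            return True
        return any(is_subconcept(ontology, p, ancestor)
                   for p in ontology.get(concept, {}).get("subconcept_of", []))

    ontology = {"governing_body": {"subconcept_of": ["agent"]}}  # assumed link
    features = {
        "has_as_controlling_leader": {"domain": "agent", "range": "person"},
        "has_as_political_leader": {"domain": "governing_body", "range": "person",
                                    "subfeature_of": "has_as_controlling_leader"},
    }

    def feature_is_consistent(name):
        parent = features[name].get("subfeature_of")
        if parent is None:
            return True
        return (is_subconcept(ontology, features[name]["domain"],
                              features[parent]["domain"]) and
                is_subconcept(ontology, features[name]["range"],
                              features[parent]["range"]))

    assert feature_is_consistent("has_as_political_leader")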

58 Sample task. A task is a representation of anything that an agent may be asked to perform. General task – informal structure: Identify and test a strategic COG candidate corresponding to the ?O1. Formal structure: Identify and test a strategic COG candidate corresponding to the economy of a force; The economy is ?O1; Condition: ?O1 is type_of_economy. Instantiated task – informal structure: Identify and test a strategic COG candidate corresponding to the economy_of_US_1943. Formal structure: Identify and test a strategic COG candidate corresponding to the economy of a force; The economy is economy_of_US_1943.

59 Exercise. Consider the reduction: Identify and test a strategic COG candidate for Sicily_1943. Question: What kind of scenario is Sicily_1943? Answer: Sicily_1943 is a war scenario. Reduced task: Identify and test a strategic COG candidate for Sicily_1943 which is a war scenario. How could the agent generate plausible formalizations?

60 A task is a representation of anything that an agent may be asked to perform. The informal structure of a task is a phrase in free-form English with variables. The formal structure of a task contains a task name and several task features. The task name is an abstract English phrase with no variables. The task features are also phrases, but they may contain variables, such as ?O1. The formal structure of the task also contains a condition that restricts the values that the variables can take. For example, in the case of the task from this slide, ?O1 has to be an instance of the concept type_of_economy. Replacing the variables with objects that satisfy the condition leads to the creation of specific tasks, as illustrated at the bottom of this slide.
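
A sketch of this task structure and of the instantiation step (the Python encoding, not the task content, is an assumption):

    # Sketch of a general task and its instantiation.

    general_task = {
        "name": ("Identify and test a strategic COG candidate corresponding "
                 "to the economy of a force"),
        "features": {"The economy is": "?O1"},
        "condition": [("?O1", "instance_of", "type_of_economy")],
    }

    def instantiate(task, bindings):
        """Replace the variables with objects that satisfy the condition."""
        return {"name": task["name"],
                "features": {feature: bindings.get(value, value)
                             for feature, value in task["features"].items()}}

    specific_task = instantiate(general_task, {"?O1": "economy_of_US_1943"})
    # -> features: {"The economy is": "economy_of_US_1943"}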

61 Sample task reduction rule. A rule is an ontology-based representation of an elementary problem solving step. Informal structure of the rule: IF: Identify and test a strategic COG candidate corresponding to the ?O1 which is an industrial_economy. Question: Who or what is a strategically critical element with respect to the ?O1? Answer: ?O2, because it is an essential generator of war_materiel for ?O3 from the strategic perspective. THEN: Identify ?O2 as a COG candidate with respect to the ?O1; Test ?O2 which is a strategic COG candidate with respect to the ?O1. Formal structure of the rule: IF: Identify and test a strategic COG candidate corresponding to the economy of a force which is an industrial economy; The industrial economy is ?O1. Condition: ?O1 is industrial_economy; ?O2 is industrial_capacity, generates_essential_war_materiel_from_the_strategic_perspective_of ?O3; ?O3 is multi_state_force, has_as_member ?O4; ?O4 is force, has_as_economy ?O1, has_as_industrial_factor ?O2. THEN: Identify a strategically critical element as a COG candidate with respect to an industrial economy; The strategically critical element is ?O2; The industrial economy is ?O1. Test a strategically critical element which is a strategic COG candidate with respect to an industrial economy; The strategically critical element is ?O2; The industrial economy is ?O1.

62 A rule has an informal structure and a formal structure. The informal structure should be read as follows: IF I have to perform the task "Identify and test a strategic COG candidate corresponding to the ?O1 which is an industrial economy", and the question "Who or what is a strategically critical element with respect to the ?O1?" has the answer "?O2, because it is an essential generator of war materiel for ?O3 from the strategic perspective", THEN I should perform the following two tasks: "Identify ?O2 as a COG candidate with respect to the ?O1" and "Test ?O2 which is a strategic COG candidate with respect to the ?O1". The formal structure should be read as follows: IF I have to perform the task "Identify and test a strategic COG candidate corresponding to the economy of a force which is an industrial economy, where the industrial economy is ?O1", and the following condition is satisfied: ?O1 is an industrial economy, and ?O2 is an industrial capacity that generates essential war materiel from the strategic perspective of ?O3, and ?O3 is a multi-state force that has ?O4 as one of its members, and ?O4 is a force that has as economy ?O1 and as industrial factor ?O2, THEN I should perform the following two tasks: "Identify a strategically critical element as a COG candidate with respect to an industrial economy, where the strategically critical element is ?O2 and the industrial economy is ?O1" and "Test a strategically critical element which is a strategic COG candidate with respect to an industrial economy, where the strategically critical element is ?O2 and the industrial economy is ?O1". Notice that the elements of the condition are concepts and relationships from the object ontology.
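
Encoded as data, the formal structure of this rule looks as follows (a plain-dictionary sketch; the encoding, not the rule content, is an assumption):

    # The rule from the slide as a plain data structure.
    rule = {
        "if_task": ("Identify and test a strategic COG candidate corresponding "
                    "to the economy of a force which is an industrial economy; "
                    "the industrial economy is ?O1"),
        "question": ("Who or what is a strategically critical element with "
                     "respect to the ?O1?"),
        "answer": ("?O2 because it is an essential generator of war_materiel "
                   "for ?O3 from the strategic perspective"),
        "condition": [
            ("?O1", "is", "industrial_economy"),
            ("?O2", "is", "industrial_capacity"),
            ("?O2", "generates_essential_war_materiel_from_the_strategic_perspective_of", "?O3"),
            ("?O3", "is", "multi_state_force"),
            ("?O3", "has_as_member", "?O4"),
            ("?O4", "is", "force"),
            ("?O4", "has_as_economy", "?O1"),
            ("?O4", "has_as_industrial_factor", "?O2"),
        ],
        "then_tasks": [
            "Identify ?O2 as a COG candidate with respect to the ?O1",
            "Test ?O2 which is a strategic COG candidate with respect to the ?O1",
        ],
    }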

63 Overview: Learning and problem solving agents: Disciple; An agent for center of gravity analysis; Modeling of problem solving through task reduction; Knowledge base: object ontology + rules; Rule-based problem solving; Control of the problem solving process; Control of modeling, learning and problem solving; Multistrategy rule learning; Multistrategy rule refinement.

64 Illustration of rule-based task reduction. The task to be performed is: Identify and test a strategic COG candidate corresponding to the economy of a force which is an industrial economy; The industrial economy is economy_of_US_1943. Matching this task against the IF task of the rule from slide 61 produces the binding ?O1 ← economy_of_US_1943. [Diagram: the rule with its condition, shown next to the partially instantiated condition network: economy_of_US_1943 is an instance of industrial_economy; ?O2 is an instance of industrial_capacity that generates_essential_war_materiel_from_the_strategic_perspective_of ?O3, an instance of multi_state_force; ?O3 has_as_member ?O4, an instance of force that has_as_economy economy_of_US_1943 and has_as_industrial_factor ?O2.]

65 Let us consider that the current problem solving task is: Identify and test a strategic COG candidate corresponding to the economy of a force which is an industrial economy; The industrial economy is economy_of_US_1943. The agent will look in its knowledge base for a rule that has this type of task in the IF part. Such a rule is shown in the right-hand side of the slide. As one can see, the IF task becomes identical with the task to be performed if ?O1 is replaced with economy_of_US_1943. Next, the agent has to check that the condition of the rule is satisfied for this value of ?O1. The left-hand side of the slide shows what conditions need to be satisfied by economy_of_US_1943, ?O2, ?O3 and ?O4. The condition is satisfied if there are instances of ?O2, ?O3 and ?O4 in the object ontology that satisfy all the relationships specified in the left-hand side of the slide.

66 Matchings. [Diagram: the partially instantiated rule condition on the left is matched against the object ontology fragment on the right. US_1943 is an instance of single_state_force, a subconcept of single_member_force and, transitively, of force; it has_as_economy economy_of_US_1943 and has_as_industrial_factor industrial_capacity_of_US_1943, an instance of industrial_capacity that generates_essential_war_materiel_from_the_strategic_perspective_of Allied_forces_1943, an instance of equal_partner_multi_state_alliance, a subconcept of multi_state_alliance and multi_state_force; Allied_forces_1943 has_as_member US_1943.] The resulting bindings are: ?O2 ← industrial_capacity_of_US_1943; ?O3 ← Allied_forces_1943; ?O4 ← US_1943.

67 The partially instantiated condition of the rule, shown in the left-hand side of the slide, is matched successfully with the object ontology fragment shown in the right-hand side of the slide. ?O4 matches US_1943 because both have the same features and the corresponding values of these features also match. Both ?O4 and US_1943 are forces. Indeed, US_1943 is an instance of a single-state force, which is a subconcept of a single-member force, which is a subconcept of a force. Therefore, using the transitivity rule discussed above, US_1943 is a force. Both ?O4 and US_1943 have the feature has_as_economy with the value economy_of_US_1943. Finally, both ?O4 and US_1943 have the feature has_as_industrial_factor, with the corresponding values ?O2 and industrial_capacity_of_US_1943, respectively. Now one has to show that ?O2 and industrial_capacity_of_US_1943 match. ?O2 is an industrial capacity, and industrial_capacity_of_US_1943 is an industrial capacity. Both ?O2 and industrial_capacity_of_US_1943 have the feature generates_essential_war_materiel_from_the_strategic_perspective_of, with the values ?O3 and Allied_forces_1943, respectively. Therefore one has to show that ?O3 and Allied_forces_1943 match. ?O3 is a multi_state_force. Allied_forces_1943 is an equal_partner_multi_state_alliance, which is a multi_state_alliance, which is a multi_state_force. Therefore Allied_forces_1943 is also a multi_state_force. Finally, both ?O3 and Allied_forces_1943 have the feature has_as_member, with the values ?O4 and US_1943, respectively. Moreover, ?O4 and US_1943 have already matched. Therefore the entire matching was successful. As the result of this matching, the rule's variables are instantiated as follows: ?O2 ← industrial_capacity_of_US_1943; ?O3 ← Allied_forces_1943; ?O4 ← US_1943.
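
The matching process just described can be sketched as a small backtracking matcher over the ontology (the dictionary encoding and the algorithm are illustrative assumptions; Disciple's actual matcher is not shown in these slides):

    # Backtracking matcher sketch: find variable bindings under which every
    # condition triple holds in the object ontology. "is" follows one
    # instance_of link and then a chain of subconcept_of links.

    def isa(onto, entity, concept):
        t = onto.get(entity, {}).get("instance_of")
        while t is not None:
            if t == concept:
                return True
            t = onto.get(t, {}).get("subconcept_of")
        return False

    def holds(onto, subject, relation, obj):
        if relation == "is":
            return isa(onto, subject, obj)
        return obj in onto.get(subject, {}).get(relation, [])

    def candidates(onto, term, bindings):
        if term in bindings:
            return [bindings[term]]
        return list(onto) if term.startswith("?") else [term]

    def match(onto, condition, bindings):
        if not condition:
            return bindings                      # every triple satisfied
        (s, rel, o), rest = condition[0], condition[1:]
        for sv in candidates(onto, s, bindings):
            for ov in candidates(onto, o, bindings):
                if holds(onto, sv, rel, ov):
                    b = dict(bindings)
                    if s.startswith("?"): b[s] = sv
                    if o.startswith("?"): b[o] = ov
                    result = match(onto, rest, b)
                    if result is not None:
                        return result
        return None

    onto = {
        "economy_of_US_1943": {"instance_of": "industrial_economy"},
        "industrial_capacity_of_US_1943": {
            "instance_of": "industrial_capacity",
            "generates_essential_war_materiel_from_the_strategic_perspective_of":
                ["Allied_forces_1943"]},
        "Allied_forces_1943": {"instance_of": "equal_partner_multi_state_alliance",
                               "has_as_member": ["US_1943"]},
        "US_1943": {"instance_of": "single_state_force",
                    "has_as_economy": ["economy_of_US_1943"],
                    "has_as_industrial_factor": ["industrial_capacity_of_US_1943"]},
        "equal_partner_multi_state_alliance": {"subconcept_of": "multi_state_alliance"},
        "multi_state_alliance": {"subconcept_of": "multi_state_force"},
        "single_state_force": {"subconcept_of": "single_member_force"},
        "single_member_force": {"subconcept_of": "force"},
    }
    condition = [
        ("?O1", "is", "industrial_economy"),
        ("?O2", "is", "industrial_capacity"),
        ("?O2", "generates_essential_war_materiel_from_the_strategic_perspective_of", "?O3"),
        ("?O3", "is", "multi_state_force"),
        ("?O3", "has_as_member", "?O4"),
        ("?O4", "is", "force"),
        ("?O4", "has_as_economy", "?O1"),
        ("?O4", "has_as_industrial_factor", "?O2"),
    ]
    print(match(onto, condition, {"?O1": "economy_of_US_1943"}))
    # {'?O1': 'economy_of_US_1943', '?O2': 'industrial_capacity_of_US_1943',
    #  '?O3': 'Allied_forces_1943', '?O4': 'US_1943'}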

68 [Diagram: the rule from slide 61 with its condition fully instantiated: ?O1 ← economy_of_US_1943, ?O2 ← industrial_capacity_of_US_1943, ?O3 ← Allied_forces_1943, ?O4 ← US_1943.] The resulting reductions are: Identify a strategically critical element as a COG candidate with respect to an industrial economy; The strategically critical element is industrial_capacity_of_US_1943; The industrial economy is economy_of_US_1943. Test a strategically critical element which is a strategic COG candidate with respect to an industrial economy; The strategically critical element is industrial_capacity_of_US_1943; The industrial economy is economy_of_US_1943.

69 The rule's condition is satisfied for the following instantiations of the variables: ?O1 ← economy_of_US_1943; ?O2 ← industrial_capacity_of_US_1943; ?O3 ← Allied_forces_1943; ?O4 ← US_1943. Therefore the IF task can be reduced to the following THEN tasks: Identify a strategically critical element as a COG candidate with respect to an industrial economy, where the strategically critical element is industrial_capacity_of_US_1943 and the industrial economy is economy_of_US_1943; and Test a strategically critical element which is a strategic COG candidate with respect to an industrial economy, where the strategically critical element is industrial_capacity_of_US_1943 and the industrial economy is economy_of_US_1943. Disciple uses the informal structure of this rule to generate the sentences to be shown to the user, as illustrated in the next slide.

70  2003, G.Tecuci, Learning Agents Laboratory 70 Who or what is a strategically critical element with respect to the economy_of_US_1943? industrial_capacity_of_US_1943 because it is an essential generator of war materiel for Allied_forces_1943 from the strategic perspective Generating the informal reduction Identify and test a strategic COG candidate corresponding to the economy_of_US_1943 which is an industrial_economy Identify industrial_capacity_of_US_1943 as a COG candidate with respect to the economy_of_US_1943 Test industrial_capacity_of_US_1943 which is a strategic COG candidate with respect to the economy_of_US_1943 ?O1  economy_of_US_1943 IF Identify and test a strategic COG candidate corresponding to the ?O1 which is an industrial_economy Question Who or what is a strategically critical element with respect to the ?O1 ? Answer ?O2 because it is an essential generator of war_materiel for ?O3 from the strategic perspective THEN Identify ?O2 as a COG candidate with respect to the ?O1 Test ?O2 which is a strategic COG candidate with respect to the ?O1 ?O2  industrial_capacity_of_US_1943 ?O3  Allied_forces_1943 ?O4  US_1943 ?O1  economy_of_US_1943

71  2003, G.Tecuci, Learning Agents Laboratory 71 Rule_1 Rule_2 Who or what is a strategically critical element with respect to the economy_of_US_1943? industrial_capacity_of_US_1943 because it is an essential generator of war materiel for Allied_forces_1943 from the strategic perspective Identify and test a strategic COG candidate corresponding to the economy_of_US_1943 which is an industrial_economy Identify industrial_capacity_of_US_1943 as a COG candidate with respect to the economy_of_US_1943 Test industrial_capacity_of_US_1943 which is a strategic COG candidate with respect to the economy_of_US_1943 What is the type of economy_of_US_1943 ? industrial_economy Identify and test a strategic COG candidate corresponding to the economy_of_US_1943 Successive rule applications

72  2003, G.Tecuci, Learning Agents Laboratory 72 Task reduction rule with “Except when” conditions IF THEN … Condition Except when condition In addition to the regular rule condition that needs to be satisfied, a rule may contain one or several except when conditions that should not be satisfied for the rule to be applicable.

73  2003, G.Tecuci, Learning Agents Laboratory 73 Plausible version space rule IF Identify and test a strategic COG candidate corresponding to the economy of a force which is an industrial economy The industrial economy is ?O1 THEN Identify a strategically critical element as a COG candidate with respect to an industrial economy The strategically critical element is ?O2 The industrial economy is ?O1 Test a strategically critical element which is a strategic COG candidate with respect to an industrial economy The strategically critical element is ?O2 The industrial economy is ?O1 Plausible upper bound condition ?O1istype_of_economy ?O2iseconomic_factor generates_essential_war_materiel_from_the_strategic_perspective_of ?O3 ?O3is multi_state_force has_as_member ?O4 ?O4is force has_as_economy ?O1 has_as_industrial_factor ?O2 Plausible lower bound condition ?O1isindustrial_economy ?O2isindustrial_capacity generates_essential_war_materiel_from_the_strategic_perspective_of ?O3 ?O3is multi_state_alliance has_as_member ?O4 ?O4is single_state_force has_as_economy ?O1 has_as_industrial_factor ?O2

74  2003, G.Tecuci, Learning Agents Laboratory 74 A rule may be partially learned. In this case it will have two applicability conditions, a plausible upper bound condition that is likely to be more general than the exact condition, and a plausible lower bound condition, that is likely to be less general than the exact condition. The plausible upper bound condition allows the rule to be applicable in many analogous situations, but the result may not be correct. The plausible lower bound condition allows the rule to be applicable fewer situations but the result is very likely to be correct. The agent will apply this rule to solve new problems and its success or failure will be used to further refine the rule. In essence, the two conditions will converge toward one another (usually through the specialization of the plausible upper bound condition and the generalization of the plausible lower bound condition), both approaching the exact applicability condition of the rule. Rule refinement could lead to a complex task reduction rule, with additional Except- When conditions which should not be satisfied in order for the rule to be applicable.

75  2003, G.Tecuci, Learning Agents Laboratory 75 Overview Control of modeling, learning and problem solving Knowledge base: Object ontology + Rules Control of the problem solving process An agent for center of gravity analysis Rule-based problem solving Learning and problem solving agents: Disciple Modeling of problem solving through task reduction Multistrategy rule refinement Multistrategy rule learning

76  2003, G.Tecuci, Learning Agents Laboratory 76 The search space for problem solving Let us consider the problem solving task 'Pa‘ and let R1, R2, and R3 be the applicable rules which indicate the reduction of 'Pa' to ‘C(Pb,Pc)', to 'Pd', and to ‘C(Pe,Pf,Pg)', respectively. Therefore, to solve the problem 'Pa', one may either: - solve the problems 'Pb' and 'Pc', or - solve the problem 'Pd', or - solve the problems 'Pe', 'Pf' and 'Pg'. One may represent all these alternatives in the form of an AND/OR tree.

77  2003, G.Tecuci, Learning Agents Laboratory 77 The search space for problem solving (cont.) The node 'Pa' is called an OR node since for solving the problem 'Pa' it is enough to solve ‘C(Pb, Pc)' OR to solve 'Pd' OR to solve ‘C(Pe, Pf, Pg)'. The node ‘C(Pb, Pc)' is called an AND node since for solving it one must solve both 'Pb' AND 'Pc'. The AND/OR tree may be further developed by considering all the rules applicable to its leaves (Pb, Pc, Pd, Pe, Pf, Pg), building the entire search space for the problem 'Pa'. This space contains all the solutions to 'Pa'.

78  2003, G.Tecuci, Learning Agents Laboratory 78 Solution tree To find a solution one needs only to build enough of the tree to demonstrate that 'Pa' is solved. Such a tree is called a solution tree. A node is solved in one of the following cases: - it is a terminal node (a primitive task with known solution); - it is an AND node whose successors are solved; - it is an OR node which has at least one solved successor.

79  2003, G.Tecuci, Learning Agents Laboratory 79 Solution tree (cont.) solved Once the problem solver detects that a node is solved it sends this information to the ancestors of the node. When the node 'Pa' becomes solved, one has found a solution to 'Pa'.

80  2003, G.Tecuci, Learning Agents Laboratory 80 Solution tree (cont.) A node is unsolvable in one of the following cases: -it is a nonterminal node that has no successors (i.e. a nonprimitive problem to which no rule applies); -it is an AND node which has at least one unsolvable successor; -it is an OR node which has all the successors unsolvable.

81  2003, G.Tecuci, Learning Agents Laboratory 81 Solution tree (cont.) Once the problem solver detects that a node is unsolvable it sends this information to the ancestors of the node. If the node 'Pa' becomes unsolvable, then no solution to 'Pa' exists. solvedunsolvable

82  2003, G.Tecuci, Learning Agents Laboratory 82 General search strategies The presented method assumes an exhaustive search of the solution space. Usually, however, the real world problems are characterized by huge search spaces and one has to use heuristic methods in order to limit the search. What types of search control decisions can you identify? Attention focusing: What problem, among the leaves of the problem solving tree, to reduce next? Meta-rule: What rule, among the applicable ones, to use for reducing the current problem?

83  2003, G.Tecuci, Learning Agents Laboratory 83 Overview Control of modeling, learning and problem solving Knowledge base: Object ontology + Rules Control of the problem solving process An agent for center of gravity analysis Rule-based problem solving Learning and problem solving agents: Disciple Modeling of problem solving through task reduction Multistrategy rule refinement Multistrategy rule learning

84  2003, G.Tecuci, Learning Agents Laboratory 84 Use of Disciple at the US Army War College 589jw Military Applications of Artificial Intelligence course Students teach Disciple their COG analysis expertise, using sample scenarios (Iraq 2003, War on terror 2003, Arab-Israeli 1973) Students test the trained Disciple agent based on a new scenario (North Korea 2003) I think that a subject matter expert can use Disciple to build an agent, with limited assistance from a knowledge engineer Spring 2001 COG identification Spring 2002 COG identification and testing Spring 2003 COG testing based on critical capabilities Global evaluations of Disciple by officers during three experiments

85  2003, G.Tecuci, Learning Agents Laboratory 85 This viewgraph shows the use of Disciple in “589jw Military Applications of Artificial Intelligence” (the MAAI course) at the US Army War College. In this course the students teach personal Disciple agents their own expertise in Center of Gravity determination, and then evaluate, both the developed agents, and the development process. In the last three years, we have performed three unique experiments of agent training and knowledge base development by subject matter experts, with limited assistance from knowledge engineers, as part of three successive versions of the MAAI course). These courses have been attended by a total of 38 US and international officers from all the military services and the national reserve. In the 2001 experiment, subject matter experts have used historic scenarios with state actors (such as Okinawa 1943), to teach personal Disciple agents how to identify center of gravity candidates. In the 2002 experiment, subject matter experts have used historic scenarios and a hypothetical scenario with state actors, to teach personal Disciple agents how to identify center of gravity candidates and to eliminate those candidates that do not pass certain tests. In the 2003 experiment, subject matter experts have used historic, current and hypothetical scenarios, with both state and non-state actors, to teach personal Disciple agents how to test center of gravity candidates based on the concepts of critical capabilities, critical requirements and critical vulnerabilities. At the end of each of the three agent training experiments (performed in Spring 2001, Spring 2002, and Spring 2003), the subject matter experts have been asked to express their disagreement or agreement with the following statement: “I think that a subject matter expert can use Disciple to build an agent, with limited assistance from a knowledge engineer.” Notice that each successive experiment addressed a more complex training task, but was also based on an improved version of Disciple which resulted in improved results. These results demonstrate that we have made a significant progress in developing the technology that will allow subject matter experts to build their own intelligent assistants.

86  2003, G.Tecuci, Learning Agents Laboratory 86 Control of modeling, learning and problem solving Input Task Generated Reduction Mixed-Initiative Problem Solving Ontology + Rules Reject Reduction Accept Reduction New Reduction Rule Refinement Task Refinement Rule Refinement Modeling Formalization Learning Solution

87  2003, G.Tecuci, Learning Agents Laboratory 87 This slide shows the interaction between the expert and the agent when the agent has already learned some rules. 1.This interaction is governed by the mixed-initiative problem solver. 2.The expert formulates the initial task. 3.Then the agent attempts to reduce this task by using the previously learned rules. Let us assume that the agent succeeded to propose a reduction to the current task. 4.The expert has to accept it if it is correct, or he has to reject it, if it is incorrect. 5.If the reduction proposed by the agent is accepted by the expert, the rule that generated it and its component tasks are generalized. Then the process resumes, the agent attempting to reduce the new task. 6.If the reduction proposed by the agent is rejected, then the agent will have to specialize the rule, and possibly its component tasks. 7.In this case the expert will have to indicate the correct reduction, going through the normal steps of modeling, formalization, and learning. Similarly, when the agent cannot propose a reduction of the current task, the expert will have to indicate it, again going through the steps of modeling, formalization and learning. The control of this interaction is done by the mixed-initiative problem solver tool.

88  2003, G.Tecuci, Learning Agents Laboratory 88 US_1943 Identify and test a strategic COG candidate for US_1943 Which is a member of Allied_Forces_1943? I need to Therefore I need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943 Provides an example 1 Rule_15 Learns 2 Rule_15 ? Applies Germany_1943 Identify and test a strategic COG candidate for Germany_1943 Which is a member of European_Axis_1943? Therefore I need to 3 Accepts the example 4 Rule_15 Refines 5 I need to Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943 …

89  2003, G.Tecuci, Learning Agents Laboratory 89 Initially the agent does not contain any task or rule in its knowledge base. The expert is teaching the agent to reduce the task: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943 To the task Identify and test a strategic COG candidate corresponding to a member of the US_1943 From this task the agent learns a plausible version space task reduction rule. Now the agent can use this rule in problem solving. Therefore, it will be able to propose to reduce the task Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943 To the task Identify and test a strategic COG candidate for Germany_1943 The expert accepts this reduction as correct, and the agent refines the rule. In the following we will show the internal reasoning of the agent that corresponds to this behavior.

90  2003, G.Tecuci, Learning Agents Laboratory 90 Overview Control of modeling, learning and problem solving Knowledge base: Object ontology + Rules Control of the problem solving process An agent for center of gravity analysis Rule-based problem solving Learning and problem solving agents: Disciple Modeling of problem solving through task reduction Multistrategy rule refinement Multistrategy rule learning

91  2003, G.Tecuci, Learning Agents Laboratory 91 The rule learning problem: definition GIVEN: an example of a problem solving episode; a knowledge base that includes an object ontology and a set of problem solving rules; an expert that understands why the given example is correct and may answer agent’s questions. DETERMINE: a plausible version space rule that is an analogy-based generalization of the specific problem solving episode.

92  2003, G.Tecuci, Learning Agents Laboratory 92 Input example US_1943 Identify and test a strategic COG candidate for US_1943 Which is a member of Allied_Forces_1943? I need to Therefore I need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943 This is an example of a problem solving step from which the agent will learn a general problem solving rule.

93  2003, G.Tecuci, Learning Agents Laboratory 93 IF Identify and test a strategic COG candidate corresponding to a member of a force The force is ?O1 THEN Identify and test a strategic COG candidate for a force The force is ?O2 Plausible Upper Bound Condition ?O1ismulti_member_force has_as_member ?O2 ?O2 isforce Plausible Lower Bound Condition ?O1isequal_partners_multi_state_alliance has_as_member ?O2 ?O2issingle_state_force explanation ?O1 has_as_member ?O2 Learned PVS rule IF Identify and test a strategic COG candidate corresponding to a member of the ?O1 Question Which is a member of ?O1 ? Answer ?O2 THEN Identify and test a strategic COG candidate for ?O2 INFORMAL STRUCTURE OF THE RULE FORMAL STRUCTURE OF THE RULE

94  2003, G.Tecuci, Learning Agents Laboratory 94 This is the rule that is learned from the input example. It has both a formal structure (used for formal reasoning), and an informal structure (used to communicate more naturally with the user). Let us consider the formal structure of the rule. This is an IF-THEN structure that specifies the condition under which the task from the IF part can be reduced to the task from the THEN part. This rule, however, is only partially learned. Indeed, instead of a single applicability condition, it has two conditions: 1) a plausible upper bound condition which is more general than the exact (but not yet known) condition, and 2) a plausible lower bound condition which is less general than the exact condition. Completely learning the rule means learning an exact condition. However, for now we will show how the agent learns this rule from the input example shown on a previous slide. The basic steps of the learning method are those from the next side.

95  2003, G.Tecuci, Learning Agents Laboratory 95 Basic steps of the rule learning method 3. Generalize the example and the explanation into a plausible version space rule. 1. Formalize and learn the tasks 2. Find a formal explanation of why the example is correct. This explanation is the best possible approximation of the question and the answer, in the object ontology.

96  2003, G.Tecuci, Learning Agents Laboratory 96 1. Formalize the tasks Identify and test a strategic COG candidate for US_1943 I need to Therefore I need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943 Identify and test a strategic COG candidate for a force The force is US_1943 I need to Therefore I need to Identify and test a strategic COG candidate corresponding to a member of a force The force is Allied_Forces_1943

97  2003, G.Tecuci, Learning Agents Laboratory 97 Because the tasks from the modeling are in unrestricted English Disciple cannot reason with them. We need to formalize these tasks. For each task we need to define an abstract phrase that indicates what this task is about (the task name), and a list of specific phrases that give all the details about the task (the task features). The task name should not contain any instance (such as Allied_Forces_1943). All these instances should appear in the task features. In general, the task name may be obtained from the English expression in the left hand side by simply replacing each specific object with a more abstract concept. Then we will add a corresponding task feature that specifies the value for this abstract concept.

98  2003, G.Tecuci, Learning Agents Laboratory 98 Task learning Plausible upper bound condition ?O1 is force Plausible lower bound condition ?O1 is single_state_force object force opposing_force Single_state_force multi_state_force Allied_Forces_1943 US_1943 has_as_member instance_of subconcept_of instance_of multi_state_alliance equal_partners_ multi_state_ alliance subconcept_of FORMAL STRUCTURE OF THE TASK Identify and test a strategic COG candidate for ?O1 INFORMAL STRUCTURE OF THE TASK Identify and test a strategic COG candidate for US_1943 Identify and test a strategic COG candidate for a force The force is US_1943 Identify and test a strategic COG candidate for a force The force is ?O1 single_member_force multi_member_force subconcept_of

99  2003, G.Tecuci, Learning Agents Laboratory 99 The top part of this slide shows the English expression and the formalized expression of a specific task. From the English expression of the specific task the agent learns the informal structure of the general task by replacing the specific instance US_1943, with the variable ?O1. From the formalized expression of the specific task, the agent learns the formal structure of the general task. The formal structure also specifies the conditions that ?O1 should satisfy. However, the agent cannot formulate the exact condition, but only two bounds for the exact condition that will have to be learned. The plausible lower bound condition is more restrictive, allowing ?O1 to only be a single-state force. This condition is obtained by replacing US_1943 with its most specific generalization in the object ontology. The plausible upper bound condition is less restrictive. ?O1 could be any force. This condition is obtained by replacing US_1943 with the most general sub-concept of which is more general than US_1943. The plausible upper bound condition allows the agent to generate more tasks, because now ?O1 can be replaced with any instance of force. However, there is no guarantee that the generated task is a correct expression. The agent will continue to improve the learned task, generalizing the plausible lower bound condition and specializing the plausible upper bound condition until they become identical and each object that satisfies the obtained condition leads to a correct task expression.

100  2003, G.Tecuci, Learning Agents Laboratory 100 2. Find an explanation of why the example is correct US_1943 has_as_member Allied_Forces_1943 The explanation is the best possible approximation of the question and the answer, in the object ontology. US_1943 Identify and test a strategic COG candidate for US_1943 Which is a member of Allied_Forces_1943? I need to Therefore I need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943

101  2003, G.Tecuci, Learning Agents Laboratory 101 The expert has defined the example during the modeling process. During the task formalization process, the expert and the agent have collaborated to formalize the tasks. Now the expert and the agent have to collaborate to also formalize the question and the answer. This formalization is the explanation from the bottom of this slide. It consists of a relation between two elements from the agent's ontology: “Allied_Forces_1943 has_as_member US_1943” It states, in Disciple’s language, that US_1943 is a member of Allied_Forces_1943. An expert can understand such formal expressions because they actually correspond to his own explanations. However, he cannot be expected to be able to define them because he is not a knowledge engineer. For one thing, he would need to use the formal language of the agent. But this would not be enough. He would also need to know the names of the potentially many thousands of concepts and features from the agent’s ontology (such as “has_as_member”). While defining the formal explanation of this task reduction step is beyond the individual capabilities of the expert and the agent, it is not beyond their joint capabilities. Finding such explanation pieces is a mixed-initiative process involving the expert and the agent. In essence, the agent will use analogical reasoning and help from the expert to identify and propose a set of plausible explanation pieces from which the expert will have to select the correct ones. Once the expert is satisfied with the identified explanation pieces, the agent will generate a general rule.

102  2003, G.Tecuci, Learning Agents Laboratory 102 3. Generate the PVS rule Condition ?O1 is Allied_Forces_1943 has_as_member ?O2 ?O2 is US_1943 Most general generalization IF Identify and test a strategic COG candidate corresponding to a member of a force The force is ?O1 THEN Identify and test a strategic COG candidate for a force The force is ?O2 Plausible Upper Bound Condition ?O1ismulti_member_force has_as_member ?O2 ?O2 isforce Plausible Lower Bound Condition ?O1isequal_partners_multi_state_alliance has_as_member ?O2 ?O2issingle_state_force explanation ?O1 has_as_member ?O2 Most specific generalization US_1943 has_as_member Allied_Forces_1943 Rewrite as

103  2003, G.Tecuci, Learning Agents Laboratory 103 Notice that the explanation is first re-written as a task condition, and then two generalizations of this condition are created: a most conservative one (the plausible lower bound condition) and a most aggressive one (the plausible upper bound condition). The plausible lower bound is the minimal generalization of the condition from the left hand side of the slide. Similarly, the most general generalization is the plausible upper bound.

104  2003, G.Tecuci, Learning Agents Laboratory 104 Analogical reasoning similar example explains? similar Identify and test a strategic COG candidate for a force The force is Germany_1943 I need to Therefore I need to Identify and test a strategic COG candidate corresponding to a member of a force The force is European_Axis_1943 initial example explanation explains Identify and test a strategic COG candidate for a force The force is US_1943 I need to Therefore I need to Identify and test a strategic COG candidate corresponding to a member of a force The force is Allied_Forces_1943 US_1943 has_as_ member Allied_Forces_1943 similar explanation less general than Analogy criterion less general than ?O2 has_as_ member ?O1 forcemulti_member_force instance_of Germany_1943 has_as_ member European_Axis_1943 similar

105  2003, G.Tecuci, Learning Agents Laboratory 105 The agent uses analogical reasoning to generalize the example and its explanation into a plausible version space rule. This slide provides a justification for the generalization procedure used by the agent. Let us consider that the expert has provided to the agent the task reduction example from the bottom left of this slide. This reduction is correct because “Allied_Forces_1943 has_as_member US_1943”. Now let us consider the European_Axis_1943 which has as member Germany_1943. Using the same logic as above, one can create the task reduction example from the bottom right of the slide. This is a type of analogical reasoning that the agent performs. The explanation from the left hand side of this slide explains the task reduction from the left hand side. This explanation is similar with the explanation from the right hand side of this slide (they have the same structure, being both less general than the analogy criterion from the top of this slide). Therefore one could expect that this explanation from the right hand side of the slide would explain an example that would be similar with the initial example. This example is the one from the right hand side of the slide. To summarize: The expert provided the example from the left hand side of this slide and helped the agent to find its explanation. Using analogical reasoning the agent can perform by itself the reasoning from the bottom right hand side of the slide.

106  2003, G.Tecuci, Learning Agents Laboratory 106 Generalization by analogy Any value of ?O1 should be an instance of: DOMAIN(has_as_member)  RANGE(The force is) = multi_member_force  force = multi_member_force Any value of ?O2 should be an instance of: RANGE(has_as_member)  RANGE(The force is) = force  force = force Knowledge-base constraints on the generalization: initial example explains Identify and test a strategic COG candidate for a force The force is US_1943 I need to Therefore I need to Identify and test a strategic COG candidate corresponding to a member of a force The force is Allied_Forces_1943 US_1943 has_as_ member Allied_Forces_1943 generalization explains ?O2 has_as_ member ?O1 forcemulti_member_force instance_of Identify and test a strategic COG candidate for a force The force is ?O2 I need to Therefore I need to Identify and test a strategic COG candidate corresponding to a member of a force The force is ?O1

107  2003, G.Tecuci, Learning Agents Laboratory 107 Notice that in the previous illustration we could have used any other forces ?O1 and ?O2 instead of European_Axis_1943 and Germany_1943. As long as ?O1 has as member ?O2, the agent would hypothesize that in order to identify and test a strategic COG for ?O1 one could identify and test a strategic COG for ?O2. The agent uses various constraints from the knowledge base to restrict the values that the variables ?O1 and ?O2 could take. For instance, ?O1 should have the feature “has_as_member” and the domain of this feature (i.e. the set of objects that may have this feature) is multi_member_force. Therefore ?O1 should be a multi_member_force. Also, ?O1 is the value of the task feature “The force is” the range of which is “force”. Therefore ?O1 should also be a force. From these two restrictions, we conclude that ?O1 should be a multi_member_force. Using this kind of reasoning, the agent generalizes the example from the left hand side of this slide to the expression from the right hand side of this slide.

108  2003, G.Tecuci, Learning Agents Laboratory 108 Overview Control of modeling, learning and problem solving Knowledge base: Object ontology + Rules Control of the problem solving process An agent for center of gravity analysis Rule-based problem solving Learning and problem solving agents: Disciple Modeling of problem solving through task reduction Multistrategy rule refinement Multistrategy rule learning

109  2003, G.Tecuci, Learning Agents Laboratory 109 4. The rule refinement problem (definition) GIVEN: a plausible version space rule; a positive or a negative example of the rule (i.e. a correct or an incorrect problem solving episode); a knowledge base that includes an object ontology and a set of problem solving rules; an expert that understands why the example is positive or negative, and can answer agent’s questions. DETERMINE: an improved rule that covers the example if it is positive, or does not cover the example if it is negative; an extended object ontology (if needed for rule refinement).

110  2003, G.Tecuci, Learning Agents Laboratory 110 Version space rule learning and refinement UB + The agent learns a rule with a very specific lower bound condition (LB) and a very general upper bound condition (UB). _ + + + + LB UB LB _ + + UB=LB _ _ + + … Let E2 be a new task reduction generated by the agent and accepted as correct by the expert. Then the agent generalizes LB as little as possible to cover it. Let E3 be a new task reduction generated by the agent which is rejected by the expert. Then the agent specialize UB as little as possible to uncover it and to remain more general than LB. After several iterations of this process LB may become identical with UB and a rule with an exact condition is learned. Let E1 be the first task reduction from which the rule is learned. E1 E2 E3

111  2003, G.Tecuci, Learning Agents Laboratory 111 Rule refinement with a positive example Condition satisfied by positive example ?O1 is European_Axis_1943 has_as_member ?O3 ?O2 is Germany_1943 less general than Positive example that satisfies the upper bound explanation European_Axis_1943 has_as_member Germany_1943 IF Identify and test a strategic COG candidate corresponding to a member of a force The force is ?O1 THEN Identify and test a strategic COG candidate for a force The force is ?O2 Plausible Upper Bound Condition ?O1ismulti_member_force has_as_member ?O2 ?O2 isforce Plausible Lower Bound Condition ?O1isequal_partners_multi_state_alliance has_as_member ?O2 ?O2issingle_state_force explanation ?O1 has_as_member ?O2 Identify and test a strategic COG candidate for Germany_1943 I need to Therefore I need to Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943

112  2003, G.Tecuci, Learning Agents Laboratory 112 The upper right side of this slide shows an example generated by the agent. This example is generated because it satisfies the plausible upper bound condition of the rule (as shown by the red arrows). This example is accepted as correct by the expert. Therefore the plausible lower bound condition is generalized to cover it as shown in the following.

113  2003, G.Tecuci, Learning Agents Laboratory 113 Condition satisfied by the positive example ?O1 is European_Axis_1943 has_as_member ?O2 ?O2 is Germany_1943 Plausible Upper Bound Condition ?O1 is multi_member_force has_as_member ?O2 ?O2 is force Plausible Lower Bound Condition (from rule) ?O1 is equal_partners_multi_state_alliance has_as_member ?O2 ?O2 is single_state_force Minimal generalization of the plausible lower bound minimal generalization less general than (or at most as general as) New Plausible Lower Bound Condition ?O1 is multi_state_alliance has_as_member ?O2 ?O2 is single_state_force

114  2003, G.Tecuci, Learning Agents Laboratory 114 The lower left side of this slide shows the plausible lower bound condition of the rule. The lower right side of this slide shows the condition corresponding to the generated positive example. These two conditions are generalized as shown in the middle of this slide, by using the climbing generalization hierarchy rule. Notice, for instance, that equal_partners_multi_state_alliance and European_Axis_1943 are generalized to multi_state_alliance. This generalization is based on the object ontology, as illustrated in the following slide. Indeed, multi_state_alliance is the minimal generalization of equals_partners_multi_state_alliance that covers European_Axis_1943.

115  2003, G.Tecuci, Learning Agents Laboratory 115 single_state_force single_group_forcemulti_state_forcemulti_group_force multi_state_alliance multi_state_coalition equal_partners_ multi_state_alliance dominant_partner_ multi_state_alliance equal_partners_ multi_state_coalition dominant_partner_ multi_state_coalition composition_of_forces multi_member_force single_member_force Forces Allied_Forces_1943 European_Axis_1943 force … Germany_1943 US_1943 multi_state_alliance is the minimal generalization of equals_partners_multi_state_alliance that covers European_Axis_1943

116  2003, G.Tecuci, Learning Agents Laboratory 116 Refined rule generalization IF Identify and test a strategic COG candidate corresponding to a member of a force The force is ?O1 THEN Identify and test a strategic COG candidate for a force The force is ?O2 Plausible Upper Bound Condition ?O1ismulti_member_force has_as_member ?O2 ?O2 isforce Plausible Lower Bound Condition ?O1isequal_partners_multi_state_alliance has_as_member ?O2 ?O2issingle_state_force explanation ?O1 has_as_member ?O2 IF Identify and test a strategic COG candidate corresponding to a member of a force The force is ?O1 THEN Identify and test a strategic COG candidate for a force The force is ?O2 Plausible Upper Bound Condition ?O1ismulti_member_force has_as_member ?O2 ?O2 isforce Plausible Lower Bound Condition ?O1ismulti_state_alliance has_as_member ?O2 ?O2issingle_state_force explanation ?O1 has_as_member ?O2

117  2003, G.Tecuci, Learning Agents Laboratory 117 Demonstration Disciple Teaching Disciple to test leaders who are COG candidates

118  2003, G.Tecuci, Learning Agents Laboratory 118 Recommended reading Tecuci G., Building Intelligent Agents: A Theory, Methodology, Tool and Case Studies, Academic Press, 1998. Tecuci G., Boicu M., Bowman M., and Marcu M., with a commentary by Burke M.: An Innovative Application from the DARPA Knowledge Bases Programs: Rapid Development of a High Performance Knowledge Base for Course of Action Critiquing, in AI Magazine, 22, 2, 2001, pp. 43-61. AAAI Press, Menlo Park, California, 2001. http://lalab.gmu.edu/publications/default.htm Describes the course of action domain. Tecuci G., Boicu M., Marcu D., Stanescu B., Boicu C. and Comello J., Training and Using Disciple Agents: A Case Study in the Military Center of Gravity Analysis Domain, in AI Magazine, AAAI Press, Menlo Park, California, 2002. http://lalab.gmu.edu/publications/default.htm

