1  G.Tecuci, Learning Agents Laboratory Learning Agents Laboratory, Department of Computer Science, George Mason University. George Tecuci, tecuci@cs.gmu.edu, http://lalab.gmu.edu/. CS 785, Fall 2001

2  G.Tecuci, Learning Agents Laboratory Overview 1. Course objective and Class introduction 2. Artificial Intelligence and intelligent agents 3. Sample intelligent agent: presentation and demo 4. Agent development: Knowledge acquisition and problem solving 5. Overview of the course

3  G.Tecuci, Learning Agents Laboratory 1. Course Objective Present principles and major methods of knowledge acquisition for the development of knowledge bases and problem solving agents. Major topics include: overview of knowledge engineering, general problem solving methods, ontology design and development, modeling of the problem solving process, learning strategies, rule learning and rule refinement. The course will emphasize the most recent advances in this area, such as: knowledge reuse, agent teaching and learning, knowledge acquisition directly from subject matter experts, and mixed-initiative knowledge base development. It will also discuss open issues and frontier research. The students will acquire hands-on experience with a complex, state-of-the-art methodology and tool for the end-to-end development of knowledge-based problem-solving agents.

4  G.Tecuci, Learning Agents Laboratory 2. Artificial Intelligence and intelligent agents What is Artificial Intelligence What is an intelligent agent Characteristic features of intelligent agents Sample tasks for intelligent agents

5  G.Tecuci, Learning Agents Laboratory Artificial Intelligence is the Science and Engineering that is concerned with the theory and practice of developing systems that exhibit the characteristics we associate with intelligence in human behavior: perception, natural language processing, reasoning, planning and problem solving, learning and adaptation, etc. What is Artificial Intelligence

6  G.Tecuci, Learning Agents Laboratory Central goals of Artificial Intelligence Understanding the principles that make intelligence possible (in humans, animals, and artificial agents). Developing intelligent machines or agents (no matter whether they operate as humans do or not). Formalizing knowledge and mechanizing reasoning in all areas of human endeavor. Making working with computers as easy as working with people. Developing human-machine systems that exploit the complementarity of human and automated reasoning.

7  G.Tecuci, Learning Agents Laboratory What is an intelligent agent [Figure: the intelligent agent receives input from the user/environment through its sensors and acts on it through its effectors.] An intelligent agent is a system that: perceives its environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or other complex environment); reasons to interpret perceptions, draw inferences, solve problems, and determine actions; and acts upon that environment to realize a set of goals or tasks for which it was designed.

8  G.Tecuci, Learning Agents Laboratory Humans, with multiple, conflicting drives, multiple senses, multiple possible actions, and complex sophisticated control structures, are at the highest end of being an agent. At the low end of being an agent is a thermostat. It continuously senses the room temperature, starting or stopping the heating system each time the current temperature is out of a pre-defined range. The intelligent agents we are concerned with are in between. They are clearly not as capable as humans, but they are significantly more capable than a thermostat. What is an intelligent agent (cont.)
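A minimal sketch of the thermostat's sense-act cycle described above, the simplest instance of the perceive-reason-act loop of the previous slide. The class name, the comfort range, and the simulated temperature readings are illustrative assumptions, not part of the slide.

# A minimal perceive-reason-act loop for the thermostat example.
# Thresholds and the temperature sequence are illustrative assumptions.
class Thermostat:
    def __init__(self, low=18.0, high=22.0):
        self.low, self.high = low, high   # pre-defined temperature range
        self.heating = False

    def step(self, temperature):
        """Perceive the room temperature and start/stop the heating system."""
        if temperature < self.low:
            self.heating = True           # act: start heating
        elif temperature > self.high:
            self.heating = False          # act: stop heating
        return self.heating

agent = Thermostat()
for t in [15.0, 19.5, 23.0]:              # simulated sensor readings
    print(t, agent.step(t))               # 15.0 True / 19.5 True / 23.0 False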

9  G.Tecuci, Learning Agents Laboratory An intelligent agent interacts with a human or some other agents via some kind of agent-communication language and may not blindly obey commands, but may have the ability to modify requests, ask clarification questions, or even refuse to satisfy certain requests. It can accept high-level requests indicating what the user wants and can decide how to satisfy each request with some degree of independence or autonomy, exhibiting goal-directed behavior and dynamically choosing which actions to take, and in what sequence. What is an intelligent agent (cont.)

10  G.Tecuci, Learning Agents Laboratory What an intelligent agent can do An intelligent agent can: collaborate with its user to improve the accomplishment of his or her tasks; carry out tasks on the user's behalf, employing some knowledge of the user's goals or desires; monitor events or procedures for the user; advise the user on how to perform a task; train or teach the user; help different users collaborate.

11  G.Tecuci, Learning Agents Laboratory Characteristic features of intelligent agents Knowledge representation and reasoning Transparency and explanations Ability to communicate Use of huge amounts of knowledge Exploration of huge search spaces Use of heuristics Reasoning with incomplete or conflicting data Ability to learn and adapt

12  G.Tecuci, Learning Agents Laboratory Knowledge representation and reasoning If an object is on top of another object that is itself on top of a third object then the first object is on top of the third object. RULE: ∀x,y,z ∈ OBJECT, (ON x y) & (ON y z) → (ON x z). [Figure: the application domain (a cup on a book on a table) is represented in the model of the domain as an ONTOLOGY: the instances CUP1, BOOK1, TABLE1 are INSTANCE-OF the classes CUP, BOOK, TABLE, which are SUBCLASS-OF OBJECT, with the ON relations holding between CUP1, BOOK1, and TABLE1.] An intelligent agent contains an internal representation of its external application domain, where relevant elements of the application domain (objects, relations, classes, laws, actions) are represented as symbolic expressions. This mapping allows the agent to reason about the application domain by performing reasoning processes in the domain model, and transferring the conclusions back into the application domain.
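A small sketch of how the slide's domain model might be encoded: the ON facts between CUP1, BOOK1, and TABLE1 plus the transitivity rule let the agent derive that the cup is on the table. The tuple encoding and the fixed-point loop are illustrative assumptions, not the agent's actual representation.

# Facts from the slide's model of the domain (encoding is an assumption).
instance_of = {"CUP1": "CUP", "BOOK1": "BOOK", "TABLE1": "TABLE"}
subclass_of = {"CUP": "OBJECT", "BOOK": "OBJECT", "TABLE": "OBJECT"}
on = {("CUP1", "BOOK1"), ("BOOK1", "TABLE1")}

# Rule: for all x,y,z in OBJECT, (ON x y) & (ON y z) -> (ON x z).
def apply_transitivity(on_facts):
    derived = set(on_facts)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        new = {(x, z) for (x, y1) in derived
                      for (y2, z) in derived if y1 == y2}
        if not new <= derived:
            derived |= new
            changed = True
    return derived

print(apply_transitivity(on))
# the derived fact ('CUP1', 'TABLE1') is added to the two given ON facts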

13  G.Tecuci, Learning Agents Laboratory Separation of knowledge from control [Figure: inside the intelligent agent, the knowledge base (ontology and rules/cases/methods) is separated from the problem solving engine; input/sensors come from the user/environment and output/effectors act upon it.] The problem solving engine implements a general method of interpreting the input problem based on the knowledge from the knowledge base. The knowledge base contains data structures that represent the objects from the application domain, general laws governing them, actions that can be performed with them, etc.
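A sketch of the separation shown on this slide: knowledge is stored as data in the knowledge base, while a generic problem solving engine interprets it. The rule format, the toy facts (which echo the toolbox example used later), and the simple forward-chaining engine are illustrative assumptions.

# Knowledge (data) separated from control (the engine).
class KnowledgeBase:
    def __init__(self, ontology, rules):
        self.ontology = ontology   # objects, classes, relations of the domain
        self.rules = rules         # (condition-facts, conclusion-fact) pairs

class ProblemSolvingEngine:
    """General method: keep firing any rule whose condition holds."""
    def solve(self, kb, facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for condition, conclusion in kb.rules:
                if condition <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

kb = KnowledgeBase(
    ontology={"KEY1": "KEY", "TOOLBOX1": "TOOLBOX"},
    rules=[({"toolbox-locked", "user-needs-tools"}, "user-needs-key"),
           ({"user-needs-key", "key-in-drawer"}, "tell-user-key-is-in-drawer")])

engine = ProblemSolvingEngine()
print(engine.solve(kb, {"toolbox-locked", "user-needs-tools", "key-in-drawer"}))
# the result includes 'user-needs-key' and 'tell-user-key-is-in-drawer'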

14  G.Tecuci, Learning Agents Laboratory The knowledge possessed by the agent and its reasoning processes should be understandable to humans. The agent should have the ability to give explanations of its behavior, what decisions it is making and why. Without transparency it would be very difficult to accept, for instance, a medical diagnosis performed by an intelligent agent. The need for transparency shows that the main goal of artificial intelligence is to enhance human capabilities and not to replace human activity. Transparency and explanations

15  G.Tecuci, Learning Agents Laboratory An agent should be able to communicate with its users or other agents. The communication language should be as natural to the human users as possible. Ideally, it should be free natural language. The problem of natural language understanding and generation is very difficult due to the ambiguity of words and sentences, the paraphrases, ellipses and references which are used in human communication. Ability to communicate

16  G.Tecuci, Learning Agents Laboratory Ambiguity of natural language Words and sentences have multiple meanings. Diamond: a mineral consisting of nearly pure carbon in crystalline form, usually colorless, the hardest natural substance known; a gem or other piece cut from this mineral; a lozenge-shaped plane figure (◊); in baseball, the infield or the whole playing field. "Visiting relatives can be boring" may mean "To visit relatives can be boring" or "The relatives that visit us can be boring." "She told the man that she hated to run alone" may mean "She told the man: I hate to run alone!" or "She told the man whom she hated: run alone!"

17  G.Tecuci, Learning Agents Laboratory Other difficulties with natural language processing Paraphrase: The same meaning may be expressed by many sentences. Example: Ann gave Bob a cat. Ann gave a cat to Bob. Bob was given a cat by Ann. A cat was given to Bob by Ann. What Ann gave Bob was a cat. Bob received a cat from Ann. Ellipsis: Use of sentences that appear ill-formed because they are incomplete. Typically the parts that are missing have to be extracted from the previous sentences. Example: Bob: What is the length of the ship USS J.F.Kennedy? John: 1072. Bob: The beam? John: 130. Reference: Entities may be referred to without giving their names. Example: Bob: What is the length of the ship USS J.F.Kennedy? John: 1072. Bob: Who is her commander? John: Captain Nelson.

18  G.Tecuci, Learning Agents Laboratory Use of huge amounts of knowledge In order to solve "real-world" problems, an intelligent agent needs a huge amount of domain knowledge in its memory (knowledge base). Example of human-agent dialog: User: The toolbox is locked. Agent: The key is in the drawer. In order to understand such sentences and to respond adequately, the agent needs to have a lot of knowledge about the user, including the goals the user might want to achieve.

19  G.Tecuci, Learning Agents Laboratory User: The toolbox is locked. Agent: Why is he telling me this? I already know that the box is locked. I know he needs to get in. Perhaps he is telling me because he believes I can help. To get in requires a key. He knows it and he knows I know it. The key is in the drawer. If he knew this, he would not tell me that the toolbox is locked. So he must not realize it. To make him know it, I can tell him. I am supposed to help him. The key is in the drawer. Use of huge amounts of knowledge (example)

20  G.Tecuci, Learning Agents Laboratory An intelligent agent usually needs to search huge spaces in order to find solutions to problems. Example 1: A search agent on the internet Example 2: A checkers playing agent Exploration of huge search spaces

21  G.Tecuci, Learning Agents Laboratory Exploration of huge search spaces: illustration Determining the best move with minimax. [Figure: a game tree whose levels alternate between my moves ("I") and the opponent's moves, with terminal positions labeled win, lose, or draw.]
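A compact sketch of the minimax procedure illustrated by the game tree above. The tiny hard-coded tree and the value encoding (win = 1, draw = 0, lose = -1 from the agent's point of view) are illustrative assumptions.

# Minimax over an explicit game tree.  A node is either a terminal value
# (win = 1, draw = 0, lose = -1 for "I") or a list of child nodes.
def minimax(node, my_move):
    if not isinstance(node, list):               # terminal position
        return node
    values = [minimax(child, not my_move) for child in node]
    return max(values) if my_move else min(values)

# I choose one of three moves; the opponent then replies; leaves are outcomes.
tree = [[-1, 1],    # move 1: the opponent can force a loss
        [0, 1],     # move 2: the opponent can force at best a draw for me
        [-1, -1]]   # move 3: a loss either way
print(minimax(tree, my_move=True))   # 0: the best I can guarantee is a draw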

22  G.Tecuci, Learning Agents Laboratory Exploration of huge search spaces: illustration Size of the search space: a complete game tree for checkers has been estimated as having 10^40 nonterminal nodes. If one assumes that these nodes could be generated at a rate of 3 billion per second, the generation of the whole tree would still require around 10^21 centuries! The tree of possibilities is far too large to be fully generated and searched backward from the terminal nodes for an optimal move. Checkers is far simpler than chess which, in turn, is generally far simpler than business competitions or military games.
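The figure on this slide can be checked with a short computation; the 3-billion-nodes-per-second rate is the slide's own assumption.

# 10^40 nonterminal nodes generated at 3*10^9 nodes per second,
# converted to centuries (1 century ~ 100 * 365.25 * 24 * 3600 seconds).
seconds = 1e40 / 3e9
centuries = seconds / (100 * 365.25 * 24 * 3600)
print(f"{centuries:.2e}")   # ~1.06e+21 centuries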

23  G.Tecuci, Learning Agents Laboratory A heuristic is a rule of thumb, strategy, trick, simplification, or any other kind of device which drastically limits the search for solutions in large problem spaces. Heuristics do not guarantee optimal solutions. In fact they do not guarantee any solution at all. A useful heuristic is one that offers solutions which are good enough most of the time. Use of heuristics Intelligent agents generally attack problems for which no algorithm is known or feasible, problems that require heuristic methods.

24  G.Tecuci, Learning Agents Laboratory Use of heuristics: illustration Heuristic function for board position evaluation: w1·f1 + w2·f2 + w3·f3 + ..., where the wi are real-valued weights and the fi are board features (e.g. center control, total mobility, relative exchange advantage).
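A sketch of such a weighted linear evaluation function. The particular weights, feature extractors, and board values below are illustrative assumptions, not tuned values.

# Heuristic board evaluation: a weighted sum of board features.
def evaluate(board, weights, features):
    return sum(w * f(board) for w, f in zip(weights, features))

features = [lambda b: b["center_control"],
            lambda b: b["total_mobility"],
            lambda b: b["exchange_advantage"]]
weights = [0.5, 0.3, 0.2]

board = {"center_control": 2, "total_mobility": 7, "exchange_advantage": -1}
print(evaluate(board, weights, features))   # 0.5*2 + 0.3*7 + 0.2*(-1) = 2.9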

25  G.Tecuci, Learning Agents Laboratory Reasoning with incomplete data The ability to provide some solution even if not all the data relevant to the problem is available at the time a solution is required. Examples: The reasoning of a physician in an intensive care unit ("If the EKG test results are not available, but the patient is suffering chest pains, I might still suspect a heart problem."). Planning a military course of action.

26  G.Tecuci, Learning Agents Laboratory Reasoning with conflicting data The ability to take into account data items that are more or less in contradiction with one another (conflicting data or data corrupted by errors). Example: The reasoning of a military intelligence analyst who has to cope with the deception actions of the enemy.

27  G.Tecuci, Learning Agents Laboratory The ability to improve its competence and efficiency. An agent is improving its competence if it learns to solve a broader class of problems, and to make fewer mistakes in problem solving. An agent is improving its efficiency if it learns to solve more efficiently (for instance, by using less time or space resources) the problems from its area of competence. Ability to learn

28  G.Tecuci, Learning Agents Laboratory Illustration: concept learning Learn the concept of ill cell by comparing examples of ill cells with examples of healthy cells, and by creating a generalized description of the similarities between the ill cells. [Figure: concept examples, each cell described by two (count shade) descriptors. Positive examples (ill cells): ((1 light) (2 dark)), ((1 dark) (2 dark)), ((1 dark) (1 dark)). Negative examples (healthy cells): ((1 light) (2 light)), ((1 dark) (2 light)). Learned concept: ((1 ?) (? dark)).]
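A sketch of the generalization step on this slide: the positive examples are generalized feature by feature, replacing any value that differs across examples with "?". The tuple encoding and the simple positional generalization routine (which suffices for this illustration) are illustrative assumptions.

# Each cell is described by two (count, shade) descriptors.
# The positive ("ill cell") examples from the slide:
positives = [(("1", "light"), ("2", "dark")),
             (("1", "dark"),  ("2", "dark")),
             (("1", "dark"),  ("1", "dark"))]

def generalize(examples):
    """Replace any feature value that differs across the examples with '?'."""
    result = []
    for descriptors in zip(*examples):               # corresponding descriptors
        general = tuple(v[0] if len(set(v)) == 1 else "?"
                        for v in zip(*descriptors))  # corresponding features
        result.append(general)
    return tuple(result)

print(generalize(positives))   # (('1', '?'), ('?', 'dark'))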

29  G.Tecuci, Learning Agents Laboratory Ability to learn: classification The learned concept is used to diagnose other cells. [Figure: the "ill cell" concept ((1 ?) (? dark)) is matched against two new cells. Is the cell ((1 light) (1 light)) ill? No. Is the cell ((1 dark) (1 light)) ill? Yes.] This is an example of reasoning with incomplete information.
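A sketch of how the learned concept could be applied to the two test cells on this slide. Matching here ignores the order of a cell's descriptors, so the dark nucleus of the second cell can satisfy the (? dark) part while its other nucleus satisfies (1 ?); this matching routine is an illustrative assumption consistent with the slide's answers.

from itertools import permutations

concept = (("1", "?"), ("?", "dark"))   # learned "ill cell" concept

def matches(cell, pattern):
    """True if some ordering of the cell's descriptors fits the pattern;
    '?' in the pattern matches any value."""
    def fits(desc, pat):
        return all(p == "?" or p == d for d, p in zip(desc, pat))
    return any(all(fits(d, p) for d, p in zip(perm, pattern))
               for perm in permutations(cell))

print(matches((("1", "light"), ("1", "light")), concept))   # False -> not ill
print(matches((("1", "dark"),  ("1", "light")), concept))   # True  -> ill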

30  G.Tecuci, Learning Agents Laboratory Extended agent architecture [Figure: the intelligent agent now contains a learning engine in addition to the problem solving engine and the knowledge base (ontology and rules/cases/methods); input/sensors come from the user/environment and output/effectors act upon it.] The learning engine implements methods for extending and refining the knowledge in the knowledge base.

31  G.Tecuci, Learning Agents Laboratory Sample tasks for intelligent agents Planning: Finding a set of actions that achieve a certain goal. Example: Determine the actions that need to be performed in order to repair a bridge. Critiquing: Expressing judgments about something according to certain standards. Example: Critiquing a military course of action (or plan) based on the principles of war and the tenets of army operations. Interpretation: Inferring situation description from sensory data. Example: Interpreting gauge readings in a chemical process plant to infer the status of the process.

32  G.Tecuci, Learning Agents Laboratory Sample tasks for intelligent agents (cont.) Prediction: Inferring likely consequences of given situations. Examples: Predicting the damage to crops from some type of insect. Estimating global oil demand from the current geopolitical world situation. Diagnosis: Inferring system malfunctions from observables. Examples: Determining the disease of a patient from the observed symptoms. Locating faults in electrical circuits. Finding defective components in the cooling system of nuclear reactors. Design: Configuring objects under constraints. Example: Designing integrated circuit layouts.

33  G.Tecuci, Learning Agents Laboratory Sample tasks for intelligent agents (cont.) Monitoring: Comparing observations to expected outcomes. Examples: Monitoring instrument readings in a nuclear reactor to detect accident conditions. Assisting patients in an intensive care unit by analyzing data from the monitoring equipment. Debugging: Prescribing remedies for malfunctions. Examples: Suggesting how to tune a computer system to reduce a particular type of performance problem. Choosing a repair procedure to fix a known malfunction in a locomotive. Repair: Executing plans to administer prescribed remedies. Example: Tuning a mass spectrometer, i.e., setting the instrument's operating controls to achieve optimum sensitivity consistent with correct peak ratios and shapes.

34  G.Tecuci, Learning Agents Laboratory Sample tasks for intelligent agents (cont.) Instruction: Diagnosing, debugging, and repairing student behavior. Examples: Teaching students a foreign language. Teaching students to troubleshoot electrical circuits. Teaching medical students in the area of antimicrobial therapy selection. Control: Governing overall system behavior. Example: Managing the manufacturing and distribution of computer systems. Any useful task: Information fusion. Information assurance. Travel planning. Email management.

35  G.Tecuci, Learning Agents Laboratory 3. Sample intelligent agent: Presentation and demo Agent task: Course of action critiquing Knowledge representation Problem solving Demo Why are intelligent agents important

36  G.Tecuci, Learning Agents Laboratory Critiquing Critiquing means expressing judgments about something according to certain standards. Example: Critique various aspects of a military Course of Action, such as its viability (its suitability, feasibility, acceptability and completeness), its correctness (which considers the array of forces, the scheme of maneuver, and the command and control), and its strengths and weaknesses with respect to the principles of war and the tenets of army operations.

37  G.Tecuci, Learning Agents Laboratory Sample agent: Course of Action critiquer Source: Challenge problem for DARPA's High Performance Knowledge Base (HPKB) program (FY97-99). Background: A military course of action (COA) is a preliminary outline of a plan for how a military unit might attempt to accomplish a mission. After receiving orders to plan for a mission, a commander and staff analyze the mission, conceive and evaluate potential COAs, select a COA, and prepare a detailed plan to accomplish the mission based on the selected COA. The general practice is for the staff to generate several COAs for a mission, and then to make a comparison of those COAs based on many factors including the situation, the commander's guidance, the principles of war, and the tenets of army operations. The commander makes the final decision on which COA will be used to generate his or her plan based on the recommendations of the staff and his or her own experience with the same factors considered by the staff. Agent task: Identify strengths and weaknesses in a COA, based on the principles of war and the tenets of army operations.

38  G.Tecuci, Learning Agents Laboratory COA Example – the sketch Graphical depiction of a preliminary plan. It includes enough of the high level structure and maneuver aspects of the plan to show how the actions of each unit fit together to accomplish the overall purpose.

39  G.Tecuci, Learning Agents Laboratory COA Example – the statement Explains what the units will do to accomplish the assigned mission.

40  G.Tecuci, Learning Agents Laboratory COA critiquing task Answer questions about how well the COA follows the principles of war, which provide general guidance for the conduct of war at the strategic, operational and tactical levels, and the tenets of army operations, which describe characteristics of successful operations.

41  G.Tecuci, Learning Agents Laboratory Strike the enemy at a time or place or in a manner for which he is unprepared. Surprise can decisively shift the balance of combat power. By seeking surprise, forces can achieve success well out of proportion to the effort expended. Rapid advances in surveillance technology and mass communication make it increasingly difficult to mask or cloak large-scale marshaling or movement of personnel and equipment. The enemy need not be taken completely by surprise but only become aware too late to react effectively. Factors contributing to surprise include speed, effective intelligence, deception, application of unexpected combat power, operations security (OPSEC), and variations in tactics and methods of operation. Surprise can be in tempo, size of force, direction or location of main effort, and timing. Deception can aid the probability of achieving surprise. The Principle of Surprise (from FM100-5)

42  G.Tecuci, Learning Agents Laboratory Knowledge representation: object ontology The ontology defines the objects from an application domain.

43  G.Tecuci, Learning Agents Laboratory Knowledge representation: problem solving rules A rule is an ontology-based representation of an elementary problem solving process.
R$ASWCER-001
IF the task to accomplish is: ASSESS-SECURITY-WRT-COUNTERING-ENEMY-RECONNAISSANCE FOR-COA ?O1
Question: Is an enemy recon unit present in ?O1 ?
Answer: Yes, the enemy unit ?O2 is performing the action ?O3 which is a reconnaissance action.
Then accomplish the task: ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT FOR-COA ?O1 FOR-UNIT ?O2 FOR-RECON-ACTION ?O3
Condition:
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE, SOVEREIGN-ALLEGIANCE-OF-ORG ?O4, TASK ?O3
?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
?O4 IS RED--SIDE
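A sketch of how such a task-reduction rule might be represented and applied as data: the IF task, the THEN task, and a condition over the variables ?O1 ... ?O4. The Python encoding, the toy class assignments, and the simplified condition check (class membership only, omitting the SOVEREIGN-ALLEGIANCE-OF-ORG and TASK features) are illustrative assumptions, not the actual Disciple representation.

# A task-reduction rule as data: reduce an IF task to a THEN task when the
# condition holds.  This encoding is an illustrative assumption.
rule = {
    "if_task":   ("ASSESS-SECURITY-WRT-COUNTERING-ENEMY-RECONNAISSANCE", ["?O1"]),
    "then_task": ("ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT", ["?O1", "?O2", "?O3"]),
    "condition": {"?O1": "COA-SPECIFICATION-MICROTHEORY",
                  "?O2": "MODERN-MILITARY-UNIT--DEPLOYABLE",
                  "?O3": "INTELLIGENCE-COLLECTION--MILITARY-TASK",
                  "?O4": "RED--SIDE"},
}

# A toy knowledge base assigning classes to instances (illustrative values).
is_a = {"COA411": "COA-SPECIFICATION-MICROTHEORY",
        "RED-CSOP1": "MODERN-MILITARY-UNIT--DEPLOYABLE",
        "SCREEN1": "INTELLIGENCE-COLLECTION--MILITARY-TASK",
        "RED--SIDE": "RED--SIDE"}

def reduce_task(rule, bindings):
    """Instantiate the THEN task if every bound variable has the required class."""
    if all(is_a.get(bindings.get(var)) == cls
           for var, cls in rule["condition"].items()):
        name, vars_ = rule["then_task"]
        return (name, [bindings[v] for v in vars_])
    return None

bindings = {"?O1": "COA411", "?O2": "RED-CSOP1", "?O3": "SCREEN1", "?O4": "RED--SIDE"}
print(reduce_task(rule, bindings))
# ('ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT', ['COA411', 'RED-CSOP1', 'SCREEN1'])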

44  G.Tecuci, Learning Agents Laboratory Illustration of the problem solving process To what extent does COA411 conform to the Principle of Surprise?
Assess COA wrt Principle of Surprise for-coa COA411
Question: Does the COA assign appropriate surprise and deception actions? I consider enemy recon.
Assess surprise wrt countering enemy reconnaissance for-coa COA411
Question: Is an enemy reconnaissance unit present? Yes, RED-CSOP1, which is performing the reconnaissance action SCREEN1.
Assess surprise when enemy recon is present for-coa COA411 for-unit RED-CSOP1 for-recon-action SCREEN1
Question: Is the enemy reconnaissance unit destroyed? Yes, RED-CSOP1 is destroyed by DESTROY1.
Report strength in surprise because of countering enemy recon for-coa COA411 for-unit RED-CSOP1 for-recon-action SCREEN1 for-action DESTROY1 with-importance high
There is a strength with respect to surprise in COA411 because it contains aggressive security / counter-reconnaissance plans, destroying enemy intelligence collection units and activities. Intelligence collection by RED-CSOP1 will be disrupted by its destruction by DESTROY1.

45  G.Tecuci, Learning Agents Laboratory COA critiquing demo

46  G.Tecuci, Learning Agents Laboratory Why are intelligent agents important Humans have limitations that agents may alleviate (e.g. memory for details that is not affected by stress, fatigue or time constraints). Humans and agents could engage in mixed-initiative problem solving that takes advantage of their complementary strengths and reasoning styles.

47  G.Tecuci, Learning Agents Laboratory Why are intelligent agents important (cont) The evolution of information technology makes intelligent agents essential components of our future systems and organizations. Our future computers and most of the other systems and tools will gradually become intelligent agents. We have to be able to deal with intelligent agents either as users, or as developers, or as both.

48  G.Tecuci, Learning Agents Laboratory Intelligent agents are systems which can perform tasks requiring knowledge and heuristic methods. Intelligent agents: Conclusion Intelligent agents are helpful, enabling us to do our tasks better. Intelligent agents are necessary to cope with the increasing challenges of the information society.

49  G.Tecuci, Learning Agents Laboratory Recommended reading G. Tecuci, Building Intelligent Agents, Academic Press, 1998, pp. 1-12. Tecuci G., Boicu M., Bowman M., and Marcu D., with a commentary by Murray Burke, "An Innovative Application from the DARPA Knowledge Bases Programs: Rapid Development of a High Performance Knowledge Base for Course of Action Critiquing," invited paper for the special IAAI issue of AI Magazine, Volume 22, No. 2, Summer 2001, pp. 43-61. http://lalab.gmu.edu/publications/data/2001/COA-critiquer.pdf

