Expert System Seyed Hashem Davarpanah


Expert System Seyed Hashem Davarpanah Davarpanah@usc.ac.ir University of Science and Culture

Agents. Agent: anything that perceives and acts on its environment. AI: the study of rational agents. A rational agent carries out the action with the best outcome after considering past and current percepts.
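A minimal sketch of this perceive-act loop, using the thermostat discussed on a later slide as the agent; the Environment class, its methods, and the action names are illustrative assumptions, not part of the slides:

```python
# Minimal perceive-act loop (illustrative sketch only); the Environment
# class, its methods, and the action names are hypothetical stand-ins.

class Environment:
    """A trivially simple environment: a room whose temperature drifts."""
    def __init__(self, temperature=15.0):
        self.temperature = temperature
        self.heater_on = False

    def percept(self):
        return self.temperature

    def execute(self, action):
        if action == "heat":
            self.heater_on = True
        elif action == "off":
            self.heater_on = False
        # "no-op" leaves the heater unchanged
        self.temperature += 0.5 if self.heater_on else -0.5


def thermostat_agent(percept, low=18.0, high=22.0):
    """The 'low end' agent from a later slide: acts only on the current percept."""
    if percept < low:
        return "heat"
    if percept > high:
        return "off"
    return "no-op"


env = Environment()
for _ in range(10):
    action = thermostat_agent(env.percept())   # perceive ...
    env.execute(action)                        # ... then act
    print(round(env.percept(), 1), action)
```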


What is an intelligent agent? An intelligent agent is a system that: perceives its environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or another complex environment); reasons to interpret perceptions, draw inferences, solve problems, and determine actions; and acts upon that environment to realize a set of goals or tasks for which it was designed. [Diagram: the user/environment provides input to the intelligent agent through its sensors, and the agent returns output to the user/environment through its effectors.]

What is an intelligent agent? (cont.) Humans, with multiple conflicting drives, multiple senses, multiple possible actions, and complex, sophisticated control structures, are at the highest end of being an agent. At the low end of being an agent is a thermostat: it continuously senses the room temperature, starting or stopping the heating system whenever the current temperature goes out of a pre-defined range. The intelligent agents we are concerned with are in between: they are clearly not as capable as humans, but they are significantly more capable than a thermostat.

Why are intelligent agents important? Humans have limitations that agents may alleviate (e.g., memory for details that is not affected by stress, fatigue, or time constraints). Humans and agents could engage in mixed-initiative problem solving that takes advantage of their complementary strengths and reasoning styles.

Why are intelligent agents important? (cont.) The evolution of information technology makes intelligent agents essential components of our future systems and organizations. Our future computers and most of our other systems and tools will gradually become intelligent agents. We have to be able to deal with intelligent agents as users, as developers, or as both.

Properties of agents: intelligence, autonomy, learning. Autonomy is the agent's ability to act and make decisions independently of its programmer or user. Learning: such agents are able to store new information in a useful form. For example, an agent can learn from a user by observing the user's activities or by receiving the user's hints. Learning allows agents to improve their performance on certain tasks over time: if the user tells an agent that it has not performed a task well, the agent uses this feedback and learns not to repeat the same mistake in the future.

Properties of agents (cont.): cooperation. In multi-agent systems, agents usually cooperate with one another; this cooperation implies a kind of social communication among the agents. A multi-task agent is able to perform several different tasks, although most agents are single-task.

Characteristic features of intelligent agents:
Knowledge representation and reasoning
Transparency and explanations
Ability to communicate
Use of huge amounts of knowledge
Exploration of huge search spaces
Use of heuristics
Reasoning with incomplete or conflicting data
Ability to learn and adapt

Knowledge representation and reasoning. An intelligent agent contains an internal representation of its external application domain, where relevant elements of the application domain (objects, relations, classes, laws, actions) are represented as symbolic expressions. This mapping allows the agent to reason about the application domain by performing reasoning processes in the domain model and transferring the conclusions back into the application domain. [Diagram: the model of the domain represents the application domain. Example ontology: CUP1, BOOK1, TABLE1 are instances of the classes CUP, BOOK, TABLE, which are subclasses of OBJECT; ON is a relation between objects.] Example rule: if an object is on top of another object that is itself on top of a third object, then the first object is on top of the third object: ∀ x,y,z ∈ OBJECT, (ON x y) ∧ (ON y z) ⇒ (ON x z).
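A small sketch of how such facts and the ON transitivity rule might be encoded; the tuple-based fact format and the forward-chaining loop below are illustrative assumptions:

```python
# Illustrative sketch of the slide's domain model: ON facts plus the
# transitivity rule  (ON x y) & (ON y z) => (ON x z).

facts = {("ON", "CUP1", "BOOK1"), ("ON", "BOOK1", "TABLE1")}

def apply_on_transitivity(facts):
    """Forward-chain the ON transitivity rule until a fixed point is reached."""
    facts = set(facts)
    while True:
        new = {("ON", x, z)
               for (r1, x, y1) in facts
               for (r2, y2, z) in facts
               if r1 == "ON" and r2 == "ON" and y1 == y2} - facts
        if not new:
            return facts
        facts |= new

print(apply_on_transitivity(facts))
# the result also contains ("ON", "CUP1", "TABLE1")
```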

Separation of knowledge from control. [Diagram: the intelligent agent receives input (a problem) from the user/environment through its sensors; the problem solving engine, which implements a general method of solving the input problem based on the knowledge in the knowledge base, manipulates the knowledge base, i.e. an ontology plus rules/cases/methods: data structures that represent the objects of the application domain, the general laws governing them, the actions that can be performed with them, etc.; results are returned to the user/environment through the output/effectors.]

There are two basic components of an agent: the knowledge base and the problem solving engine. The knowledge base contains data structures that represent the application domain. It includes representations of objects and their relations (the object ontology), but also representations of laws, actions, rules, cases, or elementary problem solving methods. The problem solving engine implements a problem solving method that manipulates the data structures in the knowledge base to reason about the input problem, to solve it, and to determine the actions to perform next; it is only an interpreter, a general reasoning mechanism. That is, there is a clear separation between knowledge (contained in the knowledge base) and control (represented by the problem solving engine). This separation allows the development of general tools, or shells, that do not contain any domain-specific knowledge in the knowledge base; by defining this knowledge, one can develop a specific agent. The idea of these tools is to reuse the problem solving engine for a new application by defining the appropriate content of the knowledge base.
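A toy sketch of this separation of knowledge from control, reusing the toolbox example from a later slide; the (premises, conclusion) rule format and the class names are illustrative assumptions, and the engine is a generic forward-chaining interpreter that contains no domain knowledge:

```python
# Illustrative sketch: the problem solving engine is a generic interpreter;
# all domain knowledge lives in the knowledge base.

class KnowledgeBase:
    def __init__(self, facts, rules):
        self.facts = set(facts)
        self.rules = list(rules)   # each rule: (set_of_premises, conclusion)

class ProblemSolvingEngine:
    """Domain-independent forward chaining over whatever KB it is given."""
    def solve(self, kb, goal):
        changed = True
        while changed and goal not in kb.facts:
            changed = False
            for premises, conclusion in kb.rules:
                if premises <= kb.facts and conclusion not in kb.facts:
                    kb.facts.add(conclusion)
                    changed = True
        return goal in kb.facts

# A specific agent is obtained only by filling in the knowledge base.
kb = KnowledgeBase(
    facts={"toolbox locked", "key in drawer"},
    rules=[({"toolbox locked", "key in drawer"}, "fetch key"),
           ({"fetch key"}, "open toolbox")],
)
print(ProblemSolvingEngine().solve(kb, "open toolbox"))   # True
```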

Transparency and explanations The knowledge possessed by the agent and its reasoning processes should be understandable to humans. The agent should have the ability to give explanations of its behavior, what decisions it is making and why. Without transparency it would be very difficult to accept, for instance, a medical diagnosis performed by an intelligent agent.

Ability to communicate. An agent should be able to communicate with its users or with other agents. The communication language should be as natural to the human users as possible; ideally, it should be unrestricted natural language. The problem of natural language understanding and generation is very difficult because of the ambiguity of words and sentences, and because of the paraphrases, ellipses, and references used in human communication.

Use of huge amounts of knowledge In order to solve "real-world" problems, an intelligent agent needs a huge amount of domain knowledge in its memory (knowledge base). Example of human-agent dialog: User: The toolbox is locked. Agent: The key is in the drawer. In order to understand such sentences and to respond adequately, the agent needs to have a lot of knowledge about the user, including the goals the user might want to achieve.

Use of huge amounts of knowledge (example) User: The toolbox is locked. Agent (reasoning): Why is he telling me this? I already know that the box is locked. Maybe he needs to get in. Perhaps he is telling me because he believes I can help. To get in requires a key. He knows it and he knows I know it. The key is in the drawer. If he knew this, he would not tell me that the toolbox is locked. So he must not realize it. To make him know it, I can tell him. I am supposed to help him.

Exploration of huge search spaces. An intelligent agent usually needs to search huge spaces in order to find solutions to problems. Example 1: a search agent on the Internet. Example 2: a checkers-playing agent. The game of checkers: there are two players (Grey and White), each having 12 men. They alternately move one of their men. A man may be moved forward diagonally from one black square to another, or it may jump over an opponent's man if the square behind it is vacant; in that case the opponent's man is captured. Any number of men may be jumped (and captured) if the square behind each is vacant. If a man reaches the opponent's last row, it is transformed into a king by placing another man on top of it. The king may move both forward and backward (as opposed to the men, which may move only forward). The winning player is the one who succeeds in blocking all of the opponent's men (so that they cannot move) or in capturing all of them.
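The slides do not prescribe a search algorithm, but plain minimax over a game tree is one standard way to make the size of such search spaces concrete; the sketch below assumes hypothetical, game-specific moves() and evaluate() helpers:

```python
# Hedged sketch: plain minimax search (not prescribed by the slides) to show
# why game-playing agents face huge search spaces.  moves() and evaluate()
# stand for hypothetical, game-specific helpers.

def minimax(state, depth, maximizing, moves, evaluate):
    """Return the minimax value of `state`, searching `depth` plies ahead."""
    options = moves(state, maximizing)
    if depth == 0 or not options:
        return evaluate(state)
    values = [minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in options]
    return max(values) if maximizing else min(values)

# Toy demo: states are integers, each move adds 1 or 2, the value is the state.
def toy_moves(state, maximizing):
    return [state + 1, state + 2] if state < 6 else []

def toy_evaluate(state):
    return state

print(minimax(0, 4, True, toy_moves, toy_evaluate))   # 6

# With roughly b legal moves per position, a d-ply search visits on the order
# of b**d positions (e.g. 10**20 for b = 10, d = 20), hence the heuristics
# mentioned in the earlier list of characteristic features.
```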

Reasoning with incomplete data. The ability to provide some solution even if not all the data relevant to the problem is available at the time a solution is required. Examples: the reasoning of a physician in an intensive care unit ("If the EKG test results are not available, but the patient is suffering chest pains, I might still suspect a heart problem."); planning a military course of action with incomplete knowledge about the enemy.

Reasoning with conflicting data The ability to take into account data items that are more or less in contradiction with one another (conflicting data or data corrupted by errors). Example: The reasoning of a military intelligence analyst that has to cope with the deception actions of the enemy.

Ability to learn. The ability to improve its competence and efficiency. An agent is improving its competence if it learns to solve a broader class of problems and to make fewer mistakes in problem solving. An agent is improving its efficiency if it learns to solve the problems from its area of competence more efficiently (for instance, by using less time or space resources).

Types of Agents:
Reflex Agent
Reflex Agent with State
Goal-based Agent
Utility-based Agent
Learning Agent

Reflex Agent

Reflex Agent with State

State Management. A reflex agent with state incorporates a model of the world: the current state of its world depends on the percept history, and the rule to be applied next depends on the resulting state. state' ← next-state(state, percept); action ← select-action(state', rules)
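A minimal sketch that mirrors the two steps above; the two-location "dirt" world, the rule format, and the action names are illustrative assumptions:

```python
# Sketch of a reflex agent with state, following the slide's two steps:
#   state' <- next-state(state, percept)
#   action <- select-action(state', rules)
# The two-location "dirt" world used here is a hypothetical example domain.

def next_state(state, percept):
    """Fold the new percept (location, status) into the agent's world model."""
    location, status = percept
    new_state = dict(state)
    new_state[location] = status
    new_state["at"] = location
    return new_state

def select_action(state, rules):
    """Return the action of the first rule whose condition matches the state."""
    for condition, action in rules:
        if condition(state):
            return action
    return "no-op"

rules = [
    (lambda s: s.get(s.get("at")) == "dirty", "suck"),
    (lambda s: s.get("at") == "A", "move-right"),
    (lambda s: s.get("at") == "B", "move-left"),
]

state = {}
for percept in [("A", "dirty"), ("A", "clean"), ("B", "dirty")]:
    state = next_state(state, percept)
    print(select_action(state, rules))   # suck, move-right, suck
```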

Goal-based Agent

Incorporating Goals. Rules and "foresight": essentially, the agent's rule set is determined by its goals, which requires knowledge of the future consequences of possible actions. This can also be viewed as an agent with more complex state management, where goals provide for a more sophisticated next-state function.
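A small sketch of this "foresight" idea: the agent searches over predicted future states for an action sequence that reaches its goal. The one-dimensional corridor world and the breadth-first planner are illustrative assumptions, not anything specified in the slides:

```python
# Hedged sketch of a goal-based agent: it searches over predicted future
# states (foresight) for a plan that reaches the goal.  The 1-D corridor
# world and the BFS planner are illustrative assumptions.

from collections import deque

def successors(position):
    """Model of the consequences of each possible action."""
    return {"left": max(position - 1, 0), "right": min(position + 1, 4)}

def plan(start, goal):
    """Breadth-first search for the shortest action sequence reaching the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

print(plan(start=0, goal=3))   # ['right', 'right', 'right']
```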

Utility-based Agent

Incorporating Performance. There may be multiple action sequences that arrive at a goal; choose the action that provides the best level of "happiness" for the agent. A utility function maps states to a measure; it may include tradeoffs and may incorporate likelihood measures.
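A sketch of picking the action with the highest expected utility; the outcome probabilities, the utility values, and the route names are illustrative assumptions:

```python
# Hedged sketch of a utility-based choice: each action leads to possible
# outcomes with likelihoods; pick the action with the highest expected
# utility.  The numbers and outcome names are illustrative assumptions.

def expected_utility(outcomes, utility):
    """Sum of utility(outcome) weighted by its likelihood."""
    return sum(p * utility(o) for o, p in outcomes.items())

actions = {
    "fast-route": {"on-time": 0.7, "late": 0.3},
    "safe-route": {"on-time": 0.95, "late": 0.05},
}

def utility(outcome):
    return {"on-time": 10.0, "late": -5.0}[outcome]

best = max(actions, key=lambda a: expected_utility(actions[a], utility))
print(best)   # safe-route
```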

Learning Agent

Incorporating Learning. Learning can be applied to any of the previous agent types. Agent ↔ performance element: the existing agent plays the role of the performance element. The learning element causes improvements to the agent/performance element, uses feedback from the critic, and provides goals to the problem generator.
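A skeletal sketch of how these components might be wired together; all class names, method names, and the toy reward signal are illustrative assumptions, not an established design:

```python
# Skeletal, hedged sketch of the learning-agent wiring named above:
# the critic gives feedback, the learning element improves the performance
# element, and the problem generator suggests situations to try.

class PerformanceElement:
    def __init__(self):
        self.rules = {"dirty": "move", "clean": "move"}   # initially poor
    def act(self, percept):
        return self.rules.get(percept, "no-op")

class Critic:
    def feedback(self, percept, action):
        """Crude reward: sucking up dirt is the only rewarded behaviour."""
        return 1.0 if (percept, action) == ("dirty", "suck") else 0.0

class LearningElement:
    def improve(self, performer, percept, action, reward):
        # Placeholder learning step: patch the rule that earned no reward.
        if reward == 0.0 and percept == "dirty":
            performer.rules["dirty"] = "suck"

class ProblemGenerator:
    def suggest(self):
        """Propose an exploratory situation for the agent to experience."""
        return "dirty"

performer, critic = PerformanceElement(), Critic()
learner, generator = LearningElement(), ProblemGenerator()
for _ in range(2):
    percept = generator.suggest()
    action = performer.act(percept)
    learner.improve(performer, percept, action, critic.feedback(percept, action))
    print(percept, action)   # first "dirty move", then "dirty suck"
```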

Hierarchical (layered) architecture. This is a layered architecture designed for implementing physical robots. In these robots there is no central intelligence or control mechanism; each layer is designed to control part of the agent's behavior. [Diagram: inputs feed behavior layers such as AVOID OBSTACLES, WANDER, and EXPLORE, each of which can produce actions.]

Horizontal architecture. [Diagram: inputs go to all layers (Layer 1, Layer 2, …, Layer n) in parallel, and each layer can produce outputs and actions.]

Vertical architecture. [Diagram: inputs enter Layer 1 and flow up through Layer 2, …, Layer n; outputs and actions are produced at the end of the chain.]
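A hedged sketch of the horizontal, subsumption-style idea: every layer sees the percept, and the highest-priority layer that proposes an action wins. The three toy layers mirror the earlier diagram, but their behaviors and priority order are illustrative assumptions:

```python
# Hedged sketch of a horizontal layered controller: every layer sees the
# percept; the highest-priority layer that proposes an action wins.
# The three toy layers mirror the slide's diagram.

def avoid_obstacles(percept):
    return "turn-away" if percept.get("obstacle") else None

def wander(percept):
    return "random-step" if percept.get("idle") else None

def explore(percept):
    return "head-to-frontier"

# Ordered from highest to lowest priority.
layers = [avoid_obstacles, wander, explore]

def act(percept):
    for layer in layers:
        action = layer(percept)
        if action is not None:
            return action
    return "no-op"

print(act({"obstacle": True}))   # turn-away
print(act({"idle": True}))       # random-step
print(act({}))                   # head-to-frontier
```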

Properties of Environments:
Fully versus partially observable
Deterministic versus stochastic
Episodic versus sequential
Static versus dynamic
Discrete versus continuous
Single agent versus multi-agent
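As a concrete illustration of these six dimensions, two textbook-style classifications encoded as data; the example tasks and their labels are not from the slides:

```python
# Illustrative (textbook-style) classification of two tasks along the six
# environment dimensions listed above; the example tasks are assumptions,
# not part of the slides.

environments = {
    "crossword puzzle": {
        "observable": "fully", "deterministic": "deterministic",
        "episodic": "sequential", "static": "static",
        "discrete": "discrete", "agents": "single",
    },
    "taxi driving": {
        "observable": "partially", "deterministic": "stochastic",
        "episodic": "sequential", "static": "dynamic",
        "discrete": "continuous", "agents": "multi",
    },
}

for task, props in environments.items():
    print(task, props)
```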

How are agents built? A knowledge engineer attempts to understand how a subject matter expert reasons and solves problems, and then encodes the acquired expertise into the agent's knowledge base. The expert analyzes the solutions generated by the agent (and often the knowledge base itself) to identify errors, and the knowledge engineer corrects the knowledge base.

Why it is hard. The knowledge engineer has to become a kind of subject matter expert in order to properly understand the expert's problem solving knowledge; this takes time and effort. Experts express their knowledge informally, using natural language, visual representations, and common sense, often omitting essential details that they consider obvious. This form of knowledge is very different from the form in which knowledge has to be represented in the knowledge base (which is formal, precise, and complete). This transfer and transformation of knowledge, from the domain expert through the knowledge engineer to the agent, is long, painful, and inefficient; it is known as the "knowledge acquisition bottleneck" of the AI systems development process.

Intelligent agents: Conclusion Intelligent agents are systems that can perform tasks requiring knowledge and heuristic methods. Intelligent agents are helpful, enabling us to do our tasks better. Intelligent agents are necessary to cope with the increasing challenges of the information and knowledge society.