Advance Artificial Intelligence

Presentation on theme: "Advance Artificial Intelligence"— Presentation transcript:

1 Advance Artificial Intelligence
Introduction to Artificial Intelligence Dr. Seemab Latif 1

2 Instructional Objectives
- Understand the definition of artificial intelligence
- Understand the different faculties involved with intelligent behavior
- Examine the different ways of approaching AI
- Look at some example systems that use AI
- Have a fair idea of the types of problems that can currently be solved by computers and those that are as yet beyond their ability
Dr. Seemab Latif 2

3 Module 1 We will introduce the following entities:
- An agent
- An intelligent agent
- A rational agent
We will explain the notions of rationality and bounded rationality. We will discuss different types of environment in which the agent might operate. We will also talk about different agent architectures. Dr. Seemab Latif 3

4 Module 1 Goals On completion of this lesson the student will be able to:
- Understand what an agent is and how an agent interacts with the environment
- Given a problem situation, identify the percepts available to the agent and the actions that the agent can execute
- Understand the performance measures used to evaluate an agent
Dr. Seemab Latif 4

5 Module 1 Goals The student will become familiar with different agent architectures:
- Stimulus-response agents
- State-based agents
- Deliberative / goal-directed agents
- Utility-based agents
The student should be able to analyze a problem situation, identify the characteristics of the environment, and recommend the architecture of the desired agent. Dr. Seemab Latif 5

6 Lesson 1: Introduction to AI
- Definition of AI
- Example Systems
- Approaches to AI
- Brief History
Dr. Seemab Latif 6

7 What is AI? Artificial Intelligence is concerned with the design of intelligence in an artificial device. The term was coined by McCarthy in 1956. Dr. Seemab Latif 7

8 What is AI? Artificial Intelligence is concerned with the design of intelligence in an artificial device. But what is intelligence? With respect to humans: should the device behave as intelligently as a human, or behave in the best possible manner? And is intelligence about thinking, or about acting? Dr. Seemab Latif 8

9 Definitions of AI What to look at:
thought processes/reasoning vs. behavior. How to measure performance: human-like performance vs. ideal performance. Dr. Seemab Latif 9

10 Definitions of AI Definitions of AI vary along two dimensions: what to look at (thought processes/reasoning vs. behavior) and how to measure performance (human-like performance vs. ideal performance, i.e. rationality). These two dimensions give four combinations, described on the next slide. Dr. Seemab Latif 10

11 Approaches to AI The two dimensions give four approaches: systems that think like humans (cognitive science), systems that think rationally (the laws of thought/logic), systems that act like humans (the Turing Test), and systems that act rationally (rational agents).
Acting humanly: the Turing Test approach. To act humanly, the computer would need to possess the following capabilities: natural language processing, to communicate successfully in English; knowledge representation, to store what it knows or hears; automated reasoning, to use the stored information to answer questions and to draw new conclusions; machine learning, to adapt to new circumstances and to detect and extrapolate patterns; computer vision, to perceive objects; and robotics, to manipulate objects and move about.
Thinking humanly: the cognitive modeling approach. If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. There are three ways to do this: introspection (trying to catch our own thoughts as they go by), psychological experiments (observing a person in action), and brain imaging (observing the brain in action). The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind.
Thinking rationally: the "laws of thought" approach. The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises, for example: "Socrates is a man; all men are mortal; therefore, Socrates is mortal." These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic.
Acting rationally: the rational agent approach. An agent is just something that acts. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. In the "laws of thought" approach to AI, the emphasis was on correct inferences. Making correct inferences is sometimes part of being a rational agent, because one way to act rationally is to reason logically to the conclusion that a given action will achieve one's goals and then to act on that conclusion. On the other hand, correct inference is not all of rationality; in some situations there is no provably correct thing to do, but something must still be done. There are also ways of acting rationally that cannot be said to involve inference. Dr. Seemab Latif 11

12 Typical AI Problems Intelligent entities (or "agents") need to be able to do both "mundane" and "expert" tasks. Mundane tasks: planning routes and activities; recognizing (through vision) people and objects; communicating (through natural language); navigating around obstacles on the street. Expert tasks: medical diagnosis; mathematical problem solving; playing games like chess. Dr. Seemab Latif 12

13 What’s Easy and What’s Hard
It has been easier to mechanize many of the high-level tasks we usually associate with "intelligence" in people: symbolic integration, proving theorems, playing chess, medical diagnosis. Dr. Seemab Latif 13

14 What’s Easy and What’s Hard
It has been very hard to mechanize tasks that many animals can do: walking around without running into things; catching prey and avoiding predators; interpreting complex sensory information; modeling the internal states of other animals from their behavior. Dr. Seemab Latif 14

15 Approaches to AI Strong AI aims to build machines that can truly reason and solve problems, that are self-aware, and whose overall intellectual ability is indistinguishable from that of a human being; such machines may be human-like or non-human-like. Excessive optimism in the 1950s and 1960s concerning strong AI has given way to an appreciation of the extreme difficulty of the problem. Dr. Seemab Latif 15

16 Approaches to AI Weak AI deals with the creation of some form of computer-based artificial intelligence that cannot truly reason and solve problems, but can act as if it were intelligent. Weak AI holds that suitably programmed machines can simulate human cognition. Dr. Seemab Latif 16

17 Approaches to AI Applied AI aims to produce commercially viable "smart" systems, such as a security system that is able to recognize the faces of people who are permitted to enter a particular building. Applied AI has already enjoyed considerable success. In cognitive AI, computers are used to test theories about how the human mind works, for example theories about how we recognize faces and other objects, or about how we solve abstract problems. Dr. Seemab Latif 17

18 AI Topics
General algorithms: search, planning, constraint satisfaction.
Applications: game playing, AI and education, distributed agents, decision theory, reasoning with symbolic data.
Core areas: knowledge representation, reasoning, machine learning.
Perception: vision, natural language, robotics.
Uncertainty: probabilistic approaches.
Dr. Seemab Latif 18

19 Limits of AI Today Today's successful AI systems operate in well-defined domains and employ narrow, specialized knowledge. Common-sense knowledge is needed to function in complex, open-ended worlds and to understand unconstrained natural language. Dr. Seemab Latif 19

20 Questions
- Define intelligence.
- What are the different approaches to defining artificial intelligence?
- Suppose you design a machine to pass the Turing Test. What are the capabilities such a machine must have?
- Design ten questions to pose to a man/machine that is taking the Turing Test.
- Will building an artificially intelligent computer automatically shed light on the nature of natural intelligence?
- List five tasks that you think computers should be able to do within the next 5 years.
- List five tasks that computers are unlikely to be able to do in the next 10 years.
Dr. Seemab Latif 20

21 Module 1: Introduction Lesson 2: Intelligent Agents Dr. Seemab Latif 21

22 Agents and Rational Agents
An agent is something that acts. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. Dr. Seemab Latif 22

23 Agent and Environment An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. For example: a human agent has eyes, ears and other organs for sensors, and hands, legs, mouth and other body parts for actuators. A robotic agent might have cameras and infrared sensors for sensors and various motors for actuators. A software agent receives keystrokes, file contents and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files and sending network packets. Dr. Seemab Latif 23

24 Agent Operating in an Environment
Dealing with the ubiquity of uncertainty: environment, sensing, action and knowledge. Dealing with limited resources: computation, memory, communication bandwidth, etc. Dr. Seemab Latif 24

25 Good Behavior: Concept of Rationality
A rational agent is one that does the right thing. The right action is the one that will cause the agent to be most successful. Performance measure: embodies the criterion for success of an agent's behavior. As a general rule, it is better to design the performance measure according to what one actually wants in the environment, rather than according to how one thinks the agent should behave. Dr. Seemab Latif 25

26 Rationality What is rational at any given time depends on four things: the performance measure that defines the criterion of success; the agent's prior knowledge of the environment; the actions that the agent can perform; and the agent's percept sequence to date. Formal definition of a rational agent: "For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has." Dr. Seemab Latif 26
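This definition can be read as a one-line decision rule: pick the action with the highest expected performance given the percepts seen so far. The sketch below is illustrative only; rational_action and expected_performance are hypothetical names, and in real problems this expectation is rarely computable directly.

```python
# Illustrative sketch of the rational-agent decision rule (hypothetical names).
def rational_action(percept_sequence, actions, expected_performance):
    """Pick the action expected to maximize the performance measure.

    expected_performance(action, percept_sequence) is an assumed model that
    encodes the performance measure plus the agent's built-in knowledge.
    """
    return max(actions, key=lambda a: expected_performance(a, percept_sequence))
```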

27 Basics: Properties of an Agent
Situated: senses and acts in dynamic/uncertain environments.
Flexible: reactive (responds to changes in the environment) and pro-active (acts ahead of time).
Autonomous: exercises control over its own actions.
Goal-oriented: purposeful.
Learning: adaptive.
Persistent: a continuously running process.
Social: interacts with other agents/people.
Mobile: able to transport itself.
Personality: character, emotional state.
Dr. Seemab Latif 27

28 Applications of Agents Applications include: information gathering and integration, distributed sensors, e-commerce, distributed virtual organizations, and virtual humans for training and entertainment. This is a rapidly growing area. Dr. Seemab Latif 28

29 The Task Environment The task specifies the goals the agent must achieve. The environment (and agent) jointly determine the information the agent can obtain (percepts) and the actions the agent can perform. Dr. Seemab Latif 29

30 The Task Environment Task environments are the 'problems' to which rational agents are the 'solutions'. A task environment consists of PEAS: Performance measure, Environment, Actuators, Sensors. In designing an agent, the first step must always be to specify the task environment as fully as possible. Dr. Seemab Latif 30

31 The Task Environment: Examples
Taxi driver. Performance measure: safe, fast, legal, comfortable trip, maximize profit. Environment: roads, other traffic, pedestrians, customers. Actuators: steering, accelerator, brake, indicator, horn, display. Sensors: camera, sonar, speedometer, GPS, engine sensors, keyboard, accelerometer.
Medical diagnosis system. Performance measure: healthy patients, minimize costs, lawsuits. Environment: patient, hospital, staff. Actuators: display of questions, tests, diagnoses, treatments, referrals. Sensors: keyboard entry of symptoms, findings, patients' answers.
Dr. Seemab Latif 31
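As a practical aid, a PEAS description can be written down as a simple data structure before any agent code is designed. This is only a sketch; the PEAS class and the exact strings are illustrative, following the taxi-driver example above.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A task-environment specification: Performance, Environment, Actuators, Sensors."""
    agent_type: str
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi_driver = PEAS(
    agent_type="Taxi driver",
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profit"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "indicator", "horn", "display"],
    sensors=["camera", "sonar", "speedometer", "GPS", "engine sensors",
             "keyboard", "accelerometer"],
)
```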

32 Properties of Task Environment-1
Discrete / continuous: if there are a limited number of distinct, clearly defined states of the environment, the environment is discrete; otherwise it is continuous.
Observable / partially observable: if it is possible in principle to determine the complete state of the environment at each time point, it is observable; otherwise it is only partially observable.
Static / dynamic: if the environment only changes as a result of the agent's actions, it is static; otherwise it is dynamic.
Deterministic / nondeterministic: if the future state of the environment can be predicted in principle given the current state and the set of actions which can be performed, it is deterministic; otherwise it is nondeterministic.
Dr. Seemab Latif 32

33 Properties of Task Environment-2
Single agent / multiple agents: the environment may contain other agents, which may be of the same kind as the agent or of different kinds.
Known / unknown: in a known environment, the outcomes for all actions are given. If the environment is unknown, the agent will have to learn how it works in order to make good decisions.
Episodic / sequential: in an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action, and the next episode does not depend on the actions taken in previous episodes. In sequential environments, on the other hand, the current decision could affect all future decisions.
Dr. Seemab Latif 33
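These dimensions can also be recorded as a small data structure when analyzing a problem. The sketch below is illustrative; the field names are not standard, and the taxi-driving classification follows the usual textbook analysis (partially observable, multi-agent, stochastic, sequential, dynamic, continuous).

```python
from dataclasses import dataclass

@dataclass
class TaskEnvironmentProperties:
    observable: str       # "fully" or "partially"
    agents: str           # "single" or "multi"
    deterministic: bool   # False = stochastic / nondeterministic
    episodic: bool        # False = sequential
    static: bool          # False = dynamic
    discrete: bool        # False = continuous

taxi_driving = TaskEnvironmentProperties(
    observable="partially",
    agents="multi",
    deterministic=False,
    episodic=False,
    static=False,
    discrete=False,
)
```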

34 Examples of task environments and their characteristics
Dr. Seemab Latif 34

35 Structure of Agents Agent = Architecture + Program + State.
Program: implements the agent function, mapping from goals and percepts to actions (and results). State: includes all the internal representations on which the agent program operates. Architecture: the computing device with sensors and actuators that runs the agent program. Dr. Seemab Latif 35
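One way to see this split is as a tiny sense-act loop: the architecture repeatedly reads a percept, runs the program (which may update the state), and executes the resulting action. The sketch below is illustrative; the Agent class and the environment's percept()/execute() methods are assumed interfaces, not any standard API.

```python
# Illustrative split of agent = architecture + program + state.
class Agent:
    def __init__(self, program, state=None):
        self.program = program   # maps (percept, state) -> (action, new state)
        self.state = state       # internal representations the program works on

def run(agent, environment, steps):
    """The 'architecture': a loop that senses, runs the program, and acts."""
    for _ in range(steps):
        percept = environment.percept()                        # assumed interface
        action, agent.state = agent.program(percept, agent.state)
        environment.execute(action)                            # assumed interface
```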

36 Agent Programs Agent programs take the current percept as input from the sensors and return an action to the actuators. Agent programs are described in a simple pseudocode language. The agent program for a simple reflex agent in the two-state vacuum environment:
function REFLEX-VACUUM-AGENT(location, status) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
Dr. Seemab Latif 36
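For concreteness, here is a direct Python rendering of the pseudocode above. It is a sketch only; the percept is assumed to be a (location, status) pair, as in the two-square vacuum world.

```python
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
```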

37 Agent architecture One way of making the agent programming problem more tractable is to make use of the notion of an agent architecture. The notion of "agent architecture" is ubiquitous in the agent literature but is not well analysed; it is often discussed in the context of an agent programming language or platform. The architecture is viewed as some sort of computing device with physical sensors and actuators. Dr. Seemab Latif 37

38 The Architecture as a Virtual Machine
An architecture: defines a (real or virtual) machine which runs the agent program; defines the atomic operations of the agent program and implicitly determines the components of the agent; and determines which operations happen automatically, without the agent program having to do anything, e.g. the interaction between memory, learning and reasoning. Dr. Seemab Latif 38

39 Properties of Architectures
Architectures have higher-level properties that determine their suitability for a task environment. Choosing an appropriate architecture can make it much easier to develop an agent program for a particular task environment. Dr. Seemab Latif 39

40 Types of Agents There are five basic kinds of agent programs:
- Simple reflex agents
- Model-based reflex agents
- Goal-based agents
- Utility-based agents
- Learning agents
Dr. Seemab Latif 40

41 Simple Reflex Agents These agents select actions on the basis of the current percept, ignoring the rest of the percept history. (Schematic: sensors tell the agent "what the world is like now"; condition-action rules determine "what action I should do now"; actuators carry the action out in the environment.) Dr. Seemab Latif 41

42 A Simple Reflex Agent: Program
function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
Simple reflex behaviors occur even in more complex environments. Imagine yourself as the driver of the automated taxi. If the car in front brakes and its brake lights come on, then you should notice this and initiate braking. In other words, some processing is done on the visual input to establish the condition we call "the car in front is braking." Then this triggers some established connection in the agent program to the action "initiate braking." We call such a connection a condition-action rule, written as: if car-in-front-is-braking then initiate-braking. The agent in Figure 2.10 will work only if the correct decision can be made on the basis of only the current percept, that is, only if the environment is fully observable. It acts according to a rule whose condition matches the current state, as defined by the percept. Dr. Seemab Latif 42
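A Python sketch of the same structure is shown below, using the braking example from the notes. The rule table and interpret_input are placeholders a designer would have to supply; the names are illustrative, not part of any library.

```python
def interpret_input(percept):
    """Abstract the raw percept into a state description (placeholder)."""
    return "car-in-front-is-braking" if percept.get("brake_lights_on") else "road-clear"

RULES = {  # condition-action rules
    "car-in-front-is-braking": "initiate-braking",
    "road-clear": "keep-driving",
}

def simple_reflex_agent(percept, rules=RULES):
    state = interpret_input(percept)
    return rules[state]

print(simple_reflex_agent({"brake_lights_on": True}))   # -> initiate-braking
```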

43 Model-based Reflex Agent
(Schematic: sensors feed "what the world is like now" into an internal state, which is updated using knowledge of "how the world evolves" and "what my actions do"; condition-action rules then determine "what action I should do now", which the actuators carry out in the environment.) Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program. First, we need some information about how the world evolves independently of the agent, for example that an overtaking car generally will be closer behind than it was a moment ago. Second, we need some information about how the agent's own actions affect the world, for example that when the agent turns the steering wheel clockwise, the car turns to the right, or that after driving for five minutes northbound on the freeway, one is usually about five miles north of where one was five minutes ago. This knowledge about "how the world works", whether implemented in simple Boolean circuits or in complete scientific theories, is called a model of the world. An agent that uses such a model is called a model-based agent. Dr. Seemab Latif 43

44 Model-based Reflex Agent: Program
function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              model, a description of how the next state depends on the current state and action
              rules, a set of condition-action rules
              action, the most recent action, initially none
  state ← UPDATE-STATE(state, action, percept, model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now. That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. The agent keeps track of the current state of the world using an internal model, and then chooses an action in the same way as the reflex agent. Dr. Seemab Latif 44
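Below is a hedged Python sketch of the same idea. update_state and rule_match are stubs that a real agent would replace with a proper world model and rule matcher; only the overall shape, persistent state updated from the last action and the new percept, mirrors the pseudocode.

```python
class ModelBasedReflexAgent:
    def __init__(self, model, rules):
        self.model = model     # how the next state depends on state and action
        self.rules = rules     # condition-action rules
        self.state = {}        # current conception of the world state
        self.action = None     # most recent action, initially none

    def __call__(self, percept):
        self.state = self.update_state(self.state, self.action, percept, self.model)
        rule = self.rule_match(self.state, self.rules)
        self.action = self.rules[rule]
        return self.action

    def update_state(self, state, action, percept, model):
        # Stub: fold the new percept (and the predicted effect of the last
        # action, via the model) into the internal state.
        return {**state, **percept}

    def rule_match(self, state, rules):
        # Stub: return the condition that matches the current state.
        return "car-in-front-is-braking" if state.get("brake_lights_on") else "road-clear"
```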

45 Goal-based Agents (Schematic: in addition to the internal state and world model, the agent keeps an explicit goal; it predicts "what it will be like if I perform action A" and chooses the action that leads toward the goal.) Knowing something about the current state of the environment is not always enough to decide what to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where the taxi is trying to get to. Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified. If it starts to rain, the agent can update its knowledge of how effectively its brakes will operate; this will automatically cause all of the relevant behaviors to be altered to suit the new conditions. For the reflex agent, on the other hand, we would have to rewrite many condition-action rules. The goal-based agent's behavior can easily be changed to go to a different destination, simply by specifying that destination as the goal. The reflex agent's rules for when to turn and when to go straight will work only for a single destination; they must all be replaced to go somewhere new. Dr. Seemab Latif 45
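The junction example can be sketched as code: predict each action's result with the world model and keep an action whose result satisfies the goal. All names here (predict, goal_test) are illustrative placeholders; real goal-based agents usually need search or planning rather than a one-step lookahead.

```python
def goal_based_action(state, actions, predict, goal_test):
    """One-step lookahead: return an action whose predicted result meets the goal."""
    for action in actions:
        if goal_test(predict(state, action)):
            return action
    return None  # no single action reaches the goal; search/planning is needed

actions = ["turn-left", "turn-right", "go-straight"]
destinations = {"turn-left": "airport", "turn-right": "mall", "go-straight": "suburbs"}
predict = lambda state, action: {"location": destinations[action]}
goal_test = lambda state: state["location"] == "airport"
print(goal_based_action({}, actions, predict, goal_test))  # -> turn-left
```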

46 Utility-based Agents (Schematic: like the goal-based agent, but the predicted outcome of each action is scored by a utility function, "how happy I will be in that state", and the agent chooses the action with the best score.) Goals alone are not enough to generate high-quality behavior in most environments. For example, many action sequences will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others. Goals just provide a crude binary distinction between "happy" and "unhappy" states. A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent. If the internal utility function and the external performance measure are in agreement, then an agent that chooses actions to maximize its utility will be rational according to the external performance measure. Dr. Seemab Latif 46
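Replacing the binary goal test with a numeric score gives the utility-based version. The routes, timings and utility weights below are made up for illustration; the point is only the argmax over predicted outcomes.

```python
def utility_based_action(state, actions, predict, utility):
    """Choose the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda action: utility(predict(state, action)))

routes = ["highway", "back-roads", "shortcut"]
outcomes = {"highway": (20, 0.99), "back-roads": (35, 0.999), "shortcut": (15, 0.90)}
predict = lambda state, route: outcomes[route]             # (minutes, safety)
utility = lambda outcome: -outcome[0] + 100 * outcome[1]   # trade time against safety
print(utility_based_action({}, routes, predict, utility))  # -> highway
```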

47 Learning-based Agents
(Schematic: the performance element selects actions; the critic compares behavior against a fixed performance standard and sends feedback; the learning element uses this feedback to change the performance element's knowledge; the problem generator suggests exploratory actions.) A learning agent can be divided into four conceptual components. The most important distinction is between the learning element, which is responsible for making improvements, and the performance element, which is responsible for selecting external actions. The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions. The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future. The critic tells the learning element how well the agent is doing with respect to a fixed performance standard. The critic is necessary because the percepts themselves provide no indication of the agent's success. The last component of the learning agent is the problem generator. It is responsible for suggesting actions that will lead to new and informative experiences. The point is that if the performance element had its way, it would keep doing the actions that are best, given what it knows. But if the agent is willing to explore a little and do some perhaps suboptimal actions in the short run, it might discover much better actions for the long run. The problem generator's job is to suggest these exploratory actions. This is what scientists do when they carry out experiments. Dr. Seemab Latif 47
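A toy loop can make the four components concrete. Everything here is illustrative: the 10% exploration rate stands in for the problem generator, the critic is an assumed scoring function against a fixed standard, and learn() is a stub for the learning element.

```python
import random

def learning_agent_step(percept, knowledge, critic):
    # Problem generator: occasionally try an exploratory action.
    if random.random() < 0.1:
        action = random.choice(knowledge["actions"])
    else:
        # Performance element: pick the action the current policy recommends.
        action = knowledge["policy"](percept)
    # Critic: feedback on how well the agent is doing w.r.t. the performance standard.
    feedback = critic(percept, action)
    # Learning element: use the feedback to improve the performance element.
    knowledge = learn(knowledge, percept, action, feedback)
    return action, knowledge

def learn(knowledge, percept, action, feedback):
    # Stub: a real learning element would adjust the policy toward
    # actions that receive better feedback.
    return knowledge
```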

48 Traditional Systems vs. Intelligent Agents
How are agents different from the traditional view of system definition? When sensors are a static input, effectors are a fixed output, the goal is to produce the correct output, and the environment is static or irrelevant, we fall into the category of traditional systems. But for agents: the environment may be dynamic; sensing may be an ongoing situation-assessment process; effectors may require complex planning; and the goal may be defined with respect to the current state of the environment. As a result, deriving the input/output mapping from the goal is not obvious. Dr. Seemab Latif 48

49 A Perspective on the Design of Intelligent Agents
There is no universal approach to the design of an agent. We will be exploring the design space: components and architectures, and different approaches for different classes of problems, for different environments, and for different criteria for success. Dr. Seemab Latif 49

50 Summary-1
An agent is something that perceives and acts in an environment. The agent function for an agent specifies the action taken by the agent in response to any percept sequence. The performance measure evaluates the behaviour of the agent in an environment. A rational agent acts so as to maximize the expected value of the performance measure, given the percept sequence it has seen so far. A task environment specification includes the performance measure, the external environment, the actuators, and the sensors. In designing an agent, the first step must always be to specify the task environment as fully as possible. Task environments vary along several significant dimensions. They can be fully or partially observable, single-agent or multi-agent, deterministic or stochastic, episodic or sequential, static or dynamic, discrete or continuous, and known or unknown. Dr. Seemab Latif 50

51 Summary-2
The agent program implements the agent function. There exists a variety of basic agent-program designs reflecting the kind of information made explicit and used in the decision process. The designs vary in efficiency, compactness, and flexibility. The appropriate design of the agent program depends on the nature of the environment. Simple reflex agents respond directly to percepts, whereas model-based reflex agents maintain internal state to track aspects of the world that are not evident in the current percept. Goal-based agents act to achieve their goals, and utility-based agents try to maximize their own expected "happiness." All agents can improve their performance through learning. Dr. Seemab Latif 51

52 Questions Define an agent. What is a rational agent?
What is bounded rationality? What is an autonomous agent? Describe the salient features of an agent. Dr. Seemab Latif 52

53 Questions Find out about the Mars rover:
- What are the percepts of this agent?
- Characterize the operating environment.
- What are the actions this agent can take?
- How can one evaluate the performance of this agent?
- What sort of agent architecture do you think is suitable for this agent?
Answer the same questions for an internet shopping agent. Dr. Seemab Latif 53

