Artificial Intelligence


Intelligent Agents (Chapter 2)

Outline of this Chapter
- What is an agent?
- Rational agents
- PEAS (Performance measure, Environment, Actuators, Sensors)
- Structure of intelligent agents
- Types of agent program
- Types of environments

What is an agent?
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
- Human agent: sensors: eyes, ears, and other organs; actuators: legs, mouth, and other body parts.
- Robotic agent: sensors: cameras and infrared range finders; actuators: various motors.

Rational Agents
A rational agent is one that does the right thing, based on what it can perceive and the actions it can perform. The right action is the one that causes the agent to be most successful.
Problem: how and when do we evaluate the agent's success? Rationality depends on four things. The first is:
1. Performance measure: defines the degree of success; the criteria that determine how successful the agent is. Example: for a vacuum-cleaner agent, the amount of dirt cleaned up, the amount of electricity consumed, the amount of noise generated, etc. Also important is when success is evaluated, e.g. performance measured over the long run.

Rational Agents
[Diagram: the agent program receives percepts from the environment through sensors and acts on the environment through actuators.]

Rational Agents (cont.)
A rational agent is distinct from an omniscient one (which knows the actual outcome of its actions and acts accordingly): percepts may not supply all relevant information. The remaining three things rationality depends on:
2. Percept sequence: everything that the agent has perceived so far.
3. What the agent knows about the environment.
4. The actions that the agent can perform.
An agent maps any given percept sequence to an action.
Ideal rational agent: one that always takes the action that is expected to maximize its performance measure, given the percept sequence it has seen so far and whatever built-in knowledge the agent has.

Mapping: percept sequences → actions
Mappings describe agents: in principle, we can make a table of the action an agent takes in response to each possible percept sequence. Do we need to create an explicit table with an entry for every possible percept sequence? No. Consider the square-root function on a calculator:
- Percept sequence: a sequence of keystrokes
- Action: display a number on the screen
The mapping is defined implicitly by a short program, not by an explicit table.
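The calculator example can be sketched in code: the same percept-sequence-to-action mapping can be realised either as an explicit table or as a compact program. This is an illustrative sketch; the percept encoding (keystroke strings ending in a `"sqrt"` key) is our own assumption, not from the slides.

```python
import math

# 1. Explicit table: only feasible for tiny percept spaces.
sqrt_table = {
    ("4", "sqrt"): "2.0",
    ("9", "sqrt"): "3.0",
}

def table_agent(percept_sequence):
    """Look the whole percept sequence up in a table of actions."""
    return sqrt_table.get(tuple(percept_sequence), "error")

# 2. Program: implements the same mapping compactly, for any input.
def program_agent(percept_sequence):
    """Compute the action instead of storing one table entry per sequence."""
    digits, key = percept_sequence[:-1], percept_sequence[-1]
    if key != "sqrt":
        return "error"
    return str(math.sqrt(float("".join(digits))))

print(table_agent(["9", "sqrt"]))    # -> 3.0
print(program_agent(["9", "sqrt"]))  # -> 3.0
```

Both agents realise the same mapping on the table's domain, but only the program generalises to unseen percept sequences.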

Autonomy
An agent's behaviour can be based on both its own experience and the built-in knowledge used in constructing the agent for the environment in which it operates. An agent is autonomous to the extent that its behaviour is determined by its own experience, rather than by knowledge of the environment built in by the designer. An AI agent should have some initial knowledge and the ability to learn. A truly autonomous intelligent agent should be able to operate successfully in a wide variety of environments, given sufficient time to adapt.

PEAS
The design of an agent program depends on PEAS: Performance measure, Environment, Actuators, Sensors. We must first specify this setting for intelligent agent design. Consider, for example, the task of designing an automated taxi driver: what are its performance measure, environment, actuators, and sensors?

PEAS Example
The task of designing an automated taxi driver:
- Performance measure: safe, fast, legal, comfortable trip, maximize profits
- Environment: roads, other traffic, pedestrians, customers
- Actuators: steering wheel, accelerator, brake, signal, horn
- Sensors: cameras, sonar, speedometer, odometer, engine sensors, keyboard
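A PEAS description is just structured data, so it can be written down directly. The following is a minimal sketch; the `PEAS` class and its field names are our own choice for illustration, not part of the slides.

```python
from dataclasses import dataclass, field

@dataclass
class PEAS:
    """One PEAS specification: the four lists that frame an agent design."""
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

# The automated-taxi-driver example from the slide:
taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "odometer",
             "engine sensors", "keyboard"],
)
print(taxi.sensors)
```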

Structure of Intelligent Agents
An agent may be a human, a robot, a softbot, etc. The role of AI is to design the agent program: a function that implements the agent's mapping from percepts to actions. This program will run on some sort of computing device, the architecture: a computer or special-purpose hardware that makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated. The relationship can be summed up as:
agent = architecture + program
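The equation "agent = architecture + program" can be made concrete: the architecture is the loop that shuttles percepts and actions, while the program is a pure function from percept to action. The toy environment and all names below are illustrative assumptions.

```python
def run(environment, program, steps):
    """Architecture: repeatedly sense, run the agent program, and act."""
    for _ in range(steps):
        percept = environment.sense()
        action = program(percept)
        environment.act(action)

class CounterWorld:
    """Toy environment: the percept is a counter; 'inc' actions raise it."""
    def __init__(self):
        self.count = 0
    def sense(self):
        return self.count
    def act(self, action):
        if action == "inc":
            self.count += 1

def program(percept):
    """Agent program: keep incrementing until the counter reaches 3."""
    return "inc" if percept < 3 else "noop"

world = CounterWorld()
run(world, program, steps=10)
print(world.count)  # -> 3
```

The same architecture (`run`) can execute any agent program, which is exactly the separation the slide describes.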

Agent Types
How do we build a program to implement the mapping from percepts to actions? Five basic types will be considered:
- Table-driven agents: use a percept-sequence/action table in memory to find the next action; implemented by a (large) lookup table.
- Simple reflex agents: respond immediately to percepts.
- Model-based reflex agents: maintain internal state to track aspects of the world that are not evident in the current percept.
- Goal-based agents: act so as to achieve their goals.
- Utility-based agents: base their decisions on classical utility theory in order to act rationally.

Table-driven agents
A table-driven agent operates by keeping its entire percept sequence in memory and using it to index into a table of actions that contains the appropriate action for every possible sequence. This proposal is doomed to failure: the table needed for something even as restricted as chess would have about 35^100 entries. The agent also has no autonomy at all: the calculation of the best action is entirely built in, so if the environment changes, the agent is lost.

Simple reflex agents in schematic form
[Diagram: sensors determine "what the world is like now"; condition-action rules determine "what action I should do now"; actuators carry out the action in the environment.]
The condition-action rules allow the agent to make the connection from percept to action.

Simple reflex agents
Simple reflex agents can often summarise portions of the lookup table by noting commonly occurring input/output associations, which can be written as condition-action rules:
if {set of percepts} then {set of actions}
In humans, condition-action rules include both learned responses and innate reflexes (e.g., blinking). Example: if it is raining then put up umbrella. Correct decisions must be made solely on the basis of the current percept. Is driving purely a matter of reflex? What happens when making a lane change?

Simple reflex agents (cont.)
function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state <- INTERPRET-INPUT(percept)
  rule <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action
Find the rule whose condition matches the current situation, and do the action associated with that rule.
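The pseudocode above can be sketched as runnable Python. The rule encoding (a list of condition-predicate/action pairs) and the fallback `"noop"` action are our own assumptions for illustration.

```python
def interpret_input(percept):
    """Here the percept is already a usable state description."""
    return percept

def rule_match(state, rules):
    """Return the action of the first rule whose condition matches."""
    for condition, action in rules:
        if condition(state):
            return action
    return "noop"  # assumed fallback when no rule matches

def simple_reflex_agent(percept, rules):
    state = interpret_input(percept)
    return rule_match(state, rules)

# Condition-action rules, including the umbrella example from the slide:
rules = [
    (lambda s: s.get("raining"), "put up umbrella"),
    (lambda s: s.get("sunny"), "wear sunglasses"),
]
print(simple_reflex_agent({"raining": True}, rules))  # -> put up umbrella
```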

Simple Reflex Agent
[Diagram: sensors deliver percepts to a description of the current state; if-then rules select an action; actuators execute it in the environment.]

Simple Reflex Agent Function (vacuum world)
function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
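The vacuum-agent function above is directly runnable; here it is in Python, together with a tiny simulation loop. The two-square world model in the loop is an illustrative assumption added for the demonstration.

```python
def reflex_vacuum_agent(location, status):
    """The two-square vacuum reflex agent from the slide."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

# Simulate: both squares start dirty, agent starts at square A.
world = {"A": "Dirty", "B": "Dirty"}
location = "A"
for _ in range(4):
    action = reflex_vacuum_agent(location, world[location])
    if action == "Suck":
        world[location] = "Clean"
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"
print(world)  # -> {'A': 'Clean', 'B': 'Clean'}
```

After four steps (Suck, Right, Suck, Left) both squares are clean.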

Model-based reflex agents (reflex agents with state)
The current percept is combined with the old internal state to generate an updated description of the current state of the world.

Model-Based Agent
[Diagram: sensors update the current-state description using the previous perceptions, a model of how the world changes, and the impact of the agent's own actions; if-then rules then select the action sent to the actuators.]

Model-based reflex agents (cont.)
function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state <- UPDATE-STATE(state, action, percept)
  rule <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action
The agent keeps track of the current state of the world using an internal model, then chooses an action in the same way as the simple reflex agent. UPDATE-STATE is responsible for creating the new internal state description: it interprets the new percept in the light of existing knowledge about the state, and uses information about how the world evolves to keep track of the unseen parts of the world. Is it enough to decide what to do by knowing only the current state of the environment?
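A runnable sketch of the pseudocode above: a vacuum agent that remembers the status of squares it has already seen, so it can stop once its model says both squares are clean (something the stateless reflex agent cannot do). The state encoding and class name are our own assumptions.

```python
class ModelBasedVacuumAgent:
    """REFLEX-AGENT-WITH-STATE sketch for the two-square vacuum world."""

    def __init__(self):
        self.state = {}     # internal model: last known status per square
        self.action = None  # most recent action, initially none

    def update_state(self, percept):
        """Fold the new percept into the internal world model."""
        location, status = percept
        self.state[location] = status
        return location

    def __call__(self, percept):
        location = self.update_state(percept)
        if self.state[location] == "Dirty":
            self.action = "Suck"
        elif len(self.state) == 2 and all(
                s == "Clean" for s in self.state.values()):
            self.action = "NoOp"  # model says every known square is clean
        else:
            self.action = "Right" if location == "A" else "Left"
        return self.action

agent = ModelBasedVacuumAgent()
print(agent(("A", "Dirty")))  # -> Suck
print(agent(("A", "Clean")))  # -> Right
print(agent(("B", "Clean")))  # -> NoOp
```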

Goal-based agents
Goal-based agents have, in addition to state information, goal information that describes desirable situations; agents of this kind take future events into consideration. The right decision depends on where the agent is trying to get to, so the agent needs goal information as well as the current state description. E.g., the decision to change lanes depends on a goal to go somewhere. This design is flexible: the knowledge that supports its decisions is represented explicitly and can be modified without having to rewrite a large number of condition-action rules (as a reflex agent would require).

An agent with explicit goals
[Diagram: the state, a model of how the world evolves, and knowledge of what the agent's actions do combine to predict "what it will be like if I do action A"; the goals then determine "what action I should do now", which goes to the actuators.]
The agent keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will lead to the achievement of its goals.

Goal-Based Agent
[Diagram: as the model-based agent, but the predicted "state if I do action X" is checked against the goal before the action is selected.]

Goal-based agents (cont.)
Given knowledge of how the world evolves and of how its own actions will affect its state, the agent can determine the consequences of all possible actions, then compare each against its goal to determine which action achieves the goal, and hence which action to take. If a long sequence of actions is required to reach the goal, then Search (Russell & Norvig, Chapters 3-5) and Planning (Chapters 11-13) are the subfields of AI that must be called into action. Are goals alone enough to generate high-quality behaviour? Many action sequences can result in the same goal being achieved, but some are quicker, safer, more reliable, or cheaper than others.
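The one-step case of goal-based selection can be sketched directly: predict the successor state of each candidate action and pick one whose outcome satisfies the goal. The 1-D world model and all names below are illustrative assumptions; reaching goals that need longer action sequences is exactly where search and planning take over.

```python
def successor(state, action):
    """World model: how each action changes a 1-D position."""
    moves = {"left": -1, "right": +1, "stay": 0}
    return state + moves[action]

def goal_based_agent(state, goal, actions):
    """Pick any action whose predicted outcome satisfies the goal."""
    for action in actions:
        if successor(state, action) == goal:
            return action
    return "stay"  # no single action reaches the goal; planning would be needed

print(goal_based_agent(2, 3, ["left", "right", "stay"]))  # -> right
```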

Utility-based agents
Any utility-based agent can be described as possessing a utility function: a function that maps a state (or sequence of states) onto a real number describing the associated degree of happiness. The agent uses the utility to choose between alternative sequences of actions or states that lead to a given goal. A utility function allows rational decisions in two kinds of cases:
- When there are conflicting goals, only some of which can be achieved (e.g., speed and safety).
- When there are several goals that the agent can aim for, none of which can be achieved with certainty.
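The conflicting-goals case can be sketched as follows: a utility function puts speed and safety on one real-valued scale, and the agent takes the action whose predicted outcome scores highest. The taxi-style outcomes, the weights, and all names here are illustrative assumptions.

```python
def utility(speed, safety):
    """Map a predicted (speed, safety) outcome to one real number."""
    return 0.4 * speed + 0.6 * safety  # assumed weights on conflicting goals

# Predicted (speed, safety) outcome of each candidate action:
outcomes = {
    "floor it":       (1.0, 0.2),
    "drive normally": (0.6, 0.9),
    "crawl":          (0.1, 1.0),
}

# Utility-based selection: argmax of utility over predicted outcomes.
best = max(outcomes, key=lambda a: utility(*outcomes[a]))
print(best)  # -> drive normally
```

Neither extreme wins: the utility function trades the goals off, which goals alone cannot express.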

Utility-based agents
[Diagram: the state, a model of how the world evolves, and knowledge of what the agent's actions do predict "what it will be like if I do action A"; the utility function rates "how happy I will be in such a state"; the best-rated action is sent to the actuators.]

Utility-Based Agent
[Diagram: as the goal-based agent, but with a utility component rating the happiness of each predicted "state if I do action X" before the action is selected.]

Learning agents
A learning agent is one whose behaviour improves over time on the basis of its experience.

Learning Agent
[Diagram: a critic compares percepts against a performance standard and gives feedback to the learning element, which makes changes to the performance element's knowledge and sets learning goals; a problem generator suggests exploratory actions; the performance element selects the actions sent to the actuators.]

Machine Learning
Percepts should be used not only for generating the agent's immediate actions, but also for improving its ability to act in the future, i.e., for learning. Learning can correspond to anything from trivial memorisation to the creation of complete scientific theories. Learning problems can be classified into three increasingly difficult classes:
- Supervised learning: learning with a teacher (e.g., the system is told what outputs it should produce for each of a set of inputs).
- Reinforcement learning: learning with limited feedback (e.g., the system must produce outputs for a set of inputs and is only told whether they are good or bad).
- Unsupervised learning: learning with no help (e.g., the system is given a set of inputs and is expected to make some kind of sense of them on its own).
Machine learning systems can be set up to do all three types of learning.

Types of Environments
We have talked about intelligent agents, but little about the environments with which they interact. How is an agent coupled to an environment? The agent performs actions on the environment, which in turn provides percepts to the agent. Key distinctions:
- Fully observable vs. partially observable: the agent's sensors give it access to the complete state of the environment at each point in time. Such environments are convenient, since the agent is freed from the task of keeping track of changes in the environment.
- Deterministic vs. stochastic: the next state of the environment is completely determined by the current state and the action selected by the agent. E.g., taxi driving is stochastic: one can never predict the behaviour of traffic exactly.

Types of Environments (cont.)
- Episodic vs. nonepisodic (sequential): in an episodic environment, subsequent episodes do not depend on what actions occurred in previous episodes, so the agent need not plan ahead. E.g., an agent spotting defective parts on an assembly line bases each decision on the current part, regardless of previous decisions.
- Static vs. dynamic: the environment is unchanged while the agent is deliberating. Static environments are easy to deal with: the agent need not keep looking at the world while deciding on an action.
- Discrete vs. continuous: there is a limited number of distinct, clearly defined percepts and actions. E.g., chess is discrete: there is a fixed number of possible moves on each turn.
- Single-agent vs. multiagent: whether the agent operates by itself in the environment.
Different environment types require somewhat different agent programs to deal with them effectively.

Examples of Environments

Property        Solitaire   Chess (clock)   Internet shopping       Taxi driving
Observable      Yes         Yes             No                      No
Deterministic   Yes         Yes             Partly                  No
Episodic        No          No              No                      No
Static          Yes         Semi            Semi                    No
Discrete        Yes         Yes             Yes                     No
Single-agent    Yes         No              Yes (except auctions)   No

The real world is partially observable, nondeterministic, nonepisodic (sequential), dynamic, continuous, and multi-agent.
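The table above can also be held as data, with a helper that flags the hardest environments, those taking the most difficult value on every property. Only the rows with clean yes/no entries are encoded here (Internet shopping's "Partly"/"Semi" values do not fit booleans); the encoding is our own.

```python
# Each property is True when the environment has the *easier* value
# (observable, deterministic, episodic, static, discrete, single-agent).
environments = {
    "solitaire": {"observable": True, "deterministic": True,
                  "episodic": False, "static": True,
                  "discrete": True, "single_agent": True},
    "chess with clock": {"observable": True, "deterministic": True,
                         "episodic": False, "static": False,
                         "discrete": True, "single_agent": False},
    "taxi driving": {"observable": False, "deterministic": False,
                     "episodic": False, "static": False,
                     "discrete": False, "single_agent": False},
}

def is_hardest(props):
    """True when every property takes its most difficult value."""
    return not any(props.values())

print([name for name, p in environments.items() if is_hardest(p)])
```

As the slide notes, taxi driving (like the real world) is hard on every dimension at once.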

Components of an AI Agent
The components that need to be built into an AI agent:
- A means to infer properties of the world from its percepts.
- Information about the way the world evolves.
- Information about what will happen as a result of its possible actions.
- Utility information indicating the desirability of possible world states and of the actions that lead to them.
- Goals that describe the classes of states whose achievement maximises the agent's utility.
- A mapping from the above forms of knowledge to its actions.
- An active learning system that will improve the agent's ability to perform well.

Conclusion
- An agent perceives and acts in an environment; it has an architecture and is implemented by a program.
- An ideal agent always chooses the action that maximizes its expected performance, given the percept sequence received so far.
- An autonomous agent uses its own experience rather than the built-in knowledge of the environment supplied by the designer.
- An agent program maps from percepts to actions and updates its internal state.

Conclusion (cont.)
- Reflex agents respond immediately to percepts.
- Goal-based agents act in order to achieve their goal(s).
- Utility-based agents maximize their own utility function.
- Representing knowledge is important for successful agent design.
- Some environments are more difficult for agents than others; the most challenging are partially observable, nondeterministic, nonepisodic, dynamic, and continuous.