
INTELLIGENT AGENTS
Chapter 2

Outline
- Agents and environments
- Rationality
- PEAS (Performance measure, Environment, Actuators, Sensors)
- Environment types
- Agent types

Agents
- An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
- Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.
- Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.

Agents
[Figure: an agent coupled to its environment through sensors and actuators]

Agents and environments
- The agent function maps from percept histories to actions: f : P* -> A
- The agent program runs on the physical architecture to produce f.
- agent = architecture + program
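
As a minimal sketch of this decomposition (a toy harness; the names run, make_program, and the NoOp placeholder policy are illustrative, not from the slides), in Python:

    # Toy "agent = architecture + program": the architecture feeds one
    # percept at a time to the program; the program realizes f: P* -> A
    # by remembering its own history.
    def run(percepts, agent_program):
        return [agent_program(p) for p in percepts]

    def make_program():
        history = []                  # private percept sequence
        def program(percept):
            history.append(percept)  # the program sees P* incrementally
            return "NoOp"            # placeholder policy
        return program

    print(run([("A", "Dirty"), ("A", "Clean")], make_program()))
    # -> ['NoOp', 'NoOp']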

Vacuum-cleaner world
- Percepts: location and contents, e.g., [A, Dirty]
- Actions: Left, Right, Suck, NoOp

A vacuum-cleaner agent
- What is the right function?
- Can it be implemented in a small agent program?

Rational agents
- An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform.
- The right action is the one that will cause the agent to be most successful.
- Performance measure: an objective criterion for success of an agent's behavior. E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.

Rational agents
- Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
- Rationality is distinct from omniscience (all-knowing with infinite knowledge).

Rational agents
- Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration).
- An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).

PEAS
- PEAS: Performance measure, Environment, Actuators, Sensors
- Consider, e.g., the task of designing an automated taxi driver:
  - Performance measure
  - Environment
  - Actuators
  - Sensors

PEAS
- Consider, e.g., the task of designing an automated taxi driver:
  - Performance measure: safe, fast, legal, comfortable trip, maximize profits
  - Environment: roads, other traffic, pedestrians, customers
  - Actuators: steering wheel, accelerator, brake, signal, horn
  - Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

PEAS
- Agent: medical diagnosis system
  - Performance measure: healthy patient, minimize costs and lawsuits
  - Environment: patient, hospital, staff
  - Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
  - Sensors: keyboard (entry of symptoms, findings, patient's answers)

PEAS
- Agent: part-picking robot
  - Performance measure: percentage of parts in correct bins
  - Environment: conveyor belt with parts, bins
  - Actuators: jointed arm and hand
  - Sensors: camera, joint angle sensors

PEAS
- Agent: interactive English tutor
  - Performance measure: maximize student's score on test
  - Environment: set of students
  - Actuators: screen display (exercises, suggestions, corrections)
  - Sensors: keyboard

Environment types
- Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time.
- Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent (actions are predictable).
- Episodic (vs. sequential): the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.

Environment types
- Static (vs. dynamic): the environment is unchanged while the agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)
- Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions.

Environment types
- Single agent (vs. multiagent): an agent operating by itself in an environment.
- Multiagent environments can be:
  - competitive: chess
  - cooperative: taxi driving

Summary
- Fully observable vs. partially observable: does the agent see the complete state of the environment?
- Deterministic vs. nondeterministic: is there a unique mapping from one state to another state for a given action?
- Episodic vs. sequential: does the next "episode" depend on the actions taken in previous episodes?

Summary
- Static vs. dynamic: can the world change while the agent is thinking?
- Discrete vs. continuous: are the distinct percepts and actions limited or unlimited?

Environment types

                    Chess with a clock   Chess without a clock   Taxi driving
Fully observable    Yes                  Yes                     No
Deterministic       Strategic            Strategic               No
Episodic            No                   No                      No
Static              Semi                 Yes                     No
Discrete            Yes                  Yes                     No
Single agent        No                   No                      No

- The environment type largely determines the agent design.
- The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

Agent functions and programs
- An agent is completely specified by the agent function mapping percept sequences to actions.
- The agent program implements the agent function.
- agent = architecture + program; the architecture is some sort of computing device with physical sensors and actuators.
- The aim of AI is to design the agent program: find a way to implement the rational agent function concisely.

Table-lookup agent

function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action <- LOOKUP(percepts, table)
  return action

The table-driven agent program is invoked for each new percept and returns an action each time. It keeps track of the percept sequence using its own private data structure.
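
A direct, if naive, Python rendering of this pseudocode might look as follows (the sample table entries for the two-square vacuum world are illustrative):

    # Table-driven agent: the growing percept sequence is the lookup key.
    def make_table_driven_agent(table):
        percepts = []                                   # private state
        def program(percept):
            percepts.append(percept)
            return table.get(tuple(percepts), "NoOp")   # LOOKUP(percepts, table)
        return program

    # A few illustrative entries for the two-square vacuum world:
    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
    }
    agent = make_table_driven_agent(table)
    print(agent(("A", "Clean")))   # -> Right
    print(agent(("B", "Dirty")))   # -> Suck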

Table-lookup agent
- Drawbacks:
  - Huge table.
  - It takes a long time to build the table.
  - No autonomy.
  - Even with learning, it would take a long time to learn the table entries.
- Example: let P be the set of possible percepts and T be the lifetime of the agent (the total number of percepts it will receive). The lookup table will contain sum_{t=1..T} |P|^t entries.
- The table for the vacuum agent (VA) would contain more than 4^T entries (the VA has 4 possible percepts).
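
A quick sanity check of that growth rate (the lifetimes chosen here are arbitrary):

    # Entries in the lookup table: sum of |P|^t for t = 1..T.
    def table_entries(num_percepts, lifetime):
        return sum(num_percepts ** t for t in range(1, lifetime + 1))

    for T in (5, 10, 20):
        print(T, table_entries(4, T))
    # T=5 -> 1364; T=10 -> 1398100; T=20 -> about 1.5e12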

Agent program for a vacuum-cleaner agent
The vacuum agent program is very small compared to the corresponding table: it cuts the number of possibilities down from 4^T table entries to 3 condition-action cases. The reduction comes from ignoring the percept history.
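
The program itself, in a Python sketch following the AIMA REFLEX-VACUUM-AGENT:

    # Reflex vacuum agent: the decision uses only the current percept.
    def reflex_vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        else:                          # location == "B"
            return "Left"

    print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
    print(reflex_vacuum_agent(("B", "Clean")))   # -> Left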

Agent types
- Four basic types, in order of increasing generality:
  - Simple reflex agents
  - Model-based reflex agents
  - Goal-based agents
  - Utility-based agents

Simple reflex agents
- Single current percept: the agent selects an action on the basis of the current percept, ignoring the rest of the percept history.
- Example: the vacuum agent (VA) is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt.
- Rules relate a "state", based on the percept, to an "action" for the agent to perform.
- "Condition-action" rule: if a then b, e.g.
  - vacuum agent (VA): if in(A) and dirty(A), then vacuum
  - taxi driving agent (TA): if car-in-front-is-braking, then initiate-braking

Schematic diagram of a simple reflex agent
[Figure]

Simple reflex agent program

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state <- INTERPRET-INPUT(percept)
  rule <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action

A simple reflex agent. It acts according to the rule whose condition matches the current state, as defined by the percept.
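
One way to render this in Python, with the rule representation and interpret_input left as illustrative placeholders:

    # Generic simple reflex agent; rules are (condition, action) pairs.
    def make_simple_reflex_agent(rules, interpret_input):
        def program(percept):
            state = interpret_input(percept)        # INTERPRET-INPUT
            for condition, action in rules:         # RULE-MATCH
                if condition(state):
                    return action                   # RULE-ACTION
            return "NoOp"
        return program

    # Usage: encoding the vacuum agent as condition-action rules.
    rules = [
        (lambda s: s["status"] == "Dirty", "Suck"),
        (lambda s: s["location"] == "A", "Right"),
        (lambda s: s["location"] == "B", "Left"),
    ]
    agent = make_simple_reflex_agent(
        rules, lambda p: {"location": p[0], "status": p[1]})
    print(agent(("B", "Dirty")))   # -> Suck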

Simple reflex agents
- Simple, but VERY limited.
- Limited intelligence: a correct decision can be made only if the environment is fully observable.
- Example: a vacuum agent deprived of its location sensor, having only a dirt sensor (2 possible percepts: [Dirty] and [Clean]). The action for [Dirty] is Suck. What is the action for [Clean]? Moving Left fails forever if the agent happens to start in location A.
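
Following AIMA, one standard escape from such an infinite loop is to randomize the choice; a minimal sketch for this location-blind vacuum agent (here the percept is just the dirt status):

    import random

    # Location-blind reflex vacuum agent: a coin flip instead of a fixed
    # move avoids getting stuck against the same wall forever.
    def randomized_reflex_vacuum_agent(status):
        if status == "Dirty":
            return "Suck"
        return random.choice(["Left", "Right"])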

Model-based reflex agents
- A solution to partial observability problems: maintain state.
  - Keep track of the parts of the world the agent can't see now.
  - Maintain internal state that depends on the percept history.
- Update the previous state based on:
  - knowledge of how the world changes, e.g. TA: an overtaking car will generally be closer behind than it was a moment ago;
  - knowledge of the effects of the agent's own actions, e.g. TA: when the agent turns the steering wheel clockwise, the car turns to the right.
- This knowledge about how the world works is implemented in a model, called the "model of the world".

Schematic diagram of a model-based reflex agent
[Figure]

Model-based reflex agent program

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state <- UPDATE-STATE(state, action, percept)
  rule <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action

A model-based reflex agent. It keeps track of the current state of the world using an internal model, then chooses an action in the same way as the simple reflex agent.
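
In Python, with update_state standing in for the model of how the world evolves (an illustrative sketch, not a full implementation):

    # Model-based reflex agent: internal state persists across percepts.
    def make_model_based_agent(rules, update_state):
        state, action = None, None
        def program(percept):
            nonlocal state, action
            state = update_state(state, action, percept)   # UPDATE-STATE
            for condition, act in rules:                   # RULE-MATCH
                if condition(state):
                    action = act                           # RULE-ACTION
                    return action
            action = "NoOp"
            return action
        return program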

Goal-based agents
- The current state of the environment is not always enough to decide what to do: the taxi can turn left, turn right, or go straight on. The decision depends on where the taxi is trying to get to (for example, the passenger's destination).

Goal-based agents
[Figure]

A model-based, goal-based agent

function GOAL-BASED-AGENT(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          goal, a description of the states the agent wants to reach
          action, the most recent action, initially none
  state <- UPDATE-STATE(state, action, percept)
  rule <- RULE-MATCH(state, rules, goal)
  action <- RULE-ACTION[rule]
  return action

A goal-based agent. It keeps track of the current state of the world using an internal model, as well as a set of goals it is trying to achieve, and chooses an action that leads to the achievement of a goal.
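
The same idea in a Python sketch, with predict() standing in for the world model and all names illustrative:

    # Goal-based choice: pick an action whose predicted outcome
    # satisfies the goal test.
    def goal_based_action(state, actions, predict, goal_test):
        for action in actions:
            if goal_test(predict(state, action)):
                return action
        return "NoOp"

    # Usage: a taxi at an intersection whose goal is to reach 'destination'.
    predict = lambda s, a: {"Left": "destination",
                            "Right": "mall",
                            "Straight": "highway"}[a]
    print(goal_based_action("intersection", ["Left", "Right", "Straight"],
                            predict, lambda s: s == "destination"))   # -> Left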

Utility-based agents
- Goals:
  - Issue: goals are only binary: achieved / not achieved.
  - We want something more nuanced: not just achieving a state, but doing so faster, cheaper, smoother, ...
- Solution: utility.
  - Utility function: state (or percept sequence) -> value.
  - It lets the agent select among multiple or conflicting goals.

Utility-based agents
- Goals just provide a crude distinction between "happy" and "unhappy" states, whereas a more general performance measure should allow a comparison of different world-state sequences.
- Utility is a function that maps a state onto a real number, which describes the associated degree of happiness.
- Utility helps the agent decide when there are conflicting goals (like speed and safety): the right decision is a function of the percept and the goal, weighing quicker, safer, more reliable, and less costly outcomes against each other.

Utility-based agents
[Figure]

A model-based, utility-based agent

function UTILITY-BASED-AGENT(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state <- UPDATE-STATE(state, action, percept)
  allRules <- RULE-MATCH(state, rules, goal)
  bestRule <- UTILITY(allRules)
  action <- RULE-ACTION[bestRule]
  return action

A utility-based agent. It uses a model of the world along with a utility function that measures its preferences among states of the world.
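
A Python sketch of the utility-based choice, with made-up outcome names and utilities:

    # Utility-based choice: rank predicted outcomes by a real-valued
    # utility rather than a binary goal test (numbers are illustrative).
    def utility_based_action(state, actions, predict, utility):
        return max(actions, key=lambda a: utility(predict(state, a)))

    predict = lambda s, a: a          # trivial model: outcome named after action
    utility = {"fast_lane": 0.6, "safe_lane": 0.8, "stop": 0.1}.get
    print(utility_based_action("road", ["fast_lane", "safe_lane", "stop"],
                               predict, utility))   # -> safe_lane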

Learning agents
- All agents can improve their performance through learning.
- Learning allows the agent to cope with states and actions it has not seen before.
- A learning agent can be divided into four conceptual components (sketched below):
  - Learning element: makes improvements.
  - Performance element: selects external actions based on percepts (this was the entire agent in the previous cases).
  - Critic: gives feedback to the learning element about success (it tells the learning element how well the agent is doing with respect to a fixed performance standard).
  - Problem generator: suggests actions that lead to new states (exploration).
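
A structural skeleton of the four components in Python (purely illustrative: the wiring follows the description above, not any fixed API):

    # Learning agent skeleton: the four components as plain callables.
    class LearningAgent:
        def __init__(self, performance, learner, critic, problem_generator):
            self.performance = performance              # selects external actions
            self.learner = learner                      # improves the performance element
            self.critic = critic                        # scores behavior vs. a standard
            self.problem_generator = problem_generator  # proposes exploratory actions

        def step(self, percept):
            feedback = self.critic(percept)             # how well are we doing?
            self.performance = self.learner(self.performance, feedback)
            exploratory = self.problem_generator(percept)
            return exploratory or self.performance(percept)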

Example: learning agent for taxi driving
- The performance element consists of whatever collection of knowledge and procedures the TA has for selecting its driving actions.
- The critic observes the world and passes information along to the learning element. For example, after the taxi makes a quick left turn across three lanes, the critic observes the shocking language used by other drivers. From this experience, the learning element formulates a rule saying this was a bad action, and the performance element is modified by installing the new rule.
- The problem generator may identify certain areas of behavior in need of improvement and suggest experiments, such as testing the brakes on different road surfaces under different conditions.
- The learning element can make changes to any of the knowledge components of the previous agent types, e.g. how the world evolves (observed transitions between states) and what my actions do (observed results of actions).

Learning agents
[Figure]

Summary: exercises
- Define in your own words the following terms: agent, agent function, agent program, rationality, autonomy, reflex agent, model-based agent, goal-based agent, utility-based agent, learning agent.

Solution
- Agent: an entity that perceives and acts (a program that operates on behalf of a human).
- Agent function: a function that specifies the agent's action in response to every possible percept sequence (input: a percept sequence; output: an action).
- Agent program: the program which, combined with a machine architecture, implements an agent function (input: one percept; output: an action).
- Rationality: a property of agents that choose actions that maximize their expected utility, given the percepts to date.
- Autonomy: a property of agents whose behavior is determined by their own experience rather than solely by their initial programming.
- Reflex agent: an agent whose action depends only on the current percept.
- Model-based agent: an agent whose action is derived directly from an internal model of the current world state that is updated over time.
- Goal-based agent: an agent that selects actions that it believes will achieve explicitly represented goals.
- Utility-based agent: an agent that selects actions that it believes will maximize the expected utility of the outcome state.
- Learning agent: an agent whose behavior improves over time based on its experience.
