Cooperating Intelligent Systems Intelligent Agents Chapter 2, AIMA.

An agent
An agent perceives its environment through sensors and acts upon that environment through actuators. The agent function f maps percept sequences to actions: f : Percepts* → Actions. [Image borrowed from W. H. Hsu, KSU]
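The agent function above can be sketched as a lookup table from percept sequences to actions. This is a minimal illustrative sketch in Python; the function and variable names (and the "no-op" default) are assumptions, not from the slides.

```python
# A table-driven agent: the agent function f maps each percept
# sequence (the full history of percepts so far) to an action.

def make_table_driven_agent(table):
    percepts = []  # percept history; grows by one entry per step

    def agent(percept):
        percepts.append(percept)
        # Look up the action for the whole percept sequence.
        return table.get(tuple(percepts), "no-op")

    return agent

# Example table for the first two steps of a toy two-square world:
table = {
    (("A", "dirty"),): "suck",
    (("A", "dirty"), ("A", "clean")): "right",
}
agent = make_table_driven_agent(table)
```

The table grows exponentially with the length of the percept sequence, which is why real agent programs compute the action instead of tabulating it.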

Example: Vacuum cleaner world
Two squares, A and B. [Image borrowed from V. Pavlovic, Rutgers]
Percepts: x1(t) ∈ {A, B}, x2(t) ∈ {clean, dirty}
Actions: a1(t) = suck, a2(t) = right, a3(t) = left
This is an example of a reflex agent.
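The reflex vacuum agent above can be sketched in a few lines of Python (an illustrative sketch; the string encodings of percepts and actions are assumptions):

```python
def reflex_vacuum_agent(percept):
    """Reflex agent: the action depends only on the current percept
    (location, status), never on the percept history."""
    location, status = percept
    if status == "dirty":
        return "suck"
    elif location == "A":
        return "right"
    else:  # location == "B"
        return "left"
```

Note the agent keeps no state at all: given the same percept it always returns the same action.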

A rational agent
A rational agent does "the right thing": for each possible percept sequence x(t)...x(0), a rational agent should select the action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. We design the performance measure, S.

Rationality
Rationality ≠ omniscience:
– A rational decision depends on the agent's experiences in the past (up to now), not on expected future experiences or on others' experiences (which are unknown to the agent).
– Rationality means optimizing expected performance; omniscience means perfect knowledge.

Vacuum cleaner world performance measure
[Image borrowed from V. Pavlovic, Rutgers]
A state-defined performance measure (e.g., reward each square that is clean at each time step) vs. an action-defined performance measure (e.g., reward each cleaning action). The action-defined measure does not really lead to good behavior: the agent could score well by letting squares get dirty and cleaning them again.

Task environment
Task environment = the problem to which the agent is a solution.
P – Performance measure: maximize the number of clean cells & minimize the number of dirty cells.
E – Environment: discrete cells that are either dirty or clean. Partially observable, static, deterministic, and sequential. Single-agent environment.
A – Actuators: mouthpiece for sucking dirt; engine & wheels for moving.
S – Sensors: dirt sensor & position sensor.

Some basic agents
– Random agent
– Reflex agent
– Model-based agent
– Goal-based agent
– Utility-based agent
– Learning agent

The random agent
The action a(t) is selected purely at random, without any consideration of the percept x(t). Not very intelligent.

The reflex agent
The action a(t) is selected based only on the most recent percept x(t). No consideration of percept history. Can end up in infinite loops.

The model-based agent
The action a(t) is selected based on the percept x(t) and the current state s(t). The state s(t) keeps track of the past actions and the percept history.

The goal-based agent
The action a(t) is selected based on the percept x(t), the current state s(t), and the expected future set of states. One or more of the states is the goal state.
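A model-based agent for the vacuum world can be sketched as follows. This is an illustrative Python sketch, not from the slides: the internal state s(t) is a small dictionary recording what the agent believes about each square, so it can stop once it believes the whole world is clean.

```python
def make_model_based_vacuum_agent():
    # Internal state s(t): the agent's belief about each square.
    model = {"A": "unknown", "B": "unknown"}

    def agent(percept):
        location, status = percept
        model[location] = status          # update state from the percept
        if all(v == "clean" for v in model.values()):
            return "no-op"                # world believed clean: stop
        if status == "dirty":
            model[location] = "clean"     # sucking will clean this square
            return "suck"
        return "right" if location == "A" else "left"

    return agent
```

Unlike the reflex agent, this one halts instead of oscillating forever between the two squares.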

The utility-based agent
The action a(t) is selected based on the percept x(t) and the utility of future, current, and past states s(t). The utility function U(s(t)) expresses the benefit the agent has from being in state s(t).

The learning agent
The learning agent is similar to the utility-based agent. The difference is that the knowledge parts (i.e., the prediction of future states, the utility, etc.) can now adapt.
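A utility-based agent can be sketched as "pick the action whose predicted successor state has the highest utility U(s)". The sketch below is illustrative and assumes a hypothetical transition model `predict` and utility `utility` for the two-square world; none of these names come from the slides.

```python
def make_utility_based_vacuum_agent(utility, predict):
    """Utility-based agent: choose the action whose predicted
    successor state s(t+1) maximizes U(s)."""
    actions = ["suck", "right", "left"]

    def agent(state):
        return max(actions, key=lambda a: utility(predict(state, a)))

    return agent

# Hypothetical model: a state is (location, dirt), where dirt is the
# frozenset of currently dirty squares.
def predict(state, action):
    location, dirt = state
    if action == "suck":
        return (location, dirt - {location})
    if action == "right":
        return ("B", dirt)
    return ("A", dirt)

def utility(state):
    _, dirt = state
    return -len(dirt)   # fewer dirty squares = higher utility

agent = make_utility_based_vacuum_agent(utility, predict)
```

A one-step lookahead like this is the simplest case; a fuller agent would evaluate the utility of whole sequences of future states.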

Discussion
Exercise 2.2: Both the performance measure and the utility function measure how well an agent is doing. Explain the difference between the two.
They can be the same but do not have to be. The performance measure is used externally to measure the agent's performance. The utility function is used internally (by the agent) to measure, or estimate, its performance. There is always a performance measure but not always a utility function (cf. the random agent).

Exercise
Exercise 2.4: Let's examine the rationality of various vacuum-cleaner agent functions.
a. Show that the simple vacuum-cleaner agent function described in figure 2.3 is indeed rational under the assumptions listed on page 36.
b. Describe a rational agent function for the modified performance measure that deducts one point for each movement. Does the corresponding agent program require internal state?
c. Discuss possible agent designs for the cases in which clean squares can become dirty and the geography of the environment is unknown. Does it make sense for the agent to learn from its experience in these cases? If so, what should it learn?

Exercise 2.4
a. If square A is dirty & square B is clean, the world is clean after one step; no agent can do this quicker. If square A is clean & square B is dirty, the world is clean after two steps; no agent can do this quicker. If both squares are dirty, the world is clean after three steps; no agent can do this quicker. The agent is therefore rational (elapsed time is our performance measure). [Image borrowed from V. Pavlovic, Rutgers]
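The step counts in part (a) can be checked by simulation. This is an illustrative sketch (the helper names are mine, and it assumes the agent starts in square A, as in the case analysis above):

```python
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

def steps_until_clean(start, dirty_squares):
    """Simulate the reflex agent; count steps until both squares are clean."""
    location, dirt = start, set(dirty_squares)
    steps = 0
    while dirt:
        status = "dirty" if location in dirt else "clean"
        action = reflex_vacuum_agent((location, status))
        if action == "suck":
            dirt.discard(location)
        elif action == "right":
            location = "B"
        else:
            location = "A"
        steps += 1
    return steps
```

Starting in A, the simulation reproduces the one-, two-, and three-step cases from the argument above.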

Exercise 2.4
b. The reflex agent will continue moving even after the world is clean. An agent that has memory would do better than the reflex agent if there is a penalty for each move: memory prevents the agent from revisiting squares it has already cleaned. (The environment produces no new dirt; a square that has been cleaned remains clean.) [Image borrowed from V. Pavlovic, Rutgers]
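An agent program with the internal state called for in part (b) can be sketched as below. This is an illustrative Python sketch (the names and the "no-op" action are assumptions): it remembers which squares it knows to be clean and stops moving once both are, thereby avoiding the movement penalty.

```python
def make_stateful_vacuum_agent():
    """Agent with memory for exercise 2.4b: stop once both squares
    are known to be clean."""
    known_clean = set()   # internal state: squares known to be clean

    def agent(percept):
        location, status = percept
        known_clean.add(location)  # after this step the square is clean
        if status == "dirty":
            return "suck"
        if known_clean >= {"A", "B"}:
            return "no-op"         # world clean: stop moving
        return "right" if location == "A" else "left"

    return agent
```

This shows the answer to the exercise's last question: yes, the agent program requires internal state, since the "stop" decision depends on more than the current percept.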

Exercise 2.4
c. If the agent has a very long (infinite) lifetime, then it is better to learn a map. The map can tell where the probability of dirt accumulating is high, and it can record how much time has passed since the agent last visited a certain square, and thus the probability that the square has become dirty. If the agent has a short lifetime, it may just as well wander around randomly (there is no time to build a map). [Image borrowed from V. Pavlovic, Rutgers]