Chapter 2 Hande AKA

Outline Agents and Environments Rationality The Nature of Environments Agent Types

Agents and Environments An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth, and other body parts for actuators. A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.

Agent function & agent program The term percept refers to the agent's perceptual input at any given instant. The agent's percept sequence is the complete history of everything the agent has ever perceived. The agent function maps any given percept sequence to an action (mathematically, f: P* → A). The agent program is the concrete implementation of that function, running on the agent architecture.
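The distinction can be made concrete with a small Python sketch: a hypothetical table-driven agent program that records the percept sequence and looks it up in a table realizing f: P* → A. The function names and the tiny table are illustrative, not from the slides.

```python
def table_driven_agent_factory(table):
    """Return an agent program closed over its own percept history."""
    percepts = []

    def program(percept):
        percepts.append(percept)
        # The agent function f: P* -> A, realized as a lookup table.
        return table.get(tuple(percepts), "NoOp")

    return program

# Tiny illustrative table for a two-square vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = table_driven_agent_factory(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

The table grows exponentially with the length of the percept sequence, which is why real agent programs compute actions rather than look them up.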

A vacuum-cleaner example The agent perceives which square it is in and whether that square contains dirt (e.g., [A, Dirty]). Possible actions: move right, move left, suck up the dirt, or do nothing.

Rational agents A rational agent is one that does the right thing; right actions cause the agent to be most successful. How can we measure that success?

Performance Measures An objective criterion for the success of an agent's behavior. The performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.
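One such measure can be made concrete: a toy scoring function (an assumed example, not from the slides) that awards one point per clean square per time step, given a history of world snapshots.

```python
def performance(history):
    """Award one point for every clean square at every time step."""
    return sum(snapshot.count("Clean") for snapshot in history)

# Three time steps of a two-square world: square B stays dirty one step longer.
history = [["Dirty", "Dirty"], ["Clean", "Dirty"], ["Clean", "Clean"]]
print(performance(history))  # 3
```

Note that the measure rewards a clean world over time rather than rewarding the act of sucking, so an agent cannot game it by repeatedly dirtying and cleaning the same square.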

Rationality What is rational at any given time depends on: the performance measure; the agent's prior knowledge of the environment; the actions the agent can perform; the agent's percept sequence to date.

Definition of a rational agent For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Rationality ≠ Omniscience An omniscient agent knows the actual outcome of its actions, but omniscience is impossible in reality. Rationality is not the same as perfection.

Rationality Our definition also requires: information gathering/exploration (taking actions in order to modify future percepts); learning (extending prior knowledge with experience); autonomy (compensating for incorrect or incomplete prior knowledge).

The nature of environments To design a rational agent we must first specify its task environment, given by the PEAS description: Performance measure, Environment, Actuators, Sensors.

Automated taxi driver example PEAS description of the task environment. Performance: safety, reaching the destination, profits, legality, comfort. Environment: streets/freeways, other traffic, pedestrians, weather. Actuators: steering, accelerator, brake, horn, speaker/display. Sensors: video, sonar, speedometer, engine sensors, keyboard, GPS.
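A PEAS description is just four lists, so it can be captured in a small data structure; here is one possible sketch using a Python dataclass, populated with the taxi example above.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A task-environment specification: Performance, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safety", "reach destination", "profits", "legality", "comfort"],
    environment=["streets/freeways", "other traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["video", "sonar", "speedometer", "engine sensors", "keyboard", "GPS"],
)
print(taxi.sensors[-1])  # GPS
```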

Examples of agent types and their PEAS description

Environment Types 1. Fully observable vs. partially observable An environment is fully observable when the sensors can detect all aspects that are relevant to the choice of action. An environment may be partially observable because some parts of the state are missing from the sensor data. Example: a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares.

2. Deterministic vs. stochastic If the next state of the environment is completely determined by the current state and the action executed by the agent, the environment is deterministic; otherwise it is stochastic. Taxi driving is stochastic, because one can never predict the behaviour of the traffic exactly. If the environment is deterministic except for the actions of other agents, it is strategic (e.g., playing chess).

3. Episodic vs. sequential In an episodic environment the agent's experience divides into atomic episodes in which the agent perceives and then performs a single action; the next episode does not depend on the actions taken in previous episodes. Example: an agent that has to spot defective parts on an assembly line; its current decision does not affect whether the next part is defective. In sequential environments, the current decision can affect future decisions. Chess is sequential.

4. Static vs. dynamic If the environment can change while the agent is choosing an action, the environment is dynamic; otherwise it is static. If the environment itself does not change with time but the agent's performance score does, it is semidynamic. Taxi driving is dynamic; chess with a clock is semidynamic; crossword puzzles are static.

5. Discrete vs. continuous A chess game is a discrete environment, because it has a finite number of distinct states; it also has a discrete set of percepts and actions. Taxi driving is continuous: the speed and location of the taxi and the other vehicles range over continuous values.

6. Single agent vs. multiagent If the environment contains other agents who are also maximizing some performance measure that depends on the current agent's actions, it is multiagent (e.g., chess, taxi driving).
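The six properties above can be tabulated per task environment. The sketch below classifies the three examples from the preceding slides (crossword, chess with a clock, taxi driving); the dictionary layout itself is just one possible encoding.

```python
# Each environment classified along the six dimensions discussed above.
environments = {
    "crossword puzzle": {
        "observable": "fully", "dynamics": "deterministic",
        "episodes": "sequential", "change": "static",
        "states": "discrete", "agents": "single",
    },
    "chess with a clock": {
        "observable": "fully", "dynamics": "strategic",
        "episodes": "sequential", "change": "semidynamic",
        "states": "discrete", "agents": "multi",
    },
    "taxi driving": {
        "observable": "partially", "dynamics": "stochastic",
        "episodes": "sequential", "change": "dynamic",
        "states": "continuous", "agents": "multi",
    },
}
print(environments["taxi driving"]["change"])  # dynamic
```

Taxi driving lands at the hard end of every dimension, which is why it recurs in the chapter as the stress-test example.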

Examples of task environments and their characteristics

Agent Types All agents have the same skeleton: input = current percepts; output = actions; program = manipulates input to produce output.

Agent Types Four basic kinds of agent programs: simple reflex agents; model-based reflex agents; goal-based agents; utility-based agents. All of these can be turned into learning agents.

Simple reflex agents  These agents select actions on the basis of the current percept,ignoring the rest of the percept history.  Eg. Vacuum agent is a simple reflex agent because its decisions based only on the current location amd whether that contains dirty  Implemented through condition-action rules  If dirty then suck

Simple reflex agents

Model-based reflex agents  Agents should maintain some internal state  E.g. internal state can be the previous frame of the camera.  Over time update state using world knowledge How does the world change? How do actions affect world?

Model-based reflex agents UPDATE-STATE is responsible for creating the new internal state description

Goal-based agents  Knowing about the current state of the environment is not always enough to decide what to do.  The agents needs a goal. E.g. being at passenger’s destination is the goal of an taxi driver.

Utility-based agents  Certain goals can be reached in different ways.  Some are better, have a higher utility.  E.g. There are many action sequences that will get the taxi to its destination but some are quicker,safer or cheaper than others.

Learning agents All the previous agent programs describe methods for selecting actions; learning mechanisms can be used to improve those methods over time.

Learning agents  Learning element –responsible for making improvements  Performance element- Selecting actions based on percepts  Critic- Learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better.  Problem generator- Suggests actions that will lead to new and informative experiences.