Rational Agency
CSMC 25000 Introduction to Artificial Intelligence
January 8, 2004

Studying AI
Develop principles for rational agents
– Implement components to construct them
Knowledge representation and reasoning
– What do we know? How do we model it? How do we manipulate it?
– Search, constraint propagation, logic, planning
Machine learning
Applications to perception and action
– Language, speech, vision, robotics


Roadmap
Rational agents
– Defining a situated agent
– Defining rationality
– Defining situations: what makes an environment hard or easy?
– Types of agent programs
  Reflex agents: simple and model-based
  Goal- and utility-based agents
  Learning agents
– Conclusion

Situated Agents
Agents operate in and with the environment
– Use sensors to perceive the environment (percepts)
– Use actuators to act on the environment
Agent function
– Maps a percept sequence to an action
– Conceptually, a table of percepts and actions defines the agent
– Practically, the agent is implemented as a program

Situated Agent Example
Vacuum cleaner:
– Percepts: location (A or B); Dirty or Clean
– Actions: Move Left, Move Right, Vacuum
Agent function as a table:
  A, Clean -> Move Right
  A, Dirty -> Vacuum
  B, Clean -> Move Left
  B, Dirty -> Vacuum
  A, Clean; A, Clean -> Move Right
  A, Clean; A, Dirty -> Vacuum
  ...
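The table above translates directly into code. A minimal sketch of the agent function for the single-percept case (the locations and action strings follow the slide; ignoring percept history is a deliberate simplification):

```python
# Sketch: the vacuum agent function as a percept -> action lookup.
# Only the current (location, status) percept is used, not the history.
AGENT_TABLE = {
    ("A", "Clean"): "Move Right",
    ("A", "Dirty"): "Vacuum",
    ("B", "Clean"): "Move Left",
    ("B", "Dirty"): "Vacuum",
}

def vacuum_agent(percept):
    """Map the current (location, status) percept to an action."""
    return AGENT_TABLE[percept]
```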

What is Rationality?
"Doing the right thing"
– But what is right? What counts as success?
Solution:
– An objective, externally defined performance measure over goals in the environment
– Such measures can be difficult to design
Rational behavior depends on:
– The performance measure, the agent's possible actions, the agent's percept sequence, and the agent's knowledge of the environment

Rational Agent Definition
For each possible percept sequence, a rational agent should act so as to maximize its performance measure, given its percept sequence and its built-in knowledge of the environment.
So is our vacuum agent rational? Check the conditions.
– What if the performance measure were different?

Limits and Requirements of Rationality
Rationality isn't perfection
– The best action given what the agent knows THEN; it can't tell the future
Rationality requires information gathering
– The agent must act to incorporate NEW percepts
Rationality requires learning
– Percept sequences are potentially infinite, so don't hand-code them
– Use learning to add to built-in knowledge and to handle new experiences

Defining Task Environments (PEAS)
– Performance measure
– Environment
– Actuators
– Sensors

Characterizing Task Environments
From simple and artificial to complex and real
Key dimensions:
– Fully observable vs. partially observable
– Deterministic vs. stochastic (or strategic)
– Episodic vs. sequential
– Static vs. dynamic
– Discrete vs. continuous
– Single-agent vs. multi-agent

Examples
– Vacuum cleaner
– Assembly-line robot
– Language tutor
– Waiter robot
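One way to work through such an example is to record its classification along the dimensions above. The entries below for the toy vacuum world are illustrative judgments, not ground truth; real variants of the task can differ:

```python
# Sketch: classifying the toy vacuum world along the standard dimensions.
# These values assume the simple two-square setup described earlier.
vacuum_env = {
    "observable": "fully",   # the agent senses its location and dirt status
    "deterministic": True,   # actions have predictable effects
    "episodic": True,        # handling one square barely affects the next
    "static": True,          # no dirt appears while the agent deliberates
    "discrete": True,        # finite locations, percepts, and actions
    "agents": 1,             # single-agent
}
```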

Agent Structure
Agent = architecture + program
– Architecture: the system of sensors and actuators
– Program: code that maps percepts to actions
All programs take sensor input and produce actuator commands
The most trivial program: tabulate the agent function mapping
– The program is then a table lookup
Why not do this?
– It works, but the table is HUGE: too big to store, learn, or program
– We want a mechanism for rational behavior, not just a table
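The table-lookup program can be sketched as follows; note that the table is keyed by the entire percept sequence, which is exactly why it explodes combinatorially (the keys and the "NoOp" default are illustrative):

```python
def table_driven_agent_factory(table):
    """Return an agent program that looks up the full percept history.

    Sketch only: the table must enumerate every possible percept
    sequence, which is infeasible for any nontrivial environment.
    """
    percepts = []  # accumulated percept sequence

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    return program
```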

Simple Reflex Agents
Act on the single current percept
Condition-action rules relate
– a "state", based on the percept, to
– an "action" for the agent to perform
– "If a then b", e.g., if in(A) and dirty(A), then vacuum
Simple, but VERY limited
– The environment must be fully observable for the rules to be accurate

Model-based Reflex Agents
A solution to partial observability
– Maintain internal state for the parts of the world the agent can't see now
– Update the previous state using
  knowledge of how the world changes on its own (e.g., inertia), and
  knowledge of the effects of the agent's own actions
  => the "model"
On each step:
– New percept + model + old state => new state
– Select a rule and an action based on the new state
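A minimal sketch of this state-update loop for the vacuum world, assuming a toy model in which dirt stays put until vacuumed and vacuuming always cleans (all names and the "NoOp" action are illustrative):

```python
class ModelBasedReflexAgent:
    """Sketch: keep internal state, update it from the model + percept."""

    def __init__(self):
        # What the agent believes about each square it may not see now.
        self.state = {"A": "Unknown", "B": "Unknown"}

    def program(self, percept):
        location, status = percept
        self.state[location] = status  # update state from the new percept
        if status == "Dirty":
            # Model: vacuuming cleans the current square.
            self.state[location] = "Clean"
            return "Vacuum"
        # Use remembered state: visit a square not known to be clean.
        other = "B" if location == "A" else "A"
        if self.state[other] != "Clean":
            return "Move Right" if other == "B" else "Move Left"
        return "NoOp"
```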

Goal-based Agents
Reflexes aren't enough!
– Which way to turn? It depends on where you want to go!
Have a goal: a set of desirable states
– A future state (vs. the current situation, as in reflex agents)
Achieving the goal can be complex
– E.g., finding a route
– Relies on search and planning
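The route-finding example hints at the search machinery goal-based agents rely on. A compact sketch using breadth-first search (the road map in the test is an invented example):

```python
from collections import deque

def find_route(graph, start, goal):
    """Breadth-first search for a path from start to the goal state.

    graph maps each location to the list of locations reachable from it.
    Returns the shortest path as a list, or None if the goal is unreachable.
    """
    frontier = deque([[start]])  # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```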

Utility-based Agents
Issue with goals: they are binary, achieved or not achieved
– We often want something more nuanced: not just reaching a goal state, but doing so faster, cheaper, smoother, ...
Solution: utility
– A utility function maps a state (or state sequence) to a value
– Utilities let the agent select among multiple or conflicting goals
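The selection step reduces to maximizing utility over predicted outcomes. A one-function sketch, where the `result` and `utility` functions are assumptions supplied for illustration:

```python
def choose_action(actions, result, utility):
    """Pick the action whose predicted resulting state has maximal utility.

    result(action) predicts the next state; utility(state) scores it.
    Both are placeholders the agent designer must supply.
    """
    return max(actions, key=lambda a: utility(result(a)))
```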

Learning Agents
Problem: all agent knowledge is pre-coded
– The designer can't, or doesn't want to, anticipate everything
Solution: learning lets the agent handle new states and actions
Components:
– Learning element: makes improvements
– Performance element: picks actions based on percepts
– Critic: gives the learning element feedback on success
– Problem generator: suggests actions that lead to new, informative states

Conclusions
Agents use percepts of the environment to produce actions: the agent function
Rational agents act to maximize their performance measure
Specify the task environment with PEAS:
– Performance measure, environment, actuators, sensors
Agent structures range from simple to complex and more powerful:
– Simple and model-based reflex agents
– Binary-goal and general utility-based agents
– Plus learning
