Artificial Intelligence Lecture No. 5

Dr. Asad Safi, Assistant Professor, Department of Computer Science, COMSATS Institute of Information Technology.

Summary of Previous Lecture What is an intelligent agent? Agents and environments; PEAS (Performance measure, Environment, Actuators, Sensors); features of intelligent agents.

Today’s Lecture Different types of environments; intelligent-agent examples based on environment type; agent types.

Environments The agent performs actions on the environment, and the environment provides percepts to the agent. The environment determines, to a large degree, the interaction between the “outside world” and the agent. The “outside world” is not necessarily the “real world” as we perceive it; it may be a real or a virtual environment the agent lives in. In many cases environments are implemented within computers, and they may or may not correspond closely to the “real world.”

Properties of environments Fully observable vs. partially observable (also called accessible vs. inaccessible). If an agent’s sensors give it access to the complete state of the environment, we say the environment is fully observable to the agent. An environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action. A fully observable environment is convenient because the agent need not maintain any internal state to keep track of the world.

Properties of environments Deterministic vs. nondeterministic. If the next state of the environment is completely determined by the current state and the actions selected by the agent, we say the environment is deterministic. If the environment is inaccessible, it may appear nondeterministic to the agent, since the unobserved aspects introduce uncertainty.

Properties of task environments Episodic vs. sequential. The agent’s experience is divided into “episodes,” each consisting of the agent perceiving and then acting. Subsequent episodes do not depend on the actions taken in previous episodes. In sequential environments, by contrast, the current action can affect all succeeding actions.

Properties of task environments Static vs. dynamic. If the environment can change while the agent is deciding on an action, we say the environment is dynamic; otherwise it is static. Static environments are easier to deal with, because the agent need not keep observing the environment while it is deliberating. Semidynamic: the environment itself does not change with the passage of time, but the agent’s performance score does (e.g., chess with a clock).

Properties of environments Discrete vs. continuous. If there is a limited number of distinct, clearly defined percepts and actions, we say the environment is discrete. Chess is discrete, since there is a fixed number of possible moves on each turn; taxi driving is continuous.

Properties of environments Single agent vs. multiagent. In a single-agent environment there is only one agent, e.g., software solving a crossword puzzle by itself. In multiagent environments there is more than one active agent, e.g., video games.
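One way to make the six dimensions above concrete is a small record type. The class and the example values below are illustrative, not part of the lecture:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskEnvironment:
    """The six environment properties, recorded as flags."""
    name: str
    fully_observable: bool
    deterministic: bool   # False covers stochastic cases
    episodic: bool        # False means sequential
    static: bool          # False means dynamic (or semidynamic)
    discrete: bool        # False means continuous
    single_agent: bool    # False means multiagent

chess = TaskEnvironment("chess with a clock",
                        fully_observable=True,
                        deterministic=True,   # "strategic": only the opponent adds uncertainty
                        episodic=False,
                        static=False,         # semidynamic: the clock keeps running
                        discrete=True,
                        single_agent=False)

taxi = TaskEnvironment("taxi driving",
                       fully_observable=False, deterministic=False,
                       episodic=False, static=False,
                       discrete=False, single_agent=False)
```

An agent designer can then branch on these flags, e.g., only a non-fully-observable environment forces the agent to maintain internal state.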

Environment Examples

Environment                Observable   Deterministic   Episodic     Static    Discrete     Agents
Chess with a clock         Fully        Strategic       Sequential   Semi      Discrete     Multi
Chess without a clock      Fully        Strategic       Sequential   Static    Discrete     Multi
Poker                      Partially    Strategic       Sequential   Static    Discrete     Multi
Backgammon                 Fully        Stochastic      Sequential   Static    Discrete     Multi
Taxi driving               Partially    Stochastic      Sequential   Dynamic   Continuous   Multi
Medical diagnosis          Partially    Stochastic      Sequential   Dynamic   Continuous   Single
Image analysis             Fully        Deterministic   Episodic     Semi      Continuous   Single
Robot part picking         Partially    Stochastic      Episodic     Dynamic   Continuous   Single
Interactive English tutor  Partially    Stochastic      Sequential   Dynamic   Discrete     Multi

Column legend: fully vs. partially observable; deterministic vs. stochastic/strategic; episodic vs. sequential; static vs. dynamic vs. semidynamic; discrete vs. continuous; single agent vs. multiagent.

Agent types Four basic types, in order of increasing generality: Simple reflex agents; Reflex agents with state/model; Goal-based agents; Utility-based agents.

Simple Reflex Agent Instead of specifying individual mappings in an explicit table, common input-output associations are recorded. This requires processing percepts to achieve some abstraction. A frequent method of specification is condition-action rules: if percept then action, e.g., if car-in-front-is-braking then initiate-braking. This is similar to innate reflexes or learned responses in humans. The implementation is efficient but of limited power: the environment must be fully observable, and the agent easily runs into infinite loops.

Simple reflex agents

Simple Reflex Agent

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules

  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

A simple reflex agent works by finding a rule whose condition matches the current situation and then doing the action associated with that rule.
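The rule-matching loop can be sketched in a few lines of Python. The percept strings, rule table, and helper function below are invented for illustration:

```python
def interpret_input(percept):
    # Abstraction step: in this toy example the percept is already
    # an abstract state description, so nothing needs to be done.
    return percept

# Condition-action rules: state -> action.
RULES = {
    "car-in-front-is-braking": "initiate-braking",
    "road-clear": "accelerate",
}

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    # Rule match + rule action in one lookup; fall back to a no-op.
    return RULES.get(state, "do-nothing")

print(simple_reflex_agent("car-in-front-is-braking"))  # initiate-braking
```

Note that the agent consults only the current percept; any situation the rules do not cover falls through to the default action, which is exactly the limitation the next agent type addresses.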

Reflex agents with state/model Even a little bit of unobservability can cause serious trouble. The braking rule given earlier assumes that the condition car-in-front-is-braking can be determined from the current percept, i.e., the current video image. More advanced techniques require maintaining some kind of internal state in order to choose an action.

Reflex agents with state/model The internal state maintains important information from previous percepts. Since sensors provide only a partial picture of the environment, this helps with some partially observable environments. The internal state reflects the agent’s knowledge about the world; this knowledge is called a model, and it may contain information about how the world changes.

Model-based reflex agents Required information: how the world evolves independently of the agent, e.g., an overtaking car will generally be closer behind than it was a moment ago. The current percept is combined with the old internal state to generate an updated description of the current state.

Model-based reflex agents

Model-based reflex agents

function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none

  state ← UPDATE-STATE(state, action, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
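A minimal Python sketch of the same idea, with invented percepts (distances to the car ahead, in metres) and an invented rule format. The point is that a condition now inspects the internal state, not just the current percept:

```python
class ModelBasedReflexAgent:
    def __init__(self, rules):
        self.state = {}          # internal model of the world
        self.last_action = None
        self.rules = rules       # list of (condition, action) pairs

    def update_state(self, percept):
        # Fold the new percept and the most recent action into the model.
        self.state["previous"] = self.state.get("current")
        self.state["current"] = percept
        self.state["last_action"] = self.last_action

    def __call__(self, percept):
        self.update_state(percept)
        # A condition is a predicate over the whole internal state.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "do-nothing"
        return self.last_action

# Rule: brake if the car ahead got closer since the last percept --
# a fact no single percept can reveal on its own.
rules = [(lambda s: s["previous"] is not None
                    and s["current"] < s["previous"], "initiate-braking")]

agent = ModelBasedReflexAgent(rules)
agent(20)  # 20 m to the car ahead: no history yet, so do-nothing
agent(12)  # now 12 m: it is closing in, so initiate-braking
```

The two-call sequence shows why the internal state matters: the second percept alone (12 m) says nothing about braking; only the comparison with the remembered previous percept does.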

Goal-based agent Merely knowing the current state of the environment is not always enough to decide what to do next. At a road junction, for example, the taxi can turn left, turn right, or go straight; the right decision depends on where the taxi is trying to get to. So goal information is also needed.

Goal-based agent Goal-based agents are far more flexible. If it starts to rain, the agent can adjust to the changed circumstances, since it considers how its actions would affect its goals (remember “doing the right thing”). For a reflex agent we would instead have to rewrite a large number of condition-action rules.

Goal-based agents
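A goal-based agent typically involves search or planning: it looks for an action sequence that reaches the goal and executes the first step. The toy road map, junction names, and breadth-first search below are invented for illustration:

```python
from collections import deque

# Toy road map: junction -> junctions reachable in one step.
ROADS = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["E"],
    "E": [],
}

def goal_based_agent(state, goal):
    """Breadth-first search for a shortest path from state to goal;
    return the first junction to drive to (None if already there
    or the goal is unreachable)."""
    frontier = deque([[state]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path[1] if len(path) > 1 else None
        for nxt in ROADS[path[-1]]:
            frontier.append(path + [nxt])
    return None

print(goal_based_agent("A", "E"))  # -> 'C' (the shorter route runs through C)
```

Changing the goal changes the behavior without touching any rules, which is exactly the flexibility the slides describe: the same agent driven toward "D" instead of "E" would pick a different first step.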

Utility-based agents Goals alone are not really enough to generate high-quality behavior. There are many ways to reach a destination, but some are qualitatively better than others: safer, shorter, or less expensive.

Utility-based agent If one world state is preferred to another, we say it has higher utility for the agent. Utility is a function that maps a state onto a real number: utility: State → ℝ. Any rational agent can be described as possessing a utility function.
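The state → ℝ mapping can be sketched directly. The candidate routes, their attribute values, and the particular weights in the utility function below are all invented; the point is only that the agent ranks outcomes by a real-valued score instead of a yes/no goal test:

```python
# Candidate routes to the same destination: (safety, time in minutes, cost).
routes = {
    "motorway":  (0.90, 30, 12.0),
    "back-road": (0.70, 45,  6.0),
    "toll-road": (0.95, 25, 20.0),
}

def utility(safety, time_min, cost):
    # One possible trade-off: reward safety, penalize time and expense.
    # The weights encode the agent's preferences and are assumptions.
    return 100 * safety - time_min - 2 * cost

best = max(routes, key=lambda r: utility(*routes[r]))
print(best)  # motorway
```

All three routes satisfy the goal "reach the destination"; only the utility function distinguishes them, which is what "some ways are qualitatively better than others" means operationally.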

Utility-based agents

Summary of Today’s Lecture Different types of environments; intelligent-agent examples based on environment; agent types: simple reflex agents, reflex agents with state/model, goal-based agents, utility-based agents.