Agent Architectures and Hierarchical Control


Agent Architectures and Hierarchical Control. Chapter 2 of Poole & Mackworth, Artificial Intelligence: Foundations of Computational Agents.

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment with its effectors (also called actuators). An agent receives percepts one at a time and maps this percept sequence to actions. Purposive agents have preferences: they prefer some states of the world to others, and they act to try to achieve the states they prefer most.

A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth, and other body parts for actuators. A robotic agent has cameras and infrared range finders for sensors and various motors for actuators. A software agent has keystrokes, file contents, and received network packets as sensors, and screen displays, files, and sent network packets as actuators.

An agent system is made up of an agent and its environment. An agent is made up of a body and a controller. The controller receives percepts from the body and sends commands to the body. An embodied agent has a physical body. A robot is an artificial purposive embodied agent. (Figure: percepts flow from the environment through sensors to the agent; actions flow through effectors back to the environment.)

Examples of agents in different types of applications:

- Medical diagnosis system. Percepts: symptoms, findings, patient's answers. Actions: questions, tests, treatments. Goals: healthy patients, minimize costs. Environment: patient, hospital.
- Satellite image analysis system. Percepts: pixels of varying intensity and color. Actions: print a categorization of the scene. Goals: correct categorization. Environment: images from an orbiting satellite.
- Part-picking robot. Percepts: pixels of varying intensity. Actions: pick up parts and sort into bins. Goals: place parts in correct bins. Environment: conveyor belt with parts.
- Refinery controller. Percepts: temperature and pressure readings. Actions: open and close valves; adjust temperature. Goals: maximize purity, yield, and safety. Environment: refinery.
- Interactive English tutor. Percepts: typed words. Actions: print exercises, suggestions, corrections. Goals: maximize student's score on tests. Environment: set of students.

Percept: an agent's perceptual input at any given instant. A percept sequence is the complete history of everything the agent has ever perceived. An agent's choice of action at any given instant can depend on the entire percept sequence observed to date. An agent's behavior is described by the agent function, which maps from percept histories to actions: f: P* → A.
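The agent function f: P* → A can be sketched in code. Below is a minimal, hypothetical example in the style of a vacuum-cleaner agent; the percept values and actions are illustrative, not from the text.

```python
def agent_function(percept_sequence):
    """Map the full percept history to an action (f: P* -> A).

    This trivial policy happens to look only at the latest percept,
    but the function signature allows it to use the whole history.
    """
    latest = percept_sequence[-1]
    return "suck" if latest == "dirty" else "move"

# The agent receives percepts one at a time and acts on the
# sequence observed so far.
history = []
for percept in ["clean", "dirty", "clean"]:
    history.append(percept)
    action = agent_function(history)
```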

The difficulty is that the agent does not know the exact state it is in. Instead, it must maintain a probability distribution, known as the belief state, over the possible states. Example: the belief state for an agent that is following a fixed sequence of instructions may be a program counter that records its current position in the sequence. If there is a finite number of possible belief states, the controller is called a finite state controller or a finite state machine.
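The program-counter example above can be sketched as a finite state controller. The instruction names are hypothetical; the point is that the entire belief state is a single integer index into a fixed sequence.

```python
# A fixed sequence of instructions the agent follows.
instructions = ["forward", "left", "forward", "right"]

def fsm_controller(program_counter, percept):
    """Finite state controller: the belief state is just a program
    counter. The percept is ignored here, since the plan is fixed;
    a richer controller would condition on it.
    Returns (new_belief_state, command)."""
    command = instructions[program_counter % len(instructions)]
    return program_counter + 1, command

# Run a few steps: the belief state advances through the sequence.
state = 0
state, cmd = fsm_controller(state, percept=None)
```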

The idealized hierarchical agent system architecture is a hierarchy of controllers. In the accompanying figure, the unlabeled rectangles represent layers, the double lines represent information flow, and the dotted lines show how the output at one time is the input for the next time.

Each layer sees the layers below it as a virtual body from which it gets percepts and to which it sends commands. There can be multiple features passed from layer to layer and between states at different times.

There are three types of inputs to each layer at each time:

- the features that come from the belief state,
- the features representing the percepts from the layer below in the hierarchy, and
- the features representing the commands from the layer above in the hierarchy.

There are three types of outputs from each layer at each time:

- the higher-level percepts for the layer above,
- the lower-level commands for the layer below, and
- the next values for the belief-state features.
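The three-inputs-to-three-outputs shape of a layer can be sketched as a function. This toy layer simply counts down steps toward a target handed down from above; the specific behavior is an illustrative assumption, not from the text.

```python
def layer_step(belief_state, percept_below, command_above):
    """One time step of a layer: (belief state, percepts from below,
    commands from above) -> (next belief state, percepts for the
    layer above, commands for the layer below).

    percept_below is unused in this toy example; a real layer would
    use it, e.g. for obstacle avoidance."""
    # A new command from above resets the remaining step count.
    steps_left = command_above if command_above is not None else belief_state
    arrived = steps_left <= 0
    percept_up = "arrived" if arrived else "travelling"   # to layer above
    command_down = "stop" if arrived else "step"          # to layer below
    new_belief = steps_left if arrived else steps_left - 1
    return new_belief, percept_up, command_down
```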

Qualitative reasoning is reasoning, often using logic, about qualitative distinctions rather than numerical values for given parameters. Why is qualitative reasoning important? An agent may not know what the exact values are. The reasoning may be applicable regardless of the quantitative values. And an agent needs to do qualitative reasoning to determine which quantitative laws are applicable.

Qualitative reasoning uses discrete values, which can take a number of forms: Landmarks are values that make qualitative distinctions in the individual being modeled. Orders-of-magnitude reasoning involves approximate reasoning that ignores minor distinctions. Qualitative derivatives indicate whether some value is increasing, decreasing, or staying the same.
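A qualitative derivative, the last form above, can be computed from two successive numeric readings. A small sketch, with an assumed tolerance parameter to ignore negligible change:

```python
def qualitative_derivative(previous, current, tolerance=1e-9):
    """Map a numeric change to a qualitative value: is the quantity
    increasing, decreasing, or staying the same? Differences within
    the tolerance are treated as 'steady' (an assumption)."""
    if current > previous + tolerance:
        return "increasing"
    if current < previous - tolerance:
        return "decreasing"
    return "steady"
```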

Consider a delivery robot able to carry out high-level navigation tasks while avoiding obstacles. The delivery robot is required to visit a sequence of named locations in the environment, avoiding obstacles it may encounter.

The middle layer tries to keep traveling toward the current target position while avoiding obstacles. This layer of the controller maintains no internal belief state, so the belief state transition function is vacuous. The middle layer is built on a lower layer that provides a simple view of the robot.

The top layer takes in a plan to execute. It maintains an internal belief state consisting of a list of names of locations that the robot still needs to visit and the coordinates of the current target. The upper level knows about the names of locations, but the lower levels only know about coordinates.
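The top layer's belief state and its name-to-coordinate translation can be sketched as follows. The location names and coordinates are invented for illustration.

```python
# Hypothetical map from named locations (known only to the top
# layer) to coordinates (what the lower layers work with).
coordinates = {"mail room": (0, 0), "lab": (10, 4), "office": (3, 8)}

def top_layer_step(to_visit, arrived):
    """One step of the top layer: if the layer below reports that the
    robot has arrived, pop the next named location off the to-visit
    list and hand its coordinates down as the new target.
    Returns (remaining locations, target coordinates or None)."""
    if arrived and to_visit:
        next_location, rest = to_visit[0], to_visit[1:]
        return rest, coordinates[next_location]
    return to_visit, None

# Execute a plan: visit the lab, then the office.
plan, target = top_layer_step(["lab", "office"], arrived=True)
```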

A model of a world is a representation of the state of the world at a particular time and/or the dynamics of the world. Dead reckoning is the process of calculating an agent's current position by using a previously determined position and advancing that position based upon known or estimated speed, elapsed time, and course. Perception is the use of sensing information to understand the world. Action includes the ability to navigate through the world and manipulate objects. If you want to build robots that live in the world, you must understand these processes.
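Dead reckoning is straightforward to sketch: advance a known position by speed times time along a course. The convention that heading is measured in degrees from the x-axis is an assumption for this example.

```python
import math

def dead_reckon(x, y, heading_deg, speed, elapsed):
    """Advance a previously determined position (x, y) by
    speed * elapsed along the given course. Heading is measured
    counterclockwise from the x-axis, in degrees (an assumption)."""
    distance = speed * elapsed
    x += distance * math.cos(math.radians(heading_deg))
    y += distance * math.sin(math.radians(heading_deg))
    return x, y

# Starting at the origin, heading 90 degrees at 2 m/s for 5 s
# should put the robot about 10 m along the y-axis.
new_x, new_y = dead_reckon(0.0, 0.0, 90.0, 2.0, 5.0)
```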

An embedded agent is one that is run in the real world, where the actions are carried out in a real domain and where the sensing comes from that domain. Embedded mode is how the agent must run to be useful. A simulated agent is one that is run with a simulated body and environment; that is, a program takes in the commands and returns appropriate percepts. This is often used to debug a controller before it is deployed.
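The simulated-agent loop can be sketched as a simple harness: the simulator takes in commands and returns percepts, which lets a controller be debugged before deployment. The toy environment here is an invented example.

```python
def simulate(controller, environment_step, initial_percept, steps):
    """Run a controller against a simulated body/environment.
    environment_step takes a command and returns the next percept."""
    percept, trace = initial_percept, []
    for _ in range(steps):
        command = controller(percept)
        percept = environment_step(command)
        trace.append((command, percept))
    return trace

# Toy simulated environment: position advances by 1 per "step"
# command; the percept is the current position.
position = {"x": 0}

def toy_environment(command):
    if command == "step":
        position["x"] += 1
    return position["x"]

# A controller that steps until it perceives position 3.
trace = simulate(lambda p: "step" if p < 3 else "stop",
                 toy_environment, 0, 5)
```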

An agent system model is one in which there are models of the controller, the body, and the environment that can answer questions about how the agent will act. Such a model can be used to prove properties of agents before they are built, or it can be used to answer hypothetical questions about an agent that may be difficult or dangerous to answer with the real agent.

For an intelligent agent, the belief state can be very complex, even for a single layer. Knowledge is the information about a domain that is used for solving problems in that domain. Knowledge can include general knowledge that can be applied to particular situations. A knowledge-based system is a system that uses knowledge about a domain to act or to solve problems.

Offline and online decomposition of an agent: the knowledge base is the agent's long-term memory, where it keeps the knowledge that is needed to act in the future. The belief state is the agent's short-term memory, which maintains the model of the current environment needed between time steps.

Online, when the agent is acting, the agent uses its knowledge base, its observations of the world, and its goals and abilities to choose what to do and to update its knowledge base. Offline, before the agent has to act, it can build the knowledge base that is useful for it to act online. The goals and abilities are given offline, online, or both, depending on the agent.

Offline, there are three major roles involved with a knowledge-based system: Software engineers build the inference engine and user interface. They typically know nothing about the contents of the knowledge base. Domain experts are the people who have the appropriate prior knowledge about the domain. They know about the domain, but typically they know nothing about the particular case that may be under consideration. Knowledge engineers design, build, and debug the knowledge base in consultation with domain experts. They know about the details of the system and about the domain through the domain expert.

Online, there are three major roles involved: A user is a person who has a need for expertise or has information about individual cases. Users typically are not experts in the domain of the knowledge base. Sensors provide information about the environment. For example, a thermometer is a sensor that can provide the current temperature at the location of the thermometer. Sensors come in two main varieties: passive sensors and active sensors. An external knowledge source, such as a website or a database, can typically be asked questions and can provide the answer for a limited domain. For example, an agent can ask a weather website for the temperature at a particular location.

Overview: An agent system is composed of an agent and an environment. Agents have sensors and actuators to interact with the environment. An agent is composed of a body and interacting controllers. Agents are situated in time and must make decisions about what to do based on their history of interaction with the environment. An agent has direct access not to its history, but to what it has remembered (its belief state) and what it has just observed. At each point in time, an agent decides what to do and what to remember based on its belief state and its current observations. Complex agents are built modularly in terms of interacting hierarchical layers. An intelligent agent requires knowledge, which is acquired at design time, offline, or online.