
1 Rutgers CS440, Fall 2003 Lecture 2: Intelligent Agents Reading: AIMA, Ch. 2

2 Rutgers CS440, Fall 2003 What is an agent?
An agent is an entity in an environment that perceives the environment through sensors and acts upon it through actuators.
[Figure: agent-environment loop – the agent receives percepts from the environment via sensors and produces actions via actuators. Modified from "A Kalman Filter Model of the Visual Cortex", by P. Rao, Neural Computation 9(4):721-763, 1997.]
Examples of agents: human, robot, softbot, thermostat, etc.
Agents act on the environment to achieve a goal.

3 Rutgers CS440, Fall 2003 Agent function & program
The agent's choice of action is based on the sequence of percepts observed so far.
An agent is specified by an agent function f that maps percept sequences Y to actions a: a = f(Y), i.e., a_t = f(y_1, ..., y_t).
The agent program implements the agent function on a physical architecture.
"Easy" solution: a table that maps every possible percept sequence Y to an action a.
Problem: such a table is not feasible – its size grows exponentially with the length of the percept sequence.
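To make the infeasibility concrete, here is a small Python sketch (names such as table_size are mine, not from the lecture): in the two-square vacuum world of the next slide there are only 4 distinct percepts per step, yet a complete lookup table over percept sequences of length T already needs 4^T entries.

```python
from itertools import product

# Hypothetical illustration: percepts of the two-square vacuum world (next slide)
LOCATIONS = ["A", "B"]
STATUSES = ["clean", "dirty"]
PERCEPTS = list(product(LOCATIONS, STATUSES))  # 4 possible percepts per time step

def table_size(sequence_length: int) -> int:
    """Entries a complete lookup table needs to cover every percept
    sequence of exactly `sequence_length` steps."""
    return len(PERCEPTS) ** sequence_length

for T in (1, 5, 10, 50):
    print(f"T={T:>2}: {table_size(T):,} table entries")
```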

4 Rutgers CS440, Fall 2003 Example: Vacuum-cleaner world
[Figure: two adjacent squares, A and B, each of which may be clean or dirty; the vacuum agent occupies one square at a time.]
Percepts: location and contents, e.g., (A, dirty)
Actions: move, clean, or do nothing: LEFT, RIGHT, SUCK, NOP
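A minimal Python sketch of this environment (class and method names are my own, not from the lecture); later sketches reuse it.

```python
import random

class VacuumWorld:
    """Hypothetical two-square vacuum world: squares A and B, each clean or dirty."""

    def __init__(self):
        self.location = random.choice(["A", "B"])
        self.dirty = {"A": random.choice([True, False]),
                      "B": random.choice([True, False])}

    def percept(self):
        """Percept = (location, contents), e.g. ("A", "dirty")."""
        return (self.location, "dirty" if self.dirty[self.location] else "clean")

    def step(self, action):
        """Apply one of the actions LEFT, RIGHT, SUCK, NOP."""
        if action == "SUCK":
            self.dirty[self.location] = False
        elif action == "LEFT":
            self.location = "A"
        elif action == "RIGHT":
            self.location = "B"
        # NOP: do nothing
```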

5 Rutgers CS440, Fall 2003 Vacuum-cleaner world: agent function What is the right function? Can the function be implemented in a “short” program?
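One reasonable candidate, written as a lookup keyed on the most recent percept only – a hypothetical sketch; the mapping below is the standard choice for this world, not something shown on the slide.

```python
# Hypothetical tabulation of one vacuum-cleaner agent function,
# keyed on the most recent percept (location, contents) alone.
VACUUM_AGENT_TABLE = {
    ("A", "dirty"): "SUCK",
    ("A", "clean"): "RIGHT",
    ("B", "dirty"): "SUCK",
    ("B", "clean"): "LEFT",
}

def vacuum_agent(percept):
    """Look up the action for the current percept."""
    return VACUUM_AGENT_TABLE[percept]
```

Because it depends only on the current percept, this particular function does admit a short program – it is exactly the simple reflex agent of slide 20.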

6 Rutgers CS440, Fall 2003 The "right" agent function – rational behavior
A rational agent is one that does the "right thing": its agent-function table is filled out correctly.
What is the "right thing"? Define success through a performance measure r. Possible measures for the vacuum-cleaner world:
– +1 point for each clean square at each time step, over time T
– +1 point for each clean square, -1 point for each move
– -1000 points for more than k dirty squares
Rational agent: an agent that selects an action expected to maximize the performance measure, given its percept sequence so far and its built-in knowledge.
Ideal agent: would maximize actual performance, but that requires omniscience – impossible! A rational agent instead works with a model of the environment.
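The first performance measure above is easy to write down in code; a hypothetical sketch (the function name is mine) that reuses the VacuumWorld sketch from slide 4.

```python
def run_and_score(env, agent, steps=100):
    """Hypothetical scoring loop: +1 for each clean square at each
    time step, accumulated over `steps` steps (time T)."""
    score = 0
    for _ in range(steps):
        env.step(agent(env.percept()))
        score += sum(1 for square in ("A", "B") if not env.dirty[square])
    return score

# Example: env = VacuumWorld(); print(run_and_score(env, vacuum_agent))
```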

7 Rutgers CS440, Fall 2003 Properties of a rational agent
– Maximizes expected performance
– Gathers information – performs actions that modify its future percepts
– Explores – in unknown environments
– Learns – from what it has perceived so far (unlike, say, the dung beetle or the sphex wasp, whose behavior is hard-wired)
– Is autonomous – increases its knowledge by learning rather than relying only on built-in knowledge

8 Rutgers CS440, Fall 2003 Task environment
To design a rational agent we need to specify a task environment – the problem to which the agent is the solution.
P.E.A.S. = Performance measure, Environment, Actuators, Sensors
Example: automated taxi driver
– Performance measure: safe, fast, legal, comfortable, maximize profits
– Environment: roads, other traffic, pedestrians, customers
– Actuators: steering, accelerator, brake, signal, horn
– Sensors: cameras, sonar, speedometer, GPS
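A P.E.A.S. description is just four lists, so it can be recorded as a simple data structure; a hypothetical sketch (the field names are mine).

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Hypothetical container for a P.E.A.S. task-environment description."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

automated_taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS"],
)
```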

9 Rutgers CS440, Fall 2003 More PEAS examples
– College test-taker
– Internet shopping agent
– Mars lander
– The president
– …

10-16 Rutgers CS440, Fall 2003 Properties of task environments

                              Solitaire   Backgammon   Internet shopping   Taxi
Observable (hidden)           Yes         Yes          No                  No
Deterministic (stochastic)    Yes         No           Partly              No
Episodic (sequential)         No          No           No                  No
Static (dynamic)              Yes         Semi         Semi                No
Discrete (continuous)         Yes         Yes          Yes                 No
Single-agent (multi-agent)    Yes         No           No                  No
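These classifications can likewise be recorded in code; a hypothetical sketch (type and field names are mine), shown for the taxi column of the table.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskEnvironmentProperties:
    """Hypothetical record of the six task-environment dimensions above."""
    observable: str      # "Yes" (fully observable) or "No" (hidden state)
    deterministic: str   # "Yes", "Partly", or "No" (stochastic)
    episodic: str        # "Yes" (episodic) or "No" (sequential)
    static: str          # "Yes", "Semi", or "No" (dynamic)
    discrete: str        # "Yes" or "No" (continuous)
    single_agent: str    # "Yes" or "No" (multi-agent)

taxi = TaskEnvironmentProperties(
    observable="No", deterministic="No", episodic="No",
    static="No", discrete="No", single_agent="No",
)
```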

17 Rutgers CS440, Fall 2003 Structure of agents
Goal of AI: given a task environment, construct the agent function and design an agent program that implements it on a particular architecture.
Skeleton agent:
function SKELETON-AGENT(percept_t) returns action_t
  static: state, the agent's memory of the world
  state_t = Update-State(state_t-1, ..., percept_t, action_t-1)
  action_t = Choose-Best-Action(state_t)
  state_t = Update-Memory(state_t, action_t)
  return action_t
(Notation: y_t is the percept, s_t the internal state, and a_t the action at time t.)
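A minimal Python rendering of the skeleton (a sketch with my own naming; the three component functions are deliberately left abstract, as on the slide).

```python
class SkeletonAgent:
    """Hypothetical sketch of the skeleton agent: keeps an internal state
    (its memory of the world) and maps each percept to an action."""

    def __init__(self):
        self.state = None        # the agent's memory of the world, s_t
        self.last_action = None  # a_{t-1}

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept, self.last_action)
        action = self.choose_best_action(self.state)
        self.state = self.update_memory(self.state, action)
        self.last_action = action
        return action

    # Placeholders to be supplied by a concrete agent design.
    def update_state(self, state, percept, last_action): ...
    def choose_best_action(self, state): ...
    def update_memory(self, state, action): ...
```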

18 Rutgers CS440, Fall 2003 Skeleton agent
[Figure: graphical depiction of the agent's operation over time – percept y_t, internal state s_t and action a_t at step t, followed by y_{t+1}, s_{t+1}, a_{t+1}, and so on. We will see more of this graphical notation later in the semester.]

19 Rutgers CS440, Fall 2003 Agent types
Simplest agent – the table-driven agent: for each percept sequence Y it has a table entry with the associated action.
function TABLE-DRIVEN-AGENT(percept) returns action
  static: percepts, the sequence of percepts seen so far
  percepts = Update-Percepts(percepts, percept)
  action = Table(percepts)
  return action
Four basic types, in order of increasing complexity:
1. Simple reflex agent
2. Model-based reflex agent (reflex agent with state)
3. Goal-based agent
4. Utility-driven agent
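A hypothetical Python sketch of the table-driven agent above (the dictionary-based table and all names are mine).

```python
class TableDrivenAgent:
    """Hypothetical sketch: looks up the action for the entire percept
    sequence seen so far in a (potentially enormous) table."""

    def __init__(self, table):
        self.table = table    # maps tuples of percepts to actions
        self.percepts = []    # percept sequence observed so far

    def __call__(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NOP")

# Example with a tiny hand-filled table for the vacuum world:
# agent = TableDrivenAgent({(("A", "dirty"),): "SUCK",
#                           (("A", "dirty"), ("A", "clean")): "RIGHT"})
```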

20 Rutgers CS440, Fall 2003 Simple reflex agent
function REFLEX_VACUUM_AGENT(percept) returns action
  (location, status) = UPDATE_STATE(percept)
  if status = DIRTY then action = SUCK
  else if location = A then action = RIGHT
  else if location = B then action = LEFT
  return action
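The same agent in runnable Python (a sketch; it plugs directly into the hypothetical VacuumWorld sketch from slide 4).

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-square vacuum world:
    acts on the current percept only, with no internal state."""
    location, status = percept
    if status == "dirty":
        return "SUCK"
    elif location == "A":
        return "RIGHT"
    else:  # location == "B"
        return "LEFT"

# Example run against the VacuumWorld sketch:
# env = VacuumWorld()
# for _ in range(5):
#     env.step(reflex_vacuum_agent(env.percept()))
```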

21 Rutgers CS440, Fall 2003 Model-based reflex agent
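Slide 19 characterizes this type as a reflex agent with state. A hypothetical sketch for the vacuum world (the names and the particular piece of state kept are my own, not from the lecture).

```python
class ModelBasedReflexVacuumAgent:
    """Hypothetical sketch: a reflex agent that also keeps an internal model
    of which squares it believes to be clean, so it can stop (NOP) once it
    thinks the whole world is clean."""

    def __init__(self):
        self.known_clean = set()   # squares the agent believes are clean

    def __call__(self, percept):
        location, status = percept
        if status == "dirty":
            self.known_clean.discard(location)
            return "SUCK"
        self.known_clean.add(location)
        if self.known_clean == {"A", "B"}:
            return "NOP"           # model says everything is clean
        return "RIGHT" if location == "A" else "LEFT"
```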

22 Rutgers CS440, Fall 2003 Goal-driven agent

23 Rutgers CS440, Fall 2003 Utility-based agent

24 Rutgers CS440, Fall 2003 Learning agent
[Figure: learning agent architecture – the learning machinery wraps around "any other agent" type, which serves as the performance element.]

