Plans for Today Chapter 2: Intelligent Agents (until break) Lisp: Some questions that came up in lab Resume intelligent agents after Lisp issues
Current Class Numbers By my roster: 3a: 30 students 4a: 15 students
Intelligent Agents Agent: anything that can be viewed as perceiving its environment through sensors and acting upon its environment through effectors [Diagram] Examples: Human, Web search agent, Chess player What are the sensors and effectors for each of these?
Rational Agents Rational Agent: one that does the right thing Criteria: Performance measure Performance measures for a Web search engine? Tic-tac-toe player? Chess player? How does the timing of the performance measurement play a role? (short vs. long term)
Rational Agents Omniscient agent Knows the actual outcome of its actions What information would a chess player need to have to be omniscient? Omniscience is (generally) impossible A rational agent should do the right thing based on the knowledge it has
Rational Agents What is rational depends on four things: Performance measure Percept sequence: everything agent has seen so far Knowledge agent has about environment Actions agent is capable of performing Ideal Rational Agent Does whatever action is expected to maximize its performance measure, based on percept sequence and built-in knowledge
Ideal Mapping The percept sequence is the only varying element of rationality: Percept Sequence → Action In theory, one can build a table mapping each percept sequence to an action [Sample table for a chess board] The ideal mapping describes the behavior of ideal agents
The Mapping Table In most cases, the mapping is explosively large and far too big to write down explicitly In some cases, the mapping can be defined via a specification Example: an agent to sort a list of numbers [Sample table for such an agent; Lisp code]
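Where a specification exists, the whole mapping table collapses into a short program. A minimal sketch in Lisp (the name sort-agent is illustrative, not from the text):

```lisp
;; Instead of tabulating every possible percept sequence, the ideal
;; mapping for a sorting agent is specified as a short program.
(defun sort-agent (percept)
  "Map a percept (a list of numbers) directly to the sorted result."
  (sort (copy-list percept) #'<))

;; (sort-agent '(3 1 2)) => (1 2 3)
```

Note that copy-list keeps the agent from destructively modifying its percept, since sort reuses the cells of the list it is given.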
Autonomy “Independence” A system is autonomous if its behavior is determined by its own experience An alarm that goes off at a prespecified time is not autonomous An alarm that goes off when smoke is sensed is autonomous A system without autonomy lacks flexibility
Structure of Intelligent Agents AI designs the agent program The program runs on some kind of architecture To design an agent program, you need to understand Percepts Actions Goals Environment
Some Sample Agents What are the percepts, actions, goals, and environment for: Chess Player? Web Search Tool? Matchmaker? Musical performer? Environments can be real (web search tool) or artificial (Turing test); the distinction is whether the environment is a simulation or the “real thing” from the agent’s point of view
Agent Programs Some extra Lisp: Persistence of state (static variables) allows a function to keep track of a variable over repeated calls. Put the functions inside a let block:
  (let ((sum 0))
    (defun myfun (x)
      (setf sum (+ sum x)))
    (defun report ()
      sum))
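A self-contained demonstration of this let-over-defun pattern (the names myfun and report come from the slide; the sample calls are added for illustration):

```lisp
;; sum lives in the let's scope and persists across calls,
;; like a static variable in C.
(let ((sum 0))
  (defun myfun (x)
    (setf sum (+ sum x)))
  (defun report ()
    sum))

(myfun 3)
(myfun 4)
;; (report) => 7
```

Because sum is only visible inside the let, no other code can reset or corrupt it; the only access is through myfun and report.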
Specific Agent Example: Pathfinder (Mars Explorer) Percepts: Actions: Goals: Environment: Would table-driven work? Table would take massive memory to store Long time for programmer to build table No autonomy
Four kinds of better agent programs Simple reflex agents Agents that keep track of the world Goal-based agents Utility-based agents
Simple reflex agents Specific response to percepts, i.e., condition-action rules if new-boulder-in-sight then move-towards-new-boulder Pretty easy to implement Requires no knowledge of the past Limits what you can do with it
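The rule above can be sketched in Lisp. The percept representation and the names new-boulder-in-sight and move-towards-new-boulder are illustrative assumptions, not real Pathfinder code:

```lisp
;; A simple reflex agent: the action depends only on the current
;; percept, never on history. Percepts are modeled as plists.
(defun simple-reflex-action (percept)
  (if (getf percept :new-boulder-in-sight)
      'move-towards-new-boulder
      'do-nothing))

;; (simple-reflex-action '(:new-boulder-in-sight t))
;; => MOVE-TOWARDS-NEW-BOULDER
```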
Agents that keep track of the world Maintain an internal state which is adjusted by each percept Example: Internal state: looking for a new boulder, or rolling towards one Affects how Pathfinder will react when seeing a new boulder Rule for action depends on both state and percept Different from reflex, which only depends on percept
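The state-tracking variant can be sketched the same way; here the chosen action depends on both the internal state and the current percept (all names and the plist percept format are illustrative assumptions):

```lisp
;; Internal state: either LOOKING-FOR-BOULDER or ROLLING-TOWARDS-BOULDER.
(defparameter *state* 'looking-for-boulder)

(defun stateful-action (percept)
  (cond ;; Looking and a new boulder appears: start rolling toward it.
        ((and (eq *state* 'looking-for-boulder)
              (getf percept :new-boulder-in-sight))
         (setf *state* 'rolling-towards-boulder)
         'move-towards-new-boulder)
        ;; Already rolling: seeing another boulder does NOT restart
        ;; the chase -- the state changes how the percept is handled.
        ((eq *state* 'rolling-towards-boulder)
         'keep-rolling)
        (t 'scan-horizon)))
```

Calling it twice with the same boulder percept gives two different actions, which is exactly what a stateless reflex agent cannot do.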
Goal-Based Agents Agent continues to receive percepts and maintain state Agent also has a goal Makes decisions based on achieving goal Example Pathfinder goal: reach a boulder If pathfinder trips or gets stuck, can make decisions to reach goal
Utility-Based Agents Goals are not enough – need to know value of goal Is this a minor accomplishment, or a major one? Affects decision making – will take greater risks for more major goals Utility: numerical measurement of importance of a goal A utility-based agent will attempt to make the appropriate tradeoff
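The tradeoff can be made concrete with expected utility: weight each goal's value by its probability of success and pick the best. A hypothetical sketch (the action triples and both function names are made up for illustration):

```lisp
;; Each candidate action is (name success-probability goal-utility).
(defun expected-utility (action)
  (destructuring-bind (name p u) action
    (declare (ignore name))
    (* p u)))

(defun best-action (actions)
  "Pick the action with the highest expected utility."
  (reduce (lambda (a b)
            (if (> (expected-utility a) (expected-utility b)) a b))
          actions))

;; A risky route to a major goal beats a safe route to a minor one:
;; (best-action '((safe-route 0.9 10) (risky-route 0.5 100)))
;; => (RISKY-ROUTE 0.5 100)          ; 50 expected vs. 9 expected
```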
Environments: Accessible vs. Inaccessible Accessible: the agent’s sensors detect all aspects of the environment relevant to deciding an action Inaccessible: some relevant information is not available Examples? Which is more desirable?
Environments: Deterministic vs. Nondeterministic Deterministic: the next state of the environment is completely determined by the current state and the agent’s actions Nondeterministic: there is uncertainty as to the next state If an environment is inaccessible but deterministic, it may appear nondeterministic The agent’s point of view is the important one Examples? Which is more desirable?
Environments: Episodic vs. Nonepisodic Episodic: experience is divided into “episodes” of the agent perceiving and then acting The quality of an action depends only on the episode itself, not on previous episodes Examples? Which is more desirable?
Environments: Static vs. Dynamic Dynamic: Environment can change while agent is thinking Static: Environment does not change while agent thinks Semidynamic: Environment does not change with time, but performance score does Examples? Which is more desirable?
Environments: Discrete vs. Continuous Discrete: percepts and actions are distinct, clearly defined, and often limited in number Continuous: percepts and actions range over smoothly varying values Examples? Which is more desirable?
Why the dot in cons? Two explanations: High level: cons expects a list in the second position Lower level: cons takes a cons cell from the free storage list, puts its first argument in the “first” position and its second argument in the “rest” position, and prints them separated by a dot, unless the “rest” position points to another cons cell (indicating a continuing list)
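A few examples at the REPL make the rule visible:

```lisp
;; The dot appears only when the "rest" slot is not a list.
(cons 1 2)     ; => (1 . 2)  single cons cell; rest is an atom
(cons 1 '(2))  ; => (1 2)    rest is a list, so no dot is printed
(cons 1 nil)   ; => (1)      nil terminates the list
```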
How does append work? It makes a copy of the first list, then points the copy’s last cdr at the second list (the second list itself is not copied) [Picture]
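The copy-then-share behavior can be observed by mutating the inputs afterwards (the variable names are illustrative):

```lisp
;; append copies the first list's cons cells but shares the second's.
(defparameter *a* (list 1 2))
(defparameter *b* (list 3 4))
(defparameter *c* (append *a* *b*))

(setf (first *a*) 99)  ; *a* was copied, so *c* is unaffected
(setf (first *b*) 77)  ; *b* is shared, so *c* sees the change

;; *c* => (1 2 77 4)
```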
How to debug? Can trace function calls with (trace function) and (untrace function) Demonstration with the mystery function from lab At a Break> prompt, can see the call stack with backtrace Can go through code step by step: (step (mystery 2 3)) Use step and next to go through each function as you go along Can also insert (print var) calls to inspect values
Random bits Are Lisp functions pass by value or pass by reference? Lisp passes by value, but all lists are referenced by a pointer, so, as in C++, a function can affect the original list this way. Why the p in (zerop x)? p = predicate NOT true that p = positive
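A sketch of what “the pointer is copied, but the cells are shared” means in practice (clobber-first is a made-up name for illustration):

```lisp
(defun clobber-first (lst)
  ;; lst is a copy of the caller's pointer, but it points to the
  ;; same cons cells, so this mutation is visible to the caller.
  (setf (first lst) 0))

(defparameter *nums* (list 1 2 3))
(clobber-first *nums*)
;; *nums* => (0 2 3)
```

Rebinding lst inside the function (e.g., with setf lst) would NOT affect the caller; only mutating the shared cells does.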
Scoping and binding let declares a scope in which variable bindings are insulated from the outside The usual notions of local and global variables apply if you want to change a global variable from within a function
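One way to see the insulation, assuming a global declared with defparameter (the names *x* and shadow-demo are illustrative):

```lisp
(defparameter *x* 10)      ; global binding

(defun shadow-demo ()
  (let ((*x* 99))          ; new binding, insulated inside the let
    *x*))                  ; => 99 while the let is active

;; (shadow-demo) => 99, but afterwards *x* => 10: the let's binding
;; vanished without touching the global one.
```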
Assignment 2: Vacuum-World Demonstration Percepts: Vacuum cleaner can only detect: Bumped into something? Dirt underneath me? At home location? Actions: forward, turn right, turn left, suck up dirt, turn off
Vacuum-World cont. Goals: Clean up as much dirt as possible with as few moves as possible. Specifically: 100 points for each piece of dirt vacuumed -1 point for each action -1000 if it turns off when not at home You will modify the choose-action function (and possibly others?) New Lisp: try out each function, explaining as you go
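A minimal reflex-style choose-action sketch. The percept layout (bump dirt home) and the exact action names are assumptions read off the slides, not the actual assignment code:

```lisp
(defun choose-action (percept)
  ;; percept is assumed to be (bump dirt home), each t or nil.
  (destructuring-bind (bump dirt home) percept
    (declare (ignore home))
    (cond (dirt 'suck-up-dirt)  ; dirt scores +100, so take it first
          (bump 'turn-right)    ; blocked: turn instead of moving
          (t 'forward))))       ; otherwise keep exploring

;; (choose-action '(nil t nil)) => SUCK-UP-DIRT
```

This sketch never turns off; a better agent would track its position (state!) and issue turn-off only at the home location, since turning off elsewhere costs 1000 points.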