Presentation on theme: "Artificial Intelligence: Chapter 2" — Presentation transcript:
1 Artificial Intelligence, Chapter 2: Intelligent Agents
Agent: anything that can be viewed as…
- perceiving its environment through sensors
- acting upon its environment through actuators
Examples: Human, Web search agent, Chess player
What are sensors and actuators for each of these?
Draw diagram showing environment, sensors, actuators
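Not on the slide, but a minimal sketch of this perceive-act loop in Lisp; GET-PERCEPT, DO-ACTION, and CHOOSE-ACTION are hypothetical hooks standing in for the sensors, actuators, and agent program:

;; Minimal agent skeleton: repeatedly sense, decide, act.
;; GET-PERCEPT and DO-ACTION are hypothetical hooks into the environment
;; (the sensors and actuators); CHOOSE-ACTION is the agent program.
(defun run-agent (choose-action get-percept do-action steps)
  (loop repeat steps
        do (let* ((percept (funcall get-percept))
                  (action  (funcall choose-action percept)))
             (funcall do-action action))))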
2 Rational Agents
Conceptually: one that does the right thing
Criteria: performance measure
Performance measures for:
- Web search engine?
- Tic-tac-toe player?
- Chess player?
When performance is measured plays a role: short vs. long term
3 Rational Agents
Omniscient agent: knows actual outcome of its actions
What info would chess player need to be omniscient?
Omniscience is (generally) impossible
Rational agent should do right thing based on knowledge it has
4 Rational Agents
What is rational depends on four things:
- Performance measure
- Percept sequence: everything agent has seen so far
- Knowledge agent has about environment
- Actions agent is capable of performing
Rational agent definition: does whatever action is expected to maximize its performance measure, based on percept sequence and built-in knowledge
5 Autonomy
“Independence”
A system is autonomous if its behavior is determined by its percepts (as opposed to built-in prior knowledge)
- An alarm that goes off at a prespecified time is not autonomous
- An alarm that goes off when smoke is sensed is somewhat autonomous
- An alarm that learns over time, via feedback, when smoke is from cooking vs. a real fire is really autonomous
A system without autonomy lacks flexibility
6 The Task Environment
An agent’s rationality depends on:
- Performance measure
- Environment
- Actuators
- Sensors
What are each of these for:
- Chess player?
- Web search tool?
- Matchmaker?
- Musical performer?
7 Environments: Fully Observable vs. Partially Observable
Fully observable: agent’s sensors detect all aspects of the environment relevant to deciding the action
Examples? Which is more desirable?
Crossword puzzle, chess
Might not know what the other agent will do, but the other agent is not considered part of the environment, since it reacts to you
8 Environments: Deterministic vs. Stochastic
Deterministic: next state of environment is completely determined by current state and agent actions
Stochastic: uncertainty as to next state
If environment is partially observable but deterministic, it may appear stochastic
If environment is deterministic except for actions of other agents, it is called strategic
The agent’s point of view is the important one
Examples? Which is more desirable?
9 Environments: Episodic vs. Sequential
Episodic: experience is divided into “episodes” of the agent perceiving then acting. Action taken in one episode does not affect the next one at all.
Sequential: typically means need to do lookahead
Examples? Which is more desirable?
Episodic: answering trivia questions
Sequential: playing chess
Episodic is easier to handle
10 Environments: Static vs. Dynamic
Dynamic: environment can change while agent is thinking
Static: environment does not change while agent thinks
Semidynamic: environment does not change with time, but performance score does
Examples? Which is more desirable?
Chess: static
Chess with a clock: semidynamic
Driving: dynamic
11 Environments: Discrete vs. Continuous
Discrete: percepts and actions are distinct, clearly defined, and often limited in number
Examples? Which is more desirable?
12 Environments: Single agent vs. Multiagent
What is the distinction between the environment and another agent?
For something to be another agent, it must maximize a performance measure that depends on your behavior
Examples?
13 Structure of Intelligent Agents
What does an agent program look like?
Some extra Lisp: persistence of state (static variables)
Allows a function to keep track of a variable over repeated calls.
Put the functions inside a let block:

(let ((sum 0))
  (defun myfun (x)
    (setf sum (+ sum x)))
  (defun report ()
    sum))
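For example (an illustrative session, not from the slides), the closed-over variable persists across calls:

(myfun 3)   ; => 3   SUM starts at 0 and becomes 3
(myfun 4)   ; => 7   SUM persists between calls
(report)    ; => 7   REPORT sees the same closed-over SUM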
15 Table Lookup Agent
In theory, can build a table mapping percept sequence to action
Inputs: percept
Outputs: action
Static variables: percepts, table
Sample table for chess board
In most cases, the mapping is explosively too large to write down explicitly
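A minimal sketch of the table-driven idea in Lisp, using the let-block pattern from the earlier slide; the hash table keyed by the whole percept sequence and the helper ADD-TABLE-ENTRY are assumptions for illustration:

;; Table-driven agent: append each percept to the remembered sequence
;; and look the whole sequence up in a (hypothetical) action table.
(let ((percepts '())
      (table (make-hash-table :test #'equal)))
  (defun table-driven-agent (percept)
    (setf percepts (append percepts (list percept)))
    (gethash percepts table))            ; returns NIL if no entry
  (defun add-table-entry (percept-seq action)
    (setf (gethash percept-seq table) action)))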
17 Specific Agent Example: Pathfinder (Mars Explorer)
Performance Measure:
Environment:
Actuators:
Sensors:
Would table-driven work?
- Table would take massive memory to store
- Long time for programmer to build table
- No autonomy
18 Four kinds of better agent programs
- Simple reflex agents
- Model-based reflex agents
- Goal-based agents
- Utility-based agents
19 Simple reflex agents
Specific response to percepts, i.e., condition-action rules
if new-boulder-in-sight then move-towards-new-boulder
Advantages:
- Pretty easy to implement
- Requires no knowledge of past
Disadvantages:
- Limited in what you can do with it
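A sketch of a simple reflex agent built from condition-action rules like the one above; the extra OBSTACLE-AHEAD rule and the symbol names are illustrative assumptions:

;; Simple reflex agent: the action depends only on the current percept,
;; via condition-action rules.  No internal state, no history.
(defun simple-reflex-agent (percept)
  (cond ((eq percept 'new-boulder-in-sight) 'move-towards-new-boulder)
        ((eq percept 'obstacle-ahead)       'turn)
        (t                                  'do-nothing)))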
20 Model-based reflex agents
Maintain an internal state which is adjusted by each percept
Internal state: looking for a new boulder, or rolling towards one
Affects how Pathfinder will react when seeing a new boulder
Can be used to handle partial observability by use of a model about the world
Rule for action depends on both state and percept
Different from simple reflex, which depends only on the percept
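A sketch of the state-plus-percept rule for the Pathfinder example; the state names and the crude update logic are assumptions for illustration:

;; Model-based reflex agent: keeps internal state updated from each percept,
;; and the chosen action depends on both the state and the percept.
(let ((state 'searching))                        ; SEARCHING or ROLLING-TOWARD-BOULDER
  (defun model-based-reflex-agent (percept)
    ;; Update the internal state from the percept (a crude world model).
    (when (and (eq state 'searching)
               (eq percept 'new-boulder-in-sight))
      (setf state 'rolling-toward-boulder))
    ;; The action rule looks at the state as well as the percept.
    (case state
      (rolling-toward-boulder 'keep-moving-toward-boulder)
      (otherwise              'scan-for-boulders))))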
21 Goal-Based Agents
Agent continues to receive percepts and maintain state
Agent also has a goal
Makes decisions based on achieving the goal
Example
Pathfinder goal: reach a boulder
If Pathfinder trips or gets stuck, it can make decisions to reach the goal
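A deliberately tiny sketch of goal-directed choice, assuming the goal is AT-BOULDER and treating the percept itself as the state update:

;; Goal-based agent: chooses actions by asking whether the goal has been
;; reached, rather than by fixed condition-action rules alone.
(let ((goal 'at-boulder)
      (state 'start))
  (defun goal-based-agent (percept)
    (setf state percept)                 ; trivial state update for the sketch
    (if (eq state goal)
        'stop                            ; goal achieved
        'move-toward-boulder)))          ; otherwise act so as to reach the goal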
22 Utility-Based Agents
Goals are not enough – need to know value of goal
Is this a minor accomplishment, or a major one?
Affects decision making – will take greater risks for more major goals
Utility: numerical measurement of importance of a goal
A utility-based agent will attempt to make the appropriate tradeoff
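A sketch of choosing the action whose predicted outcome scores highest; PREDICT-STATE and UTILITY are hypothetical helper functions supplied by the caller:

;; Utility-based agent: pick the action whose predicted resulting state
;; has the highest numerical utility.
(defun utility-based-agent (percept actions predict-state utility)
  (let ((best-action nil)
        (best-value  nil))
    (dolist (action actions best-action)
      (let ((value (funcall utility (funcall predict-state percept action))))
        (when (or (null best-value) (> value best-value))
          (setf best-value value
                best-action action))))))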