
1 How R&N define AI: systems that think like humans, systems that think rationally, systems that act like humans, systems that act rationally (humanly vs. rationally; thinking vs. acting). Rational Agents

2 Acting Rationally “Doing the right thing” … “that which is expected to maximize goal achievement, given the available information.” Unlike the previous approach (thinking rationally), the process of “acting” rationally doesn't necessarily require “thinking” (a reflex such as blinking fits in here). In AI we call things that act rationally “agents.”

3 One more idea before we “start” AI goes beyond “normal” CS… AI often works on problems that we know are intractable… –Consider chess

4 One more idea before we “start” In the early 1970s someone wrote the following bit of trivia: –If every man, woman, and child on earth were to spend every waking moment playing chess (16 hours per day) at the rate of one game per minute, it would take 146 billion years to use every variation of the first 10 moves.
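
The arithmetic is easy to sanity-check. A minimal Python sketch, assuming an early-1970s world population of about 3.7 billion (the population figure and the "implied branching factor" reading are my assumptions, not part of the original trivia):

# Sanity check of the chess trivia.  Population and the implied-branching
# interpretation are assumptions for illustration only.
population = 3.7e9                            # assumed early-1970s world population
games_per_person_per_year = 16 * 60 * 365    # 16 hours/day, 1 game/minute
years = 146e9

total_games = population * games_per_person_per_year * years
print(f"games played: {total_games:.2e}")     # roughly 1.9e26

# Ten moves is 20 plies; a constant branching factor b gives b**20 variations.
# The b implied by the trivia comes out around 20 moves per position, which is
# a plausible ballpark for chess.
implied_branching = total_games ** (1 / 20)
print(f"implied branching factor per ply: {implied_branching:.1f}")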

5 One more idea before we “start” Given this complexity, how the heck can humans EVER hope to play chess well??? We don't often arrive at the best solutions, but we usually do arrive at solutions that are good enough. This idea of satisficing rather than optimizing when confronted with an intractable problem is central to what AI is about.

6 Agents “An agent is simply something that acts.” An agent is an entity that is capable of perceiving its environment (through sensors) and responding appropriately to it (through actuators).

7 Agents If the agent is intelligent, it should be able to weigh alternatives. “A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.”

8 Agents An agent should be able to derive new information from data by applying sound logical rules. It should possess extensive knowledge in the domain where it is expected to solve problems.

9 Agents We will consider truly intelligent, rational agents as entities which display: –Perception –Persistence –Adaptability –Autonomous control

10 Agents and Environments Agents include humans, robots, softbots, thermostats, etc. The agent function maps from percept histories to actions: f : P* → A. The agent program runs on the physical architecture to produce f.
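
To make the distinction concrete, here is a minimal Python sketch (mine, not R&N's): the agent function is represented literally as a lookup table from percept sequences to actions, and the agent program is the code that runs, receiving one percept at a time. The table entries below are illustrative only.

def make_table_driven_agent(table):
    """Agent *program*: remembers the percept history and looks up the action
    prescribed by the agent *function* (here, an explicit table)."""
    percepts = []                                   # percept history so far
    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")   # default when the table has no entry
    return program

# Illustrative (assumed) table for a two-square vacuum world.
table = {
    (("A", "Dirty"),):                "Suck",
    (("A", "Clean"),):                "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # -> Right
print(agent(("B", "Dirty")))   # -> Suck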

11 Agents and Environments Vacuum-Cleaner World (Figure 2.2)

12 Agents and Environments Vacuum-Cleaner World (Figure 2.3)

14 Agents and Environments Vacuum-Cleaner World
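
Since the vacuum-world figures are not reproduced in this transcript, the following sketch of the standard two-square reflex vacuum agent (locations A and B, actions Suck/Left/Right) may help; it follows the textbook agent, but the exact code is mine:

def reflex_vacuum_agent(percept):
    """Percept is a (location, status) pair; return an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:                         # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left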

15 Rationality A rational agent does the right thing. What is the right thing? One possibility: –The action that will maximize success. But what is success? –The action that maximizes the agent’s goals. Use a performance measure to evaluate the agent’s success. So what would be a good performance measure for the vacuum agent?

16 Rationality Fixed performance measure evaluates the environment sequence –One point per square cleaned up in time T –One point per clean square per time step, minus one per move? –Penalize for more than k dirty squares? A rational agent chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date.
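
The second candidate measure on this slide (“one point per clean square per time step, minus one per move”) can be simulated directly. A rough sketch; the world dynamics (dirt never reappears) and the agent used are assumptions for illustration:

def run_vacuum(agent, steps=10):
    """Score an agent: +1 per clean square per time step, -1 per move."""
    status = {"A": "Dirty", "B": "Dirty"}           # assumed starting state
    location, score = "A", 0
    for _ in range(steps):
        action = agent((location, status[location]))
        if action == "Suck":
            status[location] = "Clean"
        elif action == "Right":
            location, score = "B", score - 1
        elif action == "Left":
            location, score = "A", score - 1
        score += sum(1 for s in status.values() if s == "Clean")
    return score

def simple_agent(percept):                          # the reflex vacuum agent again
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(run_vacuum(simple_agent, steps=10))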

17 Rationality Rational agent definition: “For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.”

18 Rationality Rationality is not –Omniscience –Clairvoyance –Success Rationality implies –Exploration –Learning –Autonomy

19 PEAS To design a rational agent, we must specify the task environment (the “problems” to which rational agents are the “solutions”). –Performance measure –Environment –Actuators –Sensors Example: the task of designing an automated taxi.

20 PEAS Performance measure? Safety, destination, profits, legality, comfort… Environment? US streets/freeways, traffic, pedestrians, weather… Actuators? Steering, accelerator, brake, horn, speaker/display… Sensors? Video, accelerometers, gauges, engine sensors, keyboard, GPS, …
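
For reference, the same PEAS answers can be written down as a small record; the class and field names below are my own, not R&N's:

from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

automated_taxi = PEAS(
    performance_measure=["safety", "destination", "profits", "legality", "comfort"],
    environment=["US streets/freeways", "traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["video", "accelerometers", "gauges", "engine sensors", "keyboard", "GPS"],
)
print(automated_taxi.actuators)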

21 PEAS - Internet news gathering agent Scans Internet news sources to pick interesting items for its customers. Performance measure? Environment? Actuators? Sensors?

23 Environment Types We often describe the environment based on six attributes. –Fully/partially observable –Deterministic/stochastic –Episodic/sequential –Static/dynamic –Discrete/continuous –Single agent/multiagent

24 Environment Types Categorization of environment tasks: –Fully/partially observable extent to which an agent’s sensors give it access to the complete state of the environment –Deterministic/stochastic extent to which the next state of the environment is determined by the current state and the current action

25 Environment Types Categorization of environment tasks: –Episodic/sequential extent to which the agent’s experience is divided into atomic episodes –Static/dynamic extent to which the environment can change while the agent is deliberating

26 Environment Types Categorization of environment tasks: –Discrete/continuous extent to which the state of the environment, time, percepts, and actions of the agent are expressed as a set of discrete values –Single agent/multiagent extent to which the agent’s performance depends on the behavior of other agents in the environment

27 Environment Types

34 The environment type largely determines the agent design The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent
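
Those six properties can be recorded the same way. The sketch below classifies taxi driving / the real world using the values stated on this slide (the class and field names are assumptions):

from dataclasses import dataclass

@dataclass
class EnvironmentType:
    fully_observable: bool    # vs. partially observable
    deterministic: bool       # vs. stochastic
    episodic: bool            # vs. sequential
    static: bool              # vs. dynamic
    discrete: bool            # vs. continuous
    single_agent: bool        # vs. multi-agent

# Taxi driving (the real world): partially observable, stochastic,
# sequential, dynamic, continuous, multi-agent.
taxi_driving = EnvironmentType(False, False, False, False, False, False)
print(taxi_driving)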

35 RoboCup “By the year 2050, develop a team of fully autonomous humanoid robots that can win against the human world soccer champion team.” (www.robocup.org) Develop a PEAS description of the task environment for a RoboCup participant. Include a thorough classification of the environment using R&N’s six properties of task environments.

36 Agent Types Agent = architecture + program –Simple reflex agent –Reflex agent with state –Goal-based agent –Utility-based agent –Learning agent (arguably not a fifth agent type but a different model of any of the previous agents).

37 Agent Types Simple reflex agent
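
The diagram for this slide is not included in the transcript; as a substitute, here is a minimal sketch of the structure it describes: condition-action rules matched against the current percept only, with no memory (the rule representation is my assumption):

def make_simple_reflex_agent(rules):
    """rules: list of (condition, action) pairs; condition is a predicate
    over the *current* percept only -- the agent keeps no internal state."""
    def program(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return "NoOp"
    return program

# Example rules for the two-square vacuum world (illustrative).
rules = [
    (lambda p: p[1] == "Dirty", "Suck"),
    (lambda p: p[0] == "A",     "Right"),
    (lambda p: p[0] == "B",     "Left"),
]
vacuum = make_simple_reflex_agent(rules)
print(vacuum(("A", "Dirty")))   # -> Suck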

38 Agent Types Reflex agent with state
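
A reflex agent with state differs only in that it maintains an internal model, updated from each percept and the last action, and matches its rules against that model. A rough sketch under the same assumptions:

def make_model_based_reflex_agent(update_state, rules, state):
    """Keep internal state; update it from each percept, then rule-match on it."""
    last_action = None
    def program(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
        last_action = "NoOp"
        return "NoOp"
    return program

# Example: remember the last-seen status of each vacuum-world square.
def update(state, last_action, percept):
    location, status = percept
    return {**state, location: status, "at": location}

rules = [
    (lambda s: s[s["at"]] == "Dirty", "Suck"),
    (lambda s: s.get("B") != "Clean", "Right"),
    (lambda s: s.get("A") != "Clean", "Left"),
    (lambda s: True,                  "NoOp"),   # both squares known clean
]
agent = make_model_based_reflex_agent(update, rules, {})
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right (B's status still unknown)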

39 Agent Types Goal-based agent

40 Agent Types Utility-based agent
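
For the utility-based agent, the key idea is choosing among actions by comparing the utility of their predicted outcomes. A toy sketch; the outcome model and utility numbers are invented purely for illustration:

def utility_based_choice(actions, predict_outcome, utility):
    """Pick the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Toy (assumed) outcome model and utilities for the vacuum world.
outcomes  = {"Suck": "square cleaned", "Right": "moved", "Left": "moved"}
utilities = {"square cleaned": 10, "moved": -1}

best = utility_based_choice(
    ["Suck", "Right", "Left"],
    predict_outcome=lambda a: outcomes[a],
    utility=lambda o: utilities[o],
)
print(best)   # -> Suck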

41 Agent Types Learning agent

