1 Intelligent Systems: Problem Solving Methods – Lecture 7
Prof. Dieter Fensel (& James Scicluna)
© Copyright 2008 STI INNSBRUCK, www.sti-innsbruck.at

2 Agenda
Agents
Environments
Rational Agents
Environment Types
Problem Solving Agent Types
Problem Types
Problem Formulation
Example

3 Agents
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.
Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.

4 Environments
The agent function maps from percept histories to actions: f: P* → A
The agent program runs on the physical architecture to produce f:
agent = architecture + program
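The "agent = architecture + program" split can be sketched in code. This is a minimal illustration of a table-driven agent program implementing f: P* → A by looking up the entire percept history in a table; the function names and the tiny example table are mine, not from the lecture.

```python
# Sketch of an agent program realizing f: P* -> A.
# The table maps whole percept histories (tuples) to actions.

def make_table_driven_agent(table):
    percepts = []                       # the percept history P*
    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")
    return program

# Illustrative table over percept histories:
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))   # Suck
print(agent(("A", "Clean")))   # Right  (history now has two percepts)
```

The table grows exponentially with the history length, which is exactly why the later slides look for more concise agent programs.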

5 Vacuum-cleaner world
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

6 Rational agents
An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
Performance measure: an objective criterion for success of an agent's behavior
– E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, time taken, electricity consumed, noise generated, etc.
– It is better to design performance measures according to what one actually wants in the environment than according to how one thinks the agent should behave.
Rationality is distinct from omniscience (all-knowing with infinite knowledge).
Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration).
An agent is autonomous if its behavior is determined by its own experience (ability to learn and adapt).

7 PEAS
Performance measure, Environment, Actuators, Sensors: PEAS need to be specified in order to design an intelligent agent.
Examples:
– Medical Diagnosis System Agent
  Performance measure: healthy patient, minimize costs, lawsuits
  Environment: patient, hospital, staff
  Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
  Sensors: keyboard (entry of symptoms, findings, patient's answers)
– Automated Taxi Driver Agent
  Performance measure: safe, fast, legal, comfortable trip, maximize profits
  Environment: roads, other traffic, pedestrians, customers
  Actuators: steering wheel, accelerator, brake, signal, horn
  Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

8 Environment Types
Fully observable (vs. partially observable)
– An agent's sensors give it access to the complete state of the environment at each point in time.
Deterministic (vs. stochastic)
– The next state of the environment is completely determined by the current state and the action executed by the agent.
– If the environment is deterministic except for the actions of other agents, then the environment is strategic.
Episodic (vs. sequential)
– The agent's experience is divided into atomic episodes: each episode consists of the agent perceiving and then performing a single action, and the choice of action in each episode depends only on the episode itself.
Static (vs. dynamic)
– The environment is unchanged while an agent is deliberating.
– The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does.
Discrete (vs. continuous)
– A limited number of distinct, clearly defined percepts and actions.
Single agent (vs. multiagent)
– An agent operating by itself in an environment.

9 Environment types

                 Chess with a clock   Crossword puzzle   Taxi driving
Observable       Fully                Fully              Partially
Deterministic    Strategic            Deterministic      Stochastic
Episodic         Sequential           Sequential         Sequential
Static           Semi                 Static             Dynamic
Discrete         Discrete             Discrete           Continuous
Single agent     Single/Multi         Single             Multi

The environment type largely determines the agent design.
The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent.

10 Agent Functions and Types
An agent is completely specified by the agent function mapping percept sequences to actions.
The aim is to find a way to implement the rational agent function concisely.
Four basic types in order of increasing generality, plus learning agents:
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents
– Learning agents

11 Problem Solving Agent Types
Reflex-based
Model-based
Goal-based

12 Problem-solving agents
Goal formulation: limiting the objectives that the agent is trying to achieve
Problem formulation: what actions and states to consider, given a goal
Search: looking for "the best" sequence of actions that leads to the goal
Execution

13 Some background for the next slides

14 Example: Problem-solving agent
An agent on holiday in Romania, currently in Arad; its flight leaves tomorrow from Bucharest.
– Goal formulation: limiting the objectives that the agent is trying to achieve: be in Bucharest tomorrow
– Problem formulation: what actions and states to consider, given a goal:
  states: being in various cities
  actions: drive between cities
– Search: choosing "the best" sequence of actions that leads to the goal (requires knowledge about actions and states)
  sequence of visited cities, e.g., Arad, Sibiu, Fagaras, Bucharest
  the agent uses a map of Romania
– Execution
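The Romania formulation can be written out directly: states are cities, actions are drives along map edges, and the path cost is the driven distance. A sketch, using only a fragment of the standard map (the distances are the usual AIMA figures; the variable names are mine):

```python
# Fragment of the Romania road map: city -> {neighbor: distance in km}.
romania = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101, "Craiova": 138},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

initial_state = "Arad"
def actions(state):              # cities reachable in one drive
    return sorted(romania[state])
def goal_test(state):
    return state == "Bucharest"
def step_cost(state, action):
    return romania[state][action]

# Cost of the Arad, Sibiu, Fagaras, Bucharest sequence from the slide:
plan = ["Sibiu", "Fagaras", "Bucharest"]
cost, state = 0, initial_state
for city in plan:
    cost += step_cost(state, city)
    state = city
print(state, cost)   # Bucharest 450
```

Note that this plan reaches the goal at cost 450 km; the search step is what would tell us whether a cheaper sequence exists.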

15 Example: Romania

16 Problem-solving agents

17 Some notes
The algorithm uses a simplified representation/notion of states and actions (abstraction). It works only if the environment is:
– Static: we do not pay attention to any changes occurring in the environment in between
– Observable: the initial state is known
– Discrete: we can enumerate actions
– Deterministic: we do not handle events occurring while executing the action sequence

18 Problem types
Deterministic, fully observable ⇒ single-state problem
– Agent knows exactly which state it will be in; solution is a sequence of actions
Partially observable ⇒ sensorless problem (conformant problem)
– Agent may have no idea where it is
– Reasoning about belief states
– Solution is a sequence of actions
Nondeterministic and/or partially observable ⇒ contingency problem
– Percepts provide new information about the current state
– Solution is a tree with state conditions as nodes and actions as edges
– Often search and execution are interleaved
Unknown state space ⇒ exploration problem

19 Example: vacuum world
Single-state, start in #5. Solution? [Right, Suck]

20 Example: vacuum world
Single-state, start in #5. Solution? [Right, Suck]
Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]

21 Example: vacuum world
Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]
Contingency
– Nondeterministic: the action Suck may dirty a clean carpet
– Partially observable: location, dirt at current location
– Percept: [L, ¬Dirty], i.e., start in #5 or #7
Solution? [Right, if dirt then Suck]
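The sensorless case can be checked mechanically: a belief state is the set of world states the agent might be in, and each action maps the whole set forward. A sketch, encoding a world state as (location, dirt in A, dirt in B) — the encoding is mine, not from the slides:

```python
from itertools import product

def result(state, action):
    """Deterministic transition for the two-square vacuum world."""
    loc, a, b = state
    if action == "Right": return ("B", a, b)
    if action == "Left":  return ("A", a, b)
    if action == "Suck":
        return (loc, False, b) if loc == "A" else (loc, a, False)
    return state

def update(belief, action):
    """Progress a belief state (a set of states) through one action."""
    return {result(s, action) for s in belief}

belief = set(product("AB", [True, False], [True, False]))  # all 8 states
for action in ["Right", "Suck", "Left", "Suck"]:
    belief = update(belief, action)

print(belief)  # the belief state collapses to a single clean state
```

After the four actions the agent knows it is in A with both squares clean, even though it never sensed anything, which is exactly why [Right, Suck, Left, Suck] is a valid sensorless solution.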

22 Single-state problem formulation
A problem is defined by four items:
1. initial state, e.g., "in Arad"
2. description of actions: successor function S(x) = set of action–state pairs
– e.g., S(Arad) = {<Arad → Zerind, in Zerind>, …}
3. goal test, which can be
– explicit, e.g., x = "in Bucharest"
– implicit, e.g., Checkmate(x) (an arbitrary condition (formula) over the goal state)
4. path cost (reflects the performance measure; additive)
– e.g., sum of distances, number of actions executed, etc.
– c(x,a,y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state; an optimal solution has the lowest path cost among all solutions.
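The four items above can be bundled into one structure, together with a checker that tells whether an action sequence is a solution. A sketch; the field and function names are mine, not from the lecture:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Problem:
    initial_state: str
    successors: Callable   # state -> set of (action, next_state) pairs, i.e. S(x)
    goal_test: Callable    # state -> bool
    step_cost: Callable    # (state, action, next_state) -> cost >= 0

def is_solution(problem, actions):
    """Return the path cost if the action sequence reaches a goal, else None."""
    state, cost = problem.initial_state, 0
    for action in actions:
        matches = [y for a, y in problem.successors(state) if a == action]
        if not matches:
            return None                      # action not applicable in this state
        cost += problem.step_cost(state, action, matches[0])
        state = matches[0]
    return cost if problem.goal_test(state) else None

# Toy two-state problem: one action "go" from S to the goal G, cost 1.
toy = Problem("S",
              lambda s: {("go", "G")} if s == "S" else set(),
              lambda s: s == "G",
              lambda x, a, y: 1)
print(is_solution(toy, ["go"]))   # 1
print(is_solution(toy, []))       # None (initial state is not a goal)
```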

23 Selecting a state space
The real world is absurdly complex ⇒ the state space must be abstracted for problem solving.
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions
– e.g., "Arad → Zerind" represents a set of possible complex routes with detours, rest stops, driving by taxi, bus, hitchhiking, etc.
(Abstract) solution = can be expanded into a solution in the real world
Each abstract action should be "easier" than the original problem.
Good abstraction: removing detail while maintaining validity.

24 Vacuum world state space graph
states? dirt and robot locations
actions? Left, Right, Suck
goal test? no dirty locations
path cost? 1 per action

25 Another example: 8-puzzle
states? locations of tiles ((n·n)! possible states for an n×n board)
actions? move blank left, right, up, down
goal test? = goal state (given)
path cost? 1 per move
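The 8-puzzle formulation is compact enough to write down. A sketch, representing a state as a tuple of nine tiles read row by row with 0 for the blank (the representation is mine, not from the slides):

```python
# 8-puzzle: actions move the blank (0) within the 3x3 board.
MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}

def actions(state):
    i = state.index(0)              # position of the blank
    acts = []
    if i >= 3:    acts.append("up")
    if i < 6:     acts.append("down")
    if i % 3 > 0: acts.append("left")
    if i % 3 < 2: acts.append("right")
    return acts

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]           # swap blank with the neighboring tile
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 5, 6, 7, 0, 8)   # blank one step left of its goal cell

print(actions(start))                  # ['up', 'left', 'right']
print(result(start, "right") == GOAL)  # True: a one-move solution, path cost 1
```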

26 Another example: robotic assembly
states? real-valued coordinates of robot joint angles and of the parts of the object to be assembled
actions? continuous motions of robot joints
goal test? complete assembly
path cost? time to execute

27 Another example: Romania
states? location of the agent
actions? move along edges
goal test? in Bucharest
path cost? travel distance

28 Tree search algorithms
Basic, general idea:
– offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)
– the algorithm needs to know the expansion function
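The general idea above can be sketched as a small loop: keep a frontier of unexpanded paths, repeatedly take one out, test it, and insert its expansions. This minimal version uses FIFO insertion (so it happens to behave breadth-first); the example graph is illustrative:

```python
from collections import deque

def tree_search(initial, successors, goal_test):
    """Generic tree search over paths; FIFO frontier insertion."""
    frontier = deque([[initial]])        # frontier holds whole paths
    while frontier:
        path = frontier.popleft()        # choose a path to expand
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):    # expand: generate successor states
            frontier.append(path + [nxt])
    return None                          # frontier exhausted: no solution

# Tiny example graph (illustrative):
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(tree_search("A", lambda s: graph[s], lambda s: s == "D"))
# ['A', 'B', 'D']
```

Because this searches a tree of paths rather than a set of visited states, it can revisit the same state via different paths (both A–B–D and A–C–D are generated here).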

29 Tree search example

30 Tree search example

31 Tree search example

32 Implementation: general tree search
The strategy is hidden in the insertion step!
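That remark can be made concrete: the same search loop becomes breadth-first or depth-first depending only on where new paths are inserted into the frontier. A sketch with an illustrative graph:

```python
from collections import deque

def search(initial, successors, goal_test, strategy):
    """Identical loop; only the insertion step differs between strategies."""
    frontier = deque([[initial]])
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            if strategy == "bfs":
                frontier.append(path + [nxt])      # insert at the back: FIFO
            else:
                frontier.appendleft(path + [nxt])  # insert at the front: LIFO
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
succ, goal = (lambda s: graph[s]), (lambda s: s in ("D", "E"))
print(search("A", succ, goal, "bfs"))  # ['A', 'B', 'D']
print(search("A", succ, goal, "dfs"))  # ['A', 'C', 'E']
```

Both runs traverse the same tree, but FIFO insertion finds the shallow goal reached via B first, while LIFO insertion dives down the most recently generated branch.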

33 Representation of the state space as a tree: states vs. nodes
A state is a (representation of) a physical configuration.
A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth.
The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
(figure: example node with ACTION = right, DEPTH = 6, PATH-COST g = 6)
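The node structure described above can be sketched directly, together with an Expand function that fills in the fields from the problem's successor function. The field names follow the slide; the toy successor function is mine:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                         # the state this node represents
    parent: Optional["Node"] = None    # node that generated this one
    action: Optional[str] = None       # action applied to the parent
    path_cost: float = 0.0             # g(x): cost from the root
    depth: int = 0

def expand(node, successor_fn):
    """Create child nodes from (action, state, step_cost) successors."""
    return [Node(state=s, parent=node, action=a,
                 path_cost=node.path_cost + cost,
                 depth=node.depth + 1)
            for a, s, cost in successor_fn(node.state)]

# Illustrative successor function: one step right on a number line, cost 1.
succ = lambda s: [("right", s + 1, 1)]
root = Node(state=0)
child = expand(root, succ)[0]
print(child.state, child.action, child.path_cost, child.depth)  # 1 right 1.0 1
```

Keeping the parent pointer in each node is what lets a search algorithm reconstruct the action sequence once a goal node is found.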

34 Search Strategies
A search strategy is defined by picking the order of node expansion.
Evaluation of search strategies:
– Completeness: the strategy is guaranteed to find a solution whenever one exists
– Termination: the algorithm terminates with a solution or with an error message if no solution exists
– Soundness: the strategy only returns admissible solutions
– Correctness: soundness + termination
– Optimality: the strategy finds the "highest-quality" solution (minimal number of operator applications or minimal cost)
– Effort: how much time and/or memory is needed to generate an output?
Main search types:
– Uninformed (blind) search
– Informed (heuristic) search … Lecture 5 (?)

