
1 Strong Method Problem Solving (Chapter 7): 7.0 Introduction; 7.1 Overview of Expert System Technology; 7.2 Rule-Based Expert Systems; 7.3 Model-Based, Case-Based, and Hybrid Systems; 7.4 Planning; 7.5 Epilogue and References; 7.6 Exercises

2 2 Chapter Objectives Learn about knowledge-intensive AI applications Learn about the issues in building Expert Systems: knowledge engineering, inference, providing explanations Learn about the issues in building Planning Systems: writing operators, plan generation, monitoring execution The agent model: Can perform “expert quality” problem solving; can generate and monitor plans

3 Expert systems (ESs) - motivations. Experts have a great deal of knowledge in a specific area, so why not build a system that incorporates that knowledge? An ES will attempt to solve a problem that is non-trivial, complex, and poorly understood. The resulting system should be fast, reliable, cheap, transportable, and usable in remote sites.

4 4 What is in an expert system? lots of knowledge a production system architecture inference techniques advanced features for the user  should make their job easier  explanations

5 5 Guidelines to determine whether a problem is appropriate for an ES solution The need for the solution justifies the cost and effort of building an expert system. Human expertise is not available in all situations where it is needed. The problem may be solved using symbolic reasoning. The problem domain is well structured and does not require commonsense reasoning. The problem may not be solved using traditional computing methods. Cooperative and articulate experts exist. The problem is of proper size and scope.

6 6 Architecture of a typical expert system

7 7 The role of mental or conceptual models in problem solving.

8 Exploratory development cycle

9 A small ES for diagnosing automotive problems:
Rule 1: if the engine is getting gas, and the engine will turn over, then the problem is spark plugs.
Rule 2: if the engine does not turn over, and the lights do not come on, then the problem is battery or cables.
Rule 3: if the engine does not turn over, and the lights do come on, then the problem is the starter motor.
Rule 4: if there is gas in the fuel tank, and there is gas in the carburetor, then the engine is getting gas.
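
A minimal goal-driven (backward-chaining) sketch of these four rules in Python. This is not part of the original slides: the string encoding of facts and the ask-the-user fallback are illustrative assumptions about how such a consultation could be implemented.

RULES = [
    (1, ["engine is getting gas", "engine will turn over"], "problem is spark plugs"),
    (2, ["engine does not turn over", "lights do not come on"], "problem is battery or cables"),
    (3, ["engine does not turn over", "lights do come on"], "problem is the starter motor"),
    (4, ["gas in fuel tank", "gas in carburetor"], "engine is getting gas"),
]

def prove(goal, known, denied):
    """Establish `goal` via the rules if possible, otherwise ask the user once."""
    if goal in known:
        return True
    if goal in denied:
        return False
    for _, premises, conclusion in RULES:
        if conclusion == goal and all(prove(p, known, denied) for p in premises):
            known.add(goal)
            return True
    if input(f"Is it true that {goal}? (y/n) ").strip().lower().startswith("y"):
        known.add(goal)
        return True
    denied.add(goal)
    return False

if __name__ == "__main__":
    known, denied = set(), set()
    for hypothesis in ("problem is spark plugs",
                       "problem is battery or cables",
                       "problem is the starter motor"):
        if prove(hypothesis, known, denied):
            print("Diagnosis:", hypothesis)
            break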

10 The and/or graph searched in the car diagnosis example

11 11 The production system at the start of a consultation

12 12 The production system after Rule 1 has fired

13 13 The system after Rule 4 has fired. Note the stack-based approach to goal reduction

14 Explanation: answering "why" questions. The user is asking "why is this information important?"
ES: Gas in fuel tank?
U: Yes.
ES: Gas in carburetor?
U: Yes.
ES: Engine will turn over?
U: Why?
ES: It has been established that (1) the engine is getting gas; therefore, if (2) the engine will turn over, then the problem is spark plugs.

15 Explanation: answering "how" questions. The user is asking "how did the system come up with this conclusion?"
U: How [was it established that] the engine is getting gas?
ES: This follows from rule 4: if gas in fuel tank, and gas in carburetor, then the engine is getting gas. "Gas in fuel tank" was given by the user; "gas in carburetor" was given by the user.
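
A sketch of the bookkeeping that makes such "why" and "how" answers possible: record which rule derived each fact (for "how") and which rule is currently being explored (for "why"). The variable and function names here are hypothetical, not from the slides.

derived_by = {}        # fact -> (rule number, premises) that established it
current_rule = None    # (rule number, premises, conclusion) under consideration

def explain_how(fact):
    """Answer "how was this concluded?" from the derivation record."""
    if fact not in derived_by:
        return f"'{fact}' was given by the user."
    rule_no, premises = derived_by[fact]
    lines = [f"'{fact}' follows from rule {rule_no}:"]
    lines += [f"  if {p}" for p in premises]
    lines.append(f"  then {fact}")
    return "\n".join(lines)

def explain_why():
    """Answer "why is this question being asked?" from the active rule."""
    if current_rule is None:
        return "No rule is currently under consideration."
    rule_no, premises, conclusion = current_rule
    return (f"Rule {rule_no} is being explored: if " + " and ".join(premises)
            + f", then {conclusion}.")

if __name__ == "__main__":
    derived_by["the engine is getting gas"] = (4, ["gas in fuel tank",
                                                   "gas in carburetor"])
    print(explain_how("the engine is getting gas"))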

16 16 The production system at the start of a consultation for data-driven reasoning

17 17 The production system after evaluating the first premise of Rule 2, which then fails

18 18 After considering Rule 4, beginning its second pass through the rules

19 The search graph as described by the contents of working memory (WM): data-driven BFS
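
For contrast with the goal-driven sketch above, a minimal data-driven (forward-chaining) sketch: working memory starts with the observed facts, and any rule whose premises are all present fires and adds its conclusion, until nothing new can be concluded. The rule encoding is the same illustrative assumption as before, not the slides' own implementation.

RULES = [
    (["engine is getting gas", "engine will turn over"], "problem is spark plugs"),
    (["engine does not turn over", "lights do not come on"], "problem is battery or cables"),
    (["engine does not turn over", "lights do come on"], "problem is the starter motor"),
    (["gas in fuel tank", "gas in carburetor"], "engine is getting gas"),
]

def forward_chain(working_memory):
    """Fire rules until working memory stops changing."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if conclusion not in working_memory and all(p in working_memory for p in premises):
                working_memory.add(conclusion)
                changed = True
    return working_memory

if __name__ == "__main__":
    wm = {"gas in fuel tank", "gas in carburetor", "engine will turn over"}
    print(forward_chain(wm))   # derives "engine is getting gas", then the spark-plug diagnosis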

20 ES examples - DENDRAL (Russell & Norvig, 2003). DENDRAL is the earliest ES (the project ran from 1965 to 1980). Developed at Stanford by Ed Feigenbaum, Bruce Buchanan, Joshua Lederberg, G.L. Sutherland, and Carl Djerassi. Problem solved: inferring molecular structure from the information provided by a mass spectrometer. This is an important problem because the chemical and physical properties of compounds are determined not just by their constituent atoms, but by the arrangement of these atoms as well.

21 ES examples - DENDRAL (Russell & Norvig, 2003). Inputs: the elementary formula of the molecule (e.g., C6H13NO2), and the mass spectrum giving the masses of the various fragments of the molecule generated when it is bombarded by an electron beam (e.g., the mass spectrum might contain a peak at m=15, corresponding to the mass of a methyl (CH3) fragment).

22 ES examples - DENDRAL (cont'd). Naïve version: DENDRAL stands for DENDRitic ALgorithm, a procedure to exhaustively and nonredundantly enumerate all the topologically distinct arrangements of any given set of atoms. Generate all the possible structures consistent with the formula, predict what mass spectrum would be observed for each, and compare this with the actual spectrum. This is intractable for large molecules! Improved version: look for well-known patterns of peaks in the spectrum that suggest common substructures in the molecule. This reduces the number of candidates enormously.

23 ES examples - DENDRAL (cont'd). A rule to recognize a ketone (C=O) subgroup (mass 28): if there are two peaks at x1 and x2 such that (a) x1 + x2 = M + 28 (M is the mass of the whole molecule), (b) x1 - 28 is a high peak, (c) x2 - 28 is a high peak, and (d) at least one of x1 and x2 is high, then there is a ketone subgroup. (Figures: cyclopropyl-methyl-ketone and dicyclopropyl-methyl-ketone.)
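
The ketone rule translates almost directly into a predicate over a spectrum. The sketch below is an illustration, not DENDRAL's code: the spectrum is assumed to be a dict mapping mass to relative intensity, and "high" is assumed to mean intensity above a threshold.

def suggests_ketone(spectrum, molecule_mass, high=0.5):
    """Apply the ketone rule: peaks x1, x2 with x1 + x2 = M + 28, etc."""
    peaks = set(spectrum)
    for x1 in peaks:
        for x2 in peaks:
            if (x1 + x2 == molecule_mass + 28
                    and spectrum.get(x1 - 28, 0) > high      # (b)
                    and spectrum.get(x2 - 28, 0) > high      # (c)
                    and (spectrum[x1] > high or spectrum[x2] > high)):  # (d)
                return True
    return False

if __name__ == "__main__":
    # Hypothetical spectrum, loosely modeled on cyclopropyl methyl ketone (M = 84):
    # peaks at 69 and 43 sum to 84 + 28, and the 41 and 15 peaks are high.
    spectrum = {84: 0.3, 69: 0.8, 43: 0.9, 41: 0.7, 15: 0.6}
    print(suggests_ketone(spectrum, molecule_mass=84))   # True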

24 24 ES examples - MYCIN MYCIN is another well known ES. Developed at Stanford by Ed Feigenbaum, Bruce Buchanan, Dr. Edward Shortliffe. Problem solved: diagnose blood infections. This is an important problem because physicians usually must begin antibiotic treatment without knowing what the organism is (laboratory cultures take time). They have two choices: (1) prescribe a broad spectrum drug (2) prescribe a disease-specific drug (better).

25 25 ES examples - MYCIN (cont’d) Differences from DENDRAL: No general theoretical model existed from which MYCIN rules could be deduced. They had to be acquired from extensive interviewing of experts, who in turn acquired them from textbooks, other experts, and direct experience of cases. The rules reflected uncertainty associated with medical knowledge: certainty factors (not a sound theory)

26 ES examples - MYCIN (cont'd). About 450 rules. One example: if the site of the culture is blood, and the gram stain of the organism is gram-negative, and the morphology of the organism is rod, and the burn of the patient is serious, then there is weakly suggestive evidence (0.4) that the identity of the organism is pseudomonas.

27 ES examples - MYCIN (cont'd). Another example: if the infection which requires therapy is meningitis, and only circumstantial evidence is available for this case, and the type of the infection is bacterial, and the patient is receiving corticosteroids, then there is evidence that the organisms which might be causing the infection are e. coli (0.4), klebsiella-pneumoniae (0.2), or pseudomonas-aeruginosa (0.1).
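
A small sketch of the certainty-factor arithmetic such rules rely on, assuming the standard Stanford certainty-factor model for positive evidence (the concrete rule strengths below are illustrative, not MYCIN's).

def combine(cf1, cf2):
    """Combine two positive certainty factors for the same conclusion."""
    return cf1 + cf2 * (1.0 - cf1)

def apply_rule(premise_cfs, rule_cf):
    """Conclusion CF = (weakest premise CF) * (the rule's own certainty factor)."""
    return min(premise_cfs) * rule_cf

if __name__ == "__main__":
    # Two hypothetical rules both (weakly) suggesting pseudomonas.
    cf_a = apply_rule([1.0, 1.0, 1.0, 1.0], 0.4)   # premises fully believed
    cf_b = apply_rule([0.8, 1.0], 0.3)             # one premise only partly believed
    print(round(combine(cf_a, cf_b), 3))           # 0.4 + 0.24 * 0.6 = 0.544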

28 ES examples - MYCIN (cont'd). Starting rule: "If there is an organism requiring therapy, then compute the possible therapies and pick the best one." It first tries to see whether the disease is known; otherwise, it tries to find it out.

29 ES examples - MYCIN (cont'd). It can ask questions during the process:
> What is the patient's name? John Doe.
> Male or female? Male.
> Age? He is 55.
> Have you obtained positive cultures indicating general type? Yes.
> What type of infection is it? Primary bacteremia.

30 ES examples - MYCIN (cont'd).
> Let's call the first significant organism from this culture U1. Do you know the identity of U1? No.
> Is U1 a rod or a coccus or something else? Rod.
> What is the gram stain of U1? Gram-negative.
In the last two questions, it is trying to ask the most general question possible, so that repeated questions of the same type do not annoy the user. The format of the KB should make the questions reasonable.

31 ES examples - MYCIN (cont'd). Studies of its performance showed that its recommendations were as good as those of some experts, and considerably better than those of junior doctors. It could calculate drug dosages very precisely, dealt well with drug interactions, and had good explanation facilities and rule-acquisition tools. It was narrow in scope (it did not cover a large set of diseases). Another expert system, INTERNIST, knows about internal medicine. Other issues: doctors' egos, legal aspects.

32 Asking questions to the user. Which questions should be asked, and in what order? Try to order questions so as to facilitate a more comfortable dialogue; for instance, ask related questions together rather than bouncing between unrelated topics (e.g., ask for the zipcode as part of an address, or to relate the evidence to the area where the patient lives).

33 ES examples - R1 (or XCON). The first commercial expert system (~1982), developed at Digital Equipment Corporation (DEC). Problem solved: configuring orders for new computer systems. Each customer order was generally a mix of computer products (conversion cards, cabling, support software, …) not guaranteed to be compatible with one another. By 1986, it was saving the company $40 million a year; previously, each customer shipment had to be tested for compatibility as an assembly before being shipped. By 1988, DEC's AI group had 40 expert systems deployed.

34 34 ES examples - R1 (or XCON) (cont’d) Rules to match computers and their peripherals: “If the Stockman 800 printer and DPK202 computer have been selected, add a printer conversion card, because they are not compatible.” Being able to change the rule base easily was an important issue because the products were always changing. Over 99% of the configurations were reported to be accurate. Errors were due to lack of product information on recent products (easily correctible.) Like MYCIN, performs as well as or better than most experts. 6,000 - 10,000 rules.

35 Expert Systems: then and now. The AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988. Nearly every major U.S. corporation had its own AI group and was either using or investigating expert systems. For instance, Du Pont had 100 ESs in use and 500 in development, saving an estimated $10 million per year. AAAI had 15,000 members during the "expert systems craze." Soon came a period called the "AI Winter"… brrr.

36 Expert Systems: then and now (cont'd). The AI industry has shifted focus and stabilized (AAAI membership 5500-7000). Expert systems continue to save companies money: IBM's San Jose facility has an ES that diagnoses problems on disk drives; Pac Bell's diagnoses computer network problems; Boeing's tells workers how to assemble electrical connectors; American Express Co.'s helps in card application approvals; Met Life's processes mortgage applications. Expert System Shells abstract away the details to produce an inference engine that might be useful for other tasks; many are available.

37 37 Heuristics and control in expert systems organization of a rule’s premises rule order costs of different tests which rules to select:  refraction  recency  specificity restrict potentially usable rules
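
A sketch of one way the rule-selection heuristics above can be combined when several rules match the working memory: refraction (do not refire a rule on the same data), recency (prefer rules matching the most recently added facts), and specificity (prefer rules with more premises). The data structures and tie-breaking order are assumptions for illustration.

def select_rule(candidates, fired, timestamps):
    """candidates: list of (rule_id, premises) whose premises all match WM;
    fired: rule_ids already fired on this data (refraction);
    timestamps: fact -> time at which it entered working memory."""
    viable = [(rid, prem) for rid, prem in candidates if rid not in fired]  # refraction
    if not viable:
        return None
    def key(item):
        _rid, prem = item
        recency = max(timestamps.get(p, 0) for p in prem)   # newest matched fact
        specificity = len(prem)                              # number of premises
        return (recency, specificity)
    return max(viable, key=key)

if __name__ == "__main__":
    cands = [(1, ["a", "b"]), (2, ["a", "b", "c"])]
    print(select_rule(cands, fired=set(), timestamps={"a": 1, "b": 2, "c": 3}))  # picks rule 2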

38 38 Model-based reasoning Attempt to describe the “inner details” of the system. This way, the expert system (or any other knowledge-intensive program) can revert to first principles, and can still make inferences if rules summarizing the situation are not present. Include a description of: each component of the device, device’s internal structure, observations of the device’s actual performance

39 39 The behavioral description of an adder (Davis and Hamscher,1988) Behaviour at the terminals of the device: e.g., C is A+B.

40 40 Taking advantage of direction of information flow (Davis and Hamscher, 1988) Either ADD-1 is bad, or the inputs are incorrect (MULT-1 or MULT-2 is bad)

41 41 Fault diagnosis procedure Generate hypotheses: identify the faulty component(s), e.g., ADD-1 is not faulty Test hypotheses: Can they explain the observed behaviour? Discriminate between hypotheses: What additional information is necessary to resolve conflicts?
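
A sketch of this generate-and-test procedure for the adder/multiplier example, assuming the usual Davis and Hamscher circuit in which MULT-1, MULT-2, MULT-3 feed ADD-1 and ADD-2 (F = A*C + B*D, G = C*E + B*D). The input values and the brute-force search over possible faulty outputs are simplifications for illustration only.

COMPONENTS = ["MULT-1", "MULT-2", "MULT-3", "ADD-1", "ADD-2"]

def simulate(inputs, fault=None, fault_value=None):
    """Predict outputs F and G, letting one faulty component emit fault_value."""
    a, b, c, d, e = (inputs[k] for k in "abcde")
    m1 = fault_value if fault == "MULT-1" else a * c
    m2 = fault_value if fault == "MULT-2" else b * d
    m3 = fault_value if fault == "MULT-3" else c * e
    f = fault_value if fault == "ADD-1" else m1 + m2
    g = fault_value if fault == "ADD-2" else m2 + m3
    return f, g

def diagnose(inputs, observed_f, observed_g, value_range=range(0, 50)):
    """Single-fault hypotheses that can explain the observations."""
    return [comp for comp in COMPONENTS
            if any(simulate(inputs, comp, v) == (observed_f, observed_g)
                   for v in value_range)]

if __name__ == "__main__":
    # Illustrative values: expected F = G = 12, but F is observed to be 10.
    inputs = {"a": 3, "b": 2, "c": 2, "d": 3, "e": 3}
    print(diagnose(inputs, observed_f=10, observed_g=12))   # ['MULT-1', 'ADD-1']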

42 42 A schematic of the simplified Livingstone propulsion system (Williams and Nayak,1996)

43 43 A model-based configuration management system (Williams and Nayak, 1996)

44 44 Case-based reasoning (CBR) Allows reference to past “cases” to solve new situations. Ubiquitous practice: medicine, law, programming, car repairs, …

45 45 Common steps performed by a case- based reasoner Retrieve appropriate cases from memory Modify a retrieved case so that it will apply to the current situation Apply the transformed case Save the solution, with a record of success or failure, for future use

46 Preference heuristics to help organize the storage and retrieval of cases (Kolodner, 1993). Goal-directed preference: retrieve cases that have the same goal as the current situation. Salient-feature preference: prefer cases that match the most important features, or those matching the largest number of important features. Specificity preference: look for as exact as possible matches of features before considering more general matches. Recency preference: prefer cases used most recently. Ease-of-adaptation preference: use first the cases most easily adapted to the current situation.
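
A sketch of case retrieval that scores stored cases with some of these preferences (same goal, weighted salient-feature overlap, recency). The case structure, weights, and scoring formula are assumptions for illustration, not Kolodner's algorithm.

def score(case, problem, weights, now):
    goal_bonus = 5.0 if case["goal"] == problem["goal"] else 0.0          # goal-directed
    overlap = sum(weights.get(f, 1.0)
                  for f in case["features"] & problem["features"])        # salient features
    recency = 1.0 / (1.0 + now - case["last_used"])                       # recency
    return goal_bonus + overlap + recency

def retrieve(library, problem, weights, now):
    """Return the stored case with the highest preference score."""
    return max(library, key=lambda c: score(c, problem, weights, now))

if __name__ == "__main__":
    library = [
        {"goal": "start car", "features": {"cold", "old battery"}, "last_used": 3},
        {"goal": "start car", "features": {"no fuel"}, "last_used": 9},
    ]
    problem = {"goal": "start car", "features": {"cold", "old battery"}}
    print(retrieve(library, problem, weights={"old battery": 2.0}, now=10))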

47 47 Transformational analogy (Carbonell, 1983)

48 48 Advantages of a rule-based approach Ability to directly use experiential knowledge acquired from human experts Mapping of rules to state space search Separation of knowledge from control Possibility of good performance in limited domains Good explanation facilities

49 49 Disadvantages of a rule-based approach highly heuristic nature of rules not capturing the functional (or model-based) knowledge of the domain brittle nature of heuristic rules rapid degradation of heuristic rules descriptive (rather than theoretical) nature of explanation rules highly task dependent knowledge

50 Advantages of model-based reasoning: ability to use the functional/structural knowledge of the domain; robustness due to the ability to fall back on first principles; transferable knowledge; ability to provide causal explanations.

51 Disadvantages of model-based reasoning: lack of experiential (descriptive) knowledge of the domain; requirement for an explicit domain model; high complexity; inability to deal with exceptional situations.

52 Advantages of case-based reasoning: ability to encode historical knowledge directly; speed-up in reasoning through shortcuts; avoiding past errors and exploiting past successes; no (strong) requirement for an extensive analysis of domain knowledge; added problem-solving power via appropriate indexing strategies.

53 53 Disadvantages of case-based reasoning No deeper knowledge of the domain Large storage requirements Requirement for good indexing and matching criteria

54 How about combining these approaches? Complex, but nevertheless useful. Rule-based + case-based can: first check among previous cases, then engage in rule-based reasoning; provide a record of examples and exceptions; provide a record of searches done.

55 How about combining these approaches? (cont'd) Rule-based + model-based can: enhance explanations with functional knowledge; improve robustness when rules fail; add heuristic search to model-based search. Model-based + case-based can: give more mature explanations of the situations recorded in cases; first check against stored cases before proceeding with model-based reasoning; provide a record of examples and exceptions; record results of model-based inference. Opportunities are endless!

56 What is planning? A planner is a system whose task is to find a sequence of actions that will accomplish a specific task. The main components of a planning problem are: a description of the starting situation (the initial state), a description of the desired situation (the goal state), and the actions available to the executing agent (the operator library, aka domain theory). Formally, a (classical) planning problem is a triple <I, G, D>, where I is the initial state, G is the goal state, and D is the domain theory. (Figure: a planner takes a planning problem and produces a plan.)

57 Characteristics of classical planners. They need a mechanism to reason about actions and the changes they inflict on the world. Important assumptions: the agent is the only source of change in the world, i.e., the environment is static; all the actions are deterministic; the agent is omniscient: it knows everything it needs to know about the start state and the effects of actions; the goals are categorical: the plan is considered successful iff all the goals are achieved.

58 58 The blocks world

59 59 Represent this world using predicates ontable(a) ontable(c) ontable(d) on(b,a) on(e,d) clear(b) clear(c) clear(e) gripping()

60 Declarative (or procedural) rules. "If a block is clear, then there are no blocks on top of it" (declarative), OR "To make sure that a block is clear, remove all the blocks on top of it" (procedural).
1. (∀X) (clear(X) → ¬(∃Y) on(Y,X))
2. (∀Y)(∀X) (ontable(Y) → ¬on(Y,X))
3. (∀Y) (gripping( ) → ¬gripping(Y))

61 Rules for operations on the states:
4. (∀X) (pickup(X) → (gripping(X) ← (gripping( ) ∧ clear(X) ∧ ontable(X))))
5. (∀X) (putdown(X) → ((gripping( ) ∧ ontable(X) ∧ clear(X)) ← gripping(X)))
6. (∀X)(∀Y) (stack(X,Y) → ((on(X,Y) ∧ gripping( ) ∧ clear(X)) ← (clear(Y) ∧ gripping(X))))
7. (∀X)(∀Y) (unstack(X,Y) → ((clear(Y) ∧ gripping(X)) ← (on(X,Y) ∧ clear(X) ∧ gripping( ))))

62 The format of the rules: A → (B ← C), where A is the operator, B is the result of the operation, and C is the conditions that must be true for the operator to be executable. The rules tell what changes when the operator is executed (or applied).

63 Portion of the search space for the blocks world example

64 But... we have no explicit notion of a "state" that changes over time as actions are performed. Remember that predicate logic is "timeless": everything refers to the same time. In order to work reasoning about actions into logic, we need a way to say that changes happen over discrete times (or situations).

65 Situation calculus. We need to add an additional parameter that represents the state. We'll use s0, …, sn to represent states (aka situations). Now we can say:
4. (∀X) (pickup(X, s0) → (gripping(X, s1) ← (gripping( , s0) ∧ clear(X, s0) ∧ ontable(X, s0))))
If the pickup action was attempted in state s0, with the listed conditions holding, then in state s1, gripping will be true for X.

66 Introduce "holds" and "result" and generalize over states:
4. (∀X)(∀s) ((holds(gripping( ), s) ∧ holds(clear(X), s) ∧ holds(ontable(X), s)) → holds(gripping(X), result(pickup(X), s)))
Using rules like this we can logically prove what happens as several actions are applied consecutively. Notice that gripping, clear, … are now functions. Is "result" a function or a predicate?

67 A small "plan". Starting from c on b, with a and b on the table, the state reached after unstack(c,b), putdown(c), pickup(b), stack(b,a), pickup(c), stack(c,b) (i.e., c on b on a) is written as:
result(stack(c,b), result(pickup(c), result(stack(b,a), result(pickup(b), result(putdown(c), result(unstack(c,b), s0))))))

68 Our rules will still not work, because we are making an implicit (but big) assumption: that if nothing tells us that p has changed, then p has not changed. This matters because we want to reason about change as well as non-change. For instance, block a is still clear after we move block c around (unless we put c on top of block a). Things are going to start to get messier, because we now need frame axioms.

69 A frame axiom tells what doesn't change when an action is performed. For instance, if Y is "unstacked" from Z, nothing happens to X:
(∀X)(∀Y)(∀Z)(∀s) (holds(ontable(X), s) → holds(ontable(X), result(unstack(Y,Z), s)))
For our logic system to work, we'll have to define such an axiom for each action and each predicate. This is called the frame problem. Perhaps it is time to get "un-logical".

70 70 The STRIPS representation No frame problem. Special purpose representation. An operator is defined in terms of its: name, parameters, preconditions, and results. A planner is a special purpose algorithm rather than a general purpose logic theorem prover: forward or backward chaining (state space), plan space algorithms, and several significant others including logic-based.

71 Four operators for the blocks world (P: preconditions, A: add list, D: delete list):
pickup(X)    P: gripping( ) ∧ clear(X) ∧ ontable(X)    A: gripping(X)    D: ontable(X) ∧ gripping( )
putdown(X)    P: gripping(X)    A: ontable(X) ∧ gripping( ) ∧ clear(X)    D: gripping(X)
stack(X,Y)    P: gripping(X) ∧ clear(Y)    A: on(X,Y) ∧ gripping( ) ∧ clear(X)    D: gripping(X) ∧ clear(Y)
unstack(X,Y)    P: gripping( ) ∧ clear(X) ∧ on(X,Y)    A: gripping(X) ∧ clear(Y)    D: on(X,Y) ∧ gripping( )
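
A minimal STRIPS-style sketch of these operators in Python (an illustration, not the slides' notation): each operator is a dict of precondition, add, and delete sets of ground facts, instantiated for specific blocks to keep the sketch short.

def pickup(x):
    return {"pre": {"gripping()", f"clear({x})", f"ontable({x})"},
            "add": {f"gripping({x})"},
            "del": {f"ontable({x})", "gripping()"}}

def stack(x, y):
    return {"pre": {f"gripping({x})", f"clear({y})"},
            "add": {f"on({x},{y})", "gripping()", f"clear({x})"},
            "del": {f"gripping({x})", f"clear({y})"}}

def apply_op(state, op):
    """Apply a STRIPS operator: check preconditions, delete, then add."""
    if not op["pre"] <= state:
        return None                      # preconditions not satisfied
    return (state - op["del"]) | op["add"]

if __name__ == "__main__":
    state = {"ontable(a)", "ontable(b)", "clear(a)", "clear(b)", "gripping()"}
    state = apply_op(state, pickup("b"))
    state = apply_op(state, stack("b", "a"))
    print(sorted(state))   # on(b,a) holds, gripper is empty again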

72 Notice the simplification: preconditions, add lists, and delete lists are all conjunctions. We no longer have the full power of predicate logic. The same applies to goals: goals are conjunctions of predicates. A detail: why do we have two operators for picking up (pickup and unstack), and two for putting down (putdown and stack)?

73 73 A goal state for the blocks world

74 A state space algorithm for STRIPS operators. Search the space of situations (or states): each node in the search tree is a state; the root of the tree is the start state; operators are the means of transition from each node to its children; the goal test checks whether the set of goals is a subset of the current situation. Why is the frame problem no longer a problem?
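
A sketch of such a forward state-space search: states are frozensets of facts, the goal test is set inclusion, and breadth-first search returns the first plan found. It assumes operator dictionaries like those in the previous sketch; the names are illustrative.

from collections import deque

def plan_bfs(initial, goals, operators):
    """operators: list of (name, op_dict) pairs with 'pre', 'add', 'del' sets."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goals <= state:               # goal test: goals are a subset of the state
            return path
        for name, op in operators:
            if op["pre"] <= state:
                nxt = frozenset((state - op["del"]) | op["add"])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

if __name__ == "__main__":
    ops = [("pickup(b)", {"pre": {"gripping()", "clear(b)", "ontable(b)"},
                          "add": {"gripping(b)"},
                          "del": {"ontable(b)", "gripping()"}}),
           ("stack(b,a)", {"pre": {"gripping(b)", "clear(a)"},
                           "add": {"on(b,a)", "gripping()", "clear(b)"},
                           "del": {"gripping(b)", "clear(a)"}})]
    init = {"ontable(a)", "ontable(b)", "clear(a)", "clear(b)", "gripping()"}
    print(plan_bfs(init, {"on(b,a)"}, ops))   # ['pickup(b)', 'stack(b,a)']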

75 75 Now, the following graph makes much more sense

76 76 Problems in representation Frame problem: List everything that does not change. It no more is a significant problem because what is not listed as changing (via the add and delete lists) is assumed to be not changing. Qualification problem: Can we list every precondition for an action? For instance, in order for PICKUP to work, the block should not be glued to the table, it should not be nailed to the table, … It still is a problem. A partial solution is to prioritize preconditions, i.e., separate out the preconditions that are worth achieving.

77 77 Problems in representation (cont’d) Ramification problem: Can we list every result of an action? For instance, if a block is picked up its shadow changes location, the weight on the table decreases,... It still is a problem. A partial solution is to code rules so that inferences can be made. For instance, allow rules to calculate where the shadow would be, given the positions of the light source and the object. When the position of the object changes, its shadow changes too.

78 78 The gripper domain The agent is a robot with two grippers (left and right) There are two rooms (rooma and roomb) There are a number of balls in each room Operators:  PICK  DROP  MOVE

79 79 A “deterministic” plan Pick ball1 rooma right Move rooma roomb Drop ball1 roomb right Remember: no observability, nothing can go wrong.

80 The domain definition for the gripper domain. (Notes: gripper-strips is the name of the domain; "?" marks a variable; the :effect combines the add list and, via not, the delete list.)

(define (domain gripper-strips)
  (:predicates (room ?r) (ball ?b) (gripper ?g) (at-robby ?r)
               (at ?b ?r) (free ?g) (carry ?o ?g))
  (:action move
    :parameters (?from ?to)
    :precondition (and (room ?from) (room ?to) (at-robby ?from))
    :effect (and (at-robby ?to) (not (at-robby ?from))))

81 The domain definition for the gripper domain (cont'd):

  (:action pick
    :parameters (?obj ?room ?gripper)
    :precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
                       (at ?obj ?room) (at-robby ?room) (free ?gripper))
    :effect (and (carry ?obj ?gripper) (not (at ?obj ?room))
                 (not (free ?gripper))))

82 The domain definition for the gripper domain (cont'd):

  (:action drop
    :parameters (?obj ?room ?gripper)
    :precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
                       (at-robby ?room) (carry ?obj ?gripper))
    :effect (and (at ?obj ?room) (free ?gripper)
                 (not (carry ?obj ?gripper)))))

83 An example problem definition for the gripper domain:

(define (problem strips-gripper2)
  (:domain gripper-strips)
  (:objects rooma roomb ball1 ball2 left right)
  (:init (room rooma) (room roomb)
         (ball ball1) (ball ball2)
         (gripper left) (gripper right)
         (at-robby rooma)
         (free left) (free right)
         (at ball1 rooma) (at ball2 rooma))
  (:goal (at ball1 roomb)))

84 Running VHPOP. Once the domain and problem definitions are in the files gripper-domain.pddl and gripper-2.pddl respectively, the following command runs VHPOP:
vhpop gripper-domain.pddl gripper-2.pddl
The output will be:
;strips-gripper2
1:(pick ball1 rooma right)
2:(move rooma roomb)
3:(drop ball1 roomb right)
Time: 0
PDDL is the Planning Domain Definition Language.

85 85 Why is planning a hard problem? It is due to the large branching factor and the overwhelming number of possibilities. There is usually no way to separate out the relevant operators. Take the previous example, and imagine that there are 100 balls, just two rooms, and two grippers. Again, the goal is to take 1 ball to the other room. How many PICK operators are possible in the initial situation? pick :parameters (?obj ?room ?gripper) That is only one part of the branching factor, the robot could also move without picking up anything.
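
As a rough worked count (an illustration, not from the slides): with 100 balls, 2 rooms, and 2 grippers, pick with :parameters (?obj ?room ?gripper) has 100 x 2 x 2 = 400 ground instances; in the initial state, where the robot and all the balls are in rooma and both grippers are free, 100 x 2 = 200 of them are actually applicable, and the move and drop instances come on top of that at every level of the search.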

86 Why is planning a hard problem? (cont'd) Goal interaction is also a major problem. In planning, goal-directed search seems to make much more sense, but unfortunately it cannot by itself address the exponential explosion; this time, the branching factor increases due to the many ways of resolving interactions. When subgoals are compatible, i.e., they do not interact, they are said to be linear (or independent, or serializable).

87 87 How to deal with the exponential explosion? Use goal-directed algorithms Use domain-independent heuristics Use domain-dependent heuristics (need a language to specify them)

88 88 The “monkey and bananas” problem

89 The "monkey and bananas" problem (cont'd). The problem statement: a monkey is in a laboratory room containing a box, a knife, and a bunch of bananas. The bananas are hanging from the ceiling, out of the reach of the monkey. How can the monkey obtain the bananas?

90 VHPOP coding:

(define (domain monkey-domain)
  (:requirements :equality)
  (:constants monkey box bananas knife glass water waterfountain)
  (:predicates (on-floor) (at ?x ?y) (onbox ?x) (hasknife)
               (hasbananas) (hasglass) (haswater) (location ?x))
  (:action go-to
    :parameters (?x ?y)
    :precondition (and (not (= ?y ?x)) (on-floor) (at monkey ?y))
    :effect (and (at monkey ?x) (not (at monkey ?y))))

91 VHPOP coding (cont'd):

  (:action climb
    :parameters (?x)
    :precondition (and (at box ?x) (at monkey ?x))
    :effect (and (onbox ?x) (not (on-floor))))

  (:action push-box
    :parameters (?x ?y)
    :precondition (and (not (= ?y ?x)) (at box ?y) (at monkey ?y) (on-floor))
    :effect (and (at monkey ?x) (not (at monkey ?y))
                 (at box ?x) (not (at box ?y))))

92 VHPOP coding (cont'd):

  (:action getknife
    :parameters (?y)
    :precondition (and (at knife ?y) (at monkey ?y))
    :effect (and (hasknife) (not (at knife ?y))))

  (:action grabbananas
    :parameters (?y)
    :precondition (and (hasknife) (at bananas ?y) (onbox ?y))
    :effect (hasbananas))

93 VHPOP coding (cont'd):

  (:action pickglass
    :parameters (?y)
    :precondition (and (at glass ?y) (at monkey ?y))
    :effect (and (hasglass) (not (at glass ?y))))

  (:action getwater
    :parameters (?y)
    :precondition (and (hasglass) (at waterfountain ?y)
                       (at monkey ?y) (onbox ?y))
    :effect (haswater)))

94 Problem 1: monkey-test1.pddl

(define (problem monkey-test1)
  (:domain monkey-domain)
  (:objects p1 p2 p3 p4)
  (:init (location p1) (location p2) (location p3) (location p4)
         (at monkey p1) (on-floor) (at box p2)
         (at bananas p3) (at knife p4))
  (:goal (hasbananas)))

The plan found: go-to p4 p1, get-knife p4, go-to p2 p4, push-box p3 p2, climb p3, grab-bananas p3. Time = 30 msec.

95 Problem 2: monkey-test2.pddl

(define (problem monkey-test2)
  (:domain monkey-domain)
  (:objects p1 p2 p3 p4 p6)
  (:init (location p1) (location p2) (location p3) (location p4) (location p6)
         (at monkey p1) (on-floor) (at box p2)
         (at bananas p3) (at knife p4)
         (at waterfountain p3) (at glass p6))
  (:goal (and (hasbananas) (haswater))))

The plan found: go-to p4 p1, get-knife p4, go-to p6 p4, pickglass p6, go-to p2 p6, push-box p3 p2, climb p3, getwater p3, grab-bananas p3. Time = 70 msec.

96 The "monkey and bananas" problem (cont'd) (Russell & Norvig, 2003). Suppose that the monkey wants to fool the scientists, who are off to tea, by grabbing the bananas but leaving the box in its original place. Can this goal be solved by a STRIPS-style system?

97 97 Triangle table (execution monitoring and macro operators)

98 98 Teleo-reactive planning: combines feedback- based control and discrete actions (Klein et al., 2000)

99 99 Model-based reactive configuration management (Williams and Nayak, 1996a) Intelligent space probes that autonomously explore the solar system. The spacecraft needs to: radically reconfigure its control regime in response to failures, plan around these failures during its remaining flight.

100 100 A schematic of the simplified Livingstone propulsion system (Williams and Nayak,1996)

101 101 A model-based configuration management system (Williams and Nayak, 1996) ME: mode estimation MR: mode reconfiguration

102 102 The transition system model of a valve (Williams and Nayak, 1996a)

103 103 Mode estimation (Williams and Nayak, 1996a)

104 104 Mode reconfiguration (MR) (Williams and Nayak, 1996a)

105 Comments on planning. Planning is a synthesis task. Classical planning is based on the assumptions of a deterministic and static environment. Algorithms to solve planning problems include: forward chaining (heuristic search in state space); Graphplan (mutual-exclusion reasoning using plan graphs); partial-order planning, POP (goal-directed search in plan space); satisfiability-based planning (convert the problem into logic). Non-classical planners include probabilistic planners, contingency planners (aka conditional planners), decision-theoretic planners, and temporal planners.

