
1 Introduction to AI & AI Principles (Semester 1) WEEK 10 – Tuesday/Wednesday (2008/09) John Barnden, Professor of Artificial Intelligence, School of Computer Science, University of Birmingham, UK

2 Tuesday/Wednesday
- Continue existing discussion of logical reasoning.
- Outline a major AI logical-reasoning method and its advantages.
- Compare & contrast logic and production systems.
- View reasoning, whether in logic or PSs, as search.
- Additional matters concerning PSs.
- New topic: Semantic Networks.

3 Deductive Reasoning in Logic (contd.)

4 Reminder: Some Difficulties
- What inference rule to apply when, and exactly how (e.g., what variable instantiations to do)?
  - i.e., a hefty search process: how to guide it? NB: searching backwards from a reasoning goal is generally beneficial (backwards chaining).
- Lots of fiddling around, piecing together and taking apart conjunctions, disjunctions, etc.
- And we have only shown some of the types of fiddling that are needed!
- It would be more convenient to be able to combine the effects of certain inference rules in various ways, e.g. to combine MP and variable instantiation.

5 Reminder: Part of Proof Tree for an example (Proof “Graph” more generally)

KB formula: (∀p)( (is-pers(p) ∧ (asleep(p) ∨ unconsc(p) ∨ dead(p))) → (shld-call(Ego, Pol) ∧ shld-prot(Ego, p)) )

- UnivElim (p := P123): (is-pers(P123) ∧ (asleep(P123) ∨ unconsc(P123) ∨ dead(P123))) → (shld-call(Ego, Pol) ∧ shld-prot(Ego, P123))
- Disj Intro, from unconsc(P123): asleep(P123) ∨ unconsc(P123) ∨ dead(P123)
- Conj Intro, with is-pers(P123): is-pers(P123) ∧ (asleep(P123) ∨ unconsc(P123) ∨ dead(P123))
- Modus Ponens: shld-call(Ego, Pol) ∧ shld-prot(Ego, P123)
- Conj Elim: shld-prot(Ego, P123)
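The natural-deduction steps above can be made concrete with a small Python sketch (not part of the original slides): formulas are nested tuples, and each inference rule is a function that produces a new formula. The predicate and constant names follow the slide; the representation itself is an assumption made for illustration.

```python
# Minimal natural-deduction-style steps on formulas represented as nested tuples.
# Illustrative sketch only; the names (is-pers, unconsc, P123, ...) follow the slide.

def univ_elim(forall_formula, constant):
    """From ('forall', var, body) derive body with var replaced by constant."""
    _, var, body = forall_formula
    def subst(f):
        if f == var:
            return constant
        if isinstance(f, tuple):
            return tuple(subst(part) for part in f)
        return f
    return subst(body)

def modus_ponens(implication, antecedent):
    """From ('implies', A, B) and A, derive B."""
    _, a, b = implication
    assert a == antecedent, "antecedent does not match"
    return b

def conj_intro(a, b):
    return ('and', a, b)

def conj_elim_right(conj):
    _, _, right = conj
    return right

def disj_intro(known_disjunct, full_disjunction):
    """From a known disjunct, derive a disjunction containing it (disjunct order as required)."""
    assert known_disjunct in full_disjunction[1:]
    return full_disjunction

# The KB formula from the slide: every person who is asleep, unconscious or dead
# should be called about (to the police) and protected.
kb_rule = ('forall', 'p',
           ('implies',
            ('and', ('is-pers', 'p'),
                    ('or', ('asleep', 'p'), ('unconsc', 'p'), ('dead', 'p'))),
            ('and', ('shld-call', 'Ego', 'Pol'), ('shld-prot', 'Ego', 'p'))))

instance = univ_elim(kb_rule, 'P123')                      # UnivElim (p := P123)
disj = disj_intro(('unconsc', 'P123'),
                  ('or', ('asleep', 'P123'), ('unconsc', 'P123'), ('dead', 'P123')))  # Disj Intro
antecedent = conj_intro(('is-pers', 'P123'), disj)         # Conj Intro
conclusion = modus_ponens(instance, antecedent)            # Modus Ponens
print(conj_elim_right(conclusion))                         # Conj Elim -> ('shld-prot', 'Ego', 'P123')
```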

6 “Natural Deduction” = the sort of reasoning process we have seen so far

7 Reminder: Some Other Sorts of Fiddling
- The following are logical inference rules in non-traditional IF-THEN form.
- Rules about distributivity of conjunction and disjunction over each other:
  - IF-HAVE A ∧ (B ∨ C) THEN-HAVE (A ∧ B) ∨ (A ∧ C), and its converse: IF-HAVE (A ∧ B) ∨ (A ∧ C) THEN-HAVE A ∧ (B ∨ C)
  - IF-HAVE A ∨ (B ∧ C) THEN-HAVE (A ∨ B) ∧ (A ∨ C), and its converse: IF-HAVE (A ∨ B) ∧ (A ∨ C) THEN-HAVE A ∨ (B ∧ C)

8 Reminder: Some Other Sorts of Fiddling, contd.
- Double negation:
  - IF-HAVE ¬¬A THEN-HAVE A
  - IF-HAVE A THEN-HAVE ¬¬A
- De Morgan’s Laws (in inference-rule form):
  - IF-HAVE ¬(A ∧ B) THEN-HAVE ¬A ∨ ¬B, and its converse: IF-HAVE ¬A ∨ ¬B THEN-HAVE ¬(A ∧ B)
  - IF-HAVE ¬(A ∨ B) THEN-HAVE ¬A ∧ ¬B, and its converse: IF-HAVE ¬A ∧ ¬B THEN-HAVE ¬(A ∨ B)

9 (New) Other Sorts of Fiddling, contd.
- An analogue of De Morgan’s Laws for interchange of universal and existential quantification:
  - IF-HAVE ¬(∀α) A THEN-HAVE (∃α) ¬A, and its converse: IF-HAVE (∃α) ¬A THEN-HAVE ¬(∀α) A
  - IF-HAVE ¬(∃α) A THEN-HAVE (∀α) ¬A, and its converse: IF-HAVE (∀α) ¬A THEN-HAVE ¬(∃α) A
- That is, a negation sign can be pushed through a quantifier in either direction, provided the universal is switched to an existential and vice versa.
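All of these rewrite rules operate purely on the syntax of formulas, so they are easy to mechanise. The following minimal Python sketch (an illustration added here, not the lecture's code) pushes negation inward using double negation, De Morgan's Laws and the quantifier duals from this and the previous two slides, with formulas again represented as nested tuples:

```python
# Push negation inward using double negation, De Morgan, and the quantifier
# duals from slides 7-9.  Formulas are nested tuples, e.g. ('not', ('and', P, Q)).
# Illustrative sketch only.

def push_neg(f):
    if not isinstance(f, tuple):
        return f
    if f[0] == 'not':
        g = f[1]
        if isinstance(g, tuple):
            if g[0] == 'not':                      # ¬¬A        =>  A
                return push_neg(g[1])
            if g[0] == 'and':                      # ¬(A ∧ B)   =>  ¬A ∨ ¬B
                return ('or',) + tuple(push_neg(('not', c)) for c in g[1:])
            if g[0] == 'or':                       # ¬(A ∨ B)   =>  ¬A ∧ ¬B
                return ('and',) + tuple(push_neg(('not', c)) for c in g[1:])
            if g[0] == 'forall':                   # ¬(∀x)A     =>  (∃x)¬A
                return ('exists', g[1], push_neg(('not', g[2])))
            if g[0] == 'exists':                   # ¬(∃x)A     =>  (∀x)¬A
                return ('forall', g[1], push_neg(('not', g[2])))
        return ('not', push_neg(g))                # negated atom: leave the ¬ in place
    return (f[0],) + tuple(push_neg(c) for c in f[1:])

# Example: ¬(∀x)(happy(x) ∧ rich(x))  =>  (∃x)(¬happy(x) ∨ ¬rich(x))
print(push_neg(('not', ('forall', 'x', ('and', ('happy', 'x'), ('rich', 'x'))))))
```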

10 A Partial Response to the Difficulties
- Use essentially just one inference rule: resolution, which combines variable instantiation with a generalization of MP. (See the Callan book for details if interested.)
- Also, in effect, reason backwards from the goal, using reasoning by contradiction (“Reductio ad Absurdum”): i.e., you assume the negation of the goal G, and show that you can infer a contradiction (basically, show that you can infer something C and its negation not-C).
- Reasoning by contradiction is often used in human common-sense reasoning as well as in mathematics.
- In the resolution proof method, the reasoning to the contradiction is done (mainly) by applications of resolution.

11 Resolution Proof Method, contd.
- All formulas in the knowledge base need to have been converted into clause form: each item in the knowledge base is now either a literal (a predicate-symbol application or a negation of one) or a disjunction of literals. All variables are considered universally quantified; a special treatment of existential quantification (Skolemization) is needed for this.
  - Simple example of a clause: ¬is-person(p) ∨ ¬is-shop(q) ∨ ¬old-fashioned(q) ∨ likes(p, q)
- The negation of the reasoning goal is also converted into clause form.
- The conversion into clause form effectively absorbs the above sorts of annoying “fiddling” – introduction/elimination of conjunction and disjunction, distribution, De Morgan, etc.
  - The conversion is complex but only needs to be done once for KB items.

12 Proof Diagram (KB items underlined on the slide)

KB clauses: ¬is-pers(p) ∨ ¬is-shop(q) ∨ ¬old-fash(q) ∨ likes(p, q); is-pers(S); is-shop(G); old-fash(G)
NEGATED GOAL: ¬likes(S, G)

- Resol (q := G, p := S), negated goal with the first KB clause: ¬is-pers(S) ∨ ¬is-shop(G) ∨ ¬old-fash(G)
- Resol with is-shop(G): ¬is-pers(S) ∨ ¬old-fash(G)
- Resol with is-pers(S): ¬old-fash(G)
- Resol with old-fash(G): CONTRADICTION (= null clause)
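For concreteness, here is a small Python sketch of binary resolution with a very simple unifier (constants and single-letter variables only, no function terms), run on the clauses of the diagram above. It is an illustrative reconstruction, not the lecture's own code: the exact clause reading (“every person likes every old-fashioned shop”) and the literal representation are assumptions, and the conversion to clause form is taken as already done.

```python
# Binary resolution with a minimal unifier (constants and variables only, no
# function terms).  A literal is (sign, predicate, args); lower-case single-letter
# args are treated as variables.  Illustrative sketch of the refutation above.

def is_var(t):
    return isinstance(t, str) and t.islower() and len(t) == 1

def unify(args1, args2, subst):
    """Extend subst so that args1 and args2 become equal, or return None."""
    for a, b in zip(args1, args2):
        a, b = subst.get(a, a), subst.get(b, b)
        if a == b:
            continue
        if is_var(a):
            subst[a] = b
        elif is_var(b):
            subst[b] = a
        else:
            return None
    return subst

def apply_subst(clause, subst):
    return frozenset((sign, pred, tuple(subst.get(a, a) for a in args))
                     for sign, pred, args in clause)

def resolve(c1, c2):
    """Return the first resolvent of clauses c1 and c2, or None if none exists."""
    for (s1, p1, a1) in c1:
        for (s2, p2, a2) in c2:
            if p1 == p2 and s1 != s2 and len(a1) == len(a2):
                subst = unify(a1, a2, {})
                if subst is not None:
                    rest = (c1 - {(s1, p1, a1)}) | (c2 - {(s2, p2, a2)})
                    return apply_subst(rest, subst)
    return None

# KB clauses and negated goal, following the proof diagram above.
person_likes_oldfash_shops = frozenset({
    ('-', 'is-pers', ('p',)), ('-', 'is-shop', ('q',)),
    ('-', 'old-fash', ('q',)), ('+', 'likes', ('p', 'q'))})
kb = [person_likes_oldfash_shops,
      frozenset({('+', 'is-pers', ('S',))}),
      frozenset({('+', 'is-shop', ('G',))}),
      frozenset({('+', 'old-fash', ('G',))})]
negated_goal = frozenset({('-', 'likes', ('S', 'G'))})

clause = resolve(negated_goal, person_likes_oldfash_shops)   # Resol (q := G, p := S)
for fact in kb[1:]:
    clause = resolve(clause, fact)                           # resolve away each remaining literal
print(clause)   # frozenset() -- the null clause, i.e. the contradiction
```

Each call to resolve performs one resolution step of the diagram; the final result is the empty clause, completing the refutation.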

13 Benefits of Resolution Proof Method
- Reasons backwards from the goal, thereby focussing the search.
- Sidelines some annoying fiddling (absorbs most of it into the reduction to clause form, and some into resolution).
- Combines variable substitution with other inference acts in an intuitive and algorithmically convenient way (in the resolution inference rule).
- By ensuring that each step is a bigger chunk, the overall search is simplified (though still difficult) compared to what is needed with Natural Deduction (ND).
- Because of the bigger chunks, and having essentially just one inference rule, the task of finding useful search-guidance heuristics is simplified compared to ND.
- Clause form is somewhat unnatural, but resolution is itself quite natural once you get used to it.
- ND is probably better for human use, whereas the resolution proof method is probably better for machine use. However, ND has been implemented in AI.

14 Procedural/Declarative Trade-Off
- Recall: inference rules are mechanisms – they do something. They’re “procedural.”
- Implications (→) and double implications (↔) are just statements. They’re “declarative.” They don’t do anything all by themselves. And recall we can rewrite an implication L → R as a disjunction: ¬L ∨ R.
- We need to apply an inference rule such as Modus Ponens to get an implication to deliver a conclusion concerning its left-hand or right-hand side. (For a double implication: either we need another inference rule similar to MP, or we have to go to considerably more complication, using something like (L ↔ R) → (L → R) and two applications of MP: one to get L → R, the other to get R.)

15 P/D Trade-Off, contd.
- Consider again inference rules such as (part of De Morgan):
  - (R1) IF-HAVE ¬(A ∧ B) THEN-HAVE ¬A ∨ ¬B
  - (R2) IF-HAVE ¬A ∨ ¬B THEN-HAVE ¬(A ∧ B)
- What if we used the following implications instead (or one double implication)?
  - (F1) ¬(A ∧ B) → (¬A ∨ ¬B)
  - (F2) (¬A ∨ ¬B) → ¬(A ∧ B)
- Suppose we’re given ¬( happy(Mike) ∧ rich(Peter) ). How do we get ¬happy(Mike) ∨ ¬rich(Peter)? First create the right instance of (F1): ¬( happy(Mike) ∧ rich(Peter) ) → ( ¬happy(Mike) ∨ ¬rich(Peter) ). Now apply MP to this and to the given formula.
- So we have the trade-off between (a) simply applying (the appropriate instance of) R1 directly to the given formula and (b) applying MP to (the appropriate instance of) F1 and the given formula.

16 Using Special De Morgan Rule vs. Using an Implication plus MP

Using the special De Morgan rule (R1):
- From ¬( happy(Mike) ∧ rich(Peter) ), R1 gives ¬happy(Mike) ∨ ¬rich(Peter).

Using an implication plus MP:
- From the instance ¬( happy(Mike) ∧ rich(Peter) ) → ( ¬happy(Mike) ∨ ¬rich(Peter) ) and the given ¬( happy(Mike) ∧ rich(Peter) ), MP gives ¬happy(Mike) ∨ ¬rich(Peter).
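The two routes can be mirrored directly in code. In this illustrative Python sketch (not from the lecture), route (a) applies a dedicated rewrite rule R1, while route (b) builds the right instance of the declarative implication F1 and hands it to a generic Modus Ponens rule; the tuple representation of formulas is an assumption made for illustration.

```python
# Two ways to get ¬happy(Mike) ∨ ¬rich(Peter) from ¬(happy(Mike) ∧ rich(Peter)).
# Formulas are nested tuples; illustrative sketch only.

given = ('not', ('and', ('happy', 'Mike'), ('rich', 'Peter')))

# Route (a): a dedicated inference rule R1 (procedural knowledge).
def r1_de_morgan(f):
    """IF-HAVE ¬(A ∧ B) THEN-HAVE ¬A ∨ ¬B."""
    assert f[0] == 'not' and f[1][0] == 'and'
    _, (_, a, b) = f
    return ('or', ('not', a), ('not', b))

# Route (b): a declarative implication F1 plus a generic Modus Ponens rule.
def f1_instance(a, b):
    """The instance ¬(A ∧ B) → (¬A ∨ ¬B) for the given A and B."""
    return ('implies', ('not', ('and', a, b)), ('or', ('not', a), ('not', b)))

def modus_ponens(implication, premise):
    """From A → B and A, derive B."""
    _, antecedent, consequent = implication
    assert antecedent == premise
    return consequent

print(r1_de_morgan(given))                          # route (a): one rule application
print(modus_ponens(                                 # route (b): instantiate F1, then MP
    f1_instance(('happy', 'Mike'), ('rich', 'Peter')), given))
```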

17 P/D Trade-Off, contd.
- So we have the trade-off between (a) simply applying (the appropriate instance of) R1 directly to the given formula and (b) applying MP to (the appropriate instance of) F1 and the given formula. Method (a) is simpler and somewhat less work, but means having an extra inference rule to design and to include in processing.
- Also note: instead of, say, the formula (∀x) (is-shop(x) → likes(S, x)) we could have the inference rule IF-HAVE is-shop(x) THEN-HAVE likes(S, x) [although this is domain-specific and unsound, and hence unlike traditional inference rules, rules of this sort do appear in special, advanced logics]. This inference rule is less work to apply (much as above), but doesn’t allow an inference from, say, ¬likes(S, G) to ¬is-shop(G). Moreover, the formula could itself be a result of other reasoning, and apply only under certain conditions, rather than being permanently in play.
- General lesson: more procedural approaches can be simpler in some ways and more efficient, but can be less flexible in some ways.

18 Logic compared/contrasted to Production Systems

19 Logic versus Production Systems
- Rules in PSs and inference rules in logic are at one level exactly the same sort of thing, and it’s hard to make a firm general distinction. You could in principle regard PSs as a form of logic. (“Logic” is not itself a logically watertight category.)
- PSs are much more procedural than logic tends to be. In PSs, long-term domain knowledge tends very strongly to be put into rules, whereas in logic such knowledge tends more to be put into formulas
  - and in the simpler, more traditional forms of logic it is always put into formulas.
- The rules in PSs are generally domain-specific and unsound, whereas in logic the rules are much more likely to be domain-neutral and sound
  - and in the simpler, more traditional forms of logic they must be domain-neutral and sound.
- Relatedly, rules in PSs are often regarded merely as default rules (or: as defeasible): their effect is not regarded as absolutely definite, and is subject to defeat (cancellation) by the effect of other rules. You need quite advanced forms of logic to get similar abilities. And disciplinary tradition causes much more anxiety over the mathematical underpinnings of such flexible logics than of PSs.

20 Reasoning in Logic or Production Systems viewed as a case of Search

21 Reasoning (in Logic or PSs) as Search
- Recall: in a search problem we have:
  - states, including the picking out of one as the initial state
  - operations – ways of converting a state into a (usually different) state
  - operation costs (or whole-path costs) – in search problems where operations correspond to actions in a “world” outside the search itself
  - one or more individually specified goal states, OR a goal condition
  - a specification of the precise task – e.g. return an optimal solution path; return a reasonably good solution path; return a goal conforming to the goal condition; return the best goal conforming to the goal condition; see whether one or more goals can be reached at all; etc.

22 The Case of Reasoning
- a state = a collection of propositions expressed in some way; a state could be the contents of a PS’s WM at some particular point
- initial state = contents of an initial WM or of (a portion of) an initial KB, plus possibly other things, such as “(sub)goals” in the reasoning sense (hypotheses to be investigated) or clauses for the negated goal (in the resolution proof method)
- operations = inference rules (or ways of assuming things, or mechanisms for simplifying a proposition, simplifying a state, doing other clean-up operations, etc.) – but NB we must take the effect of the rule (or whatever) to be a whole new state, not just the propositions that are emitted by the rule
- costs: (usually) not applicable in non-planning reasoning tasks, because operations (usually) do not represent actions in some world outside the search: the operations are the actions of interest (of course, doing an operation has a computational cost, and differences in this cost might come into decisions about what operations to try when)

23 PS or Logic: a sequence of states

Initial state: { holds3(Ego, B), distinct(K, S), in(Ego, K), … }
- R4 (a PS rule or logic inference rule) adds holds0(Ego, B)
- R2 adds in(B, K)
- R5 adds ¬in(B, S)

Each rule application yields a new state containing everything in the previous state plus the added proposition.
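Treating each rule application as producing a whole new state (rather than editing the old one) can be sketched as below. This is an added illustration, not the lecture's code: the rule names R2/R4/R5 follow the diagram, but their contents and the extra propositions are invented placeholders.

```python
# A state is an immutable set of propositions; applying a rule yields a new
# state rather than modifying the old one.  Rule contents below are invented
# placeholders for illustration.

from typing import FrozenSet, Tuple

Proposition = str
Rule = Tuple[FrozenSet[Proposition], FrozenSet[Proposition]]   # (premises, additions)

def apply_rule(state: FrozenSet[Proposition], rule: Rule) -> FrozenSet[Proposition]:
    premises, additions = rule
    assert premises <= state, "rule is not applicable in this state"
    return state | additions            # a whole new state; the old one is untouched

initial = frozenset({'in(Ego, K)', 'holds(Ego, B)', 'distinct(K, S)'})

r4 = (frozenset({'holds(Ego, B)'}),               frozenset({'held-earlier(Ego, B)'}))
r2 = (frozenset({'in(Ego, K)', 'holds(Ego, B)'}), frozenset({'in(B, K)'}))
r5 = (frozenset({'in(B, K)', 'distinct(K, S)'}),  frozenset({'not in(B, S)'}))

state = initial
for rule in (r4, r2, r5):
    state = apply_rule(state, rule)      # each step is a move to a new search state
print(sorted(state))
```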

24 The Case of Reasoning, contd.
- a goal condition is appropriate, rather than one or more individually specified goal states: the condition could be that the state includes a particular formula being investigated (i.e., a goal formula), or a particular type of formula (e.g., stating something the agent should do), or anything that looks interesting/good/bad/… in some way, or …
- the task:
  - Normally and mainly (unless doing planning), to determine whether a goal state can be reached.
  - Possibly also to return one or more solution paths.
  - Why? E.g.: provide an explanation to a human user; learn from the path things that could be useful for later reasoning tasks.
  - Note: planning can be viewed as a form of reasoning, and then a solution path will correspond to a world action path, and so is of course (an important part of) the answer.

25 The Case of Reasoning, contd.
- If an operation can never modify or delete anything in the state worked upon, and can never prevent any later application of any operation or affect its result, there’s never any need to backtrack or otherwise switch to another part of the search space.
  - Just keep accumulating propositions added by operations.
  - The choice issue is not choice between states but choice of what propositions to apply operations to (and how exactly to apply them to those propositions). It is usually impractical to apply all possible operations in all possible ways, i.e., we don’t generally expand a state fully.
- Otherwise, we can’t in general just keep accumulating, so we have the usual between-state choice issue as well as the above.
  - Example on the next slide.

26
Initial state: { may-be-married(S, P), may-be-married(S, G), is-policeman(P), is-gardener(G), … }
- ASSUME: are-married(S, P) gives: { [[ may-be-married(S, P) ]] (deleted), are-married(S, P), ¬may-be-married(S, G), is-policeman(P), is-gardener(G), … }
- ASSUME: are-married(S, G) gives: { ¬may-be-married(S, P), are-married(S, G), [[ may-be-married(S, G) ]] (deleted), is-policeman(P), is-gardener(G), … }
Each ASSUME prevents the other and deletes something in the state.
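A small illustrative sketch (not from the lecture) of why such states force between-state choice: each ASSUME deletes the precondition of the other, so the two resulting states live on separate branches of the search. The exact effect of ASSUME here is a reconstruction of the diagram above.

```python
# Two mutually exclusive ASSUME operations: each consumes (deletes) the
# may-be-married proposition it needs and rules out the other candidate,
# creating separate branches in the search space.  Illustrative sketch only.

initial = frozenset({'may-be-married(S, P)', 'may-be-married(S, G)',
                     'is-policeman(P)', 'is-gardener(G)'})

def assume_married(state, x, y, other):
    """ASSUME are-married(x, y): delete may-be-married(x, y) and rule out the other candidate."""
    assert f'may-be-married({x}, {y})' in state
    return ((state
             - {f'may-be-married({x}, {y})', f'may-be-married({x}, {other})'})
            | {f'are-married({x}, {y})', f'not may-be-married({x}, {other})'})

branch_p = assume_married(initial, 'S', 'P', other='G')   # ASSUME are-married(S, P)
branch_g = assume_married(initial, 'S', 'G', other='P')   # ASSUME are-married(S, G)

# In branch_p the precondition of the other ASSUME has been deleted, so the
# two operations block each other; the search must keep the branches separate.
print('may-be-married(S, G)' in branch_p)   # False
```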

27 More on Production Systems (separate slides by John Bullinaria)
- Recognize/Act cycle for forward chaining (p. 4)
- Need for a Reason Maintenance system (p. 12)
- Choice between Forwards and Backwards (pp. 13–14)
- Conflict Resolution (pp. 15 onwards, but excluding meta-rules)

