
CS 416 Artificial Intelligence, Lecture 10: Logic, Chapters 7 and 8.

Presentation on theme: "CS 416 Artificial Intelligence, Lecture 10: Logic, Chapters 7 and 8"— Presentation transcript:

1 CS 416 Artificial Intelligence, Lecture 10: Logic, Chapters 7 and 8

2 Homework Assignment The first homework assignment is out. Due October 20th.

3 Midterm October 25th. Covers material up through chapter 9 (excluding chapter 5).

4 Review Store information in a knowledge base. Backus-Naur Form (BNF). Use equivalences to isolate what you want. Use inference to relate sentences: Modus Ponens and And-Elimination.

5 Constructing a proof Proving is like searching: find a sequence of logical inference rules that leads to the desired result. Note the explosion of propositions; good proof methods ignore the countless irrelevant ones. The two inference rules above are sound but not complete!

6 Completeness Recall that a complete inference algorithm is one that can derive any sentence that is entailed. Resolution is a single inference rule that guarantees the ability to derive any sentence that is entailed, i.e. it is complete. It must be partnered with a complete search algorithm.

7 Resolution Resolution is a single inference rule. It guarantees the ability to derive any sentence that is entailed, i.e. it is complete. It must be partnered with a complete search algorithm.

8 Resolution Unit Resolution Inference Rule: if m and l_i are complementary literals, then from the clause (l_1 V … V l_k) and the unit clause m, infer the clause with l_i removed: (l_1 V … V l_(i-1) V l_(i+1) V … V l_k).

9 Resolution Inference Rule Also works with full clauses, not just unit clauses. But make sure each literal appears only once in the resolvent; we really just want the resolution to return A.
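The duplicate-literal point can be sketched in code. This is an illustrative sketch, not code from the lecture; representing clauses as Python frozensets of string literals with a `~` prefix for negation (a representation choice of mine) makes each literal appear only once automatically:

```python
def complement(lit):
    """Return the complementary literal: A <-> ~A."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """Resolve two clauses (frozensets of literals) on every pair of
    complementary literals; return the set of resolvent clauses.
    The set representation merges duplicate literals by itself."""
    resolvents = set()
    for lit in c1:
        if complement(lit) in c2:
            resolvents.add(frozenset((c1 - {lit}) | (c2 - {complement(lit)})))
    return resolvents
```

For example, resolving (A V B) with (A V ~B) yields the clause {A} rather than (A V A).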

10 Resolution and completeness Any complete search algorithm, applying only the resolution rule, can derive any conclusion entailed by any knowledge base in propositional logic. More specifically, resolution has refutation completeness: it is able to confirm or refute any given sentence, but unable to enumerate all true sentences.

11 What about "and" clauses? Resolution only applies to "or" clauses (disjunctions). Every sentence of propositional logic can be transformed into a logically equivalent conjunction of disjunctions of literals. Conjunctive Normal Form (CNF): a sentence expressed as a conjunction of disjunctions of literals.

12 CNF

13 An algorithm for resolution We wish to prove that KB entails α. That is, will some sequence of inferences create new sentences in the knowledge base equal to α? If some sequence of inferences created ~α, we would know the knowledge base could not also entail α, because α ^ ~α is impossible.

14 An algorithm for resolution To show KB entails α: must show (KB ^ ~α) is unsatisfiable, i.e. there is no possible way for KB to also hold with (not α). Proof by contradiction.

15 An algorithm for resolution Algorithm: (KB ^ ~α) is put in CNF. Each pair of clauses with complementary literals is resolved to produce a new clause, which is added to the KB (if novel). Cease if there are no new clauses to add (α is not entailed). Cease if the resolution rule derives the empty clause; the empty clause marks the contradiction α ^ ~α, so α is entailed.
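A minimal version of this refutation loop, assuming clauses are frozensets of `~`-prefixed string literals and the query is a single literal (both representation choices are mine, not the lecture's):

```python
from itertools import combinations

def complement(lit):
    """Return the complementary literal: A <-> ~A."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def pl_resolution(kb_clauses, query_literal):
    """Refutation: add ~query to the KB, then resolve clause pairs until
    the empty clause appears (query entailed) or a fixed point is
    reached with no new clauses (query not entailed)."""
    clauses = set(kb_clauses) | {frozenset({complement(query_literal)})}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for lit in c1:
                if complement(lit) in c2:
                    resolvent = frozenset(
                        (c1 - {lit}) | (c2 - {complement(lit)}))
                    if not resolvent:
                        return True   # empty clause: contradiction found
                    new.add(resolvent)
        if new <= clauses:
            return False              # no new clauses: not entailed
        clauses |= new
```

On the little-green-Marvin KB that appears later in the lecture ({G}, {L}, {~L V ~G V M}), the query M resolves to the empty clause, so M is entailed.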

16 Example of resolution Proof that there is not a pit in [1,2]: ~P1,2. KB ^ P1,2 leads to the empty clause, therefore ~P1,2 is true.

17 Formal Algorithm

18 Horn Clauses Horn Clause: a disjunction of literals where at most one is positive. (~a V ~b V ~c V d) is a Horn clause. (~a V b V c V ~d) is not a Horn clause (it has two positive literals).

19 Horn Clauses Why? Every Horn clause can be written as an implication: (a conjunction of positive literals) => (a single positive literal). Inference algorithms can use forward chaining and backward chaining, and entailment computation is linear in the size of the KB.

20 Horn Clauses Can be written as a special implication: (~a V ~b V c) becomes (a ^ b) => c. (~a V ~b V c) == (~(a ^ b) V c) … de Morgan. (~(a ^ b) V c) == ((a ^ b) => c) … implication elimination.
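This rewriting can be mechanized. A sketch under the same string-literal representation assumption (negative literals carry a `~` prefix; a clause with no positive literal concludes False):

```python
def horn_to_implication(clause):
    """Rewrite a Horn clause (an iterable of literals, at most one of
    which is positive) as (premises, conclusion):
    (~a V ~b V c)  ->  ({'a', 'b'}, 'c')."""
    positives = [l for l in clause if not l.startswith('~')]
    assert len(positives) <= 1, "not a Horn clause"
    premises = {l[1:] for l in clause if l.startswith('~')}
    return premises, (positives[0] if positives else False)
```

The negated literals become the conjunction of premises, exactly the de Morgan plus implication-elimination steps above run in reverse.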

21 Forward Chaining Does KB (Horn clauses) entail q (a single symbol)? Keep track of known facts (positive literals). If the premises of an implication are known, add the conclusion to the known set of facts. Repeat the process until no further inferences can be made or q is added.
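A sketch of that loop, with each Horn rule as a (premises, conclusion) pair; the representation is mine, not the lecture's:

```python
def forward_chain(rules, facts, q):
    """rules: list of (premises, conclusion) pairs, premises a set of
    symbols. facts: initially known symbols. Fire every rule whose
    premises are all known, adding its conclusion, until q is derived
    or no rule adds anything new."""
    known = set(facts)
    changed = True
    while changed:
        if q in known:
            return True
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return q in known
```

Each symbol is added at most once, so the loop makes at most one pass per derived fact, matching the linear-time claim for Horn KBs (with suitable bookkeeping).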

22 Forward Chaining

23 Properties Sound. Complete: all entailed atomic sentences will be derived. Data driven: start with what we know and derive new info until we discover what we want.

24 Backward Chaining Start with what you want to know, a query (q). Look for implications that conclude q, then look at the premises of those implications, then look for implications that conclude those premises… Goal-directed reasoning: can be a less complex search than forward chaining.
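The goal-directed recursion can be sketched with the same (premises, conclusion) rule representation used above (my assumption); a `seen` set guards against looping on circular rules:

```python
def backward_chain(rules, facts, goal, seen=frozenset()):
    """The goal holds if it is a known fact, or some rule concludes it
    and all of that rule's premises hold (recursively)."""
    if goal in facts:
        return True
    if goal in seen:          # already pursuing this goal: avoid a loop
        return False
    for premises, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(rules, facts, p, seen | {goal})
                for p in premises):
            return True
    return False
```

Only rules relevant to the query are ever touched, which is why this search can be smaller than forward chaining's.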

25 Making it fast Example: the problem to solve is in CNF. Is Marvin a Martian, i.e. M == 1 (true), given: Marvin is green (G = 1); Marvin is little (L = 1); (little and green) implies Martian, (L ^ G) => M, which in CNF is ~(L ^ G) V M, i.e. ~L V ~G V M. Proof by contradiction: are there true/false values for G, L, and M that are consistent with the knowledge base and Marvin not being a Martian? G ^ L ^ (~L V ~G V M) ^ ~M == empty?

26 Searching for variable values Want to find values such that G ^ L ^ (~L V ~G V M) ^ ~M == 0. Randomly consider all true/false assignments to the variables until we exhaust them all or find a match (model checking): (G, L, M) = (1, 0, 0)… no; = (0, 1, 0)… no; = (0, 0, 0)… no; = (1, 1, 0)… no. Alternatively…

27 Backtracking Algorithm Davis-Putnam Algorithm (DPLL) Search through possible assignments to (G, L, M) via depth-first search: (0, 0, 0) to (0, 0, 1) to (0, 1, 0)… Each clause of the CNF must be true; terminate consideration when a clause evaluates to false. Use heuristics to reduce repeated computation of propositions: early termination, the pure symbol heuristic, and the unit clause heuristic. (Figure: depth-first search tree over G, L, M, with branches culled from the tree.)
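A stripped-down DPLL sketch (my simplification: early termination via clause simplification is kept, while the pure-symbol and unit-clause heuristics are omitted for brevity), again over frozensets of `~`-prefixed string literals:

```python
def value_of(lit, assignment):
    """Truth value of a literal under a partial assignment, or None."""
    sym = lit.lstrip('~')
    if sym not in assignment:
        return None
    return assignment[sym] != lit.startswith('~')

def dpll(clauses, assignment=None):
    """Depth-first search over assignments. Returns a satisfying
    assignment, or None if the clauses are unsatisfiable."""
    assignment = assignment or {}
    simplified = []
    for clause in clauses:
        if any(value_of(l, assignment) is True for l in clause):
            continue                       # clause already satisfied
        remaining = frozenset(l for l in clause
                              if value_of(l, assignment) is None)
        if not remaining:
            return None                    # clause falsified: cull branch
        simplified.append(remaining)
    if not simplified:
        return assignment                  # every clause satisfied
    sym = next(iter(simplified[0])).lstrip('~')   # pick a branch symbol
    for val in (True, False):
        result = dpll(simplified, {**assignment, sym: val})
        if result is not None:
            return result
    return None
```

On the Marvin clauses, adding ~M makes the formula unsatisfiable (so M is entailed), while without ~M the search finds the model G = L = M = true.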

28 Searching for variable values Other ways to find (G, L, M) assignments for G ^ L ^ (~L V ~G V M) ^ ~M == 0. Simulated annealing (WalkSAT): start with an initial guess, e.g. (0, 1, 1). With each iteration, pick an unsatisfied clause and flip one symbol in the clause. The evaluation metric is the number of clauses that evaluate to true; move "in the direction" of guesses that cause more clauses to be true. There are many local minima, so use lots of randomness.
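A sketch of this local search in the standard WalkSAT style: with probability p flip a random symbol from the chosen unsatisfied clause, otherwise flip greedily to maximize the number of satisfied clauses. The parameter values and the seeded RNG are my choices for reproducibility, not from the lecture:

```python
import random

def walksat(clauses, max_flips=10_000, p=0.5, seed=0):
    """Local search for a satisfying assignment. Returns a model, or
    None after max_flips (which proves nothing; see the next slide)."""
    rng = random.Random(seed)
    symbols = sorted({l.lstrip('~') for c in clauses for l in c})
    model = {s: rng.choice([True, False]) for s in symbols}

    def satisfied(clause, m):
        return any(m[l.lstrip('~')] != l.startswith('~') for l in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, model)]
        if not unsat:
            return model
        clause = rng.choice(unsat)
        syms = sorted(l.lstrip('~') for l in clause)
        if rng.random() < p:
            sym = rng.choice(syms)         # random walk step
        else:                              # greedy step
            sym = max(syms, key=lambda s: sum(
                satisfied(c, {**model, s: not model[s]}) for c in clauses))
        model[sym] = not model[sym]
    return None
```

On the satisfiable Marvin clauses ({G}, {L}, {~L V ~G V M}) the only model is G = L = M = true, and the search converges to it almost immediately.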

29 WalkSAT termination How do you know when simulated annealing is done? There is no way to know with certainty that an answer is not possible: it could have been bad luck, or there may really be no answer. Establish a max number of iterations and go with the best answer found by that point.

30 So how well do these work? Think about it this way: 16 of the 32 possible assignments are models of (satisfy) this sentence. Therefore, about 2 random guesses should find a solution, and WalkSAT and DPLL should work quickly.

31 What about more clauses? If the number of symbols (variables) stays the same but the number of clauses increases, there are more ways for an assignment to fail (on any one clause), so more searching through possible assignments is needed. Let's create a ratio, m/n, to measure #clauses / #symbols. We expect large m/n to cause slower solution.

32 What about more clauses Higher m/n means fewer assignments will work, and if fewer assignments work, it is harder for DPLL and WalkSAT.

33 Combining it all 4x4 Wumpus World The "physics" of the game: at least one wumpus on the board; at most one wumpus on the board (for any two squares, at least one is wumpus-free), giving n(n-1)/2 rules like ~W1,1 V ~W1,2. Total of 155 sentences containing 64 distinct symbols.
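The "at most one wumpus" encoding can be generated mechanically: one binary clause per pair of squares, stating that at least one of the two is wumpus-free. A sketch (the W literal naming is mine):

```python
from itertools import combinations

def at_most_one_wumpus(n=4):
    """For an n x n board, emit one clause ~W(i,j) V ~W(k,l) per pair
    of distinct squares: no two squares can both hold the wumpus."""
    squares = [(i, j) for i in range(1, n + 1) for j in range(1, n + 1)]
    return [frozenset({f'~W{i},{j}', f'~W{k},{l}'})
            for (i, j), (k, l) in combinations(squares, 2)]
```

For the 4x4 board there are 16 squares, so 16*15/2 = 120 such pairwise clauses, which is the bulk of the 155 sentences above.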

34 Wumpus World Inefficiencies arise as the world becomes large: the knowledge base must expand if the size of the world expands. It would be preferable to have sentences that apply to all squares. We brought this subject up last week; the next chapter addresses this issue.

35 Chapter 8: First-Order Logic

36 What do we like about propositional logic? It is: Declarative: relationships between variables are described, with a method for propagating those relationships. Expressive: it can represent partial information using disjunction. Compositional: if A means foo and B means bar, A ^ B means foo and bar.

37 What don’t we like about propositional logic? Lacks expressive power to describe the environment concisely Separate rules for every square/square relationship in Wumpus worldSeparate rules for every square/square relationship in Wumpus world Lacks expressive power to describe the environment concisely Separate rules for every square/square relationship in Wumpus worldSeparate rules for every square/square relationship in Wumpus world

38 Natural Language English appears to be expressive: "squares adjacent to pits are breezy." But natural language is a medium of communication, not a knowledge representation. Much of the information and logic conveyed by language depends on context; information exchange is not well defined; it is not compositional (combining sentences may mean something different); and it is ambiguous.

39 But we borrow representational ideas from natural language Natural language syntax: nouns and noun phrases refer to objects (people, houses, cars). Properties and verbs refer to object relations (red, round, nearby, eaten). Some relationships are clearly defined functions, where there is only one output for a given input (best friend, first thing, plus). We build first-order logic around objects and relations.

40 Ontology A "specification of a conceptualization": a description of the objects and relationships that can exist. Propositional logic had only true/false relationships; first-order logic has many more. The ontological commitment of languages differs: how much can you infer from what you know? Temporal logic, for example, defines additional ontological commitments because of timing constraints.


