Propositional Classical Deduction

Presentation on theme: "Propositional Classical Deduction"— Presentation transcript:

1 Propositional Classical Deduction
Jacques Robin

2 Outline
Classical Propositional Logic (CPL) Syntax: Full CPL, Implicative Normal Form CPL (INFCPL), Horn CPL (HCPL)
CPL Semantics: cognitive and Herbrand interpretations, models
CPL Reasoning:
FCPL reasoning: truth-table based model checking, multiple inference rules
INFCPL reasoning: resolution and factoring, DPLL, WalkSAT
HCPL reasoning: forward chaining, backward chaining

3 Full Classical Propositional Logic (FCPL): syntax
[Slide figure: UML metamodel of FCPL syntax, illustrated on a nested example formula over the constants a, b, c, d. An FCPLFormula is either a ConstantSymbol or the application of a Functor (an FCPLConnective) to 1..2 Arg sub-formulas; FCPLUnaryConnective has Connective: enum{¬}, FCPLBinaryConnective has Connective: enum{∧, ∨, ⇒, ⇔}.]

4 CPL Normal Forms
Implicative Normal Form (INF): a conjunction of clauses, each of the form Premise ⇒ Conclusion, where the Premise is a conjunction of ConstantSymbols and the Conclusion a disjunction of ConstantSymbols.
[Slide figure: UML metamodel with INFCPLFormula (Functor = ∧) composed of INFCPLClauses (Functor = ⇒), each linking an INFCPLLHS Premise (Functor = ∧) and an INFCPLRHS Conclusion (Functor = ∨) over ConstantSymbols.]
Conjunctive Normal Form (CNF): a conjunction of clauses, each a disjunction of Literals, where a Literal is a ConstantSymbol or a NegativeLiteral (Functor = ¬) over a ConstantSymbol.
[Slide figure: UML metamodel with CNFCPLFormula (Functor = ∧) composed of CNFCPLClauses (Functor = ∨) composed of Literals.]
Semantic equivalence between the two clause forms: ¬a ∨ ¬b ∨ c ∨ d ⇔ ¬(a ∧ b) ∨ c ∨ d ⇔ (a ∧ b) ⇒ c ∨ d

5 Horn CPL
[Slide figure: the INF and CNF metamodels of the previous slide, specialized with OCL constraints into three kinds of Horn clause.]
In Implicative Normal Form (INF):
IntegrityConstraint (context IntegrityConstraint inv IC: Conclusion.ConstantSymbol = false), e.g. a ∧ b ∧ c ⇒ false
DefiniteClause (context DefiniteClause inv DC: Conclusion.ConstantSymbol <> false), e.g. a ∧ b ∧ c ⇒ d
Fact (context Fact inv Fact: Premise->size() = 1 and Premise->ConstantSymbol = true), e.g. true ⇒ d
In Conjunctive Normal Form (CNF):
IntegrityConstraint (context IntegrityConstraint inv IC: Literal->forAll(oclIsKindOf(NegativeLiteral))), e.g. ¬a ∨ ¬b ∨ ¬c
DefiniteClause (context DefiniteClause inv DC: Literal->select(oclIsKindOf(ConstantSymbol))->size() = 1), e.g. ¬a ∨ ¬b ∨ ¬c ∨ d
Fact (context Fact inv Fact: Literal->forAll(oclIsKindOf(ConstantSymbol))), e.g. d

6 FCPL semantics: Cognitive and Herbrand Interpretations
[Slide figure: metamodel relating FCPL syntax to its two semantics.
Syntax (entered as input to the inference engine by the knowledge engineer): an FCPLFormula is a ConstantSymbol or has a Functor (FCPLConnective) applied to 1..2 Args; FCPLUnaryConnective Connective: enum{¬}; FCPLBinaryConnective Connective: enum{∧, ∨, ⇒, ⇔}.
FCPLCognitiveInterpretation (a convention defined by the knowledge engineer):
ConstantMapping maps constant symbols to AtomicDomainProperties, e.g. csm1(pitIn12) = "there is a pit in (1,2)", csm2(pitIn12) = "John is King of England".
FormulaMapping maps formulas to CompoundDomainProperties, e.g. fm1(pitIn12 ∧ ¬pitIn11) = "there is a pit in (1,2) and no pit in (1,1)", fm2(pitIn12 ∧ ¬pitIn11) = "John is King of England and is not King of France".
The TruthValue (Value: enum{true, false}) of an AtomicDomainProperty is known by the knowledge engineer; the knowledge engineer derives the compound case so that CompoundDomainProperty.TruthValue = FCPLFormula.TruthValue.
FCPLHerbrandInterpretation and FCPLHerbrandModel: a ConstantValuation assigns truth values directly to constant symbols, and the FormulaValuation is defined from the Args' truth values and the FCPL truth tables.]

7 Entailment and models
Ic(f): cognitive interpretation of formula f; Ih(f): Herbrand interpretation of formula f
Herbrand model: a Herbrand interpretation Ih(f) of formula f is a Herbrand model Mh(f) iff f is true in Ih(f)
Entailment ⊨: f ⊨ f' iff, for every cognitive interpretation Ic, f true in Ic(f) implies f' true in Ic(f')
Logical equivalence ⇔: f ⇔ f' iff f ⊨ f' and f' ⊨ f
f is valid (a tautology) iff it is true in every Ih(f), e.g. a ∨ ¬a
f is satisfiable iff it is true in at least one Ih(f)
f is unsatisfiable (a contradiction) iff it is false in every Ih(f), e.g. a ∧ ¬a
Theorems:
f ⊨ f' iff Mh(f) ⊆ Mh(f')
f ⊨ f' iff f ⇒ f' is valid
f ⊨ f' iff f ∧ ¬f' is unsatisfiable (since f ⇒ f' ⇔ ¬f ∨ f' ⇔ ¬(f ∧ ¬f'))

8 Cognitive vs. Herbrand Semantics
Cognitive semantics:
A symbolic convention that depends on the knowledge engineer and the application domain
Truth values are associated with constant symbols and formulas indirectly, via the knowledge engineer's beliefs about atomic and compound properties of the real-world domain being represented
Allows deductively deriving new properties n1, ..., ni about the entities of this domain from other, given properties g1, ..., gj
Herbrand semantics:
A syntactic convention independent of the knowledge engineer and the application domain
Truth values are associated directly with constant symbols and formulas
Relies on the connective truth tables to deduce the truth value of a formula f from those of its constant symbols
Allows testing the soundness and completeness of an inference engine's reasoning independently of any specific knowledge base or real-world referent

9 Logic-Based Agent
Given B as axioms, is formula f a theorem of L? i.e., does B ⊨L f?
[Slide figure: agent architecture. Sensors perceive the Environment; the agent Tells, Asks and Retracts formulas of a Knowledge Base B (Domain Model in Logic L) via an Inference Engine (Theorem Prover for Logic L), whose answers drive the Actuators.]
Relies on one of:
Mh(f) ⊆ Mh(f')? (model checking)
f ∧ ¬f' satisfiable? (boolean CSP search)
f ∧ ¬f' unsatisfiable? (refutation proof)
Strengths:
Reuses results and insights about correct reasoning that have matured over 23 centuries
The semantics (meaning) of a knowledge base can be represented formally as syntax, a key step towards automating reasoning
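To make the Tell/Ask/Retract interface of the figure concrete, here is a minimal Python sketch; PropositionalKB and its method names are assumptions of this example, not the slides' code, and the formula representation is left abstract. The `entails` parameter stands for any sound and complete FCPL prover, such as those sketched on the following slides.

```python
class PropositionalKB:
    """Knowledge base B of the figure, holding a domain model in logic L."""

    def __init__(self):
        self.formulas = []            # the formulas told so far

    def tell(self, formula):          # assert a new formula (e.g. a percept)
        self.formulas.append(formula)

    def retract(self, formula):       # withdraw a previously told formula
        self.formulas.remove(formula)

    def ask(self, query, entails):    # B |=L query ?  delegated to a theorem prover
        return entails(self.formulas, query)
```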

10 Model Checking: Truth-Table Enumeration
kb = persistentKb ∧ volatileKb = pf1 ∧ pf2 ∧ pf3 ∧ vf1 ∧ vf2 = ¬p11 ∧ (b11 ⇔ p12 ∨ p21) ∧ (b21 ⇔ p22 ∨ p31 ∨ p11) ∧ ¬b11 ∧ b21
Queries: q1 = ¬pit12, q2 = ¬pit22, q3 = ¬pit31
Results: kb ⊨ q1, kb ⊭ q2, kb ⊭ q3
[Slide figure: the cave squares (1,1), (2,1), (3,1), (2,2), (1,2) annotated with V (visited), A (agent), B (breeze) and P? (pit?), and a truth table with one column per symbol b11, b21, p11, p12, p21, p22, p31, per conjunct pf1, pf2, pf3, vf1, vf2, and per formula kb, q1, q2, q3, with rows enumerating all t/f assignments.]
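As an illustration of the truth-table enumeration above, here is a minimal Python sketch, not taken from the slides: the KB conjuncts are encoded as Python lambdas over a model dictionary, and tt_entails (a hypothetical helper) enumerates all 2^7 interpretations of the seven symbols.

```python
from itertools import product

def tt_entails(kb, query, symbols):
    """kb |= query iff query holds in every interpretation satisfying kb
    (truth-table enumeration over all 2^len(symbols) assignments)."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(f(model) for f in kb) and not query(model):
            return False                 # a model of kb falsifies the query
    return True

symbols = ["b11", "b21", "p11", "p12", "p21", "p22", "p31"]
kb = [lambda m: not m["p11"],                                   # pf1: ~p11
      lambda m: m["b11"] == (m["p12"] or m["p21"]),             # pf2: b11 <=> p12 v p21
      lambda m: m["b21"] == (m["p22"] or m["p31"] or m["p11"]), # pf3: b21 <=> p22 v p31 v p11
      lambda m: not m["b11"],                                   # vf1: ~b11
      lambda m: m["b21"]]                                       # vf2: b21
print(tt_entails(kb, lambda m: not m["p12"], symbols))  # True:  kb |= q1 = ~p12
print(tt_entails(kb, lambda m: not m["p22"], symbols))  # False: kb |/= q2 = ~p22
```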

11 FCPL inference rules
Bi-directional (logical equivalences):
R1: f ∧ g ⇔ g ∧ f
R2: f ∨ g ⇔ g ∨ f
R3: (f ∧ g) ∧ h ⇔ f ∧ (g ∧ h)
R4: (f ∨ g) ∨ h ⇔ f ∨ (g ∨ h)
R5: ¬¬f ⇔ f
R6: f ⇒ g ⇔ ¬g ⇒ ¬f
R7: f ⇒ g ⇔ ¬f ∨ g
R8: f ⇔ g ⇔ (f ⇒ g) ∧ (g ⇒ f)
R9: ¬(f ∧ g) ⇔ ¬f ∨ ¬g
R10: ¬(f ∨ g) ⇔ ¬f ∧ ¬g
R11: f ∧ (g ∨ h) ⇔ (f ∧ g) ∨ (f ∧ h)
R12: f ∨ (g ∧ h) ⇔ (f ∨ g) ∧ (f ∨ h)
R13: f ∨ f ⇔ f %factoring
Directed (logical entailments):
R14: f ⇒ g, f ⊨ g %modus ponens
R15: f ⇒ g, ¬g ⊨ ¬f %modus tollens
R16: f ∧ g ⊨ f %and-elimination
R17: l1 ∨ ... ∨ li ∨ ... ∨ lk, m1 ∨ ... ∨ mj-1 ∨ ¬li ∨ mj+1 ∨ ... ∨ mk ⊨ l1 ∨ ... ∨ li-1 ∨ li+1 ∨ ... ∨ lk ∨ m1 ∨ ... ∨ mj-1 ∨ mj+1 ∨ ... ∨ mk %resolution

12 Multiple inference rule application
Idea: KB ⊨ f ?
KB0 = KB
Apply an inference rule: KBi ⊨ g
Update KBi+1 = KBi ∪ {g}
Iterate until f ∈ KBk, or until f ∉ KBn and KBn+1 = KBn
Transforms proving KB ⊨ f into a search problem; at each step: which inference rule to apply, and to which sub-formula of the KB?
Example proof:
KB0 = ¬P1,1 ∧ (B1,1 ⇔ P1,2 ∨ P2,1) ∧ (B2,1 ⇔ P1,1 ∨ P2,2 ∨ P3,1) ∧ ¬B1,1 ∧ B2,1
Query: ¬(P1,2 ∨ P2,1)
Cognitive interpretation: BX,Y: the agent felt a breeze at coordinate (X,Y); PX,Y: the agent knows there is a pit at coordinate (X,Y)
Apply R8 to B1,1 ⇔ P1,2 ∨ P2,1: KB1 = KB0 ∪ {(B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)}
Apply R6 to the last sub-formula: KB2 = KB1 ∪ {¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)}
Apply R14 to ¬B1,1 and the last sub-formula: KB3 = KB2 ∪ {¬(P1,2 ∨ P2,1)}

13 Resolution and factoring
Repeated application of only two inference rules: resolution and factoring
More efficient than using multiple inference rules: the search space has a far smaller branching factor
Refutation proof: derive false from KB ∧ ¬Query
Requires both KB and Query in normal form (conjunctive or implicative)
Example proof in conjunctive normal form:
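The slide presents its example proof as a figure; as a stand-in, here is a minimal Python sketch of refutation by resolution (resolve and pl_resolution are illustrative names, not the slides' code), run on the CNF clauses of the running example that are relevant to the query ¬P1,2, together with the negated query.

```python
def resolve(c1, c2):
    """All binary resolvents of two clauses. A clause is a frozenset of literal
    strings; negation is written with a leading '~'. Factoring is implicit:
    the set representation merges duplicate literals."""
    resolvents = []
    for lit in c1:
        complement = lit[1:] if lit.startswith("~") else "~" + lit
        if complement in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {complement})))
    return resolvents

def pl_resolution(clauses):
    """Refutation proof: True iff the empty clause is derivable, i.e. the
    clause set is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci == cj:
                    continue
                for r in resolve(ci, cj):
                    if not r:            # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:               # fixed point without a contradiction
            return False
        clauses |= new

# KB clauses relevant to the query ~p12, plus the negated query p12:
cnf = [frozenset({"~b11", "p12", "p21"}),   # from b11 <=> p12 v p21
       frozenset({"~p12", "b11"}),
       frozenset({"~p21", "b11"}),
       frozenset({"~b11"}),                 # percept: no breeze in (1,1)
       frozenset({"p12"})]                  # negated query
print(pl_resolution(cnf))                   # True: KB |= ~p12
```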

14 Resolution strategies
Search heuristics for resolution-based theorem proving
Two heuristic classes: choice of the clause pair to resolve inside the current KB, and choice of the literals to resolve inside the chosen pair
Unit preference:
Prefer pairs in which one clause is a unit clause (a single literal)
Rationale: generates smaller clauses and largely eliminates the choice of literals within the pair
Unit resolution: turns the preference into a requirement
Set of support:
Define a small subset of the initial clauses as the initial "set of support"
At each step, only consider clause pairs with at least one member from the current set of support, and add the step's result to the set of support
Efficiency depends on the cleverness of the initial set of support
Common domain-independent initial set of support: the negated query
Beyond efficiency, results in easier-to-understand, goal-directed proofs
Linear resolution: at each step only consider pairs (f, g) where f is either (a) in KB0, or (b) an ancestor of g in the proof tree
Input resolution: specialization of linear resolution excluding case (b); generates spine-shaped proof trees

15 FCPL theorem proving as boolean CSP exhaustive global backtracking search
Put f = KB ∧ ¬Query in conjunctive normal form and try to prove it unsatisfiable
Consider each constant symbol in f as a boolean variable and each clause in f as a constraint on these variables
Solve the underlying boolean CSP problem by exhaustive global backtracking search over all complete variable assignments, showing that none satisfies all the constraints in f
Initial state: empty assignment of the pre-ordered variables
Search operator: tentative assignment of the next yet-unassigned variable Li
Apply the truth-table definitions to propagate the constraints in which Li appears (the clauses of f involving Li)
If propagation violates some constraint, backtrack on Li
If propagation satisfies all constraints: iterate on Li+1; if Li was the last variable in f, fail: KB ∧ ¬Query is satisfiable, and thus KB ⊭ Query
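A minimal Python sketch of this backtracking search, assuming each clause of f is encoded as a list of (symbol, sign) pairs; satisfiable and entails are illustrative names, not the slides' code. The example on the next slide reuses satisfiable().

```python
def satisfiable(clauses, symbols, model=None):
    """Exhaustive backtracking over complete assignments of the pre-ordered
    symbols. A clause (list of (symbol, sign) literals) is violated once all
    its symbols are assigned and no literal is satisfied."""
    model = model or {}
    for clause in clauses:                        # simplest constraint propagation
        if (all(s in model for s, _ in clause)
                and not any(model[s] == sign for s, sign in clause)):
            return False                          # a constraint is violated: backtrack
    if len(model) == len(symbols):
        return True                               # complete assignment satisfies f
    var = symbols[len(model)]                     # next yet-unassigned variable
    return (satisfiable(clauses, symbols, {**model, var: True}) or
            satisfiable(clauses, symbols, {**model, var: False}))

def entails(kb_cnf, negated_query_cnf, symbols):
    """KB |= Query iff KB ^ ~Query is unsatisfiable."""
    return not satisfiable(kb_cnf + negated_query_cnf, symbols)
```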

16 FCPL theorem proving as boolean CSP backtracking search: example
Variables = {B1,1, P1,2, P2,1}
Constraints = {¬B1,1, P1,2 ⇒ B1,1, P2,1 ⇒ B1,1, B1,1 ⇒ P1,2 ∨ P2,1, P1,2}
Search trace (V = assignments to the variables in the order above, C = truth of the five constraints, ? = not yet determined):
V = [?,?,?]  C = [?,?,?,?,?]
V = [0,?,?]  C = [1,?,?,1,?]
V = [1,?,?]  C = [0,?,?,?,?]  false, backtrack
V = [0,0,?]  C = [1,1,?,1,0]  false, backtrack
V = [0,1,?]  C = [1,0,?,1,1]  false, backtrack
Every branch fails, so KB ∧ ¬Query is unsatisfiable and KB ⊨ ¬P1,2
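With the encoding of the previous sketch, this instance looks as follows (an illustrative usage, not the slides' code); the constraints are written in CNF:

```python
symbols = ["b11", "p12", "p21"]
clauses = [[("b11", False)],                                  # ~b11
           [("p12", False), ("b11", True)],                   # p12 => b11
           [("p21", False), ("b11", True)],                   # p21 => b11
           [("b11", False), ("p12", True), ("p21", True)],    # b11 => p12 v p21
           [("p12", True)]]                                   # negated query p12
print(satisfiable(clauses, symbols))   # False: all branches fail, so KB |= ~p12
```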

17 DPLL algorithm
General-purpose CSP backtracking search is very inefficient for proving large FCPL theorems
Davis, Putnam, Logemann & Loveland algorithm (DPLL):
Specialization of CSP backtracking search
Exploits the specificity of FCPL theorem proving recast as CSP search to apply search heuristics that preserve completeness
Concepts:
Pure symbol S: yet-unassigned variable that occurs positively in all clauses or negated in all clauses
Unit clause C: clause with all but one literal already assigned to false
Heuristics:
Pure symbol heuristic: assign pure symbols first
Unit propagation: assign unit-clause literals first; this recursively generates new unit clauses
Early termination: after assigning Li = true, propagate Cj = true for every clause Cj with Li ∈ Cj (avoiding truth-table look-ups), and prune the sub-tree below any node where some clause Cj = false
Clause caching
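A minimal Python sketch of DPLL combining the early-termination, pure-symbol and unit-propagation heuristics above (clause caching is omitted); it reuses the (symbol, sign) clause encoding of slide 15 and all names are illustrative. As before, calling it on the clauses of KB ∧ ¬Query and getting False back means KB ⊨ Query.

```python
def dpll(clauses, symbols, model):
    """DPLL sketch over clauses encoded as lists of (symbol, sign) pairs.
    Returns True iff the clauses are satisfiable under some extension of model."""
    # early termination: drop clauses already true, fail on clauses already false
    unresolved = []
    for clause in clauses:
        if any(model.get(s) == sign for s, sign in clause):
            continue                                   # clause satisfied
        remaining = [(s, sign) for s, sign in clause if s not in model]
        if not remaining:
            return False                               # clause falsified: prune
        unresolved.append(remaining)
    if not unresolved:
        return True                                    # every clause satisfied
    # pure symbol heuristic: a symbol occurring with a single polarity
    polarity = {}
    for clause in unresolved:
        for s, sign in clause:
            polarity.setdefault(s, set()).add(sign)
    for s, signs in polarity.items():
        if len(signs) == 1:
            return dpll(clauses, symbols, {**model, s: next(iter(signs))})
    # unit propagation: a clause with a single yet-unassigned literal
    for clause in unresolved:
        if len(clause) == 1:
            s, sign = clause[0]
            return dpll(clauses, symbols, {**model, s: sign})
    # otherwise branch on the next yet-unassigned symbol
    s = next(x for x in symbols if x not in model)
    return (dpll(clauses, symbols, {**model, s: True}) or
            dpll(clauses, symbols, {**model, s: False}))
```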

18 Satisfiability of a formula as heuristic local stochastic boolean CSP search
DPLL is not restricted to proving entailment by proving unsatisfiability: it can also prove satisfiability of an FCPL formula
Many problems in computer science and AI can be recast as a satisfiability problem
Heuristic local stochastic boolean CSP search is more space-scalable than DPLL for satisfiability
However, since it is not an exhaustive search, it cannot prove unsatisfiability (and thus entailment), only strongly suspect it
WalkSAT:
Initial state: random assignment of the pre-ordered variables
Search operator: pick a yet-unsatisfied clause and one literal in it, and flip that literal's assignment
At each step, randomly choose between two picking strategies:
Pick the literal whose flip results in the steepest decrease in the number of yet-unsatisfied clauses
Pick a literal at random
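A minimal Python sketch of WalkSAT as described above, using the same clause encoding as the previous sketches; the function name, the p = 0.5 noise probability and the flip budget are assumptions of this example. When it returns None it has merely failed to find a model, which only suggests unsatisfiability.

```python
import random

def walksat(clauses, p=0.5, max_flips=10_000):
    """Local stochastic search over complete assignments. Returns a satisfying
    model if one is found within max_flips, else None."""
    symbols = {s for clause in clauses for s, _ in clause}
    model = {s: random.choice([True, False]) for s in symbols}  # random initial state
    for _ in range(max_flips):
        unsatisfied = [c for c in clauses
                       if not any(model[s] == sign for s, sign in c)]
        if not unsatisfied:
            return model                              # all clauses satisfied
        clause = random.choice(unsatisfied)           # pick a yet-unsatisfied clause
        if random.random() < p:
            sym = random.choice(clause)[0]            # random pick
        else:
            # greedy pick: flip the symbol minimising the number of unsatisfied clauses
            def unsat_after_flip(sym):
                flipped = {**model, sym: not model[sym]}
                return sum(1 for c in clauses
                           if not any(flipped[s] == sign for s, sign in c))
            sym = min((s for s, _ in clause), key=unsat_after_flip)
        model[sym] = not model[sym]                   # flip the chosen literal
    return None
```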

19 Direct vs. indirect use of search for agent reasoning
[Slide figure: two development architectures compared.
Direct use of search: the Agent Application Developer encodes the Agent Decision Problem as a Domain-Specific Search Model (state data structure, successor function, goal function, heuristic function) running on a Domain-Independent Search Algorithm.
Indirect use of search: the Agent Application Developer only writes a Domain-Specific Knowledge Base Model as logic formulas (e.g. true ⇒ d, f ∧ g ∧ h ⇒ c, ...); the Reasoning Component Developer provides a Domain-Independent Inference Engine whose internal Search Model (state data structure, successor, goal and heuristic functions) is fixed once for all domains.]

20 Horn CPL reasoning Practical limitations of FCPL reasoning:
For experts in most application domains (medicine, law, business, design, troubleshooting):
Non-intuitiveness of FCPL formulas for knowledge acquisition
Non-intuitiveness of the proofs generated by FCPL algorithms for knowledge validation
Theoretical limitation of FCPL reasoning: worst-case exponential in the size of the KB
Syntactic restriction to Horn clauses overcomes both limitations:
The KB becomes a base of simple rules "if p1 and ... and pn then c", with logical semantics p1 ∧ ... ∧ pn ⇒ c
Two algorithms are available, rule forward chaining and rule backward chaining, that are intuitive, sound and complete for HCPL, and linear in the size of the KB
For most application domains, the loss of expressiveness can be overcome by adding new symbols and clauses:
e.g., the FCPL KB1 = p ∧ q ⇒ c ∨ d has no logical equivalent in HCPL over the alphabet {p, q, c, d}
However KB2 = (p ∧ q ∧ notd ⇒ c) ∧ (p ∧ q ∧ notc ⇒ d) ∧ (c ∧ notc ⇒ false) ∧ (d ∧ notd ⇒ false) is an HCPL formula that captures KB1 under the intended reading notc ⇔ ¬c and notd ⇔ ¬d

21 Propositional forward chaining
Repeated application of modus ponens until reaching a fixed point
At each step i:
Fire all rules (i.e., Horn clauses with at least one positive and one negative literal) whose premises are all already in KBi
Add their respective conclusions to KBi+1
The fixed point k is reached when KBk = KBk-1
KBk then contains all the atomic logical conclusions of KB0: if f ∈ KBk then KB0 ⊨ f, otherwise KB0 ⊭ f
Naturally data-driven reasoning:
Guided by the facts (axioms) in KB0
Allows intuitive, direct implementation of reactive agents
Generally:
Inefficient for answering a specific entailment query
Cumbersome for deliberative agent implementations
Builds the and-or proof graph bottom-up
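A minimal Python sketch of this fixed-point computation, using per-rule counters of not-yet-satisfied premises (an implementation detail assumed here, not stated on the slide); the small rule base below is consistent with the goal-stack traces of the backward-chaining example slides (P ⇒ Q, L ∧ M ⇒ P, B ∧ L ⇒ M, A ∧ P ⇒ L, A ∧ B ⇒ L, with facts A and B).

```python
from collections import deque

def forward_chaining(rules, facts, query):
    """Data-driven fixed point: fire every rule whose premises are all known,
    add its conclusion, and repeat until the query is derived or nothing new
    can be added. rules = list of (premises, conclusion); facts = known atoms."""
    count = [len(premises) for premises, _ in rules]   # unsatisfied premises per rule
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(rules):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:                      # all premises derived: fire
                    agenda.append(conclusion)
    return False                                       # fixed point without the query

rules = [(["p"], "q"), (["l", "m"], "p"), (["b", "l"], "m"),
         (["a", "p"], "l"), (["a", "b"], "l")]
print(forward_chaining(rules, ["a", "b"], "q"))        # True
```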

22 Propositional forward chaining: example

23 Propositional forward chaining: example

24 Propositional forward chaining: example

25 Propositional forward chaining: example

26 Propositional forward chaining: example

27 Propositional forward chaining: example

28 Propositional forward chaining: example

29 Propositional backward chaining
Repeated application of resolution using the unit input resolution strategy with the negated query as the initial set of support
At each step i:
Search KB0 for clauses of the form p1 ∧ ... ∧ pn ⇒ g that resolve with the goal g popped from the goal stack
If there are several, pick one, push p1, ..., pn onto the goal stack, and push the other candidates onto the alternative stack for consideration upon backtracking
If there are none, backtrack (i.e., pop the alternative stack)
Terminates:
Successfully when the goal stack is empty
As a failure when the goal stack is non-empty but the alternative stack is
Naturally goal-driven reasoning:
Guided by the goal (the theorem to prove)
Allows intuitive, direct implementation of deliberative agents
Generally:
Inefficient for deriving all logical conclusions of the KB
Cumbersome for reactive agent implementations
Builds the and-or proof graph top-down
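A minimal recursive Python sketch of this procedure; the explicit goal and alternative stacks of the slide are replaced here by the call stack and the loop over candidate rules, and the visited set (an addition of this sketch) prevents looping on mutually dependent rules such as A ∧ P ⇒ L and L ∧ M ⇒ P. The rule base is consistent with the goal-stack traces on the following slides.

```python
def backward_chaining(rules, facts, goal, visited=frozenset()):
    """Goal-driven search: to prove goal, find a rule concluding it and prove
    all its premises; backtrack over the remaining candidate rules on failure."""
    if goal in facts:
        return True
    if goal in visited:                      # already being proven higher up: avoid loop
        return False
    for premises, conclusion in rules:       # alternatives, tried in order
        if conclusion == goal and all(
                backward_chaining(rules, facts, p, visited | {goal})
                for p in premises):
            return True                      # this alternative succeeded
    return False                             # every alternative failed: backtrack

rules = [(["p"], "q"), (["l", "m"], "p"), (["b", "l"], "m"),
         (["a", "p"], "l"), (["a", "b"], "l")]
print(backward_chaining(rules, ["a", "b"], "q"))   # True, as in the trace below
```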

30 Propositional backward chaining: example
Goal stack: [Q]; alternative stack: []

31 Propositional backward chaining: example
Goal stack: [P]; alternative stack: [*]

32 Propositional backward chaining: example
Goal stack: [L, M]; alternative stack: [*]

33 Propositional backward chaining: example
Goal stack: [A, P, M]; alternative stack: [A, B, *, *]

34 Propositional backward chaining: example
Goal stack: [P, M]; alternative stack: [A, B, *, *, *, *]

35 Propositional backward chaining: example
Goal stack: [A, B, M]; alternative stack: [*, *, *, *]

36 Propositional backward chaining: example
Goal stack: [M]; alternative stack: [*, *, *, *, *, *]

37 Propositional backward chaining: example
Goal stack: [B, L]; alternative stack: [*, *, *, *, *, *]

38 Propositional backward chaining: example
Goal stack: []; alternative stack: [*, *, *, *, *, *, *]

39 Propositional backward chaining: example
Goal stack: []; alternative stack: [*, *, *, *, *, *, *]

40 Propositional backward chaining: example
Goal stack: []; alternative stack: [*, *, *, *, *, *, *]

41 Limitations of propositional logic
Ontological:
Cannot represent knowledge intensionally: no concise representation of generic relations (generic in terms of categories, space, time, etc.)
e.g., there is no way to concisely formalize the Wumpus-world rule "at any step of the exploration, if the agent perceives a stench then it knows there is a Wumpus in a location adjacent to its own"
Propositional logic requires a conjunction of 100,000 equivalences to represent this rule for an exploration of at most 1000 steps of a cavern of size 10x10:
(stench1_1_1 ⇔ wumpus1_1_2 ∨ wumpus1_2_1) ∧ ... ∧ (stench1000_1_1 ⇔ wumpus1000_1_2 ∨ wumpus1000_2_1) ∧ ... ∧ (stench1_10_10 ⇔ wumpus1_9_10 ∨ wumpus1_10_9) ∧ ... ∧ (stench1000_10_10 ⇔ wumpus1000_9_10 ∨ wumpus1000_10_9)
Epistemological:
The agent is always completely confident of its positive or negative beliefs
No explicit representation of ignorance (missing knowledge); the only way to represent uncertainty is disjunction
Once held, an agent belief cannot be questioned by new evidence (e.g., from sensors)

