Presentation on theme: "Logic-based TMS Main idea: Represent negation explicitly to permit the IE to express any logical constraint among nodes. Recall: JTMS and ATMS do not allow."— Presentation transcript:

1 Logic-based TMS Main idea: Represent negation explicitly to permit the IE to express any logical constraint among nodes. Recall: JTMS and ATMS do not allow negation. We must declare two unrelated nodes for liftable (N3) and ¬liftable (N16) (see TMS paper distributed in class). To establish a connection between N3 and N16, we must create a new justification, N3 & N16 → _|_, to support the contradiction node.

2 Why is negation important? Consider the following formula:
A1 & A2 & … & An → contradiction ≡ ¬(A1 & A2 & … & An) ≡ ¬A1 v ¬A2 v … v ¬An (a nogood)
If a nogood can be represented as a justification, contradictions will never be allowed. For that, however, all contradictions must be known in advance. Example: Consider the following nogoods: ¬Q v ¬B and ¬Q v ¬A. Assume that Q holds, and we must choose from the following choice set {A, B, C}. From here we must be able to infer C immediately.
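The nogood inference above can be sketched as a small unit-propagation routine. This is a minimal Python sketch with illustrative names; it assumes the choice set {A, B, C} is encoded as the clause A v B v C, which is not spelled out on the slide.

```python
# Minimal sketch: nogoods become clauses, and unit propagation over them
# derives C once Q is enabled. Literals are (symbol, polarity) pairs.

def unit_propagate(clauses, labels):
    """Repeatedly force the last unknown literal of any clause whose
    remaining literals are all false."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unknown = [(s, p) for (s, p) in clause if labels.get(s) is None]
            false = [(s, p) for (s, p) in clause if labels.get(s) == (not p)]
            satisfied = any(labels.get(s) == p for (s, p) in clause)
            if not satisfied and len(unknown) == 1 and len(false) == len(clause) - 1:
                s, p = unknown[0]
                labels[s] = p
                changed = True
    return labels

# Nogoods from the slide: ~Q v ~B and ~Q v ~A, plus the choice set {A, B, C}
# encoded (by assumption) as the clause A v B v C.
clauses = [
    [("Q", False), ("B", False)],
    [("Q", False), ("A", False)],
    [("A", True), ("B", True), ("C", True)],
]
labels = unit_propagate(clauses, {"Q": True})
print(labels["C"])   # → True
```

With Q enabled, the two nogoods force A and B false, and the choice clause then forces C, exactly as the slide demands.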

3 Representing arbitrary clauses: Example Consider the following clause: A v B v C. Here:
– A, if B and C are False,
– B, if A and C are False,
– C, if A and B are False.
JTMS and ATMS cannot represent negation. To encode this clause, we must declare the following contradictions (in JTMS):
A & ¬A → _|_
B & ¬B → _|_
C & ¬C → _|_
The following justifications must now be added to complete the specification:
¬A & ¬B → C
¬B & ¬C → A
¬C & ¬A → B

4 LTMS: A more powerful TMS with its own inferential facility LTMS represents negation explicitly, which is why there is no need for two separate nodes to represent the datum and its negation. The label of the single node for both is now defined as follows:
A IN, ¬A OUT: the node is labeled :TRUE
A OUT, ¬A IN: the node is labeled :FALSE
A OUT, ¬A OUT: the node is labeled :UNKNOWN
A IN, ¬A IN: can't be

5 LTMS nodes Premise nodes: These are the only nodes in IE-supplied clauses. Example: A, ¬A. Assumption nodes: These are nodes whose belief may change by an explicit IE operation. An assumption is enabled if its label is :TRUE or :FALSE. An assumption is retracted if its label is :UNKNOWN. NO contradiction nodes here: Because all nogoods are properly defined, contradictions will never occur. Example: Let A & B → contradiction. Then ¬A v ¬B is a nogood, which makes B :FALSE as soon as A becomes :TRUE. Note that if both A and B are enabled assumptions marked :TRUE, then the nogood ¬A v ¬B does not hold, which in LTMS signals a contradiction. I.e., contradiction detection is handled by clauses being violated. To resolve such a contradiction, we need a contradiction handler similar to the one in JTMS.

6 LTMS labels Let C be a set of clauses (i.e. justifications, representing relations among beliefs supplied by the IE), which are disjunctions of literals with no repeated or complementary literals. Example: ¬R v ¬P v Q. Then:
- A node Ni is labeled :TRUE iff C ∪ A |== Ni
- A node Ni is labeled :FALSE iff C ∪ A |== ¬Ni
- A node Ni is labeled :UNKNOWN, otherwise.
Note that the following situation can also occur (very undesirable, of course, and caused by the incompleteness of the underlying inference procedure, BCP):
C ∪ A |== Ni and C ∪ A |== ¬Ni
In this case, Ni is labeled arbitrarily.
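On tiny examples, the label definitions above can be checked by brute-force entailment over all truth assignments. This Python sketch is purely illustrative (exponential; the real LTMS uses BCP instead), and the names `entails` and `label` are hypothetical:

```python
from itertools import product

def entails(clauses, symbols, lit):
    """C |== lit iff every total assignment satisfying all clauses
    also satisfies lit. Literals are (symbol, polarity) pairs."""
    sym = sorted(symbols)
    for values in product([True, False], repeat=len(sym)):
        model = dict(zip(sym, values))
        if all(any(model[s] == p for (s, p) in c) for c in clauses):
            if model[lit[0]] != lit[1]:
                return False
    return True

def label(clauses, symbols, n):
    # Note: if clauses are unsatisfiable, both entailments hold vacuously
    # and the label is arbitrary -- matching the slide's remark.
    if entails(clauses, symbols, (n, True)):
        return ":TRUE"
    if entails(clauses, symbols, (n, False)):
        return ":FALSE"
    return ":UNKNOWN"

# The slide's clause ~R v ~P v Q, with assumptions R and P enabled :TRUE
# (assumptions encoded as unit clauses):
C = [[("R", False), ("P", False), ("Q", True)], [("R", True)], [("P", True)]]
print(label(C, {"R", "P", "Q"}, "Q"))   # → :TRUE
```

Dropping the unit clause for P leaves Q underconstrained, and `label` then returns `:UNKNOWN`.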

7 Logical specification of LTMS LTMS is responsible for the following tasks:
1. Provide labels for nodes
2. Detect contradictions (implied by violated constraints)
3. Provide explanations for labels
RE Task # 1: LTMS nodes define a set of propositions, S. Let A ⊆ S be a set of assumption nodes, and C be a set of clauses. Then, for any proposition P ∈ S, label it:
- :TRUE if it is derivable from C ∪ A
- :FALSE if its negation is derivable from C ∪ A
- :UNKNOWN, otherwise
RE Task # 2: If C ∪ A is unsatisfiable, signal a contradiction identified by the violated constraint.
RE Task # 3: Produce explanations for every labeled node, even if C ∪ A is unsatisfiable.

8 Converting propositional formulas into clauses This transformation is carried out in two steps:
Step 1: Conversion of a propositional formula into conjunctive normal form (cnf).
Step 2: Adding every resulting conjunct to the database as a clause.
Re Step # 1 Conversion of a propositional formula into cnf:
1. Eliminate logical equivalences by replacing all occurrences of (x ≡ y) with (x → y) & (y → x)
2. Eliminate implications by replacing all occurrences of (x → y) with (¬x v y)
Example: Consider the following state described by the statements:
-- B is on A, and A is not a pyramid.
-- There is nothing that B is on and that, at the same time, is on B.

9 Converting propositional formulas into clauses (cont.) To represent this state:
(B → ON_B_A & ¬PYRAMID_A) & (ON_B_A & ¬ON_A_B)
After eliminating the implication, we get:
(¬B v (ON_B_A & ¬PYRAMID_A)) & (ON_B_A & ¬ON_A_B)
3. Eliminate all EXCLUSIVE OR's by replacing them with a conjunction
(X1 v X2 v … v Xn) & ∧i≠j ¬(Xi & Xj)
Example: Let A be a number that can have a value from the set {1, 4, 11, 72, 105}. This is represented by means of the following taxonomic formula:
(A = 1) XOR (A = 4) XOR (A = 11) XOR (A = 72) XOR (A = 105)
It must be replaced by the equivalent construct:
((A = 1) v (A = 4) v (A = 11) v (A = 72) v (A = 105)) & (¬((A = 1) & (A = 4))) & (¬((A = 1) & (A = 11))) & … & (¬((A = 11) & (A = 72)))

10 Converting propositional formulas into clauses (cont.) 4. Move negations down to atomic formulas by applying the following transformations:
¬(A v B) ≡ ¬A & ¬B
¬(A & B) ≡ ¬A v ¬B
¬¬A ≡ A
Example: ¬((A = 1) & (A = 4)) ≡ ¬(A = 1) v ¬(A = 4)
5. Move disjunctions down to literals by applying the following transformation: A v (B & C) ≡ (A v B) & (A v C)
Example: (¬B v (ON_B_A & ¬PYRAMID_A)) ≡ (¬B v ON_B_A) & (¬B v ¬PYRAMID_A)

11 Converting propositional formulas into clauses (cont.) RE Step # 2: Eliminate conjunctions and add every resulting conjunct to the database as a clause. Example:
¬B v ON_B_A
¬B v ¬PYRAMID_A
ON_B_A
¬ON_A_B
Any propositional formula supplied by the inference engine is converted by the LTMS into a set of clauses. LTMS manipulates a set of clauses, C, and a set of assumptions, A.
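The conversion rules (eliminate ≡ and →, push negations to atoms, distribute v over &, split conjuncts into clauses) can be sketched as a small rewriter over formula trees. A Python sketch with illustrative names; formulas are strings (atoms) or tuples, and the XOR rule is omitted for brevity:

```python
# Formula syntax (assumed): 'P', ('not', f), ('and', f, g), ('or', f, g),
# ('imp', f, g), ('iff', f, g).

def eliminate(f):
    """Rules 1-2: remove 'iff' and 'imp'."""
    if isinstance(f, str):
        return f
    op, *args = f
    args = [eliminate(a) for a in args]
    if op == 'iff':                       # (x iff y) -> (~x v y) & (~y v x)
        x, y = args
        return ('and', ('or', ('not', x), y), ('or', ('not', y), x))
    if op == 'imp':                       # (x imp y) -> (~x v y)
        x, y = args
        return ('or', ('not', args[0]), args[1])
    return (op, *args)

def push_not(f, neg=False):
    """Rule 4: move negations down to atoms (De Morgan, double negation)."""
    if isinstance(f, str):
        return ('not', f) if neg else f
    op, *args = f
    if op == 'not':
        return push_not(args[0], not neg)
    if op == 'and':
        return ('or' if neg else 'and', *[push_not(a, neg) for a in args])
    return ('and' if neg else 'or', *[push_not(a, neg) for a in args])

def distribute(f):
    """Rule 5 + Step 2: distribute 'or' over 'and', yielding clause lists."""
    if isinstance(f, str) or f[0] == 'not':
        return [[f]]                      # a single unit clause
    if f[0] == 'and':
        return distribute(f[1]) + distribute(f[2])
    return [c1 + c2 for c1 in distribute(f[1]) for c2 in distribute(f[2])]

def to_clauses(f):
    return distribute(push_not(eliminate(f)))

# B -> ON_B_A & ~PYRAMID_A, from the blocks example:
f = ('imp', 'B', ('and', 'ON_B_A', ('not', 'PYRAMID_A')))
print(to_clauses(f))
```

Running it on the blocks formula yields the two clauses ¬B v ON_B_A and ¬B v ¬PYRAMID_A, matching the slide.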

12 Boolean Constraint Propagation (BCP) Recall that LTMS is intended to answer queries about whether a node, Ni, is :TRUE, :FALSE, or :UNKNOWN, i.e. whether there exists an E such that:
1. E ⊆ C ∪ A
2. E is satisfiable (recall that a formula is satisfiable if there exists an interpretation in which the formula is true).
3. E |== Ni (or E |== ¬Ni).
Then Ni is labeled :TRUE, and LTMS responds that Ni is true; :FALSE, if E |== ¬Ni; and :UNKNOWN, if E |≠ Ni and E |≠ ¬Ni. If C ∪ A is contradictory (i.e. it is NOT satisfiable), LTMS will provide an arbitrary label for Ni, but it must also provide an explanation for that label. This way LTMS can explain contradictions and help the IE to get rid of them. How to implement this specification? Any propositional theorem prover augmented with a feature to handle contradictions in a specified way will do. One such method is Boolean Constraint Propagation.

13 Boolean Constraint Propagation (cont.) LTMS manipulates nodes (i.e. symbols), not literals. However, it is convenient to define labels for literals in order to simplify the description of BCP:
- The literal X has the same label as the symbol X.
- The literal ¬X is labeled :TRUE if symbol X is labeled :FALSE; :FALSE if symbol X is labeled :TRUE; :UNKNOWN if X is labeled :UNKNOWN.
Assume that a certain labeling is in place. Then each Ci ∈ C is either:
- Satisfied, if some literal X ∈ Ci is labeled :TRUE
- Violated, if all of its disjuncts are labeled :FALSE
- Unit open, if exactly one literal X ∈ Ci is labeled :UNKNOWN, and all other literals are labeled :FALSE
- Non-unit open, if more than one literal is labeled :UNKNOWN, and the rest are labeled :FALSE.
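The four clause states above translate directly into a small classifier. A Python sketch with illustrative names (`True`/`False`/`None` stand in for :TRUE/:FALSE/:UNKNOWN):

```python
def lit_label(labels, lit):
    """Label of a literal (symbol, polarity): True, False, or None
    (= :UNKNOWN), derived from the symbol labels as on the slide."""
    s, p = lit
    v = labels.get(s)
    return None if v is None else (v == p)

def status(labels, clause):
    """Classify a clause as satisfied / violated / unit open / non-unit open."""
    vals = [lit_label(labels, l) for l in clause]
    if any(v is True for v in vals):
        return "satisfied"
    if all(v is False for v in vals):
        return "violated"
    if vals.count(None) == 1:
        return "unit open"
    return "non-unit open"

labels = {"X": False, "Y": False}          # Z is unlabeled, i.e. :UNKNOWN
print(status(labels, [("X", True), ("Y", True), ("Z", True)]))   # → unit open
```

With X and Y :FALSE, the clause X v Y v Z has exactly one :UNKNOWN disjunct, so it is unit open and would force Z :TRUE.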

14 Boolean Constraint Propagation algorithm BCP is a forward-chaining engine, which starts with all assumptions labeled :TRUE or :FALSE, and all remaining symbols labeled :UNKNOWN. The goal is to re-label as many of the :UNKNOWN symbols to :TRUE or :FALSE as possible. Throughout this process, BCP maintains two stacks:
- A stack, S, of clauses which are examined for satisfiability.
- A stack, V, of violated clauses, which holds detected inconsistencies.
BCP works as follows:
1. Place all clauses on stack S, and repeat until S is empty.
2. Pop Ci off S. If:
I. The current labeling satisfies Ci: disregard Ci (until the IE retracts an assumption).
II. The current labeling violates Ci: push it on stack V.
III. The current labeling forces the label of some X ∈ Ci, i.e. Ci is unit open: X is labeled :TRUE, and all unsatisfied and unviolated clauses mentioning X which are not currently on the stack are pushed on S for re-evaluation. Ci is now satisfied; disregard it.
IV. If Ci is non-unit open, do nothing.

15 BCP (example) Consider the following set of clauses:
X v ¬Y
R v ¬S
Y v S v Z
¬Y v Q v R
Let X and R be enabled assumptions, labeled :FALSE. Then BCP works as follows:
Step 1: Since X is :FALSE, X v ¬Y is unit open, and Y is labeled :FALSE. Disregard X v ¬Y.
Step 2: Since Y got its label on the previous step, consider constraints mentioning Y. These are Y v S v Z and ¬Y v Q v R. The former is non-unit open – do nothing with it. The latter is satisfied – disregard it.
Step 3: Still on S are R v ¬S and Y v S v Z. Consider R v ¬S. Since R is :FALSE, this clause is unit open, and S is labeled :FALSE. Disregard R v ¬S.
Step 4: Since S got its label on the previous step, consider constraints mentioning S. The only one is Y v S v Z, which is now unit open and forces Z to :TRUE.
Step 5: Stack S is empty, V is empty – BCP halts.
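The trace above can be reproduced with a compact version of the BCP loop (worklist stack S, violated-clause stack V). A Python sketch under the same conventions as before; names are illustrative:

```python
# Literals are (symbol, polarity); symbol labels are True/False/None.

def bcp(clauses, labels):
    S = list(clauses)                      # stack of clauses to examine
    V = []                                 # stack of violated clauses
    def lit(l):                            # literal label, None = :UNKNOWN
        s, p = l
        return None if labels.get(s) is None else labels[s] == p
    while S:
        c = S.pop()
        vals = [lit(l) for l in c]
        if any(v is True for v in vals):
            continue                       # satisfied: disregard
        if all(v is False for v in vals):
            V.append(c)                    # violated: record inconsistency
        elif vals.count(None) == 1:        # unit open: force the literal
            s, p = c[vals.index(None)]
            labels[s] = p
            for other in clauses:          # re-examine clauses mentioning s
                if other is not c and other not in S \
                        and any(sym == s for sym, _ in other):
                    S.append(other)
        # non-unit open: do nothing
    return labels, V

clauses = [
    [("X", True), ("Y", False)],           # X v ~Y
    [("R", True), ("S", False)],           # R v ~S
    [("Y", True), ("S", True), ("Z", True)],
    [("Y", False), ("Q", True), ("R", True)],
]
labels, V = bcp(clauses, {"X": False, "R": False})
print(labels, V)
```

As on the slide, propagation labels Y and S :FALSE and Z :TRUE, leaves Q :UNKNOWN, and V stays empty.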

16 BCP algorithm (cont.) If no node remains labeled :UNKNOWN after BCP completes, the labeling is said to be complete (not the case in our example above – Q is :UNKNOWN). In case of a partial labeling, LTMS performs search in an attempt to find a complete labeling. Recall that the set of assumptions grows non-monotonically (an assumption can be enabled or retracted). Thus:
- To update the existing labeling if an assumption is enabled:
I. Set the label of the node representing the assumption to :TRUE or :FALSE, accordingly.
II. Call BCP.
III. If V is not empty, signal the contradiction.
- To update the existing labeling if an assumption is retracted:
I. Set the label of the node representing the assumption to :UNKNOWN.
II. Call BCP.
III. If a partial labeling is generated, try to complete it by performing search.

17 BCP algorithm (cont.) An existing labeling may have to be updated if a new clause is supplied by the IE. This procedure is carried out as follows:
I. Add the new clause, Ci, to C. If Ci is violated, push it on V and signal a contradiction. Otherwise, push it on S.
II. Call BCP.
III. If V is not empty, signal a contradiction.
Results:
- BCP is a very efficient breadth-first search algorithm.
- BCP is an incomplete inference procedure – sometimes it fails to label a node :TRUE or :FALSE when it should. This is called literal incompleteness.
- BCP is sound – it will never label a node :TRUE or :FALSE when it shouldn't, nor does it signal a contradiction when there isn't one.
- BCP is complete for sets of Horn clauses.

18 Incompleteness of BCP: examples Example 1 (literal incompleteness): Consider the following set of clauses:
X v ¬Y (if Y is :TRUE, X must be :TRUE)
X v Y (if Y is :FALSE, X must be :TRUE)
That is, both clauses will be simultaneously satisfied only if X is :TRUE. But BCP fails to label X :TRUE, because X does not follow from any single one of these clauses alone.
Example 2 (refutation incompleteness): Consider a case where LTMS fails to detect a contradiction:
X v ¬Y
X v Y
¬X v ¬Y
¬X v Y
No labeling satisfies all four clauses, but BCP fails to detect this.
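Example 1 can be demonstrated mechanically: with X v ¬Y and X v Y, neither clause is unit open under the empty labeling, so BCP-style propagation derives nothing even though X is entailed. A self-contained Python sketch (illustrative names):

```python
# Sketch of literal incompleteness: X is entailed by {X v ~Y, X v Y},
# yet unit propagation leaves X :UNKNOWN because each clause still has
# two :UNKNOWN literals. Literals are (symbol, polarity).

def unit_propagate(clauses, labels):
    changed = True
    while changed:
        changed = False
        for c in clauses:
            vals = [labels.get(s) == p if labels.get(s) is not None else None
                    for (s, p) in c]
            if any(v is True for v in vals):
                continue
            if vals.count(None) == 1:
                s, p = c[vals.index(None)]
                labels[s] = p
                changed = True
    return labels

clauses = [[("X", True), ("Y", False)], [("X", True), ("Y", True)]]
labels = unit_propagate(clauses, {})
print(labels.get("X"))   # → None, i.e. X stays :UNKNOWN
```

A complete procedure (e.g. case-splitting on Y) would conclude X :TRUE; that extra search is exactly what BCP omits for efficiency.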

19 Well-founded explanation A well-founded explanation for a node label is a sequence of steps S1, …, Sk, where each step consists of: N, a literal, a set of antecedents, and a justification. Here:
- N is an integer (the id number of the step).
- The literal is the step's conclusion.
- The antecedents are a set of integers corresponding to steps earlier in this sequence (for assumptions, this set is empty).
- The justification can be: an IE-supplied clause, a clause which logically follows from C, or an assumption.

20 Well-founded explanation (cont.)
- The conclusion of Sk must correspond to the label of Ni (the node explained).
- A premise step has the form: N X { } Assumption. Here X is a literal corresponding to an enabled assumption.
- A derivation step has the form: N X A Ci. Here X is a literal, A is a set of antecedent steps, and A ∪ {Ci} |== X.

21 Well-founded explanation: example Consider the following set of clauses:
X v Y
¬Y v Z
¬Z v R
Assume that X is :FALSE. Then the well-founded explanation for R is:
1 ¬X { } Assumption
2 Y { 1 } X v Y
3 Z { 2 } ¬Y v Z
4 R { 3 } ¬Z v R
A well-founded explanation is especially useful when the DB is inconsistent, because although the labels are arbitrary, it identifies the assumptions behind a given label. As in JTMS, a node may have more than one well-founded explanation (because it may belong to different clauses), but LTMS will find a single well-founded explanation.
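The explanation above can be produced by unit propagation that records, for each forced literal, the clause and the antecedent steps that forced it. A Python sketch with illustrative names, run on the slide's three clauses:

```python
# Each step is (id, literal, antecedent-ids, justification), where the
# justification is either the string "Assumption" or the forcing clause.

def propagate_with_reasons(clauses, assumptions):
    steps = []
    where = {}                              # symbol -> (value, step id)
    for i, (s, p) in enumerate(assumptions, start=1):
        steps.append((i, (s, p), set(), "Assumption"))
        where[s] = (p, i)
    changed = True
    while changed:
        changed = False
        for c in clauses:
            vals = [where[s][0] == p if s in where else None
                    for (s, p) in c]
            if any(v is True for v in vals):
                continue                    # clause satisfied
            if vals.count(None) == 1:       # unit open: force and record why
                s, p = c[vals.index(None)]
                ants = {where[t][1] for (t, _) in c if t != s}
                n = len(steps) + 1
                steps.append((n, (s, p), ants, c))
                where[s] = (p, n)
                changed = True
    return steps

clauses = [
    [("X", True), ("Y", True)],             # X v Y
    [("Y", False), ("Z", True)],            # ~Y v Z
    [("Z", False), ("R", True)],            # ~Z v R
]
for step in propagate_with_reasons(clauses, [("X", False)]):
    print(step)
```

The printed steps mirror the slide: ¬X as assumption step 1, then Y, Z, and finally R, each justified by one clause and its predecessor step.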

