Lectures 11 / 12 Knowledge Representation Propositional Logic CSE 573 Artificial Intelligence I Henry Kautz Fall 2001

How Did You Represent Your Knowledge of Chess?

Advantages of Procedural Representation of Knowledge

Consider Queries…
1. From the initial position, can a pawn move forward 2 squares?
2. From any other position, can a pawn move forward 2 squares?
3. When can a King move more than 1 square at a time?

What Other Problems with the Procedural Approach? Consider representing large amounts of knowledge…

Declarative Knowledge Representation Non-procedural representations are called declarative. What is the standard CS approach to declarative knowledge representation? What difficulty would that approach have in representing our knowledge of chess?

What About English?

Desiderata
1. Declarative: separate knowledge from its use
2. Expressive: general rules as well as facts; incomplete information
3. Concise: generate many new conclusions (an exponential / infinite number)
4. Effectively computable: unambiguous, even without context

Basic Idea of Logic By starting with true assumptions, you can deduce true conclusions.

Truth
Francis Bacon: No pleasure is comparable to the standing upon the vantage-ground of truth.
Thomas Henry Huxley: Irrationally held truths may be more harmful than reasoned errors.
John Keats: Beauty is truth, truth beauty; that is all Ye know on earth, and all ye need to know.
Blaise Pascal: We know the truth, not only by the reason, but also by the heart.
François Rabelais: Speak the truth and shame the Devil.
Daniel Webster: There is nothing so powerful as truth, and often nothing so strange.

The Big Three 

Entailment
m – something that determines whether a sentence S is true or false – a "state of affairs"
m ⊨ S: S is true in m; m is a model of S
S ⊨ T: S entails T; every model of S is a model of T; when S is true, T must be true

Truth in the World vs Truth in a Model
It is hard to formally talk about truth in the "real world", so m is generally a mathematical object that "mirrors" the world in some important way.
E.g.: m is a truth assignment – a function from sentences to {1, 0}.
[Diagram: a sentence interpreted in a model m]

Intended vs. Unintended Models
[Diagram: one sentence, two candidate models m1 and m2 – which is the intended one?]

Satisfiability and Validity
Fix a language L and a set of possible models M.
S is satisfiable if it is true in some model: ∃m. m ⊨ S
S is unsatisfiable if it is false in all models.
S is valid if it is true in all models: ⊨ S

Propositional Logic
Ingredients of a sentence:
1. Propositions (variables); a literal = a variable or a negated variable
2. Special symbols: true, false
3. Logical connectives: ¬, ∧, ∨, ⇒
Ingredients of a model:
1. Truth assignment: a function that assigns true/false (1/0) to each variable
2. Always assigns the symbol true 1 and the symbol false 0
3. Truth assignments of sentences are computed by applying the truth tables for the connectives

Truth Tables for Connectives
P Q | ¬P  P∧Q  P∨Q  P⇒Q
0 0 |  1   0    0    1
0 1 |  1   0    1    1
1 0 |  0   0    1    0
1 1 |  0   1    1    1

Key Idea: Compositional Semantics Unlike English, Propositional Logic has compositional semantics: The truth of a sentence is a function of the truth of its subparts –Context is not relevant, once truth of propositions is determined! Compare: John believes Mary loves him.
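To make compositional semantics concrete, here is a minimal sketch (not part of the original slides) of a recursive evaluator in Python; the tuple-based formula representation and the connective names are assumptions chosen purely for illustration.

# Hypothetical representation: a formula is either a variable name (string)
# or a tuple ("not" | "and" | "or" | "implies", subformula, ...).
def evaluate(formula, assignment):
    """The truth of a sentence is a function of the truth of its subparts."""
    if isinstance(formula, str):                      # a proposition (variable)
        return assignment[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], assignment)
    if op == "and":
        return all(evaluate(a, assignment) for a in args)
    if op == "or":
        return any(evaluate(a, assignment) for a in args)
    if op == "implies":
        return (not evaluate(args[0], assignment)) or evaluate(args[1], assignment)
    raise ValueError(f"unknown connective: {op}")

# P ⇒ Q is false exactly when P is true and Q is false:
print(evaluate(("implies", "P", "Q"), {"P": True, "Q": False}))   # False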

Relation of Entailment and Implication
S ⊨ T iff ⊨ (S ⇒ T)

Relation of Validity and Unsatisfiability
⊨ S iff ¬S is unsatisfiable: there is no m such that m ⊨ ¬S.
If a sentence is true in all worlds, its negation is true in no worlds.

Inference
A mechanical process for computing new sentences.
Does not depend on understanding the intended meaning of the sentences!
There are many different possible systems of inference.
Modus ponens: { P, P ⇒ Q } ⊢ Q

Soundness and Completeness
Sound: if S ⊢ T then S ⊨ T
Complete: if S ⊨ T then S ⊢ T
Refutation complete: if S is unsatisfiable then S ⊢ false

Why Refutation Completeness is All We Need
S ⊨ T
iff ⊨ (S ⇒ T)
iff ¬(S ⇒ T) is unsatisfiable
iff ¬(¬S ∨ T) is unsatisfiable
iff (S ∧ ¬T) is unsatisfiable
iff (S ∧ ¬T) ⊢ false

Special Syntactic Forms
General PL: ((q ∧ r) ⇒ s) ∧ ¬(s ∧ t)
Clausal form (CNF – conjunctive normal form): (¬q ∨ ¬r ∨ s) ∧ (¬s ∨ ¬t), written as the clause set { (¬q ∨ ¬r ∨ s), (¬s ∨ ¬t) }; the empty clause () = false
Binary clauses: 1 or 2 literals per clause, e.g. { (¬q ∨ r), (¬s ∨ ¬t) }
Horn clauses: 0 or 1 positive literals per clause, e.g. { (¬q ∨ ¬r ∨ s), (¬s ∨ ¬t) }, equivalently { ((q ∧ r) ⇒ s), ((s ∧ t) ⇒ false) }

Resolution
{ (p ∨ α), (¬p ∨ β) } ⊢R (α ∨ β)
– Defined for clauses (CNF); α, β are any disjunctions of literals
– Remove any repeated literals from the conclusion
Example: { (p ∨ q), (¬p ∨ q) } ⊢R (q)
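A minimal Python sketch of the resolution rule (not part of the original slides); clauses are represented as frozensets of string literals, with "~" marking negation – these representation details are assumptions chosen for illustration.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses; repeated literals collapse automatically
    because clauses are sets of literals."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return resolvents

# { (p ∨ q), (¬p ∨ q) } resolve to (q):
print(resolve(frozenset({"p", "q"}), frozenset({"~p", "q"})))   # [frozenset({'q'})]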

Resolution Proof
A DAG whose leaves are input clauses, whose internal nodes are resolvents, and whose root is false (the empty clause).
[Example proof DAG: (p ∨ q) and (¬p ∨ q) resolve to (q); (q) and (¬q) resolve to the empty clause ()]

Example
If the unicorn is mythical, then it is immortal, but if it is not mythical, it is a mortal mammal. If the unicorn is either immortal or a mammal, then it is horned. Prove: the unicorn is horned.
M = mythical, I = immortal, A = mammal, H = horned

Example (Resolution Proof)
Input clauses: (¬M ∨ I), (M ∨ A), (¬I ∨ H), (¬A ∨ H), and the negated goal (¬H).
[Proof DAG: (¬H) and (¬I ∨ H) give (¬I); (¬H) and (¬A ∨ H) give (¬A); (¬I) and (¬M ∨ I) give (¬M); (¬A) and (M ∨ A) give (M); (M) and (¬M) give the empty clause ()]

Completeness of Resolution Resolution is refutation complete Why is this remarkable?

Conversion to Clausal Form  (q  r )   (s  t)
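As a brief worked illustration (not on the original slide), converting the formula from the Special Syntactic Forms slide, ((q ∧ r) ⇒ s) ∧ ¬(s ∧ t), to clausal form follows the standard steps:
1. Eliminate ⇒: (¬(q ∧ r) ∨ s) ∧ ¬(s ∧ t)
2. Push ¬ inward (De Morgan): (¬q ∨ ¬r ∨ s) ∧ (¬s ∨ ¬t)
3. Distribute ∨ over ∧ where needed (not needed here), giving the clause set { (¬q ∨ ¬r ∨ s), (¬s ∨ ¬t) }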

New Variable Trick
Putting a formula in clausal form may increase its size exponentially, but this can be avoided by introducing dummy variables.
(a ∧ b ∧ c) ∨ (d ∧ e ∧ f) ≡ { (a ∨ d), (a ∨ e), (a ∨ f), (b ∨ d), (b ∨ e), (b ∨ f), (c ∨ d), (c ∨ e), (c ∨ f) }
(a ∧ b ∧ c) ∨ (d ∧ e ∧ f) becomes { (g ∨ h), (¬a ∨ ¬b ∨ ¬c ∨ g), (¬g ∨ a), (¬g ∨ b), (¬g ∨ c), (¬d ∨ ¬e ∨ ¬f ∨ h), (¬h ∨ d), (¬h ∨ e), (¬h ∨ f) }
Dummy variables don't change satisfiability!
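A minimal Python sketch (not from the slides) contrasting the two conversions above: direct distribution versus the dummy-variable (definitional) encoding. Names such as fresh_var are illustrative assumptions, and the input is assumed to be an OR of ANDs over positive literals.

from itertools import count, product

def distribute(conjuncts):
    """CNF of an OR of ANDs by distribution: one clause per way of picking a
    literal from each conjunct, so the size is the product of conjunct sizes."""
    return [frozenset(choice) for choice in product(*conjuncts)]

_fresh = count()
def fresh_var():
    return f"aux{next(_fresh)}"

def definitional(conjuncts):
    """CNF with one dummy variable per conjunct; satisfiability is preserved
    and the size stays linear in the input."""
    clauses, defs = [], []
    for lits in conjuncts:
        g = fresh_var()
        defs.append(g)
        clauses.append(frozenset({"~" + l for l in lits} | {g}))    # (l1 ∧ … ∧ lk) ⇒ g
        clauses.extend(frozenset({"~" + g, l}) for l in lits)       # g ⇒ li
    clauses.append(frozenset(defs))                                 # some conjunct is true
    return clauses

small = [["a", "b", "c"], ["d", "e", "f"]]
big = [[f"x{i}a", f"x{i}b", f"x{i}c"] for i in range(5)]
print(len(distribute(small)), len(definitional(small)))   # 9 vs 9 clauses
print(len(distribute(big)), len(definitional(big)))       # 243 vs 21 clauses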

Complexity of Inference
The formal complexity of the problem of sound and complete inference is separate from the particular system of inference.
In general:
For clausal form:
For binary clauses:
For Horn clauses:

Is Resolution “Optimal”? For binary and Horn clauses: polynomial time Requires some further implementation tricks to be linear rather than quadratic In general: can require exponential time, even on certain easy problems…

Pigeon Hole Formulas
PH Problem: a formula encoding that you cannot place n+1 pigeons in n holes (one per hole)
Proposed by Cook/Karp around 1971; "resolved" by Armin Haken
PH takes exponentially many resolution steps (no matter in what order)
Does this prove NP ≠ P? No, because there are other proof systems that do have short proofs for pigeon hole.
We would need to prove that every proof system has large proofs for some problem.
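A minimal Python sketch (not from the slides) that generates the pigeon hole clauses for n holes and n+1 pigeons; the variable naming scheme is an assumption made for illustration.

def pigeonhole_clauses(n):
    """Clauses saying each of the n+1 pigeons is in some hole, and no hole
    holds two pigeons; the resulting set is unsatisfiable for every n >= 1."""
    var = lambda i, j: f"p{i}_{j}"          # "pigeon i sits in hole j"
    clauses = []
    for i in range(n + 1):                  # every pigeon is placed somewhere
        clauses.append(frozenset(var(i, j) for j in range(n)))
    for j in range(n):                      # no hole gets two pigeons
        for i1 in range(n + 1):
            for i2 in range(i1 + 1, n + 1):
                clauses.append(frozenset({"~" + var(i1, j), "~" + var(i2, j)}))
    return clauses

# n = 3: 4 pigeon clauses plus 3 * C(4,2) = 18 hole clauses.
print(len(pigeonhole_clauses(3)))   # 22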

Using a Satisfiability Procedure for Inference
Recall the Davis-Putnam procedure: backtracking CSP search to find a truth assignment that satisfies a formula.
How can we use it for inference, e.g. to prove S ⊨ T?

Using a Satisfiability Procedure for Inference
Recall the Davis-Putnam procedure: backtracking CSP search to find a truth assignment that satisfies a formula.
How can we use it for inference, e.g. to prove S ⊨ T?
S ⊨ T iff (S ∧ ¬T) is unsatisfiable.
Negate T, put it in clausal form, and add it to S.
Run DP: if no model is found, the theorem is proved!
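A minimal Python sketch (not from the slides) of a Davis-Putnam style backtracking search over clauses (frozensets of string literals, "~x" for negation); the representation and the naive variable-choice rule are assumptions. To prove S ⊨ T, run it on the clauses of S together with the clauses of ¬T and check that no model is found.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def simplify(clauses, lit):
    """Assume lit is true: drop satisfied clauses, shorten clauses containing ¬lit."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        if negate(lit) in c:
            c = c - {negate(lit)}
        out.append(c)
    return out

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    if frozenset() in clauses:
        return None                                     # empty clause: dead end
    units = [next(iter(c)) for c in clauses if len(c) == 1]
    while units:                                        # unit propagation
        lit = units.pop()
        assignment[lit.lstrip("~")] = not lit.startswith("~")
        clauses = simplify(clauses, lit)
        if frozenset() in clauses:
            return None
        units = [next(iter(c)) for c in clauses if len(c) == 1]
    if not clauses:
        return assignment                               # all clauses satisfied
    lit = next(iter(clauses[0]))                        # branch on some literal
    for choice in (lit, negate(lit)):
        result = dpll(simplify(clauses, choice),
                      {**assignment, choice.lstrip("~"): not choice.startswith("~")})
        if result is not None:
            return result
    return None

# The unicorn example: the KB clauses plus the negated goal (¬H) have no model,
# so the KB entails H (the unicorn is horned).
kb = [frozenset({"~M", "I"}), frozenset({"M", "~I"}), frozenset({"M", "A"}),
      frozenset({"~I", "H"}), frozenset({"~A", "H"})]
print(dpll(kb + [frozenset({"~H"})]))   # None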

Search Tree vs Proof Tree
For an unsat formula, the Davis-Putnam search tree can be converted to a resolution proof tree: replace each failed leaf with the clause that failed there; internal nodes are resolvents.
Thus, DP is a particular resolution strategy. Asymptotically it is not the best resolution strategy, but it is usually best in practice – efficient use of memory!

Axiom Schemas
It is useful to allow schemas that stand for sets of sentences. Blowup: (index range)^(nesting depth)
for i, j in {1, 2, 3, 4} such that adjacent(i, j):
  for c in {R, B, G}:
    (¬Xc,i ∨ ¬Xc,j)
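A minimal Python sketch (not from the slides) of expanding this schema into ground clauses; the adjacency relation used here (a 4-cycle) is purely an illustrative assumption.

adjacent = {(1, 2), (2, 3), (3, 4), (4, 1)}      # assumed adjacency relation
colors = ["R", "B", "G"]

clauses = []
for (i, j) in adjacent:
    for c in colors:
        # adjacent nodes i and j may not both get color c
        clauses.append(frozenset({f"~X_{c}_{i}", f"~X_{c}_{j}"}))

print(len(clauses))   # 4 pairs * 3 colors = 12 ground clauses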

Horn Theories
Recall the special case of Horn clauses:
{ (¬q ∨ ¬r ∨ s), (¬s ∨ ¬t) } = { ((q ∧ r) ⇒ s), ((s ∧ t) ⇒ false) }
Many problems naturally take the form of such if/then rules.
A very simple (linear time) form of inference is refutation complete for Horn theories:
unit resolution (one clause must be a unit) = unit propagation = arc consistency
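A minimal Python sketch (not from the slides) of unit propagation on clauses represented as frozensets of string literals ("~x" for negation); it returns the set of derived unit literals, or "inconsistent" if the empty clause is derived.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def unit_propagate(clauses):
    clauses = list(clauses)
    derived = set()
    units = [next(iter(c)) for c in clauses if len(c) == 1]
    while units:
        lit = units.pop()
        if negate(lit) in derived:
            return "inconsistent"                 # unit conflict: empty clause
        if lit in derived:
            continue
        derived.add(lit)
        remaining = []
        for c in clauses:
            if lit in c:
                continue                          # clause already satisfied
            if negate(lit) in c:
                c = c - {negate(lit)}             # unit resolution step
                if not c:
                    return "inconsistent"
                if len(c) == 1:
                    units.append(next(iter(c)))
            remaining.append(c)
        clauses = remaining
    return derived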

Expert System for Automobile Diagnosis
Knowledge Base:
GasInTank ∧ FuelLineOK ⇒ GasInEngine
GasInEngine ∧ GoodSpark ⇒ EngineRuns
PowerToPlugs ∧ PlugsClean ⇒ GoodSpark
BatteryCharged ∧ CablesOK ⇒ PowerToPlugs
Observed: ¬EngineRuns, GasInTank, PlugsClean, BatteryCharged
Prove: ¬FuelLineOK ∨ ¬CablesOK

Solution by Unit Propagation (Arc Consistency)
Knowledge Base and Observations:
(¬GasInTank ∨ ¬FuelLineOK ∨ GasInEngine)
(¬GasInEngine ∨ ¬GoodSpark ∨ EngineRuns)
(¬PowerToPlugs ∨ ¬PlugsClean ∨ GoodSpark)
(¬BatteryCharged ∨ ¬CablesOK ∨ PowerToPlugs)
(¬EngineRuns) (GasInTank) (PlugsClean) (BatteryCharged)
Negation of Conclusion: (FuelLineOK) (CablesOK)
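Using the unit_propagate sketch from the Horn Theories slide, the same clauses can be checked mechanically; the Python encoding below is an illustrative assumption.

rules = [
    frozenset({"~GasInTank", "~FuelLineOK", "GasInEngine"}),
    frozenset({"~GasInEngine", "~GoodSpark", "EngineRuns"}),
    frozenset({"~PowerToPlugs", "~PlugsClean", "GoodSpark"}),
    frozenset({"~BatteryCharged", "~CablesOK", "PowerToPlugs"}),
]
observations = [frozenset({"~EngineRuns"}), frozenset({"GasInTank"}),
                frozenset({"PlugsClean"}), frozenset({"BatteryCharged"})]
negated_conclusion = [frozenset({"FuelLineOK"}), frozenset({"CablesOK"})]

# Deriving the empty clause proves ¬FuelLineOK ∨ ¬CablesOK.
print(unit_propagate(rules + observations + negated_conclusion))   # "inconsistent"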

How Do You Know What to Prove?
In this example we were given the diagnosis we wanted to prove: ¬FuelLineOK ∨ ¬CablesOK.
But, in general, how do you know what to prove?
A powerful and widely used technique for finding a hypothesis that explains an observed system fault (¬EngineRuns) is Consistency-Based Diagnosis.

Consistency-Based Diagnosis
1. Make some Observations O.
2. Initialize the Assumption Set A to assert that all components are working properly.
3. Check if the KB, A, and O together are inconsistent (can deduce false).
4. If so, delete propositions from A until consistency is restored (cannot deduce false). The deleted propositions are a diagnosis.
There may be many possible diagnoses.
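A minimal Python sketch (not from the slides) of the single-fault case of this loop, reusing unit_propagate, rules, and observations from the sketches above; for Horn clauses, each consistency test is just a unit-propagation run.

def single_fault_diagnoses(rules, observations, assumptions):
    """Retract one assumption at a time; if consistency is restored, the
    negation of the retracted assumption is a (single-fault) diagnosis."""
    diagnoses = []
    for faulty in assumptions:
        kept = [frozenset({a}) for a in assumptions if a != faulty]
        if unit_propagate(rules + observations + kept) != "inconsistent":
            diagnoses.append("~" + faulty)
    return diagnoses

print(single_fault_diagnoses(rules, observations, ["FuelLineOK", "CablesOK"]))
# ['~FuelLineOK', '~CablesOK'] – either single fault alone explains ¬EngineRuns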

Example
Is KB ∪ Observations ∪ Assumptions consistent?
KB ∪ {¬EngineRuns, GasInTank, PlugsClean, BatteryCharged} ∪ {FuelLineOK, CablesOK} ⊢ false
Must restore consistency!
KB ∪ {¬EngineRuns, GasInTank, PlugsClean, BatteryCharged} ∪ {CablesOK} ⊬ false
So ¬FuelLineOK is a possible diagnosis!
KB ∪ {¬EngineRuns, GasInTank, PlugsClean, BatteryCharged} ∪ {FuelLineOK} ⊬ false
So ¬CablesOK is a possible diagnosis!

Complexity of Diagnosis
If the KB is Horn, then each consistency test takes linear time.
Complexity = the number of ways of deleting propositions from the Assumption Set that are considered.
Single-fault diagnosis – O(n²)
Double-fault diagnosis – O(n³)
Triple-fault diagnosis – O(n⁴)
…

Multiple Fault Diagnosis
Knowledge Base:
GasInTank ∧ FuelLineOK ⇒ GasInEngine
EmergencyTankFull ⇒ GasInEngine
GasInEngine ∧ GoodSpark ⇒ EngineRuns
PowerToPlugs ∧ PlugsClean ⇒ GoodSpark
BatteryCharged ∧ CablesOK ⇒ PowerToPlugs
Observed: ¬EngineRuns, GasInTank, PlugsClean, BatteryCharged
Assumptions: FuelLineOK, CablesOK, EmergencyTankFull

Example: Double Fault
Is KB ∪ Observations ∪ Assumptions consistent?
KB ∪ {¬EngineRuns, GasInTank, PlugsClean, BatteryCharged} ∪ {FuelLineOK, CablesOK, EmergencyTankFull} ⊢ false
Must restore consistency!
KB ∪ {¬EngineRuns, GasInTank, PlugsClean, BatteryCharged} ∪ {CablesOK, EmergencyTankFull} ⊢ false
So ¬FuelLineOK alone is not a diagnosis!
KB ∪ {¬EngineRuns, GasInTank, PlugsClean, BatteryCharged} ∪ {CablesOK} ⊬ false
So ¬FuelLineOK ∧ ¬EmergencyTankFull is a double-fault diagnosis!

Improving Efficiency of Consistency-Based Diagnosis
Suppose there are many assumptions that are irrelevant to the problem, e.g. AshTraysClean is irrelevant to ¬EngineRuns. We only want to try retracting relevant assumptions.
Approach:
1. Generate a proof tree of the inconsistency.
2. Assumptions that appear as leaves in the tree (FuelLineOK, CablesOK) are relevant.
3. Retract one relevant assumption: removing a leaf breaks this particular proof of unsatisfiability.
4. Is the formula now consistent? If so, done; if not, return to step (1).
For a multiple fault, this will have to repeat!

Applications of Diagnostic Inference Many medical expert systems Circuit testing/debugging Xerox copy machines NASA Deep Space One Remote Agent yes, you know the slide…

Compiled into a 2,000-variable SAT problem. Real-time planning and diagnosis.

Coming Up First Order Logic Ontologies Description Logics Reasoning About Change Planning (Handout)