Inductive Logic Programming: The Problem Specification
Given:
- Examples: first-order atoms or definite clauses, each labeled positive or negative.
- Background knowledge: in the form of a definite clause theory.
- Language bias: constraints on the form of interesting new clauses.

ILP Specification (Continued)
Find:
- A hypothesis h that meets the language constraints and that, when conjoined with B, entails (implies) all of the positive examples but none of the negative examples.
To handle real-world issues such as noise, we often relax the requirement so that h need only entail significantly more positive examples than negative examples.

A Common Approach
Use a greedy covering algorithm:
- Repeat while some positive examples remain uncovered (not entailed):
  - Find a good clause (one that covers as many positive examples as possible but no or few negatives).
  - Add that clause to the current theory, and remove the positive examples that it covers.
ILP algorithms use this approach but vary in their method for finding a good clause.
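The covering loop above can be sketched as follows. This is a minimal illustration, not any particular system's implementation: `find_good_clause` and `covers` are hypothetical callbacks standing in for an ILP system's clause search and coverage test.

```python
# Sketch of the greedy covering loop. The two callbacks are hypothetical
# stand-ins: find_good_clause searches for a clause scoring well on the
# remaining positives, and covers tests whether a clause entails an example.

def greedy_cover(positives, negatives, find_good_clause, covers):
    """Repeatedly add the best clause until all positives are covered."""
    theory = []
    uncovered = list(positives)
    while uncovered:
        clause = find_good_clause(uncovered, negatives)
        if clause is None:          # no clause improves coverage; give up
            break
        theory.append(clause)
        # remove the positives this clause accounts for
        uncovered = [e for e in uncovered if not covers(clause, e)]
    return theory
```

ILP systems differ precisely in how `find_good_clause` is realized (bottom-up via lggs, top-down via refinement search, and so on).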

A Difficulty
Problem: It is undecidable in general whether one definite clause implies another, or whether a definite clause together with a logical theory implies a ground atom.
Approach: Use subsumption rather than implication.

Subsumption for Literals
A literal p subsumes a literal q if and only if there exists a substitution θ such that pθ = q. For example, parent(X,Y) subsumes parent(tom,bob) via θ = {X/tom, Y/bob}.

Subsumption for Clauses
A clause C subsumes a clause D if and only if there exists a substitution θ such that Cθ ⊆ D (every literal of Cθ appears in D). Subsumption implies implication but not conversely, and unlike implication it is decidable.
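A brute-force subsumption test can be sketched directly from this definition. The representation here is a toy assumption for illustration: a literal is a tuple `(predicate, arg, ...)`, variables are capitalized strings, and a clause is a list of literals.

```python
# Brute-force theta-subsumption for function-free clauses.
# Toy representation (an assumption, not a standard library):
# literal = (predicate, arg, ...); variables are capitalized strings.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match_literal(lit_c, lit_d, theta):
    """Extend substitution theta so that lit_c * theta == lit_d, or None."""
    if lit_c[0] != lit_d[0] or len(lit_c) != len(lit_d):
        return None
    theta = dict(theta)
    for s, t in zip(lit_c[1:], lit_d[1:]):
        if is_var(s):
            if theta.setdefault(s, t) != t:   # variable already bound elsewhere
                return None
        elif s != t:                          # constants must match exactly
            return None
    return theta

def subsumes(c, d, theta=None):
    """True iff some single substitution maps every literal of c into d."""
    theta = theta or {}
    if not c:
        return True
    first, rest = c[0], c[1:]
    return any(
        subsumes(rest, d, t)
        for lit in d
        if (t := match_literal(first, lit, theta)) is not None
    )
```

For example, `subsumes([("parent", "X", "Y")], [("parent", "tom", "bob")])` holds with θ = {X/tom, Y/bob}. The search is exponential in the worst case, reflecting the NP-completeness of clause subsumption.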

Least Generalization of Terms
The least general generalization (lgg) of two terms is computed recursively:
- lgg(t, t) = t for identical terms.
- lgg(f(s1,...,sn), f(t1,...,tn)) = f(lgg(s1,t1),...,lgg(sn,tn)) when the outermost function symbols and arities agree.
- Otherwise lgg(s, t) is a variable, with the same pair (s, t) always mapped to the same variable.

Least Generalization of Terms (Continued)
Examples:
- lgg(a, a) = a
- lgg(X, a) = Y
- lgg(f(a,b), g(a)) = Z
- lgg(f(a,g(a)), f(b,g(b))) = f(X, g(X))
lgg(t1, t2, t3) = lgg(t1, lgg(t2, t3)) = lgg(lgg(t1, t2), t3): this justifies finding the lgg of a set of terms using the pairwise algorithm.
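The pairwise algorithm behind these examples (anti-unification) can be sketched briefly. The representation is an illustrative assumption: a compound term is a nested tuple `("f", arg, ...)`, constants are plain strings, and fresh variables are named V0, V1, ....

```python
# Sketch of lgg (anti-unification) for terms.
# Toy representation (an assumption): term = nested tuple ("f", arg, ...),
# constants are strings. The `pairs` table maps each mismatched pair of
# subterms to one fresh variable, which is why lgg(f(a,g(a)), f(b,g(b)))
# reuses the same variable for both occurrences of the a/b mismatch.

def lgg_terms(s, t, pairs=None):
    pairs = {} if pairs is None else pairs
    # identical terms generalize to themselves
    if s == t:
        return s
    # same function symbol and arity: recurse on the arguments
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        return (s[0],) + tuple(lgg_terms(a, b, pairs)
                               for a, b in zip(s[1:], t[1:]))
    # mismatch: this pair of subterms maps to a single fresh variable
    if (s, t) not in pairs:
        pairs[(s, t)] = f"V{len(pairs)}"
    return pairs[(s, t)]
```

Running it on the last example above, `lgg_terms(("f","a",("g","a")), ("f","b",("g","b")))` yields `("f","V0",("g","V0"))`, matching lgg(f(a,g(a)), f(b,g(b))) = f(X, g(X)).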

Least Generalization of Literals
The lgg of two literals with the same predicate symbol and sign is obtained by taking the lggs of their corresponding arguments, using one shared variable mapping: lgg(p(s1,...,sn), p(t1,...,tn)) = p(lgg(s1,t1),...,lgg(sn,tn)). If the predicate symbols or signs differ, the literals have no lgg other than TOP in the lattice below.

Lattice of Literals
Consider the following partially ordered set. Each member of the set is an equivalence class of literals, equivalent under variance (variable renaming). One class is greater than another if and only if some member of the first subsumes some member of the second (this can be shown equivalent to: if and only if every member of the first subsumes every member of the second).

Lattice of Literals (Continued) For simplicity, we will now identify each equivalence class with one (arbitrary) representative literal. Add elements TOP and BOTTOM to this set, where TOP is greater than every literal, and every literal is greater than BOTTOM. Every pair of literals has a least upper bound, which is their lgg.

Lattice of Literals (Continued) Every pair of literals has a greatest lower bound, which is their greatest common instance (the result of applying their most general unifier to either literal, or BOTTOM if no most general unifier exists). Therefore, this partially ordered set satisfies the definition of a lattice.
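The greatest-lower-bound direction rests on unification, which can also be sketched. As before, the representation is a toy assumption: variables are capitalized strings and compound terms are nested tuples; literals are just terms whose outermost symbol is the predicate.

```python
# Minimal most-general-unifier (mgu) sketch for the glb direction of the
# lattice. Toy representation (an assumption): variables are capitalized
# strings, compound terms/literals are nested tuples ("f", arg, ...).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, theta):
    """Follow variable bindings in theta to the current value of t."""
    while is_var(t) and t in theta:
        t = theta[t]
    return t

def occurs(v, t, theta):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, theta)
    if v == t:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, theta) for a in t[1:])

def mgu(s, t, theta=None):
    """Return a substitution unifying s and t, or None if none exists."""
    theta = {} if theta is None else theta
    s, t = walk(s, theta), walk(t, theta)
    if s == t:
        return theta
    if is_var(s):
        return None if occurs(s, t, theta) else {**theta, s: t}
    if is_var(t):
        return mgu(t, s, theta)
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            theta = mgu(a, b, theta)
            if theta is None:
                return None
        return theta
    return None
```

Applying the resulting substitution to either literal gives their greatest common instance; when `mgu` returns None, the greatest lower bound is BOTTOM.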

Least Generalization of Clauses
The lgg of two clauses C and D is the clause containing lgg(c, d) for every pair of compatible literals c ∈ C and d ∈ D (same predicate symbol and sign), computed with a single shared variable mapping across all pairs.

Example

Lattice of Clauses We can construct a lattice of clauses in a manner analogous to our construction of the lattice of literals. Again the ordering is subsumption; again we group clauses into variants; and again we add TOP and BOTTOM elements. Again the least upper bound is the lgg, but the greatest lower bound is simply the union (the clause containing all literals from each).
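The least-upper-bound operation on clauses can be sketched from the definition: take the lgg of every compatible pair of literals, sharing one variable table so repeated subterm mismatches map to the same variable. The representation below is the same kind of toy assumption as before (literal = tuple, clause = set of literals), not a production ILP engine.

```python
# Sketch of lgg for clauses. Toy representation (an assumption):
# literal = (predicate, arg, ...); clause = set of literals; variables
# introduced by generalization are named V0, V1, ....
# Real systems also track literal sign (positive/negative); for brevity
# this sketch treats all literals as positive.

def lgg_terms(s, t, pairs):
    """Anti-unification of two terms with a shared variable table."""
    if s == t:
        return s
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        return (s[0],) + tuple(lgg_terms(a, b, pairs)
                               for a, b in zip(s[1:], t[1:]))
    if (s, t) not in pairs:
        pairs[(s, t)] = f"V{len(pairs)}"
    return pairs[(s, t)]

def lgg_clauses(c, d):
    """Clause containing the lgg of every compatible pair of literals."""
    pairs = {}          # one shared table across ALL literal pairs
    out = set()
    for lc in c:
        for ld in d:
            if lc[0] == ld[0] and len(lc) == len(ld):   # same predicate/arity
                out.add((lc[0],) + tuple(
                    lgg_terms(a, b, pairs) for a, b in zip(lc[1:], ld[1:])))
    return out
```

The shared table is essential: generalizing {p(a), q(a)} against {p(b), q(b)} gives {p(V0), q(V0)}, preserving the link between the two occurrences, not {p(V0), q(V1)}.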

Lattice of Clauses for the Given Hypothesis Language

Incorporating Background Knowledge: Saturation Recall that we wish to find a hypothesis clause h that together with the background knowledge B will entail the positive examples but not the negative examples. Consider an arbitrary positive example e. Our hypothesis h together with B should entail e: B ∧ h ⊨ e. We can also write this as h ⊨ ¬B ∨ e.

Saturation (Continued) If e is an atom (atomic formula), and we only use atoms from B, then ¬B ∨ e is a definite clause. We call ¬B ∨ e the saturation of e with respect to B.

Saturation (Continued) Recall that we approximate entailment by subsumption. Our hypothesis h must be in the part of the lattice of clauses above (subsuming) ¬B ∨ e.

Alternative Derivation of Saturation
From B ∧ h ⊨ e, by contraposition: B ∧ {¬e} ⊨ ¬h.
Again by contraposition: h ⊨ ¬(B ∧ ¬e).
So by De Morgan's Law: h ⊨ ¬B ∨ e.
If e is an atom (atomic formula), and we only use atoms from B, then ¬B ∨ e is a definite clause.
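When B is simply a set of ground atoms, the clause ¬B ∨ e is easy to construct concretely: e becomes the head and the atoms of B become the body. The family facts below are hypothetical, chosen only to illustrate the shape of the resulting bottom clause.

```python
# Toy illustration of saturation when the background B is a set of ground
# atoms: the definite clause ¬B ∨ e has head e and the atoms of B as body.
# The grandparent/parent facts are hypothetical example data; real systems
# additionally variabilize the result and restrict B via language bias.

def saturate(e, background):
    """Bottom clause for example e: head e, body = ground atoms of B."""
    return {"head": e, "body": sorted(background)}

def show(clause):
    fmt = lambda a: f"{a[0]}({', '.join(a[1:])})"
    return fmt(clause["head"]) + " :- " + ", ".join(fmt(a) for a in clause["body"])

bottom = saturate(
    ("grandparent", "tom", "ann"),
    {("parent", "bob", "ann"), ("parent", "tom", "bob")},
)
```

Here `show(bottom)` renders the saturated clause in Prolog style; every hypothesis the learner considers must subsume this clause.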

Overview of Some ILP Algorithms
GOLEM (bottom-up): saturates every positive example and then repeatedly takes lggs as long as the result does not cover a negative example.
PROGOL, ALEPH (top-down): saturate the first uncovered positive example, then perform a top-down admissible search of the lattice above this saturated example.

Algorithms (Continued)
FOIL (top-down): performs a greedy top-down search of the lattice of clauses (does not use saturation).
LINUS/DINUS: strictly limit the representation language, convert the task to propositional logic, and use a propositional (single-table) learning algorithm.