NMR98 - Logic Programming, slide 1
Learning with Extended Logic Programs
Evelina Lamma (1), Fabrizio Riguzzi (1), Luís Moniz Pereira (2)
(1) DEIS, University of Bologna, Italy
(2) CENTRIA, Departamento de Informática, Universidade Nova de Lisboa, Portugal

Slide 2: Summary
- Concept learning in a 3-valued setting
- Consider positive and negative examples as instances of two disjoint classes
- Learn a definition for both the target concept and its opposite
- Extended Logic Programs under WFSX
- Default negation to handle exceptions and inconsistencies

Slide 3: 2-valued vs 3-valued setting
- Explicit definition of the negated concept (De Raedt, Bruynooghe 1990)
- Requires the adoption of a more expressive class of programs

Slide 4: Learning ELP
- Useful in partially known domains
- Extend the Inductive Logic Programming framework to the case of ELP
- Learn a definition for concept p using E+, E- as training set
- Learn a definition for concept ¬p using E-, E+ as training set

Slide 5: The new ILP problem
Given:
- a set P of possible ELP programs (bias)
- a set E+ of positive examples
- a set E- of negative examples
- a consistent extended logic program B (the background knowledge)
Find an ELP P ∈ P such that:
- P ⊨ E+, ¬E- (completeness)
- P ⊭ E-, ¬E+ (consistency)
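The two conditions can be sketched in Python by abstracting the learned program P as the set of objective literals it entails (e.g. under WFSX); the `~` prefix for explicit negation and the function names are illustrative assumptions, not the authors' notation:

```python
def neg(lit):
    """Explicit negation of an objective literal: neg(neg(L)) = L."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def is_solution(entailed, e_plus, e_minus):
    """Check completeness and consistency of a candidate program,
    abstracted as the set `entailed` of literals it entails."""
    complete = (all(e in entailed for e in e_plus) and
                all(neg(e) in entailed for e in e_minus))
    consistent = (all(e not in entailed for e in e_minus) and
                  all(neg(e) not in entailed for e in e_plus))
    return complete and consistent

# Toy check: a program entailing flies(a) and ~flies(d)
print(is_solution({"flies(a)", "~flies(d)"},
                  {"flies(a)"}, {"flies(d)"}))  # True
```

Note that completeness asks the program to derive the negation of each negative example, not merely to fail on it; that is what makes the setting 3-valued.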

Slide 6: Intersection of definitions
[Venn diagram: the learned extensions of p+ and p- overlap, and together do not cover the whole domain]
- Exceptions to the positive definition: negative atoms
- Exceptions to the negative definition: positive atoms
- Unseen atoms

Slide 7: Unseen atoms
Unseen atoms are classified as unknown:
p(X) ← p+(X), not ¬p(X).
¬p(X) ← p-(X), not p(X).
unless the concept is true and its opposite undefined:
p(X) ← p+(X), undefined(¬p(X)).
¬p(X) ← p-(X), undefined(p(X)).

Slide 8: Training set atoms
- They must be classified according to the training set
- Default literals, representing non-abnormality conditions, are added to the rules:
p(X) ← p+(X), not ab_p(X), not ¬p(X).
¬p(X) ← p-(X), not ab_¬p(X), not p(X).
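A rough two-valued reading of these rules can be sketched in Python (the function and its truth-value strings are illustrative assumptions; this deliberately ignores the finer distinctions WFSX makes between false and undefined literals):

```python
def classify(p_plus, p_minus, ab_p=False, ab_not_p=False):
    """Approximate classification of an atom by the learned rules:
    p_plus/p_minus say whether the positive/negative definition
    covers it, ab_p/ab_not_p whether it is an exception to them.
    (Two-valued sketch of the WFSX behaviour.)"""
    pos = p_plus and not ab_p       # p+(X), not ab_p(X)
    neg = p_minus and not ab_not_p  # p-(X), not ab_¬p(X)
    if pos and not neg:
        return "true"
    if neg and not pos:
        return "false"
    return "unknown"                # both or neither definition applies

# Flies example of the next slides: a has wings only;
# d is an abnormal (penguin) winged animal; c satisfies both definitions.
print(classify(p_plus=True, p_minus=False))            # true
print(classify(p_plus=True, p_minus=True, ab_p=True))  # false
print(classify(p_plus=True, p_minus=True))             # unknown
```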

Slide 9: Example
B: bird(a). has_wings(a).
   jet(b). has_wings(b).
   angel(c). has_wings(c). has_limbs(c).
   penguin(d). has_wings(d). has_limbs(d).
   dog(e). has_limbs(e).
   cat(f). has_limbs(f).
E+ = {flies(a)}
E- = {flies(d), flies(e)}

Slide 10: [Venn diagram of the extensions of flies+ and flies- over {a, ..., f}, with E+ and E- marked]

Slide 11:
flies+(X) ← has_wings(X).
flies-(X) ← has_limbs(X).
flies(X) ← flies+(X), not ab_flies+(X), not ¬flies(X).
¬flies(X) ← flies-(X), not flies(X).
ab_flies+(d).
flies(X) ← flies+(X), undefined(¬flies(X)).
¬flies(X) ← flies-(X), undefined(flies(X)).
Generalizing exceptions, we obtain:
ab_flies+(X) ← penguin(X).

Slide 12: The Learning Algorithm
Input: E+, E-, B
Output: H, the learned theory
- LearnHierarchy(E+, E-, B; H_p)
- LearnHierarchy(E-, E+, B; H_¬p)
- Obtain H by transforming H_p and H_¬p into non-deterministic rules and adding the clauses for the undefined case
Coverage of examples is tested through the SLX interpreter (Alferes, Damásio, Pereira, 1994)

Slide 13: LearnHierarchy
Input: E+, E-, B
Output: H, the learned theory
- Learn(E+, E-, B; H_p)
- H = H_p
- For each r ∈ H_p: if some negative example p(t) is covered, specialize r through a default literal not ab_r(t), then learn a definition for ab_r (by calling LearnHierarchy recursively) and add it to H
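The recursion above can be sketched in Python on a toy representation: background predicates are maps from predicate name to the set of constants they cover, and the inner learner simply picks the most specific single-predicate body covering all positives. All names here (`learn_hierarchy`, the rule dicts) are illustrative assumptions, not the authors' implementation:

```python
def learn_hierarchy(pos, neg, bg, head):
    """Toy LearnHierarchy: `bg` maps each background predicate to its
    extension (a set of constants).  The inner learner picks the
    smallest extension covering all of `pos`; covered negatives become
    the positives of a recursively learned abnormality predicate."""
    body = min((p for p, ext in bg.items() if pos <= ext),
               key=lambda p: len(bg[p]))
    rule = {"head": head, "body": body, "ab": None}
    covered_neg = neg & bg[body]
    if covered_neg:
        # Specialize with not ab(t), swapping the roles of
        # positive and negative examples for the exception concept.
        rule["ab"] = learn_hierarchy(covered_neg, pos, bg, "ab_" + head)
    return rule

# Penguin/bird example of the following slides:
bg = {"penguin": {1, 2},
      "bird": {1, 2, 3, 4, 5},
      "animal": set(range(1, 13))}
r = learn_hierarchy({3, 4, 5}, {1, 2} | set(range(6, 13)), bg, "flies")
print(r["body"], r["ab"]["body"])  # bird penguin
```

On this input it reproduces the trace of the example: flies(X) ← bird(X), not ab_flies(X). with ab_flies(X) ← penguin(X).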

Slide 14: Example
B: penguin(1). penguin(2).
   bird(3). bird(4). bird(5).
   bird(X) ← penguin(X).
   animal(6). animal(7). animal(8). animal(9). animal(10). animal(11). animal(12).
   animal(X) ← bird(X).
E+ = {flies(3), flies(4), flies(5)}
E- = {flies(1), flies(2), flies(6), flies(7), flies(8), flies(9), flies(10), flies(11), flies(12)}

Slide 15:
Procedure Learn generates:
(1) flies(X) ← bird(X).
    covers E-_1 = {flies(1), flies(2)}
(2) flies(X) ← bird(X), not ab_2(X).
Learning a definition for the exceptions:
E+ = {ab_2(1), ab_2(2)}
E- = {ab_2(3), ab_2(4), ab_2(5)}
(3) ab_2(X) ← penguin(X). (consistent)

Slide 16:
Learning the negative concept:
(4) ¬flies(X) ← animal(X).
    covers E-_4 = {flies(3), flies(4), flies(5)}
(5) ¬flies(X) ← animal(X), not ab_5(X).
Learning a definition for the exceptions:
E+ = {ab_5(3), ab_5(4), ab_5(5)}
E- = {ab_5(1), ab_5(2), ab_5(6), ab_5(7), ab_5(8), ab_5(9), ab_5(10), ab_5(11), ab_5(12)}
(6) ab_5(X) ← bird(X).
    covers E-_7 = {ab_5(1), ab_5(2)}

Slide 17:
(7) ab_5(X) ← bird(X), not ab_7(X).
Learning a definition for the exceptions:
E+ = {ab_7(1), ab_7(2)}
E- = {ab_7(3), ab_7(4), ab_7(5)}
(8) ab_7(X) ← penguin(X).
The algorithm terminates by making the clauses for flies and ¬flies non-deterministic and by adding the clauses for the undefined case.

Slide 18: Related work
- LELP (Inoue, Kudoh, 1997) relies on answer-set semantics:
  - bottom-up approach only
  - redundant clauses in some cases
- We can choose whether to learn the Least General Solution or the Most General Solution for a concept

Slide 19: Least General vs Most General Solutions
- Bottom-up methods search the space of clauses from specific to general:
  - RLGG, Inverse Resolution, Inverse Entailment
  - GOLEM
- Top-down methods search the space of clauses from general to specific:
  - FOIL, Progol

Slide 20: LGS, MGS: Example
B: bird(a). animal(a).
   cat(b). animal(b).
E+ = {flies(a)}
E- = {flies(b)}
flies+_MGS(X) ← bird(X).
flies+_LGS(X) ← bird(X), animal(X).
flies-_MGS(X) ← cat(X).
flies-_LGS(X) ← cat(X), animal(X).
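On a single-positive-example toy like this, the contrast can be sketched in Python: the LGS body keeps every background literal true of the positive constant, while the MGS body keeps only a smallest subset of those literals that still excludes every negative constant. The representation (dicts of predicate extensions) and function names are assumptions for illustration:

```python
from itertools import combinations

def lgs_body(bg, pos):
    """Least general body: all predicates true of the positive constant."""
    return {p for p, ext in bg.items() if pos in ext}

def mgs_body(bg, pos, negs):
    """Most general body: a smallest subset of the LGS literals
    that still excludes every negative constant."""
    lits = sorted(lgs_body(bg, pos))
    for k in range(1, len(lits) + 1):
        for subset in combinations(lits, k):
            if all(any(n not in bg[p] for p in subset) for n in negs):
                return set(subset)
    return set(lits)

bg = {"bird": {"a"}, "animal": {"a", "b"}, "cat": {"b"}}
print(sorted(lgs_body(bg, "a")))        # ['animal', 'bird']
print(sorted(mgs_body(bg, "a", {"b"}))) # ['bird']
```

This reproduces the slide's flies+ definitions: bird(X), animal(X) for the LGS and bird(X) alone for the MGS.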

Slide 21: Example: Mixing LGS, MGS
Concept about who is likely to attack a person (maximize the concept and minimize its opposite):
attacker1(X) ← attacker+_MGS(X), not ¬attacker1(X).
¬attacker1(X) ← attacker-_LGS(X), not attacker1(X).
Concept about beggars (one wants to give money strictly to those appearing to need it: minimize the concept and maximize its opposite):
beggar1(X) ← beggar+_LGS(X), not ¬beggar1(X).
¬beggar1(X) ← beggar-_MGS(X), not beggar1(X).

Slide 22:
However, rejected beggars may turn into attackers (maximize the concept and minimize its opposite):
beggar2(X) ← beggar+_MGS(X), not ¬beggar2(X).
¬beggar2(X) ← beggar-_LGS(X), not beggar2(X).
These concepts can be used to minimize the risk when carrying a lot of money:
run ← lot_of_money, attacker1(X), not beggar2(X).
¬run ← give_money.
give_money ← beggar1(X).
give_money ← attacker1(X), not beggar2(X).

Slide 23: n disjoint classes
p_1(X) ← p_1+(X), not ab_p1+(X), not p_2(X), ..., not p_n(X).
p_2(X) ← p_2+(X), not ab_p2+(X), not p_1(X), ..., not p_n(X).
...
p_n(X) ← p_n+(X), not ab_pn+(X), not p_1(X), ..., not p_{n-1}(X).
p_1(X) ← p_1+(X), undef(p_2(X)), ..., undef(p_n(X)).
...
p_n(X) ← p_n+(X), undef(p_1(X)), ..., undef(p_{n-1}(X)).
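The schema generalizes mechanically to any n; a small Python generator makes the pattern explicit (illustrative only, with `<-` standing in for the rule arrow of the slides):

```python
def disjoint_class_rules(preds):
    """Emit the two rule schemas of the n-disjoint-classes slide:
    each p_i is derived from its learned definition p_i+ unless it
    is abnormal or some other class is derivable (first schema),
    or when every other class is undefined (second schema)."""
    rules = []
    for p in preds:
        others = [q for q in preds if q != p]
        nots = ", ".join(f"not {q}(X)" for q in others)
        rules.append(f"{p}(X) <- {p}+(X), not ab_{p}+(X), {nots}.")
    for p in preds:
        others = [q for q in preds if q != p]
        undefs = ", ".join(f"undef({q}(X))" for q in others)
        rules.append(f"{p}(X) <- {p}+(X), {undefs}.")
    return rules

for r in disjoint_class_rules(["p1", "p2", "p3"]):
    print(r)
# first line: p1(X) <- p1+(X), not ab_p1+(X), not p2(X), not p3(X).
```

For n = 2 the first schema collapses to exactly the p/¬p rules of slide 8, with the opposite concept playing the role of the single "other class".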

Slide 24: Future Work
- A system able to learn ELP, with various approaches to learning
- Implementation under development
- Integration with abductive reasoning, in order to guess new examples and cope with exceptions (Lamma, Mello, Milano, Riguzzi, 1997)
- Extension to the case of multi-predicate learning
- Revision of theories in the form of ELP