Inductive Logic Programming
- Representation scheme used: logic programs
- Need to:
  - Recap logic programs
  - Specify the learning problem
  - Specify the operators
  - Worry about search considerations
- Also:
  - Go through a session with Progol
  - Look at applications
Remember Logic Programs?
- A subset of first-order logic
- All sentences are Horn clauses
  - Implications where a conjunction of literals (the body) implies a single goal literal (the head)
  - Single facts can also be Horn clauses, with no body
- A logic program consists of a set of Horn clauses
- ILP theory and practice is highly formal
  - The best way to progress and to show progress
Horn Clauses and Entailment
- Writing Horn clauses:
  h(X,Y) ← b1(X,Y) ∧ b2(X) ∧ ... ∧ bn(X,Y,Z)
- We also replace conjunctions of literals with a capital letter:
  h(X,Y) ← b1 ∧ B
  - Assume lower-case letters are single literals
- Entailment:
  - When one logic program L1 can be proved using another logic program L2, we write: L2 ⊨ L1
  - Note that if L2 ⊭ L1, this does not mean that L2 entails that L1 is false
Logic Programs in ILP
- Start with background information, as a logic program labelled B
- Also start with a set of positive examples of the concept to be learned
  - Represented as a logic program labelled E+
- And a set of negative examples of the concept to be learned
  - Represented as a logic program labelled E-
- The ILP system will learn a hypothesis
  - Which is also a logic program, labelled H
Explaining Examples
- A hypothesis H explains an example e if logic program e is entailed by H
  - So, we prove that e is true
- Example:
  - H: class(A, fish) :- has_gills(A)
  - B: has_gills(trout)
  - Positive example: class(trout, fish)
  - Entailed by H ∧ B taken together
- Note that negative examples can also be entailed by the hypothesis and background taken together
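The entailment check in this example can be sketched with a tiny forward-chaining evaluator for function-free Horn clauses. Atoms are tuples, and variables follow the Prolog convention of an initial capital; the representation here is illustrative, not Progol's internal form.

```python
def match(pattern, atom, subst):
    """Extend subst so that pattern matches the ground atom, or return None."""
    if len(pattern) != len(atom) or pattern[0] != atom[0]:
        return None
    subst = dict(subst)
    for p, a in zip(pattern[1:], atom[1:]):
        if p[0].isupper():                 # a variable, Prolog-style
            if subst.get(p, a) != a:
                return None
            subst[p] = a
        elif p != a:                       # constant mismatch
            return None
    return subst

def consequences(facts, rules):
    """Least model of facts + rules: apply every rule until fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            substs = [{}]
            for lit in body:               # join the body literals
                substs = [s2 for s in substs for f in facts
                          if (s2 := match(lit, f, s)) is not None]
            for s in substs:
                derived = (head[0],) + tuple(s.get(t, t) for t in head[1:])
                if derived not in facts:
                    facts.add(derived)
                    changed = True
    return facts

# B: has_gills(trout).   H: class(A, fish) :- has_gills(A).
B = {("has_gills", "trout")}
H = [(("class", "A", "fish"), [("has_gills", "A")])]
model = consequences(B, H)                 # contains class(trout, fish)
```

Asking whether `("class", "trout", "fish")` is in `model` is exactly the "prove e is true from H ∧ B" test described above.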
Prior Conditions on the Problem
- The problem must be satisfiable:
  - Prior satisfiability: ∀ e ∈ E-, B ⊭ e
  - So, the background does not entail any negative example (if it did, no hypothesis could rectify this)
  - This does not mean that B entails that e is false
- The problem must not already be solved:
  - Prior necessity: ∃ e ∈ E+ such that B ⊭ e
  - If all the positive examples were entailed by the background, then we could take H = B.
Posterior Conditions on Hypothesis
- Taken with B, H should entail all positives
  - Posterior sufficiency: ∀ e ∈ E+, B ∧ H ⊨ e
- Taken with B, H should entail no negatives
  - Posterior satisfiability: ∀ e ∈ E-, B ∧ H ⊭ e
- If the hypothesis meets these two conditions, it will have perfectly solved the problem
- Summary:
  - All positives can be derived from B ∧ H
  - But no negatives can be derived from B ∧ H
Problem Specification
- Given logic programs E+, E- and B
  - Which meet the prior satisfiability and necessity conditions
- Learn a logic program H
  - Such that B ∧ H meets the posterior satisfiability and sufficiency conditions
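The four conditions read naturally as a checklist. The sketch below is propositional (a clause is a head atom plus a frozenset of body literals) and `covers` is a simple stand-in for entailment; all names are illustrative.

```python
def covers(clauses, example):
    """Propositional forward chaining: does the program entail the atom?"""
    facts, changed = set(), True
    while changed:
        changed = False
        for head, body in clauses:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return example in facts

def check_conditions(B, H, pos, neg):
    return {
        "prior_satisfiability":     all(not covers(B, e) for e in neg),
        "prior_necessity":          any(not covers(B, e) for e in pos),
        "posterior_sufficiency":    all(covers(B + H, e) for e in pos),
        "posterior_satisfiability": all(not covers(B + H, e) for e in neg),
    }

B = [("has_gills_trout", frozenset())]                      # a background fact
H = [("class_trout_fish", frozenset({"has_gills_trout"}))]  # a learned clause
result = check_conditions(B, H, pos=["class_trout_fish"],
                          neg=["class_trout_mammal"])       # all four hold
```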
Moving in Logic Program Space
- We can use rules of inference to find new logic programs
- Deductive rules of inference
  - Modus ponens, resolution, etc.
  - Map from the general to the specific
  - i.e., from L1 to L2 such that L1 ⊨ L2
- Today we look at inductive rules of inference
  - We will invert the resolution rule (there are four ways to do this)
  - Map from the specific to the general
  - i.e., from L1 to L2 such that L2 ⊨ L1
- Inductive inference rules are not sound
Inverting Deductive Rules
- A man alternates between 2 hats every day
  - Whenever he wears hat X he gets a pain; hat Y is OK
  - He knows that a hat with a pin in it causes pain
  - He infers that hat X has a pin in it
  - He looks and finds that hat X does indeed have a pin in it
- He then uses modus ponens to prove that his pain is caused by the pin in hat X
- The original inference (pin in hat X) was unsound
  - There could be many reasons for the pain in his head
  - It was induced so that modus ponens could be used
Inverting Resolution 1: Absorption
- Rule of inference (given as a diagram on the slide; in its standard form: from p ← A, B and q ← A, hypothesise p ← q, B)
- The rule is written in the same way as for deductive rules
  - Input above the line, and the inference below the line
- Remember that q is a single literal, and that A and B are conjunctions of literals
- We can prove that the original clauses follow from the hypothesised clause by resolution
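A minimal propositional sketch of absorption, with a one-step resolver to show that the original clause is recoverable from the hypothesis. A clause is (head, frozenset of body literals); all names are illustrative.

```python
def absorption(c1, c2):
    """From c1 = p :- A,B and c2 = q :- A, hypothesise p :- q,B."""
    (p, body), (q, a) = c1, c2
    if a <= body:                        # c2's body A must occur in c1's body
        return (p, (body - a) | {q})
    return None

def resolve(c1, c2):
    """Resolve c2's head away from c1's body (one propositional step)."""
    (p, body), (q, a) = c1, c2
    if q in body:
        return (p, (body - {q}) | a)
    return None

c1 = ("p", frozenset({"a1", "a2", "b"}))   # p :- a1, a2, b
c2 = ("q", frozenset({"a1", "a2"}))        # q :- a1, a2
hyp = absorption(c1, c2)                   # p :- q, b
recovered = resolve(hyp, c2)               # resolving recovers c1
```

The last line is the soundness check mentioned on the slide: resolving the hypothesised clause against c2 gives back the original clause c1.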
Proving the Given Clauses
- Exercise: translate into CNF, use the V diagram, and convince yourselves
- We use the V diagram because we don't want to write this out as a rule of deduction
- We say that absorption is a V-operator
Inverting Resolution 2: Identification
- Rule of inference and resolution proof (given as diagrams on the slide; in its standard form: from p ← A, B and p ← A, q, hypothesise q ← B)
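The identification operator (standardly: from p ← A, B and p ← A, q with a common head, hypothesise q ← B) can be sketched in the same propositional clause representation; names are illustrative.

```python
def identification(c1, c2):
    """From c1 = p :- A,B and c2 = p :- A,q, hypothesise q :- B."""
    (p1, ab), (p2, aq) = c1, c2
    if p1 != p2:
        return None                      # identification needs a common head
    shared = ab & aq                     # the common body A
    q = aq - shared                      # the single leftover literal q
    if len(q) == 1:
        return (next(iter(q)), ab - shared)
    return None

c1 = ("p", frozenset({"a", "b"}))        # p :- a, b
c2 = ("p", frozenset({"a", "q"}))        # p :- a, q
hyp = identification(c1, c2)             # q :- b
```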
Inverting Resolution 3: Intra-construction
- Rule of inference and resolution proof (given as diagrams on the slide; in its standard form: from p ← A, B1 and p ← A, B2, hypothesise p ← A, q together with q ← B1 and q ← B2)
Predicate Invention
- We say that intra-construction is a W-operator
- It has introduced the new symbol q
  - q is a predicate which is resolved away in the resolution proof
- ILP systems using intra-construction perform predicate invention
- Toy example:
  - When learning the insertion-sort algorithm, the ILP system (Progol) invents the concept of list insertion
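Intra-construction (standardly: from p ← A, B1 and p ← A, B2, hypothesise p ← A, q together with q ← B1 and q ← B2) can be sketched as follows; the invented predicate name "newp" and the clause representation are illustrative.

```python
def intra_construction(c1, c2, invented="newp"):
    """From p :- A,B1 and p :- A,B2 (same head p), invent a predicate and
    return p :- A,newp together with newp :- B1 and newp :- B2."""
    (p1, b1), (p2, b2) = c1, c2
    assert p1 == p2, "intra-construction needs a common head"
    a = b1 & b2                          # the shared body A
    return [(p1, a | {invented}),
            (invented, b1 - a),
            (invented, b2 - a)]

c1 = ("p", frozenset({"a", "b1"}))       # p :- a, b1
c2 = ("p", frozenset({"a", "b2"}))       # p :- a, b2
parent, q1, q2 = intra_construction(c1, c2)
```

Here `newp` plays the role of the invented predicate q: it appears in the new parent clause and is resolved away in the resolution proof.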
Inverting Resolution 4: Inter-construction
- Rule of inference and resolution proof (given as diagrams on the slide; in its standard form: from p1 ← A, B1 and p2 ← A, B2, hypothesise q ← A together with p1 ← q, B1 and p2 ← q, B2)
- Predicate invention again
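Inter-construction differs from intra-construction in that the two input clauses have different heads, and the invented predicate names their shared body. A sketch in the same illustrative representation:

```python
def inter_construction(c1, c2, invented="newp"):
    """From p1 :- A,B1 and p2 :- A,B2, invent a predicate for the shared
    body A and return newp :- A, p1 :- newp,B1 and p2 :- newp,B2."""
    (p1, b1), (p2, b2) = c1, c2
    a = b1 & b2                          # the shared body A
    return [(invented, a),
            (p1, (b1 - a) | {invented}),
            (p2, (b2 - a) | {invented})]

c1 = ("p1", frozenset({"a", "b1"}))      # p1 :- a, b1
c2 = ("p2", frozenset({"a", "b2"}))      # p2 :- a, b2
shared, r1, r2 = inter_construction(c1, c2)
```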
Generic Search Strategy
- Assume this kind of search:
  - A set of current hypotheses, QH, is maintained
  - At each search step, a hypothesis H is chosen from QH
  - H is expanded using inference rules, which adds more current hypotheses to QH
  - The search stops when a termination condition is met by a hypothesis
- Some (of many) questions:
  - Initialisation, choice of H, termination, how to expand, ...
Search (Extra-Logical) Considerations: Generality and Specificity
- There is a great deal of variation in search strategies between ILP programs
- Definition of generality/specificity:
  - A hypothesis G is more general than a hypothesis S iff G ⊨ S; S is said to be more specific than G
- A deductive rule of inference maps a conjunction of clauses G onto a conjunction of clauses S, such that G ⊨ S
  - These are specialisation rules (modus ponens, resolution, ...)
- An inductive rule of inference maps a conjunction of clauses S onto a conjunction of clauses G, such that G ⊨ S
  - These are generalisation rules (absorption, identification, ...)
Search Direction
- ILP systems differ in their overall search strategy
- From specific to general:
  - Start with the most specific hypothesis, which explains a small number (possibly 1) of positives
  - Keep generalising to explain more positive examples, using (inductive) generalisation rules such as inverse resolution
  - Be careful not to allow any negatives to be explained
- From general to specific:
  - Start with the empty clause as the hypothesis, which explains everything
  - Keep specialising to exclude more and more negative examples, using (deductive) specialisation rules such as resolution
  - Be careful to make sure all positives are still explained
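The general-to-specific direction can be sketched as a toy greedy search for one clause body. Examples are frozensets of ground attributes, and a body covers an example when every body literal appears in it; the data and the scoring function are illustrative, not Progol's search.

```python
def covers(body, example):
    return body <= example

def learn_body(pos, neg, literals):
    """Start from the empty body (covers everything) and greedily add the
    literal that best separates positives from negatives, until no
    negative example is covered."""
    body = frozenset()
    while any(covers(body, n) for n in neg):
        best = max(sorted(literals - body),     # sorted for determinism
                   key=lambda l: sum(covers(body | {l}, p) for p in pos)
                               - sum(covers(body | {l}, n) for n in neg))
        body |= {best}
    return body

pos = [frozenset({"scales", "four_legs", "eggs", "land"})]   # lizard
neg = [frozenset({"gills", "eggs", "fins"}),                 # trout
       frozenset({"hair", "milk", "four_legs"})]             # dog
literals = frozenset().union(*pos, *neg)
body = learn_body(pos, neg, literals)
```

The loop stops as soon as no negative is covered; the assertions below check exactly the two invariants the slide describes (all positives still explained, no negatives explained).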
Pruning
- Remember that:
  - A set of current hypotheses, QH, is maintained
  - Each hypothesis explains a set of positive/negative examples
  - If G is more general than S, then G will explain at least as many (>=) examples as S
- When searching from specific to general:
  - We can prune any hypothesis which explains a negative, because further generalisation will not rectify this situation
- When searching from general to specific:
  - We can prune any hypothesis which doesn't explain all positives, because further specialisation will not rectify this situation
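The monotonicity fact that justifies both pruning rules can be checked directly in the attribute-set representation: if body G is a subset of body S (G is more general), then G covers a superset of the examples S covers. The data below is illustrative.

```python
def coverage(body, examples):
    """Indices of the examples that the body covers."""
    return {i for i, e in enumerate(examples) if body <= e}

examples = [frozenset({"scales", "eggs"}),
            frozenset({"scales", "eggs", "land"}),
            frozenset({"hair", "milk"})]
G = frozenset({"scales"})            # more general (fewer conditions)
S = frozenset({"scales", "land"})    # more specific (G is a subset of S)
```

So once a general-to-specific search reaches a body that misses a positive, every specialisation of it misses that positive too, and the branch can be pruned.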
Ordering
- There will be many current hypotheses in QH to choose from; which is chosen first?
- ILP systems use a probability distribution, which assigns a value P(H | B ∧ E) to each H
- A Bayesian measure is defined, based on the number of positive/negative examples explained
- When this is equal, ILP systems use a sophisticated Occam's razor, defined by algorithmic complexity theory or something similar
Language Restrictions
- Another way to reduce the search: specify what format clauses in hypotheses are allowed to have
- One possibility: restrict the number of existential variables allowed
- Another possibility: be explicit about the nature of arguments in literals
  - Which arguments in body literals are:
    - Instantiated (ground) terms
    - Variables given in the head literal
    - New variables
  - See Progol's mode declarations
Example Session with Progol
- The animals dataset
- Learning task: learn rules which classify animals into fish, mammal, reptile or bird
- Rules are based on attributes of the animals
  - Physical attributes: number of legs, covering (fur, feathers, etc.)
  - Other attributes: produces milk, lays eggs, etc.
- 16 animals are supplied
- 7 attributes are supplied
Input File: Mode Declarations
- Mode declarations are given at the top of the file
  - These are language restrictions
- Declaration about the head of hypothesis clauses:
  :- modeh(1,class(+animal,#class))
  - Means the hypothesis will be given an animal variable and will return a ground instantiation of class
- Declaration about the body clauses:
  :- modeb(1,has_legs(+animal,#nat))
  - Means that it is OK to use the has_legs predicate in the body
  - And that it will take the animal variable supplied in the head and return an instantiated natural number
Input File: Type Information
- Next comes information about the types of objects
- Each ground term (word) must be typed:
  - animal(dog), animal(dolphin), etc.
  - class(mammal), class(fish), etc.
  - covering(hair), covering(none), etc.
  - habitat(land), habitat(air), etc.
Input File: Background Concepts
- Next comes the logic program B, containing these predicates:
  - has_covering/2, has_legs/2, has_milk/1, homeothermic/1, habitat/2, has_eggs/1, has_gills/1
- E.g.:
  - has_covering(dog, hair), has_milk(platypus),
  - has_legs(penguin, 2), homeothermic(dog),
  - habitat(eagle, air), habitat(eagle, land),
  - has_eggs(eagle), has_gills(trout), etc.
Input File: Examples
- Finally, E+ and E- are supplied
- Positives:
  - class(lizard, reptile)
  - class(trout, fish)
  - class(bat, mammal), etc.
- Negatives:
  - :- class(trout, mammal)
  - :- class(herring, mammal)
  - :- class(platypus, reptile), etc.
Output File: Generalisations
- We see Progol starting with the most specific hypothesis for the case when the animal is a reptile
- It starts with the lizard reptile and finds the most specific clause:
  class(A, reptile) :- has_covering(A, scales), has_legs(A, 4), has_eggs(A), habitat(A, land)
- It then finds 12 generalisations of this, for example:
  - class(A, reptile) :- has_covering(A, scales).
  - class(A, reptile) :- has_eggs(A), has_legs(A, 4).
- It then chooses the best one:
  class(A, reptile) :- has_covering(A, scales), has_legs(A, 4).
- This process is repeated for fish, mammal and bird
Output File: Final Hypothesis
- class(A, reptile) :- has_covering(A, scales), has_legs(A, 4).
- class(A, mammal) :- homeothermic(A), has_milk(A).
- class(A, fish) :- has_legs(A, 0), has_eggs(A).
- class(A, reptile) :- has_covering(A, scales), habitat(A, land).
- class(A, bird) :- has_covering(A, feathers).
- This gets 100% predictive accuracy on the training set
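As a sanity check, the final hypothesis can be replayed over a few of the animals. The attribute strings below are an illustrative flattening of the background predicates (not Progol's input format), and the first matching rule wins.

```python
# Each rule is (class, set of required attributes); tried in order.
RULES = [
    ("reptile", {"covering_scales", "legs_4"}),
    ("mammal",  {"homeothermic", "has_milk"}),
    ("fish",    {"legs_0", "has_eggs"}),
    ("reptile", {"covering_scales", "habitat_land"}),
    ("bird",    {"covering_feathers"}),
]

def classify(attrs):
    for cls, body in RULES:
        if body <= attrs:
            return cls
    return None

# A few animals from the dataset, with illustrative attribute encodings.
ANIMALS = {
    "trout":  ({"legs_0", "has_eggs", "has_gills"}, "fish"),
    "dog":    ({"homeothermic", "has_milk", "covering_hair", "legs_4"}, "mammal"),
    "lizard": ({"covering_scales", "legs_4", "has_eggs", "habitat_land"}, "reptile"),
    "eagle":  ({"covering_feathers", "legs_2", "has_eggs"}, "bird"),
}
accuracy = sum(classify(a) == c for a, c in ANIMALS.values()) / len(ANIMALS)
```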
Some Applications of ILP (see notes for details)
- Finite element mesh design
- Predictive toxicology
- Protein structure prediction
- Generating program invariants