
1 Learning Markov Logic Networks — School of Computing Science, Simon Fraser University, Vancouver, Canada

2 Outline
 Markov Logic Networks (MLNs) and the motivation for this research
 Parameterized Bayes Nets (PBNs)
 From PBN to MLN
 The Learn-and-Join algorithm for PBNs
 Empirical evaluation

3 Markov Logic Networks (Richardson and Domingos, Machine Learning 2006)
A logical KB is a set of hard constraints on the set of possible worlds. MLNs make them soft constraints: when a world violates a formula, it becomes less probable, not impossible.
A Markov Logic Network (MLN) is a set of pairs (F, w), where
 F is a formula in first-order logic
 w is a real number
Examples:
 2.3  If a student is intelligent, then he is well ranked.
 0.7  If a student has an intelligent friend, then he is intelligent.
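For reference, the log-linear semantics from the cited paper (not spelled out on this slide): the probability of a world x is

```latex
P(X = x) = \frac{1}{Z} \exp\Big( \sum_i w_i \, n_i(x) \Big),
\qquad
Z = \sum_{x'} \exp\Big( \sum_i w_i \, n_i(x') \Big),
```

where n_i(x) is the number of true groundings of formula F_i in world x.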

4 Why MLNs are important
MLNs have been proposed as a unifying framework for statistical relational learning; their authors show how other approaches are special cases of MLNs. They are also popular, e.g.:
 Learning the structure of Markov logic networks
 Discriminative training of Markov logic networks
 Efficient weight learning for Markov logic networks
 Bottom-up learning of Markov logic network structure
 Mapping and revising Markov logic networks for transfer learning
 Discriminative structure and parameter learning for Markov logic networks
 Entity resolution with Markov logic
 Event modeling and recognition using Markov logic networks
 Hybrid Markov logic networks
 Learning Markov logic network structure via hypergraph lifting
 Improving the accuracy and efficiency of MAP inference for Markov logic

5 Limitation
Structure learning in MLNs is mostly ILP (Inductive Logic Programming) based, and the search space is exponential in the number of predicates. For datasets with many descriptive attributes, current MLN learning algorithms are infeasible: they either never terminate or are very slow.

6 Parameterized Bayes Nets (Poole, UAI 2003)
A functor is a function symbol applied to typed variables, e.g. f(X), g(X,Y), R(X,Y). A PBN is a Bayes net whose nodes are functors. We use PBNs whose arguments are variables only: intelligence(S) is a valid node, but not the ground term intelligence(Jack).
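As a purely illustrative data-structure sketch (none of these names come from the slides), a PBN node can be represented as a functor applied to logical variables:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Functor:
    """A function symbol applied to typed logical variables, e.g. intelligence(S)."""
    name: str
    variables: tuple  # logical variables such as ("S",) or ("S", "C")

    def __str__(self):
        return f"{self.name}({', '.join(self.variables)})"

# PBN nodes are functors with variable arguments only (no constants like Jack).
intelligence = Functor("intelligence", ("S",))
ranking = Functor("ranking", ("S",))
registered = Functor("Registered", ("S", "C"))  # Boolean relationship node

# Edges of a toy PBN: ranking(S) depends on intelligence(S) and Registered(S,C).
pbn_edges = {(intelligence, ranking), (registered, ranking)}
print(*(f"{u} -> {v}" for u, v in pbn_edges), sep="\n")
```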

7 Overview [diagram-only slide]

8 From PBN to FOL formulas
A Parameterized Bayes Net (PBN) can easily be converted into a set of first-order formulas:
1. Moralize the PBN (marry co-parents, drop arrow directions).
2. For every CP-table value in the PBN, add a formula, e.g.:
 ranking(S1, R1), intelligence(S1, I1)
 ranking(S1, R1), intelligence(S1, I2)
 ranking(S1, R2), intelligence(S1, I1)
 ranking(S1, R2), intelligence(S1, I2)
one per entry of the CP-table (probabilities P1–P4):

          | Rank = R1 | Rank = R2
 Int = I1 |    P1     |    P2
 Int = I2 |    P3     |    P4
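A minimal sketch of this conversion (illustrative only; the helper names are made up, and taking the weight to be the log of the conditional probability is an assumption, though a natural one for a moralized Bayes net):

```python
import math
from itertools import combinations

def moralize(parents):
    """Moralize a Bayes net given as {child: [parents]}:
    marry co-parents and drop edge directions."""
    undirected = set()
    for child, pa in parents.items():
        for p in pa:                      # child-parent edges become undirected
            undirected.add(frozenset((child, p)))
        for p, q in combinations(pa, 2):  # marry co-parents
            undirected.add(frozenset((p, q)))
    return undirected

def cpt_to_clauses(child, parents, cpt):
    """Turn each CP-table entry P(child=v | parents=u) into a weighted
    conjunction (formula, weight). cpt maps (v, u) -> probability."""
    clauses = []
    for (v, u), p in cpt.items():
        atoms = [f"{child} = {v}"] + [f"{pa} = {val}" for pa, val in zip(parents, u)]
        clauses.append((" ^ ".join(atoms), math.log(p)))  # assumed weight choice
    return clauses

# Example: ranking(S) is the only parent of intelligence(S).
cpt = {("I1", ("R1",)): 0.8, ("I2", ("R1",)): 0.2,
       ("I1", ("R2",)): 0.3, ("I2", ("R2",)): 0.7}
for formula, w in cpt_to_clauses("intelligence(S)", ["ranking(S)"], cpt):
    print(f"{w:+.2f}  {formula}")
```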

9 Learning PBN Structure
Required: a single-table BN learner L. It takes as input:
 a single data table;
 a set of required edges;
 a set of forbidden edges.
Nodes: descriptive attributes (e.g., intelligence(S)) and Boolean relationship nodes (e.g., Registered(S,C)). Edges capture learned correlations between attributes.

10 Phase 1: Entity tables — apply the BN learner L to each entity table. [Diagram only on the original slide.]

11 Phase 2: Relationship tables — apply the BN learner L to the join of each relationship table with its entity tables. Example row of the joined table:

 S.Name | C.number | grade | satisfaction | intelligence | ranking | rating | difficulty | Popularity | Teach-ability
 Jack   | 101      | A     | 1            | 3            | 1       | 3      | 1          | 1          | 2

12 Phase 3: Add Boolean relationship indicator variables — if an edge X → Y was learned from the join of relationships R1, ..., Rm, add each Ri as a parent of Y.

13 Datasets: University, MovieLens, Mutagenesis.

14 Systems
 MBN (Moralized Bayes Net): a Parameterized Bayes Net (PBN) converted into a Markov logic network.
 LHL: the current implementation of structure learning in Alchemy.
 Const_LHL: LHL after the data reformatting used by Kok and Domingos (2007), which replaces each attribute predicate with one Boolean predicate per value, e.g. Salary(Student, salary_value) ⇒ Salary_high(Student), Salary_low(Student), Salary_medium(Student).
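A sketch of that reformatting (illustrative only; the table rows and names below are made up):

```python
def constantize(rows, attribute, values):
    """Replace an attribute predicate attr(entity, value) with one Boolean
    predicate attr_value(entity) per value, as in the Kok and Domingos (2007)
    data format."""
    facts = []
    for entity, value in rows:
        assert value in values
        facts.append(f"{attribute}_{value}({entity})")
    return facts

# Salary(Student, salary_value) becomes Salary_high/Salary_medium/Salary_low.
rows = [("jack", "high"), ("kim", "medium")]
print(constantize(rows, "Salary", {"high", "medium", "low"}))
# ['Salary_high(jack)', 'Salary_medium(kim)']
```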

15 Evaluation Plan [diagram-only slide]

16 Evaluation Metrics (defaults)
 Running time
 Accuracy: how accurate the predictions are
 Conditional Log-Likelihood (CLL): how confident we are in the predictions
 Area Under Curve (AUC): rewards avoiding false negatives
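A minimal sketch of how the three prediction metrics might be computed for a binary predicate (assumes NumPy and scikit-learn; not from the slides):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 0, 1, 1, 0])             # ground-truth atom values
p_pos = np.array([0.9, 0.4, 0.7, 0.2, 0.1])    # predicted P(atom is true)

accuracy = np.mean((p_pos >= 0.5) == y_true)
# CLL: average log-probability assigned to the true value of each atom.
p_true = np.where(y_true == 1, p_pos, 1.0 - p_pos)
cll = np.mean(np.log(p_true))
auc = roc_auc_score(y_true, p_pos)
print(f"accuracy={accuracy:.2f}  CLL={cll:.3f}  AUC={auc:.2f}")
```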

17 Running time — times in minutes; NT = did not terminate; X+Y = PBN structure learning time + MLN parameter learning time. [Results chart comparing MBN against the other systems only on the original slide.]

18 Accuracy [results chart only on the original slide]

19 Conditional Log-Likelihood [results chart only on the original slide]

20 Area Under Curve [results chart only on the original slide]

21 Future Work: Parameter Learning

22 Summary
 Key idea: learn a directed model, then convert it to an undirected one to avoid cycles.
 A new, efficient structure learning algorithm for Parameterized Bayes Nets.
 Fast and scalable (e.g., 5 min vs. 21 hr).
 Substantial improvements in all default evaluation metrics.

23 Thank you! Any questions?

24 Learning PBN Structure (details)
Required: a single-table BN learner L, which takes as input (T, RE, FE): a single data table T, a set of required edges RE, and a set of forbidden edges FE. Nodes: descriptive attributes (e.g., intelligence(S)) and Boolean relationship nodes (e.g., Registered(S,C)).
1. RequiredEdges, ForbiddenEdges := emptyset.
2. For each entity table E_i:
   a) Apply L to E_i to obtain BN G_i. Then, for each pair of attributes X, Y from E_i:
   b) if X → Y is in G_i, then RequiredEdges += X → Y;
   c) if X → Y is not in G_i, then ForbiddenEdges += X → Y.
3. For each relationship-table join of size s = 1, ..., k:
   a) Compute the relationship-table join, joined with the entity tables: J_i.
   b) Apply L to (J_i, RE, FE) to obtain BN G_i.
   c) Derive additional edge constraints from G_i.
4. Add relationship indicators: if an edge X → Y was added when analyzing the join R_1 join R_2 … join R_m, add edges R_i → Y.
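A condensed sketch of this loop (illustrative; `learn_bn`, the table objects, and their attributes are hypothetical stand-ins for the single-table learner L and the database):

```python
def learn_and_join(entity_tables, relationship_joins, learn_bn):
    """Sketch of the Learn-and-Join loop. learn_bn(table, required, forbidden)
    stands in for the single-table BN learner L and returns a set of directed
    edges (X, Y). relationship_joins are assumed pre-joined with entity tables,
    ordered by join size, each tagged with its relationship indicators."""
    required, forbidden = set(), set()

    # Phase 1: learn on each entity table; freeze the edges found there.
    for table in entity_tables:
        graph = learn_bn(table, required, forbidden)
        for x in table.attributes:
            for y in table.attributes:
                if x != y:
                    (required if (x, y) in graph else forbidden).add((x, y))

    # Phase 2: learn on relationship joins, inheriting the constraints.
    edges, edge_source = set(), {}
    for join in relationship_joins:
        graph = learn_bn(join.table, required, forbidden)
        for edge in graph:
            edges.add(edge)
            edge_source.setdefault(edge, join.relationships)

    # Phase 3: add Boolean relationship indicators as extra parents.
    for (x, y), rels in edge_source.items():
        for r in rels:
            edges.add((r, y))
    return edges
```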

25 Restrictions
 Learn-and-Join learns dependencies among attributes, not dependencies among relationships.
 The structure is limited to certain patterns.
 Only works with many descriptive attributes.
 Parameter learning is still a bottleneck.

26 Inheriting Edge Constraints From Subtables
Intuition: evaluate dependencies on the smallest possible join.
Statistical motivation: statistics can change between tables; e.g., the distribution of students' ages may differ between the Student table and the Registration table.
Computational motivation: as larger join tables are formed, many edges need not be considered → fast learning.

