Happy Mittal (Joint work with Prasoon Goyal, Parag Singla and Vibhav Gogate), IIT Delhi. New Rules for Domain Independent Lifted MAP Inference.



Overview Introduction Motivation First rule Second rule Experiments Conclusion and Future work

Markov Logic Networks (MLN)
Real world problems are characterized by entities and relationships, and by uncertain behavior.
Relational models: Horn clauses, SQL, first-order logic.
Statistical models: Markov networks, Bayesian networks.
Markov Logic [Richardson & Domingos 06] combines the two with weighted first-order formulas, e.g. "smoking leads to cancer", "friendship leads to similar smoking habits".

Example: Friends & Smokers
[Table: sample worlds over the ground atoms S(A), S(B), F(A,A), F(A,B), F(B,A), F(B,B), with unnormalized weights exp(8.0), exp(6.0), exp(8.0).]
The ground formulas define the distribution P(X = x) = (1/Z) exp( sum_i w_i n_i(x) ), where w_i is the weight of formula i and n_i(x) is the number of true groundings of formula i in world x.
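To make the weight computation concrete, here is a small sketch (a hypothetical encoding, not the authors' code) that counts the true groundings of a friendship formula F(x,y) => (S(x) <=> S(y)) over domain {A, B} and evaluates the unnormalized weight exp(w * n). The formula shape and the weight w = 2.0 are assumptions, chosen so that 3 true groundings give exp(6.0) as in the table above.

```python
import math
from itertools import product

DOM = ["A", "B"]
w = 2.0  # assumed weight of: F(x,y) => (S(x) <=> S(y))

def n_true_groundings(S, F):
    """Count true groundings of F(x,y) => (S(x) <=> S(y))."""
    return sum((not F[(x, y)]) or (S[x] == S[y])
               for x, y in product(DOM, DOM))

def unnormalized_weight(S, F):
    return math.exp(w * n_true_groundings(S, F))

# World: A smokes, B does not, and only F(A,B) holds.
S = {"A": True, "B": False}
F = {(x, y): (x, y) == ("A", "B") for x, y in product(DOM, DOM)}
print(n_true_groundings(S, F))    # 3 of the 4 groundings are true
print(unnormalized_weight(S, F))  # exp(2.0 * 3) = exp(6.0)
```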

MAP/MPE Inference
MAP inference in an MLN is just the weighted MaxSAT problem: find the assignment that maximizes the total weight of satisfied ground formulas.
Use a weighted SAT solver, e.g. MaxWalkSAT [Kautz et al. 97].
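A minimal MaxWalkSAT-style local search can be sketched as follows. This is a simplification of the actual algorithm (real MaxWalkSAT uses a more careful variable-selection heuristic), and the clause encoding and tiny instance are hypothetical:

```python
import random

def cost(clauses, assign):
    """Total weight of unsatisfied clauses; a clause is
    (weight, [(var, wanted_value), ...]) and is satisfied
    when any literal matches the assignment."""
    return sum(w for w, lits in clauses
               if not any(assign[v] == val for v, val in lits))

def maxwalksat(clauses, variables, max_flips=10000, p=0.5, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in variables}
    best, best_cost = dict(assign), cost(clauses, assign)
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any(assign[v] == val for v, val in c[1])]
        if not unsat:
            return assign, 0.0
        _, lits = rng.choice(unsat)          # pick an unsatisfied clause
        if rng.random() < p:                 # noisy step: random variable
            v = rng.choice(lits)[0]
        else:                                # greedy step: cheapest flip
            def flip_cost(var):
                assign[var] = not assign[var]
                c = cost(clauses, assign)
                assign[var] = not assign[var]
                return c
            v = min((var for var, _ in lits), key=flip_cost)
        assign[v] = not assign[v]
        c = cost(clauses, assign)
        if c < best_cost:
            best, best_cost = dict(assign), c
    return best, best_cost

# Tiny instance: the optimum sets both atoms true, leaving only
# the weight-1.0 clause unsatisfied.
clauses = [
    (2.0, [("S_A", True)]),                  # soft: Smokes(A)
    (2.0, [("S_A", False), ("C_A", True)]),  # Smokes(A) => Cancer(A)
    (1.0, [("C_A", False)]),                 # weak prior: !Cancer(A)
]
solution, unsat_weight = maxwalksat(clauses, ["S_A", "C_A"])
```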

Overview Introduction Motivation Notations and preliminaries First rule Second rule Experiments Conclusion

MAP/MPE Inference
Naïve way: solve over the fully grounded theory. The grounding blows up with the domain sizes of the predicates.
Better: exploit the symmetries in the theory (lifted MAP inference). How?
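The blow-up is easy to quantify: a formula with k distinct logical variables has n^k groundings over a domain of size n. A sketch (the formula list is illustrative):

```python
def ground_theory_size(formulas, n):
    """formulas: list of (formula_text, number_of_distinct_variables).
    A formula with k distinct variables has n**k groundings over a
    domain of size n."""
    return sum(n ** k for _, k in formulas)

fs = [("S(x) => C(x)", 1), ("F(x,y) ^ F(y,z) => F(x,z)", 3)]
for n in (10, 100, 500):
    print(n, ground_theory_size(fs, n))
# domain size 500 already yields 125,000,500 ground clauses
```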

MAP/MPE Inference
All groundings are identical up to constant renaming: solve one, and the solutions to the others follow.
But some groundings are lost. Why?
The (pred, pos) pairs (P,1) and (Q,1) are related, and related (pred, pos) pairs appearing with different variables in the same clause cause a problem.

Overview Introduction Motivation Notations and preliminaries First rule Second rule Experiments Conclusion and Future work

First rule for lifted MAP inference
Split over single occurrence (pred, pos) pairs: splitting M over (Q,2) yields sub-theories M1, M2, M3.
MAP inference over M can be reduced to MAP inference over any of M1, M2, M3.

First rule for lifted MAP inference
Similarly, splitting over (R,1) yields sub-theories M1, M2, M3.
First Rule: MAP inference in a single occurrence MLN is domain independent.
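The single occurrence condition can be tested mechanically. The sketch below is our reading of the condition (not the authors' code): it merges (pred, pos) pairs that share a variable in some formula, then checks that each resulting class contributes at most one distinct variable per formula.

```python
from collections import defaultdict

def pp_classes(formulas):
    """Union-find over (pred, pos) pairs: two pairs are merged when
    the same logical variable occupies both positions in some formula."""
    parent = {}
    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for f in formulas:
        by_var = defaultdict(list)
        for pred, args in f:
            for pos, v in enumerate(args):
                by_var[v].append((pred, pos))
        for pps in by_var.values():
            for pp in pps[1:]:
                parent[find(pps[0])] = find(pp)
    return find

def is_single_occurrence(formulas):
    """True if every (pred, pos) equivalence class contributes at
    most one distinct variable to every formula."""
    find = pp_classes(formulas)
    for f in formulas:
        vars_in_class = defaultdict(set)
        for pred, args in f:
            for pos, v in enumerate(args):
                vars_in_class[find((pred, pos))].add(v)
        if any(len(vs) > 1 for vs in vars_in_class.values()):
            return False
    return True

# P(x) v Q(x,y): single occurrence.
single = [[("P", ["x"]), ("Q", ["x", "y"])]]
# Transitivity F(x,y) ^ F(y,z) => F(x,z): the two argument positions
# of F get merged through y, and that class sees x, y, z in one formula.
trans = [[("F", ["x", "y"]), ("F", ["y", "z"]), ("F", ["x", "z"])]]
print(is_single_occurrence(single), is_single_occurrence(trans))
```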

Overview Introduction Motivation Notations and preliminaries First rule Second rule Experiments Conclusion and Future work

Extreme Assignments
Theorem: If predicate P is single occurrence in M, then P has an extreme assignment (all of its groundings true, or all false) in some MAP solution x_MAP.
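The theorem can be checked by brute force on a toy theory. Below, P is single occurrence in the two-formula MLN 1.0 : P(x) => Q(x,y) and 0.5 : !Q(x,y) (formulas and weights are illustrative, not from the slides); enumerating all worlds confirms that restricting P to extreme assignments preserves the MAP value:

```python
from itertools import product

DOM = ["A", "B"]
P_atoms = [("P", x) for x in DOM]
Q_atoms = [("Q", x, y) for x in DOM for y in DOM]
ATOMS = P_atoms + Q_atoms

def weight(world):
    """Total weight of satisfied ground formulas in `world`.
    Formulas (illustrative): 1.0 : P(x) => Q(x,y)   0.5 : !Q(x,y)"""
    w = 0.0
    for x, y in product(DOM, DOM):
        if (not world[("P", x)]) or world[("Q", x, y)]:
            w += 1.0
        if not world[("Q", x, y)]:
            w += 0.5
    return w

all_worlds = [dict(zip(ATOMS, vals))
              for vals in product([False, True], repeat=len(ATOMS))]
# Worlds where P is extreme: all its groundings share one truth value.
extreme_P = [w for w in all_worlds
             if len({w[a] for a in P_atoms}) == 1]

best_all = max(weight(w) for w in all_worlds)
best_ext = max(weight(w) for w in extreme_P)
print(best_all, best_ext)  # the optima coincide
```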

Exploiting Extremes
Consider the MLN:
W1 : P(X) => Q(X,Y)
W2 : Q(X,Y) ^ Q(Y,Z) => Q(X,Z)
Pairs (Q,1) and (Q,2) are not single occurrence.
But in the MLN with W2 removed:
W1 : P(X) => Q(X,Y)
(Q,1) and (Q,2) are single occurrence, hence there is a solution at extremes.
And Q(X,Y) ^ Q(Y,Z) => Q(X,Z) is a tautology at extremes:
True ^ True => True
False ^ False => False
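The tautology-at-extremes claim for transitivity is easy to verify by enumeration: when Q is all-true or all-false every grounding of Q(X,Y) ^ Q(Y,Z) => Q(X,Z) is satisfied, while a generic assignment can violate it:

```python
from itertools import product

DOM = range(3)

def transitivity_holds(Q):
    """Check every grounding of Q(X,Y) ^ Q(Y,Z) => Q(X,Z)."""
    return all((not (Q[(x, y)] and Q[(y, z)])) or Q[(x, z)]
               for x, y, z in product(DOM, DOM, DOM))

all_true = {(x, y): True for x in DOM for y in DOM}
all_false = {(x, y): False for x in DOM for y in DOM}
mixed = {**all_false, (0, 1): True, (1, 2): True}  # (0,2) missing

print(transitivity_holds(all_true))   # True: consequent always true
print(transitivity_holds(all_false))  # True: antecedent always false
print(transitivity_holds(mixed))      # False: violated at (0,1,2)
```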

Second rule for lifting MAP
Second Rule: If the MLN is single occurrence after removing formulas that are tautologies at extremes, then MAP inference in the original MLN is domain independent.

Overview Introduction Motivation Notations and preliminaries First rule Second rule Experiments Conclusion and Future Work

Experiments
Compared our performance with the purely grounded version of benchmark MLNs and with Sarkhel et al. [2014]'s non-shared MLN approach.
Used the ILP-based solver Gurobi as the base solver.
Notation:
GRB : purely grounded version
NSLGRB : Sarkhel et al. [2014]'s non-shared MLN approach
SOLGRB : our approach (single occurrence lifted GRB)

Datasets and Methodology
Used 2 benchmark MLNs:

Data set | No. of formulas | Single occurrence | Taut. at extremes | Example formulas
IE       | 7               | No (Mix)          | No                | Token(t,p,c) => InField(p,f,c); !Equal(f1,f2) ^ InField(p,f1,c) => !InField(p,f2,c); etc.
FS       | 5               | No (Mix)          | Yes               | S(x) => C(x); F(x,y) ^ F(y,z) => F(x,z); etc.

Results
Time to reach the optimal solution (plots for the IE and FS datasets).

Results
Cost of unsatisfied formulas over time, with domain size fixed at 500 (plots for the IE and FS datasets).

Results
Size of the ground theory with varying domain size (plots for the IE and FS datasets).

Overview Introduction Motivation Notations and preliminaries First rule Second rule Experiments Conclusion and Future work

Conclusion and Future Work
Presented two new rules for lifting MAP inference over MLNs, applicable to large classes of MLNs.
MAP inference becomes domain independent for single occurrence MLNs, and the rules can handle transitivity.
Future work: combine the new rules with existing lifting techniques, incorporate evidence, and lift any structured CSP (in general, any structured LP).

THANK YOU