Recursive Random Fields
Daniel Lowd, University of Washington
June 29th, 2006 (Joint work with Pedro Domingos)

One-Slide Summary
Question: How can we represent uncertainty in relational domains?
State of the art: Markov logic [Richardson & Domingos, 2004]
- A Markov logic network (MLN) is a first-order KB with a weight attached to each formula: P(X=x) = 1/Z exp(Σi wi ni(x)), where ni(x) is the number of true groundings of formula i in world x.
Problem: Only the top-level conjunction and universal quantifiers are probabilistic.
Solution: Recursive random fields (RRFs)
- An RRF is an MLN whose features are themselves MLNs.
- Inference: Gibbs sampling, iterated conditional modes (ICM)
- Learning: back-propagation
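To make the weighted-KB definition concrete, here is a minimal Python sketch (my own illustration, not code from the talk; the weights and counts are made up) of the unnormalized MLN probability:

```python
import math

def mln_score(weights, n_true_groundings):
    """Unnormalized MLN probability of a world x:
    exp(sum_i w_i * n_i(x)), where n_i(x) counts the true
    groundings of formula i in x. Dividing by Z normalizes."""
    return math.exp(sum(w * n for w, n in zip(weights, n_true_groundings)))

# Example: three formulas with (hypothetical) weights, in a world where
# they have 3, 9, and 2 true groundings respectively.
print(mln_score([1.5, 1.1, 0.7], [3, 9, 2]))
```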

Example: Friends and Smokers [Richardson & Domingos, 2004]
Predicates: Smokes(x); Cancer(x); Friends(x,y)
We wish to represent beliefs such as:
- Smoking causes cancer.
- Friends of friends are friends (transitivity).
- Everyone has a friend who smokes.

First-Order Logic
∧   (the KB is the conjunction of its formulas)
  ∀x: ¬Sm(x) ∨ Ca(x)
  ∀x,y,z: ¬Fr(x,y) ∨ ¬Fr(y,z) ∨ Fr(x,z)
  ∀x: ∃y Fr(x,y) ∧ Sm(y)
Every level of this formula tree is logical.

Markov Logic
1/Z exp(w1 · [∀x ¬Sm(x) ∨ Ca(x)] + w2 · [∀x,y,z ¬Fr(x,y) ∨ ¬Fr(y,z) ∨ Fr(x,z)] + w3 · [∀x ∃y Fr(x,y) ∧ Sm(y)])
The top-level conjunction and the universal quantifiers become probabilistic (weighted sums over formulas and their groundings); everything below them remains logical.
In particular, the existential stays logical: grounded over n objects, ∃y Fr(x,y) ∧ Sm(y) becomes a disjunction of n conjunctions, and in CNF each grounding explodes into 2^n clauses! (See the sketch below.)
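The blowup is easy to verify mechanically. The sympy sketch below (my own illustration; the symbols Fr0..Fr2 and Sm0..Sm2 stand in for the groundings Fr(x,yi) and Sm(yi)) grounds the existential over n = 3 constants and counts the resulting CNF clauses:

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, to_cnf

n = 3  # number of constants the existential is grounded over
fr = symbols(f'Fr0:{n}')  # Fr(x, y_i) for each constant y_i
sm = symbols(f'Sm0:{n}')  # Sm(y_i)

# exists y: Fr(x,y) ^ Sm(y)  ->  disjunction of n conjunctions
grounded = Or(*[And(f, s) for f, s in zip(fr, sm)])

cnf = to_cnf(grounded)
print(len(cnf.args))  # 2**n = 8 clauses
```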

Recursive Random Fields
Now every level is probabilistic:
f0   (replaces the top-level conjunction)
  w1: ∀x f1,x, where f1,x weighs Sm(x) (w4) and Ca(x) (w5)
  w2: ∀x,y,z f2,x,y,z, where f2,x,y,z weighs Fr(x,y) (w6), Fr(y,z) (w7), and Fr(x,z) (w8)
  w3: ∀x f3,x, where f3,x weighs (w9) the existential feature f4,x,y, which weighs Fr(x,y) (w10) and Sm(y) (w11)
Where: fi = 1/Zi exp(Σj wj fj), i.e., each node is an exponentiated weighted sum of its children.

The RRF Model
RRF features are parameterized and are grounded using objects in the domain.
- Leaves = predicates: a leaf feature is simply the truth value of a predicate grounding.
- Recursive features are built up from other RRF features: fi = 1/Zi exp(Σj wij fj).
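A minimal sketch of this recursion in Python (my own rendering of the definition above, not the authors' implementation; normalization constants Zi are set to 1 and the weights are made up):

```python
import math

def leaf(truth_value):
    # Base case: a leaf feature is the truth value of a ground predicate.
    return 1.0 if truth_value else 0.0

def feature(weights, child_values, z=1.0):
    # Recursive case: f_i = (1/Z_i) exp(sum_j w_ij * f_j).
    return math.exp(sum(w * f for w, f in zip(weights, child_values))) / z

# Mirroring f4 from the example: f4(x,y) combines Fr(x,y) and Sm(y)...
f4 = feature([2.0, 2.0], [leaf(True), leaf(False)])
# ...and f3 in turn takes f4's value as one of its children.
f3 = feature([1.0], [f4])
print(f3)
```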

Representing Logic: AND
(x ∧ y) ⇔ 1/Z exp(w1 x + w2 y)
[Chart: P(World) vs. number of true literals (0 … n)]

Representing Logic: OR
(x ∧ y) ⇔ 1/Z exp(w1 x + w2 y)
(x ∨ y) ⇔ ¬(¬x ∧ ¬y) ⇔ −1/Z exp(−w1 x − w2 y)
De Morgan: (x ∨ y) ⇔ ¬(¬x ∧ ¬y)
[Chart: P(World) vs. number of true literals (0 … n)]

Representing Logic: FORALL
(x ∧ y) ⇔ 1/Z exp(w1 x + w2 y)
(x ∨ y) ⇔ ¬(¬x ∧ ¬y) ⇔ −1/Z exp(−w1 x − w2 y)
∀a: f(a) ⇔ 1/Z exp(w x1 + w x2 + …)   (one tied weight w across all groundings)
[Chart: P(World) vs. number of true literals (0 … n)]

Representing Logic: EXIST
(x ∧ y) ⇔ 1/Z exp(w1 x + w2 y)
(x ∨ y) ⇔ ¬(¬x ∧ ¬y) ⇔ −1/Z exp(−w1 x − w2 y)
∀a: f(a) ⇔ 1/Z exp(w x1 + w x2 + …)
∃a: f(a) ⇔ ¬(∀a: ¬f(a)) ⇔ −1/Z exp(−w x1 − w x2 − …)
[Chart: P(World) vs. number of true literals (0 … n)]
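A quick numerical check of these constructions (my own toy sketch, with normalization omitted): with a large weight, the AND feature is maximized only when both inputs are true, and the De Morgan construction for OR is near 0 exactly when the disjunction holds (and −1 when it fails).

```python
import math
from itertools import product

w = 10.0  # large weight -> nearly deterministic behavior

def and_feature(x, y):
    # (x AND y) ~ (1/Z) exp(w*x + w*y): largest only at x = y = 1.
    return math.exp(w * x + w * y)

def or_feature(x, y):
    # (x OR y) = NOT(NOT x AND NOT y) ~ -(1/Z) exp(-w*x - w*y):
    # ~0 when at least one input is true, -1 when both are false.
    return -math.exp(-w * x - w * y)

for x, y in product([0, 1], repeat=2):
    print(x, y, and_feature(x, y), or_feature(x, y))
# FORALL and EXISTS are the same two constructions with a single tied
# weight applied across all groundings of the quantified variable.
```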

Distributions MLNs and RRFs can compactly represent:

Distribution                      MLNs  RRFs
Propositional MRF                 Yes   Yes
Deterministic KB                  Yes   Yes
Soft conjunction                  Yes   Yes
Soft universal quantification     Yes   Yes
Soft disjunction                  No    Yes
Soft existential quantification   No    Yes
Soft nested formulas              No    Yes

Inference and Learning
Inference:
- MAP: iterated conditional modes (ICM; see the sketch below)
- Conditional probabilities: Gibbs sampling
Learning:
- Back-propagation
- RRF weight learning is more powerful than MLN structure learning
- More flexible theory revision
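For concreteness, here is a generic ICM sketch (my own illustration, not the talk's implementation; it assumes some score function proportional to the unnormalized log-probability): greedily flip one truth value at a time until no single flip improves the score.

```python
def icm(state, score):
    """MAP inference by iterated conditional modes over boolean variables.
    state: list of booleans (truth values of ground predicates).
    score: maps a state to its unnormalized log-probability."""
    improved = True
    while improved:
        improved = False
        for i in range(len(state)):
            candidate = state[:i] + [not state[i]] + state[i + 1:]
            if score(candidate) > score(state):
                state, improved = candidate, True
    return state

# Example with a toy score that prefers all-true states:
print(icm([False, True, False], score=sum))
```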

Current Work: Probabilistic Integrity Constraints
We want to represent probabilistic versions of integrity constraints.

Conclusion
Recursive random fields:
+ Compactly represent many distributions that MLNs cannot
+ Make conjunctions, existentials, and nested formulas probabilistic
+ Offer new methods for structure learning and theory revision
- Less intuitive than Markov logic