Natural Logic for Textual Inference
Bill MacCartney and Christopher D. Manning
NLP Group, Stanford University
29 June 2007


2 Inferences involving monotonicity

Premise: Few states completely forbid casino gambling.

OK:
  Few western states completely forbid casino gambling.
  Few or no states completely forbid casino gambling.
  Few states completely forbid gambling.

No:
  Few states completely forbid casino gambling for kids.
  Few states or cities completely forbid casino gambling.
  Few states restrict gambling.

What kind of textual inference system could predict this?

3 Textual inference: a spectrum of approaches

From robust but shallow to deep but brittle:
  lexical/semantic overlap (Jijkoun & de Rijke 2005)
  patterned relation extraction (Romano et al. 2006)
  pred-arg structure matching (Hickl et al. 2006)
  FOL & theorem proving (Bos & Markert 2006)
Natural logic occupies a middle ground: deeper than the matching approaches, more robust than full theorem proving.

4 What is natural logic?

A logic whose vehicle of inference is natural language
  No formal notation (e.g., ∀x. man(x) → mortal(x)); just words & phrases: All men are mortal …
Focus on a ubiquitous category of inference: monotonicity
  i.e., reasoning about the consequences of broadening or narrowing the concepts or constraints in a proposition
Precise, yet sidesteps the difficulties of translating to FOL:
  idioms, intensionality and propositional attitudes, modalities, indexicals, reciprocals, scope ambiguities, quantifiers such as most, anaphoric adjectives, temporal and causal relations, aspect, unselective quantifiers, adverbs of quantification, donkey sentences, generic determiners, …
A long tradition: Aristotle, Lakoff, van Benthem, Sánchez Valencia 1991

5 Outline

Introduction
Foundations of Natural Logic
The NatLog System
Experiments with FraCaS
Experiments with RTE
Conclusion

6 The entailment relation: ⊑

In natural logic, entailment is defined as an ordering relation over expressions of all semantic types (not just sentences):

category                       semantic type              example(s)
common nouns                   e → t                      penguin ⊑ bird
adjectives                     e → t                      tiny ⊑ small
intransitive verbs             e → t                      hover ⊑ fly
transitive verbs               e → (e → t)                kick ⊑ strike
temporal & locative modifiers  (e → t) → (e → t)          this morning ⊑ today; in Beijing ⊑ in China
connectives                    t → (t → t)                and ⊑ or
quantifiers                    (e → t) → t;               everyone ⊑ someone;
                               (e → t) → ((e → t) → t)    all ⊑ most ⊑ some
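To make the ordering concrete, here is a toy sketch (ours, not part of NatLog) of lexical entailment as subsumption; the tiny hypernym table is a stand-in for a real resource such as WordNet:

# Toy sketch: the entailment ordering over lexical items, read off a
# hypernym chain. The data here is illustrative, not a real lexicon.
HYPERNYMS = {"penguin": "bird", "mallard": "duck", "duck": "bird",
             "hover": "fly", "tiny": "small", "kick": "strike"}

def entails(narrow: str, broad: str) -> bool:
    """True if `narrow` is subsumed by `broad` via the hypernym chain."""
    while narrow in HYPERNYMS:
        narrow = HYPERNYMS[narrow]
        if narrow == broad:
            return True
    return False

assert entails("penguin", "bird") and entails("mallard", "bird")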

7 Monotonicity of semantic functions

In compositional semantics, meanings are seen as functions, and can have various monotonicity properties:

Upward-monotone (↑M)
  The default: “bigger” inputs yield “bigger” outputs
  Example: broken. Since chair ⊑ furniture, broken chair ⊑ broken furniture
  Heuristic: in an ↑M context, broadening edits preserve truth
Downward-monotone (↓M)
  Negatives, restrictives, etc.: “bigger” inputs yield “smaller” outputs
  Example: doesn’t. While hover ⊑ fly, doesn’t fly ⊑ doesn’t hover
  Heuristic: in a ↓M context, narrowing edits preserve truth
Non-monotone (#M)
  Superlatives, some quantifiers (most, exactly n): neither ↑M nor ↓M
  Example: most. While penguin ⊑ bird, most penguins # most birds
  Heuristic: in a #M context, no edits preserve truth
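These three heuristics are easy to state operationally. The following minimal sketch (our illustration, with invented names, not NatLog code) maps a context's monotonicity and an edit's direction to a truth-preservation judgment:

# Minimal sketch of the monotonicity heuristics above: given the
# monotonicity of the context and the direction of a lexical edit,
# decide whether the edit preserves truth.
UP, DOWN, NON = "up", "down", "non"          # stand-ins for ↑M, ↓M, #M
BROADEN, NARROW = "broaden", "narrow"        # e.g. dog -> animal vs. dog -> poodle

def edit_preserves_truth(context_mono: str, edit_direction: str) -> bool:
    """Upward context: broadening is safe; downward: narrowing is safe;
    non-monotone: no edit is safe."""
    if context_mono == UP:
        return edit_direction == BROADEN
    if context_mono == DOWN:
        return edit_direction == NARROW
    return False

# "states" sits in a downward context under "few", so narrowing it to
# "western states" preserves truth, while broadening does not:
assert edit_preserves_truth(DOWN, NARROW)
assert not edit_preserves_truth(DOWN, BROADEN)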

8 Downward monotonicity

Downward-monotone constructions are widespread!

restrictive quantifiers (no, few, at most n): few athletes ⊑ few sprinters
negative & restrictive verbs (lack, fail, prohibit, deny): prohibit weapons ⊑ prohibit guns
prepositions & adverbs (without, except, only): without clothes ⊑ without pants
negative & restrictive nouns (ban, absence [of], refusal): drug ban ⊑ heroin ban
the antecedent of a conditional: If stocks rise, we’ll get real paid ⊑ If stocks soar, we’ll get real paid
explicit negation (no, n’t): didn’t dance ⊑ didn’t tango

9 Monotonicity of binary functions

Some quantifiers are best viewed as binary functions, and different arguments can have different monotonicities:

all (↓M, ↑M): All ducks fly ⊑ All mallards fly; All ducks fly ⊑ All ducks move
some (↑M, ↑M): Some mammals fly ⊑ Some animals fly; Some mammals fly ⊑ Some mammals move
no (↓M, ↓M): No dogs fly ⊑ No poodles fly; No dogs fly ⊑ No dogs hover
not every (↑M, ↓M): Not every bird flies ⊑ Not every animal flies; Not every bird flies ⊑ Not every bird hovers

10 Composition of monotonicity

Composition of functions ⇒ composition of monotonicity
Sánchez Valencia: a precise monotonicity calculus for categorial grammar
(The slide shows the composition tree for the example, with ↑M, ↓M, and #M marks at each node.)

  Few⁺ states⁻ completely⁻ forbid⁻ casino⁺ gambling⁺

“casino gambling” ends up positive because it sits under two downward-monotone operators, few and forbid, which cancel.
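Since monotonicities compose like signs (two downward-monotone operators cancel), the polarity marking above can be computed with simple sign arithmetic. A sketch, with operator signatures taken from the earlier slides (the function and variable names are ours):

# Sketch of monotonicity composition as sign arithmetic: nested operators
# multiply, so down∘down = up, and anything∘non-monotone = non-monotone.
UP, DOWN, NON = 1, -1, 0

def compose(outer: int, inner: int) -> int:
    """Compose the monotonicity of an outer context with an inner operator."""
    return outer * inner

# Argument monotonicities of a few operators, as on the previous slides:
OPS = {"few": (DOWN, DOWN), "all": (DOWN, UP), "forbid": (DOWN,)}

# "Few states completely forbid casino gambling":
root = UP                                  # the whole sentence
states = compose(root, OPS["few"][0])      # -> DOWN
vp = compose(root, OPS["few"][1])          # -> DOWN
gambling = compose(vp, OPS["forbid"][0])   # -> UP: the two downs cancel
assert (states, vp, gambling) == (DOWN, DOWN, UP)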

11 The NatLog System

A three-stage pipeline maps a textual inference problem to a prediction:
  1. linguistic pre-processing
  2. alignment
  3. entailment classification

12 Step 1: Linguistic Pre-processing

Tokenize & parse input sentences (future: & NER & coref & …)
Identify & project monotonicity operators
  Problem: a PTB-style parse tree is not a semantic structure!
  Solution: specify projections in PTB trees using Tregex
(The slide shows the PTB parse of Few/JJ states/NNS completely/RB forbid/VBD casino/NN gambling/NN with the polarity marks Few⁺ states⁻ completely⁻ forbid⁻ casino⁺ gambling⁺ projected onto it.)

Projection pattern for few:
  few pattern:  JJ < /^[Ff]ew$/
  arg1: ↓M on dominating NP   __ >+(NP) (NP=proj !> NP)
  arg2: ↓M on dominating S    __ >+(/.*/) (S=proj !> S)
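The effect of such projection patterns can be illustrated with a toy top-down polarity propagation; this sketch is our illustration of the idea, not the Tregex-based machinery itself:

# Toy sketch: propagate polarity down a tree, flipping it beneath the
# scope of a downward-monotone operator. FEW_SCOPE stands in for the
# spans the Tregex patterns would actually identify.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)
    polarity: int = +1            # +1 = upward context, -1 = downward

def project(node: Node, flip_under: set, polarity: int = +1) -> None:
    """Mark each node with its polarity, flipping under scope-taking labels."""
    node.polarity = polarity
    for child in node.children:
        child_pol = -polarity if node.label in flip_under else polarity
        project(child, flip_under, child_pol)

tree = Node("S", [Node("FEW_SCOPE", [Node("states"), Node("forbid ...")])])
project(tree, flip_under={"FEW_SCOPE"})
assert tree.children[0].children[0].polarity == -1   # "states" is downward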

13 Step 2: Alignment

Alignment = a sequence of atomic edits [cf. Harmeling 07]
Atomic edits over token spans: DEL, INS, SUB, ADV (advance over unchanged tokens)

  Few states completely forbid casino gambling
  Few states have completely prohibited gambling
  edits: ADV(Few states), INS(have), ADV(completely), SUB(forbid → prohibited), DEL(casino), ADV(gambling)

Limitations:
  no easy way to represent movement
  no alignments to non-contiguous sets of tokens
Benefits:
  well-defined sequence of intermediate forms
  can use an adaptation of Levenshtein string-edit DP
We haven’t (yet) invested much effort here
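The Levenshtein-style DP mentioned above is standard; here is a compact token-level sketch that recovers an edit script of ADV/SUB/DEL/INS operations (NatLog's actual aligner works over token spans with richer costs):

# Token-level Levenshtein DP returning an edit script. A sketch of the
# alignment idea only: unit costs, single tokens rather than spans.
def edit_script(src: list, tgt: list) -> list:
    n, m = len(src), len(tgt)
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        cost[i][0] = i
    for j in range(m + 1):
        cost[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = cost[i-1][j-1] + (src[i-1] != tgt[j-1])
            cost[i][j] = min(sub, cost[i-1][j] + 1, cost[i][j-1] + 1)
    # trace back through the table to recover one minimal edit script
    edits, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and cost[i][j] == cost[i-1][j-1] + (src[i-1] != tgt[j-1]):
            op = "ADV" if src[i-1] == tgt[j-1] else "SUB"
            edits.append((op, src[i-1], tgt[j-1]))
            i, j = i - 1, j - 1
        elif i > 0 and cost[i][j] == cost[i-1][j] + 1:
            edits.append(("DEL", src[i-1], None)); i -= 1
        else:
            edits.append(("INS", None, tgt[j-1])); j -= 1
    return edits[::-1]

print(edit_script("few states completely forbid casino gambling".split(),
                  "few states have completely prohibited gambling".split()))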

14 Step 3: Entailment Classification

Atomic edits ⇒ atomic entailment problems
Feature representation
  Basic features: edit type, monotonicity, “light edit” feature
  Lexical features for SUB edits: lemma similarity, WordNet features
Decision tree classifier
  Trained on a small data set designed to exercise the feature space
  Outputs an elementary entailment relation: = (equivalent), ⊏ (forward), ⊐ (backward), # (independent), | (exclusion)
Composition of atomic entailment predictions
  Fairly intuitive: ⊏ º ⊏ ⇒ ⊏, ⊏ º ⊐ ⇒ #, ⊏ º = ⇒ ⊏, etc.
  Composition yields a global entailment prediction for the problem
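Composition can be implemented as a small join table over the elementary relations; the sketch below (ours, covering only =, ⊏, ⊐, and #, with ASCII stand-ins for the symbols) folds the per-edit predictions into a global prediction:

# Sketch of relation composition: a join table over four elementary
# entailment relations, folded over the sequence of atomic predictions.
EQ, FWD, BWD, IND = "=", "<", ">", "#"   # stand-ins for =, forward, backward, #

JOIN = {
    (EQ, EQ): EQ,   (EQ, FWD): FWD,  (EQ, BWD): BWD,  (EQ, IND): IND,
    (FWD, EQ): FWD, (FWD, FWD): FWD, (FWD, BWD): IND, (FWD, IND): IND,
    (BWD, EQ): BWD, (BWD, FWD): IND, (BWD, BWD): BWD, (BWD, IND): IND,
    (IND, EQ): IND, (IND, FWD): IND, (IND, BWD): IND, (IND, IND): IND,
}

def compose_relations(relations: list) -> str:
    """Fold the join table over a sequence of atomic predictions."""
    result = EQ
    for r in relations:
        result = JOIN[(result, r)]
    return result

# The example on the next slide: SUB -> =, INS -> =, DEL -> forward.
assert compose_relations([EQ, EQ, FWD]) == FWD   # global prediction: forward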

15 Entailment model example

  Few states completely forbid casino gambling.
  Few states have completely prohibited gambling.

featurize (one feature vector per atomic edit):
  SUB(forbid → prohibited): type=SUB, mono=down, isLight=false, lemSim=0.375, wnSyn=1.0, wnAnto=0.0, wnHypo=0.0
  INS(have): type=INS, mono=down, isLight=true
  DEL(casino): type=DEL, mono=up, isLight=false

predict, then compose:
  SUB ⇒ = (equivalent); INS ⇒ = (equivalent); DEL ⇒ ⊏ (forward)
  composed prediction: ⊏ (forward)

16 The FraCaS test suite

FraCaS: mid-90s project in computational semantics
346 “textbook” examples of textual inference problems:
  P: No delegate finished the report.  H: Some delegate finished the report on time.  ⇒ no
  P: Smith believed that ITEL had won the contract in 1992.  H: ITEL won the contract in 1992.  ⇒ unk
9 sections: quantifiers, plurals, anaphora, ellipsis, …
3 possible answers: yes, no, unknown (not balanced!)
55% single-premise, 45% multi-premise (excluded)

17 Results on FraCaS

By section (§, Category, #, Acc.): 1 Quantifiers, 2 Plurals, 3 Anaphora, 4 Ellipsis, 5 Adjectives, 6 Comparatives, 7 Temporal, 8 Verbs, 9 Attitudes; “Applicable” sections: 1, 5, …; plus a row for all sections.

Confusion matrix pairing gold answers with system guesses over {yes, unk, no}; surviving cells: yes/yes 62, yes/unk 40 (row total 102); unk/yes 15, unk/unk 45 (row total 60).

18 The RTE3 test suite

RTE: more “natural” textual inference problems
  Much longer premises: average 35 words (vs. 11 for FraCaS)
  Binary classification: yes and no
RTE problems are not ideal for NatLog
  Many kinds of inference are not addressed by NatLog
  Big edit distance ⇒ propagation of errors from the atomic model
Maybe we can achieve high precision on a subset?
  Strategy: hybridize with a broad-coverage RTE system, as in Bos & Markert 2006

19 A hybrid RTE system using NatLog

Two systems run in parallel on each problem, each with its own pre-processing, alignment, and classification stages:
  NatLog ⇒ {yes, no}
  Stanford RTE system ⇒ a real-valued score, thresholded to {yes, no} (threshold either balanced or optimized)
The two outputs are combined into a final {yes, no} prediction.
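One simple way to realize such a combination is to boost the broad-coverage system's score when NatLog also predicts entailment; the rule and all constants below are illustrative assumptions of ours, not necessarily the paper's exact combination method:

# Hedged sketch of a hybrid combination: add a fixed bonus to the
# broad-coverage system's real-valued score when the high-precision
# NatLog system also says yes, then threshold.
def hybrid_predict(stanford_score: float, natlog_says_yes: bool,
                   threshold: float = 0.0, bonus: float = 0.5) -> str:
    score = stanford_score + (bonus if natlog_says_yes else 0.0)
    return "yes" if score >= threshold else "no"

# The threshold can be set for balanced output or tuned on development data.
print(hybrid_predict(-0.2, natlog_says_yes=True))   # NatLog rescues a near-miss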

20 Results on RTE3

For the RTE3 Development Set (800 problems) and the RTE3 Test Set (800 problems), the slide reports % yes, precision, recall, and accuracy for four systems: Stanford, NatLog, Hybrid (balanced), and Hybrid (optimized). The optimized hybrid answers extra problems correctly relative to Stanford alone (significant, p < 0.01).

21 Conclusion

Natural logic enables precise reasoning about monotonicity, while sidestepping the difficulties of translating to FOL.
The NatLog system successfully handles a broad range of such inferences, as demonstrated on the FraCaS test suite.
Future work:
  Add proof search, to handle multiple-premise inference problems
  Consider using CCG parses to facilitate monotonicity projection
  Explore the use of more sophisticated alignment models
  Bring factive & implicative inferences into the NatLog framework
:-) Thanks! Questions?