
1 Computing Linguistically-based Textual Inferences
Martin Forst, Palo Alto Research Center
Joint work with D. Bobrow, C. Condoravdi, L. Karttunen, T. H. King, V. de Paiva, A. Zaenen
LORIA, Nancy, March 20, 2008

2 Overview
- Introduction
  - Motivation
  - Local Textual Inference
- PARC’s XLE system
  - Process pipeline
- Abstract Knowledge Representation (AKR)
  - Conceptual and temporal structure
  - Contextual structure and instantiability
- Semantic relations
  - Entailments and presuppositions
  - Relative polarity
- Entailment and Contradiction Detection (ECD)
- Demo!

3 Motivation
- A measure of understanding a text is the ability to make inferences based on the information conveyed by it. We can test understanding by asking questions about the text.
- A long-standing goal of computational linguistics is to build a system for answering natural language questions. If the question is "Did Shackleton reach the South Pole?", the sentence "Shackleton failed to get to the South Pole." contains the answer.
- A successful QA system has to recognize semantic relations between sentences. None of the current search engines (Google, Yahoo!) is capable of delivering a simple NO answer in such cases. The system I describe in this talk makes the correct inference.

4 Local Textual Inference
PASCAL RTE Challenge (Ido Dagan, Oren Glickman), 2005, 2006

Premise: Rome is in Lazio province and Naples is in Campania.
Conclusion: Rome is located in Lazio province.
TRUE (= entailed by the premise)

Premise: Romano Prodi will meet the US President George Bush in his capacity as the president of the European commission.
Conclusion: George Bush is the president of the European commission.
FALSE (= not entailed by the premise)

5 PARC ECD (Entailment and Contradiction Detection)
Text: Kim hopped.
Hypothesis: Someone moved.
Answer: YES.

Text: Sandy touched Kim.
Hypothesis: Sandy kissed Kim.
Answer: UNKNOWN.

Text: Sandy kissed Kim.
Hypothesis: No one touched Kim.
Answer: NO.

6 World Knowledge
Premise: Romano Prodi will meet the US President George Bush in his capacity as the president of the European commission.
Conclusion: George Bush is the president of the European commission.
FALSE (= not entailed by the premise on the correct anaphoric resolution)

Premise: G. Karas will meet F. Rakas in his capacity as the president of the European commission.
Conclusion: F. Rakas is the president of the European commission.
TRUE (= entailed by the premise on one anaphoric resolution)

7 Overview
- Introduction
  - Motivation
  - Local Textual Inference
- PARC’s XLE system
  - Process pipeline
- Abstract Knowledge Representation (AKR)
  - Conceptual and temporal structure
  - Contextual structure and instantiability
- Semantic relations
  - Entailments and presuppositions
  - Relative polarity
- Entailment and Contradiction Detection (ECD)
- Demo!

8 XLE System Architecture: Text → (A)KR
1. Parse text to LFG c-/f-structure pairs
   - c-structures are context-free trees; f-structures are AVMs
   - Represent syntactic/semantic features (e.g. tense, number)
   - Localize arguments (e.g. long-distance dependencies, control)
2. Rewrite f-structures to AKR clauses
   - Collapse syntactic alternations (e.g. active-passive)
   - Flatten embedded linguistic structure to clausal form
   - Map to concepts and roles in some ontology
   - Represent intensionality, scope, temporal relations
   - Capture commitments of existence/occurrence
3. Rewrite AKR to target KR

9 XLE Pipeline
Process                                | Output
Text-Breaking                          | Delimited Sentences
NE recognition                         | Type-marked Entities (names, dates, etc.)
Morphological Analysis                 | Word stems + features
LFG parsing                            | Functional Representation
Semantic Processing                    | Scope, Predicate-argument structure
AKR Rules                              | Abstract Knowledge Representation
Alignment                              | Aligned T-H Concepts and Contexts
Entailment and Contradiction Detection | YES / NO / UNKNOWN
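To make the stage sequence concrete, here is a minimal Python sketch of the pipeline as function composition. Every function below is a naive, hypothetical stub for illustration; XLE's actual components are far richer and are not exposed under these names.

```python
# A sketch of the XLE pipeline as a chain of stage functions.
# All function bodies are naive stubs for illustration only;
# this is not XLE's actual API.

def text_breaking(text):
    """Split raw text into delimited sentences (naive stub)."""
    return [s.strip() + "." for s in text.split(".") if s.strip()]

def ne_recognition(sentences):
    """Mark capitalized non-initial words as entities (naive stub)."""
    return [(s, [w for w in s.split()[1:] if w[:1].isupper()])
            for s in sentences]

def lfg_parsing(marked):
    """Stand-in for LFG parsing: wrap each sentence in a dummy AVM."""
    return [{"PRED": s, "ENTITIES": ents} for s, ents in marked]

def akr_rules(fstructures):
    """Stand-in for the term-rewriting step that produces AKR clauses."""
    return [{"akr_of": f["PRED"], "entities": f["ENTITIES"]}
            for f in fstructures]

PIPELINE = [text_breaking, ne_recognition, lfg_parsing, akr_rules]

def run(text):
    data = text
    for stage in PIPELINE:
        data = stage(data)
    return data

print(run("Shackleton failed to get to the South Pole."))
```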

10 XLE Pipeline
- Mostly symbolic system
  - Ambiguity-enabled through packed representation of analyses
  - Filtering of dispreferred/improbable analyses is possible
    - OT marks: mostly on c-/f-structure pairs, but also on c-structures and on semantic representations for selectional preferences
  - Statistical models
    - PCFG-based pruning of the chart of possible c-structures
    - Log-linear model that selects n-best c-/f-structure pairs
[Diagram: morphological analyses → c-structures → c-/f-structure pairs, pruned in turn by CSTRUCTURE OT marks, PCFG-based chart pruning, “general” OT marks, and the log-linear model]

11 F-structures vs. AKR
- Nested structure of f-structures vs. flat AKR
- F-structures make syntactically, rather than conceptually, motivated distinctions
  - Syntactic distinctions are canonicalized away in AKR
    - Verbal predications and the corresponding nominalizations or deverbal adjectives with no essential meaning differences
    - Arguments and adjuncts map to roles
- Distinctions of semantic importance are not encoded in f-structures
  - Word senses
  - Sentential modifiers can be scope-taking (negation, modals, allegedly, predictably)
  - Tense vs. temporal reference
    - Nonfinite clauses have no tense but they do have temporal reference
    - Tense in embedded clauses can be past while temporal reference is to the future

12 F-Structure to AKR Mapping
- Input: f-structures
- Output: clausal, abstract KR
- Mechanism: packed term rewriting
  - Rewriting system controls lookup of external ontologies via the Unified Lexicon
  - Compositionally-driven transformation to AKR
- Transformations:
  - Map words to WordNet synsets
  - Canonicalize semantically equivalent but formally distinct representations
  - Make conceptual & intensional structure explicit
  - Represent the semantic contribution of particular constructions

13 Basic structure of AKR
- Conceptual Structure
  - Predicate-argument structures
  - Sense disambiguation
  - Associating roles to arguments and modifiers
- Contextual Structure
  - Clausal complements
  - Negation
  - Sentential modifiers
- Temporal Structure
  - Representation of temporal expressions
  - Tense, aspect, temporal modifiers

14 Conceptual Structure
- Captures basic predicate-argument structures
- Maps words to WordNet synsets
- Assigns VerbNet roles

subconcept(talk:4,[talk-1,talk-2,speak-3,spill-5,spill_the_beans-1,lecture-1])
role(Actor,talk:4,Ed:1)
subconcept(Ed:1,[male-2])
alias(Ed:1,[Ed])
role(cardinality_restriction,Ed:1,sg)

Shared by "Ed talked", "Ed did not talk" and "Bill will say that Ed talked."

15 Canonicalization in conceptual structure
"John took a tour of Europe."
subconcept(tour:13,[tour-1])
role(Theme,tour:13,John:1)
role(Location,tour:13,Europe:21)
subconcept(Europe:21,[location-1])
alias(Europe:21,[Europe])
role(cardinality_restriction,Europe:21,sg)
subconcept(John:1,[male-2])
alias(John:1,[John])
role(cardinality_restriction,John:1,sg)

"John traveled around Europe."
subconcept(travel:6,[travel-1,travel-2,travel-3,travel-4,travel-5,travel-6])
role(Theme,travel:6,John:1)
role(Location,travel:6,Europe:22)
subconcept(Europe:22,[location-1])
alias(Europe:22,[Europe])
role(cardinality_restriction,Europe:22,sg)
subconcept(John:1,[male-2])
alias(John:1,[John])
role(cardinality_restriction,John:1,sg)
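To illustrate what canonicalization buys, here is a hedged Python sketch that encodes the two AKRs above as sets of tuples and checks that they share the same role skeleton once the per-sentence skolem indices are stripped. The comparison function is an illustrative assumption, not PARC's actual alignment code.

```python
# AKR facts from the slide as Python tuples (alias and cardinality
# facts omitted for brevity). The role-isomorphism check below is an
# illustrative assumption about comparing canonicalized analyses.

tour_akr = {
    ("subconcept", "tour:13", ("tour-1",)),
    ("role", "Theme", "tour:13", "John:1"),
    ("role", "Location", "tour:13", "Europe:21"),
}

travel_akr = {
    ("subconcept", "travel:6",
     ("travel-1", "travel-2", "travel-3", "travel-4", "travel-5", "travel-6")),
    ("role", "Theme", "travel:6", "John:1"),
    ("role", "Location", "travel:6", "Europe:22"),
}

def role_signature(akr):
    """Reduce an AKR to its (role, argument-stem) pairs, stripping the
    skolem indices (e.g. Europe:21 -> Europe)."""
    return {(fact[1], fact[3].split(":")[0])
            for fact in akr if fact[0] == "role"}

# Both sentences canonicalize to the same Theme/Location skeleton:
assert role_signature(tour_akr) == role_signature(travel_akr)
print(role_signature(tour_akr))  # {('Theme', 'John'), ('Location', 'Europe')}
```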

16 Contextual Structure
"Bill said that Ed wanted to talk."
context(t)
context(ctx(talk:29))
context(ctx(want:19))
top_context(t)
context_relation(t,ctx(want:19),crel(Topic,say:6))
context_relation(ctx(want:19),ctx(talk:29),crel(Theme,want:19))

- Use of contexts enables flat representations
  - Contexts as arguments of embedding predicates
- Contexts as scope markers

17 Concepts and Contexts
- Concepts live outside of contexts.
- Still, we want to tie the information about concepts to the contexts they relate to.
- Existential commitments
  - Did something happen? e.g. Did Ed talk? Did Ed talk according to Bill?
  - Does something exist? e.g. There is a cat in the yard. There is no cat in the yard.

18 Instantiability An instantiability assertion of a concept-denoting term in a context implies the existence of an instance of that concept in that context. An uninstantiability assertion of a concept-denoting term in a context implies there is no instance of that concept in that context. If the denoted concept is of type event, then existence/nonexistence corresponds to truth or falsity.

19 Negation
"Ed did not talk."
Contextual structure:
context(t)
context(ctx(talk:12))                     (new context triggered by negation)
context_relation(t, ctx(talk:12), not:8)
antiveridical(t, ctx(talk:12))            (interpretation of negation)

Local and lifted instantiability assertions:
instantiable(talk:12, ctx(talk:12))
uninstantiable(talk:12, t)                (entailment of negation)

20 Relations between contexts
- Generalized entailment: veridical
  If c2 is veridical with respect to c1, the information in c2 is part of the information in c1.
  Lifting rule: instantiable(Sk, c2) => instantiable(Sk, c1)
- Inconsistency: antiveridical
  If c2 is antiveridical with respect to c1, the information in c2 is incompatible with the information in c1.
  Lifting rule: instantiable(Sk, c2) => uninstantiable(Sk, c1)
- Consistency: averidical
  If c2 is averidical with respect to c1, the information in c2 is compatible with the information in c1.
  No lifting rule between contexts.
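The two lifting rules can be pictured as a small fixpoint computation over instantiability facts and context-relation facts. The sketch below uses the predicate names from the slides; the propagation loop itself is an illustrative assumption, not the XLE implementation.

```python
# A sketch of the veridicality lifting rules. Predicate names follow
# the slides; the propagation loop is an illustrative assumption.

relations = {            # (inner_context, outer_context) -> relation
    ("ctx(talk:12)", "t"): "antiveridical",   # from "Ed did not talk"
}

facts = {("instantiable", "talk:12", "ctx(talk:12)")}

def lift(facts, relations):
    """Apply the lifting rules until no new facts are derived."""
    changed = True
    while changed:
        changed = False
        for (pred, sk, ctx) in list(facts):
            if pred != "instantiable":
                continue
            for (inner, outer), rel in relations.items():
                if inner != ctx:
                    continue
                if rel == "veridical":
                    new = ("instantiable", sk, outer)
                elif rel == "antiveridical":
                    new = ("uninstantiable", sk, outer)
                else:                      # averidical: no lifting rule
                    continue
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

print(lift(facts, relations))
# includes ('uninstantiable', 'talk:12', 't'):
# the talking event does not exist at the top level.
```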

21 Determinants of context relations
Relation depends on a complex interaction of:
- Concepts
- Lexical entailment class
- Syntactic environment

Example:
1. He didn't remember to close the window.
2. He doesn't remember that he closed the window.
3. He doesn't remember whether he closed the window.

"He closed the window." is contradicted by 1, implied by 2, and consistent with 3.

22 Overview
- Introduction
  - Motivation
  - Local Textual Inference
- PARC’s XLE system
  - Process pipeline
- Abstract Knowledge Representation (AKR)
  - Conceptual and temporal structure
  - Contextual structure and instantiability
- Semantic relations
  - Entailments and presuppositions
  - Relative polarity
- Entailment and Contradiction Detection (ECD)
- Demo!

23 Embedded clauses
The problem is to infer whether an embedded event is instantiable or uninstantiable at the top level.

It is surprising that there are no WMDs in Iraq.
It has been shown that there are no WMDs in Iraq.
==> There are no WMDs in Iraq.

24 Factives
Class | Example      | Inference Pattern (Positive / Negative)
++/-+ | forget that  | forget that X ⇝ X; not forget that X ⇝ X
+-/-- | pretend that | pretend that X ⇝ not X; not pretend that X ⇝ not X

25 Implicatives
Two-way implicatives:
Class | Example   | Inference Pattern
++/-- | manage to | manage to X ⇝ X; not manage to X ⇝ not X
+-/-+ | fail to   | fail to X ⇝ not X; not fail to X ⇝ X

One-way implicatives:
Class | Example      | Inference Pattern
++    | force to     | force X to Y ⇝ Y
+-    | prevent from | prevent X from Ying ⇝ not Y
--    | be able to   | not be able to X ⇝ not X
-+    | hesitate to  | not hesitate to X ⇝ X
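These signatures are naturally encoded as a lookup table from the polarity of the embedding environment to the entailed polarity of the complement, with None where a one-way implicative licenses no inference. A minimal sketch, with table contents taken from the slide; the function and table names are illustrative assumptions:

```python
# Implicative signatures as a lookup table. Keys are lexical items;
# values map the polarity of the embedding environment ('+' or '-')
# to the entailed polarity of the complement, or None when the
# signature licenses no inference (the one-way cases).

SIGNATURES = {
    "manage to":    {"+": "+", "-": "-"},   # ++/--
    "fail to":      {"+": "-", "-": "+"},   # +-/-+
    "force to":     {"+": "+", "-": None},  # ++
    "prevent from": {"+": "-", "-": None},  # +-
    "be able to":   {"+": None, "-": "-"},  # --
    "hesitate to":  {"+": None, "-": "+"},  # -+
}

def complement_polarity(verb, env):
    """Entailed polarity of the complement, or None if no entailment."""
    return SIGNATURES[verb][env]

assert complement_polarity("fail to", "-") == "+"    # not fail to X => X
assert complement_polarity("force to", "-") is None  # not force X to Y => ?
```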

26 Implicatives under Factives
It is surprising that Bush dared to lie. ⇝ Bush lied.
It is not surprising that Bush dared to lie. ⇝ Bush lied.

27 Phrasal Implicatives
Verb + noun-class combinations and the implicatives they yield:
have  + ability noun (ability/means)     = -- implicative
have  + chance noun (chance/opportunity) = -- implicative
have  + character noun (courage/nerve)   = ++/-- implicative
take  + chance noun (chance/opportunity) = ++/-- implicative
take  + effort noun (trouble/initiative) = ++/-- implicative
take  + asset noun (money)               = ++/-- implicative
miss  + chance noun (chance/opportunity) = +-/-+ implicative
seize + chance noun (chance/opportunity) = ++/-- implicative
use   + chance noun (chance/opportunity) or asset noun (money) = ++/-- implicative
waste + chance noun (chance/opportunity) or asset noun (money) = +-/-+ implicative

28 Phrasal Implicatives: Example
Joe had the chutzpah to steal the money. ⇝ Joe stole the money.
A two-way implicative with "character nouns" (gall, gumption, audacity, ...)

29 Relative Polarity
- Veridicality relations between contexts are determined on the basis of a recursive calculation of the relative polarity of a given "embedded" context.
- Globality: the polarity of any context depends on the sequence of potential polarity switches stretching back to the top context.
- Top-down: each complement-taking verb or other clausal modifier, based on its parent context's polarity, either switches, preserves or simply sets the polarity for its embedded context.

30 Example: polarity propagation
"Ed did not forget to force Dave to leave." ⇝ "Dave left."

31 [Diagram: embedding chain for "Ed did not forget to force Dave to leave": not > forget (subj Ed) > force (subj Ed, obj Dave) > leave (subj Dave). Polarity propagation top-down: + at the top, switched to - by "not", switched back to + by "forget to", preserved as + by "force to"; "Dave leave" therefore sits in a positive context.]
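The same signature idea drives the propagation: walking the embedding chain top-down, each item either switches, preserves, or kills the polarity. Below is a hedged sketch for this example, in the spirit of Nairn, Condoravdi and Karttunen (2006); the traversal and table are illustrative assumptions, not the actual XLE code.

```python
# A sketch of top-down polarity propagation for
# "Ed did not forget to force Dave to leave."

SIGNATURES = {
    "not":       {"+": "-", "-": "+"},    # negation switches polarity
    "forget to": {"+": "-", "-": "+"},    # +-/-+ implicative
    "force to":  {"+": "+", "-": None},   # ++ one-way implicative
}

def propagate(chain, polarity="+"):
    """Compose polarity effects along an embedding chain, top-down."""
    for item in chain:
        polarity = SIGNATURES[item][polarity]
        if polarity is None:              # no entailment survives
            return None
    return polarity

# top > not > forget to > force to > (Dave leave)
print(propagate(["not", "forget to", "force to"]))  # '+': Dave left.
```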

32 Overview
- Introduction
  - Motivation
  - Local Textual Inference
- PARC’s XLE system
  - Process pipeline
- Abstract Knowledge Representation (AKR)
  - Conceptual and temporal structure
  - Contextual structure and instantiability
- Semantic relations
  - Entailments and presuppositions
  - Relative polarity
- Entailment and Contradiction Detection (ECD)
- Demo!

33 AKR (Abstract Knowledge Representation)

34 More specific entails less specific

35 How ECD works
Text: Kim hopped.
Hypothesis: Someone moved.
1. Alignment of T and H concepts and contexts
2. Specificity computation
3. Elimination of H facts that are entailed by T facts
If no H facts remain, the hypothesis is entailed.
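Here is a minimal sketch of that decision loop on the slide-5 examples, with a toy hypernym table standing in for WordNet-derived specificity and a boolean flag standing in for the instantiable/uninstantiable alignment; all names are illustrative assumptions, not PARC's implementation.

```python
# A minimal sketch of the ECD decision loop on single-event T/H pairs.
# The toy hypernym table stands in for WordNet-derived specificity.

HYPERNYMS = {"hop": {"move"}, "kiss": {"touch"}}     # more -> less specific

def at_least_as_specific(t_word, h_word):
    return t_word == h_word or h_word in HYPERNYMS.get(t_word, set())

def ecd(text_facts, hyp_facts, hyp_negated=False):
    """Return YES / NO / UNKNOWN after eliminating entailed H facts."""
    remaining = [h for h in hyp_facts
                 if not any(at_least_as_specific(t, h) for t in text_facts)]
    if not remaining:
        # hyp_negated models an uninstantiable H event aligned with an
        # instantiable T event, which signals contradiction.
        return "NO" if hyp_negated else "YES"
    return "UNKNOWN"

print(ecd(["hop"],   ["move"]))                     # YES
print(ecd(["touch"], ["kiss"]))                     # UNKNOWN
print(ecd(["kiss"],  ["touch"], hyp_negated=True))  # NO ("No one touched Kim")
```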

36 Alignment and specificity computation
Text: Every boy saw a small cat.
Hypothesis: Every small boy saw a cat.
Steps: alignment, then specificity computation.
Monotonicity marks: every (↓) in its restrictor, (↑) in its scope; some (↑) (↑).

37 Elimination of entailed terms
Text: Every boy saw a small cat.
Hypothesis: Every small boy saw a cat.
Hypothesis terms entailed by text terms are eliminated step by step; when none remain, the hypothesis is entailed.
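To make the quantifier case concrete, here is a hedged sketch of a monotonicity-aware specificity check: in an upward monotone position the text term must be at least as specific as the hypothesis term, and in a downward monotone position the direction flips. The toy subsumption table and all names are assumptions for illustration.

```python
# Monotonicity-aware specificity checking for the
# "Every boy saw a small cat" / "Every small boy saw a cat" pair.

SUBSUMES = {("boy", "small boy"), ("cat", "small cat")}  # (general, specific)

def at_least_as_specific(a, b):
    """True if a is at least as specific as b."""
    return a == b or (b, a) in SUBSUMES

def position_entailed(t_term, h_term, mono):
    """In an upward (+) position T must be at least as specific as H;
    in a downward (-) position the direction flips."""
    if mono == "+":
        return at_least_as_specific(t_term, h_term)
    return at_least_as_specific(h_term, t_term)

# 'every' is downward monotone in its restrictor, upward in its scope.
restrictor_ok = position_entailed("boy", "small boy", "-")   # True
scope_ok      = position_entailed("small cat", "cat", "+")   # True
print(restrictor_ok and scope_ok)   # True: the text entails the hypothesis
```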

38 Contradiction: instantiable vs. uninstantiable
A term that is instantiable in one input but uninstantiable in the other (after alignment) signals a contradiction.

39 AKR modifications
[Diagram: AKR0 is mapped to P-AKR and Q-AKR via simplify, augment and normalize operations.]
augment: Oswald killed Kennedy. => Kennedy died.  Kim managed to hop. => Kim hopped.
normalize: The situation became better. => The situation improved.

40 Conclusion
- Local textual inference is a good test bed for computational semantics.
  - It is task-oriented.
  - It abstracts away from particular meaning representations and inference procedures.
  - It allows for systems that make purely linguistic inferences; others may bring in world knowledge and statistical reasoning.
- This is a good time to be doing computational semantics.
  - Purely statistical approaches have plateaued.
  - There is computing power for deeper processing.
  - Success might even pay off in real money.

41 Demo

42 Credits
ASKER team: Daniel Bobrow, Bob Cheslow, Cleo Condoravdi, Dick Crouch (now at Powerset), Martin Forst, Ronald Kaplan (now at Powerset), Lauri Karttunen, Valeria de Paiva, Annie Zaenen
Interns: Rowan Nairn, Matt Paden, Karl Pichotta
AQUAINT

43 References
- D. G. Bobrow, B. Cheslow, C. Condoravdi, L. Karttunen, T. H. King, R. Nairn, V. de Paiva, C. Price, and A. Zaenen. PARC's Bridge and Question Answering System. Proceedings of the Grammar Engineering Across Frameworks (GEAF07) Workshop, pp. 46-66, CSLI Publications.
- D. G. Bobrow, C. Condoravdi, V. de Paiva, L. Karttunen, T. H. King, L. Price, R. Nairn, and A. Zaenen. Precision-focused Textual Inference. Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 16-21.
- Dick Crouch and Tracy Holloway King. Semantics via F-Structure Rewriting. Proceedings of LFG06, CSLI On-line Publications, pp. 145-165.
- Rowan Nairn, Cleo Condoravdi and Lauri Karttunen. Computing Relative Polarity for Textual Inference. Proceedings of ICoS-5 (Inference in Computational Semantics), April 20-21, 2006, Buxton, UK.

