
1 Textual Entailment (ACL 2007 Tutorial). Ido Dagan (Bar Ilan University, Israel), Dan Roth (University of Illinois, Urbana-Champaign, USA), Fabio Massimo Zanzotto (University of Rome, Italy).

2 Page 2 Outline: 1. Motivation and Task Definition; 2. A Skeletal Review of Textual Entailment Systems; 3. Knowledge Acquisition Methods; 4. Applications of Textual Entailment; 5. A Textual Entailment View of Applied Semantics.

3 Page 3 I. Motivation and Task Definition

4 Page 4 Motivation Text applications require semantic inference A common framework for applied semantics is needed, but still missing Textual entailment may provide such framework

5 Page 5 Desiderata for Modeling Framework A framework for a target level of language processing should provide: 1)Generic (feasible) module for applications 2)Unified (agreeable) paradigm for investigating language phenomena Most semantics research is scattered WSD, NER, SRL, lexical semantics relations… (e.g. vs. syntax) Dominating approach - interpretation

6 Page 6 Natural Language and Meaning Meaning Language Ambiguity Variability

7 Page 7 Variability of Semantic Expression Model variability as relations between text expressions: Equivalence: text1 ⇔ text2 (paraphrasing); Entailment: text1 ⇒ text2 (the general case). Example expressions: Dow ends up; Dow climbs 255; The Dow Jones Industrial Average closed up 255; Stock market hits a record high; Dow gains 255 points.

8 Page 8 Typical Application Inference: Entailment Question: Who bought Overture? Expected answer form (hypothesized answer): X bought Overture. Text: Overture's acquisition by Yahoo ⇒ hypothesized answer: Yahoo bought Overture (the text entails the hypothesized answer). Similar for IE: X acquire Y. Similar for "semantic" IR: t: Overture was bought for … Summarization (multi-document): identify redundant info. MT evaluation (and recent ideas for MT). Educational applications.

9 Page 9 KRAQ'05 Workshop - KNOWLEDGE and REASONING for ANSWERING QUESTIONS (IJCAI-05) CFP: Reasoning aspects: * information fusion, * search criteria expansion models * summarization and intensional answers, * reasoning under uncertainty or with incomplete knowledge, Knowledge representation and integration: * levels of knowledge involved (e.g. ontologies, domain knowledge), * knowledge extraction models and techniques to optimize response accuracy … but similar needs for other applications – can entailment provide a common empirical framework?

10 Page 10 Classical Entailment Definition Chierchia & McConnell-Ginet (2001): A text t entails a hypothesis h if h is true in every circumstance (possible world) in which t is true Strict entailment - doesn't account for some uncertainty allowed in applications

11 Page 11 “Almost certain” Entailments t: The technological triumph known as GPS … was incubated in the mind of Ivan Getting. h: Ivan Getting invented the GPS.

12 Page 12 Applied Textual Entailment A directional relation between two text fragments, Text (t) and Hypothesis (h): t entails h (t ⇒ h) if humans reading t will infer that h is most likely true. Operational (applied) definition: human gold standard, as in NLP applications; assuming common background knowledge, which is indeed expected from applications.

13 Page 13 Probabilistic Interpretation Definition: t probabilistically entails h if: P(h is true | t) > P(h is true) t increases the likelihood of h being true ≡ Positive PMI – t provides information on h’s truth P(h is true | t ): entailment confidence The relevant entailment score for applications In practice: “most likely” entailment expected
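
A minimal sketch of how this definition could be checked against data; the counting function and the toy numbers are illustrative assumptions, not part of the tutorial.

    # Probabilistic entailment sketch: t probabilistically entails h if P(h|t) > P(h).
    def probabilistically_entails(n_h_true, n_total, n_h_true_given_t, n_t):
        """Return True if the estimated P(h|t) exceeds the estimated prior P(h)."""
        p_h = n_h_true / n_total              # prior probability that h is true
        p_h_given_t = n_h_true_given_t / n_t  # probability that h is true when t holds
        return p_h_given_t > p_h

    # Toy counts: h is true in 10% of all cases, but in 90% of the cases where t holds.
    print(probabilistically_entails(100, 1000, 90, 100))  # True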

14 Page 14 The Role of Knowledge For textual entailment to hold we require: text AND knowledge ⇒ h, but knowledge should not entail h alone. Systems are not supposed to validate h's truth regardless of t (e.g. by searching h on the web).

15 Page 15 PASCAL Recognizing Textual Entailment (RTE) Challenges EU FP-6 Funded PASCAL Network of Excellence 2004-7. Organizers: Bar-Ilan University; ITC-irst and CELCT, Trento; MITRE; Microsoft Research.

16 Page 16 Generic Dataset by Application Use 7 application settings in RTE-1, 4 in RTE-2/3 QA IE “Semantic” IR Comparable documents / multi-doc summarization MT evaluation Reading comprehension Paraphrase acquisition Most data created from actual applications output RTE-2/3: 800 examples in development and test sets 50-50% YES/NO split

17 Page 17 RTE Examples (TEXT / HYPOTHESIS / TASK / ENTAILMENT):
1. Reagan attended a ceremony in Washington to commemorate the landings in Normandy. / Washington is located in Normandy. / IE / False
2. Google files for its long awaited IPO. / Google goes public. / IR / True
3. …: a shootout at the Guadalajara airport in May, 1993, that killed Cardinal Juan Jesus Posadas Ocampo and six others. / Cardinal Juan Jesus Posadas Ocampo died in 1993. / QA / True
4. The SPD got just 21.5% of the vote in the European Parliament elections, while the conservative opposition parties polled 44.5%. / The SPD is defeated by the opposition parties. / IE / True

18 Page 18 Participation and Impact Very successful challenges, world wide: RTE-1 – 17 groups RTE-2 – 23 groups ~150 downloads RTE-3 – 25 groups Joint workshop at ACL-07 High interest in the research community Papers, conference sessions and areas, PhD’s, influence on funded projects Textual Entailment special issue at JNLE ACL-07 tutorial

19 Page 19 Methods and Approaches (RTE-2) Measure similarity match between t and h (coverage of h by t): Lexical overlap (unigram, N-gram, subsequence) Lexical substitution (WordNet, statistical) Syntactic matching/transformations Lexical-syntactic variations (“paraphrases”) Semantic role labeling and matching Global similarity parameters (e.g. negation, modality) Cross-pair similarity Detect mismatch (for non-entailment) Interpretation to logic representation + logic inference

20 Page 20 Dominant approach: Supervised Learning Features model similarity and mismatch; a classifier determines the relative weights of the information sources; train on the development set and auxiliary t-h corpora. Pipeline: (t, h) → similarity features (lexical, n-gram, syntactic, semantic, global) → feature vector → classifier → YES/NO.
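
A minimal sketch of this feature-plus-classifier setup; the two features, the toy training pairs, and the use of scikit-learn are illustrative assumptions, not any RTE-2 system's actual design.

    # Sketch of the dominant RTE approach: similarity features + a trained classifier.
    from sklearn.linear_model import LogisticRegression

    def features(t, h):
        t_words, h_words = set(t.lower().split()), set(h.lower().split())
        overlap = len(t_words & h_words) / len(h_words)   # coverage of h by t
        length_ratio = len(h_words) / len(t_words)        # crude global mismatch cue
        return [overlap, length_ratio]

    # Toy training pairs (text, hypothesis, label): 1 = entails, 0 = does not entail.
    train_pairs = [
        ("Google files for its long awaited IPO", "Google goes public", 1),
        ("The SPD polled 21.5% of the vote", "The SPD won the election", 0),
    ]
    X = [features(t, h) for t, h, _ in train_pairs]
    y = [label for _, _, label in train_pairs]
    clf = LogisticRegression().fit(X, y)

    print(clf.predict([features("Yahoo bought Overture", "Overture was acquired by Yahoo")]))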

21 Page 21 RTE-2 Results (First Author (Group) / Accuracy / Average Precision):
Hickl (LCC): 75.4% / 80.8%
Tatu (LCC): 73.8% / 71.3%
Zanzotto (Milan & Rome): 63.9% / 64.4%
Adams (Dallas): 62.6% / 62.8%
Bos (Rome & Leeds): 61.6% / 66.9%
11 groups: 58.1%-60.5% accuracy
7 groups: 52.9%-55.6% accuracy
Average accuracy: 60%; Median: 59%

22 Page 22 Analysis For the first time, methods that carry some deeper analysis seemed (?) to outperform shallow lexical methods. Cf. Kevin Knight's invited talk at EACL-06, titled: Isn't Linguistic Structure Important, Asked the Engineer. Still, most systems that do utilize deep analysis did not score significantly better than the lexical baseline.

23 Page 23 Why? System reports point at: Lack of knowledge (syntactic transformation rules, paraphrases, lexical relations, etc.) Lack of training data It seems that systems that coped better with these issues performed best: Hickl et al. - acquisition of large entailment corpora for training Tatu et al. – large knowledge bases (linguistic and world knowledge)

24 Page 24 Some suggested research directions Knowledge acquisition Unsupervised acquisition of linguistic and world knowledge from general corpora and web Acquiring larger entailment corpora Manual resources and knowledge engineering Inference Principled framework for inference and fusion of information levels Are we happy with bags of features?

25 Page 25 Complementary Evaluation Modes “Seek” mode: Input: h and corpus Output: all entailing t ’s in corpus Captures information seeking needs, but requires post-run annotation (TREC-style) Entailment subtasks evaluations Lexical, lexical-syntactic, logical, alignment… Contribution to various applications QA – Harabagiu & Hickl, ACL-06; RE – Romano et al., EACL-06

26 Page 26 II. A Skeletal review of Textual Entailment Systems

27 Page 27 Textual Entailment (worked example) Text: Eyeing the huge market potential, currently led by Google, Yahoo took over search company Overture Services Inc. last year. Hypothesis: Yahoo acquired Overture (entailed). Intermediate statements (entails / subsumed by): Overture is a search company; Google is a search company; …; Google owns Overture. Components needed: phrasal verb paraphrasing, entity matching, semantic role labeling, alignment, integration. How?

28 Page 28 A General Strategy for Textual Entailment Given sentences T and H, re-represent both at the lexical, syntactic and semantic levels, drawing on a knowledge base of semantic, structural and pragmatic transformations/rules. Decision: find the set of transformations/features of the new representation (or use these to create a cost function) that allows embedding of H in T.

29 Page 29 Details of The Entailment Strategy Preprocessing: multiple levels of lexical pre-processing; syntactic parsing; shallow semantic parsing; annotating semantic phenomena. Representation: bag of words, n-grams, through tree/graph-based representations; logical representations. Knowledge Sources: syntactic mapping rules; lexical resources; semantic-phenomena-specific modules; RTE-specific knowledge sources; additional corpora/Web resources. Control Strategy & Decision Making: single-pass/iterative processing; strict vs. parameter-based. Justification: what can be said about the decision?

30 Page 30 The Case of Shallow Lexical Approaches Preprocessing Identify Stop Words Representation Bag of words Knowledge Sources Shallow Lexical resources – typically Wordnet Control Strategy & Decision Making Single pass Compute Similarity; use threshold tuned on a development set (could be per task) Justification It works

31 Page 31 Shallow Lexical Approaches (Example) Lexical/word-based semantic overlap: score based on matching each word in H with some word in T. Word similarity measure: may use WordNet. May take account of subsequences, word order. 'Learn' a threshold on the maximum word-based match score. Text: The Cassini spacecraft has taken images that show rivers on Saturn's moon Titan. Hyp: The Cassini spacecraft has reached Titan. Text: NASA's Cassini-Huygens spacecraft traveled to Saturn in 2006. Text: The Cassini spacecraft arrived at Titan in July, 2006. Clearly, this may not appeal to what we think of as understanding, and it is easy to generate cases for which this does not work well. However, it works (surprisingly) well with respect to current evaluation metrics (data sets?)

32 Page 32 An Algorithm: LocalLexicalMatching
For each word in Hypothesis, Text: if the word matches a stopword, remove it.
If no words are left in Hypothesis or Text, return 0.
numberMatched = 0
for each word W_H in Hypothesis:
    for each word W_T in Text:
        HYP_LEMMAS = Lemmatize(W_H); TEXT_LEMMAS = Lemmatize(W_T)
        if any term in HYP_LEMMAS matches any term in TEXT_LEMMAS using LexicalCompare() (which uses WordNet): numberMatched++
Return: numberMatched / |HYP_LEMMAS|

33 Page 33 An Algorithm: LocalLexicalMatching (Cont.)
LexicalCompare():
    if (LEMMA_H == LEMMA_T) return TRUE
    if (HypernymDistanceFromTo(textWord, hypothesisWord) <= 3) return TRUE
    if (MeronymyDistanceFromTo(textWord, hypothesisWord) <= 3) return TRUE
    if (MemberOfDistanceFromTo(textWord, hypothesisWord) <= 3) return TRUE
    if (SynonymOf(textWord, hypothesisWord)) return TRUE
Notes: LexicalCompare is asymmetric and makes use of a single relation type at a time. Additional differences could be attributed to the stop word list (e.g., including aux verbs). Straightforward improvements such as bi-grams do not help. More sophisticated lexical knowledge (entities; time) should help.
LLM Performance: RTE-2: Dev 63.00, Test 60.50; RTE-3: Dev 67.50, Test 65.63.
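
A runnable approximation of LocalLexicalMatching using NLTK's WordNet interface; the stop-word handling, the depth-3 hypernym test, and the omission of the meronymy/member-of checks are simplifying assumptions, so this only approximates the algorithm sketched above (requires nltk.download('wordnet') and nltk.download('stopwords')).

    # Approximate LocalLexicalMatching with NLTK's WordNet (illustrative, not the original code).
    from nltk.corpus import stopwords, wordnet as wn

    STOP = set(stopwords.words("english"))

    def lexical_compare(w_t, w_h):
        if w_t == w_h:
            return True
        for s_h in wn.synsets(w_h):
            for s_t in wn.synsets(w_t):
                if s_t == s_h:                      # synonymy: shared synset
                    return True
                for path in s_t.hypernym_paths():   # hypothesis word is a hypernym of the
                    if s_h in path[-4:]:            # text word within 3 steps
                        return True
        return False

    def llm_score(text, hyp):
        t_words = [w.lower() for w in text.split() if w.lower() not in STOP]
        h_words = [w.lower() for w in hyp.split() if w.lower() not in STOP]
        if not t_words or not h_words:
            return 0.0
        matched = sum(1 for w_h in h_words if any(lexical_compare(w_t, w_h) for w_t in t_words))
        return matched / len(h_words)

    print(llm_score("The Cassini spacecraft has taken images of Titan.",
                    "The Cassini spacecraft has reached Titan."))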

34 Page 34 Details of The Entailment Strategy (Again) Preprocessing: multiple levels of lexical pre-processing; syntactic parsing; shallow semantic parsing; annotating semantic phenomena. Representation: bag of words, n-grams, through tree/graph-based representations; logical representations. Knowledge Sources: syntactic mapping rules; lexical resources; semantic-phenomena-specific modules; RTE-specific knowledge sources; additional corpora/Web resources. Control Strategy & Decision Making: single-pass/iterative processing; strict vs. parameter-based. Justification: what can be said about the decision?

35 Page 35 Preprocessing Syntactic processing: syntactic parsing (Collins; Charniak; CCG); dependency parsing (+types). Lexical processing: tokenization; lemmatization for each word in Hypothesis, Text; phrasal verbs; idiom processing; named entities + normalization; date/time arguments + normalization. Semantic processing: semantic role labeling; nominalization; modality/polarity/factivity; co-reference. (Some of these annotations are often used only during decision making; only a few systems apply the semantic processing steps.)

36 Page 36 Details of The Entailment Strategy (Again) Preprocessing: multiple levels of lexical pre-processing; syntactic parsing; shallow semantic parsing; annotating semantic phenomena. Representation: bag of words, n-grams, through tree/graph-based representations; logical representations. Knowledge Sources: syntactic mapping rules; lexical resources; semantic-phenomena-specific modules; RTE-specific knowledge sources; additional corpora/Web resources. Control Strategy & Decision Making: single-pass/iterative processing; strict vs. parameter-based. Justification: what can be said about the decision?

37 Page 37 Basic Representations Representation levels range from raw text through local lexical, syntactic parse and semantic representations up to logical forms; textual entailment inference operates over these representations on the way to a full meaning representation. Most approaches augment the basic structure defined by the processing level with additional annotation and make use of a tree/graph/frame-based system.

38 Page 38 Basic Representations (Syntax) Local Lexical Syntactic Parse Hyp: The Cassini spacecraft has reached Titan.

39 Page 39 Basic Representations (Shallow Semantics: Pred-Arg) T: The government purchase of the Roanoke building, a former prison, took place in 1902. H: The Roanoke building, which was a former prison, was bought by the government in 1902. Predicate-argument structures (figure): T: predicate "take place" with arguments "the govt. purchase … prison" and "in 1902", plus the nested predicate "purchase" with argument "the Roanoke building" and AM_TMP "in 1902". H: predicate "buy" with ARG_0 "the government", ARG_1 "the Roanoke … prison"; predicate "be" with ARG_1 "the Roanoke building", ARG_2 "a former prison". [Roth & Sammons '07]

40 Page 40 Basic Representations (Logical Representation) [Bos & Markert] The semantic representation language is a first-order fragment of the language used in Discourse Representation Theory (DRT), conveying argument structure with a neo-Davidsonian analysis and including the recursive DRS structure to cover negation, disjunction, and implication.

41 Page 41 Representing Knowledge Sources Rather straightforward in the logical framework. Tree/graph-based representations may also use rule-based transformations to encode different kinds of knowledge, sometimes represented as generic or knowledge-based tree transformations.

42 Page 42 Representing Knowledge Sources (cont.) In general, there is a mix of procedural and rule based encodings of knowledge sources Done by hanging more information on parse tree or predicate argument representation [Example from LCC’s system] Or different frame-based annotation systems for encoding information, that are processed procedurally.

43 Page 43 Details of The Entailment Strategy (Again) Preprocessing: multiple levels of lexical pre-processing; syntactic parsing; shallow semantic parsing; annotating semantic phenomena. Representation: bag of words, n-grams, through tree/graph-based representations; logical representations. Knowledge Sources: syntactic mapping rules; lexical resources; semantic-phenomena-specific modules; RTE-specific knowledge sources; additional corpora/Web resources. Control Strategy & Decision Making: single-pass/iterative processing; strict vs. parameter-based. Justification: what can be said about the decision?

44 Page 44 Knowledge Sources The knowledge sources available to the system are the most significant component of supporting TE. Different systems draw the line between preprocessing capabilities and knowledge resources differently, and the way resources are handled also differs across approaches.

45 Page 45 Enriching Preprocessing In addition to syntactic parsing, several approaches enrich the representation with various linguistic resources: POS tagging; stemming; predicate-argument representation (verb predicates and nominalizations); entity annotation (stand-alone NERs with a variable number of classes); acronym handling and entity normalization (mapping mentions of the same entity expressed in different ways to a single ID); co-reference resolution; dates, times and numeric values (identification and normalization); identification of semantic relations (complex nominals, genitives, adjectival phrases, and adjectival clauses); event identification and frame construction.

46 Page 46 Lexical Resources Recognizing that a word or a phrase in T entails a word or a phrase in H is essential in determining Textual Entailment. WordNet is the most commonly used resource. In most cases, a WordNet-based similarity measure between words is used; this is typically a symmetric relation. Lexical chains over WordNet are used; in some cases, care is taken to disallow chains of specific relations. Extended WordNet is used to exploit entities and the derivation relation, which links verbs with their corresponding nominalizations.

47 Page 47 Lexical Resources (Cont.) Lexical Paraphrasing Rules: a number of efforts to acquire relational paraphrase rules are under way, and several systems are making use of resources such as DIRT and TEASE. Some systems seem to have acquired paraphrase rules that are in the RTE corpus: person killed --> claimed one life; hand reins over to --> give starting job to; same-sex marriage --> gay nuptials; cast ballots in the election --> vote; dominant firm --> monopoly power; death toll --> kill; try to kill --> attack; lost their lives --> were killed; left people dead --> people were killed.

48 Page 48 Semantic Phenomena A large number of semantic phenomena have been identified as significant to Textual Entailment, and many of them are handled (in a restricted way) by some of the systems. Very little per-phenomenon quantification has been done, if at all. Semantic implications of interpreting syntactic structures [Braz et al. '05; Bar-Haim et al. '07]: Conjunctions: Jake and Jill ran up the hill ⇒ Jake ran up the hill; Jake and Jill met on the hill ⇏ *Jake met on the hill. Clausal modifiers: But celebrations were muted as many Iranians observed a Shi'ite mourning month. ⇒ Many Iranians observed a Shi'ite mourning month. (Semantic Role Labeling handles this phenomenon automatically.)

49 Page 49 Semantic Phenomena (Cont.) Relative clauses: The assailants fired six bullets at the car, which carried Vladimir Skobtsov. ⇒ The car carried Vladimir Skobtsov. (Semantic Role Labeling handles this phenomenon automatically.) Appositives: Frank Robinson, a one-time manager of the Indians, has the distinction for the NL. ⇒ Frank Robinson is a one-time manager of the Indians. Passive: We have been approached by the investment banker. ⇒ The investment banker approached us. (Semantic Role Labeling handles this phenomenon automatically.) Genitive modifier: Malaysia's crude palm oil output is estimated to have risen. ⇒ The crude palm oil output of Malaysia is estimated to have risen.

50 Page 50 Logical Structure Factivity: uncovering the context in which a verb phrase is embedded: The terrorists tried to enter the building. ⇏ The terrorists entered the building. Polarity: negative markers or a negation-denoting verb (e.g. deny, refuse, fail): The terrorists failed to enter the building. ⇏ The terrorists entered the building. Modality/Negation: dealing with modal auxiliary verbs (can, must, should), which modify verbs' meanings, and with the identification of the scope of negation. Superlatives/Comparatives/Monotonicity: inflecting adjectives or adverbs. Quantifiers, determiners and articles.

51 Page 51 Some Examples [Braz et al., IJCAI workshop '05; PARC corpus] T: Legally, John could drive. H: John drove. T: Bush said that Khan sold centrifuges to North Korea. H: Centrifuges were sold to North Korea. T: No US congressman visited Iraq until the war. H: Some US congressmen visited Iraq before the war. T: The room was full of women. H: The room was full of intelligent women. T: The New York Times reported that Hanssen sold FBI secrets to the Russians and could face the death penalty. H: Hanssen sold FBI secrets to the Russians. T: All soldiers were killed in the ambush. H: Many soldiers were killed in the ambush.

52 Page 52 Details of The Entailment Strategy (Again) Preprocessing: multiple levels of lexical pre-processing; syntactic parsing; shallow semantic parsing; annotating semantic phenomena. Representation: bag of words, n-grams, through tree/graph-based representations; logical representations. Knowledge Sources: syntactic mapping rules; lexical resources; semantic-phenomena-specific modules; RTE-specific knowledge sources; additional corpora/Web resources. Control Strategy & Decision Making: single-pass/iterative processing; strict vs. parameter-based. Justification: what can be said about the decision?

53 Page 53 Control Strategy and Decision Making Single iteration: strict logical approaches are, in principle, a single-stage computation; the pair is processed and transformed into logical form, and existing theorem provers act on the pair along with the KB. Multiple iterations: graph-based algorithms are typically iterative; following [Punyakanok et al. '04], transformations are applied and an entailment test is run after each transformation. Transformations can be chained, but sometimes the order makes a difference. The algorithm can be greedy or more exhaustive, searching for the best path [Braz et al. '05; Bar-Haim et al. '07].

54 Page 54 Transformation Walkthrough [Braz et. al’05] T: The government purchase of the Roanoke building, a former prison, took place in 1902. H: The Roanoke building, which was a former prison, was bought by the government in 1902. Does ‘H’ follow from ‘T’?

55 Page 55 Transformation Walkthrough (1) T: The government purchase of the Roanoke building, a former prison, took place in 1902. H: The Roanoke building, which was a former prison, was bought by the government in 1902. The govt. purchase… prison take placein 1902 ARG_0ARG_1ARG_2 PRED The government buy The Roanoke … prison ARG_0 ARG_1 PRED The Roanoke building be a former prison ARG_1 ARG_2 PRED purchase The Roanoke building ARG_1 PRED In 1902 AM_TMP

56 Page 56 Transformation Walkthrough (2) T: The government purchase of the Roanoke building, a former prison, took place in 1902. The government purchase of the Roanoke building, a former prison, occurred in 1902. H: The Roanoke building, which was a former prison, was bought by the government. The govt. purchase… prison occur in 1902 ARG_0ARG_2 PRED Phrasal Verb Rewriter

57 Page 57 Transformation Walkthrough (3) T: The government purchase of the Roanoke building, a former prison, occurred in 1902. The government purchase the Roanoke building in 1902. H: The Roanoke building, which was a former prison, was bought by the government in 1902. The government purchase ARG_0 ARG_1 PRED Nominalization Promoter the Roanoke building, a former prison AM_TMP In 1902 NOTE: depends on earlier transformation: order is important!

58 Page 58 Transformation Walkthrough (4) T: The government purchase of the Roanoke building, a former prison, occurred in 1902. The Roanoke building be a former prison. H: The Roanoke building, which was a former prison, was bought by the government in 1902. The Roanoke building be ARG_1ARG_2 PRED Apposition Rewriter a former prison

59 Page 59 Transformation Walkthrough (5) T: The government purchase of the Roanoke building, a former prison, took place in 1902. H: The Roanoke building, which was a former prison, was bought by the government in 1902. The government buy The Roanoke … prison ARG_0 ARG_1 PRED The Roanoke building be a former prison ARG_1 ARG_2 PRED In 1902 AM_TMP The government purchase The Roanoke … prison ARG_0 ARG_1 PRED The Roanoke building be a former prison ARG_1 ARG_2 PRED In 1902 AM_TMP WordNet

60 Page 60 Characteristics Multiple paths => optimization problem Shortest or highest-confidence path through transformations Order is important; may need to explore different orderings Module dependencies are ‘local’; module B does not need access to module A’s KB/inference, only its output If outcome is “true”, the (optimal) set of transformations and local comparisons form a proof

61 Page 61 Summary: Control Strategy and Decision Making Despite the appeal of strict logical approaches, as of today they do not work well enough. Bos & Markert: the strict logical approach falls significantly behind good LLMs and multiple levels of lexical pre-processing; only incorporating rather shallow features and using them in the evaluation saves this approach. Braz et al.: the strict graph-based representation is not doing as well as LLM. Tatu et al.: results show that the strict logical approach is inferior to LLMs, but when put together it produces some gain. Using machine learning methods as a way to combine systems and multiple features has been found very useful.

62 Page 62 Hybrid/Ensemble Approaches Bos et al.: use theorem prover and model builder Expand models of T, H using model builder, check sizes of models Test consistency with background knowledge with T, H Try to prove entailment with and without background knowledge Tatu et al. (2006) use ensemble approach: Create two logical systems, one lexical alignment system Combine system scores using coefficients found via search (train on annotated data) Modify coefficients for different tasks Zanzotto et al. (2006) try to learn from comparison of structures of T, H for ‘true’ vs. ‘false’ entailment pairs Use lexical, syntactic annotation to characterize match between T, H for successful, unsuccessful entailment pairs Train Kernel/SVM to distinguish between match graphs

63 Page 63 Justification For most approaches, justification is given only by empirical evaluation on the data. Logical approaches: there is a proof-theoretic justification, modulo the power of the resources and the ability to map a sentence to a logical form. Graph/tree-based approaches: there is a model-theoretic justification; the approach is sound, but not complete, modulo the availability of resources.

64 Page 64 Justifying Graph Based Approaches [Braz et al. '05] R: a knowledge representation language, with a well-defined syntax and semantics over a domain D. For text snippets s, t: r_s, r_t are their representations in R, and M(r_s), M(r_t) their model-theoretic representations. There is a well-defined notion of subsumption in R, defined model-theoretically: for u, v ∈ R, u is subsumed by v when M(u) ⊆ M(v). This is not an algorithm; we need a proof theory.

65 Page 65 Defining Semantic Entailment (2) The proof theory is weak; it will show r_s ⊆ r_t only when they are relatively similar syntactically. r ∈ R is faithful to s if M(r_s) = M(r). Definition: let s, t be text snippets with representations r_s, r_t ∈ R. We say that s semantically entails t if there is a representation r ∈ R that is faithful to s, for which we can prove that r ⊆ r_t. Given r_s we need to generate many equivalent representations r'_s and test r'_s ⊆ r_t. This cannot be done exhaustively; how do we generate alternative representations?

66 Page 66 Defining Semantic Entailment (3) A rewrite rule (l, r) is a pair of expressions in R such that l ⊆ r. Given a representation r_s of s and a rule (l, r) for which r_s ⊆ l, the augmentation of r_s via (l, r) is r'_s = r_s ∧ r. Claim: r'_s is faithful to s. Proof: in general, since r'_s = r_s ∧ r, M(r'_s) = M(r_s) ∩ M(r). However, since r_s ⊆ l ⊆ r, M(r_s) ⊆ M(r). Consequently M(r'_s) = M(r_s), and the augmented representation is faithful to s.

67 Page 67 The claim suggests an algorithm for generating alternative (equivalent) representations and for semantic entailment. The resulting algorithm is a sound algorithm, but is not complete. Completeness depends on the quality of the KB of rules. The power of this algorithm is in the rules KB. l and r might be very different syntactically, but by satisfying model theoretic subsumption they provide expressivity to the re-representation in a way that facilitates the overall subsumption. Comments

68 Page 68 Non-Entailment The problem of determining non-entailment is harder, mostly due to its structure. Most approaches determine non-entailment heuristically: set a threshold for a cost function and, if it is not met by the pair, say 'no'. Several approaches have identified specific features that hint at non-entailment. A model-theoretic approach to non-entailment has also been developed, although its effectiveness isn't clear yet.

69 Page 69 What are we missing? It is completely clear that the key missing resource is knowledge: better resources translate immediately into better results. At this point existing resources seem to be lacking in coverage and accuracy; there are not enough high-quality public resources, and no quantification. Some examples: Lexical knowledge: some cases are difficult to acquire systematically (A bought Y ⇒ A has/owns Y), and many of the current lexical resources are very noisy. Numbers and quantitative reasoning. Time and date; temporal reasoning. Robust event-based reasoning and information integration.

70 Page 70 Textual Entailment as a Classification Task

71 Page 71 RTE as a classification task RTE is a classification task: given a pair we need to decide whether T implies H or T does not imply H. We can learn a classifier from annotated examples. What we need: a learning algorithm and a suitable feature space.

72 Page 72 Defining the feature space How do we define the feature space? Possible features: "distance features" (features of "some" distance between T and H); "entailment trigger features"; "pair features" (the content of the T-H pair is represented). Possible representations of the sentences: bag-of-words (possibly with n-grams), syntactic representation, semantic representation. Example: T1: "At the end of the year, all solid companies pay dividends." H1: "At the end of the year, all solid insurance companies pay dividends." T1 ⇒ H1.

73 Page 73 Distance Features Possible features: number of words in common, longest common subsequence, longest common syntactic subtree, … (a small sketch follows below). T: "At the end of the year, all solid companies pay dividends." H: "At the end of the year, all solid insurance companies pay dividends." T ⇒ H.
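
A small sketch of two of the distance features listed above (word overlap and longest common subsequence over whitespace tokens); the syntactic-subtree feature is omitted since it needs a parser, and the tokenization is a simplifying assumption.

    # Two distance features: word overlap and longest common subsequence (illustrative).
    def word_overlap(t, h):
        t_set, h_set = set(t.lower().split()), set(h.lower().split())
        return len(t_set & h_set) / len(h_set)

    def longest_common_subsequence(t, h):
        a, b = t.lower().split(), h.lower().split()
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                dp[i][j] = dp[i-1][j-1] + 1 if a[i-1] == b[j-1] else max(dp[i-1][j], dp[i][j-1])
        return dp[len(a)][len(b)]

    T = "At the end of the year, all solid companies pay dividends."
    H = "At the end of the year, all solid insurance companies pay dividends."
    print(word_overlap(T, H), longest_common_subsequence(T, H))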

74 Page 74 Entailment Triggers Possible features from (de Marneffe et al., 2006): Polarity features: presence/absence of negative polarity contexts (not, no or few, without): "Oil price surged" ⇏ "Oil prices didn't grow". Antonymy features: presence/absence of antonymous words in T and H: "Oil price is surging" ⇏ "Oil prices is falling down". Adjunct features: dropping/adding of a syntactic adjunct when moving from T to H: "all solid companies pay dividends" ⇏ "all solid companies pay cash dividends". … (a sketch of two such triggers follows below).
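
A sketch of two of these trigger features, using a hand-picked negation word list and WordNet antonyms via NLTK; the word list and the heuristics are assumptions, not de Marneffe et al.'s actual features.

    # Polarity and antonymy trigger features (illustrative heuristics).
    from nltk.corpus import wordnet as wn

    NEGATIONS = {"not", "no", "never", "without", "few"}

    def polarity_mismatch(t, h):
        """True if exactly one of T, H contains a negative-polarity marker."""
        t_neg = bool(NEGATIONS & set(t.lower().split()))
        h_neg = bool(NEGATIONS & set(h.lower().split()))
        return t_neg != h_neg

    def contains_antonym_pair(t, h):
        """True if some word in T has a WordNet antonym appearing in H."""
        h_words = set(h.lower().split())
        for w in set(t.lower().split()):
            for lemma in (l for s in wn.synsets(w) for l in s.lemmas()):
                if any(a.name() in h_words for a in lemma.antonyms()):
                    return True
        return False

    print(polarity_mismatch("Oil prices surged", "Oil prices did not grow"))   # True
    print(contains_antonym_pair("Oil prices rise", "Oil prices fall"))         # True (rise/fall)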

75 Page 75 Pair Features Possible features: bag-of-word spaces of T and H; syntactic spaces of T and H. T: "At the end of the year, all solid companies pay dividends." H: "At the end of the year, all solid insurance companies pay dividends." T ⇒ H. Bag-of-word features (figure): end_T, year_T, solid_T, companies_T, pay_T, dividends_T, …; end_H, year_H, solid_H, companies_H, pay_H, dividends_H, …, insurance_H.

76 Page 76 Pair Features: what can we learn? In the bag-of-word spaces of T and H (end_T, year_T, …, end_H, year_H, …, insurance_H) we can learn things like: T implies H when T contains "end"…, T does not imply H when H contains "end"… It seems to be totally irrelevant!!!

77 Page 77 ML Methods in the possible feature spaces The approaches can be placed in the space of possible features (distance, entailment trigger, pair) crossed with sentence representations (bag-of-words, syntactic, semantic); cited systems include (Hickl et al., 2006), (Zanzotto&Moschitti, 2006), (Bos&Markert, 2006), (Ipken et al., 2006), (Kozareva&Montoyo, 2006), (de Marneffe et al., 2006), (Herrera et al., 2006), (Rodney et al., 2006).

78 Page 78 Effectively using the Pair Feature Space Roadmap Motivation: Reason why it is important even if it seems not. Understanding the model with an example Challenges A simple example Defining the cross-pair similarity (Zanzotto, Moschitti, 2006)

79 Page 79 Observing the Distance Feature Space… (Zanzotto, Moschitti, 2006) T1: "At the end of the year, all solid companies pay dividends." H1: "At the end of the year, all solid insurance companies pay dividends." T1 ⇒ H1. T1: "At the end of the year, all solid companies pay dividends." H2: "At the end of the year, all solid companies pay cash dividends." T1 ⇏ H2. In a distance feature space (% common words, % common syntactic dependencies) the two pairs are very likely the same point.

80 Page 80 What can happen in the pair feature space? (Zanzotto, Moschitti, 2006) T1: "At the end of the year, all solid companies pay dividends." H1: "At the end of the year, all solid insurance companies pay dividends." T1 ⇒ H1. T1: "At the end of the year, all solid companies pay dividends." H2: "At the end of the year, all solid companies pay cash dividends." T1 ⇏ H2. T3: "All wild animals eat plants that have scientifically proven medicinal properties." H3: "All wild mountain animals eat plants that have scientifically proven medicinal properties." T3 ⇒ H3. The similarity between the third pair and the first pair should be higher than its similarity to the second pair.

81 Page 81 Observations Some examples are difficult to exploit in the distance feature space… We need a space that considers the content and the structure of textual entailment examples. Let us explore the pair space, using the kernel trick: define the space by defining the distance K(P1, P2) instead of defining the features, e.g. K(T1 ⇒ H1, T1 ⇏ H2).

82 Page 82 Target Cross-pair similarity K_S((T',H'),(T'',H'')) = K_T(T',T'') + K_T(H',H''). How do we build it: using a syntactic interpretation of the sentences and a similarity among trees K_T(T',T''), which counts the number of subtrees in common between T' and T''. This is a syntactic pair feature space. Question: do we need something more? (Zanzotto, Moschitti, 2006)
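
A toy sketch of this cross-pair kernel: K_T is approximated here by counting shared complete subtrees of NLTK trees, a crude stand-in for the tree kernels actually used, and K_S just sums the text-side and hypothesis-side similarities; the hand-written parse trees are assumptions for illustration.

    # Toy cross-pair similarity K_S((T',H'),(T'',H'')) = K_T(T',T'') + K_T(H',H'').
    from collections import Counter
    from nltk import Tree

    def subtree_fragments(tree):
        # Represent every complete subtree by its bracketed string.
        return Counter(str(sub) for sub in tree.subtrees())

    def k_t(t1, t2):
        f1, f2 = subtree_fragments(t1), subtree_fragments(t2)
        return sum(min(f1[k], f2[k]) for k in f1 if k in f2)

    def k_s(pair1, pair2):
        (t1, h1), (t2, h2) = pair1, pair2
        return k_t(t1, t2) + k_t(h1, h2)

    T1 = Tree.fromstring("(S (NP (DT all) (JJ solid) (NNS companies)) (VP (VBP pay) (NNS dividends)))")
    H1 = Tree.fromstring("(S (NP (DT all) (JJ solid) (NN insurance) (NNS companies)) (VP (VBP pay) (NNS dividends)))")
    print(k_s((T1, H1), (T1, H1)))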

83 Page 83 Observing the syntactic pair feature space Can we use syntactic tree similarity? (Zanzotto, Moschitti, 2006)

84 Page 84 Observing the syntactic pair feature space Can we use syntactic tree similarity? (Zanzotto, Moschitti, 2006)

85 Page 85 Observing the syntactic pair feature space Can we use syntactic tree similarity? Not only! (Zanzotto, Moschitti, 2006)

86 Page 86 Observing the syntactic pair feature space Can we use syntactic tree similarity? Not only! We want to use/exploit also the implied rewrite rule (Zanzotto, Moschitti, 2006) a b cd a b cd a b cd a b cd

87 Page 87 Exploiting Rewrite Rules To capture the textual entailment recognition rule (rewrite rule or inference rule), the cross-pair similarity measure should consider: the structural/syntactical similarity between, respectively, texts and hypotheses the similarity among the intra-pair relations between constituents How to reduce the problem to a tree similarity computation? (Zanzotto, Moschitti, 2006)

88 Page 88 Exploiting Rewrite Rules (Zanzotto, Moschitti, 2006)

89 Page 89 Exploiting Rewrite Rules Intra-pair operations (Zanzotto, Moschitti, 2006)

90 Page 90 Exploiting Rewrite Rules Intra-pair operations  Finding anchors (Zanzotto, Moschitti, 2006)

91 Page 91 Exploiting Rewrite Rules Intra-pair operations  Finding anchors  Naming anchors with placeholders (Zanzotto, Moschitti, 2006)

92 Page 92 Exploiting Rewrite Rules Intra-pair operations  Finding anchors  Naming anchors with placeholders  Propagating placeholders (Zanzotto, Moschitti, 2006)

93 Page 93 Exploiting Rewrite Rules Intra-pair operations  Finding anchors  Naming anchors with placeholders  Propagating placeholders Cross-pair operations (Zanzotto, Moschitti, 2006)

94 Page 94 Cross-pair operations  Matching placeholders across pairs Exploiting Rewrite Rules Intra-pair operations  Finding anchors  Naming anchors with placeholders  Propagating placeholders (Zanzotto, Moschitti, 2006)

95 Page 95 Exploiting Rewrite Rules Cross-pair operations  Matching placeholders across pairs  Renaming placeholders Intra-pair operations  Finding anchors  Naming anchors with placeholders  Propagating placeholders

96 Page 96 Intra-pair operations  Finding anchors  Naming anchors with placeholders  Propagating placeholders Exploiting Rewrite Rules Cross-pair operations  Matching placeholders across pairs  Renaming placeholders  Calculating the similarity between syntactic trees with co-indexed leaves

97 Page 97 Intra-pair operations  Finding anchors  Naming anchors with placeholders  Propagating placeholders Exploiting Rewrite Rules Cross-pair operations  Matching placeholders across pairs  Renaming placeholders  Calculating the similarity between syntactic trees with co-indexed leaves (Zanzotto, Moschitti, 2006)

98 Page 98 Exploiting Rewrite Rules The initial example: sim(H 1,H 3 ) > sim(H 2,H 3 )? (Zanzotto, Moschitti, 2006)

99 Page 99 Defining the Cross-pair similarity The cross-pair similarity is based on the distance between syntactic trees with co-indexed leaves: K_S((T',H'),(T'',H'')) = max over c ∈ C of [ K_T(t(T',c), t(T'',i)) + K_T(t(H',c), t(H'',i)) ], where C is the set of all the correspondences between anchors of (T',H') and (T'',H''), t(S, c) returns the parse tree of the hypothesis (text) S where its placeholders are replaced by means of the substitution c, i is the identity substitution, and K_T(t1, t2) is a function that measures the similarity between the two trees t1 and t2. (Zanzotto, Moschitti, 2006)

100 Page 100 Defining the Cross-pair similarity

101 Page 101 Refining Cross-pair Similarity Controlling complexity: we reduced the size of the set of anchors using the notion of chunk. Reducing the computational cost: many subtree computations are repeated during the computation of K_T(t1, t2); this can be exploited for a better dynamic programming algorithm (Moschitti&Zanzotto, 2007). Focusing on the information within a pair that is relevant for the entailment: text trees are pruned according to where anchors attach. (Zanzotto, Moschitti, 2006)

102 Page 102 BREAK (30 min)

103 Page 103 III. Knowledge Acquisition Methods

104 Page 104 Knowledge Acquisition for TE What kind of knowledge do we need? Explicit knowledge (structured knowledge bases): relations among words (or concepts), symmetric (synonymy, co-hyponymy) or directional (hyponymy, part of, …); relations among sentence prototypes, symmetric (paraphrasing) or directional (inference rules / rewrite rules). Implicit knowledge: relations among sentences, symmetric (paraphrasing examples) or directional (entailment examples).

105 Page 105 Acquisition of Explicit Knowledge

106 Page 106 Acquisition of Explicit Knowledge The questions we need to answer: What? What do we want to learn, and which resources do we need? Using what? Which principles do we have? How? How do we organize the "knowledge acquisition" algorithm?

107 Page 107 Acquisition of Explicit Knowledge: what? Types of knowledge: Symmetric: co-hyponymy between words (cat ↔ dog); synonymy between words (buy ↔ acquire); sentence prototypes (paraphrasing): X bought Y ↔ X acquired Z% of Y's shares. Directional semantic relations: words (cat ⇒ animal, buy ⇒ own, wheel part_of car); sentence prototypes: X acquired Z% of Y's shares ⇒ X owns Y.

108 Page 108 Acquisition of Explicit Knowledge: Using what? Underlying hypotheses: Harris' Distributional Hypothesis (DH) (Harris, 1964): "Words that tend to occur in the same contexts tend to have similar meanings.", i.e. sim(w1, w2) ≈ sim(C(w1), C(w2)). Robison's Point-wise Assertion Patterns (PAP) (Robison, 1970): "It is possible to extract relevant semantic relations with some patterns.", i.e. w1 is in a relation r with w2 if the contexts match pattern_r(w1, w2).

109 Page 109 Distributional Hypothesis (DH) Words or forms are mapped into a context (feature) space: sim_w(w1, w2) ≈ sim_ctx(C(w1), C(w2)). Example: w1 = constitute, w2 = compose, with the corpus as the source of contexts: "… the sun is constituted of hydrogen …", "… the Sun is composed of hydrogen …".
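
A minimal sketch of the Distributional Hypothesis in code: represent each word by a bag of surrounding context words and compare the vectors with cosine similarity; the toy corpus and the window size are assumptions.

    # Distributional Hypothesis sketch: words are similar if their context vectors are similar.
    import math
    from collections import Counter

    def context_vector(word, corpus, window=3):
        vec = Counter()
        for sentence in corpus:
            tokens = sentence.lower().split()
            for i, tok in enumerate(tokens):
                if tok == word:
                    vec.update(tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window])
        return vec

    def cosine(v1, v2):
        dot = sum(v1[k] * v2[k] for k in v1 if k in v2)
        norm = math.sqrt(sum(x * x for x in v1.values())) * math.sqrt(sum(x * x for x in v2.values()))
        return dot / norm if norm else 0.0

    corpus = ["the sun is constituted of hydrogen",
              "the sun is composed of hydrogen",
              "water is composed of hydrogen and oxygen"]
    print(cosine(context_vector("constituted", corpus), context_vector("composed", corpus)))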

110 Page 110 Point-wise Assertion Patterns (PAP) w1 is in a relation r with w2 if the contexts match pattern_r(w1, w2). Example: relation: w1 part_of w2; patterns: "w1 is constituted of w2", "w1 is composed of w2"; with the corpus as the source of contexts ("… the sun is constituted of hydrogen …", "… the Sun is composed of hydrogen …") this yields part_of(sun, hydrogen). A statistical indicator S_corpus(w1, w2) selects correct vs. incorrect relations among words.
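
A sketch of Point-wise Assertion Patterns as simple regular expressions over a toy corpus; the two surface patterns are the ones from the slide, while the relation label and the frequency count used as a crude statistical indicator are simplifying assumptions.

    # Point-wise Assertion Patterns sketch: extract part_of(w1, w2) with surface patterns.
    import re
    from collections import Counter

    PART_OF_PATTERNS = [
        r"(\w+) is constituted of (\w+)",
        r"(\w+) is composed of (\w+)",
    ]

    def extract_part_of(corpus):
        pairs = Counter()   # raw frequency as a crude stand-in for S_corpus(w1, w2)
        for sentence in corpus:
            for pattern in PART_OF_PATTERNS:
                for w1, w2 in re.findall(pattern, sentence.lower()):
                    pairs[(w1, w2)] += 1
        return pairs

    corpus = ["the sun is constituted of hydrogen",
              "the sun is composed of hydrogen"]
    print(extract_part_of(corpus))   # Counter({('sun', 'hydrogen'): 2})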

111 Page 111 DH and PAP cooperate Words or forms (w1 = constitute, w2 = compose) and their contexts C(w1), C(w2) in the feature space, with the corpus as the source of contexts ("… the sun is constituted of hydrogen …", "… the Sun is composed of hydrogen …"): the Distributional Hypothesis and Point-wise Assertion Patterns can be applied over the same data.

112 Page 112 Knowledge Acquisition: Where do methods differ? On the "word" side: target equivalence classes (concepts or relations); target forms (words or expressions). On the "context" side: feature space; similarity function. (Figure: words or forms, e.g. w1 = cat, w2 = dog, mapped to contexts C(w1), C(w2) in the feature space.)

113 Page 113 KA4TE: a first classification of some methods Methods can be classified by the type of knowledge acquired (symmetric vs. directional) and by the underlying hypothesis (Distributional Hypothesis vs. Point-wise Assertion Patterns); examples include ISA patterns (Hearst, 1992), Concept Learning (Lin&Pantel, 2001a), Inference Rules / DIRT (Lin&Pantel, 2001b), Relation Pattern Learning / ESPRESSO (Pantel&Pennacchiotti, 2006), Noun Entailment (Geffet&Dagan, 2005), Verb Entailment (Zanzotto et al., 2006), and TEASE (Szpektor et al., 2004).

114 Page 114 Noun Entailment Relation Type of knowledge: directional relations. Underlying hypothesis: distributional hypothesis. Main idea: the distributional inclusion hypothesis (Geffet&Dagan, 2006): w1 ⇒ w2 if all the prominent features of w1 occur with w2 in a sufficiently large corpus, i.e. the prominent features of w1 are included in those of w2 (I(C(w1)) ⊆ I(C(w2))).

115 Page 115 Verb Entailment Relations Type of knowledge: directional (oriented) relations. Underlying hypothesis: point-wise assertion patterns. Main idea: win ⇒ play? player wins! (Zanzotto, Pennacchiotti, Pazienza, 2006). Relation: v1 ⇒ v2; patterns: "agentive_nominalization(v2) v1"; statistical indicator: point-wise mutual information S(v1, v2).

116 Page 116 Verb Entailment Relations Understanding the idea: Selectional restriction: fly(x) ⇒ has_wings(x); in general v(x) ⇒ c(x) (if x is the subject of v then x has the property c). Agentive nominalization: an agentive noun is the doer or the performer of an action v'; "X is a player" may be read as play(x), and c(x) is clearly v'(x) if the property c is derived from v' by agentive nominalization. (Zanzotto, Pennacchiotti, Pazienza, 2006) [Skipped]

117 Page 117 Verb Entailment Relations Understanding the idea: Given the expression "player wins": seen as a selectional restriction, win(x) ⇒ play(x); seen as a selectional preference, P(play(x)|win(x)) > P(play(x)). [Skipped]

118 Page 118 Knowledge Acquisition for TE: How? The algorithmic nature of a DH+PAP method Direct Starting point: target words Indirect Starting point: context feature space Iterative Interplay between the context feature space and the target words

119 Page 119 Direct Algorithm 1. Select target words w_i from the corpus or from a dictionary. 2. Retrieve the contexts of each w_i and represent them in the feature space as C(w_i). 3. For each pair (w_i, w_j): compute the similarity sim(C(w_i), C(w_j)) in the context space; if sim(w_i, w_j) = sim(C(w_i), C(w_j)) > θ, then w_i and w_j belong to the same equivalence class W. (Underlying relation: sim(w1, w2) ≈ sim(C(w1), C(w2)).)

120 Page 120 Indirect Algorithm 1. Given an equivalence class W, select relevant contexts and represent them in the feature space. 2. Retrieve target words (w1, …, wn) that appear in these contexts; these are likely to be words in the equivalence class W. 3. Eventually, for each w_i, retrieve C(w_i) from the corpus. 4. Compute the centroid I(C(W)). 5. For each w_i, if sim(I(C(W)), w_i) …

121 Page 121 Iterative Algorithm 1. For each word w_i in the equivalence class W, retrieve the contexts C(w_i) and represent them in the feature space. 2. Extract words w_j that have contexts similar to C(w_i). 3. Extract the contexts C(w_j) of these new words. 4. For each new word w_j, if sim(C(W), w_j) > θ, put w_j in W. (Underlying relation: sim(w1, w2) ≈ sim(C(w1), C(w2)).)

122 Page 122 Knowledge Acquisition using DH and PAP Direct algorithms: concepts from text via clustering (Lin&Pantel, 2001); inference rules, aka DIRT (Lin&Pantel, 2001); … Indirect algorithms: Hearst's ISA patterns (Hearst, 1992); question answering patterns (Ravichandran&Hovy, 2002); … Iterative algorithms: entailment rules from the Web, aka TEASE (Szpektor et al., 2004); Espresso (Pantel&Pennacchiotti, 2006); …

123 Page 123 TEASE Type: iterative algorithm. On the "word" side: target equivalence classes are fine-grained relations; target forms are verbs with arguments. On the "context" side: feature space. Innovations with respect to research before 2004: first direct algorithm for extracting rules. (Figure: the template prevent(X,Y) with features X_{filler}, Y_{filler} and a dependency tree for "X finally call Y indictable".) (Szpektor et al., 2004)

124 Page 124 TEASE Pipeline (Szpektor et al., 2004): Input template: X subj-accuse-obj Y. Sample corpus for the input template (from the Web): "Paula Jones accused Clinton…", "BBC accused Blair…", "Sanhedrin accused St.Paul…", … Anchor sets (Anchor Set Extraction, ASE): {Paula Jones subj; Clinton obj}, {Sanhedrin subj; St.Paul obj}, … Sample corpus for the anchor sets: "Paula Jones called Clinton indictable…", "St.Paul defended before the Sanhedrin…" Templates (Template Extraction, TE): X call Y indictable; Y defend before X; … The ASE and TE steps iterate. [Skipped]

125 Page 125 TEASE Innovations with respect to research before 2004: first direct algorithm for extracting rules; a feature selection is done to assess the most informative features; extracted forms are clustered to obtain the most general sentence prototype of a given set of equivalent forms (figure: the dependency features of "X call Y indictable for harassment" and "X finally call Y indictable" are merged into the shared prototype "X call Y indictable"). (Szpektor et al., 2004) [Skipped]

126 Page 126 Espresso Type: iterative algorithm. On the "word" side: target equivalence classes are relations; target forms are expressions, i.e. sequences of tokens (e.g. "Y is composed by X", "Y is made of X" for compose(X,Y)). Innovations with respect to research before 2006: a measure to determine specific vs. general patterns (a ranking over the equivalent forms). (Pantel&Pennacchiotti, 2006)

127 Page 127 Espresso (example) Instances: (leader, panel), (city, region), (oxygen, water). Patterns with reliability scores: 1.0 "Y is composed by X"; 0.8 "Y is part of X"; 0.2 "X, Y". Retrieved instances: (tree, land), (oxygen, hydrogen), (atom, molecule), (leader, panel), (range of information, FBI report), (artifact, exhibit), …, ranked: 1.0 (tree, land); 0.9 (atom, molecule); 0.7 (leader, panel); 0.6 (range of information, FBI report); 0.6 (artifact, exhibit); 0.2 (oxygen, hydrogen). (Pantel&Pennacchiotti, 2006) [Skipped]

128 Page 128 Espresso Innovations with respect to research before 2006: a measure to determine specific vs. general patterns (a ranking over the equivalent forms, e.g. 1.0 "Y is composed by X"; 0.8 "Y is part of X"; 0.2 "X, Y"); both pattern and instance selection are performed; different use of general and specific patterns in the iterative algorithm. (Pantel&Pennacchiotti, 2006) [Skipped]

129 Page 129 Acquisition of Implicit Knowledge

130 Page 130 Acquisition of Implicit Knowledge The questions we need to answer: What? What do we want to learn, and which resources do we need? Using what? Which principles do we have?

131 Page 131 Acquisition of Implicit Knowledge: what? Types of knowledge: Symmetric: near-synonymy between sentences: Acme Inc. bought Goofy ltd. ↔ Acme Inc. acquired 11% of Goofy ltd.'s shares. Directional semantic relations: entailment between sentences: Acme Inc. acquired 11% of Goofy ltd.'s shares ⇒ Acme Inc. owns Goofy ltd. Note: tricky non-entailments are also relevant.

132 Page 132 Acquisition of Implicit Knowledge: Using what? Underlying hypotheses: Structural and content similarity: "Sentences are similar if they share enough content." A revised form of Point-wise Assertion Patterns: "Some patterns of sentences reveal relations among sentences": sim(s1, s2) is assessed according to relations between s1 and s2.

133 Page 133 A first classification of some methods Types of knowledge vs. underlying hypothesis: Symmetric (structural and content similarity): Paraphrase Corpus (Dolan&Quirk, 2004). Directional, entails (revised point-wise assertion patterns): relations among sentences (Burger&Ferro, 2005). Directional, not entails: relations among sentences (Hickl et al., 2006).

134 Page 134 Entailment relations among sentences Type of knowledge: directional relations (entailment). Underlying hypothesis: revised point-wise assertion patterns. Main idea: in headline news items, the first sentence/paragraph generally entails the title (Burger&Ferro, 2005). Relation: s2 ⇒ s1; pattern: "News Item: Title(s1), First_Sentence(s2)". This pattern works on the structure of the text.

135 Page 135 Entailment relations among sentences, examples from the web. Title: New York Plan for DNA Data in Most Crimes. Body: Eliot Spitzer is proposing a major expansion of New York's database of DNA samples to include people convicted of most crimes, while making it easier for prisoners to use DNA to try to establish their innocence. … Title: Chrysler Group to Be Sold for $7.4 Billion. Body: DaimlerChrysler confirmed today that it would sell a controlling interest in its struggling Chrysler Group to Cerberus Capital Management of New York, a private equity firm that specializes in restructuring troubled companies. …
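
A sketch of the harvesting pattern from the previous slide, applied to the second example above: pair each news item's title (hypothesis) with its first sentence (text) and label the pair as entailing. The crude sentence splitting and the dictionary layout are assumptions.

    # Harvest entailment pairs from news items: first sentence (text) => title (hypothesis).
    def harvest_pairs(news_items):
        pairs = []
        for item in news_items:
            first_sentence = item["body"].split(". ")[0]   # crude sentence split
            pairs.append({"text": first_sentence, "hypothesis": item["title"], "label": "entails"})
        return pairs

    items = [{"title": "Chrysler Group to Be Sold for $7.4 Billion",
              "body": "DaimlerChrysler confirmed today that it would sell a controlling interest "
                      "in its struggling Chrysler Group to Cerberus Capital Management. More text follows."}]
    print(harvest_pairs(items)[0]["hypothesis"])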

136 Page 136 Tricky Not-Entailment relations among sentences Type of knowledge: directional relations (tricky not-entailment). Underlying hypothesis: revised point-wise assertion patterns. Main idea: in a text, sentences containing the same named entity generally do not entail each other; sentences connected by "on the contrary", "but", … do not entail each other (Hickl et al., 2006). Relation: s1 ⇏ s2; patterns: s1 and s2 are in the same text and share at least a named entity; "s1. On the contrary, s2".

137 Page 137 Tricky Not-Entailment relations among sentences, examples from (Hickl et al., 2006): T: One player losing a close friend is Japanese pitcher Hideki Irabu, who was befriended by Wells during spring training last year. H: Irabu said he would take Wells out to dinner when the Yankees visit Toronto. T: According to the professor, present methods of cleaning up oil slicks are extremely costly and are never completely efficient. H: In contrast, he stressed, Clean Mag has a 100 percent pollution retrieval rate, is low cost and can be recycled.

138 Page 138 Context Sensitive Paraphrasing He used a Phillips head to tighten the screw. The bank owner tightened security after a spate of local crimes. The Federal Reserve will aggressively tighten monetary policy. Candidate replacements for "tighten": loosen, strengthen, step up, toughen, improve, fasten, impose, intensify, ease, beef up, simplify, curb, reduce.

139 Context Sensitive Paraphrasing Can speak replace command? The general commanded his troops. The general spoke to his troops. The soloist commanded attention. The soloist spoke to attention.

140 Context Sensitive Paraphrasing We need to know when one word can paraphrase another, not just if. Given a word v and its context in sentence S, and another word u: can u replace v in S and have S keep the same or an entailed meaning? Is the new sentence S', where u has replaced v, entailed by the previous sentence S? The general commanded [v] his troops. [speak = u] The general spoke to his troops. YES. The soloist commanded [v] attention. [speak = u] The soloist spoke to attention. NO.

141 Related Work Paraphrase generation: Given a sentence or phrase, generate paraphrases of that phrase which have the same or entailed meaning in some context. [DIRT;TEASE] A sense disambiguation task – w/o naming the sense Dagan et. al’06 Kauchak & Barzilay (in the context of improving MT evaluation) SemEval word Substitution Task; Pantel et. al ‘06 In these cases, this was done by learning (in a supervised way) a single classifier per word u

142 Context Sensitive Paraphrasing [Connor&Roth '07] Use a single global binary classifier f(S, v, u) → {0, 1}. Unsupervised, bootstrapped learning approach. Key: the use of a very large amount of unlabeled data to derive a reliable supervision signal that is then used to train a supervised learning algorithm. Features are the amount of overlap between the contexts u and v have both been seen with. Context sensitivity is included by restricting to contexts similar to S: are both u and v seen in contexts similar to the local context S? This allows running the classifier on previously unseen pairs (u, v).

143 Page 143 IV. Applications of Textual Entailment

144 Page 144 Relation Extraction (Romano et al. EACL-06) Identify different ways of expressing a target relation Examples: Management Succession, Birth - Death, Mergers and Acquisitions, Protein Interaction Traditionally performed in a supervised manner Requires dozens-hundreds examples per relation Examples should cover broad semantic variability Costly - Feasible??? Little work on unsupervised approaches

145 Page 145 Proposed Approach Pipeline: input template (X prevent Y) → entailment rule acquisition (TEASE) → templates (X prevention for Y, X treat Y, X reduce Y) → syntactic matcher (with transformation rules) → relation instances.

146 Page 146 Dataset Bunescu 2005: recognizing interactions between annotated protein pairs; 200 Medline abstracts. Input template: X interact with Y.

147 Page 147 Manual Analysis - Results 93% of interacting protein pairs can be identified with lexical-syntactic templates.
Frequency of syntactic phenomena: transparent head 34%; apposition 24%; conjunction 24%; set 13%; relative clause 8%; co-reference 7%; coordination 7%; passive form 2%.
Number of templates vs. recall (within the 93%): 2 templates - 10%; 4 - 20%; 6 - 30%; 11 - 40%; 21 - 50%; 39 - 60%; 73 - 70%; 107 - 80%; 141 - 90%; 175 - 100%.

148 Page 148 TEASE Output for X interact with Y A sample of correct templates learned: X binding to Y; X bind to Y; X Y interaction; X activate Y; X attach to Y; X stimulate Y; X interaction with Y; X couple to Y; X trap Y; interaction between X and Y; X recruit Y; X become trapped in Y; X associate with Y; X Y complex; X be linked to Y; X recognize Y; X target Y; X block Y.

149 Page 149 TEASE Potential Recall on Training Set Iterative: taking the top 5 ranked templates as input. Morph: recognizing morphological derivations (cf. semantic role labeling vs. matching). Recall by experiment: input 39%; input + iterative 49%; input + iterative + morph 63%.

150 Page 150 Performance vs. Supervised Approaches Supervised: 180 training abstracts

151 Page 151 Textual Entailment for Question Answering Sanda Harabagiu and Andrew Hickl (ACL-06) : Methods for Using Textual Entailment in Open-Domain Question Answering Typical QA architecture – 3 stages: 1)Question processing 2)Passage retrieval 3)Answer processing  Incorporated their RTE-2 entailment system at stages 2&3, for filtering and re-ranking

152 Page 152 Integrated three methods: 1) Test entailment between the question and the final answer; filter and re-rank by entailment score. 2) Test entailment between the question and a candidate retrieved passage; combine the entailment score in passage ranking. 3) Test entailment between the question and Automatically Generated Questions (AGQ) created from the candidate paragraph, utilizing an earlier method for generating Q-A pairs from a paragraph; the correct answer should match that of an entailed AGQ. TE is relatively easy to integrate at different stages. Results: 20% accuracy increase.

153 Page 153 Answer Validation Exercise @ CLEF 2006-7 (Peñas et al., Journal of Logic and Computation, to appear) Allows textual entailment systems to validate (and prioritize) the answers of QA systems participating at CLEF. AVE participants receive: 1) the question and a candidate answer – and need to generate the full hypothesis; 2) a supporting passage – which should entail the answer hypothesis. Methodologically: enables measuring the contribution of TE systems to QA performance across many QA systems, and TE developers do not need to build a full-blown QA system.
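A toy sketch of the participant's side of this setup: build a declarative hypothesis from the question and a candidate answer, then hand it to a TE system together with the supporting passage. The question patterns and the `entails` callback below are hypothetical simplifications, not the CLEF organizers' procedure.

```python
import re

def build_hypothesis(question, answer):
    # Very crude question-to-statement patterns (illustrative only).
    q = question.rstrip("?")
    if q.lower().startswith("who "):
        return re.sub(r"^[Ww]ho\b", answer, q)      # "Who bought Overture?" -> "Yahoo bought Overture"
    if q.lower().startswith("where is "):
        return q[len("Where is "):] + " is in " + answer
    return q + " " + answer                         # fallback

def validate(question, answer, passage, entails):
    # `entails` is a TE system provided by the participant: (text, hypothesis) -> bool.
    hypothesis = build_hypothesis(question, answer)
    return entails(passage, hypothesis)             # VALIDATED / REJECTED

print(build_hypothesis("Who bought Overture?", "Yahoo"))
```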

154 Page 154 V. A Textual Entailment view of Applied Semantics

155 Page 155 Classical Approach = Interpretation Stipulated Meaning Representation (by scholar) Language (by nature) Variability Logical forms, word senses, semantic roles, named entity types, … - scattered interpretation tasks Feasible/suitable framework for applied semantics?

156 Page 156 Textual Entailment = Text Mapping Assumed Meaning (by humans) Language (by nature) Variability

157 Page 157 General Case – Inference Meaning Representation Language Inference Interpretation Textual Entailment Entailment mapping is the actual applied goal – but also a touchstone for understanding! Interpretation becomes a possible means. Varying representation levels may be investigated.

158 Page 158 Some perspectives Issues with semantic interpretation: hard to agree on a representation language; costly to annotate semantic representations for training; difficult to obtain – is it more difficult than needed? Textual entailment refers to texts: texts are theory-neutral; amenable to unsupervised learning; a "proof is in the pudding" test.

159 Page 159 Entailment as an Applied Semantics Framework The new view: formulate (all?) semantic problems as entailment tasks Some semantic problems are traditionally investigated as entailment tasks But also… Revised definitions of old problems Exposing many new ones

160 Page 160 Some Classical Entailment Problems Monotonicity – traditionally approached via entailment. Given that: dog ⇒ animal. Upward monotone: Some dogs are nice ⇒ Some animals are nice. Downward monotone: No animals are nice ⇒ No dogs are nice. Some formal approaches – via interpretation to logical form. Natural logic – avoids interpretation to FOL (cf. Stanford @ RTE-3). Noun compound relation identification: a novel by Tolstoy ⇒ Tolstoy wrote a novel – practically an entailment task, when relations are represented lexically (rather than as interpreted semantic notions).
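The monotonicity inferences above can be illustrated with a tiny sketch, assuming a hand-coded hyponym pair and treating determiners as markers of upward or downward monotone positions; natural-logic systems derive monotonicity from the syntax and the lexical semantics of the quantifier rather than from such lists.

```python
# Minimal monotonicity illustration over "determiner noun rest..." sentences.
HYPONYM_OF = {"dogs": "animals"}          # encodes dog => animal

UPWARD = {"some", "several"}              # position licenses hyponym -> hypernym substitution
DOWNWARD = {"no", "every"}                # restrictor licenses hypernym -> hyponym substitution

def monotone_entailments(sentence):
    det, noun, *rest = sentence.lower().split()
    out = []
    if det in UPWARD and noun in HYPONYM_OF:                 # upward: generalize to the hypernym
        out.append(" ".join([det, HYPONYM_OF[noun]] + rest))
    if det in DOWNWARD and noun in HYPONYM_OF.values():      # downward: specialize to the hyponym
        for hypo, hyper in HYPONYM_OF.items():
            if hyper == noun:
                out.append(" ".join([det, hypo] + rest))
    return out

print(monotone_entailments("Some dogs are nice"))   # ['some animals are nice']
print(monotone_entailments("No animals are nice"))  # ['no dogs are nice']
```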

161 Page 161 Revised definition of an Old Problem: Sense Ambiguity Classical task definition - interpretation: Word Sense Disambiguation What is the RIGHT set of senses? Any concrete set is problematic/subjective … but WSD forces you to choose one A lexical entailment perspective: Instead of identifying an explicitly stipulated sense of a word occurrence... identify whether a word occurrence (i.e. its implicit sense) entails another word occurrence, in context Dagan et al. (ACL-2006)

162 Page 162 Synonym Substitution Source = record, Target = disc. Can record substitute for disc in context?
"This is anyway a stunning disc, thanks to the playing of the Moscow Virtuosi with Spivakov." – positive
"He said computer networks would not be affected and copies of information should be made on floppy discs." – negative
"Before the dead soldier was placed in the ditch his personal possessions were removed, leaving one disc on the body for identification purposes." – negative

163 Page 163 Unsupervised Direct: kNN-ranking Test example score: average cosine similarity of the target example with the k most similar (unlabeled) instances of the source word. Rationale: positive examples of the target will be similar to some source occurrence (of the corresponding sense); negative target examples won't be similar to source examples. Rank test examples by score. A classification slant on language modeling.
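A bare-bones sketch of this kNN-ranking scheme, with bag-of-words context vectors and k=2 as illustrative choices (not those of the original experiments):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Unlabeled occurrences of the source word ("record") and test occurrences of the target ("disc").
source_contexts = ["stunning record of the Moscow Virtuosi",
                   "her new record topped the charts"]
target_contexts = ["a stunning disc by the orchestra",       # "disc" in the record sense
                   "copies saved on floppy discs"]            # "disc" in the storage sense

vec = CountVectorizer().fit(source_contexts + target_contexts)
S = vec.transform(source_contexts)
T = vec.transform(target_contexts)

k = 2
sims = cosine_similarity(T, S)                          # target x source similarity matrix
scores = np.sort(sims, axis=1)[:, -k:].mean(axis=1)     # average over the k most similar source instances
ranking = np.argsort(-scores)                           # rank target examples by score
print(list(zip(np.array(target_contexts)[ranking], scores[ranking])))
```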

164 Page 164 Results (for synonyms), ranking: kNN improves precision by 8-18% at up to 25% recall.

165 Page 165 Other Modified and New Problems Lexical entailment vs. classical lexical semantic relationships: synonym ⇔ synonym; hyponym ⇒ hypernym (but much beyond WN – e.g. "medical technology"); meronym ⇐?⇒ holonym – depending on meronym type and context: boil on elbow ⇒ boil on arm vs. government voted ⇒ minister voted. Named Entity Classification – by any textual type: Which pickup trucks are produced by Mitsubishi? Magnum ⇒ pickup truck. Argument mapping for nominalizations (derivations): X's acquisition of Y ⇒ X acquired Y; X's acquisition by Y ⇒ Y acquired X. Transparent head: sell to an IBM division ⇒ sell to IBM; sell to an IBM competitor ⇏ sell to IBM. …
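One row of this mapping, hyponym ⇒ hypernym as lexical entailment, can be approximated with WordNet via NLTK, as in the sketch below (it assumes the WordNet data is installed). This only covers what WordNet encodes, whereas the slide stresses that lexical entailment goes well beyond it; the meronym/holonym and context-dependent cases are not attempted here.

```python
from nltk.corpus import wordnet as wn   # assumes nltk and the WordNet corpus are available

def lexically_entails(word1, word2):
    """True if some noun sense of word1 has some noun sense of word2 as a hypernym
    (or they share a synset), i.e. word1 => word2 plausibly holds lexically."""
    for s1 in wn.synsets(word1, pos=wn.NOUN):
        hypernyms = set(s1.closure(lambda s: s.hypernyms())) | {s1}
        if any(s2 in hypernyms for s2 in wn.synsets(word2, pos=wn.NOUN)):
            return True
    return False

print(lexically_entails("dog", "animal"))   # True: hyponym => hypernym direction
print(lexically_entails("animal", "dog"))   # False: entailment does not hold in reverse
```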

166 Page 166 The importance of analyzing entailment examples Few systematic manual data analysis works have been reported: Vanderwende et al. at the RTE-1 workshop; Bar-Haim et al. at the ACL-05 EMSEE Workshop; within Romano et al. at EACL-06; the Xerox PARC dataset; Braz et al. at the IJCAI workshop '05. These contribute a lot to understanding and defining entailment phenomena and sub-problems. Should be done (and reported) much more…

167 Page 167 Unified Evaluation Framework Defining semantic problems as entailment problems facilitates unified evaluation schemes (vs. the current state). Possible evaluation schemes: 1) Evaluate on the general TE task, while creating corpora which focus on target sub-tasks – e.g. a TE dataset with many sense-matching instances; measure the impact of sense-matching algorithms on TE performance. 2) Define TE-oriented subtasks, and evaluate directly on the sub-task – e.g. a test collection manually annotated for sense-matching. Advantages: isolates the sub-problem; researchers can investigate individual problems without needing a full-blown TE system (cf. QA research); such datasets may be derived from datasets of type (1). Facilitates a common inference goal across semantic problems.

168 Page 168 Summary: Textual Entailment as Goal The essence of the textual entailment paradigm: base applied semantic inference on entailment "engines" and knowledge bases; formulate various semantic problems as entailment sub-tasks. Interpretation and "mapping" methods may compete/complement at various levels of representation. Open question: which inferences can be represented at the "language" level, and which require logical or specialized representation and inference (temporal, spatial, mathematical, …)?

169 Page 169 Textual Entailment ≈ Human Reading Comprehension From a children’s English learning book (Sela and Greenberg): Reference Text: “…The Bermuda Triangle lies in the Atlantic Ocean, off the coast of Florida. …” Hypothesis (True/False?): The Bermuda Triangle is near the United States ???

170 Page 170 Cautious Optimism: Approaching the Desiderata? 1)Generic (feasible) module for applications 2)Unified (agreeable) paradigm for investigating language phenomena Thank you!

