Textual Entailment as a Framework for Applied Semantics. Ido Dagan, Bar-Ilan University, Israel. Joint works with: Oren Glickman, Idan Szpektor, Roy Bar-Haim, and others.

Presentation transcript:

1 Textual Entailment as a Framework for Applied Semantics
Ido Dagan, Bar-Ilan University, Israel
Joint works with:
Oren Glickman, Idan Szpektor, Roy Bar-Haim, Maayan Geffet, Moshe Koppel (Bar-Ilan University)
Shachar Mirkin (Hebrew University, Israel)
Hristo Tanev, Bernardo Magnini, Alberto Lavelli, Lorenza Romano (ITC-irst, Italy)
Bonaventura Coppola, Milen Kouylekov (University of Trento and ITC-irst, Italy)
Danilo Giampiccolo (CELCT, Italy)

2 Talk Perspective: A Framework for "Applied Semantics"
– The textual entailment task: what and why?
– Evaluation: the PASCAL RTE Challenges
– Modeling approach: knowledge acquisition, inference, applications
– An appealing framework for semantic inference (cf. syntax, MT: a clear task, methodology and community)

3 Natural Language and Meaning
[Diagram: the mapping between Language and Meaning. Many expressions map to one meaning (Variability); one expression maps to many meanings (Ambiguity).]

4 Variability of Semantic Expression
Model variability as relations between text expressions:
– Equivalence: expr1 ↔ expr2 (paraphrasing)
– Entailment: expr1 → expr2, the general case; incorporates inference as well
Example expressions from the slide's diagram: "Dow ends up", "Dow climbs 255", "The Dow Jones Industrial Average closed up 255", "Stock market hits a record high", "Dow gains 255 points"

5 Typical Application Inference
Question: Who bought Overture? Expected answer form: X bought Overture
Text: "Overture's acquisition by Yahoo" → hypothesized answer: "Yahoo bought Overture" (the text entails the hypothesized answer)
– Similar for IE: X buy Y
– Similar for "semantic" IR: t: Overture was bought …
– Summarization (multi-document): identify redundant info
– MT evaluation (and recent ideas for MT)
– Educational applications

6 KRAQ'05 Workshop: Knowledge and Reasoning for Answering Questions (IJCAI-05)
From the CFP:
– Reasoning aspects: information fusion, search criteria expansion models, summarization and intensional answers, reasoning under uncertainty or with incomplete knowledge
– Knowledge representation and integration: levels of knowledge involved (e.g. ontologies, domain knowledge), knowledge extraction models and techniques to optimize response accuracy
… but similar needs arise for other applications. Can entailment provide a common empirical task?

7 Classical Entailment Definition
Chierchia & McConnell-Ginet (2001): a text t entails a hypothesis h if h is true in every circumstance (possible world) in which t is true.
Strict entailment doesn't account for the degree of uncertainty allowed in applications.

8 "Almost Certain" Entailments
t: The technological triumph known as GPS … was incubated in the mind of Ivan Getting.
h: Ivan Getting invented the GPS.
t: According to the Encyclopedia Britannica, Indonesia is the largest archipelagic nation in the world, consisting of 13,670 islands.
h: 13,670 islands make up Indonesia.

9 Applied Textual Entailment
A directional relation between two text fragments, Text (t) and Hypothesis (h):
t entails h (t → h) if, typically, a human reading t would infer that h is most likely true.
Operational (applied) definition:
– Human gold standard, as in NLP applications
– Assumes common background knowledge, which is indeed expected from applications!

10 Probabilistic Interpretation
Definition: t probabilistically entails h if P(h is true | t) > P(h is true), i.e., t increases the likelihood of h being true.
Equivalent to positive PMI: t provides information on h's truth.
P(h is true | t) is the entailment confidence:
– The relevant entailment score for applications
– In practice, "most likely" entailment is expected
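The equivalence to positive pointwise mutual information follows directly from the definition. Spelled out (notation ours, writing h for the event "h is true" and assuming P(t) > 0):

```latex
P(h \mid t) > P(h)
\;\Longleftrightarrow\;
\frac{P(h, t)}{P(h)\,P(t)} > 1
\;\Longleftrightarrow\;
\mathrm{PMI}(t, h) = \log \frac{P(h, t)}{P(h)\,P(t)} > 0
```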

11 PASCAL Recognizing Textual Entailment (RTE) Challenges
Funded by the EU FP-6 PASCAL Network of Excellence.
Organizers: Bar-Ilan University; ITC-irst and CELCT, Trento; MITRE; Microsoft Research.

12 Generic Dataset by Application Use
Seven application settings in RTE-1, four in RTE-2/3:
– QA
– IE
– "Semantic" IR
– Comparable documents / multi-document summarization
– MT evaluation
– Reading comprehension
– Paraphrase acquisition
Most data created from actual application output.
RTE-2: 800 examples in the development and test sets, with a 50–50% YES/NO split.

13 Some Examples
1. TEXT: Reagan attended a ceremony in Washington to commemorate the landings in Normandy. HYPOTHESIS: Washington is located in Normandy. TASK: IE. ENTAILMENT: False.
2. TEXT: Google files for its long awaited IPO. HYPOTHESIS: Google goes public. TASK: IR. ENTAILMENT: True.
3. TEXT: …: a shootout at the Guadalajara airport in May, 1993, that killed Cardinal Juan Jesus Posadas Ocampo and six others. HYPOTHESIS: Cardinal Juan Jesus Posadas Ocampo died in 1993. TASK: QA. ENTAILMENT: True.
4. TEXT: The SPD got just 21.5% of the vote in the European Parliament elections, while the conservative opposition parties polled 44.5%. HYPOTHESIS: The SPD is defeated by the opposition parties. TASK: IE. ENTAILMENT: True.

14 Participation and Impact
Very successful challenges, worldwide:
– RTE-1: 17 groups
– RTE-2: 23 groups (30 groups in total; ~150 downloads!)
– RTE-3 underway; workshop planned at ACL-07
High interest in the research community:
– Papers, conference keywords, sessions and areas, PhDs, influence on funded projects
– Textual Entailment special issue at JNLE

15 Methods and Approaches
Measure similarity match between t and h (coverage of h by t):
– Lexical overlap (unigram, n-gram, subsequence)
– Lexical substitution (WordNet, statistical)
– Syntactic matching/transformations
– Lexical-syntactic variations ("paraphrases")
– Semantic role labeling and matching
– Global similarity parameters (e.g. negation, modality)
Cross-pair similarity.
Detect mismatch (for non-entailment).
Logical interpretation and inference (vs. matching).

16 Dominant Approach: Supervised Learning
t, h → similarity features (lexical, n-gram, syntactic, semantic, global) → feature vector → classifier → YES/NO
– Features model similarity and mismatch
– Classifier determines relative weights of the information sources
– Train on the development set and auxiliary t-h corpora
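As a concrete illustration, here is a minimal sketch of this architecture in Python, with a few toy overlap features feeding scikit-learn's logistic regression; the feature set is illustrative and not that of any particular RTE system:

```python
# Minimal sketch of the dominant RTE architecture: hand-built similarity
# features feeding a trained classifier. Features here are toy illustrations.
from sklearn.linear_model import LogisticRegression

def bigrams(tokens):
    return set(zip(tokens, tokens[1:]))

def features(t, h):
    t_toks, h_toks = t.lower().split(), h.lower().split()
    # lexical coverage of h by t (unigram and bigram overlap)
    overlap = len(set(t_toks) & set(h_toks)) / max(len(set(h_toks)), 1)
    bigram = len(bigrams(t_toks) & bigrams(h_toks)) / max(len(bigrams(h_toks)), 1)
    # crude global mismatch feature: negation present on one side only
    neg_mismatch = ("not" in t_toks) != ("not" in h_toks)
    return [overlap, bigram, float(neg_mismatch)]

def train(pairs, labels):  # pairs: [(t, h), ...]; labels: 1 = entails, 0 = not
    X = [features(t, h) for t, h in pairs]
    return LogisticRegression().fit(X, labels)
```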

17 Results
First Author (Group): Accuracy / Average Precision
– Hickl (LCC): 75.4% / 80.8%
– Tatu (LCC): 73.8% / 71.3%
– Zanzotto (Milan & Rome): 63.9% / 64.4%
– Adams (Dallas): 62.6% / 62.8%
– Bos (Rome & Leeds): 61.6% / 66.9%
– 11 groups: 58.1%–60.5% accuracy
– 7 groups: 52.9%–55.6% accuracy
Average accuracy: 60%; median: 59%.

18 Analysis
For the first time, deeper methods (semantic/syntactic/logical) clearly outperform shallow methods (lexical/n-gram).
Cf. Kevin Knight's invited talk at EACL, titled "Isn't Linguistic Structure Important, Asked the Engineer".
Still, most systems based on deep analysis did not score significantly better than the lexical baseline.

19 Why?
System reports point at:
– Lack of knowledge (syntactic transformation rules, paraphrases, lexical relations, etc.)
– Lack of training data
It seems that the systems that coped better with these issues performed best:
– Hickl et al.: acquisition of large entailment corpora for training
– Tatu et al.: large knowledge bases (linguistic and world knowledge)

20 Some Suggested Research Directions
Knowledge acquisition:
– Unsupervised acquisition of linguistic and world knowledge from general corpora and the web
– Acquiring larger entailment corpora
– Manual knowledge engineering for concise knowledge, e.g. syntactic transformations, logical axioms
Inference:
– A principled framework for inference and for fusing information levels
– Are we happy with bags of features?

21 Complementary Evaluation Modes
– Entailment subtask evaluations: lexical, lexical-syntactic, logical, alignment…
– "Seek" mode: input is h and a corpus; output is all entailing t's in the corpus. Captures information-seeking needs, but requires post-run annotation (TREC style).
– Contribution to specific applications! Cf. QA: Harabagiu & Hickl, ACL-06; RE: EACL-06.

22 Where are we?

23 Our Own Research Directions
– Acquisition
– Inference
– Applications

24 Learning Entailment Rules
Text: Aspirin prevents heart attacks.
Q: What reduces the risk of heart attacks?
Entailment rule: X prevent Y ⇨ X reduce risk of Y (each side is a template)
Hypothesis: Aspirin reduces the risk of heart attacks.
⇒ Need a large knowledge base of entailment rules
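To make the rule mechanics concrete, here is a toy, surface-string version of applying such a rule in Python. Real systems match templates over dependency-parse paths rather than raw text, so this is only a sketch:

```python
import re

# Toy application of the entailment rule "X prevent Y => X reduce risk of Y".
# A surface regex stands in for dependency-path template matching.
PATTERN = re.compile(r"(?P<X>[\w ]+?) prevents? (?P<Y>[\w ]+)")

def apply_rule(text):
    m = PATTERN.search(text)
    if m is None:
        return None
    return f"{m.group('X')} reduces the risk of {m.group('Y')}"

print(apply_rule("Aspirin prevents heart attacks"))
# -> "Aspirin reduces the risk of heart attacks"
```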

25 TEASE: Algorithm Flow
Input template: X subj-accuse-obj Y
1. Sample the web corpus for the input template: "Paula Jones accused Clinton…", "Sanhedrin accused St. Paul…", …
2. Anchor Set Extraction (ASE): {Paula Jones → subj; Clinton → obj}, {Sanhedrin → subj; St. Paul → obj}, …
3. Sample the web corpus for the anchor sets: "Paula Jones called Clinton indictable…", "St. Paul defended before the Sanhedrin…", …
4. Template Extraction (TE): X call Y indictable, Y defend before X, …
5. Iterate.
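A high-level sketch of this loop in Python; the helpers search_corpus, extract_anchor_sets (ASE), and extract_templates (TE) are hypothetical stand-ins for the real web-sampling and extraction modules:

```python
# High-level sketch of the TEASE iteration, details of ASE/TE abstracted away.
def tease(input_template, iterations=2):
    templates = {input_template}
    for _ in range(iterations):
        new_templates = set()
        for template in templates:
            sentences = search_corpus(template)            # sample corpus for template
            for anchors in extract_anchor_sets(sentences): # ASE: characteristic X/Y fillers
                anchor_sentences = search_corpus(anchors)  # sample corpus for anchor set
                new_templates |= extract_templates(anchor_sentences, anchors)  # TE
        templates |= new_templates                         # iterate on new templates
    return templates - {input_template}
```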

26 Sample of Extracted Anchor Sets for "X prevent Y"
– X='sunscreens', Y='sunburn'
– X='sunscreens', Y='skin cancer'
– X='vitamin e', Y='heart disease'
– X='aspirin', Y='heart attack'
– X='vaccine candidate', Y='infection'
– X='universal precautions', Y='HIV'
– X='safety device', Y='fatal injuries'
– X='hepa filtration', Y='contaminants'
– X='low cloud cover', Y='measurements'
– X='gene therapy', Y='blindness'
– X='cooperation', Y='terrorism'
– X='safety valve', Y='leakage'
– X='safe sex', Y='cervical cancer'
– X='safety belts', Y='fatalities'
– X='security fencing', Y='intruders'
– X='soy protein', Y='bone loss'
– X='MWI', Y='pollution'
– X='vitamin C', Y='colds'

27 Sample of Extracted Templates for "X prevent Y"
X reduce Y; X protect against Y; X eliminate Y; X stop Y; X avoid Y; X for prevention of Y; X provide protection against Y; X combat Y; X ward Y; X lower risk of Y; X be barrier against Y; X fight Y; X reduce Y risk; X decrease the risk of Y; relationship between X and Y; X guard against Y; X be cure for Y; X treat Y; X in war on Y; X in the struggle against Y; X a day keeps Y away; X eliminate the possibility of Y; X cut risk Y; X inhibit Y

28 Experiment and Evaluation
48 randomly chosen input verbs; 1392 templates extracted; human judgments.
Encouraging results:
– Average yield per verb: 29 correct templates
– Average precision per verb: 45.3%
Future work: improve precision, estimate probabilities.

29 Acquiring Lexical Entailment Relations
COLING-04, ACL-05: lexical entailment via distributional similarity
– Individual features characterize semantic properties
– Obtain characteristic features via bootstrapping
– Test characteristic feature inclusion (vs. overlap)
COLING-ACL-06: integrate pattern-based extraction
– "NP such as NP1, NP2, …"
– Complementary information to distributional evidence
– Integration using ML with minimal supervision (10 words)
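A rough Python sketch of the inclusion test (as opposed to symmetric overlap); characteristic_features is a hypothetical helper returning a word's top-weighted distributional context features, and the direction and threshold are illustrative:

```python
# Directional feature-inclusion test for lexical entailment.
# Plain overlap would be symmetric; inclusion gives a direction.
def lexically_entails(u, v, threshold=0.9):
    fu = characteristic_features(u)  # e.g., high-weight syntactic contexts of u
    fv = characteristic_features(v)
    if not fu:
        return False
    # u -> v is supported when (nearly) all of u's characteristic
    # features also occur with v
    return len(fu & fv) / len(fu) >= threshold
```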

30 Acquisition Example
Does not overlap traditional ontological relations.
Top-ranked entailments for "company": firm, bank, group, subsidiary, unit, business, supplier, carrier, agency, airline, division, giant, entity, financial institution, manufacturer, corporation, commercial bank, joint venture, maker, producer, factory …

31 Analysis
Discovering different relation types:
– 80% of hyponymy relations came from the pattern-based method
– 75% of the synonyms came from distributional similarity
Determining entailment direction:
– Typically, distributional similarity is non-directional
– The integrated method correctly identified the single correct entailment direction for 73% of the distributional pairs
Filtering co-hyponyms:
– Most co-hyponyms originate in the distributional candidates
– 65% of co-hyponyms were successfully filtered
– Precision of distributional pairs nearly doubled

32 Initial Probabilistic Lexical Co-occurrence Models
Alignment-based (RTE-1 & ACL-05 Workshop):
– The probability that a term in h is entailed by a particular term in t
Bayesian classification (AAAI-05):
– The probability that a term in h is entailed by (fits in) the entire text of t
– An unsupervised text categorization setting: each term is a category
Both demonstrate directions for probabilistic modeling and unsupervised estimation.
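Schematically, the alignment-based model scores the hypothesis by aligning each of its terms to its most likely entailing term in the text; this is a simplified rendering of the idea, not the exact published formulation:

```latex
P(h \text{ is true} \mid t) \;\approx\; \prod_{u \in h} \; \max_{v \in t} \; P(u \mid v)
```

where P(u | v) is estimated in an unsupervised fashion from term co-occurrence statistics.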

33 Manual Syntactic Transformations
Example: "X prevent Y"
[Diagram: the dependency tree of "Sunscreen, which prevents moles and sunburns, …" (subj/obj edges, relative clause, conjunction) is mapped onto the canonical template tree X ←subj– prevent –obj→ Y.]

34 Syntactic Variability Phenomena
Template: X activate Y
– Passive form: Y is activated by X
– Apposition: X activates its companion, Y
– Conjunction: X activates Z and Y
– Set: X activates two proteins: Y and Z
– Relative clause: X, which activates Y
– Coordination: X binds and activates Y
– Transparent head: X activates a fragment of Y
– Co-reference: X is a kinase, though it activates Y

35 Takeaway
Promising potential for creating huge entailment knowledge bases, mostly by unsupervised approaches.

36 Application: Unsupervised Relation Extraction (EACL 2006)

37 Relation Extraction
A subfield of Information Extraction: identify different ways of expressing a target relation.
– Examples: management succession, birth-death, mergers and acquisitions, protein interaction
Traditionally performed in a supervised manner:
– Requires dozens to hundreds of examples per relation
– Examples should cover broad semantic variability: costly. Feasible???
Little work on unsupervised approaches.

38 Our Goals
– An entailment approach to relation extraction
– An unsupervised relation extraction system
– An evaluation framework for entailment rule acquisition and matching

39 Proposed Approach
Input template (X prevent Y) → TEASE entailment-rule acquisition → templates (X prevention for Y, X treat Y, X reduce Y) → syntactic matcher (using transformation rules) → relation instances
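Chaining the pieces together, a minimal end-to-end sketch, reusing the hypothetical tease() from the earlier sketch, with match_template standing in for the dependency-based syntactic matcher:

```python
# Sketch of the unsupervised RE pipeline: acquire templates for the input
# relation, then match all of them in the corpus to collect (X, Y) instances.
def extract_relations(input_template, corpus_sentences):
    templates = tease(input_template) | {input_template}
    instances = set()
    for sentence in corpus_sentences:
        for template in templates:
            # match_template returns the set of (X, Y) argument fillers,
            # empty if the template does not match the sentence
            instances |= match_template(template, sentence)
    return instances
```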

40 Dataset
Bunescu 2005: recognizing interactions between annotated protein pairs
– 200 Medline abstracts
– Gold-standard dataset of protein pairs
Input template: X interact with Y

41 Manual Analysis: Results
93% of interacting protein pairs can be identified with lexical-syntactic templates.
Occurrence percentage of each syntactic phenomenon:
– Transparent head: 34%
– Apposition: 24%
– Conjunction: 24%
– Set: 13%
– Relative clause: 8%
– Co-reference: 7%
– Coordination: 7%
– Passive form: 2%
Number of templates vs. recall (within the 93%): [table data not preserved in transcript]

42 TEASE Output for "X interact with Y"
A sample of correct templates learned:
X binding to Y; X bind to Y; X Y interaction; X activate Y; X attach to Y; X stimulate Y; X interaction with Y; X couple to Y; X trap Y; interaction between X and Y; X recruit Y; X become trapped in Y; X associate with Y; X Y complex; X be linked to Y; X recognize Y; X target Y; X block Y

43 TEASE Algorithm: Potential Recall on the Training Set
– Iterative: taking the top 5 ranked templates as input
– Morph: recognizing morphological derivations (cf. semantic role labeling vs. matching)
Experiment / Recall:
– input: 39%
– input + iterative: 49%
– input + iterative + morph: 63%

44 Results for Full System
Recall: input 18%; input + iterative 29% (precision and F1 values not preserved in transcript).
Problems:
– Dependency parser and syntactic matching errors
– No morphological derivation recognition
– TEASE precision (incorrect templates)

45 Vs. Supervised Approaches
[Chart: comparison against supervised systems trained on 180 abstracts.]

46 Inference
Goal: infer the hypothesis from the text
– Match and apply entailment rules
– Heuristically bridge inference gaps
– Cost/probability based
Current approaches map language constructs (vs. semantic interpretation):
– Lexical-syntactic structures as the meaning representation, amenable to unsupervised learning
– Entailment rule transformations over syntactic trees

47 Textual Entailment as a Framework for Applied Semantic Inference

48 Classical Approach = Interpretation
Language (by nature) is mapped, across its variability, onto a stipulated meaning representation (by the scholar).
– Logical forms, word senses, semantic roles, named entity types, …: scattered works
– Is this a feasible/suitable framework for applied semantics?

49 Textual Entailment = Text Mapping
Language (by nature) is mapped, across its variability, onto meaning assumed by humans.
– Entailment mapping is the actual applied goal, but also a touchstone for understanding!
– Interpretation becomes a possible means

50 Adding Inference to the Picture
Mapping between different meanings that can be inferred from each other.

51 Textual Entailment ≈ Human Reading Comprehension
From a children's English learning book (Sela and Greenberg):
Reference text: "…The Bermuda Triangle lies in the Atlantic Ocean, off the coast of Florida. …"
Hypothesis (True/False?): The Bermuda Triangle is near the United States ???

52 Opens Up a Framework for Investigating Semantics
– Classical problems can be cast in it (linguistics): "All boys are nice" → "All tall boys are nice"
– But also: a new slant on old problems, revealing many new ones

53 Making Sense of (Implicit) Senses
What is the RIGHT set of senses?
– Any concrete set is problematic/subjective
– … but WSD forces you to choose one
A lexical entailment perspective (ACL-2006):
– Instead of identifying an explicitly stipulated sense of a word occurrence…
– … identify whether a word occurrence (i.e. its implicit sense) entails another word occurrence, in context

54 Lexical Matching for Applications
Sense equivalence:
– T1: IKEA announced a new comfort chair
– Q: announcement of new models of chairs
– T2: MIT announced a new CS chair position
Sense entailment in substitution:
– T1: IKEA announced a new comfort chair
– Q: announcement of new models of furniture
– T2: MIT announced a new CS chair position

55 Investigated Methods
– Matching: indirect vs. direct
– Learning: supervised vs. unsupervised
– Task: classification vs. ranking

56 Unsupervised Direct: kNN Ranking
Test example score: average cosine similarity with the k most similar training examples.
Rationale:
– Positive examples will be similar to some source occurrence (of the corresponding sense)
– Negative examples won't be similar to the source
Rank test examples by score: a classification slant on language modeling.
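A minimal sketch of the scorer in Python/NumPy; vectors are assumed to be context feature vectors, and k and the representation are illustrative choices:

```python
import numpy as np

# Score a test example by its average cosine similarity to the k most
# similar training examples (rows of train_vectors).
def knn_score(x, train_vectors, k=5):
    sims = train_vectors @ x / (
        np.linalg.norm(train_vectors, axis=1) * np.linalg.norm(x) + 1e-12)
    return float(np.sort(sims)[-k:].mean())

# Rank test examples by score; higher = more likely the entailing sense.
def rank(test_vectors, train_vectors, k=5):
    scores = [knn_score(x, train_vectors, k) for x in test_vectors]
    return np.argsort(scores)[::-1]
```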

57 Results (for synonyms, ranking)
kNN improves precision by 8–18% at recall levels up to 25%.

58 Other Problems
– Named entity classification: "Which pickup trucks are produced by Mitsubishi?" requires Magnum → pickup truck
– Lexical semantic relationships (e.g. WordNet): which relations contribute to entailment inference? How?
– Lexical reference: the lexical projection of entailment (EMNLP-06)
– …

59 Distinguish Goal from Means
The essence of the proposal: TE as the goal
– Adopt textual entailment problems as the test goal for semantic models
– Base applied semantic inference on entailment "engines"
– Interpretations and mapping methods may compete
Open question: which inference
– can be represented at the language level?
– requires logical or specialized representation and inference (temporal, mathematical, spatial, …)?

60 Meeting the Knowledge Challenge: by a Coordinated Effort?
A vast amount of "entailment rules" is needed.
Speculation: is it possible to have a public effort for knowledge acquisition?
– Simple, uniform representations
– Assuming mostly automatic acquisition (millions of rules?)
– Human Genome Project analogy
First step: the RTE-3 Resources Pool at ACLWiki

61 Optimistic Conclusions: Textual Entailment…
A promising framework for applied semantics:
– Application-independent abstraction
– Text mapping as the goal, rather than interpretation
– Amenable to empirical evaluation
– Defines new semantic problems to work on
– May be modeled probabilistically
– Appealing potential for knowledge acquisition
Thank you!