Recognising Textual Entailment
Johan Bos, School of Informatics, University of Edinburgh, Scotland, UK

Textual Entailment Example 1 (TRUE)
Text: His family has steadfastly denied the charges.
Hypothesis: The charges were denied by his family.

Textual Entailment Example 2 (TRUE)
Text: In 1998, the General Assembly of the Nippon Sei Ko Kai (Anglican Church in Japan) voted to accept female priests.
Hypothesis: The Anglican Church in Japan approved the ordination of women.

Textual Entailment Example 3 (FALSE)
Text: The city Tenochtitlan grew rapidly and was the center of the Aztec’s great empire.
Hypothesis: Tenochtitlan quickly spread over the island, marshes, and swamps.

Textual Entailment Example 4 (FALSE)
Text: Clinton’s new book is not a big seller here.
Hypothesis: Clinton’s book is a big seller.

Approach in a Nutshell
Compute semantic representations for Text and Hypothesis
Use logical inference (theorem proving) to determine whether T entails H
Compare this with a shallow approach (word overlap)
Use machine learning to combine logical inference with shallow word overlap
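
To make the shallow side of the comparison concrete, here is a minimal word-overlap sketch in Python. The tokenisation and the particular measure (proportion of hypothesis tokens that also occur in the text) are assumptions for illustration, not necessarily the exact definition used in the system described here.

```python
import re

def tokens(sentence):
    """Lower-cased word tokens; a crude stand-in for a real tokeniser."""
    return re.findall(r"[a-z0-9']+", sentence.lower())

def word_overlap(text, hypothesis):
    """Proportion of hypothesis tokens that also occur in the text."""
    t, h = set(tokens(text)), set(tokens(hypothesis))
    return len(t & h) / len(h) if h else 0.0

# Example 1 from the slides: an active/passive paraphrase labelled TRUE.
print(word_overlap("His family has steadfastly denied the charges.",
                   "The charges were denied by his family."))
```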

Talk Outline
Compositional Semantics
–Discourse Representation Theory
–Combinatory Categorial Grammar
–Lambda Calculus as “glue”
Inference
–Theorem Proving
–Model Building
–Approximating Entailment
Evaluation
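
To illustrate the inference step, here is a toy sketch of entailment as theorem proving. The approach described in the talk builds semantic representations (DRSs) from CCG derivations and calls external first-order provers and model builders; the sketch below instead uses hand-written first-order formulas for Example 1 and NLTK's built-in resolution prover as a stand-in, so the formulas and predicate names are illustrative assumptions only.

```python
from nltk.sem import Expression
from nltk.inference import ResolutionProver

read = Expression.fromstring

# Hand-written first-order approximations for Example 1.
# Text: "His family has steadfastly denied the charges."
text = read(r'exists f.(exists c.(family(f) & charges(c) & deny(f,c)))')
# Hypothesis: "The charges were denied by his family."
# The passive sentence yields the same core predicates, so the proof succeeds.
hypothesis = read(r'exists f.(exists c.(family(f) & charges(c) & deny(f,c)))')

# T entails H if H is provable from T (plus any background knowledge axioms).
prover = ResolutionProver()
print(prover.prove(hypothesis, [text]))  # True
```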

Use the dataset (training and test set) of the RTE challenge organised by the PASCAL network
Baseline is 50%
Dataset is based on different tasks:
–CD (Comparable documents)
–QA (Question answering)
–IE (Information extraction)
–MT (Machine translation)
–RC (Reading comprehension)
–PP (Paraphrase acquisition)
–IR (Information retrieval)
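
For readers who want to reproduce the setup, the sketch below reads RTE-style Text/Hypothesis pairs from XML. The element and attribute names (pair, t, h, value, task) follow the commonly distributed RTE-1 format, but treat them as assumptions and adapt them to the actual files; the file name in the usage comment is hypothetical.

```python
import xml.etree.ElementTree as ET

def load_rte_pairs(path):
    """Yield (id, task, gold_label, text, hypothesis) tuples from an RTE XML file.

    Assumes elements/attributes named pair, t, h, value, task (RTE-1 style);
    rename them if your copy of the data differs.
    """
    root = ET.parse(path).getroot()
    for pair in root.iter("pair"):
        yield (pair.get("id"),
               pair.get("task"),
               pair.get("value"),           # "TRUE" or "FALSE" gold judgement
               pair.findtext("t").strip(),
               pair.findtext("h").strip())

# Usage (hypothetical file name):
# for pid, task, gold, t, h in load_rte_pairs("rte1_dev.xml"):
#     print(pid, task, gold)
```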

Machine Learning
Each entailment example pair is expressed as a feature vector
Train a decision tree for classification into TRUE and FALSE
WEKA (Witten & Frank 2000)
Test on the test set
Evaluation measures:
–Accuracy (% correct judgements)
–CWS (confidence-weighted score)
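
A minimal sketch of this step, using scikit-learn's decision tree as a stand-in for WEKA and the RTE-1 definition of the confidence-weighted score (rank judgements by decreasing confidence, then average the running precision over the ranking). The toy feature vectors are made up purely for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

def cws(confidences, correct):
    """Confidence-weighted score: average of the running precision when
    judgements are sorted by decreasing confidence."""
    ranked = [ok for _, ok in sorted(zip(confidences, correct), reverse=True)]
    running, score = 0, 0.0
    for i, ok in enumerate(ranked, start=1):
        running += ok
        score += running / i
    return score / len(ranked)

# Toy feature vectors: [word_overlap, len_text, len_hyp, prover_says_entailed]
X_train = [[0.9, 8, 7, 1], [0.3, 20, 6, 0], [0.8, 15, 9, 0], [0.2, 12, 5, 0]]
y_train = ["TRUE", "FALSE", "FALSE", "FALSE"]
clf = DecisionTreeClassifier().fit(X_train, y_train)

X_test = [[0.85, 10, 8, 1], [0.4, 18, 7, 0]]
y_test = ["TRUE", "FALSE"]
pred = clf.predict(X_test)
conf = clf.predict_proba(X_test).max(axis=1)     # confidence of each judgement
hits = [p == g for p, g in zip(pred, y_test)]
accuracy = sum(hits) / len(hits)
print(accuracy, cws(conf, hits))
```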

Features
Shallow features:
–Word overlap between text and hypothesis
–Length of text and hypothesis
Deep semantic features:
–Uninformative/inconsistent (theorem prover)
–Domain size, model size, and absolute and relative differences between text and hypothesis (model builder)
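
Putting the two feature groups together might look like the sketch below. The shallow features are computed directly; the deep features are shown as placeholder arguments that would come from the theorem prover and model builder. The feature names are paraphrases of the slide, not the system's actual identifiers.

```python
def feature_vector(text, hypothesis, entailed, inconsistent,
                   domain_size_t, domain_size_h, model_size_t, model_size_h):
    """Combine shallow and deep features for one Text/Hypothesis pair.

    `entailed`/`inconsistent` would come from a theorem prover, the domain
    and model sizes from a model builder; here they are simply passed in.
    """
    t_words, h_words = text.lower().split(), hypothesis.lower().split()
    overlap = len(set(t_words) & set(h_words)) / max(len(set(h_words)), 1)
    return {
        # shallow features
        "word_overlap": overlap,
        "len_text": len(t_words),
        "len_hypothesis": len(h_words),
        # deep features (prover / model builder outputs)
        "entailed": int(entailed),
        "inconsistent": int(inconsistent),
        "domain_diff": domain_size_t - domain_size_h,
        "model_diff_rel": (model_size_t - model_size_h) / max(model_size_t, 1),
    }

print(feature_vector("His family has steadfastly denied the charges.",
                     "The charges were denied by his family.",
                     entailed=True, inconsistent=False,
                     domain_size_t=5, domain_size_h=4,
                     model_size_t=7, model_size_h=6))
```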

Results
Accuracy and CWS figures for the Shallow, Deep, Hybrid (S+D), and Hybrid+Task systems (table shown on the slide).

Conclusions
The hybrid approach combines shallow analysis with both theorem proving and model building, and achieves high accuracy scores compared to other systems
More work is needed on computing appropriate background knowledge
Future work also includes task-based evaluation, for instance in a QA system

Acknowledgements
Joint work with:
–James Curran (Sydney University)
–Steve Clark (University of Oxford)
–Katja Markert (University of Leeds)
–Patrick Blackburn (LORIA, Nancy)