Automatic Evaluation of Linguistic Quality in Multi-Document Summarization
Pitler, Louis, and Nenkova (2010)
Presented by Dan Feblowitz and Jeremy B. Merrill


Motivation
Automatic evaluation of content selection is already done: ROUGE is an automated metric for information content (Lin and Hovy, 2003; Lin, 2004).
No automatic evaluation of linguistic quality is available.
We want fluent, easy-to-read summaries. How do we test for that?
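For context, a minimal sketch of ROUGE-N recall as unigram/bigram overlap against a single reference summary (my own simplification: no stemming, stopword removal, or multiple references; the function name is mine):

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """ROUGE-N recall: fraction of reference n-grams also found in the candidate."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not ref:
        return 0.0
    # Clipped overlap count, as in Lin (2004).
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / sum(ref.values())

print(rouge_n_recall("the cat sat on the mat", "the cat lay on the mat"))  # 5/6
```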

Intuitions: Aspects of Linguistic Quality
Each example below violates the named aspect.
Grammaticality: The Police found no second armed man. LOS ANGELES -- A sniping incident Sunday damaged helicopter.
Non-redundancy: Bill Clinton ate a banana yesterday. Bill Clinton liked it. Bill Clinton was in Los Angeles.
Referential clarity: The beer scavvy participant, a 20-year-old male, was arrested Saturday. “This was really irresponsible,” she said.
Focus: To show solidarity with dining hall workers, Bill Clinton ate a banana. He was at Frary. Frary contains a mural by some Mexican muralist.
Structure and coherence: Harvey Mudd was founded in 1955. It is an engineering college. It has eight dorms. Its founder was named Harvey.

Correlation Among Aspects
Referential Clarity, Focus, and Structure are significantly correlated with each other (along with a few more significant correlations).
Linguistic quality rankings correlate positively with content quality rankings.
Rankings come from human rankers.
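The slides don't show how these correlations were computed; a minimal sketch using Spearman rank correlation over per-system scores for two aspects (the numbers are made-up illustrations, not the paper's data):

```python
from scipy.stats import spearmanr

# Hypothetical per-system human scores for two aspects.
referential_clarity = [4.2, 3.1, 3.8, 2.5, 4.6]
focus = [4.0, 3.3, 3.5, 2.8, 4.4]

rho, p_value = spearmanr(referential_clarity, focus)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```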

Goal
Find automated measures that correlate with the intuition-based aspects, at two granularities:
System-level evaluation
Input-level evaluation

Automated Measures
Language modeling: 1-, 2-, and 3-gram models trained on the Gigaword corpus (a bigram sketch follows below).
Entity explanation: named entities, NP syntax.
Cohesive devices: demonstratives, pronouns, definite descriptions, sentence-initial discourse connectives.
Sentence fluency: length, fragments, etc.
Coh-Metrix: psycholinguistic readability measures.
Word coherence: treat adjacent sentences as parallel texts and calculate a “translation model” in each direction.
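A minimal sketch of the language-model features, assuming a bigram model with add-one smoothing trained on a stand-in corpus rather than Gigaword (all names and the smoothing choice are illustrative, not the paper's exact setup):

```python
from collections import Counter
import math

def train_bigram_lm(sentences):
    """Count unigrams and bigrams over tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent.lower().split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def avg_bigram_log_prob(sentence, unigrams, bigrams):
    """Average add-one-smoothed bigram log-probability of a sentence."""
    tokens = ["<s>"] + sentence.lower().split() + ["</s>"]
    vocab = len(unigrams)
    logp = sum(
        math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
        for a, b in zip(tokens, tokens[1:])
    )
    return logp / (len(tokens) - 1)

corpus = ["the police found no second armed man", "the man was arrested"]
uni, bi = train_bigram_lm(corpus)
print(avg_bigram_log_prob("the police found the man", uni, bi))
```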

Automated Measures (cont.)
Continuity:
Summarization-specific: measures the likelihood that discourse connectives retain their context. For each sentence in the summary, does the preceding summary sentence match the sentence that preceded it in the input?
Cosine similarity of words across adjacent sentences (see the sketch below).
Coreference: pronoun resolution system; probability that the antecedent appears in the same or the previous sentence.
Entity coherence: build a matrix of entities' grammatical roles and measure transition probabilities among an entity's roles in adjacent sentences.
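A minimal sketch of the cosine-similarity component of continuity: average word-count cosine between adjacent sentences in a summary (my own simplification; the paper's exact tokenization and weighting may differ):

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def adjacent_sentence_similarity(sentences):
    """Average cosine similarity of word counts across adjacent sentence pairs."""
    bags = [Counter(s.lower().split()) for s in sentences]
    pairs = list(zip(bags, bags[1:]))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

summary = ["Bill Clinton ate a banana", "He was at Frary", "Frary contains a mural"]
print(adjacent_sentence_similarity(summary))
```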

Experiment Setup
Data from the summarization tasks of the 2006 and 2007 Document Understanding Conferences.
2006 (training/dev sets): 50 inputs, 35 systems tested; jackknifing.
2007 (test set): 45 inputs, 32 systems.
One ranker for each feature group, plus a meta-ranker.
Rank systems/summaries by each automated measure, relative to a gold-standard human ranking.
Find correlations with the human ranking on each aspect.

Results (System-Level)
Prediction accuracy: percentage of pairwise comparisons matching the gold standard (see the sketch below). Baseline: 50% (random).
System-level (per summarization system): prediction accuracies around 90% for all aspects.
The sentence fluency method has the single best correlation with Grammaticality; the meta-ranker has the best overall correlation.
The continuity method correlates best with Non-redundancy, Referential Clarity, Focus, and Structure.
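A minimal sketch of the pairwise prediction accuracy metric described above: for every pair of systems, check whether the predicted scores order them the same way as the gold human scores (the dict-of-scores convention is my assumption; ties count as disagreements here):

```python
from itertools import combinations

def pairwise_accuracy(predicted, gold):
    """Fraction of system pairs ordered the same way by both score dicts.

    predicted, gold: dicts mapping system id -> score (higher is better).
    """
    pairs = list(combinations(gold, 2))
    agree = sum(
        (predicted[a] - predicted[b]) * (gold[a] - gold[b]) > 0
        for a, b in pairs
    )
    return agree / len(pairs)

gold = {"sys1": 3.0, "sys2": 2.0, "sys3": 1.0}
pred = {"sys1": 0.9, "sys2": 0.4, "sys3": 0.6}
print(pairwise_accuracy(pred, gold))  # 2 of 3 pairs agree -> 0.666...
```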

Results (Input-Level)
Input-level (per summary): prediction accuracies around 70% -- a harder task.
The sentence fluency method has the single best correlation with Grammaticality.
Coh-Metrix has the single best correlation with Non-redundancy.
Continuity correlates best with Referential Clarity, Focus, and Structure.
The meta-ranker has the best correlation for all aspects.

Results (Human-Written)
Input-level analysis on human-written, abstractive summaries.
Abstractive: rewritten content. Extractive: extracts a subset of content, i.e. picks sentences.
Best measure per aspect (prediction accuracy):
Grammaticality: NP syntax (64.6%)
Non-redundancy: cohesive devices (68.6%)
Referential Clarity: sentence fluency, meta-ranker (80.4%)
Focus: sentence fluency, LMs (71.9%)
Structure: LMs (78.4%)

Components of Continuity
Feature subsets in the continuity block were removed one at a time to measure the effect of each (see the sketch below):
Cosine similarity had the greatest effect (-10%).
Summary-specific features were second (-7%).
Removing coreference features had no effect.
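A minimal sketch of that one-at-a-time ablation: drop each feature group, re-evaluate, and report the accuracy change. The `evaluate` callable is a hypothetical stand-in for the full ranker pipeline, and the toy scores below are chosen only to mirror the slide's reported effects:

```python
def ablation_report(feature_groups, evaluate):
    """Measure the accuracy drop from removing each feature group in turn.

    feature_groups: dict of group name -> list of feature names.
    evaluate: hypothetical callable taking a feature list and returning
        prediction accuracy (stands in for training and scoring a ranker).
    """
    all_features = [f for feats in feature_groups.values() for f in feats]
    baseline = evaluate(all_features)
    for name, feats in feature_groups.items():
        kept = [f for f in all_features if f not in feats]
        print(f"without {name}: {evaluate(kept) - baseline:+.1%} vs. baseline")

# Toy usage with a stand-in evaluator keyed on the surviving feature set.
groups = {"cosine": ["cos_sim"], "summary_specific": ["conn_ctx"], "coref": ["pron_res"]}
scores = {frozenset(["cos_sim", "conn_ctx", "pron_res"]): 0.90,
          frozenset(["conn_ctx", "pron_res"]): 0.80,   # no cosine -> -10%
          frozenset(["cos_sim", "pron_res"]): 0.83,    # no summary-specific -> -7%
          frozenset(["cos_sim", "conn_ctx"]): 0.90}    # no coref -> no change
ablation_report(groups, lambda feats: scores[frozenset(feats)])
```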

Conclusions
Continuity features correlate with the linguistic quality of machine-written summaries.
Sentence fluency features correlate with grammaticality.
LM and entity coherence features also correlate relatively strongly.
This will make testing systems easier. Hooray!

Questions?