1
AUTOMATIC TEXT SUMMARIZATION
By Chetana Gavankar, Subhabrata Mukherjee, Kedharnath Narahari, and Sarbartha Sengupta, under the guidance of Prof. Pushpak Bhattacharya
2
PRESENTATION CONTENT
- Motivation
- Types of summaries
- Challenges
- Single-Document Summarization: early work; machine learning methods (supervised and unsupervised); deep natural language analysis methods
- Multi-Document Summarization
- Evaluation
- Conclusion
3
MOTIVATION
- Download 1000+ papers and get their summary…
- You have a list of articles about a sports event; get a summary of them in one paragraph…
- You have to study loads of books for an exam, and the summarizer gives you the key concepts of the books as a few pages of notes…
- Value for researchers: "Get me everything papers say about Automatic Text Summarization."
4
MOTIVATION
5
DEFINITION Automatic Summaries
- Should be less than half of the original text
- Should convey its important information
- May be produced from a single document or from multiple documents (Radev et al.)
Dipanjan Das and Andre F. T. Martins (2007). A Survey on Automatic Text Summarization. Literature Survey for the Language and Statistics II course, CMU, Pittsburgh.
6
APPLICATIONS - NEWS AGGREGATOR
7
APPLICATIONS – MOVIE REVIEWS
8
TYPES OF SUMMARIES
With respect to content:
- Indicative: give an idea of what the text is about, but do not render its content
- Informative: shortened versions of the text
With respect to the way of creating:
- Extracts: identify important sections of the text
- Abstracts: produce the important material in a new way
Dipanjan Das and Andre F. T. Martins (2007). A Survey on Automatic Text Summarization. Literature Survey for the Language and Statistics II course, CMU, Pittsburgh.
9
TYPES OF SUMMARIES
With respect to input:
- Restricted vs. unrestricted domain
- Single-document vs. multiple-document
With respect to purpose:
- Generic vs. query-based
- Background vs. just-the-news
Domain-specific vs. general: when the input texts all pertain to a single domain, it may be appropriate to apply domain-specific summarization techniques, focus on specific content, and output specific formats, compared to the general case. A domain-specific summary derives from input text(s) whose theme(s) pertain to a single restricted domain.
A generic summary provides the author's point of view of the input text(s), giving equal import to all its major themes. A query-oriented (or user-oriented) summary favors specific themes or aspects of the text, in response to a user's desire to learn about just those themes in particular. It may do so explicitly, by highlighting pertinent themes, or implicitly, by omitting themes that do not match the user's interests.
Background vs. just-the-news: a background summary assumes the reader's prior knowledge of the general setting of the input text(s) is poor, and hence includes explanatory material such as circumstances of place, time, and actors. A just-the-news summary contains just the new or principal themes, assuming that the reader knows enough background to interpret them in context.
Hovy, E. and Lin, C.-Y. (1998). Automated Text Summarization and the SUMMARIST System. TIPSTER Text Phase III.
Hovy, E. and Lin, C.-Y. Automated Text Summarization in SUMMARIST. In Mani, I. and Maybury, M. T., editors, Advances in Automatic Text Summarization, MIT Press.
11
TOOLS – WORD SUMMARIZER
Microsoft Word AutoSummarize
12
TOOLS – GNOME SUMMARIZER
13
TOOLS – SWESUM SUMMARIZER
14
CHALLENGES
- Selecting pieces from the input and concatenating them to yield a summary
- High reduction rates (e.g., headline generation)
- Methods for evaluating summaries
- Multiple languages
- Multiple, hybrid sources
These problems arise because existing robust NLP methods tend to operate at the word level, and hence miss concept-level generalizations (which are provided by symbolic world knowledge), while symbolic knowledge is too difficult to acquire at large enough scale to provide adequate coverage and robustness.
Summarization = topic identification + interpretation + generation.
- Identification: filter the input to retain only the most important, central topics. Once they have been identified, they can simply be output to form an extract.
- Interpretation: perform compaction by re-interpreting and fusing the extracted topics into more succinct ones. This is necessary because abstracts are usually much shorter than their equivalent extracts. All the variations of fusion are yet to be discovered, but they include at least simple concept generalization ("he ate pears, apples, and bananas" → "he ate fruit") and script identification ("he sat down, read the menu, ordered, ate, and left" → "he visited the restaurant").
- Generation: produce the output summary. In the case of extracts, generation is a null step; in the case of abstracts, the generator has to reformulate the extracted and fused material into a coherent, densely phrased, new text.
Hahn, U. and Mani, I. (2000). The Challenges of Automatic Summarization. Computer, IEEE Computer Society.
15
EARLY WORK
ATS has its roots in the late 1950s and has been developed continuously…
- A word-frequency-based ATS [Luhn, 1958]
- An ATS based on multiple features [Edmundson, 1969]
- …
Still unsolved!
16
WORD FREQUENCY
Content words indicate the topic of a text.
- The frequency of a content word is a measure of its significance
- Retrieve the top-n most frequently occurring content words
- Rank a sentence according to the frequency of those words present in it
(Figure: the resolving power of significant words plotted against word frequency, from Luhn's paper.)
Luhn, H. P. (1958). The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2).
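A minimal sketch of Luhn-style scoring follows. It assumes a small hand-made stop-word list stands in for "non-content" words and that a sentence's score is simply the count of top-n significant words it contains; the details (stop-word list, thresholds, sample text) are illustrative, not Luhn's exact resolving-power computation.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are", "for",
             "on", "with", "that", "this", "it", "as", "at", "by", "be"}

def top_content_words(text, n=10):
    """Top-n most frequent content (non-stopword) words."""
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    return {w for w, _ in Counter(words).most_common(n)}

def rank_sentences(text, n=10):
    """Rank sentences by how many significant words they contain."""
    significant = top_content_words(text, n)
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    scored = [(sum(w in significant for w in re.findall(r"[a-z]+", s.lower())), s)
              for s in sentences]
    return [s for score, s in sorted(scored, reverse=True)]

document = ("Automatic summarization selects important sentences. "
            "Sentence selection relies on frequent content words. "
            "Frequent words indicate the topic of the document.")
print(rank_sentences(document, n=5)[0])  # most "significant" sentence
```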
17
Position of a Sentence
- Sentences occurring under certain headings are positively relevant
- Topic sentences tend to occur very early or very late in a document and in its paragraphs
- Optimal Position Policy (OPP): a ranked list that indicates in which ordinal positions in the text the high-topic-bearing sentences tend to occur (Lin and Hovy, 1997). Example: [T1, P1S1, P1S2, ...] for a news article (the title, then the first sentences of the first paragraph)
Edmundson, H. P. (1969). New methods in automatic extracting. Journal of the ACM, 16(2).
18
Cue words in a Text
The probable relevance of a sentence is affected by cue words:
- Bonus words: positively affect the relevance of a sentence (e.g. "significant", "greatest")
- Stigma words: negatively affect the relevance of a sentence (e.g. "impossible", "hardly")
- Null words: irrelevant
Sets of phrases (bonus phrases and stigma phrases) tend to signal when a sentence is a likely candidate for inclusion in a summary and when it is definitely not a candidate, respectively. Bonus phrases such as "in summary", "in conclusion", and superlatives such as "the best" or "the most important" can be good indicators of important content. During processing, the Cue Phrase Module simply rewards each sentence containing a cue phrase with an appropriate score (constant per cue phrase) and penalizes those containing stigma phrases. Unfortunately, cue phrases are genre dependent: for example, "Abstract" and "in conclusion" are more likely to occur in scientific literature than in newspaper articles.
Edmundson, H. P. (1969). New methods in automatic extracting. Journal of the ACM, 16(2).
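A tiny sketch of the cue-phrase idea described above, assuming fixed, hand-chosen bonus/stigma lists with constant per-phrase scores; the phrases and weights are illustrative, not Edmundson's actual dictionaries.

```python
# Illustrative cue-phrase scorer: constant reward per bonus phrase,
# constant penalty per stigma phrase; null words are simply ignored.
BONUS = {"in summary": 2.0, "in conclusion": 2.0, "significant": 1.0,
         "the most important": 1.5, "greatest": 1.0}
STIGMA = {"impossible": -1.0, "hardly": -1.0}

def cue_score(sentence):
    text = sentence.lower()
    score = 0.0
    for phrase, weight in {**BONUS, **STIGMA}.items():
        if phrase in text:
            score += weight
    return score

sentences = [
    "In conclusion, the proposed method is significant for large collections.",
    "It is hardly surprising that the baseline failed.",
]
ranked = sorted(sentences, key=cue_score, reverse=True)
print([(s, cue_score(s)) for s in ranked])
```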
19
NAÏVE BAYES
Let s be a particular sentence, S the set of sentences making up the summary, and F1, …, Fk the set of features. Assume feature independence.
Additional features: sentence length, presence of uppercase words.
Position and cue features combined with sentence length performed best.
Kupiec, J., Pedersen, J., and Chen, F. (1995). A trainable document summarizer. In Proceedings of SIGIR '95, pages 68-73.
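The classification rule itself does not survive in this transcript; under the independence assumption it is the usual Naive Bayes decomposition used by Kupiec et al.:

\[
P(s \in S \mid F_1, \ldots, F_k)
\;=\;
\frac{P(s \in S)\,\prod_{j=1}^{k} P(F_j \mid s \in S)}{\prod_{j=1}^{k} P(F_j)}
\]

Sentences with the highest posterior probability are selected for the extract.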
20
Naïve Bayes Contd…
Richer features:
- Tf-idf to derive signature words
- Named entity tagger to retrieve tokens
- Shallow discourse analysis to maintain cohesion
- Synonym and morphological variants of lexical terms merged using WordNet
Aone, C., Okurowski, M. E., Gorlinsky, J., and Larsen, B. (1999). A trainable summarizer with knowledge acquired from robust NLP techniques. In Mani, I. and Maybury, M. T., editors, Advances in Automatic Text Summarization, pages 71-80.
21
Decision Tree
The feature independence assumption is not valid in real-world situations.
Feature vector: baseline, title, tf & tf-idf scores, position score, query signature, IR signature, sentence length, average lexical connectivity, numerical data, proper name, pronoun & adjective, weekday & month, quotation, first sentence.
Lin, C.-Y. (1999). Training a selection function for extraction. In Proceedings of CIKM '99, pages 55-62.
22
Decision Tree Contd…
The scores of all the features are combined by an automated learning process using a decision tree, and then normalized.
Remarks:
- The decision tree performs best over the whole dataset
- The naïve combination beats the decision tree in 3 topics
- Possible reason? For those topics the features were independent
23
Hidden Markov Model
Drawbacks of earlier approaches:
- Feature-based bag-of-words model
- Non-sequential
Use a sequential model to account for local dependencies between sentences.
Features:
- Position of the sentence in the document
- Number of terms in the sentence
- Likeliness of the sentence terms given the document terms
Conroy, J. M. and O'Leary, D. P. (2001). Text summarization via hidden Markov models. In Proceedings of SIGIR '01.
24
Hidden Markov Model Contd…
- 2s+1 states, alternating between s summary states and s+1 non-summary states
- Odd states are summary states, even states are non-summary states
- Transition matrix M whose element (i, j) is the probability of a transition from state i to state j
- Output function b_i(O) = Pr(O | state i), where O is an observed vector of features
- Assumption: the features are multivariate normal
- M and the output functions b_i are learnt from training data
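A minimal sketch of decoding such a chain, assuming s = 2 summary states (so 2s+1 = 5 states, the odd-indexed ones being summary states) and Gaussian output densities. The transition matrix, feature means, covariance, and sentence feature vectors are made up for illustration; they are not the parameters estimated in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

n_states = 5                 # 2s+1 with s = 2
summary_states = {1, 3}      # odd-indexed states are the summary states

# Illustrative transition matrix M: element (i, j) = P(state j | state i).
M = np.array([
    [0.6, 0.4, 0.0, 0.0, 0.0],
    [0.0, 0.3, 0.5, 0.2, 0.0],
    [0.0, 0.0, 0.6, 0.4, 0.0],
    [0.0, 0.0, 0.0, 0.4, 0.6],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])
start = np.array([0.5, 0.5, 0.0, 0.0, 0.0])

# Output model b_i(O) = Pr(O | state i), assumed multivariate normal.
# Features per sentence: (relative position, #terms, term log-likelihood).
means = [np.array([0.5, 10, -3.0]), np.array([0.1, 20, -2.0]),
         np.array([0.5, 12, -2.8]), np.array([0.3, 18, -2.2]),
         np.array([0.8, 9, -3.2])]
cov = np.eye(3)  # shared identity covariance, purely for illustration

def viterbi(observations):
    """Most likely state sequence for a list of sentence feature vectors."""
    T = len(observations)
    logp = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    emit = lambda i, o: multivariate_normal.logpdf(o, means[i], cov)
    with np.errstate(divide="ignore"):
        logM, logstart = np.log(M), np.log(start)
    for i in range(n_states):
        logp[0, i] = logstart[i] + emit(i, observations[0])
    for t in range(1, T):
        for j in range(n_states):
            scores = logp[t - 1] + logM[:, j]
            back[t, j] = int(np.argmax(scores))
            logp[t, j] = scores[back[t, j]] + emit(j, observations[t])
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return list(reversed(path))

sentences = [np.array([0.0, 19, -2.1]), np.array([0.2, 11, -2.9]),
             np.array([0.4, 17, -2.3]), np.array([0.9, 8, -3.1])]
states = viterbi(sentences)
summary_idx = [t for t, st in enumerate(states) if st in summary_states]
print(states, summary_idx)  # which sentences land in summary states
```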
25
Log-Linear Models
Let c be a label, d the item we are interested in labeling, f_i the i-th feature, and λ_i the corresponding feature weight.
Z(d) = Σ_c exp(Σ_i λ_i f_i(c, d)) is the normalization constant.
A typical word-based feature f_{w,c'}(d, c) is zero when c ≠ c' and fires (indicating the presence of the word w in d) when c = c'.
A larger value of λ_i means f_i is a strong indicator of class c.
GIS and IIS are used to iteratively tune the model parameters.
Outperformed Naïve Bayes. Drawback: overfitting.
Osborne, M. (2002). Using maximum entropy for sentence extraction. In Proceedings of the ACL '02 Workshop on Automatic Summarization.
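For completeness, the conditional distribution that the normalizer Z(d) belongs to is the standard log-linear (maximum-entropy) form; this restatement is mine, consistent with the notation above rather than copied from the slide:

\[
P(c \mid d) \;=\; \frac{\exp\!\Big(\sum_i \lambda_i f_i(c, d)\Big)}{Z(d)},
\qquad
Z(d) \;=\; \sum_{c'} \exp\!\Big(\sum_i \lambda_i f_i(c', d)\Big)
\]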
26
Drawbacks of Earlier Methods
1. Performance might degrade if the text consists of multiple topics.
2. Anaphors in the extracted sentences might not have any antecedent in the summary.
3. The summary might be incoherent, since the sentences are just extracted from various parts of the text.
27
Drawbacks Contd … Consider the following two sequences:
1. "Dr. Kenny has invented an anesthetic machine. This device controls the rate at which an anesthetic is pumped into the blood."
2. "Dr. Kenny has invented an anesthetic machine. The doctor spent two years on this research."
"Dr. Kenny" appears once in both sequences, and so does "machine". But sequence 1 is about the machine, and sequence 2 is about the doctor.
28
Cohesion
Cohesion (Halliday & Hasan, 1976):
- "Stitching together" different parts of the text
- Use of semantically related terms, co-reference, ellipsis, conjunctions
Lexical cohesion: semantically related words
- Reiteration category: repetitions, synonyms, hyponyms
- Collocation category: words occurring in the same lexical context. Ex: She works as a teacher in the school.
29
Lexical Chain General Approach
1. Select a set of candidate words.
2. For each candidate word, find an appropriate chain, relying on a relatedness criterion among members of the chains.
3. If one is found, insert the word in the chain and update it accordingly.
Relations:
- Extra-strong (between a word and its repetition): no window restriction
- Strong (between two words connected by a WordNet relation): window of 7 sentences
- Medium-strong (path length > 1 hop): window of 3 sentences
Preference: extra-strong > strong > medium-strong
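A much-simplified sketch of greedy chain building in this spirit, using NLTK's WordNet interface. It only checks repetition and direct synonym/hypernym/hyponym links, and ignores sense disambiguation, sentence windows, and medium-strong relations, so it is an illustration of the chaining loop rather than the full algorithm.

```python
import nltk
from nltk.corpus import wordnet as wn

# nltk.download("wordnet")  # uncomment on first run

def related(word_a, word_b):
    """Rough relatedness test: repetition ('extra-strong'), or a shared
    synset / direct hypernym-hyponym link ('strong')."""
    if word_a == word_b:
        return "extra-strong"
    syns_a, syns_b = set(wn.synsets(word_a)), set(wn.synsets(word_b))
    if syns_a & syns_b:
        return "strong"                      # synonyms share a synset
    for s in syns_a:
        neighbours = set(s.hypernyms()) | set(s.hyponyms())
        if neighbours & syns_b:
            return "strong"                  # direct WordNet link
    return None

def build_chains(candidate_words):
    """Greedy chaining: attach each word to the first chain it relates to."""
    chains = []
    for word in candidate_words:
        for chain in chains:
            if any(related(word, member) for member in chain):
                chain.append(word)
                break
        else:
            chains.append([word])            # start a new chain
    return chains

words = ["machine", "device", "pump", "doctor", "person", "machine"]
for chain in build_chains(words):
    print(chain)
```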
30
Drawback
"Mr. Kenny is the person that invented an anesthetic machine which uses micro-computers to control the rate at which an anesthetic is pumped into the blood. Such machines are nothing new. But his device uses two micro-computers to achieve much closer monitoring of the pump feeding the anesthetic into the patient." (Morris & Hirst, 1991)
[lex "Mr.", sense {mister, Mr.}]
[lex "person", sense {person, individual, someone, man, mortal, human, soul}]
The first sense of "machine" in WordNet ("an efficient person") is a hyponym of "person", and the word is thus wrongly disambiguated.
31
Component and Graph Connectivity
Barzilay, R. and Elhadad, M. (1997). Using lexical chains for text summarization. In Proceedings ISTS'97
32
Component & Graph Connectivity Contd…
33
Multi Document Summarization
Multiple sources of information:
- Similarity between topics
- Supplement each other
- Occasionally contradictory
Key tasks:
- Identifying key concepts across documents
- Coping with redundancy
- Ensuring the final summary is coherent and complete
Applications: news clustering systems such as Google News, Columbia NewsBlaster, News in Essence, etc.
34
TOPIC-DRIVEN SUMMARIZATION
Carbonell, J. and Goldstein, J. (1998). The use of MMR, diversity-based re-ranking for reordering documents and producing summaries. In Proceedings of SIGIR '98.
35
TOPIC-DRIVEN SUMMARIZATION (contd.)
Carbonell, J. and Goldstein, J. (1998). The use of MMR, diversity-based re-ranking for reordering documents and producing summaries. In proceedings of SIGIR '98.
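The two MMR slides cite Carbonell and Goldstein (1998), but the formula itself does not survive in this transcript. The published Maximal Marginal Relevance definition is:

\[
\mathrm{MMR} \;=\; \arg\max_{D_i \in R \setminus S}
\Big[\, \lambda \,\mathrm{Sim}_1(D_i, Q)
\;-\; (1-\lambda)\,\max_{D_j \in S} \mathrm{Sim}_2(D_i, D_j) \Big]
\]

where R is the set of candidate sentences (or documents) retrieved for the query Q, S is the set already selected for the summary, and λ trades off relevance to the query against redundancy with what has already been chosen.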
36
GRAPH SPREADING
Mani, I. and Bloedorn, E. (1997). Multi-document summarization by graph search and matching. In AAAI/IAAI.
37
Example of nodes and links in the graph (Mani and Bloedorn, 1997)
Mani, I. and Bloedorn, E. (1997). Multi-document summarization by graph search and matching. In AAAI/IAAI.
38
GRAPH SPREADING (contd.)
Words and phrases are initialized according to their TF-IDF scores. For each sentence in both documents, two scores are computed: one reflects the presence of common nodes and is computed as the average weight of these nodes; the other computes instead the average weight of the difference nodes. The sentences that have high common and difference scores are highlighted, and the output is generated accordingly.
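A small sketch of the common/difference scoring step described above, assuming node weights are plain TF-IDF scores kept in dictionaries; the weights, node sets, and sentence are made up for illustration.

```python
def sentence_scores(sentence_words, nodes_doc_a, nodes_doc_b):
    """Score a sentence by the average weight of nodes shared by both
    documents (common) and of nodes unique to one document (difference)."""
    common = set(nodes_doc_a) & set(nodes_doc_b)
    difference = set(nodes_doc_a) ^ set(nodes_doc_b)
    weights = {**nodes_doc_a, **nodes_doc_b}
    in_common = [weights[w] for w in sentence_words if w in common]
    in_diff = [weights[w] for w in sentence_words if w in difference]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(in_common), avg(in_diff)

# Toy TF-IDF weights for the nodes of two documents.
doc_a = {"anesthetic": 2.1, "machine": 1.7, "micro-computer": 2.5}
doc_b = {"anesthetic": 1.9, "machine": 1.5, "doctor": 2.2}

sentence = ["the", "machine", "pumps", "anesthetic", "micro-computer"]
print(sentence_scores(sentence, doc_a, doc_b))  # (common, difference) scores
```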
39
CENTROID-BASED SUMMARY
- Does not make use of a language generation model; documents are modeled as bags of words.
- Topic detection: a clustering algorithm that uses TF-IDF vector representations of documents, successively adds documents, and recomputes the centroids.
- Centroids are pseudo-documents that include the words with TF-IDF score above some threshold.
(Notation from the missing formula: d' is the truncated document, C_j is the j-th cluster.)
Radev, D. R., Jing, H., and Budzikowska, M. (2000). Centroid-based summarization of multiple documents: sentence extraction, utility-based evaluation, and user studies.
40
CENTROID BASED SUMMARIZATION
Sentence identification: two metrics
- Cluster-Based Relative Utility: how relevant a sentence is to the particular topic of the cluster
- Cross-Sentence Informational Subsumption: a measure of redundancy among sentences
Radev, D. R., Jing, H., and Budzikowska, M. (2000). Centroid-based summarization of multiple documents: sentence extraction, utility-based evaluation, and user studies. In NAACL-ANLP 2000 Workshop on Automatic Summarization, pages 21-30, Morristown, NJ, USA.
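A minimal sketch of centroid construction and centroid-based sentence scoring in the spirit of this approach. The documents, threshold, and the simple "sum of centroid weights" scoring are illustrative assumptions, not the exact formulas from Radev et al.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy cluster of related documents (made-up sentences for illustration).
documents = [
    "The parliament passed the new climate bill on Tuesday.",
    "Lawmakers approved climate legislation after a long debate.",
    "The climate bill introduces a carbon tax from next year.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents).toarray()

# Centroid = average TF-IDF vector, keeping only words above a threshold.
centroid = doc_vectors.mean(axis=0)
threshold = 0.1
centroid[centroid < threshold] = 0.0

def centroid_score(text):
    """Sum of centroid weights of the words the sentence contains."""
    vec = vectorizer.transform([text]).toarray()[0]
    return float(np.sum(centroid[vec > 0]))

ranked = sorted(documents, key=centroid_score, reverse=True)
print(ranked[0])  # sentence closest to the cluster's central topic
```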
41
MULTILINGUAL MULTI-DOCUMENT SUMMARY
- Target language (English) in which the summary is written; source documents are present in both the preferred language (English) and a foreign language (Arabic).
- Use IBM's translation model to translate the foreign-language documents into the target language.
- Check for similarity between the translated sentences and the sentences across documents.
- Where similar sentences are found, retain the ones originally written in the preferred language, since they are likely to be more grammatically correct than machine-translated ones.
Evans, D. K. (2005). Similarity-based multilingual multi-document summarization. Technical Report CUCS, Columbia University.
42
SHORT SUMMARIES Witbrock, M. J. and Mittal, V. O. (1999). Ultra-summarization (poster abstract): a statistical approach to generating highly condensed non-extractive summaries.
43
Witbrock, M. J. and Mittal, V. O. (1999)
Witbrock, M. J. and Mittal, V. O. (1999). Ultra-summarization (poster abstract): a statistical approach to generating highly condensed non-extractive summaries. In Proceedings of SIGIR '99, pages 315-316, New York, NY, USA.
44
SENTENCE COMPRESSION
- Compression of sentences used for summarization.
- Uses a noisy-channel model, which assumes that one starts with a short summary s, drawn according to a source model P(s).
- The summary is subjected to a noisy channel to yield the full sentence t, in a process guided by the channel model P(t|s).
- Now, observing t, recover the summary as s' = argmax_s P(s|t) = argmax_s P(s)·P(t|s).
- Advantage: decouples the goals of grammatical correctness and preserving important information.
Knight, K. and Marcu, D. (2000). Statistics-based summarization - step one: Sentence compression. In AAAI/IAAI.
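The argmax step compresses a Bayes-rule derivation; spelled out in the slide's notation (my restatement of the missing intermediate step):

\[
s' \;=\; \arg\max_{s} P(s \mid t)
   \;=\; \arg\max_{s} \frac{P(s)\,P(t \mid s)}{P(t)}
   \;=\; \arg\max_{s} P(s)\,P(t \mid s),
\]

since P(t) is constant with respect to s. In log form, the score \(\log P(s) + \log P(t \mid s)\) separates into a grammaticality term (source model) and a faithfulness term (channel model).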
45
Knight, K. and Marcu, D. (2000). Statistics-based summarization - step one: Sentence compression. In AAAI/IAAI.
46
Evaluation
- A difficult task: there does not exist a single ideal summary for a given document or set of documents.
- Agreement between human summarizers is quite low.
- It is difficult to evaluate summary content.
- Absence of a standard human or automatic evaluation metric.
47
Evaluation Lin and Hovy (2002).
- Describe and compare various human and automatic metrics to evaluate summaries.
- Focus on the evaluation procedure used in the Document Understanding Conference 2001 (DUC-2001).
- Compared manually written ideal summaries with summaries generated automatically by summarization systems and with baseline summaries.
Lin, C.-Y. and Hovy, E. (2002). Manual and automatic evaluation of summaries. In Proceedings of the ACL-02 Workshop on Automatic Summarization, pages 45-51.
48
Evaluation Lin and Hovy (2002).
- Each text was decomposed into a list of units (sentences).
- Assessors stepped through each model unit (MU) from the ideal summaries and marked all system units (SU) sharing content with the current model unit.
- The degree of overlap was rated as All (4), Most (3), Some (2), or Hardly any (1).
- Grammaticality, cohesion, and coherence were also rated.
49
Evaluation Lin and Hovy (2002).
- From these ratings, a weighted recall at threshold t (t = 1 to 4) is computed.
- They also outline an accumulative n-gram matching score (which they call NAMS).
Taken from: Dipanjan Das and Andre F. T. Martins (2007). A Survey on Automatic Text Summarization.
50
Evaluation: Recall-Oriented Understudy for Gisting Evaluation (ROUGE) (Lin, 2004)
- Let R be a set of reference summaries, and let s be a summary generated automatically by a system.
- Let Φn(d) be a binary vector representing the n-grams contained in a document d.
- The metric ROUGE-N is an n-gram recall based statistic, where ⟨·, ·⟩ denotes the usual inner product of vectors.
Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. In Moens, M.-F. and Szpakowicz, S., editors, Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74-81.
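The formula itself is missing from the transcript; in the notation above (and following the Das and Martins survey these slides draw on), ROUGE-N can be written as

\[
\mathrm{ROUGE\text{-}N}(s) \;=\;
\frac{\sum_{r \in R} \big\langle \Phi_n(r), \Phi_n(s) \big\rangle}
     {\sum_{r \in R} \big\langle \Phi_n(r), \Phi_n(r) \big\rangle}.
\]

With the vectors of the worked example on the next slide this evaluates to (3 + 3) / (3 + 4) = 6/7, matching the result shown there.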
51
Evaluation: Recall-Oriented Understudy for Gisting Evaluation (ROUGE) (Lin, 2004)
Example:
d := "Ram helped Shyam to learn German."
r1 := "Shyam learnt German by Ram."
r2 := "Ram taught German to Shyam."
s := "Shyam learn German from Ram."
Φn(r1) := [1 0 1 0 1], Φn(r2) := [1 0 1 1 1], Φn(s) := [1 1 1 0 1]
ROUGE-N(s) = 6/7
52
Evaluation: Recall-Oriented Understudy for Gisting Evaluation (ROUGE) (Lin, 2004)
ROUGE-N can be used for multiple reference summaries. An alternative is taking the most similar summary in the reference set, as in the restatement below.
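The "most similar reference" alternative replaces the sums with a maximum; this restatement is consistent with the worked example on the next slide, which yields 3/3:

\[
\mathrm{ROUGE\text{-}N}_{\text{multi}}(s) \;=\;
\max_{r \in R}
\frac{\big\langle \Phi_n(r), \Phi_n(s) \big\rangle}
     {\big\langle \Phi_n(r), \Phi_n(r) \big\rangle}
\]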
53
Evaluation: Recall-Oriented Understudy for Gisting Evaluation (ROUGE) (Lin, 2004)
Example:
d := "Ram helped Shyam to learn German."
r1 := "Shyam learnt German by Ram."
r2 := "Ram taught German to Shyam."
s := "Shyam learn German from Ram."
Φn(r1) := [1 0 1 0 1], Φn(r2) := [1 0 1 1 1], Φn(s) := [1 1 1 0 1]
Best-match score = 3/3
54
Evaluation: Recall-Oriented Understudy for Gisting Evaluation (ROUGE) (Lin, 2004)
Another metric in Lin (2004) applies the concept of the longest common subsequence (LCS). Let r1, …, ru be the reference sentences of the documents in R, and s a candidate summary.
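The ROUGE-L formula is not in the transcript. Lin (2004) defines it through LCS-based recall and precision, for a reference r of length m and a candidate s of length n, combined into an F-measure; the fractions in the example that follows (5/6 and 4/6) are these LCS ratios.

\[
R_{lcs} = \frac{\mathrm{LCS}(r, s)}{m}, \qquad
P_{lcs} = \frac{\mathrm{LCS}(r, s)}{n}, \qquad
F_{lcs} = \frac{(1+\beta^{2})\,R_{lcs}\,P_{lcs}}{R_{lcs} + \beta^{2} P_{lcs}}
\]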
55
Evaluation: Recall-Oriented Understudy for Gisting Evaluation (ROUGE) (Lin, 2004)
Example:
s := "Ram pushed Shyam into the Pool."
r1 := "Ram push Shyam into the Pool."
r2 := "Shyam push Ram into the Pool."
ROUGE-N: r1 = r2 (both share "Ram" and "Shyam into the Pool" with s)
ROUGE-L: r1 = 5/6 (LCS: "Ram" + "Shyam into the Pool"), r2 = 4/6 (LCS: "Ram" + "into the Pool"), so r1 > r2
56
Evaluation: Recall-Oriented Understudy for Gisting Evaluation (ROUGE) (Lin, 2004)
Yet another measure is ROUGE-S, which can be seen as a modified version of ROUGE-N for n = 2 that allows gaps between the two words of a bigram (skip-bigrams).
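A small sketch of the skip-bigram computation, reproducing the fractions on the example slide that follows (10/15, 9/15, 7/15). It treats "pushed" and "push" as distinct tokens, as the example does; since candidate and references all have six words, recall and precision coincide here.

```python
from itertools import combinations

def skip_bigrams(sentence):
    """All ordered word pairs of the sentence, with arbitrary gaps allowed."""
    words = sentence.rstrip(".").split()
    return set(combinations(words, 2))

def rouge_s(candidate, reference):
    cand, ref = skip_bigrams(candidate), skip_bigrams(reference)
    return len(cand & ref) / len(cand)  # equal-length sentences: P = R

s  = "Ram pushed Shyam into the Pool."
r1 = "Ram push Shyam into the Pool."
r2 = "Shyam push Ram into the Pool."
r3 = "Shyam into the Pool Ram pushed."

for r in (r1, r2, r3):
    print(rouge_s(s, r))  # 10/15, 9/15, 7/15
```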
57
Evaluation: Recall-Oriented Understudy for Gisting Evaluation (ROUGE) (Lin, 2004)
Example:
s := "Ram pushed Shyam into the Pool."
r1 := "Ram push Shyam into the Pool."
r2 := "Shyam push Ram into the Pool."
r3 := "Shyam into the Pool Ram pushed."
ROUGE-N: r3 > r1 = r2
ROUGE-L: r1 > r2 = r3
ROUGE-S: r1 = 10/15, r2 = 9/15, r3 = 7/15, so r1 > r2 > r3
58
Evaluation: Recall-Oriented Understudy for Gisting Evaluation (ROUGE) (Lin, 2004)
The various versions of ROUGE were evaluated by computing the correlation coefficient between ROUGE scores and human judgment scores.
59
Conclusion
- There is a need to develop efficient and accurate summarization systems.
- Attention has drifted from summarizing scientific articles to news articles, electronic mail messages, advertisements, and blogs.
- Both abstractive and extractive approaches have been attempted; simple extraction of sentences has produced satisfactory results in large-scale applications.
- This survey emphasizes extractive approaches to summarization using statistical methods.
60
References
Primary reference:
- Dipanjan Das and Andre F. T. Martins (2007). A Survey on Automatic Text Summarization. Literature Survey for the Language and Statistics II course, CMU, Pittsburgh.
Secondary references:
- Edmundson, H. P. (1969). New methods in automatic extracting. Journal of the ACM, 16(2).
- Luhn, H. P. (1958). The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2).
- Lin, C.-Y. (1999). Training a selection function for extraction. In Proceedings of CIKM '99, pages 55-62.
- Ono, K., Sumita, K., and Miike, S. (1994). Abstract generation based on rhetorical structure extraction. In Proceedings of COLING '94.
- Barzilay, R. and Elhadad, M. (1997). Using lexical chains for text summarization. In Proceedings of ISTS '97.
- Mani, I. and Bloedorn, E. (1997). Multi-document summarization by graph search and matching. In AAAI/IAAI.
61
References Contd…
- Evans, D. K. (2005). Similarity-based multilingual multi-document summarization. Technical Report CUCS, Columbia University.
- Radev, D. R., Jing, H., and Budzikowska, M. (2000). Centroid-based summarization of multiple documents: sentence extraction, utility-based evaluation, and user studies. In NAACL-ANLP 2000 Workshop on Automatic Summarization, pages 21-30.
- Witbrock, M. J. and Mittal, V. O. (1999). Ultra-summarization (poster abstract): a statistical approach to generating highly condensed non-extractive summaries. In Proceedings of SIGIR '99.
- Knight, K. and Marcu, D. (2000). Statistics-based summarization - step one: Sentence compression. In AAAI/IAAI.
- Lin, C.-Y. and Hovy, E. (2002). Manual and automatic evaluation of summaries. In Proceedings of the ACL-02 Workshop on Automatic Summarization, pages 45-51.
- Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. In Moens, M.-F. and Szpakowicz, S., editors, Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74-81.
- Lin, C.-Y., Cao, G., Gao, J., and Nie, J.-Y. (2006). An information-theoretic approach to automatic evaluation of summaries. In Proceedings of HLT-NAACL '06.