
A Margin-based Discriminative Modeling Approach for Extractive Speech Summarization
Shih-Hung Liu †*, Kuan-Yu Chen *, Berlin Chen #, Ea-Ee Jan +, Hsin-Min Wang *, Hsu-Chun Yen †, Wen-Lian Hsu *
† Institute of Information Science, Academia Sinica, Taiwan; # National Taiwan Normal University, Taiwan; * National Taiwan University, Taiwan; + IBM Thomas J. Watson Research Center, USA

Summary
- The task of extractive speech summarization is to select a set of salient sentences from an original spoken document and concatenate them to form a summary, so that users can better browse through and understand the content of the document.
- We propose a novel margin-based discriminative training (MBDT) algorithm that penalizes non-summary sentences in inverse proportion to their summarization evaluation scores.
- As a result, the summarization model can be trained with an objective function that is closely coupled with the ultimate evaluation metric of extractive speech summarization.

Various Discriminative Models
- SVM: a binary summarizer that outputs a decision score g(S_i) for each sentence S_i. The posterior probability of a sentence S_i being included in the summary class S can be approximated by a sigmoid operation applied to g(S_i), whose weights are optimized on the training set (a small calibration sketch appears after the feature list below).
- Perceptron: the decision score of sentence S_i is f(S_i) = Θ·X_i, the inner product of the feature vector X_i of sentence S_i and the model parameter Θ. The model parameter is estimated by maximizing the accumulated squared score distances over all N training spoken documents, Σ_n Σ_{S_R ∈ Summ_n} (f(S_R) - f(S_n*))^2, where Summ_n is the reference summary of the n-th training document D_n, S_R denotes a summary sentence in Summ_n, and S_n* is the non-summary sentence of D_n that has the highest decision score (a rough training sketch follows the results tables).
- Global Conditional Log-linear Model (GCLM): aims at maximizing the posterior scores of the summary sentences of each given training spoken document.

Margin-based Discriminative Training (MBDT)
- In the second stage, MBDT defines a training objective function that is closely coupled with the ultimate evaluation metric of speech summarization.
- w(S_j) is the weight of a (non-summary) sentence S_j.
- Eval(S_j, Summ_n) is a function that estimates the summarization performance of a sentence S_j of D_n by comparing S_j to the reference summary Summ_n of D_n with a desired evaluation metric.
- The score returned by Eval(S_j, Summ_n) is between 0 and 1; the higher the value, the better the performance (a minimal sketch of this weighting also appears after the feature list below).

Prosodic Features
1. Pitch value (max, min, mean, diff)
2. Energy value (max, min, mean, diff)

Lexical Features
1. Number of named entities
2. Number of stop words
3. Bigram language model scores
4. Normalized bigram scores

Relevance Features
1. VSM
2. DLM
3. RM
4. SMM
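The transcript states that the SVM baseline's raw decision scores are mapped to posterior probabilities with a sigmoid whose weights are fitted on the training set, but the formula itself is not reproduced here. The following is a minimal Platt-style calibration sketch; the parameter names alpha and beta and the illustrative values are assumptions, not taken from the paper.

```python
import math

def svm_posterior(decision_score, alpha, beta):
    """Approximate P(summary | S_i) from the raw SVM decision score g(S_i)
    with a logistic sigmoid; alpha and beta stand in for the weights that
    would be optimized on the training set (Platt-style calibration)."""
    return 1.0 / (1.0 + math.exp(-(alpha * decision_score + beta)))

# Example with purely illustrative weights.
print(svm_posterior(0.8, alpha=2.0, beta=-0.5))  # ~0.75
```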
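The core of MBDT is the weight w(S_j), which penalizes a non-summary sentence in inverse proportion to its evaluation score Eval(S_j, Summ_n) in [0, 1]. Below is a minimal sketch assuming a unigram-overlap (ROUGE-1-style) F-measure as the evaluation metric and w(S_j) = 1 - Eval(S_j, Summ_n) as the inverse-proportional form; both choices are illustrative assumptions, since the transcript only requires Eval to return a score between 0 and 1.

```python
from collections import Counter

def eval_sentence(candidate_tokens, reference_tokens):
    """Stand-in for Eval(S_j, Summ_n): unigram-overlap F-measure between a
    candidate sentence and the reference summary, guaranteed to lie in [0, 1]."""
    if not candidate_tokens or not reference_tokens:
        return 0.0
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(candidate_tokens)
    recall = overlap / len(reference_tokens)
    return 2 * precision * recall / (precision + recall)

def mbdt_weight(non_summary_tokens, reference_tokens):
    """w(S_j): the better a non-summary sentence already matches the reference
    summary, the smaller its penalty.  The exact form (1 - Eval) is an
    assumption for illustration."""
    return 1.0 - eval_sentence(non_summary_tokens, reference_tokens)

# Illustrative usage with made-up sentences.
reference = "the mayor announced a new transit budget".split()
sentence = "the mayor discussed the transit budget today".split()
print(mbdt_weight(sentence, reference))  # ~0.43: moderate penalty, the sentence overlaps the reference
```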
Experimental Results
- Dataset: the summarization dataset employed in this study is a publicly available broadcast news (MATBN) corpus, with 100 documents for training and 20 documents for testing.
- 16 indicative features in total are used to represent each sentence.
- TD: manual transcripts; SD: speech recognition transcripts.

Comparison with other discriminative methods and unsupervised approaches (ROUGE scores):

                 TD                              SD
Method           ROUGE-1  ROUGE-2  ROUGE-L       ROUGE-1  ROUGE-2  ROUGE-L
SVM              0.470    0.364    0.426         0.383    0.245    0.342
Ranking SVM      0.490    0.391    0.447         0.388    0.254    0.344
Perceptron       0.487    0.394    0.439         0.393    0.259    0.352
GCLM             0.482    0.386    0.433         0.380    0.250    0.342
MBDT             0.515    0.422    0.462         0.393    0.264    0.353
ILP              0.442    0.337    0.401         0.348    0.209    0.306
Submodularity    0.414    0.286    0.363         0.332    0.204    0.303

Results when each kind of feature is used in isolation and in combination with the other kinds (ROUGE scores):

                 TD                              SD
Features         ROUGE-1  ROUGE-2  ROUGE-L       ROUGE-1  ROUGE-2  ROUGE-L
MBDT+Pro         0.374    0.256    0.337         0.325    0.189    0.292
MBDT+Lex         0.255    0.159    0.228         0.189    0.082    0.170
MBDT+Rel         0.411    0.287    0.360         -        0.202    0.298
MBDT+Pro+Lex     0.370    0.269    0.345         0.342    0.205    0.310
MBDT+Lex+Rel     0.428    0.314    0.382         0.355    0.214    0.313
MBDT+Pro+Rel     0.422    0.315    0.370         0.341    0.197    0.288
MBDT+All         0.515    0.422    0.462         0.393    0.264    0.353
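For the perceptron baseline compared in the first table, the transcript only states that f(S_i) = Θ·X_i and that Θ is estimated by maximizing the accumulated squared score distances between reference-summary sentences and the highest-scoring non-summary sentence S_n* of each training document. The sketch below follows that description with plain gradient ascent; the initialization, learning rate, epoch count, and per-epoch normalization are assumptions added only to make it runnable.

```python
import numpy as np

def sentence_scores(theta, X):
    """Decision scores f(S_i) = theta . X_i for all sentence feature vectors in X."""
    return X @ theta

def train_perceptron(docs, dim, lr=0.01, epochs=20):
    """Gradient ascent on the accumulated squared score distance
    sum_n sum_{S_R in Summ_n} (f(S_R) - f(S_n*))^2.

    docs: list of (X, summary_idx) pairs, where X is an (n_sentences, dim)
    feature matrix and summary_idx indexes the reference-summary sentences.
    Hyperparameters here are illustrative, not values from the paper."""
    # Start theta in the average (summary minus non-summary) feature direction,
    # so that enlarging the squared score gap separates the classes the right way
    # (an assumption; the transcript does not specify the initialization).
    theta = np.zeros(dim)
    for X, summary_idx in docs:
        rest = [i for i in range(X.shape[0]) if i not in set(summary_idx)]
        if summary_idx and rest:
            theta += X[list(summary_idx)].mean(axis=0) - X[rest].mean(axis=0)
    theta /= max(np.linalg.norm(theta), 1e-12)

    for _ in range(epochs):
        for X, summary_idx in docs:
            rest = [i for i in range(X.shape[0]) if i not in set(summary_idx)]
            if not summary_idx or not rest:
                continue
            scores = sentence_scores(theta, X)
            j_star = max(rest, key=lambda i: scores[i])  # S_n*: top-scoring non-summary sentence
            for r in summary_idx:
                diff = X[r] - X[j_star]
                # gradient of (theta . diff)^2 with respect to theta is 2 * (theta . diff) * diff
                theta += lr * 2.0 * (theta @ diff) * diff
        theta /= max(np.linalg.norm(theta), 1e-12)  # keep the parameter vector bounded
    return theta
```

Because the stated objective only enlarges the squared score gap and does not fix its sign, the sketch starts Θ in the summary-minus-non-summary mean-feature direction and renormalizes after each epoch; a full implementation would need to make such choices explicit.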

