Automatic summarization of voicemail messages using lexical and prosodic features (Koumpis and Renals). Presented by Daniel Vassilev.

Presentation transcript:

1 Automatic summarization of voicemail messages using lexical and prosodic features  Koumpis and Renals  Presented by Daniel Vassilev

2 The Domain  Voicemail is a special case of spontaneous speech  Goal: enable the voicemail user to receive his/her messages anywhere and at any time, in particular on mobile devices  Key components: caller identification, reason for call, information that the caller requires, and a return phone number

3 The Task  Summarization: obtain the most important information from the voicemail  A complete system would require spoken language understanding and language generation, but current technology is not adequate  Solution: simplify the task by deciding, for each word, whether it will be in the summary
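This word-by-word decision can be sketched in a few lines. The sketch below is illustrative only: `classify` stands in for the paper's MLP/HMM classifier, and the toy stand-in rule (keep capitalized words and digits) is an assumption of mine, not the paper's method.

```python
def summarize(words, classify):
    """Keep each word the classifier marks as summary-worthy, preserving order."""
    return " ".join(w for w in words if classify(w))

# Toy stand-in classifier (hypothetical): keep capitalized words and digits.
toy = lambda w: w[0].isupper() or any(c.isdigit() for c in w)

msg = "Hi this is Alice please call me back at 555 1234".split()
print(summarize(msg, toy))  # -> Hi Alice 555 1234
```

The point is that summarization is reduced to a per-word binary classification, so standard classifiers and evaluation tools (like ROC curves, later in the talk) apply directly.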

4 The Task  Voicemail is short, 40 s on average  Summaries must fit into 140 characters (mobile devices)  Content is more important than coherence and document flow  ASR is used on the voicemail, so a significant word error rate must be assumed (30%-40% error!)
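One simple way to honor the 140-character limit is to drop trailing words rather than cut mid-word. This is a hedged sketch of my own; the paper may enforce the limit differently.

```python
def fit_sms(words, limit=140):
    """Greedily keep leading words until adding the next would exceed the limit."""
    out, length = [], 0
    for w in words:
        add = len(w) + (1 if out else 0)  # +1 for the separating space
        if length + add > limit:
            break
        out.append(w)
        length += add
    return " ".join(out)
```

Since content matters more than flow here, truncating at word boundaries loses little compared with mid-word cuts.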

5 Voicemail Corpus  IBM Voicemail Corpus, Part I  1801 messages (1601 for training, 200 for testing)  14.6 h of audio  On average 90 words per message  Message topics: 27% business-related, 25% personal, 17% work-related, 13% technical, and 18% in other categories

6 The classification problem  A classifier decides whether a word is included in the summary  Feature selection using Parcel (a feature selection algorithm) and Receiver Operating Characteristic (ROC) graphs  Hybrid multi-layer perceptron (MLP) / hidden Markov model (HMM) classifier

7 Receiver Operating Characteristic  The graph plots the true positive rate (sensitivity) against the false positive rate (1 - specificity)  We can trade positive errors against negative errors by choosing different acceptance thresholds (moving along the ROC curve)  Different classifiers have different ROC curves
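Sweeping the acceptance threshold to trace a ROC curve can be sketched as follows (illustrative only; the `scores` are assumed to be per-word classifier outputs, not the paper's actual data):

```python
def roc_points(scores, labels, thresholds):
    """For each threshold, return (false positive rate, true positive rate)."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / neg, tp / pos))
    return pts
```

Lowering the threshold moves up and to the right along the curve (more words accepted into the summary, more false positives); raising it does the opposite.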

8 Sample ROC graph

9 System setup  The team built a sophisticated, multi-component system that can capture the different types of information occurring in voicemail  Initial trigram language model, augmented with sentences from the Hub-4 Broadcast News and Switchboard language model training corpora

10 System Setup  Pronunciation dictionary with 10,000 words from the training data  + pronunciations obtained from the SPRACH broadcast news system  Annotated summary words in 1,000 messages

11 System overview

12 Entities in summaries

13 Annotation procedures  1. Pre-annotated NEs were marked as targets, unless unmarked by later rules  2. The first occurrences of the names of the speaker and recipient were always marked as targets; later repetitions were unmarked unless they resolved ambiguities  3. Any words that explicitly determined the reason for calling, including important dates/times and action items, were marked  4. Words in a stopword list with 54 entries were unmarked

14 Annotation procedures  Labeling was done only on transcribed messages (no audio)  Annotators tended to eliminate irrelevant words (as opposed to marking content words)  Produced summaries about 30% shorter than the original message  Relatively good level of inter-annotator agreement

15 Lexical features  Lexical features from the ASR output:  collection frequency (less frequent words are more informative)  acoustic confidence (ASR confidence)  All other features considered before and after stemming: NEs, proper names, telephone numbers, dates and times, word position
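The collection-frequency feature can be illustrated with a small sketch. The negative-log weighting below is an IDF-like assumption of mine to make "rarer means more informative" concrete; the paper's exact weighting may differ.

```python
import math
from collections import Counter

def collection_frequency(corpus):
    """Count each word's occurrences over all messages in the corpus."""
    counts = Counter()
    for message in corpus:
        counts.update(message)
    return counts

def informativeness(word, counts, total):
    """Negative log relative frequency: rare words score higher (illustrative)."""
    return -math.log(counts[word] / total) if counts[word] else 0.0
```

A word appearing in half the corpus scores log 2; a word appearing once in a large corpus scores much higher, matching the intuition on the slide.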

16 Prosodic features  Prosodic features extracted from the audio using signal processing algorithms:  duration (normalized over the corpus)  pauses (preceding and succeeding)  mean energy  F0 range, average, onset and offset
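"Duration normalized over the corpus" could be realized as a z-score over all word durations; this is one plausible reading, sketched here under that assumption rather than taken from the paper.

```python
import statistics

def normalize_durations(durations):
    """Z-score each word duration against the corpus mean and std deviation."""
    mu = statistics.mean(durations)
    sigma = statistics.pstdev(durations)
    return [(d - mu) / sigma for d in durations]
```

Normalization matters because raw durations conflate speaking rate with emphasis; a z-score makes "unusually long for this corpus" comparable across speakers.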

17 Results  Named entities were identified very accurately (without stemming)  Telephone numbers were also recognized well by specific named-entity lists; word position was also a good predictor, as numbers appear towards the end of the message  All prosodic features except duration had no predictive power

18 ROC curves for different tasks / features

19 Results  Dates/times: best matched by a specific named-entity list and collection frequency  Prosodic features (duration, energy, F0 range) were more important here, but still not the best predictors

20 The Parcel bootstrapping algorithm

21 Conclusions  There is a trade-off between the length of the summary and retaining essential content words  70%-80% agreement with human summaries for hand-annotated messages  Only 50% agreement when using ASR output
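A word-level agreement figure like the 70%-80% above could be computed as simple overlap with the human reference. This is a hedged sketch of one plausible metric, not necessarily the measure used in the paper.

```python
def agreement(auto_words, ref_words):
    """Fraction of distinct reference-summary words also selected automatically."""
    ref = set(ref_words)
    if not ref:
        return 0.0
    hit = sum(1 for w in set(auto_words) if w in ref)
    return hit / len(ref)
```

Set-based overlap ignores word order, which fits this task: the slides stress that content matters more than coherence or flow.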

22 Conclusions  Automatic summaries were perceived as worse than human summaries (duh!)  However, when the summarizer used human-annotated transcripts (as opposed to ASR output), the perceived quality improved significantly
