1 N-best list reranking using higher level phonetic, lexical, syntactic and semantic knowledge sources
Mithun Balakrishna, Dan Moldovan and Ellis K. Cave
Presenter: Hsuan-Sheng Chiu

2 M. Balakrishna, D. Moldovan and E. K. Cave, “N-best list reranking using higher level phonetic, lexical, syntactic and semantic knowledge sources”, ICASSP 2006
Substantial improvements can be gained by applying a strong post-processing mechanism such as reranking, even at a small n-best depth.

3 Proposed architecture
Reduce LVCSR WER by applying these knowledge sources in tandem, so that they complement each other.

4 Features Score of hypothesis

5 Features (cont.)
Phonetic features – SVM Phoneme Class Posterior Probability

6 Features (cont.)
– LVCSR-SVM Phoneme Classification Accuracy Probability

7 Features (cont.)
Lexical features – use the n-best list word boundary information (avoiding string alignment) and score each hypothesis based on the presence of dominant words
Syntactic features – use an immediate-head parser, since n-best reranking does not impose a left-to-right processing constraint
Semantic features – use the semantic parser ASSERT to extract statistical semantic knowledge
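
The lexical feature above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `top_k` cutoff and the toy hypotheses are invented, and "dominant" here simply means words shared by the most hypotheses in the n-best list.

```python
from collections import Counter

def dominant_words(nbest, top_k=2):
    # Words that recur across many hypotheses are treated as "dominant"
    # (top_k is a hypothetical cutoff, not taken from the paper).
    counts = Counter(w for hyp in nbest for w in set(hyp.split()))
    return {w for w, _ in counts.most_common(top_k)}

def lexical_score(hyp, dominant):
    # Fraction of the dominant words that appear in this hypothesis.
    if not dominant:
        return 0.0
    return len(set(hyp.split()) & dominant) / len(dominant)

nbest = ["recognize speech", "wreck a nice beach", "recognize peach"]
dom = dominant_words(nbest)
print(lexical_score("recognize speech", dom))
```

A hypothesis that covers the dominant words receives a higher lexical score than one that shares no words with the rest of the list.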

8 Experimental results
The reranking score is a simple linear weighted combination of the individual scores from each knowledge source.
The proposed reranking mechanism achieves its best WER improvement at the 15-best depth, with a 2.9% absolute WER reduction.
This is not very surprising, since nearly 80% of the total WER improvement achievable by the oracle lies within the 20-best hypotheses.
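
The linear weighted combination can be sketched as below. The feature names, scores, and weights are illustrative placeholders; in the paper the weights would be tuned on held-out data.

```python
def rerank(nbest, weights):
    # Reranking score: linear weighted combination of the per-source
    # scores attached to each hypothesis.
    def combined(features):
        return sum(weights[name] * features.get(name, 0.0) for name in weights)
    return max(nbest, key=lambda item: combined(item[1]))

nbest = [
    ("first hypothesis",  {"acoustic": -120.0, "lm": -35.0, "semantic": 0.2}),
    ("second hypothesis", {"acoustic": -118.0, "lm": -40.0, "semantic": 0.9}),
]
weights = {"acoustic": 1.0, "lm": 1.0, "semantic": 10.0}
best, _ = rerank(nbest, weights)
print(best)
```

With these toy weights the semantic score outweighs the small language-model deficit, so the second hypothesis wins.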

9 Efficient Estimation of Language Model Statistics of Spontaneous Speech via Statistical Transformation Model
Yuya Akita and Tatsuya Kawahara

10 Efficient estimation of language model statistics of spontaneous speech via a statistical transformation model
– Estimate LM statistics of spontaneous speech from a document-style text database
– Follows the machine translation framework, with P(X|Y) as the translation model
– The transformation model is applied to the counts of n-word sequences
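
Applying the transformation model to n-gram counts can be sketched as follows, assuming counts are transformed as N_spoken(x) = Σ_y P(x|y)·N_doc(y). The word pairs and probabilities below are invented for illustration; the real model is estimated from an aligned parallel corpus.

```python
def transform_counts(doc_counts, p_x_given_y):
    # N_spoken(x) = sum over document-style patterns y of P(x | y) * N_doc(y).
    # Patterns with no transformation entry pass through unchanged.
    spoken = {}
    for y, n in doc_counts.items():
        for x, p in p_x_given_y.get(y, {y: 1.0}).items():
            spoken[x] = spoken.get(x, 0.0) + p * n
    return spoken

doc_counts = {"de aru": 100.0, "kaigi": 50.0}           # toy document-style counts
p_x_given_y = {"de aru": {"desu": 0.7, "de aru": 0.3}}  # invented probabilities
spoken_counts = transform_counts(doc_counts, p_x_given_y)
```

Here 100 document-style occurrences of "de aru" are redistributed as 70 expected spoken-style "desu" and 30 "de aru", while "kaigi" is untouched.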

11 SMT-based transformation

12 Three characteristics of spontaneous speech
Insertion of fillers – fillers must be removed from transcripts for documentation
Deletion of postpositional particles – particles indicating the nominative case are often omitted, while possessive-case particles are rarely dropped
Substitution of colloquial expressions – colloquial expressions must always be corrected in document-style text

13 Transformation probability Back-off scheme for POS-based model
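
One way to read the back-off scheme: look up a word-based transformation probability first, and fall back to the POS-based model when the word pair is unseen. The tables, the POS label, and the back-off weight below are all hypothetical placeholders, not values from the paper.

```python
def transform_prob(x, y, pos_of_y, word_model, pos_model, backoff=1.0):
    # Use the word-based probability when the pair (y -> x) was observed
    # in the parallel data; otherwise back off to the POS-based model.
    if y in word_model and x in word_model[y]:
        return word_model[y][x]
    return backoff * pos_model.get(pos_of_y, {}).get(x, 0.0)

word_model = {"de aru": {"desu": 0.7}}            # invented word-based table
pos_model = {"copula": {"desu": 0.5, "da": 0.3}}  # invented POS-based table
p_seen = transform_prob("desu", "de aru", "copula", word_model, pos_model)
p_backoff = transform_prob("da", "de aru", "copula", word_model, pos_model)
```

The observed pair uses its word-based estimate; the unseen pair falls back to the POS-level distribution.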

14 Experimental setup
Document-style text (for baseline model) – minutes of the National Congress of Japan, 71M words
Training data for transformation model – 666K words
Test data – 63K words
Comparison corpus – Corpus of Spontaneous Japanese (CSJ), 2.9M words

15 Experimental results (test set: minutes)

Model                 PP
Baseline              80
Parallel              65.5
Proposed              51.9
Baseline + CSJ        75.9
Proposed + Parallel   43.4
Baseline + Parallel   56.2

OOV rate: 3.74, 1.43, 0.47
Vocabulary size: 30386, 30431, 31019, 30431, 40431
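
Perplexity and OOV rate are the standard LM evaluation measures here; the OOV rate, for instance, is just the percentage of test tokens outside the recognizer vocabulary. A minimal sketch with toy data (the tokens and vocabulary below are invented):

```python
def oov_rate(test_tokens, vocab):
    # Percentage of test tokens not covered by the vocabulary.
    oov = sum(1 for w in test_tokens if w not in vocab)
    return 100.0 * oov / len(test_tokens)

tokens = ["giin", "wa", "eeto", "hatsugen"]  # toy test tokens
vocab = {"giin", "wa", "hatsugen"}           # toy vocabulary
rate = oov_rate(tokens, vocab)               # one of four tokens is OOV
```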

