
1 An Overview of the SPHINX Speech Recognition System
Jie Zhou, Zheng Gong, Lingli Wang, Tiantian Ding
M.Sc. in CMHE, Spoken Language Processing Module
Presentation of the speech recognition system, 27th February 2004

2 Abstract
- SPHINX is a system that demonstrates the feasibility of accurate, large-vocabulary, speaker-independent, continuous speech recognition.
- SPHINX is based on discrete hidden Markov models (HMMs) with LPC-derived parameters.
- Goals: provide speaker independence, deal with coarticulation in continuous speech, and adequately represent a large vocabulary.
- SPHINX attained word accuracies of 71, 94, and 96 percent on a 997-word task.

3 Introduction
SPHINX is a system that tries to overcome three constraints:
1) Speaker dependence
2) Isolated words
3) Small vocabulary

4 Introduction
- Speaker independence: we must train on less appropriate training data, but much more data can be acquired, which may compensate for the less appropriate training material.
- Difficulties of continuous speech recognition: word boundaries are difficult to locate; coarticulatory effects are much stronger in continuous speech; content words are often emphasized, while function words are poorly articulated.
- Large vocabulary: 1000 words or more.

5 Introduction
- To improve speaker independence: present additional knowledge through multiple vector-quantized codebooks, and enhance the recognizer with carefully designed phone models and word duration modeling.
- To deal with coarticulation in continuous speech: function-word-dependent phone models and generalized triphone models.
- SPHINX achieved speaker-independent word recognition accuracies of 71, 94, and 96 percent on the 997-word DARPA resource management task with grammars of perplexity 997, 60, and 20.

6 The baseline SPHINX system
This system uses standard HMM techniques.
Speech processing:
- Sampling rate: 16 kHz
- 20 ms frames, with a new frame every 10 ms (adjacent frames overlap)
- Each frame is multiplied by a Hamming window
- LPC coefficients are computed for each frame
- 12 LPC-derived cepstral coefficients are obtained
- The 12 LPC cepstral coefficients are vector quantized into one of 256 prototype vectors
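To make the front end above concrete, here is a minimal Python sketch (using numpy and scipy) of the 20 ms / 10 ms framing, Hamming windowing, LPC-derived cepstra, and vector quantization against a 256-entry codebook. The LPC order, the cepstral recursion's sign convention, and the `codebook` argument are assumptions for illustration; the slide only specifies the sampling rate, frame timing, 12 cepstral coefficients, and 256 prototype vectors.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(frame, order=14):
    """Autocorrelation-method LPC via the Toeplitz normal equations (order assumed)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])  # a_1..a_p

def lpc_to_cepstrum(a, n_cep=12):
    """Standard recursion from LPC coefficients to LPC cepstral coefficients."""
    c = np.zeros(n_cep)
    for n in range(1, n_cep + 1):
        acc = a[n - 1] if n <= len(a) else 0.0
        for k in range(1, n):
            if n - k <= len(a):
                acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

def frontend(signal, sr=16000, frame_ms=20, hop_ms=10, codebook=None):
    """16 kHz input, 20 ms frames every 10 ms, Hamming window,
    12 LPC cepstra per frame, optionally quantized to a 256-entry codebook."""
    frame_len, hop = int(sr * frame_ms / 1000), int(sr * hop_ms / 1000)
    window = np.hamming(frame_len)
    feats, codes = [], []
    for start in range(0, len(signal) - frame_len + 1, hop):
        cep = lpc_to_cepstrum(lpc(signal[start:start + frame_len] * window))
        feats.append(cep)
        if codebook is not None:  # nearest prototype vector = VQ codeword
            codes.append(int(np.argmin(np.linalg.norm(codebook - cep, axis=1))))
    return np.array(feats), codes
```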

7 Task and Database
The resource management task:
- SPHINX was evaluated on the DARPA resource management task.
- Three grammars of differing difficulty are used with SPHINX: a null grammar (perplexity 997), a word-pair grammar (perplexity 60), and a bigram grammar (perplexity 20).
The TIRM database:
- 80 "training" speakers
- 40 "development test" speakers
- 40 "evaluation" speakers

8 Task and Database
Phonetic hidden Markov models:
- HMMs are parametric models particularly suitable for describing speech events.
- Each HMM represents a phone; a total of 46 phones are used for English.
- {s}: a set of states
- {a_ij}: a set of transitions, where a_ij is the probability of a transition from state i to state j
- {b_ij(k)}: the output probability matrix
- The phonetic HMM topology is shown in the figure.
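As a concrete illustration of the notation above, here is a minimal Python sketch of a discrete phone HMM with states, a transition matrix {a_ij}, and transition-attached output distributions {b_ij(k)} over 256 codewords. The random initialization and the scoring helper are illustrative assumptions; the actual SPHINX phone topology (figure on the next slide) constrains which transitions are allowed.

```python
import numpy as np

class DiscretePhoneHMM:
    """One phone model: states {s}, transitions {a_ij}, and discrete output
    distributions {b_ij(k)} over the 256 VQ codewords, attached to transitions
    as in the slide's notation."""

    def __init__(self, n_states, n_codewords=256, rng=None):
        rng = rng or np.random.default_rng(0)
        # each row of A sums to 1; each B[i, j] is a distribution over codewords
        self.A = rng.dirichlet(np.ones(n_states), size=n_states)                  # a_ij
        self.B = rng.dirichlet(np.ones(n_codewords), size=(n_states, n_states))   # b_ij(k)

    def path_log_prob(self, states, codes):
        """Log-probability of a codeword sequence along a fixed state path."""
        logp = 0.0
        for (i, j), k in zip(zip(states[:-1], states[1:]), codes):
            logp += np.log(self.A[i, j]) + np.log(self.B[i, j, k])
        return logp
```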

9 Phonetic HMM topology (figure)

10 Task and Database
Training:
- A set of 46 phone models was used to initialize the parameters.
- The forward-backward algorithm was run on the resource management training sentences.
- A sentence model is created from word models, which are in turn concatenated from phone models.
- The trained transition probabilities are used directly in recognition.
- The output probabilities are smoothed with a uniform distribution.
- The SPHINX recognition search is a standard time-synchronous Viterbi beam search.
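The search mentioned in the last bullet can be sketched as follows. This is a toy, flat state-graph version of a time-synchronous Viterbi beam search over VQ codeword observations; the real SPHINX search runs over a network of word and phone models under a grammar, so the graph construction and the beam width here are assumptions for illustration.

```python
import numpy as np

def viterbi_beam(log_trans, log_out, codes, beam=200.0):
    """Time-synchronous Viterbi beam search over a single state graph.
    log_trans[i, j]  : log a_ij (-inf for disallowed transitions)
    log_out[i, j, k] : log b_ij(k) for VQ codeword k
    At each frame, only states scoring within `beam` of the best survive."""
    n_states = log_trans.shape[0]
    scores = np.full(n_states, -np.inf)
    scores[0] = 0.0                          # start in the initial state
    backptrs = []
    for k in codes:                          # one codeword per 10 ms frame
        new = np.full(n_states, -np.inf)
        ptr = np.zeros(n_states, dtype=int)
        active = np.where(scores >= scores.max() - beam)[0]   # beam pruning
        for i in active:
            cand = scores[i] + log_trans[i] + log_out[i, :, k]
            better = cand > new
            new[better], ptr[better] = cand[better], i
        scores = new
        backptrs.append(ptr)
    # trace the best path back from the highest-scoring final state
    path = [int(np.argmax(scores))]
    for ptr in reversed(backptrs):
        path.append(int(ptr[path[-1]]))
    return list(reversed(path)), float(scores.max())
```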

11 Task and Database
- The results with the baseline SPHINX system, using 15 new speakers with 10 sentences each for evaluation, are shown in Table I.
- The baseline system is inadequate for any realistic large-vocabulary application without incorporating knowledge and contextual modeling.

12 Adding knowledge to SPHINX
- Fixed-width speech parameters
- Lexical/phonological improvements
- Word duration modeling
- Results

13 Fixed-Width Speech Parameters
- Bilinear transform on the cepstrum coefficients
- Differenced cepstrum coefficients
- Power and differenced power
- Integrating fixed-width parameters in multiple codebooks
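A minimal sketch of the differenced-cepstrum, power, and differenced-power streams listed above, assuming the 12 LPC cepstra and a per-frame energy are already available. The difference span of ±2 frames and the log-power floor are assumptions; the slide names the feature streams but not these details. Each resulting stream would then be vector quantized into its own codebook.

```python
import numpy as np

def add_dynamic_features(cepstra, frame_energy, span=2):
    """Differenced cepstra, log power, and differenced log power from a
    cepstral sequence (frames x 12) and per-frame energy; `span` is assumed."""
    padded = np.pad(cepstra, ((span, span), (0, 0)), mode="edge")
    delta_cep = padded[2 * span:] - padded[:-2 * span]        # c[t+span] - c[t-span]
    power = np.log(np.maximum(frame_energy, 1e-10))           # floored log power
    padded_p = np.pad(power, span, mode="edge")
    delta_power = padded_p[2 * span:] - padded_p[:-2 * span]
    # each stream is quantized against its own codebook downstream
    return delta_cep, power, delta_power
```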

14 Lexical/Phonological Improvements
- This set of improvements involved modifying the phone set and the pronunciation dictionary. These changes lead to more accurate assumptions about how words are articulated, without changing the assumption that each word has a single pronunciation.
- The first step was to replace each baseform pronunciation with the most likely pronunciation.
- To improve the appropriateness of the word pronunciation dictionary, a small set of rules was created to: modify closure-stop pairs into optional compound phones when appropriate; modify /t/'s and /d/'s into /dx/ when appropriate; reduce nasal /t/'s when appropriate; and perform other mappings such as /t s/ to /ts/ (a sketch of such dictionary rewriting follows below).
- Finally, there is the issue of which HMM topology is optimal for phones in general, and which topology is optimal for each phone.
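As promised above, here is a minimal Python sketch of rule-based pronunciation rewriting of the kind described. The phone symbols, the vowel set, and the exact triggering contexts are illustrative assumptions, not the actual SPHINX rule set.

```python
# Illustrative phone inventory fragment; not the SPHINX phone set.
VOWELS = {"aa", "ae", "ah", "ao", "eh", "er", "ih", "iy", "ow", "uw"}

def rewrite_pronunciation(phones):
    """Apply simple dictionary rewrite rules to a list of phone symbols."""
    out, i = [], 0
    while i < len(phones):
        pair = phones[i:i + 2]
        # closure-stop pairs become optional compound phones, e.g. "tcl t" -> "tcl-t"
        if len(pair) == 2 and pair[0].endswith("cl") and pair[0][0] == pair[1]:
            out.append(pair[0] + "-" + pair[1]); i += 2; continue
        # /t s/ merges into the compound phone /ts/
        if pair == ["t", "s"]:
            out.append("ts"); i += 2; continue
        # intervocalic /t/ or /d/ becomes the flap /dx/
        if (phones[i] in {"t", "d"} and 0 < i < len(phones) - 1
                and phones[i - 1] in VOWELS and phones[i + 1] in VOWELS):
            out.append("dx"); i += 1; continue
        out.append(phones[i]); i += 1
    return out

# e.g. rewrite_pronunciation("l ae t er".split()) -> ["l", "ae", "dx", "er"]
```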

15 Word Duration Modeling
- HMMs model the duration of events with transition probabilities, which leads to a geometric distribution for the duration of state residence.
- We incorporated word duration into SPHINX as part of the Viterbi search. The duration of a word is modelled by a univariate Gaussian distribution, with the mean and variance estimated from a supervised Viterbi segmentation of the training set.
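A minimal sketch of how such a Gaussian word-duration score could enter the Viterbi search: the log of the word's duration Gaussian is added to the path score when a path leaves the word. The `weight` balancing the duration score against the acoustic score is an assumed tuning knob, not something stated on the slide.

```python
import numpy as np

def duration_log_penalty(duration_frames, mean, var, weight=1.0):
    """Log univariate Gaussian duration score for one word, added to the path
    score when the search exits the word; `weight` is an assumed scale factor."""
    return weight * (-0.5 * np.log(2 * np.pi * var)
                     - 0.5 * (duration_frames - mean) ** 2 / var)

# During the Viterbi search, when a path exits word w after d frames:
#   path_score += duration_log_penalty(d, mean[w], var[w])
# with mean[w] and var[w] estimated from a supervised Viterbi segmentation
# of the training set, as the slide describes.
```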

16 Results
- We have presented various strategies for adding knowledge to SPHINX.
- Consistent with earlier results, we found that bilinear-transformed coefficients improved the recognition rates. An even greater improvement came from the use of differenced coefficients, power, and differenced power in three separate codebooks.
- Next, we enhanced the dictionary and the phone set, a step that led to an appreciable improvement.
- Finally, the addition of durational information significantly improved SPHINX's accuracy when no grammar was used, but was not helpful with a grammar.

17 Context Modeling in SPHINX
- Previously proposed units of speech
- Function-word-dependent phones
- Generalized triphones
- Smoothing detailed models

18 Previously Proposed Units of Speech
- Because there is no sharing across words, word models are not practical for large-vocabulary speech recognition.
- To improve trainability, some subword unit has to be used.
- Word-dependent phones: a compromise between word modeling and phone modeling.
- Context-dependent phones: triphone models; instead of modeling phone-in-word, they model phone-in-context.

19 Function-Word-Dependent Phones
- Function words are particularly problematic in continuous speech recognition because they are typically unstressed.
- The phones in function words are distorted.
- Function-word-dependent phones are the same as word-dependent phones, except they are used only for function words.

20 Generalized Triphones
- Triphone models are sparsely trained and consume substantial memory.
- Combining similar triphones improves trainability and reduces memory storage.
- Generalized triphones are created by merging contexts with an agglomerative clustering procedure.
- To determine the similarity between two models, we use the following distance metric (the formula appears as a figure on the original slide).
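The distance metric itself is shown only as a figure on the original slide and is not in this transcript. As a hedged reconstruction from the cited Lee et al. overview paper, the criterion is, to our reading, a likelihood ratio between keeping the two output distributions separate and merging them, roughly of the form:

```latex
d(a,b) \;=\;
\frac{\Bigl(\prod_{k} P_a(k)^{\,C_a(k)}\Bigr)\Bigl(\prod_{k} P_b(k)^{\,C_b(k)}\Bigr)}
     {\prod_{k} P_m(k)^{\,C_m(k)}}
```

where P_a(k) is the output probability of codeword k in model a, C_a(k) the corresponding training count, and m the model obtained by merging a and b; the pair whose merge loses the least likelihood is combined first. Treat the exact form as an assumption to be checked against the cited paper.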

21 Generalized Triphones
- In measuring the distance between two models, we consider only the output probabilities and ignore the transition probabilities, which are of secondary importance.
- This context generalization algorithm provides an ideal means of finding the equilibrium between trainability and sensitivity.

22 Smoothing Detailed Models
- Detailed models are accurate but less robust, since many output probabilities will be zero, which can be disastrous for recognition.
- The remedy is to combine these detailed models with other, more robust ones.
- An ideal solution for weighting different estimates of the same event is deleted interpolated estimation.
- Procedure: combine the detailed models with robust models, and use the uniform distribution to smooth the distribution.
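A minimal sketch of the smoothing step described above: weights for combining a detailed output distribution, a robust (e.g. context-independent) one, and the uniform distribution are estimated by maximizing the likelihood of held-out counts with EM. Using a single held-out set, rather than rotating deleted blocks and averaging, is a simplification, and the function names are assumptions.

```python
import numpy as np

def deleted_interpolation_weights(held_out_counts, dists, n_iter=50):
    """EM estimate of interpolation weights: choose lambdas that maximize the
    likelihood of held-out codeword counts under a mixture of the candidate
    distributions (detailed, robust, uniform)."""
    dists = np.asarray(dists, dtype=float)           # shape (n_models, n_codewords)
    lambdas = np.full(len(dists), 1.0 / len(dists))
    for _ in range(n_iter):
        mix = lambdas @ dists                        # current mixture distribution
        # expected share of each held-out count attributed to each component
        resp = lambdas[:, None] * dists / np.maximum(mix, 1e-30)
        expected = resp @ held_out_counts
        lambdas = expected / expected.sum()
    return lambdas

def smoothed_output_distribution(detailed, robust, lambdas, n_codewords=256):
    """Interpolate a sparse detailed distribution with a robust one and a
    uniform floor, using weights from deleted interpolation."""
    uniform = np.full(n_codewords, 1.0 / n_codewords)
    return lambdas[0] * detailed + lambdas[1] * robust + lambdas[2] * uniform
```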

23 Entire training procedure
A summary of the entire training procedure is illustrated in Figure 2.

24 Summary of Results
The six versions correspond to the following descriptions, with incremental improvements:
1) the baseline system, which uses only LPC cepstral parameters in one codebook;
2) the addition of differenced LPC cepstral coefficients, power, and differenced power in one codebook;
3) all four feature sets used in three separate codebooks;
4) tuning of the phone models and the pronunciation dictionary, and the use of word duration modelling;
5) function-word-dependent phone modelling;
6) generalized triphone modelling.

25 Results of five versions of SPHINX (table)

26 Conclusion
- Given a fixed amount of training data, model specificity and model trainability are two incompatible goals.
- More specificity usually reduces trainability, and increased trainability usually results in over-generality.
- Our work lies in finding an equilibrium between specificity and trainability.

27 Conclusion
- To improve trainability, we used one of the largest speaker-independent speech databases.
- To facilitate sharing between models, we used deleted interpolation to combine robust models with detailed ones, improving trainability by combining poorly trained models with well-trained models.
- To improve specificity, we used multiple codebooks of various LPC-derived features and integrated external knowledge sources into the system.
- We improved the phone set to include multiple representations of some phones, and introduced function-word-dependent phone modeling and generalized triphone modeling.

28 References
- Kai-Fu Lee, Hsiao-Wuen Hon, and Raj Reddy, "An Overview of the SPHINX Speech Recognition System," IEEE, 1989.
- Kai-Fu Lee, Hsiao-Wuen Hon, Mei-Yuh Hwang, Sanjoy Mahajan, and Raj Reddy, "The SPHINX Speech Recognition System," 1989.

29 Thank you very much!

