
Slide 1: Part 6, HMM in Practice (CSE717, Spring 2008, CUBS, Univ at Buffalo)

Slide 2: Practical Problems in the HMM
- Computation with Probabilities
- Configuration of HMM
- Robust Parameter Estimation (Feature Optimization, Tying)
- Efficient Model Evaluation (Beam Search, Pruning)

Slide 3: Computation with Probabilities
- Logarithmic Probability Representation
- Lower Bounds for Probabilities
- Codebook for Semi-Continuous HMMs

Slide 4: Probability of State Sequence s for a Given Model λ
The probability of a state sequence s = (s_1, ..., s_T) is the product of the transition probabilities along the path: P(s | λ) = ∏_{t=1}^{T} a_{s_{t-1}, s_t}. If all transition probabilities are on the order of 0.5, then for a sequence of length T > 100 the product falls below 2^{-100} ≈ 10^{-30}, and for realistic sequence lengths it quickly underflows the floating-point range.
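The underflow problem is easy to demonstrate with a short sketch (the sequence length and per-step probability below are illustrative values, not from the slides):

```python
import math

# Multiplying many per-step probabilities underflows to zero,
# while summing their logarithms stays well-behaved.
T = 5000          # illustrative sequence length
p_step = 0.5      # illustrative per-step transition probability

prob = 1.0
for _ in range(T):
    prob *= p_step              # direct product: underflows to 0.0

log_prob = T * math.log(p_step)  # log domain: a moderate negative number

print(prob)       # 0.0 (underflow)
print(log_prob)   # about -3465.7
```

This is exactly why the following slides switch to a logarithmic probability representation.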

Slide 5: Logarithm Transformation
Probabilities are represented by their logarithms, so products become sums: log(p1 · p2) = log p1 + log p2. This keeps all quantities in a numerically safe range; in practice, negative log probabilities ("costs") are often used.

Slide 6: Kingsbury-Rayner Formula
Adding probabilities in the log domain requires the Kingsbury-Rayner formula: log(p1 + p2) = log p1 + log(1 + exp(log p2 − log p1)), evaluated with p1 ≥ p2 so that the argument of the exponential is never positive and cannot overflow.
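A minimal sketch of log-domain addition along these lines (the function name is my own):

```python
import math

def log_add(log_p1: float, log_p2: float) -> float:
    """Return log(p1 + p2) given log p1 and log p2 (Kingsbury-Rayner).

    The larger argument is factored out, so the exponent is <= 0 and
    can never overflow; exp of a very negative number harmlessly
    underflows to 0, in which case log1p(0) = 0.
    """
    if log_p1 < log_p2:
        log_p1, log_p2 = log_p2, log_p1
    return log_p1 + math.log1p(math.exp(log_p2 - log_p1))

# Example: log(0.25 + 0.25) should equal log(0.5)
result = log_add(math.log(0.25), math.log(0.25))
print(abs(result - math.log(0.5)) < 1e-12)  # True
```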

Slide 7: Mixture Density Model
- The Kingsbury-Rayner formula is not advisable here: summing over many mixture components would require too many exp and log evaluations.
- Approximation: replace the log of the sum over components by its largest term, log Σ_k c_k p_k(x) ≈ max_k (log c_k + log p_k(x)).
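A sketch of the max approximation with toy numbers of my own choosing: because the mixture sum is usually dominated by its largest component, the maximum of log c_k + log p_k is a cheap and typically tight lower bound on the exact log-sum.

```python
import math

# Toy mixture for a single observation x (illustrative values):
log_weights = [math.log(0.6), math.log(0.3), math.log(0.1)]
log_densities = [-2.0, -9.0, -15.0]   # component log-likelihoods log p_k(x)

log_terms = [lw + ld for lw, ld in zip(log_weights, log_densities)]

exact = math.log(sum(math.exp(t) for t in log_terms))  # exact log-sum
approx = max(log_terms)                                # max approximation

print(exact, approx)   # the two values are close; approx <= exact
```

The approximation is a lower bound because dropping all but the largest term can only decrease the sum.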

Slide 8: Lower Bounds for Probabilities
- Choose a minimal probability as a floor for all (output) probabilities.
- In training, the floor ensures that states with vanishing probabilities are still considered for parameter estimation.
- In decoding, the floor prevents paths through states with vanishing output probabilities from being immediately discarded.
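A minimal sketch of probability flooring; the concrete floor value below is an assumption for illustration, not taken from the slides:

```python
import math

P_FLOOR = 1e-10   # hypothetical lower bound on any output probability

def floored_log(p: float) -> float:
    """Return log(max(p, P_FLOOR)).

    Replacing values below the floor before taking the logarithm
    keeps log(0) = -inf out of the computation, so the corresponding
    states and paths remain usable.
    """
    return math.log(max(p, P_FLOOR))

print(floored_log(0.0))   # log(1e-10), about -23.0, instead of -inf
print(floored_log(0.5))   # unaffected, about -0.693
```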

Slide 9: Codebook Evaluation for Semi-Continuous HMMs
In a semi-continuous HMM, all states share a common codebook of component densities; the state-specific output densities differ only in their mixture weights: b_j(x) = Σ_k c_{jk} p(x | k).

Slide 10: Codebook Evaluation for Semi-Continuous HMMs
By Bayes' rule, p(x | k) = P(k | x) p(x) / P(k). If the priors P(k) are assumed to be approximately uniform, then b_j(x) ∝ p(x) Σ_k c_{jk} P(k | x); since p(x) is identical for all states, the output densities can be compared using only the posteriors P(k | x).

Slide 11: Codebook Evaluation for Semi-Continuous HMMs
This reduces the dynamic range of all quantities involved: the posteriors lie in [0, 1] and sum to one over the codebook, whereas raw density values can vary over many orders of magnitude.
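A sketch of the dynamic-range effect with assumed toy density values: normalizing codebook densities into posteriors P(k | x) compresses them into [0, 1].

```python
# Raw codebook densities p(x | k) for one observation (illustrative
# values spanning many orders of magnitude):
densities = [3.2e-12, 7.9e-4, 5.1e-7]

# Under (approximately) uniform priors P(k), the posteriors are just
# the normalized densities.
total = sum(densities)
posteriors = [d / total for d in densities]

print(posteriors)        # each value in [0, 1]
print(sum(posteriors))   # 1.0 (up to floating-point rounding)
```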

Slide 12: Configuration of HMM
- Model Topology
- Modularization
- Compound Models
- Modeling Emissions

Slide 13: Model Topology
- Input data in speech and handwriting recognition exhibit a chronological, linear structure.
- A fully connected (ergodic) model is therefore not necessary.

Slide 14: Linear Model
- The simplest model that describes chronological sequences.
- Only transitions to the next state and back to the current state (self-loops) are allowed.

Slide 15: Bakis Model
- Skipping of states is additionally allowed.
- Larger flexibility in the modeling of duration.
- Widely used in speech and handwriting recognition.

Slide 16: Left-to-Right Model
- An arbitrary number of states may be skipped in the forward direction.
- Jumping back to "past" states is not allowed.
- Can describe larger variations in the temporal structure; longer parts of the data may be missing.
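The three topologies above can be sketched as transition-structure masks for a small HMM (the 5-state size is illustrative; 1 marks an allowed transition, and none of the topologies permits jumps back to past states):

```python
N = 5  # illustrative number of states

# linear: self-loop and next state only
linear = [[1 if j in (i, i + 1) else 0 for j in range(N)] for i in range(N)]

# Bakis: additionally allows skipping one state
bakis = [[1 if i <= j <= i + 2 else 0 for j in range(N)] for i in range(N)]

# left-to-right: any forward jump is allowed
left_to_right = [[1 if j >= i else 0 for j in range(N)] for i in range(N)]

for name, mask in [("linear", linear), ("Bakis", bakis),
                   ("left-to-right", left_to_right)]:
    print(name)
    for row in mask:
        print(row)
```

In a real model, each row of allowed transitions would carry probabilities summing to one; the masks only show which entries may be nonzero.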

Slide 17: Modularization
- English word recognition with whole-word models: thousands of words mean thousands of word models and require a large amount of training data.
- With 26 letters, only a limited number of character models is needed.
- Modularization divides a complex model into smaller models of segmentation units: word -> subword -> character.

Slide 18: Variation of Segmentation Units in Different Contexts
- Phonetic transcription of the word "speech": /spitS/. The realization of a unit such as /tS/ cannot easily be distinguished from its occurrences in achieve (/@tSiv/), cheese (/tSiz/), or reality (/riEl@ti/).
- Triphone [Schwartz, 1984]: three immediately neighboring phone units taken as one segmentation unit, e.g. p/i/t (the phone /i/ with left context /p/ and right context /t/).
- Modeling each unit together with its context eliminates the dependence of the unit's variability on the context.

Slide 19: Compound Models
- HMM structure for isolated word recognition: parallel connection of all individual word models.
- Circles: model states; squares: non-emitting states.
- HMM structure for connected word recognition.

Slide 20: Grammar Coded into HMM

Slide 21: Modeling Emissions
- Continuous feature vectors in speech and handwriting recognition are described by mixture models.
- The size of the codebook and the number of component densities per mixture density need to be decided.
- There is no general rule; it is a compromise between the precision of the model, its generalization capability, and the computation time.
- Semi-continuous model: codebook sizes from a few hundred up to a few thousand densities.
- Mixture model: 8 to 64 component densities.

Slide 22: References
[1] R. Schwartz, Y. Chow, S. Roucos, M. Krasner, and J. Makhoul, "Improved hidden Markov modelling of phonemes for continuous speech recognition," in Proc. International Conference on Acoustics, Speech and Signal Processing, pp. 35.6.1-35.6.4, 1984.

Slide 23: Robust Parameter Estimation
- Feature Optimization
- Tying

Slide 24: Feature Optimization Techniques
