# HMM-BASED PATTERN DETECTION



Outline  Markov Process  Hidden Markov Models Elements Basic Problems Evaluation Optimization Training Implementation 2-D HMM  Application  Simulation and Results

Markov Process  Can be described at any time to be in one state among N distinct states  Its probabilistic description just requires a fixed specification of current and previous states actual state at time t state transition probability  Each state corresponds to a physical (observable) event  Too restrictive for sophisticated applications S1S1 S2S2 S3S3 a 31

## Extension to Hidden Markov Models

- An HMM is a conditionally independent observation process on top of a Markov chain: the state sequence is hidden, and each observation depends only on the current state.
- States correspond to clusters of context with similar distributions.
- Elements of an HMM:
  - the state transition probabilities A = {a_ij},
  - the observation symbol probabilities B = {b_j(k)} in each state,
  - the initial state distribution π.
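The three elements listed above can be written down concretely. A toy sketch with 2 hidden states and a 3-symbol discrete alphabet follows; all numeric values are assumed for illustration.

```python
import numpy as np

# The three elements of a discrete HMM, lambda = (A, B, pi).
A = np.array([[0.6, 0.4],           # state transition probabilities a_ij
              [0.3, 0.7]])
B = np.array([[0.5, 0.4, 0.1],      # observation symbol probabilities b_j(k)
              [0.1, 0.3, 0.6]])
pi = np.array([0.8, 0.2])           # initial state distribution

# Sanity checks: each row of A and B, and pi itself, is a probability distribution.
row_sums_ok = (np.allclose(A.sum(axis=1), 1.0)
               and np.allclose(B.sum(axis=1), 1.0)
               and np.isclose(pi.sum(), 1.0))
```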

## Fundamental Problems for HMMs

- Evaluation: computing the probability of the observation sequence O = O_1 O_2 … O_T given the model, P(O|λ).
- Optimization: choosing the optimal state sequence given the observation and the model.
- Training: estimating the model parameters to maximize P(O|λ).

## Evaluating the Model: the Forward-Backward Algorithm

- Direct evaluation of P(O|λ) by enumerating all state sequences costs on the order of 2T·N^T operations; the forward-backward procedure reduces this to the order of N²·T.
- Forward variable: α_t(i) = P(O_1 O_2 … O_t, q_t = S_i | λ).
- Backward variable: β_t(i) = P(O_{t+1} O_{t+2} … O_T | q_t = S_i, λ).
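The two recursions can be sketched directly from the definitions above. The toy model parameters below are assumed for illustration; note that both passes give the same P(O|λ).

```python
import numpy as np

def forward(A, B, pi, obs):
    """Forward pass: alpha[t, i] = P(O_1..O_t, q_t = S_i | lambda).
    Cost is O(N^2 T), versus O(2 T N^T) for direct enumeration."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                  # initialization
    for t in range(1, T):                         # induction
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    """Backward pass: beta[t, i] = P(O_{t+1}..O_T | q_t = S_i, lambda)."""
    T, N = len(obs), A.shape[0]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

# Toy 2-state model (illustrative numbers, not from the slides).
A = np.array([[0.6, 0.4], [0.3, 0.7]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
pi = np.array([0.8, 0.2])
obs = [0, 1, 2]

alpha = forward(A, B, pi, obs)
beta = backward(A, B, obs)
p_forward = alpha[-1].sum()                       # P(O | lambda) via the forward pass
p_backward = (pi * B[:, obs[0]] * beta[0]).sum()  # the same probability via the backward pass
```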

## Optimal State Sequence: Solution(s)

- One solution: choose the states that are individually most likely. But this need not produce a valid state sequence: two adjacent "best" states may have zero transition probability between them.
- Viterbi algorithm: find the single best state sequence that maximizes P(Q|O, λ).
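The Viterbi recursion can be sketched as follows, reusing the toy 2-state model from before (the numbers are assumed for illustration). Working in log-probabilities avoids underflow on long sequences.

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """Single best state sequence Q* maximizing P(Q | O, lambda)."""
    T, N = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.zeros((T, N))              # best log-score of any path ending in state i at time t
    psi = np.zeros((T, N), dtype=int)     # backpointers to the best predecessor
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA   # scores[i, j]: come from i, go to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    # Backtrack from the best final state.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

A = np.array([[0.6, 0.4], [0.3, 0.7]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
pi = np.array([0.8, 0.2])
best_path = viterbi(A, B, pi, obs=[0, 0, 2, 2])
```

Unlike the per-state optimum, the backtracking step guarantees the returned sequence is consistent with the transition structure.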

## Training the Model

The model parameters λ = (A, B, π) are re-estimated iteratively from the observation sequence (in the standard formulation, via the Baum-Welch / EM re-estimation formulas), so that P(O|λ) is non-decreasing at every step.
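One re-estimation step can be sketched from the forward and backward variables; the slide's own equations did not survive extraction, so this follows the standard Baum-Welch formulas with an assumed toy model.

```python
import numpy as np

def forward(A, B, pi, obs):
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    T, N = len(obs), A.shape[0]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def baum_welch_step(A, B, pi, obs):
    """One EM re-estimation of (A, B, pi); each step cannot decrease P(O|lambda)."""
    T, N = len(obs), len(pi)
    alpha, beta = forward(A, B, pi, obs), backward(A, B, obs)
    p_obs = alpha[-1].sum()
    gamma = alpha * beta / p_obs          # gamma[t, i] = P(q_t = S_i | O, lambda)
    xi = np.zeros((T - 1, N, N))          # xi[t, i, j] = P(q_t = S_i, q_{t+1} = S_j | O, lambda)
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1] / p_obs
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):           # expected symbol counts per state
        new_B[:, k] = gamma[np.array(obs) == k].sum(axis=0) / gamma.sum(axis=0)
    return new_A, new_B, new_pi

A = np.array([[0.6, 0.4], [0.3, 0.7]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
pi = np.array([0.8, 0.2])
obs = [0, 0, 1, 2, 2, 1, 0]
A2, B2, pi2 = baum_welch_step(A, B, pi, obs)
```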

## Continuous Observation Distributions

- In most applications (speech, images, …), observations cannot be characterized as discrete symbols from a finite alphabet and must instead be modeled by a probability density function (PDF).
- The most general representation of the PDF is a finite mixture of normal distributions, with different means and variances for each state.
- Training then estimates the mixture means and variances instead of the discrete probabilities b_j(k).
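The per-state density b_j(o) as a finite mixture of normals can be sketched like this; the mixture weights, means, and variances below are assumed for illustration.

```python
import numpy as np

def gaussian_pdf(o, mu, var):
    """Density of a scalar normal N(mu, var) at o."""
    return np.exp(-(o - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def mixture_density(o, weights, means, variances):
    """b_j(o) = sum_m c_jm * N(o; mu_jm, sigma_jm^2) for one state j,
    replacing the discrete symbol table b_j(k)."""
    return sum(c * gaussian_pdf(o, m, v)
               for c, m, v in zip(weights, means, variances))

weights = [0.6, 0.4]        # mixture coefficients c_jm (must sum to 1)
means = [0.0, 3.0]          # means mu_jm, estimated during training
variances = [1.0, 2.0]      # variances sigma_jm^2, estimated during training

density = mixture_density(1.5, weights, means, variances)
```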

## Implementation Considerations

- Scaling: the dynamic range of α and β will exceed the precision range of any machine, so they must be rescaled at each time step.
- Multiple observation sequences should be used for training.
- Initial estimation of the HMM parameters: good initial values of the observation PDFs are very helpful for convergence.
- Choice of model, number of states, and choice of observation PDF.
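The scaling point can be made concrete: normalizing α at every step and accumulating the logs of the scaling factors recovers log P(O|λ) without underflow. The toy model is assumed for illustration.

```python
import numpy as np

def scaled_forward(A, B, pi, obs):
    """Forward pass with per-step scaling: each alpha_hat[t] sums to 1,
    and log P(O|lambda) = sum_t log(c_t), so no term ever underflows."""
    T, N = len(obs), len(pi)
    alpha_hat = np.zeros((T, N))
    log_prob = 0.0
    for t in range(T):
        v = pi * B[:, obs[t]] if t == 0 else (alpha_hat[t - 1] @ A) * B[:, obs[t]]
        c = v.sum()                 # scaling factor c_t
        alpha_hat[t] = v / c
        log_prob += np.log(c)
    return alpha_hat, log_prob

A = np.array([[0.6, 0.4], [0.3, 0.7]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
pi = np.array([0.8, 0.2])

# On a 1000-step sequence the unscaled alpha would underflow to 0;
# the scaled version returns a finite log-probability.
rng = np.random.default_rng(1)
obs = rng.integers(0, 3, size=1000)
alpha_hat, log_prob = scaled_forward(A, B, pi, obs)
```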

## Two-Dimensional HMM

- A set of Markovian sub-states within each super-state.
- The transition probability of a block's state depends on its top and left neighbors, i.e. on the states S_{i-1,j} and S_{i,j-1} for block (i, j).
- Useful for segmentation.

(Slide figure: a grid of image blocks showing S_{i,j} with its neighbors S_{i-1,j} and S_{i,j-1}, and the sub-state/super-state hierarchy.)
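The neighbor-conditioned transition structure can be sketched very roughly. This is a heavily simplified sketch, not the full 2-D HMM from the slides: the number of states, the transition tensor values, and the treatment of border blocks (a dummy state 0) are all assumptions.

```python
import numpy as np

# Transition tensor a[k, l, m] = P(s_{i,j} = m | s_{i-1,j} = k, s_{i,j-1} = l):
# the state of block (i, j) is conditioned on its top and left neighbors.
N = 3                                   # number of states (assumed)
rng = np.random.default_rng(0)
a = rng.random((N, N, N))
a /= a.sum(axis=2, keepdims=True)       # normalize over the next state m

def sample_grid(a, rows, cols, rng):
    """Generate a grid of block states row by row under the 2-D transition model.
    Border blocks use state 0 as a dummy neighbor, an assumption for this sketch."""
    grid = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):
            top = grid[i - 1, j] if i > 0 else 0
            left = grid[i, j - 1] if j > 0 else 0
            grid[i, j] = rng.choice(len(a), p=a[top, left])
    return grid

grid = sample_grid(a, rows=4, cols=4, rng=rng)
```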

## Application: Pattern Detection

(Slide figures: detection examples at SNR = -5 and SNR = 10.)

Simulations  Feature Vector: DCT Coefficients or their averages over some of them Block Size: 16*16  Both images in training set and test set have different rotation of “jinc”s, but the distance and center of them are fixed.  Running K-means Clustering Algorithm For initial estimation  Comparing with template matching and Learning Vector Quantization  Distance measure for LVQ: is the computed variance of each coefficients in reference centroid Average of Absolute value of the Coefficients

## Results and Conclusion

(Slide figure: detection error results.)

