
Part 1 Markov Models for Pattern Recognition – Introduction CSE717, SPRING 2008 CUBS, Univ at Buffalo.


1 Part 1 Markov Models for Pattern Recognition – Introduction CSE717, SPRING 2008 CUBS, Univ at Buffalo

2 Textbook Markov models for pattern recognition: from theory to applications by Gernot A. Fink, 1st Edition, Springer, Nov 2007

3 Textbook Contents
- Foundation of mathematical statistics
- Vector quantization and mixture density models
- Markov models: the hidden Markov model (HMM) – model formulation, classic algorithms in the HMM, application domains of the HMM – and n-gram models
- Systems: character and handwriting recognition, speech recognition, analysis of biological sequences

4 Preliminary Requirements
- Familiarity with probability theory and statistics
- Basic concepts in stochastic processes

5 Part 2a Foundation of Probability Theory, Statistics & Stochastic Processes CSE717, SPRING 2008 CUBS, Univ at Buffalo

6 Coin Toss Problem The coin toss result X is a random variable; 'head' and 'tail' are its states, and S_X = {head, tail} is the set of states. Probabilities: Pr_X(head) + Pr_X(tail) = 1 (for a fair coin, Pr_X(head) = Pr_X(tail) = 1/2).

7 Discrete Random Variable A discrete random variable's states are discrete: natural numbers, integers, etc. It is described by the probabilities of its states Pr_X(X = s_1), Pr_X(X = s_2), …, where s_1, s_2, … are the discrete states (possible values of X). The probabilities over all states add up to 1: Σ_i Pr_X(X = s_i) = 1.

8 Continuous Random Variable A continuous random variable's states are continuous: real numbers, etc. It is described by its probability density function (p.d.f.) p_X(s). The probability of a < X < b is obtained by integrating the p.d.f.: Pr(a < X < b) = ∫_a^b p_X(s) ds.
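This relation can be checked numerically. The sketch below uses a hypothetical exponential p.d.f., p_X(s) = e^(−s) for s ≥ 0 (the density and the trapezoidal integrator are illustrative choices, not from the slides):

```python
import math

def exp_pdf(s):
    # Hypothetical example p.d.f.: exponential with rate 1, p_X(s) = e^-s for s >= 0
    return math.exp(-s) if s >= 0 else 0.0

def prob_between(pdf, a, b, steps=10_000):
    # Trapezoidal approximation of Pr(a < X < b) = integral of p_X(s) over [a, b]
    h = (b - a) / steps
    total = 0.5 * (pdf(a) + pdf(b))
    total += sum(pdf(a + i * h) for i in range(1, steps))
    return total * h

# Pr(0 < X < 1) for this density is 1 - e^-1
print(round(prob_between(exp_pdf, 0.0, 1.0), 4))  # 0.6321
```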

9 Joint Probability and Joint p.d.f. Joint probability of discrete random variables: Pr_XY(X = s, Y = t). Joint p.d.f. of continuous random variables: p_XY(s, t). Independence condition: Pr_XY(X = s, Y = t) = Pr_X(X = s) Pr_Y(Y = t) for all s, t (and p_XY(s, t) = p_X(s) p_Y(t) in the continuous case).

10 Conditional Probability and p.d.f. Conditional probability of discrete random variables: Pr(X = s | Y = t) = Pr(X = s, Y = t) / Pr(Y = t). Conditional p.d.f. for continuous random variables: p_X|Y(s | t) = p_XY(s, t) / p_Y(t).

11 Statistics: Expected Value and Variance For a discrete random variable: E[X] = Σ_i s_i Pr_X(X = s_i), Var[X] = E[(X − E[X])²] = Σ_i (s_i − E[X])² Pr_X(X = s_i). For a continuous random variable: E[X] = ∫ s p_X(s) ds, Var[X] = ∫ (s − E[X])² p_X(s) ds.
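The discrete formulas can be sketched with a fair six-sided die as the random variable (the die example is an assumption for illustration, not from the slides):

```python
# Fair six-sided die as an illustrative discrete random variable
states = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

# E[X] = sum_i s_i * Pr_X(X = s_i)
expected = sum(s * p for s, p in zip(states, probs))
# Var[X] = sum_i (s_i - E[X])^2 * Pr_X(X = s_i)
variance = sum((s - expected) ** 2 * p for s, p in zip(states, probs))

print(round(expected, 4))  # 3.5
print(round(variance, 4))  # 2.9167
```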

12 Normal Distribution of a Single Random Variable Notation: X ~ N(μ, σ²). p.d.f.: p_X(s) = (1 / (σ√(2π))) exp(−(s − μ)² / (2σ²)). Expected value: E[X] = μ. Variance: Var[X] = σ².
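A minimal numerical check of the p.d.f. formula (the peak of N(0, 1) at s = μ is 1/√(2π) ≈ 0.3989):

```python
import math

def normal_pdf(s, mu=0.0, sigma=1.0):
    # p_X(s) = 1/(sigma * sqrt(2*pi)) * exp(-(s - mu)^2 / (2*sigma^2))
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((s - mu) ** 2) / (2.0 * sigma ** 2))

print(round(normal_pdf(0.0), 4))            # 0.3989, peak of N(0, 1)
print(round(normal_pdf(1.0, 1.0, 2.0), 4))  # 0.1995, peak of N(1, 4)
```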

13 Stochastic Process A stochastic process is a time series of random variables {X_t}, where X_t is a random variable and t is a time stamp. Examples: audio signals, stock market prices.

14 Causal Process A stochastic process is causal if it has a finite history: X_t depends only on the preceding variables. A causal process can be represented by the conditional probabilities Pr(X_t | X_1, …, X_{t−1}).

15 Stationary Process A stochastic process is stationary if the joint distribution taken at a fixed set of times is unchanged by a time shift, i.e., for any n, any times t_1, …, t_n, and any shift τ, Pr(X_{t_1}, …, X_{t_n}) = Pr(X_{t_1+τ}, …, X_{t_n+τ}). A stationary process is sometimes referred to as strictly stationary, in contrast with weak or wide-sense stationarity.

16 Gaussian White Noise White noise {X_t}: the X_t obey an independent, identical distribution (i.i.d.). Gaussian white noise: each X_t is additionally normally distributed, X_t ~ N(μ, σ²) i.i.d.
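A sketch of generating Gaussian white noise with Python's standard library (the seed and sample size are arbitrary choices; the sample mean and variance should land close to the i.i.d. N(0, 1) parameters):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
# Each sample is drawn independently from the same N(0, 1) distribution
noise = [random.gauss(0.0, 1.0) for _ in range(10_000)]

mean = sum(noise) / len(noise)
var = sum((x - mean) ** 2 for x in noise) / len(noise)
# mean should be close to 0 and var close to 1
```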

17 Gaussian White Noise is a Stationary Process Proof: since the X_t are i.i.d., for any n, any times t_1, …, t_n, and any shift τ, Pr(X_{t_1}, …, X_{t_n}) = Π_i Pr(X_{t_i}) = Π_i Pr(X_{t_i+τ}) = Pr(X_{t_1+τ}, …, X_{t_n+τ}).

18 Temperature Q1: Is the temperature over the course of a day a stationary process?

19 Markov Chains A causal process {X_t} is a Markov chain of order k if, for any x_1, …, x_t, Pr(X_t = x_t | X_1 = x_1, …, X_{t−1} = x_{t−1}) = Pr(X_t = x_t | X_{t−k} = x_{t−k}, …, X_{t−1} = x_{t−1}); k is the order of the Markov chain. First-order Markov chain: Pr(X_t | X_{t−1}). Second-order Markov chain: Pr(X_t | X_{t−2}, X_{t−1}).

20 Homogeneous Markov Chains A k-th order Markov chain is homogeneous if the state transition probability is the same over time, i.e., Pr(X_t | X_{t−k}, …, X_{t−1}) does not depend on t. Q2: Does a homogeneous Markov chain imply a stationary process?

21 State Transition in Homogeneous Markov Chains Suppose {X_t} is a k-th order homogeneous Markov chain and S is the set of all possible states (values) of X_t. Then for any k+1 states x_0, x_1, …, x_k in S, the state transition probability can be abbreviated to Pr(x_k | x_0, …, x_{k−1}), with no time index.

22 Example of Markov Chain Two states: 'Rain' and 'Dry'. Transition probabilities: Pr('Rain'|'Rain') = 0.4, Pr('Dry'|'Rain') = 0.6, Pr('Rain'|'Dry') = 0.2, Pr('Dry'|'Dry') = 0.8.

23 Short-Term Forecast Initial (say, Wednesday) probabilities: Pr_Wed('Rain') = 0.3, Pr_Wed('Dry') = 0.7. What's the probability of rain on Thursday? Pr_Thur('Rain') = Pr_Wed('Rain') × Pr('Rain'|'Rain') + Pr_Wed('Dry') × Pr('Rain'|'Dry') = 0.3 × 0.4 + 0.7 × 0.2 = 0.26.
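The one-step forecast can be sketched directly from the transition probabilities (the dict layout and function name are illustrative):

```python
# Transition probabilities of the rain/dry example chain
P = {
    'Rain': {'Rain': 0.4, 'Dry': 0.6},
    'Dry':  {'Rain': 0.2, 'Dry': 0.8},
}

def next_day(dist, P):
    # One step of the chain: Pr_t(s') = sum_s Pr_{t-1}(s) * Pr(s'|s)
    return {s2: sum(dist[s1] * P[s1][s2] for s1 in dist) for s2 in ('Rain', 'Dry')}

wednesday = {'Rain': 0.3, 'Dry': 0.7}
thursday = next_day(wednesday, P)
print(round(thursday['Rain'], 4))  # 0.26
```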

24 Condition of Stationary (Steady-State) Distribution Pr_t('Rain') = Pr_{t−1}('Rain') × Pr('Rain'|'Rain') + Pr_{t−1}('Dry') × Pr('Rain'|'Dry') = Pr_{t−1}('Rain') × 0.4 + (1 − Pr_{t−1}('Rain')) × 0.2 = 0.2 + 0.2 × Pr_{t−1}('Rain'). Setting Pr_t('Rain') = Pr_{t−1}('Rain') gives Pr('Rain') = 0.25 and Pr('Dry') = 1 − 0.25 = 0.75.

25 Steady-State Analysis Pr_t('Rain') = 0.2 + 0.2 × Pr_{t−1}('Rain') ⇒ Pr_t('Rain') − 0.25 = 0.2 × (Pr_{t−1}('Rain') − 0.25) ⇒ Pr_t('Rain') = 0.2^{t−1} × (Pr_1('Rain') − 0.25) + 0.25 ⇒ Pr_t('Rain') → 0.25 as t grows (converges to the steady-state distribution).
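The convergence can be checked by iterating the recursion Pr_t('Rain') = 0.2 + 0.2 × Pr_{t−1}('Rain') from an arbitrary starting probability (the starting value and iteration count below are arbitrary):

```python
p_rain = 0.9  # arbitrary starting probability Pr_1('Rain')
for _ in range(50):
    # One step of the recursion; the distance to 0.25 shrinks by a factor 0.2 each step
    p_rain = 0.2 + 0.2 * p_rain
print(round(p_rain, 6))  # 0.25
```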

26 Periodic Markov Chain Two states 'Rain' and 'Dry' with Pr('Dry'|'Rain') = 1 and Pr('Rain'|'Dry') = 1. A periodic Markov chain never converges to a steady state.
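The oscillation can be shown by iterating the distribution of this two-state chain, starting from 'Rain' with certainty (the starting distribution is an illustrative choice):

```python
p_rain = 1.0  # start in 'Rain' with probability 1
history = []
for _ in range(6):
    history.append(p_rain)
    # Pr('Dry'|'Rain') = 1 and Pr('Rain'|'Dry') = 1: the distribution flips every step
    p_rain = 1.0 - p_rain
print(history)  # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0] -- no convergence
```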

