Pattern Recognition: Statistical and Neural. Lonnie C. Ludeman. Lecture 12, Sept 30, 2005, Nanjing University of Science & Technology.

Slide 1: Pattern Recognition: Statistical and Neural. Lonnie C. Ludeman. Lecture 12, Sept 30, 2005, Nanjing University of Science & Technology.

Slide 2: Lecture 12 Topics: 1. M-Class Case and Gaussian Review; 2. M-Class Case in Likelihood Ratio Space; 3. Example: Vector Observation, M-Class.

Slide 3 (Review 1): MPE and MAP Decision Rule, M-Class Case. For the observed x, select class C_k if p(x | C_k) P(C_k) > p(x | C_j) P(C_j) for all j = 1, 2, ..., M, j ≠ k. If equality holds, decide x from the boundary classes by random choice.
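The MPE/MAP rule above is just an argmax of p(x | C_k) P(C_k) over the classes. A minimal sketch, assuming two illustrative 1-D Gaussian class densities and priors that are not taken from the lecture:

```python
import numpy as np

# Sketch of the MPE/MAP M-class rule: select the class C_k that
# maximizes p(x | C_k) P(C_k).  The Gaussian class densities and the
# priors below are illustrative assumptions, not lecture values.

def gaussian_pdf(x, mean, var=1.0):
    """1-D Gaussian density N(mean, var) evaluated at x."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def map_decide(x, means, priors):
    """Return the index k maximizing p(x | C_k) * P(C_k)."""
    scores = [gaussian_pdf(x, m) * p for m, p in zip(means, priors)]
    return int(np.argmax(scores))

means = [0.0, 2.0]          # class-conditional means (assumed)
priors = [0.5, 0.5]         # equal priors (assumed)

print(map_decide(0.2, means, priors))   # near mean 0 -> class index 0
print(map_decide(1.8, means, priors))   # near mean 2 -> class index 1
```

With equal priors this reduces to picking the class whose density is largest at x, i.e. maximum likelihood.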

Slide 4 (Review 2): Bayes Decision Rule, M-Class Case. Define y_i(x) = Σ_{j=1}^{M} C_ij p(x | C_j) P(C_j). If y_i(x) < y_j(x) for all j ≠ i, then decide x is from C_i.
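Here y_i(x) is the conditional risk of deciding C_i under cost matrix C_ij, and the rule picks the smallest risk. A sketch, with likelihood values, priors, and the cost matrix assumed for illustration:

```python
import numpy as np

# Bayes rule sketch: y_i(x) = sum_j C_ij p(x | C_j) P(C_j) is the risk
# of deciding C_i; choose the class with minimum y_i(x).
# Likelihoods, priors, and costs below are assumed, not lecture values.

def bayes_decide(likelihoods, priors, cost):
    """likelihoods[j] = p(x | C_j); cost[i][j] = C_ij. Returns argmin_i y_i(x)."""
    weighted = np.asarray(likelihoods) * np.asarray(priors)
    risks = np.asarray(cost) @ weighted        # y_i(x) for every i
    return int(np.argmin(risks))

# With the 0-1 cost matrix (C_ij = 1 for i != j, 0 otherwise)
# the rule reduces to the MAP rule of the previous slide.
cost01 = np.ones((3, 3)) - np.eye(3)
print(bayes_decide([0.2, 0.5, 0.1], [1/3, 1/3, 1/3], cost01))  # -> 1
```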

Slide 5 (Review 3): Optimum Decision Rule, 2-Class Gaussian (quadratic processing). Decide C_1 if
-(x - M_1)^T K_1^{-1} (x - M_1) + (x - M_2)^T K_2^{-1} (x - M_2) > T_1,
and C_2 otherwise, where T_1 = 2 ln T + ln |K_1| - ln |K_2|, and T is the optimum threshold for the type of performance measure used.
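A sketch of this quadratic rule, with the means, covariances, and threshold T chosen only for illustration:

```python
import numpy as np

# Quadratic 2-class Gaussian rule: with
# Q_i(x) = (x - M_i)^T K_i^{-1} (x - M_i), decide C_1 when
# -Q_1(x) + Q_2(x) > T_1 = 2 ln T + ln|K_1| - ln|K_2|.
# Means, covariances, and T below are assumed example values.

def quadratic_decide(x, M1, K1, M2, K2, T=1.0):
    Q1 = (x - M1) @ np.linalg.inv(K1) @ (x - M1)
    Q2 = (x - M2) @ np.linalg.inv(K2) @ (x - M2)
    T1 = 2 * np.log(T) + np.log(np.linalg.det(K1)) - np.log(np.linalg.det(K2))
    return 1 if (-Q1 + Q2) > T1 else 2

M1, M2 = np.array([0.0, 0.0]), np.array([3.0, 0.0])
K1, K2 = np.eye(2), 2 * np.eye(2)
print(quadratic_decide(np.array([0.2, 0.1]), M1, K1, M2, K2))  # -> 1
print(quadratic_decide(np.array([3.0, 0.0]), M1, K1, M2, K2))  # -> 2
```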

Slide 6 (Review 4): 2-Class Gaussian, Special Case 1: K_1 = K_2 = K (equal covariance matrices, linear processing). Decide C_1 if (M_1 - M_2)^T K^{-1} x > T_2, and C_2 otherwise, where T_2 = ln T + ½ (M_1^T K^{-1} M_1 - M_2^T K^{-1} M_2), and T is the optimum threshold for the type of performance measure used.
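When the covariances are equal the quadratic terms cancel and only this linear (dot product) test remains. A sketch, with the means, shared covariance, and T assumed for illustration:

```python
import numpy as np

# Equal-covariance linear rule: decide C_1 when
# (M_1 - M_2)^T K^{-1} x > T_2, with
# T_2 = ln T + 1/2 (M_1^T K^{-1} M_1 - M_2^T K^{-1} M_2).
# Means, covariance, and T below are assumed example values.

def linear_decide(x, M1, M2, K, T=1.0):
    Kinv = np.linalg.inv(K)
    w = (M1 - M2) @ Kinv                        # dot-product weight vector
    T2 = np.log(T) + 0.5 * (M1 @ Kinv @ M1 - M2 @ Kinv @ M2)
    return 1 if w @ x > T2 else 2

M1, M2 = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
K = np.array([[2.0, 0.5], [0.5, 1.0]])
print(linear_decide(np.array([0.8, 0.9]), M1, M2, K))    # -> 1
print(linear_decide(np.array([-0.8, -0.9]), M1, M2, K))  # -> 2
```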

Slide 7 (Review 5): 2-Class Gaussian, Case 2: K_1 = K_2 = K = σ² I (equal scaled identity covariance matrices, linear processing). Decide C_1 if (M_1 - M_2)^T x > T_3, and C_2 otherwise, where T_3 = σ² ln T + ½ (M_1^T M_1 - M_2^T M_2), and T is the optimum threshold for the type of performance measure used.

Slide 8 (Review 6): M-Class General Gaussian, continued. Equivalent statistic: Q_j(x) = (x - M_j)^T K_j^{-1} (x - M_j) - 2 ln P(C_j) + ln |K_j| for j = 1, 2, ..., M, a quadratic operation on the observation vector x: the squared Mahalanobis distance d²_MAH(x, M_j) plus a bias term. Select class C_j if Q_j(x) is MINIMUM.
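A sketch of this minimum-Q_j rule; the class means, covariances, and priors below are illustrative assumptions:

```python
import numpy as np

# Minimum-Q_j rule for the general Gaussian M-class case:
# Q_j(x) = (x - M_j)^T K_j^{-1} (x - M_j) - 2 ln P(C_j) + ln|K_j|,
# i.e. squared Mahalanobis distance plus a bias; smallest Q_j wins.
# The class parameters below are assumed example values.

def min_q_decide(x, means, covs, priors):
    q = []
    for M, K, P in zip(means, covs, priors):
        d2 = (x - M) @ np.linalg.inv(K) @ (x - M)   # squared Mahalanobis distance
        q.append(d2 - 2 * np.log(P) + np.log(np.linalg.det(K)))
    return int(np.argmin(q))                        # 0-based class index

means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), 0.5 * np.eye(2)]
priors = [0.5, 0.5]
print(min_q_decide(np.array([2.6, 2.9]), means, covs, priors))  # -> 1
print(min_q_decide(np.array([0.1, 0.1]), means, covs, priors))  # -> 0
```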

Slide 9 (Review 7): M-Class Gaussian, Case 1: K_1 = K_2 = … = K_M = K. Equivalent rule for MPE and MAP: L_j(x) = M_j^T K^{-1} x - ½ M_j^T K^{-1} M_j + ln P(C_j), a linear operation on the observation vector x (dot product plus bias). Select class C_j if L_j(x) is MAXIMUM.
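The common-covariance case needs only these linear discriminants. A sketch with three assumed class means, identity covariance, and equal priors:

```python
import numpy as np

# Common-covariance M-class rule:
# L_j(x) = M_j^T K^{-1} x - 1/2 M_j^T K^{-1} M_j + ln P(C_j);
# select the class with MAXIMUM L_j(x).
# The means, covariance, and priors below are assumed example values.

def max_l_decide(x, means, K, priors):
    Kinv = np.linalg.inv(K)
    L = [M @ Kinv @ x - 0.5 * M @ Kinv @ M + np.log(P)
         for M, P in zip(means, priors)]
    return int(np.argmax(L))                 # 0-based class index

means = [np.array([0.0, 1.0]), np.array([1.0, 0.0]), np.array([-1.0, -1.0])]
K = np.eye(2)
priors = [1/3, 1/3, 1/3]
print(max_l_decide(np.array([0.9, 0.2]), means, K, priors))  # -> 1
print(max_l_decide(np.array([0.0, 1.2]), means, K, priors))  # -> 0
```

With K = I and equal priors this is exactly the minimum-distance (nearest-mean) classifier used in the example later in the lecture.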

Slide 10: Bayes Decision Rule in the Likelihood Ratio Space, M-Class Case (derivation). We know that the Bayes decision rule for the M-class case is: with y_i(x) = Σ_{j=1}^{M} C_ij p(x | C_j) P(C_j), if y_i(x) < y_j(x) for all j ≠ i, then decide x is from C_i.

Slide 11: Dividing through by p(x | C_M) gives the sufficient statistics v_i(x), the likelihood ratios L_i(x) = p(x | C_i) / p(x | C_M), with L_M(x) = p(x | C_M) / p(x | C_M) = 1. Therefore the decision rule becomes a rule on these likelihood ratios.

Slide 12: Bayes Decision Rule in the Likelihood Ratio Space. The dimension of the likelihood ratio space is always one less than the number of classes.
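The mapping into the likelihood ratio space can be sketched directly: since L_M(x) = 1 is fixed, only M - 1 ratios vary. The 1-D Gaussian class densities below are an illustrative assumption:

```python
import numpy as np

# Mapping an observation into the likelihood-ratio space: for M classes,
# v_i(x) = L_i(x) = p(x | C_i) / p(x | C_M) for i = 1..M-1 (L_M = 1),
# so the space has dimension M - 1.  Gaussian densities are assumed.

def gaussian_pdf(x, mean, var=1.0):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def to_likelihood_ratio_space(x, means):
    """Map a scalar observation x to its (M-1)-dim likelihood-ratio point."""
    dens = np.array([gaussian_pdf(x, m) for m in means])
    return dens[:-1] / dens[-1]        # divide through by p(x | C_M)

means = [0.0, 1.0, 2.0]                # three classes -> 2-D ratio space
v = to_likelihood_ratio_space(0.5, means)
print(v.shape)                         # (2,): dimension M - 1 = 2
```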

Slide 13: Example, M-Class Case. Given: three classes C_1, C_2, and C_3, where the noise components N_k are statistically independent for all classes and N_k ~ N(0, 1).

Slide 14: This problem is an abstraction of a tri-phase communication system. Determine: (a) the minimum probability of error (MPE) decision rule; (b) the decision regions, illustrated in the observation space; (c) the classification of the observed pattern vector x = [0.4, 0.7]^T using the decision rule; (d) the probability of error P(error).

Slide 15: Solution. (a) Find the MPE decision rule. The problem is Gaussian with equal scaled identity covariance matrices, so the optimum decision rule is as follows.

Slide 16: Select the class with the maximum L_i(x); for our example we have the three statistics L_1(x), L_2(x), and L_3(x).

Slide 17: Dropping the -½ + ln(1/3) term, since it appears in all of the L_i(x), the new statistics s_1(x), s_2(x), and s_3(x) can be defined, giving an equivalent decision rule.

Slide 18: This decision rule can be rewritten in terms of the observation x as follows, where in the observation space X, R_k is the region in which C_k is decided.

Slide 19: Decision regions in the observation space X (figure).

Slide 20: (c) The pattern vector x = [x_1, x_2]^T = [0.4, 0.7]^T is a member of R_1; therefore x is classified as coming from class C_1.

Slide 21: (d) Determine the probability of error: P(error) = 1 - P(correct) = 1 - P(correct | C_1) P(C_1) - P(correct | C_2) P(C_2) - P(correct | C_3) P(C_3).

Slide 22: (slide content not captured in the transcript)

Slide 23: (slide content not captured in the transcript)

Slide 24: P(error) = 0.42728.
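The decomposition P(error) = 1 - Σ_k P(correct | C_k) P(C_k) can be checked numerically by Monte Carlo. The signal means M_1, M_2, M_3 appear on slides not captured in this transcript, so the unit-circle tri-phase constellation below is a placeholder assumption; only with the lecture's actual means would the estimate approach the quoted 0.42728.

```python
import numpy as np

# Monte Carlo estimate of P(error) = 1 - sum_k P(correct | C_k) P(C_k)
# for a minimum-distance rule with N(0, I) noise and equal priors.
# NOTE: the three unit-circle means below are an assumed placeholder
# constellation, not the lecture's actual signal means.

rng = np.random.default_rng(0)
angles = np.deg2rad([90.0, 210.0, 330.0])
means = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # shape (3, 2)

def decide(X):
    """Minimum-distance rule (equal priors, K = I): nearest mean wins."""
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)

n = 100_000
errors = 0
for k in range(3):                                 # equal priors, 1/3 each
    samples = means[k] + rng.standard_normal((n, 2))   # N_k ~ N(0, 1) noise
    errors += np.sum(decide(samples) != k)
p_error = errors / (3 * n)
print(round(p_error, 3))
```

Each mean classifies to its own region, and the error estimate converges as n grows; substituting the real constellation from the lecture would reproduce its P(error).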

Slide 25: Summary: 1. M-Class Case and Gaussian Review; 2. M-Class Case in Likelihood Ratio Space; 3. Example: Vector Observation, M-Class.

Slide 26: End of Lecture 12.

