Pattern Classification All materials in these slides were taken from Pattern Classification (2nd ed) by R. O. Duda, P. E. Hart and D. G. Stork, John Wiley.


1 Pattern Classification All materials in these slides were taken from Pattern Classification (2nd ed) by R. O. Duda, P. E. Hart and D. G. Stork, John Wiley & Sons, 2000 with the permission of the authors and the publisher

2 Chapter 2 (Part 3): Bayesian Decision Theory (Sections 2.6, 2.9). Discriminant Functions for the Normal Density; Bayes Decision Theory: Discrete Features

3 Pattern Classification, Chapter 2 (Part 3). Discriminant Functions for the Normal Density. We saw that minimum error-rate classification can be achieved by the discriminant function g_i(x) = ln p(x | ω_i) + ln P(ω_i). For the multivariate normal case, p(x | ω_i) ~ N(μ_i, Σ_i), this becomes: g_i(x) = -½ (x - μ_i)ᵗ Σ_i⁻¹ (x - μ_i) - (d/2) ln 2π - ½ ln |Σ_i| + ln P(ω_i)
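As a minimal numerical sketch of evaluating this discriminant (the function name and NumPy usage are illustrative, not from the slides):

```python
import numpy as np

def normal_discriminant(x, mu, sigma, prior):
    """Evaluate g_i(x) = -1/2 (x - mu_i)^t Sigma_i^{-1} (x - mu_i)
    - (d/2) ln 2pi - 1/2 ln|Sigma_i| + ln P(omega_i)."""
    d = len(mu)
    diff = x - mu
    return (-0.5 * diff @ np.linalg.inv(sigma) @ diff
            - 0.5 * d * np.log(2 * np.pi)
            - 0.5 * np.log(np.linalg.det(sigma))
            + np.log(prior))
```

Classification then assigns x to the category whose g_i(x) is largest.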

4 Case Σ_i = σ²I (I stands for the identity matrix). What does "Σ_i = σ²I" say about the dimensions? What about the variance of each dimension? (The features are statistically independent, and every feature has the same variance σ².)

5 In this case the discriminant reduces to g_i(x) = -‖x - μ_i‖² / (2σ²) + ln P(ω_i). We can further simplify by recognizing that the quadratic term xᵗx implicit in the Euclidean norm is the same for all i, so it can be dropped, leaving the linear discriminant g_i(x) = w_iᵗ x + w_i0, where w_i = μ_i / σ² and w_i0 = -μ_iᵗ μ_i / (2σ²) + ln P(ω_i).

6 A classifier that uses linear discriminant functions is called a "linear machine". The decision surfaces for a linear machine are pieces of hyperplanes defined by g_i(x) = g_j(x). The equation can be written as wᵗ(x - x₀) = 0, where w = μ_i - μ_j and x₀ = ½ (μ_i + μ_j) - [σ² / ‖μ_i - μ_j‖²] ln [P(ω_i) / P(ω_j)] (μ_i - μ_j).
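A minimal sketch of constructing this boundary for the Σ_i = σ²I case (the helper names are mine, not the book's):

```python
import numpy as np

def linear_machine_boundary(mu_i, mu_j, sigma2, p_i, p_j):
    # w = mu_i - mu_j: the boundary normal lies along the line joining the means
    w = mu_i - mu_j
    # x0 = 1/2 (mu_i + mu_j) - [sigma^2 / ||mu_i - mu_j||^2] ln[P(w_i)/P(w_j)] (mu_i - mu_j)
    x0 = 0.5 * (mu_i + mu_j) - (sigma2 / (w @ w)) * np.log(p_i / p_j) * w
    return w, x0

def classify(x, w, x0):
    # decide omega_i when w^t (x - x0) > 0, otherwise omega_j
    return "omega_i" if w @ (x - x0) > 0 else "omega_j"
```

With equal priors the log term vanishes and x₀ is the midpoint of the two means, so the boundary bisects the segment joining them.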

7 The hyperplane separating R_i and R_j is always orthogonal to the line linking the means!

8–10 (Figure-only slides; no text content.)

11 Case Σ_i = Σ (the covariance matrices of all classes are identical but otherwise arbitrary!). The discriminant is g_i(x) = -½ (x - μ_i)ᵗ Σ⁻¹ (x - μ_i) + ln P(ω_i), which again yields a hyperplane separating R_i and R_j: wᵗ(x - x₀) = 0, with w = Σ⁻¹(μ_i - μ_j) and x₀ = ½ (μ_i + μ_j) - ln [P(ω_i) / P(ω_j)] / [(μ_i - μ_j)ᵗ Σ⁻¹ (μ_i - μ_j)] · (μ_i - μ_j). The hyperplane separating R_i and R_j is generally not orthogonal to the line between the means!
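This shared-covariance discriminant can be sketched as follows (function name is illustrative; dropping the terms common to all classes, only the Mahalanobis distance and the prior remain):

```python
import numpy as np

def shared_cov_discriminant(x, mu, sigma_inv, prior):
    # g_i(x) = -1/2 (x - mu_i)^t Sigma^{-1} (x - mu_i) + ln P(omega_i)
    diff = x - mu
    return -0.5 * diff @ sigma_inv @ diff + np.log(prior)
```

With equal priors, the class whose mean is closer in Mahalanobis distance wins.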

12–13 (Figure-only slides; no text content.)

14 Case Σ_i = arbitrary. The covariance matrices are different for each category, and the discriminant is inherently quadratic: g_i(x) = xᵗ W_i x + w_iᵗ x + w_i0, where W_i = -½ Σ_i⁻¹, w_i = Σ_i⁻¹ μ_i, and w_i0 = -½ μ_iᵗ Σ_i⁻¹ μ_i - ½ ln |Σ_i| + ln P(ω_i). The decision surfaces are hyperquadrics (hyperplanes, pairs of hyperplanes, hyperspheres, hyperellipsoids, hyperparaboloids, hyperhyperboloids).
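A sketch of the quadratic discriminant for this general case (names are illustrative; the constant -(d/2) ln 2π common to all classes is omitted, as it does not affect the decision):

```python
import numpy as np

def quadratic_discriminant(x, mu, sigma, prior):
    # W_i = -1/2 Sigma_i^{-1}, w_i = Sigma_i^{-1} mu_i,
    # w_i0 = -1/2 mu_i^t Sigma_i^{-1} mu_i - 1/2 ln|Sigma_i| + ln P(omega_i)
    sigma_inv = np.linalg.inv(sigma)
    W = -0.5 * sigma_inv
    w = sigma_inv @ mu
    w0 = -0.5 * mu @ sigma_inv @ mu - 0.5 * np.log(np.linalg.det(sigma)) + np.log(prior)
    return x @ W @ x + w @ x + w0
```

Expanding xᵗW_i x + w_iᵗx + w_i0 recovers -½(x - μ_i)ᵗΣ_i⁻¹(x - μ_i) - ½ ln|Σ_i| + ln P(ω_i), so the two forms agree term by term.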

15–16 (Figure-only slides; no text content.)

17 Bayes Decision Theory – Discrete Features. The components of x are binary or integer valued; x can take only one of m discrete values v_1, v_2, …, v_m. We are therefore concerned with probabilities rather than probability densities in Bayes' formula: P(ω_j | x) = P(x | ω_j) P(ω_j) / P(x), where P(x) = Σ_j P(x | ω_j) P(ω_j).

18 Conditional risk is defined as before: R(α | x). The approach is still to minimize risk: choose the action α_i that minimizes R(α_i | x).

19 Case of independent binary features in a two-category problem. Let x = [x_1, x_2, …, x_d]ᵗ, where each x_i is either 0 or 1, with probabilities: p_i = P(x_i = 1 | ω_1), q_i = P(x_i = 1 | ω_2)

20 Assuming conditional independence, P(x | ω_i) can be written as a product of component probabilities: P(x | ω_1) = Π_{i=1..d} p_i^{x_i} (1 - p_i)^{1 - x_i}, and P(x | ω_2) = Π_{i=1..d} q_i^{x_i} (1 - q_i)^{1 - x_i}.

21 Taking the likelihood ratio: P(x | ω_1) / P(x | ω_2) = Π_{i=1..d} (p_i / q_i)^{x_i} [(1 - p_i) / (1 - q_i)]^{1 - x_i}.

22 The discriminant function in this case is: g(x) = Σ_{i=1..d} w_i x_i + w_0, where w_i = ln [p_i (1 - q_i) / (q_i (1 - p_i))] for i = 1, …, d, and w_0 = Σ_{i=1..d} ln [(1 - p_i) / (1 - q_i)] + ln [P(ω_1) / P(ω_2)]. Decide ω_1 if g(x) > 0 and ω_2 otherwise.
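This binary-feature discriminant can be sketched as follows (function name is mine; by construction g(x) equals the log posterior ratio ln[P(x | ω_1)P(ω_1) / (P(x | ω_2)P(ω_2))]):

```python
import numpy as np

def binary_feature_discriminant(x, p, q, prior1, prior2):
    # w_i = ln[ p_i (1 - q_i) / (q_i (1 - p_i)) ]
    w = np.log(p * (1 - q) / (q * (1 - p)))
    # w_0 = sum_i ln[(1 - p_i)/(1 - q_i)] + ln[P(omega_1)/P(omega_2)]
    w0 = np.sum(np.log((1 - p) / (1 - q))) + np.log(prior1 / prior2)
    return w @ x + w0  # decide omega_1 when g(x) > 0
```

Note that the discriminant is linear in the x_i, so even with discrete features the decision rule is a linear machine.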

