
1 Discrimination and Classification

2 Discrimination Situation: We have two or more populations π1, π2, etc. (possibly p-variate normal). The populations are known (or we have data from each population). We have data for a new case (population unknown) and we want to identify the population to which the new case belongs.

3 The Basic Problem Suppose that the data for a new case x1, …, xp has joint density function either π1: g(x1, …, xp) or π2: h(x1, …, xp). We want to make the decision D1: classify the case in π1 (g is the correct distribution) or D2: classify the case in π2 (h is the correct distribution).

4 The Two Types of Errors 1. Misclassifying the case in π1 when it actually lies in π2. Let P[1|2] = P[D1 | π2] = probability of this type of error. 2. Misclassifying the case in π2 when it actually lies in π1. Let P[2|1] = P[D2 | π1] = probability of this type of error. This is similar to Type I and Type II errors in hypothesis testing.

5 A discrimination scheme is defined by splitting p-dimensional space into two regions. Note: 1. C1 = the region where we make the decision D1 (the decision to classify the case in π1). 2. C2 = the region where we make the decision D2 (the decision to classify the case in π2).

6 There can be several approaches to determining the regions C1 and C2, all concerned with taking into account the probabilities of misclassification P[2|1] and P[1|2]. 1. Set up the regions C1 and C2 so that one of the probabilities of misclassification, P[2|1] say, is at some low acceptable value α. Accept the resulting level of the other probability of misclassification, P[1|2] = β.

7 2. Set up the regions C1 and C2 so that the total probability of misclassification, P[Misclassification] = P[1] P[2|1] + P[2] P[1|2], is minimized, where P[1] = P[the case belongs to π1] and P[2] = P[the case belongs to π2].

8 3. Set up the regions C1 and C2 so that the total expected cost of misclassification, E[Cost of Misclassification] = ECM = c2|1 P[1] P[2|1] + c1|2 P[2] P[1|2], is minimized, where P[1] = P[the case belongs to π1], P[2] = P[the case belongs to π2], c2|1 = the cost of misclassifying the case in π2 when the case belongs to π1, and c1|2 = the cost of misclassifying the case in π1 when the case belongs to π2.
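For a hypothetical numerical illustration (the values below are chosen only to show the arithmetic; they are not taken from the slides): if P[1] = 0.7, P[2] = 0.3, P[2|1] = 0.10, P[1|2] = 0.20, c2|1 = 5 and c1|2 = 1, then ECM = 5(0.7)(0.10) + 1(0.3)(0.20) = 0.35 + 0.06 = 0.41.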

9 The Optimal Classification Rule Suppose that the data x1, …, xp has joint density function f(x1, …, xp; θ), where θ is either θ1 or θ2. Let g(x1, …, xp) = f(x1, …, xp; θ1) and h(x1, …, xp) = f(x1, …, xp; θ2). We want to make the decision D1: θ = θ1 (g is the correct distribution) against D2: θ = θ2 (h is the correct distribution).

10 Then the optimal regions (minimizing ECM, the expected cost of misclassification) for making the decisions D1 and D2 respectively are C1 = {(x1, …, xp): g(x1, …, xp)/h(x1, …, xp) ≥ (c1|2 P[2])/(c2|1 P[1])} and C2 = {(x1, …, xp): g(x1, …, xp)/h(x1, …, xp) < (c1|2 P[2])/(c2|1 P[1])}.
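A minimal code sketch of this rule in Python (the two densities, the prior probabilities and the costs below are hypothetical choices made for illustration; they are not quantities from the slides):

import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical bivariate normal densities: g for population pi_1, h for population pi_2
g = multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2))
h = multivariate_normal(mean=[2.0, 1.0], cov=np.eye(2))

P1, P2 = 0.6, 0.4      # assumed prior probabilities P[1], P[2]
c21, c12 = 5.0, 1.0    # assumed costs c2|1 and c1|2

def classify(x):
    # Decision D1 (classify into pi_1) when g(x)/h(x) >= (c1|2 P[2]) / (c2|1 P[1])
    ratio = g.pdf(x) / h.pdf(x)
    threshold = (c12 * P2) / (c21 * P1)
    return "pi_1" if ratio >= threshold else "pi_2"

print(classify([0.5, 0.2]))   # falls in C1 -> pi_1
print(classify([2.5, 1.5]))   # falls in C2 -> pi_2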

11 Proof: ECM = E[Cost of Misclassification] = c2|1 P[1] P[2|1] + c1|2 P[2] P[1|2] = c2|1 P[1] ∫C2 g(x1, …, xp) dx1 … dxp + c1|2 P[2] ∫C1 h(x1, …, xp) dx1 … dxp = c2|1 P[1] [1 − ∫C1 g(x1, …, xp) dx1 … dxp] + c1|2 P[2] ∫C1 h(x1, …, xp) dx1 … dxp.

12 Therefore ECM = c2|1 P[1] + ∫C1 [c1|2 P[2] h(x1, …, xp) − c2|1 P[1] g(x1, …, xp)] dx1 … dxp. Thus ECM is minimized if C1 contains all of the points (x1, …, xp) such that the integrand is negative, that is, all points where c2|1 P[1] g(x1, …, xp) ≥ c1|2 P[2] h(x1, …, xp).

13 Fisher's Linear Discriminant Function Suppose that x1, …, xp is data from a p-variate Normal distribution with mean vector either μ1 (population π1) or μ2 (population π2). The covariance matrix Σ is the same for both populations π1 and π2.

14 The Neyman-Pearson Lemma states that we should classify into populations π1 and π2 using the likelihood ratio g(x1, …, xp)/h(x1, …, xp). That is, make the decision D1: population is π1 if g(x1, …, xp)/h(x1, …, xp) > k.

15 Equivalently, taking logarithms of the two normal densities with common Σ, D1 is made if ln[g(x)/h(x)] = (μ1 − μ2)′ Σ⁻¹ x − ½ (μ1 − μ2)′ Σ⁻¹ (μ1 + μ2) > ln k.

16 Finally we make the decision D1: population is π1 if a′x ≥ ½ a′(μ1 + μ2) + ln k, where a = Σ⁻¹(μ1 − μ2) and k = (c1|2 P[2])/(c2|1 P[1]). Note: k = 1 and ln k = 0 if c1|2 = c2|1 and P[1] = P[2].

17 The function ℓ(x) = a′x = (μ1 − μ2)′ Σ⁻¹ x is called Fisher's linear discriminant function.

18 In the case where the populations are unknown, the means and covariance are estimated from data: μ1 and μ2 are replaced by the sample mean vectors x̄1 and x̄2, and Σ by the pooled sample covariance matrix S_pooled. Fisher's linear discriminant function becomes ℓ̂(x) = (x̄1 − x̄2)′ S_pooled⁻¹ x.
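A minimal code sketch of the sample-based discriminant (the simulated training samples, their sizes and their mean vectors are assumptions made only for illustration; the equal-cost, equal-prior cutoff is used, so ln k = 0):

import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(30, 2))   # simulated sample from pi_1
X2 = rng.normal(loc=[2.0, 1.0], scale=1.0, size=(25, 2))   # simulated sample from pi_2

xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)
n1, n2 = len(X1), len(X2)

# Pooled sample covariance matrix
S_pooled = ((n1 - 1) * np.cov(X1, rowvar=False) +
            (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)

a = np.linalg.solve(S_pooled, xbar1 - xbar2)   # a = S_pooled^{-1} (xbar1 - xbar2)
m = 0.5 * a @ (xbar1 + xbar2)                  # midpoint cutoff (k = 1, ln k = 0)

def classify(x):
    # Decision D1 (pi_1) if the discriminant a'x is at least the cutoff m
    return "pi_1" if a @ np.asarray(x) >= m else "pi_2"

print(classify([0.2, -0.1]))   # expected: pi_1
print(classify([2.1, 0.9]))    # expected: pi_2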

22 Example 2 Annual financial data are collected for firms approximately 2 years prior to bankruptcy and for financially sound firms at about the same point in time. The data on the four variables x1 = CF/TD = (cash flow)/(total debt), x2 = NI/TA = (net income)/(total assets), x3 = CA/CL = (current assets)/(current liabilities), and x4 = CA/NS = (current assets)/(net sales) are given in the following table.

23 The data are given in the following table:

24 Examples using SPSS

25 Classification or Cluster Analysis We have data from one or several populations.

26 Situation: We have multivariate (or univariate) data from one or several populations (the number of populations is unknown). We want to determine the number of populations and identify the populations.

27 Example

29 Hierarchical Clustering Methods The following are the steps in the agglomerative hierarchical clustering algorithm for grouping N objects (items or variables).
1. Start with N clusters, each consisting of a single entity, and an N × N symmetric matrix (table) of distances (or similarities) D = (dij).
2. Search the distance matrix for the nearest (most similar) pair of clusters. Let the distance between the "most similar" clusters U and V be dUV.
3. Merge clusters U and V. Label the newly formed cluster (UV). Update the entries in the distance matrix by
a) deleting the rows and columns corresponding to clusters U and V and
b) adding a row and column giving the distances between cluster (UV) and the remaining clusters.

30 4. Repeat steps 2 and 3 a total of N − 1 times. (All objects will be in a single cluster at termination of this algorithm.) Record the identity of clusters that are merged and the levels (distances or similarities) at which the mergers take place. (A minimal code sketch of the procedure is given below.)
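A minimal sketch of the algorithm above in Python, using single linkage (the minimum of the two merged rows) when updating the distance matrix in step 3. The 4 × 4 distance matrix at the end is a made-up illustration, not data from the slides:

import numpy as np

def single_linkage(D, labels):
    # Step 1: start with N singleton clusters and an N x N distance matrix
    D = np.array(D, dtype=float)
    clusters = [[lbl] for lbl in labels]
    merges = []
    while len(clusters) > 1:
        n = len(clusters)
        # Step 2: find the nearest (most similar) pair of clusters
        i, j = min(((a, b) for a in range(n) for b in range(a + 1, n)),
                   key=lambda ab: D[ab])
        merges.append((clusters[i], clusters[j], D[i, j]))
        # Step 3: merge U and V; delete their rows/columns and add a row/column
        # of distances from (UV) to the remaining clusters (single linkage: minimum)
        new_row = np.minimum(D[i], D[j])
        keep = [k for k in range(n) if k not in (i, j)]
        D = np.vstack([D[np.ix_(keep, keep)], new_row[keep]])
        D = np.column_stack([D, np.append(new_row[keep], 0.0)])
        clusters = [clusters[k] for k in keep] + [clusters[i] + clusters[j]]
    return merges   # step 4: the loop runs N - 1 times in total

D = [[0, 2, 6, 10],
     [2, 0, 5, 9],
     [6, 5, 0, 4],
     [10, 9, 4, 0]]
for u, v, d in single_linkage(D, ["A", "B", "C", "D"]):
    print("merge", u, "and", v, "at distance", d)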

31 Different methods of computing inter-cluster distance
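The specific methods pictured on this slide are not recoverable from the transcript; as a brief sketch, three common choices (single, complete and average linkage, with single and average linkage used later in these slides) computed for two hypothetical clusters of points:

import numpy as np

def linkage_distances(A, B):
    # All pairwise Euclidean distances between points of cluster A and cluster B
    A, B = np.asarray(A), np.asarray(B)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(), d.max(), d.mean()   # single, complete, average linkage

A = [[0.0, 0.0], [1.0, 0.0]]   # hypothetical cluster A
B = [[3.0, 4.0], [4.0, 4.0]]   # hypothetical cluster B
print(linkage_distances(A, B))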

32 Example To illustrate the single linkage algorithm, we consider the hypothetical distance matrix between pairs of five objects given below:

33 Treating each object as a cluster, the clustering begins by merging the two closest items (3 & 5). To implement the next level of clustering we need to compute the distances between cluster (35) and the remaining objects: d(35)1 = min{3, 11} = 3, d(35)2 = min{7, 10} = 7, d(35)4 = min{9, 8} = 8. The new distance matrix becomes:

34 The next two closest clusters ((35) & 1) are merged to form cluster (135). Distances between this cluster and the remaining clusters become:

35 Distances between this cluster and the remaining clusters become: d(135)2 = min{7, 9} = 7, d(135)4 = min{8, 6} = 6. The distance matrix now becomes: Continuing, the next two closest clusters (2 & 4) are merged to form cluster (24).

36 Distances between this cluster and the remaining clusters become: d(135)(24) = min{d(135)2, d(135)4} = min{7, 6} = 6. The final distance matrix now becomes: At the final step, clusters (135) and (24) are merged to form the single cluster (12345) of all five items.
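The same merging sequence can be reproduced with scipy. The off-diagonal entries of the matrix below are taken from the min{...} computations in the text; the two entries the text never states explicitly, d35 and d24, are assumed here to be 2 and 5 (any values giving the same merge order would do):

import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Rows/columns correspond to objects 1-5
D = np.array([[ 0,  9,  3,  6, 11],
              [ 9,  0,  7,  5, 10],
              [ 3,  7,  0,  9,  2],
              [ 6,  5,  9,  0,  8],
              [11, 10,  2,  8,  0]], dtype=float)

Z = linkage(squareform(D), method="single")
print(Z)   # each row lists the two clusters merged and the merge distance
# Expected merge order: (3,5) at 2, then ((35),1) at 3, then (2,4) at 5,
# then ((135),(24)) at 6, matching the hand computation above.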

37 The results of this algorithm can be summarized graphically on the following "dendrogram".

38 Dendrograms for clustering the 11 languages on the basis of the ten numerals

44 Dendrogram: cluster analysis of N = 22 utility companies (Euclidean distance, average linkage)

45 Dendrogram: cluster analysis of N = 22 utility companies (Euclidean distance, single linkage)

