1 Initial slides by Eamonn Keogh Clustering

2 What is Clustering? Organizing data into classes such that there is high intra-class similarity and low inter-class similarity. Finding the class labels and the number of classes directly from the data (in contrast to classification). More informally, finding natural groupings among objects. Also called unsupervised learning; sometimes called classification by statisticians, sorting by psychologists, and segmentation by people in marketing.

3 Intuitions behind desirable distance measure properties
D(A,B) = D(B,A) (Symmetry). Otherwise you could claim "Alex looks like Bob, but Bob looks nothing like Alex."
D(A,A) = 0 (Constancy of Self-Similarity). Otherwise you could claim "Alex looks more like Bob than Bob does."
D(A,B) = 0 only if A = B (Positivity / Separation). Otherwise there are objects in your world that are different, but you cannot tell apart.
D(A,B) ≤ D(A,C) + D(B,C) (Triangular Inequality). Otherwise you could claim "Alex is very like Carl, and Bob is very like Carl, but Alex is very unlike Bob."

4 Edit Distance Example
How similar are the names "Peter" and "Piotr"? It is possible to transform any string Q into string C using only Substitution, Insertion and Deletion. Assume that each of these operators has a cost associated with it. The similarity between two strings can then be defined as the cost of the cheapest transformation from Q to C. (Note that for now we have ignored the issue of how we can find this cheapest transformation.)
Assume the following cost function: Substitution = 1 unit, Insertion = 1 unit, Deletion = 1 unit.
Peter → Piter (substitution, i for e) → Pioter (insertion of o) → Piotr (deletion of e), so D(Peter, Piotr) = 3.
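
As an illustration, here is a minimal dynamic-programming sketch of this unit-cost edit distance; the function name and the Python rendering are not from the slides, only the costs are:

```python
def edit_distance(q: str, c: str) -> int:
    """Unit-cost edit distance (substitution, insertion, deletion)."""
    m, n = len(q), len(c)
    # dp[i][j] = cheapest cost to transform q[:i] into c[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all remaining characters of q
    for j in range(n + 1):
        dp[0][j] = j          # insert all characters of c
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if q[i - 1] == c[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution (or match)
    return dp[m][n]

print(edit_distance("Peter", "Piotr"))  # prints 3, matching the slide
```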

5 A Demonstration of Hierarchical Clustering using String Edit Distance
Pedro (Portuguese): Petros (Greek), Peter (English), Piotr (Polish), Peadar (Irish), Pierre (French), Peder (Danish), Peka (Hawaiian), Pietro (Italian), Piero (Italian alternative), Petr (Czech), Pyotr (Russian)
Cristovao (Portuguese): Christoph (German), Christophe (French), Cristobal (Spanish), Cristoforo (Italian), Kristoffer (Scandinavian), Krystof (Czech), Christopher (English)
Miguel (Portuguese): Michalis (Greek), Michael (English), Mick (Irish!)
(The slide shows the dendrogram obtained by clustering these names with string edit distance.)
Since we cannot test all possible trees, we will have to use a heuristic search over the space of possible trees. We could do this:
Bottom-Up (agglomerative): Starting with each item in its own cluster, find the best pair to merge into a new cluster. Repeat until all clusters are fused together.
Top-Down (divisive): Starting with all the data in a single cluster, consider every possible way to divide the cluster into two. Choose the best division and recursively operate on both sides.
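
A hedged sketch of the bottom-up (agglomerative) approach on a few of these names, assuming SciPy is available and reusing the edit_distance function from the previous sketch; the chosen subset of names, the average linkage, and the cut into 3 clusters are arbitrary illustrative choices:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

names = ["Peter", "Piotr", "Pyotr", "Pedro", "Pierre",
         "Michael", "Miguel", "Mick",
         "Christopher", "Christophe", "Cristobal"]

# Pairwise unit-cost edit distances (edit_distance is from the previous sketch).
D = np.zeros((len(names), len(names)))
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        D[i, j] = D[j, i] = edit_distance(names[i], names[j])

# Bottom-up merging on the condensed distance matrix.
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into 3 clusters
for name, lab in zip(names, labels):
    print(lab, name)
```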

6 We can look at the dendrogram to determine the “correct” number of clusters. In this case, the two highly separated subtrees are highly suggestive of two clusters. (Things are rarely this clear cut, unfortunately)

7 Outliers. One potential use of a dendrogram is to detect outliers. The single isolated branch is suggestive of a data point that is very different from all others.

8 Partitional Clustering Nonhierarchical, each instance is placed in exactly one of K nonoverlapping clusters. Since only one set of clusters is output, the user normally has to input the desired number of clusters K.

9 Squared Error Objective Function. (The slide shows a 2-D scatter plot illustrating the squared-error objective: the sum of squared distances between each data point and the center of the cluster it is assigned to.)

10 Algorithm k-means 1. Decide on a value for k. 2. Initialize the k cluster centers (randomly, if necessary). 3. Decide the class memberships of the N objects by assigning them to the nearest cluster center. 4. Re-estimate the k cluster centers, by assuming the memberships found above are correct. 5. If none of the N objects changed membership in the last iteration, exit. Otherwise goto 3.
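
A minimal NumPy sketch of the algorithm above; the initialization by sampling k data points, the max_iter safety cap, and the lack of empty-cluster handling are assumptions of this sketch, not part of the slide:

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """k-means as described above. X: (N, d) array. Returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    # 1.-2. Decide on k (given) and initialize the k centers (k random data points).
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        # 3. Assign each of the N objects to the nearest cluster center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # 5. If no object changed membership in the last iteration, exit.
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # 4. Re-estimate the k cluster centers from the current memberships.
        #    (Empty clusters are not handled in this sketch.)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels
```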

11 K-means Clustering: Step 1. Algorithm: k-means; distance metric: Euclidean distance. (The slide shows the data points and the initial cluster centers k1, k2, k3 on a 2-D scatter plot.)

12 K-means Clustering: Step 2. Algorithm: k-means; distance metric: Euclidean distance.

13 K-means Clustering: Step 3. Algorithm: k-means; distance metric: Euclidean distance.

14 K-means Clustering: Step 4. Algorithm: k-means; distance metric: Euclidean distance.

15 K-means Clustering: Step 5. Algorithm: k-means; distance metric: Euclidean distance. (Slides 11-15 illustrate how the centers k1, k2, k3 and the point assignments evolve over the iterations.)

16 Comments on the K-Means Method
Strength: Relatively efficient: O(tkn), where n is the number of objects, k the number of clusters, and t the number of iterations; normally k, t << n. Often terminates at a local optimum; the global optimum may be found using techniques such as deterministic annealing and genetic algorithms.
Weakness: Applicable only when a mean is defined; what about categorical data? The distance measurement needs to be extended (Ahmad & Dey, "A k-mean clustering algorithm for mixed numeric and categorical data", Data & Knowledge Engineering, Nov. 2007). Need to specify k, the number of clusters, in advance. Unable to handle noisy data and outliers. Not suitable for discovering clusters with non-convex shapes. Tends to build clusters of equal size.

17 EM Algorithm

18 Processing: EM Initialization. Initialization: assign random values to the parameters.

19 Processing: the E-Step (Mixture of Gaussians). Expectation: pretend the parameters are known and assign each data point (softly) to a component.

20 Processing: the M-Step (1/2) (Mixture of Gaussians). Maximization: fit each component's parameters to its set of points.
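
A hedged NumPy/SciPy sketch of these E and M steps for a mixture of Gaussians; the initialization scheme, the fixed number of iterations, and the small covariance regularization term are assumptions of this sketch:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, k, n_iter=50, seed=0):
    """EM for a mixture of Gaussians. X: (N, d) array.
    Returns (weights, means, covariances, responsibilities)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialization: random parameter values (means drawn from the data,
    # shared data covariance, uniform mixing weights).
    weights = np.full(k, 1.0 / k)
    means = X[rng.choice(n, size=k, replace=False)]
    covs = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * k)
    resp = np.zeros((n, k))
    for _ in range(n_iter):
        # E-step: pretend the parameters are known and softly assign each
        # data point to a component (responsibilities).
        for j in range(k):
            resp[:, j] = weights[j] * multivariate_normal.pdf(X, means[j], covs[j])
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-fit each component's parameters to its weighted points.
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - means[j]
            covs[j] = (resp[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return weights, means, covs, resp
```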

21

22

23

24 Iteration 1 The cluster means are randomly assigned

25 Iteration 2

26 Iteration 5

27 Iteration 25

28 Comments on the EM. K-means is a special form of EM. The EM algorithm maintains probabilistic assignments to clusters, instead of deterministic assignments, and multivariate Gaussian distributions instead of means. It does not tend to build clusters of equal size. Source: http://en.wikipedia.org/wiki/K-means_algorithm

29 Nearest Neighbor Clustering (not to be confused with Nearest Neighbor Classification). Items are iteratively merged into the existing cluster that is closest. Incremental: a threshold t is used to determine whether an item is added to an existing cluster or a new cluster is created. What happens if the data is streaming?
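
A minimal sketch of this incremental, threshold-based scheme; the function name and the use of the cluster centroid as the cluster representative are assumptions:

```python
import numpy as np

def threshold_clustering(points, t):
    """Incremental nearest-neighbor (leader-style) clustering with threshold t.

    Each arriving point joins the nearest existing cluster if its distance to
    that cluster's center is within t; otherwise it starts a new cluster.
    Note: the result depends on the order in which the points arrive.
    """
    centers, members = [], []          # running cluster centers and their points
    for x in map(np.asarray, points):
        if centers:
            dists = [np.linalg.norm(x - c) for c in centers]
            j = int(np.argmin(dists))
            if dists[j] <= t:
                members[j].append(x)
                centers[j] = np.mean(members[j], axis=0)   # update cluster center
                continue
        centers.append(x.astype(float))                    # start a new cluster
        members.append([x])
    return centers, members
```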

30 (The slide shows two initial clusters, labeled 1 and 2, each drawn with a threshold radius t around its center.)

31 A new data point arrives… It is within the threshold for cluster 1, so it is added to that cluster and the cluster center is updated.

32 Another new data point arrives… It is not within the threshold for cluster 1, so a new cluster is created, and so on. The algorithm is highly order dependent, and it is difficult to determine t in advance.

33 How can we tell the right number of clusters? In general, this is an unsolved problem. However, there are many approximate methods. In the next few slides we will see an example. For our example, we will use the familiar katydid/grasshopper dataset. However, in this case we are imagining that we do NOT know the class labels. We are only clustering on the X and Y axis values.

34 When k = 1, the objective function is 873.0.

35 When k = 2, the objective function is 173.1.

36 When k = 3, the objective function is 133.6.

37 We can plot the objective function values for k = 1 to 6. The abrupt change at k = 2 is highly suggestive of two clusters in the data. This technique for determining the number of clusters is known as "knee finding" or "elbow finding". Note that the results are not always as clear cut as in this toy example. (The slide plots the objective function against k.)
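
A hedged sketch of this knee/elbow procedure, assuming a hypothetical 2-D data array X and reusing the kmeans function from the earlier sketch; matplotlib is only needed for the plot:

```python
import numpy as np
import matplotlib.pyplot as plt

def objective(X, centers, labels):
    """Squared-error objective: sum of squared distances to assigned centers."""
    return sum(np.sum((X[labels == j] - c) ** 2) for j, c in enumerate(centers))

# X is assumed to be an (N, 2) array of the two measured features
# (the class labels are not used); kmeans is from the earlier sketch.
ks = range(1, 7)
values = []
for k in ks:
    centers, labels = kmeans(X, k)
    values.append(objective(X, centers, labels))

plt.plot(list(ks), values, marker="o")   # look for the "elbow" in this curve
plt.xlabel("k")
plt.ylabel("Objective function")
plt.show()
```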

38 Goals Estimate class-conditional densities Estimate posterior probabilities

39 Density Estimation. Randomly take n samples and let k denote the number of samples that fall inside a small region R around the point x.

40 Density Estimation. Assume p(x) is continuous and R is small. The probability that a sample falls in R is P = ∫_R p(x') dx' ≈ p(x)·V, where V is the volume of R; with n samples, k_R/n is an estimate of P, where k_R denotes the number of samples in R.

41 Density Estimation. What items can be controlled? How? Use the subscript n to take the sample size into account: the region R_n has volume V_n and captures k_n samples, giving the estimate p_n(x) = (k_n / n) / V_n. For this to converge to p(x) we hope that 1. V_n → 0, 2. k_n → ∞, 3. k_n / n → 0.

42 Two Approaches. What items can be controlled, and how? Parzen Windows: control V_n (shrink the cell volume as n grows). k_n-Nearest-Neighbor: control k_n (grow the cell until it captures k_n samples). Both must respect conditions 1-3 above.

43 Two Approaches. (The slide illustrates the Parzen-window and k_n-nearest-neighbor approaches side by side.)

44 Parzen Windows. (The slide shows the unit hypercube window, with each side of length 1.)

45 Window Function. (The slide shows the window scaled to a hypercube of side h_n.)

46 (The slide shows the hypercube window of side h_n centered at a point x.)

47 Parzen-Window Estimation. Let k_n denote the number of samples inside the hypercube of side h_n centered at x. The density estimate is p_n(x) = (1/n) Σ_i (1/V_n) φ((x − x_i)/h_n), with V_n = h_n^d.

48 Generalization. Requirement: the window φ must be a legitimate density, i.e., φ(u) ≥ 0 and ∫ φ(u) du = 1 (set u = x/h_n). The window is not necessarily a hypercube. h_n is an important parameter; it depends on the sample size.
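
A minimal sketch of the hypercube Parzen-window estimate described above; the function name and the vectorized NumPy form are assumptions of this sketch:

```python
import numpy as np

def parzen_estimate(x, samples, h):
    """Parzen-window density estimate at point x.

    samples: (n, d) array; h: window width h_n, so the cell volume is V = h**d.
    Uses the hypercube window phi(u) = 1 if every |u_j| <= 1/2, else 0.
    """
    x = np.atleast_1d(x)
    samples = np.atleast_2d(samples)
    n, d = samples.shape
    u = (x - samples) / h                       # scaled offsets (x - x_i) / h_n
    inside = np.all(np.abs(u) <= 0.5, axis=1)   # which samples fall in the cube
    return inside.sum() / (n * h ** d)          # k_n / (n * V_n)
```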

49 Interpolation Parameter. As h_n → 0, δ_n(x) = (1/V_n) φ(x/h_n) approaches a Dirac delta function, and the estimate becomes a sum of sharp pulses centered at the samples.

50 Example Parzen-window estimations for five samples

51 Convergence Conditions. To assure convergence, i.e., that the mean of the estimate p_n(x) converges to p(x) and its variance converges to zero, we have the following additional constraints: sup_u φ(u) < ∞; lim_{‖u‖→∞} φ(u) Π_j u_j = 0; lim_{n→∞} V_n = 0; and lim_{n→∞} n·V_n = ∞.

52 Illustrations: the one-dimensional case.

53 Illustrations: the one-dimensional case.

54 Illustrations: the two-dimensional case.

55 Classification Example. (The slide compares results obtained with a smaller window and with a larger window.)

56 Choosing the Window Function. V_n must approach zero as n → ∞, but at a rate slower than 1/n, e.g., V_n = V_1/√n. The value of the initial volume V_1 is important. In some cases, a cell volume that is proper for one region may be unsuitable in a different region.

57 k_n-Nearest-Neighbor Estimation. Let the cell volume depend on the training data. To estimate p(x), we can center a cell about x and let it grow until it captures k_n samples, where k_n is some specified function of n, e.g., k_n = √n. The estimate is then p_n(x) = k_n / (n·V_n), where V_n is the volume of the grown cell.
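
A minimal sketch of this k_n-nearest-neighbor estimate, using k_n = √n by default and the volume of a d-dimensional ball for V_n; both choices are assumptions consistent with, but not spelled out by, the slide:

```python
import numpy as np
from math import gamma, pi, sqrt

def knn_density(x, samples, k=None):
    """k_n-nearest-neighbor density estimate at point x.

    Grows a ball around x until it captures k samples (default k = sqrt(n))
    and returns k / (n * V), where V is the volume of that ball.
    """
    x = np.atleast_1d(x)
    samples = np.atleast_2d(samples)
    n, d = samples.shape
    if k is None:
        k = max(1, int(round(sqrt(n))))                      # e.g. k_n = sqrt(n)
    dists = np.sort(np.linalg.norm(samples - x, axis=1))
    r = dists[k - 1]                                         # radius capturing k samples
    volume = pi ** (d / 2) / gamma(d / 2 + 1) * r ** d       # volume of a d-ball
    return k / (n * volume)
```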

58 Example k n =5

59 Example

60 Estimation of A Posteriori Probabilities. Place a cell of volume V_n around x and capture k samples, k_i of which turn out to be labeled ω_i. What is P_n(ω_i | x)?

61 Estimation of A Posteriori Probabilities. The joint estimate is p_n(x, ω_i) = (k_i / n) / V_n, so P_n(ω_i | x) = p_n(x, ω_i) / Σ_j p_n(x, ω_j) = k_i / k.

62 Estimation of A Posteriori Probabilities. The value of V_n or k_n can be determined based on the Parzen-window or the k_n-nearest-neighbor technique.

63 Classification: The Nearest-Neighbor Rule. Given a set of labeled prototypes, let x' denote the prototype nearest to the test point x. Classify x with the label of x'.
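
A minimal sketch of the nearest-neighbor rule; the function name and the NumPy representation of the prototypes are assumptions:

```python
import numpy as np

def nearest_neighbor_classify(x, prototypes, labels):
    """1-NN rule: return the label of the prototype nearest to x."""
    prototypes = np.atleast_2d(prototypes)
    dists = np.linalg.norm(prototypes - np.atleast_1d(x), axis=1)
    return labels[int(np.argmin(dists))]
```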

64 The Nearest-Neighbor Rule induces a Voronoi tessellation of the feature space.

65 Error Rate. Bayesian (optimum): P*(e | x) = 1 − max_i P(ω_i | x), and the optimum error rate is P* = ∫ P*(e | x) p(x) dx.

66 Error Rate (1-NN). Suppose the true class for x is ω and let x' be its nearest labeled prototype. An error is avoided only when x and x' carry the same class, so the conditional error is P(e | x, x') = 1 − Σ_i P(ω_i | x) P(ω_i | x').

67 Error Rate (1-NN). As n → ∞, the nearest neighbor x' falls arbitrarily close to x, so P(ω_i | x') → P(ω_i | x) and the conditional error becomes 1 − Σ_i P(ω_i | x)².

68 Error Rate (1-NN). The asymptotic 1-NN error rate is therefore P = ∫ [1 − Σ_i P(ω_i | x)²] p(x) dx.

69 Error Bounds (1-NN vs. Bayesian). Consider the most complex classification case, where all posteriors are equal, P(ω_i | x) = 1/c: the 1-NN error and the Bayes error then coincide at (c − 1)/c.

70 Error Bounds (1-NN vs. Bayesian). Consider the opposite case. To find the upper bound, maximize the 1-NN conditional error 1 − Σ_i P(ω_i | x)²; the sum Σ_i P(ω_i | x)² is minimized when all the non-maximal posteriors have the same value, i.e., P(ω_i | x) = P*(e | x)/(c − 1) for the c − 1 non-optimal classes.

71 Error Bounds (1-NN vs. Bayesian). Carrying this bound through the integral gives P* ≤ P ≤ P*(2 − c/(c − 1)·P*).

72 Error Bounds (1-NN vs. Bayesian). The nearest-neighbor rule is a suboptimal procedure, but its error rate is never worse than twice the Bayes rate: P ≤ 2P*.

73 Error Bounds

74 Classification: The k-Nearest-Neighbor Rule (k = 5). Assign the pattern to the class that wins the majority among its k nearest prototypes.
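
A minimal sketch of the k-nearest-neighbor rule with majority voting (ties are broken arbitrarily by Counter; k = 5 as on the slide):

```python
import numpy as np
from collections import Counter

def knn_classify(x, prototypes, labels, k=5):
    """k-NN rule: assign x to the class that wins the majority
    among the k nearest labeled prototypes."""
    prototypes = np.atleast_2d(prototypes)
    dists = np.linalg.norm(prototypes - np.atleast_1d(x), axis=1)
    nearest = np.argsort(dists)[:k]              # indices of the k closest prototypes
    votes = Counter(labels[i] for i in nearest)  # count class votes
    return votes.most_common(1)[0][0]
```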

75 Error Bounds

76 Computation Complexity. The computational complexity of the nearest-neighbor algorithm (both in time and space) has received a great deal of analysis. It requires O(dn) space to store the n prototypes of a training set (remedies: editing, pruning or condensing). Searching for the nearest neighbor of a d-dimensional test point x takes O(dn) time (remedies: partial distance, search trees).

77 Partial Distance. Use the following fact to throw away far-away prototypes early: the partial distance over the first r of the d dimensions, D_r(a, b) = (Σ_{k=1}^r (a_k − b_k)²)^{1/2}, is nondecreasing in r, so as soon as it exceeds the distance to the best prototype found so far, the remaining dimensions need not be examined.
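
A minimal sketch of nearest-neighbor search with partial-distance pruning; the pure-Python inner loop is for clarity only, and a real implementation would batch or vectorize it:

```python
import numpy as np

def partial_distance_nn(x, prototypes):
    """Nearest-neighbor search with partial-distance early termination.

    Accumulates the squared distance dimension by dimension and abandons a
    prototype as soon as the partial sum exceeds the best full distance so far.
    """
    x = np.atleast_1d(x)
    best_idx, best_sq = -1, np.inf
    for i, p in enumerate(np.atleast_2d(prototypes)):
        sq = 0.0
        for xk, pk in zip(x, p):
            sq += (xk - pk) ** 2
            if sq >= best_sq:        # partial distance already too large: discard
                break
        else:                        # loop completed: p is the new best prototype
            best_idx, best_sq = i, sq
    return best_idx, np.sqrt(best_sq)
```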

78 Editing Nearest Neighbor Given a set of points, a Voronoi diagram is a partition of space into regions, within which all points are closer to some particular node than to any other node.

79 Delaunay Triangulation If two Voronoi regions share a boundary, the nodes of these regions are connected with an edge. Such nodes are called the Voronoi neighbors (or Delaunay neighbors).

80 The Decision Boundary The circled prototypes are redundant.

81 The Edited Training Set

82 Editing: The Voronoi Diagram Approach Compute the Delaunay triangulation for the training set. Visit each node, marking it if all its Delaunay neighbors are of the same class as the current node. Delete all marked nodes, exiting with the remaining ones as the edited training set.
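
A hedged sketch of this editing procedure using SciPy's Delaunay triangulation; the function name is an assumption, and the Delaunay neighbors of each node are taken from vertex_neighbor_vertices:

```python
import numpy as np
from scipy.spatial import Delaunay

def voronoi_edit(points, labels):
    """Voronoi (Delaunay) editing of a labeled training set.

    Marks every node whose Delaunay neighbors all share its class, then
    deletes the marked nodes; per the slides, the deleted prototypes are
    redundant for the nearest-neighbor decision boundary.
    """
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    tri = Delaunay(points)
    indptr, indices = tri.vertex_neighbor_vertices
    keep = []
    for i in range(len(points)):
        neighbors = indices[indptr[i]:indptr[i + 1]]
        # Keep the node if at least one Delaunay neighbor has a different class.
        if np.any(labels[neighbors] != labels[i]):
            keep.append(i)
    return points[keep], labels[keep]
```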

83 Spectral Clustering. Acknowledgements for subsequent slides to Xiaoli Fern, CS 534: Machine Learning 2011, http://web.engr.oregonstate.edu/~xfern/classes/cs534/

84 Spectral Clustering

85 How to Create the Graph?

86

87 Motivations / Objectives

88 Graph Terminologies

89 Graph Cut

90 Min Cut Objective

91 Normalized Cut

92 Optimizing Ncut Objective

93 Solving Ncut

94 2-way Normalized Cuts

95 Creating Bi-Partition Using the 2nd Eigenvector
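
The slides' formulas are not reproduced in this transcript, but a common way to realize this step (in the normalized-cut setting of the preceding slides) is to solve the generalized eigenproblem (D − W) y = λ D y and threshold the eigenvector of the second smallest eigenvalue. A hedged sketch, assuming a symmetric affinity matrix W with strictly positive node degrees; thresholding at zero (rather than, say, the median) is an arbitrary choice:

```python
import numpy as np
from scipy.linalg import eigh

def ncut_bipartition(W):
    """Bi-partition a graph using the 2nd eigenvector of (D - W) y = lambda D y.

    W: symmetric (n, n) affinity matrix with positive row sums (degrees).
    Returns a boolean array splitting the nodes into two groups.
    """
    d = W.sum(axis=1)
    D = np.diag(d)
    # Generalized eigenproblem for the graph Laplacian L = D - W;
    # eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = eigh(D - W, D)
    y = eigvecs[:, 1]        # eigenvector of the 2nd smallest eigenvalue
    return y > 0             # threshold at zero to create the bi-partition
```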

96 K-way Partition?

97 Spectral Clustering

98

99

