Chapter 1 Introduction to Clustering. Section 1.1 Introduction.


1 Chapter 1 Introduction to Clustering

2 Section 1.1 Introduction

3 Objectives
– Introduce clustering and unsupervised learning.
– Explain the various forms of cluster analysis.
– Outline several key distance metrics used as estimates of experimental unit similarity.

4 Course Overview
[Course flow diagram mapping each analysis task to the SAS procedure used in this course:]
– Variable Selection: VARCLUS
– Plot Data: PRINCOMP, MDS, CANDISC
– Preprocessing: ACECLUS
– ‘Fuzzy’ Clustering: FACTOR
– Discrete Clustering:
  – Hierarchical Clustering: CLUSTER
  – Optimization Clustering:
    – Parametric Clustering: FASTCLUS
    – Non-Parametric Clustering: MODECLUS

5 Definition
“Cluster analysis is a set of methods for constructing a (hopefully) sensible and informative classification of an initially unclassified set of data, using the variable values observed on each individual.”
B. S. Everitt (1998), “The Cambridge Dictionary of Statistics”
[Diagram: a cluster solution is sensible when the derived classes are interpretable in terms of the given classes; an uninterpretable derived class is not a sensible solution.]

6 Unsupervised Learning
“Learning without a priori knowledge about the classification of samples; learning without a teacher.”
Kohonen (1995), “Self-Organizing Maps”

7 Section 1.2 Types of Clustering

8 Objectives
– Distinguish between the two major classes of clustering methods:
  – hierarchical clustering
  – optimization (partitive) clustering.

9 Hierarchical Clustering
[Diagram: agglomerative (merging) versus divisive (splitting) clustering shown over iterations 1–4.]
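In this course, hierarchical (agglomerative) clustering is performed with PROC CLUSTER, as listed in the course overview. The following is only a minimal sketch, assuming a hypothetical data set WORK.CARS with interval inputs X1 and X2 and an identifier variable MODEL; PROC TREE then cuts the resulting tree into three clusters.

   proc cluster data=cars method=average outtree=tree;
      var x1 x2;     /* interval inputs to cluster */
      id model;      /* hypothetical observation identifier */
   run;

   /* Cut the hierarchical tree into three clusters */
   proc tree data=tree nclusters=3 out=clus noprint;
      copy x1 x2;
   run;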

10 Propagation of Errors
[Diagram: an error made at an early iteration is carried through iterations 1–4, because hierarchical merges and splits are never undone.]

11 Optimization (Partitive) Clustering
[Diagram: cluster “seeds” are placed among the observations in an initial state; as observations are assigned, each seed moves from its old location to a new location, until the final state is reached.]

12 Heuristic Search
1. Find an initial partition of the n objects into g groups.
2. Calculate the change in the error function produced by moving each observation from its own cluster to another group.
3. Make the change resulting in the greatest improvement in the error function.
4. Repeat steps 2 and 3 until no move results in improvement.
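As listed in the course overview, parametric optimization clustering is carried out with PROC FASTCLUS. As a sketch only: FASTCLUS uses a k-means-style algorithm (nearest-seed assignment with seed updating) rather than the literal exchange heuristic above, and the data set and variables here are hypothetical.

   proc fastclus data=cars maxclusters=3 maxiter=20 out=clus;
      var x1 x2;    /* interval inputs; CLUSTER and DISTANCE are added to the OUT= data set */
   run;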

13 Section 1.3 Similarity Metrics

14 Objectives
– Define similarity and what comprises a good measure of similarity.
– Describe a variety of similarity metrics.

15 What Is Similarity?
Although the concept of similarity is fundamental to our thinking, it is often difficult to quantify precisely. Which is more similar to a duck: a crow or a penguin? The metric you choose to operationalize similarity (for example, Euclidean distance or Pearson correlation) often affects the clusters you recover.

16 What Makes a Good Similarity Metric?
The following principles have been identified as a foundation of any good similarity metric:
1. symmetry: d(x,y) = d(y,x)
2. non-identical distinguishability: if d(x,y) ≠ 0, then x ≠ y
3. identical non-distinguishability: if d(x,y) = 0, then x = y
Some popular similarity metrics (for example, correlation) fail to meet one or more of these criteria.

17 Euclidean Distance Similarity Metric
Pythagorean Theorem: The square of the hypotenuse is equal to the sum of the squares of the other two sides.
[Diagram: the point (x1, x2) plotted against the origin (0, 0); the Euclidean distance is the hypotenuse of the right triangle with legs x1 and x2.]
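Written out in the slide's notation, the Euclidean distance of the point (x1, x2) from the origin, and more generally between two p-dimensional points x and w, is

d\big((x_1, x_2), (0, 0)\big) = \sqrt{x_1^2 + x_2^2},
\qquad
d(\mathbf{x}, \mathbf{w}) = \sqrt{\sum_{j=1}^{p} (x_j - w_j)^2}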

18 City Block Distance Similarity Metric
City block (Manhattan) distance is the distance between two points measured along axes at right angles.
[Diagram: the path between the points (w1, w2) and (x1, x2) traced along horizontal and vertical segments.]
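In the same notation, the city block distance between (w1, w2) and (x1, x2), and its general p-dimensional form, is

d\big((x_1, x_2), (w_1, w_2)\big) = |x_1 - w_1| + |x_2 - w_2|,
\qquad
d(\mathbf{x}, \mathbf{w}) = \sum_{j=1}^{p} |x_j - w_j|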

19 Correlation Similarity Metrics
[Diagram: profile plots for three pairs of subjects — Tom and Marie (similar), Jerry and Marie (dissimilar), and Tom and Jerry (no similarity).]

20 The Problem with Correlation

Variable     Observation 1    Observation 2
x1           5                51
x2           4                42
x3           3                33
x4           2                24
x5           1                15
Mean         3                33
Std. Dev.    1.5811           14.2302

The correlation between observations 1 and 2 is a perfect 1.0, but are the observations really similar?
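A small sketch makes the point numerically. The slide's table is entered as a hypothetical data set with one row per variable, and the Pearson correlation between the two observation profiles is computed:

   data profiles;
      input variable $ obs1 obs2;
      datalines;
   x1 5 51
   x2 4 42
   x3 3 33
   x4 2 24
   x5 1 15
   ;
   run;

   /* The Pearson correlation between the two profiles is 1.0,       */
   /* yet the Euclidean distance between them is roughly 71.7.       */
   proc corr data=profiles;
      var obs1 obs2;
   run;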

21 Density Estimate Based Similarity Metrics
Clusters can be seen as areas of increased observation density. Similarity is a function of the distance between the identified density bubbles (hyper-spheres).
[Diagram: two density estimates (Cluster 1 and Cluster 2), with similarity measured by the distance between them.]
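In the course overview, density-based (non-parametric) clustering corresponds to PROC MODECLUS. The call below is a minimal sketch under assumed settings: the data set and variables are hypothetical, R= is taken to be the smoothing radius for the density estimate, and METHOD= selects one of the procedure's clustering methods.

   proc modeclus data=cars method=1 r=10 out=density_clus;
      var x1 x2;    /* density is estimated over these inputs */
   run;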

22 Hamming Distance Similarity Metric
Gene expression levels under 17 conditions (low = 0, high = 1):

Condition:    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
Gene A:       0 1 1 0 0 1 0 0 1 0  0  1  1  1  0  0  1
Gene B:       0 1 1 1 0 0 0 0 1 1  1  1  1  1  0  1  1
Differences:  0 0 0 1 0 1 0 0 0 1  1  0  0  0  0  1  0

D_H = number of mismatches = 5
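As a sketch only, assuming the gene profiles are entered as 17 binary variables c1–c17 and treated as nominal-level inputs, the Hamming distance above could be generated with the DISTANCE procedure introduced on the next slide:

   data genes;
      input gene $ c1-c17;
      datalines;
   A 0 1 1 0 0 1 0 0 1 0 0 1 1 1 0 0 1
   B 0 1 1 1 0 0 0 0 1 1 1 1 1 1 0 1 1
   ;
   run;

   /* Hamming distance counts the conditions on which the genes differ */
   proc distance data=genes method=hamming out=hamdist;
      var nominal(c1-c17);
      id gene;
   run;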

23 The DISTANCE Procedure
General form of the DISTANCE procedure:

PROC DISTANCE METHOD=method <options>;
   COPY variables;
   VAR level(variables);
RUN;

Both the PROC DISTANCE statement and the VAR statement are required.
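For instance, using a hypothetical data set CUSTOMERS with interval inputs INCOME and AGE and identifier CUSTID (this is a sketch, not the course demonstration program), two different distance metrics can be generated and the resulting distance matrix clustered:

   /* Euclidean distances */
   proc distance data=customers method=euclid out=dist_euclid;
      var interval(income age);
      id custid;
   run;

   /* City block distances on the same inputs */
   proc distance data=customers method=cityblock out=dist_city;
      var interval(income age);
      id custid;
   run;

   /* Either TYPE=DISTANCE output can be clustered directly, for example: */
   proc cluster data=dist_euclid method=ward outtree=tree;
      id custid;
   run;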

24 Generating Distances (ch1s3d1)
This demonstration illustrates the impact of two different distance metrics, generated by the DISTANCE procedure, on cluster formation.

25 Section 1.4 Classification Performance

26 Objectives
– Use classification matrices to determine the quality of a proposed cluster solution.
– Use the chi-square and Cramer's V statistics to assess the relative strength of the derived association.

27 Quality of the Cluster Solution
[Diagram: classification matrices (known class by assigned cluster) illustrating a perfect solution, a typical solution, and no solution.]

28 Probability of Cluster Assignment
The probability that a cluster number represents a given class is given by the cluster's proportion of the row total in the class-by-cluster frequency table. For example, if 40 of a class's 50 members fall in cluster 2, the probability that cluster 2 represents that class is estimated as 40/50 = 0.8.

29 The Chi-Square Statistic
The chi-square statistic (and associated probability):
– determines whether an association exists
– depends on sample size
– does not measure the strength of the association.

30 Measuring Strength of an Association
[Diagram: Cramer's V statistic shown on a scale from 0 (weak) to 1 (strong).]
Cramer's V ranges from -1 to 1 for 2x2 tables.
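Both statistics can be requested from PROC FREQ. The sketch below assumes a hypothetical data set CLUS_EVAL holding each observation's known CLASS and assigned CLUSTER; the CHISQ option prints the chi-square test along with Cramer's V.

   proc freq data=clus_eval;
      tables class*cluster / chisq;   /* chi-square tests plus phi, contingency coefficient, and Cramer's V */
   run;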

