CSE4334/5334 DATA MINING CSE4334/5334 Data Mining, Fall 2014 Department of Computer Science and Engineering, University of Texas at Arlington Chengkai Li (Slides courtesy of Vipin Kumar) Lecture 10: Clustering (1)

What is Cluster Analysis?  Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups. Inter-cluster distances are maximized; intra-cluster distances are minimized. 2

What is not Cluster Analysis?  Supervised classification  Have class label information  Simple segmentation  Dividing students into different registration groups alphabetically, by last name  Results of a query  Groupings are a result of an external specification  Graph partitioning  Some mutual relevance and synergy, but areas are not identical 3

Notion of a Cluster can be Ambiguous: How many clusters? The same set of points can be grouped into two clusters, four clusters, or six clusters (figure). 4

Types of Clusterings  A clustering is a set of clusters  Important distinction between hierarchical and partitional sets of clusters  Partitional Clustering  A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset  Hierarchical clustering  A set of nested clusters organized as a hierarchical tree 5

Partitional Clustering: the original points and a partitional clustering of them (figure). 6

Hierarchical Clustering: traditional and non-traditional hierarchical clusterings with their corresponding dendrograms (figure). 7
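
For a concrete, runnable illustration of these two types of clusterings (the use of Python and scikit-learn here is an assumption for illustration, not part of the original slides), the sketch below produces a partitional clustering with k-means and a hierarchical (agglomerative) clustering cut into the same number of clusters.

```python
# Illustrative sketch only (assumes numpy and scikit-learn are installed).
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

# Three well-separated blobs of 2-D points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in (0.0, 3.0, 6.0)])

# Partitional: every point is assigned to exactly one of k clusters.
partitional_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Hierarchical: a tree of nested clusters, here cut at 3 clusters.
hierarchical_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

print(partitional_labels[:10])
print(hierarchical_labels[:10])
```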

Other Distinctions Between Sets of Clusters  Exclusive versus non-exclusive  In non-exclusive clusterings, points may belong to multiple clusters.  Can represent multiple classes or ‘border’ points  Fuzzy versus non-fuzzy  In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1  Weights must sum to 1  Probabilistic clustering has similar characteristics  Partial versus complete  In some cases, we only want to cluster some of the data  Heterogeneous versus homogeneous  Clusters of widely different sizes, shapes, and densities 8

Types of Clusters  Well-separated clusters  Center-based clusters  Contiguous clusters  Density-based clusters  Property or Conceptual  Described by an Objective Function 9

Types of Clusters: Well-Separated  Well-Separated Clusters:  A cluster is a set of points such that any point in a cluster is closer (or more similar) to every other point in the cluster than to any point not in the cluster. 3 well-separated clusters 10

Types of Clusters: Center-Based  Center-based  A cluster is a set of objects such that an object in a cluster is closer (more similar) to the “center” of a cluster than to the center of any other cluster  The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most “representative” point of a cluster 4 center-based clusters 11

Types of Clusters: Contiguity-Based  Contiguous Cluster (Nearest neighbor or Transitive)  A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster. 8 contiguous clusters 12

Types of Clusters: Density-Based  Density-based  A cluster is a dense region of points, which is separated by low-density regions from other regions of high density.  Used when the clusters are irregular or intertwined, and when noise and outliers are present. 6 density-based clusters 13
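
As an illustration only (the DBSCAN algorithm and scikit-learn are assumptions here, not part of this slide), a density-based method labels dense regions as clusters and marks low-density points as noise (label -1):

```python
# Illustrative sketch (assumes scikit-learn). DBSCAN is one density-based
# method: points in dense regions form clusters, sparse points get label -1.
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)  # two intertwined shapes
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(set(labels))  # e.g. {0, 1}, plus -1 for any noise points
```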

Types of Clusters: Conceptual Clusters  Shared Property or Conceptual Clusters  Finds clusters that share some common property or represent a particular concept. 2 Overlapping Circles 14

Types of Attributes 15

Types of Attributes by Measurement Scale  Categorical (Qualitative) Attribute  Nominal Examples: ID numbers, eye color, zip codes  Ordinal Examples: rankings (e.g., taste of potato chips on a scale from 1-10), grades, height in {tall, medium, short}  Numeric (Quantitative) Attribute  Interval Examples: calendar dates, temperatures in Celsius or Fahrenheit.  Ratio Examples: temperature in Kelvin, length, time, counts 16

Properties of Attribute Values  The type of an attribute depends on which of the following properties it possesses:  Distinctness: =, ≠  Order: <, ≤, >, ≥  Addition: +, -  Multiplication: *, /  Nominal attribute: distinctness  Ordinal attribute: distinctness & order  Interval attribute: distinctness, order & addition  Ratio attribute: all 4 properties 17

Attribute Type / Description / Examples / Operations:
Nominal: The values of a nominal attribute are just different names, i.e., nominal attributes provide only enough information to distinguish one object from another (=, ≠). Examples: zip codes, employee ID numbers, eye color, sex: {male, female}. Operations: mode, entropy, contingency correlation, χ2 test.
Ordinal: The values of an ordinal attribute provide enough information to order objects (<, >). Examples: hardness of minerals, {good, better, best}, grades, street numbers. Operations: median, percentiles, rank correlation, run tests, sign tests.
Interval: For interval attributes, the differences between values are meaningful, i.e., a unit of measurement exists (+, -). Examples: calendar dates, temperature in Celsius or Fahrenheit. Operations: mean, standard deviation, Pearson's correlation, t and F tests.
Ratio: For ratio variables, both differences and ratios are meaningful (*, /). Examples: temperature in Kelvin, monetary quantities, counts, age, mass, length, electrical current. Operations: geometric mean, harmonic mean, percent variation. 18

Attribute Level / Transformation / Comments:
Nominal: Any permutation of values. If all employee ID numbers were reassigned, would it make any difference?
Ordinal: An order-preserving change of values, i.e., new_value = f(old_value) where f is a monotonic function. An attribute encompassing the notion of good, better, best can be represented equally well by the values {1, 2, 3} or by {0.5, 1, 10}.
Interval: new_value = a * old_value + b, where a and b are constants. Thus, the Fahrenheit and Celsius temperature scales differ in terms of where their zero value is and the size of a unit (degree).
Ratio: new_value = a * old_value. Length can be measured in meters or feet. 19

Type of Attributes by Number of Values  Discrete Attribute  Has only a finite or countably infinite set of values  Examples: zip codes, counts, or the set of words in a collection of documents  Often represented as integer variables.  Note: binary attributes are a special case of discrete attributes  Continuous Attribute  Has real numbers as attribute values  Examples: temperature, height, or weight.  Practically, real values can only be measured and represented using a finite number of digits.  Continuous attributes are typically represented as floating-point variables. 20

Types of Attributes  By measurement scale  Categorical (Qualitative) Attribute Nominal Ordinal  Numeric (Quantitative) Attribute Interval Ratio  By number of values  Discrete Attribute  Continuous Attribute 21

Similarity and Dissimilarity 22

Similarity and Dissimilarity  Similarity  Numerical measure of how alike two data objects are.  Is higher when objects are more alike.  Often falls in the range [0,1]  Dissimilarity  Numerical measure of how different two data objects are  Lower when objects are more alike  Minimum dissimilarity is often 0  Upper limit varies  Proximity refers to a similarity or dissimilarity 23

Similarity and Dissimilarity  Similarity and Dissimilarity of Simple Attributes  Dissimilarity between Objects:  Distance  Set Difference  …  Similarity between Objects:  Binary Vectors  Vectors  … 24

Similarity/Dissimilarity for Simple Attributes p and q are the attribute values for two data objects. 25
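
The per-attribute definitions summarized on this slide follow the usual textbook conventions; the sketch below is a minimal rendering of those common conventions (the specific similarity choices, such as s = 1/(1 + d) for interval attributes, are assumptions): nominal attributes compare by equality, ordinal attributes by a normalized difference of ranks, and interval or ratio attributes by absolute difference.

```python
# A minimal sketch (assumed conventions, not the slide's exact table):
# dissimilarity d and similarity s for a single attribute value pair (p, q).

def nominal_proximity(p, q):
    # Nominal: only equal/unequal is meaningful.
    d = 0 if p == q else 1
    return d, 1 - d

def ordinal_proximity(p, q, n_levels):
    # Ordinal: values are ranks 0..n_levels-1; normalize the rank gap.
    d = abs(p - q) / (n_levels - 1)
    return d, 1 - d

def interval_proximity(p, q):
    # Interval/ratio: absolute difference; one common similarity is 1/(1+d).
    d = abs(p - q)
    return d, 1 / (1 + d)

print(nominal_proximity("red", "blue"))      # (1, 0)
print(ordinal_proximity(1, 3, n_levels=5))   # (0.5, 0.5)
print(interval_proximity(20.0, 23.5))        # (3.5, ~0.222)
```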

Dissimilarity between Data Objects: Euclidean Distance  Euclidean Distance: $dist(p, q) = \sqrt{\sum_{k=1}^{n} (p_k - q_k)^2}$, where n is the number of dimensions (attributes) and $p_k$ and $q_k$ are, respectively, the kth attributes (components) of data objects p and q.  Standardization is necessary if scales differ. 26
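
A minimal sketch (not from the slides) of the Euclidean distance formula above, for two equal-length numeric vectors p and q:

```python
# A minimal sketch: Euclidean distance between two equal-length vectors.
import math

def euclidean_distance(p, q):
    # Square root of the sum of squared per-dimension differences.
    return math.sqrt(sum((pk - qk) ** 2 for pk, qk in zip(p, q)))

print(euclidean_distance([0, 2], [2, 0]))  # 2.828...
```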

Euclidean Distance: example points and the resulting pairwise distance matrix (figure). 27

Minkowski Distance  Minkowski Distance is a generalization of Euclidean Distance: $dist(p, q) = \left( \sum_{k=1}^{n} |p_k - q_k|^r \right)^{1/r}$, where r is a parameter, n is the number of dimensions (attributes) and $p_k$ and $q_k$ are, respectively, the kth attributes (components) of data objects p and q. 28

Minkowski Distance: Examples  r = 1. City block (Manhattan, taxicab, L1 norm) distance.  A common example of this is the Hamming distance, which is just the number of bits that are different between two binary vectors  r = 2. Euclidean distance  r → ∞. “supremum” (Lmax norm, L∞ norm) distance.  This is the maximum difference between any component of the vectors  Do not confuse r with n, i.e., all these distances are defined for all numbers of dimensions. 29
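
A minimal sketch (not from the slides) of Minkowski distance for r = 1, r = 2, and the r → ∞ supremum case:

```python
# A minimal sketch: Minkowski distance with parameter r, including the
# r -> infinity (supremum) special case.
import math

def minkowski_distance(p, q, r):
    diffs = [abs(pk - qk) for pk, qk in zip(p, q)]
    if math.isinf(r):
        return max(diffs)               # L-infinity / supremum distance
    return sum(d ** r for d in diffs) ** (1.0 / r)

p, q = [0, 2], [2, 0]
print(minkowski_distance(p, q, 1))          # 4 (city block)
print(minkowski_distance(p, q, 2))          # 2.828... (Euclidean)
print(minkowski_distance(p, q, math.inf))   # 2 (supremum)
```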

Minkowski Distance: distance matrices for the example points (figure). 30

Common Properties of a Distance  Distances, such as the Euclidean distance, have some well-known properties. 1. d(p, q) ≥ 0 for all p and q, and d(p, q) = 0 only if p = q. (Positive definiteness) 2. d(p, q) = d(q, p) for all p and q. (Symmetry) 3. d(p, r) ≤ d(p, q) + d(q, r) for all points p, q, and r. (Triangle Inequality) where d(p, q) is the distance (dissimilarity) between points (data objects) p and q.  A distance that satisfies these properties is a metric 31
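
As an illustration only (this check is not on the slides), the sketch below spot-checks the three metric properties for Euclidean distance on a few sample points.

```python
# Illustrative spot-check (not from the slides) of the three metric
# properties for Euclidean distance on a handful of sample points.
import itertools
import math

def euclidean_distance(p, q):
    return math.sqrt(sum((pk - qk) ** 2 for pk, qk in zip(p, q)))

points = [(0, 0), (3, 4), (6, 8), (1, 1)]
for p, q, r in itertools.permutations(points, 3):
    d_pq, d_qp = euclidean_distance(p, q), euclidean_distance(q, p)
    assert d_pq >= 0                   # positive definiteness
    assert math.isclose(d_pq, d_qp)    # symmetry
    # triangle inequality (small tolerance for floating-point error)
    assert euclidean_distance(p, r) <= d_pq + euclidean_distance(q, r) + 1e-9
print("metric properties hold on the sample points")
```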

Common Properties of a Similarity  Similarities also have some well-known properties. 1. s(p, q) = 1 (or maximum similarity) only if p = q. 2. s(p, q) = s(q, p) for all p and q. (Symmetry) where s(p, q) is the similarity between points (data objects) p and q. 32

Similarity Between Binary Vectors  A common situation is that objects, p and q, have only binary attributes  Compute similarities using the following quantities: M01 = the number of attributes where p was 0 and q was 1 M10 = the number of attributes where p was 1 and q was 0 M00 = the number of attributes where p was 0 and q was 0 M11 = the number of attributes where p was 1 and q was 1  Simple Matching and Jaccard Coefficients SMC = number of matches / number of attributes = (M11 + M00) / (M01 + M10 + M11 + M00) J = number of 11 matches / number of not-both-zero attribute values = (M11) / (M01 + M10 + M11) 33

SMC versus Jaccard: Example p = 1 0 0 0 0 0 0 0 0 0 q = 0 0 0 0 0 0 1 0 0 1 M01 = 2 (the number of attributes where p was 0 and q was 1) M10 = 1 (the number of attributes where p was 1 and q was 0) M00 = 7 (the number of attributes where p was 0 and q was 0) M11 = 0 (the number of attributes where p was 1 and q was 1) SMC = (M11 + M00) / (M01 + M10 + M11 + M00) = (0 + 7) / (2 + 1 + 0 + 7) = 0.7 J = (M11) / (M01 + M10 + M11) = 0 / (2 + 1 + 0) = 0 34
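
A minimal sketch (not from the slides) that computes SMC and the Jaccard coefficient for two binary vectors; on the example above it reproduces SMC = 0.7 and J = 0:

```python
# A minimal sketch: SMC and Jaccard coefficient for binary vectors.
def smc_and_jaccard(p, q):
    m11 = sum(1 for a, b in zip(p, q) if a == 1 and b == 1)
    m00 = sum(1 for a, b in zip(p, q) if a == 0 and b == 0)
    m10 = sum(1 for a, b in zip(p, q) if a == 1 and b == 0)
    m01 = sum(1 for a, b in zip(p, q) if a == 0 and b == 1)
    smc = (m11 + m00) / (m01 + m10 + m11 + m00)
    jaccard = m11 / (m01 + m10 + m11) if (m01 + m10 + m11) > 0 else 0.0
    return smc, jaccard

p = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
q = [0, 0, 0, 0, 0, 0, 1, 0, 0, 1]
print(smc_and_jaccard(p, q))  # (0.7, 0.0)
```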

Cosine Similarity  If d1 and d2 are two document vectors, then cos(d1, d2) = (d1 · d2) / (||d1|| ||d2||), where · indicates the vector dot product and ||d|| is the length of vector d.  Example: d1 = 3 2 0 5 0 0 0 2 0 0 d2 = 1 0 0 0 0 0 0 1 0 2 d1 · d2 = 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 + 0*0 + 0*2 = 5 ||d1|| = (3*3 + 2*2 + 0*0 + 5*5 + 0*0 + 0*0 + 0*0 + 2*2 + 0*0 + 0*0)^0.5 = (42)^0.5 = 6.481 ||d2|| = (1*1 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 1*1 + 0*0 + 2*2)^0.5 = (6)^0.5 = 2.449 cos(d1, d2) = 5 / (6.481 * 2.449) ≈ 0.315
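
A minimal sketch (not from the slides) of cosine similarity; on the document vectors above it reproduces the value of about 0.315:

```python
# A minimal sketch: cosine similarity of two numeric vectors.
import math

def cosine_similarity(d1, d2):
    dot = sum(a * b for a, b in zip(d1, d2))
    norm1 = math.sqrt(sum(a * a for a in d1))
    norm2 = math.sqrt(sum(b * b for b in d2))
    return dot / (norm1 * norm2)

d1 = [3, 2, 0, 5, 0, 0, 0, 2, 0, 0]
d2 = [1, 0, 0, 0, 0, 0, 0, 1, 0, 2]
print(cosine_similarity(d1, d2))  # ~0.3150
```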

General Approach for Combining Similarities  Sometimes attributes are of many different types, but an overall similarity is needed. 36

Using Weights to Combine Similarities  May not want to treat all attributes the same.  Use weights w_k which are between 0 and 1 and sum to 1. 37
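
One common way to combine similarities across attributes of different types (a textbook-style approach assumed here, not necessarily the exact scheme from lecture) is to compute a per-attribute similarity s_k in [0, 1], weight it with w_k, and take a weighted average over the attributes. A minimal sketch:

```python
# A hedged sketch of one common way to combine per-attribute similarities
# (assumed scheme, not necessarily the exact one on the original slides).
def combined_similarity(p, q, attribute_types, weights=None):
    n = len(p)
    weights = weights or [1.0 / n] * n   # default: equal weights summing to 1
    weighted_sims = []
    for k in range(n):
        if attribute_types[k] == "nominal":
            s_k = 1.0 if p[k] == q[k] else 0.0   # equal/unequal only
        else:  # numeric (interval/ratio): map distance into [0, 1]
            s_k = 1.0 / (1.0 + abs(p[k] - q[k]))
        weighted_sims.append(weights[k] * s_k)
    return sum(weighted_sims) / sum(weights)

p = ["red", 1.80, 30]
q = ["blue", 1.75, 32]
types = ["nominal", "numeric", "numeric"]
print(combined_similarity(p, q, types))                           # equal weights
print(combined_similarity(p, q, types, weights=[0.2, 0.5, 0.3]))  # custom weights
```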