Chapter 2: Getting to Know Your Data


Chapter 2: Getting to Know Your Data
- Data Objects and Attribute Types
- Basic Statistical Descriptions of Data
- Measuring Data Similarity and Dissimilarity
- Summary

Similarity and Dissimilarity
Similarity
- Numerical measure of how alike two data objects are
- Value is higher when objects are more alike
- Often falls in the range [0, 1]
Dissimilarity (e.g., distance)
- Numerical measure of how different two data objects are
- Lower when objects are more alike
- Minimum dissimilarity is often 0
- Upper limit varies
Proximity refers to either a similarity or a dissimilarity
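A minimal sketch of how a dissimilarity can be turned into a similarity score in (0, 1]; the mapping sim = 1 / (1 + d) is one common convention, not the only one, and the function names are illustrative.

```python
# Illustrative conversion between dissimilarity and similarity.
# sim = 1 / (1 + d) maps d = 0 to similarity 1 and large d toward 0.

def distance_to_similarity(d: float) -> float:
    """Map a dissimilarity d >= 0 to a similarity in (0, 1]."""
    return 1.0 / (1.0 + d)

def similarity_to_distance(s: float) -> float:
    """Inverse mapping: similarity in (0, 1] back to a dissimilarity."""
    return 1.0 / s - 1.0

if __name__ == "__main__":
    for d in (0.0, 0.5, 2.0):
        print(d, "->", round(distance_to_similarity(d), 3))
```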

Data Matrix and Dissimilarity Matrix
Data matrix
- n data points (objects) with p dimensions or variables (also called measurements or attributes), e.g., age, height, and weight when the n objects are persons
- Stored as an n-by-p matrix
Dissimilarity matrix
- n data points, but registers only the distances between them
- A triangular n-by-n matrix in which d(i, j) is the measured difference or dissimilarity between objects i and j
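A minimal sketch of the two structures using NumPy and SciPy; the sample values are made up purely for illustration.

```python
# Build an n x p data matrix and its n x n dissimilarity matrix
# using Euclidean distance. The sample values are illustrative only.
import numpy as np
from scipy.spatial.distance import pdist, squareform

# n = 4 objects, p = 2 attributes
X = np.array([[1.0, 2.0],
              [3.0, 5.0],
              [2.0, 0.0],
              [4.0, 5.0]])

D = squareform(pdist(X, metric="euclidean"))  # symmetric, zero diagonal
print(np.round(D, 2))
```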

Proximity Measure for Nominal Attributes
A nominal attribute can take 2 or more states, e.g., red, yellow, blue, green (a generalization of a binary attribute)
Method 1: Simple matching
- d(i, j) = (p - m) / p, where m is the number of matches and p is the total number of variables
Method 2: Use a large number of binary attributes
- Create a new binary attribute for each of the M nominal states
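A short sketch of simple matching, d(i, j) = (p - m) / p; the example records are invented for illustration.

```python
# Simple-matching dissimilarity for nominal attributes: fraction of
# attributes on which the two objects differ.

def nominal_dissimilarity(x, y):
    assert len(x) == len(y)
    p = len(x)                                        # total number of variables
    m = sum(1 for a, b in zip(x, y) if a == b)        # number of matches
    return (p - m) / p

obj_i = ["red", "small", "round"]
obj_j = ["red", "large", "round"]
print(round(nominal_dissimilarity(obj_i, obj_j), 3))  # 1 mismatch of 3 -> 0.333
```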

Proximity Measure for Binary Attributes
A 2-by-2 contingency table for binary data (objects i and j):
- q = number of attributes where i = 1 and j = 1
- r = number of attributes where i = 1 and j = 0
- s = number of attributes where i = 0 and j = 1
- t = number of attributes where i = 0 and j = 0
- p = q + r + s + t (total number of attributes)
Distance measure for symmetric binary variables: d(i, j) = (r + s) / (q + r + s + t)
Distance measure for asymmetric binary variables: d(i, j) = (r + s) / (q + r + s)
Jaccard coefficient (similarity measure for asymmetric binary variables): sim(i, j) = q / (q + r + s)
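A minimal sketch of these three measures computed from the q/r/s/t contingency counts; inputs are 0/1 vectors and the helper names are illustrative.

```python
# Binary proximity measures from the 2x2 contingency counts q, r, s, t.

def binary_counts(x, y):
    q = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    r = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    s = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    t = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)
    return q, r, s, t

def symmetric_binary_distance(x, y):
    q, r, s, t = binary_counts(x, y)
    return (r + s) / (q + r + s + t)

def asymmetric_binary_distance(x, y):
    # Negative matches (t) are ignored for asymmetric binary attributes.
    q, r, s, _ = binary_counts(x, y)
    return (r + s) / (q + r + s)

def jaccard_similarity(x, y):
    q, r, s, _ = binary_counts(x, y)
    return q / (q + r + s)
```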

Dissimilarity between Binary Variables
Example: patient records with a gender attribute and several binary medical-test attributes
- Gender is a symmetric attribute
- The remaining attributes are asymmetric binary
- Let the values Y and P be 1, and the value N be 0
- Compute the dissimilarity between patients (e.g., Jim and Mary) using the asymmetric binary distance over the test attributes
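A hedged worked sketch of this example, assuming the commonly used three-patient table (Jack, Jim, Mary) with Y/P mapped to 1 and N to 0; the 0/1 vectors below are illustrative reconstructions and should be checked against the original table.

```python
# Asymmetric binary dissimilarity on three hypothetical patient records
# (fever, cough, test-1 .. test-4, with Y/P -> 1 and N -> 0).

def asymmetric_binary_distance(x, y):
    q = sum(a == 1 and b == 1 for a, b in zip(x, y))
    r = sum(a == 1 and b == 0 for a, b in zip(x, y))
    s = sum(a == 0 and b == 1 for a, b in zip(x, y))
    return (r + s) / (q + r + s)

patients = {
    "Jack": [1, 0, 1, 0, 0, 0],
    "Jim":  [1, 1, 0, 0, 0, 0],
    "Mary": [1, 0, 1, 0, 1, 0],
}

print(round(asymmetric_binary_distance(patients["Jack"], patients["Mary"]), 2))  # 0.33
print(round(asymmetric_binary_distance(patients["Jack"], patients["Jim"]), 2))   # 0.67
print(round(asymmetric_binary_distance(patients["Jim"], patients["Mary"]), 2))   # 0.75
```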

Standardizing Numeric Data
Z-score: z = (x - μ) / σ
- x: raw score to be standardized, μ: mean of the population, σ: standard deviation
- Measures the distance between the raw score and the population mean in units of the standard deviation
- Negative when the raw score is below the mean, positive when above
An alternative: use the mean absolute deviation
- s_f = (1/n) (|x_1f - m_f| + |x_2f - m_f| + ... + |x_nf - m_f|), where m_f is the mean of attribute f
- Standardized measure (modified z-score): z_if = (x_if - m_f) / s_f
- Using the mean absolute deviation is more robust to outliers than using the standard deviation
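A small sketch contrasting the two standardizations on one numeric attribute; the sample values (including the outlier) are illustrative only.

```python
# Z-score vs. mean-absolute-deviation standardization for one attribute.
import numpy as np

x = np.array([20.0, 30.0, 40.0, 50.0, 200.0])  # note the outlier

mean = x.mean()
std = x.std()                    # standard deviation
mad = np.mean(np.abs(x - mean))  # mean absolute deviation

z_score = (x - mean) / std
z_mad = (x - mean) / mad         # less dominated by the outlier

print(np.round(z_score, 2))
print(np.round(z_mad, 2))
```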

Example: Data Matrix and Dissimilarity Matrix (with Euclidean Distance)

Distance on Numeric Data: Minkowski Distance
Minkowski distance: a popular distance measure
d(i, j) = (|xi1 - xj1|^h + |xi2 - xj2|^h + ... + |xip - xjp|^h)^(1/h)
where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p-dimensional data objects, and h is the order (the distance so defined is also called the L-h norm)
Properties
- d(i, j) > 0 if i ≠ j, and d(i, i) = 0 (positive definiteness)
- d(i, j) = d(j, i) (symmetry)
- d(i, j) ≤ d(i, k) + d(k, j) (triangle inequality)
A distance that satisfies these properties is a metric

Special Cases of Minkowski Distance
h = 1: Manhattan (city block, L1 norm) distance
- d(i, j) = |xi1 - xj1| + |xi2 - xj2| + ... + |xip - xjp|
- E.g., the Hamming distance: the number of bits that differ between two binary vectors
h = 2: Euclidean (L2 norm) distance
- d(i, j) = (|xi1 - xj1|^2 + |xi2 - xj2|^2 + ... + |xip - xjp|^2)^(1/2)
h → ∞: "supremum" (Lmax norm, L∞ norm) distance
- The maximum difference between any component (attribute) of the two vectors
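A minimal sketch of the general Minkowski distance with its three special cases; the sample vectors are illustrative only.

```python
# Minkowski distance for h = 1 (Manhattan), h = 2 (Euclidean), and the
# supremum (L-infinity) limit.
import numpy as np

def minkowski(x, y, h):
    """L-h norm of the difference vector; h = np.inf gives the supremum distance."""
    diff = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    if np.isinf(h):
        return diff.max()
    return (diff ** h).sum() ** (1.0 / h)

x1, x2 = (1, 2), (3, 5)
print(minkowski(x1, x2, 1))             # Manhattan: 5.0
print(round(minkowski(x1, x2, 2), 3))   # Euclidean: 3.606
print(minkowski(x1, x2, np.inf))        # Supremum: 3.0
```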

Example: Minkowski Distance
Dissimilarity matrices under the Manhattan (L1), Euclidean (L2), and supremum distances

Ordinal Variables
An ordinal variable can be discrete or continuous
Order is important, e.g., rank
Can be treated like interval-scaled variables:
- Replace x_if by its rank r_if ∈ {1, …, M_f}
- Map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by z_if = (r_if - 1) / (M_f - 1)
- Compute the dissimilarity using methods for interval-scaled variables

Example (ordinal attribute "test" on four objects)
- Object 1: excellent, object 2: fair, object 3: good, object 4: excellent
- Ranks (M = 3): 3, 1, 2, 3
- Normalize the ranking by mapping rank 1 to 0, rank 2 to 0.5, and rank 3 to 1
- Use Euclidean distance on the normalized values to obtain the dissimilarity matrix
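A sketch of this ordinal pipeline (values to ranks, ranks to [0, 1], then Euclidean distance), using the four "test" values listed above.

```python
# Ordinal attribute -> ranks -> [0, 1] -> Euclidean dissimilarity matrix.
import numpy as np
from scipy.spatial.distance import pdist, squareform

order = {"fair": 1, "good": 2, "excellent": 3}   # M = 3 ordered states
test = ["excellent", "fair", "good", "excellent"]

ranks = np.array([order[v] for v in test], dtype=float)
M = len(order)
z = (ranks - 1) / (M - 1)        # 1.0, 0.0, 0.5, 1.0

D = squareform(pdist(z.reshape(-1, 1), metric="euclidean"))
print(np.round(D, 2))
```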

Attributes of Mixed Type
A database may contain all attribute types: nominal, symmetric binary, asymmetric binary, numeric, ordinal
One may use a weighted formula to combine their effects:
d(i, j) = Σ_f δ_ij^(f) d_ij^(f) / Σ_f δ_ij^(f)
where δ_ij^(f) indicates whether attribute f contributes to the comparison of i and j, and d_ij^(f) is computed according to the attribute type:
- f is binary or nominal: d_ij^(f) = 0 if x_if = x_jf, and d_ij^(f) = 1 otherwise
- f is numeric: use the normalized distance
- f is ordinal: compute the ranks r_if, set z_if = (r_if - 1) / (M_f - 1), and treat z_if as interval-scaled
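A minimal sketch of the weighted combination; here δ is simply 1 when both values are present, and the attribute specification and sample objects are illustrative only.

```python
# Mixed-type dissimilarity: average per-attribute distances d_f over the
# attributes whose indicator delta_f is 1 (both values present).

def mixed_dissimilarity(x, y, types, ranges, M):
    """types[f] in {"nominal", "numeric", "ordinal"}; ranges[f] is the range
    of a numeric attribute, M[f] the number of states of an ordinal one."""
    num, den = 0.0, 0.0
    for f, (a, b) in enumerate(zip(x, y)):
        if a is None or b is None:          # delta = 0 when a value is missing
            continue
        if types[f] == "nominal":
            d = 0.0 if a == b else 1.0
        elif types[f] == "numeric":
            d = abs(a - b) / ranges[f]      # normalized distance
        else:                               # ordinal: a, b are ranks 1..M_f
            d = abs((a - 1) - (b - 1)) / (M[f] - 1)
        num += d
        den += 1.0
    return num / den if den else 0.0

types = ["nominal", "numeric", "ordinal"]
ranges = {1: 100.0}     # numeric attribute spans 100 units
M = {2: 3}              # ordinal attribute has 3 states
x = ["red", 35.0, 3]
y = ["blue", 60.0, 2]
print(round(mixed_dissimilarity(x, y, types, ranges, M), 3))  # 0.583
```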

Cosine Similarity
A document can be represented by thousands of attributes, each recording the frequency of a particular word (such as a keyword) or phrase in the document
Other vector objects: gene features in micro-arrays, …
Applications: information retrieval, biologic taxonomy, gene feature mapping, …
Cosine measure: if d1 and d2 are two vectors (e.g., term-frequency vectors), then
cos(d1, d2) = (d1 • d2) / (||d1|| ||d2||)
where • indicates the vector dot product and ||d|| is the length (Euclidean norm) of vector d

Example: Cosine Similarity
cos(d1, d2) = (d1 • d2) / (||d1|| ||d2||), where • indicates the vector dot product and ||d|| is the length of vector d
Ex: find the similarity between documents 1 and 2.
d1 = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0)
d2 = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)
d1 • d2 = 5*3 + 0*0 + 3*2 + 0*0 + 2*1 + 0*1 + 0*0 + 2*1 + 0*0 + 0*1 = 25
||d1|| = (5*5 + 0*0 + 3*3 + 0*0 + 2*2 + 0*0 + 0*0 + 2*2 + 0*0 + 0*0)^0.5 = 42^0.5 ≈ 6.481
||d2|| = (3*3 + 0*0 + 2*2 + 0*0 + 1*1 + 1*1 + 0*0 + 1*1 + 0*0 + 1*1)^0.5 = 17^0.5 ≈ 4.123
cos(d1, d2) = 25 / (6.481 * 4.123) ≈ 0.94
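A quick sketch that checks this worked example numerically with NumPy.

```python
# Verify the cosine-similarity example above.
import numpy as np

d1 = np.array([5, 0, 3, 0, 2, 0, 0, 2, 0, 0], dtype=float)
d2 = np.array([3, 0, 2, 0, 1, 1, 0, 1, 0, 1], dtype=float)

cos = d1.dot(d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
print(round(cos, 2))  # 0.94
```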

Chapter 2: Getting to Know Your Data
- Data Objects and Attribute Types
- Basic Statistical Descriptions of Data
- Measuring Data Similarity and Dissimilarity
- Summary

Summary
- Data attribute types: nominal, binary, ordinal, interval-scaled, ratio-scaled
- Many types of data sets, e.g., numerical, text, graph, Web, image
- Gain insight into the data by:
  - Basic statistical data description: central tendency, dispersion, graphical displays
  - Data visualization: mapping data onto graphical primitives
  - Measuring data similarity
- These steps are the beginning of data preprocessing; many methods have been developed, but it remains an active area of research