
1 Artificial Intelligence: Distance Measure
Dae-Won Kim
School of Computer Science & Engineering, Chung-Ang University

2 Data mining is the process of discovering unknown relationships between patterns.

3 Thus, many data mining techniques are based on similarity between patterns.

4 We need to define what we mean by similar, so that we can calculate formal similarity measures.

5 Similarity is obtained from vectors of measurements (features) describing each pattern.

6 Instead of talking about how similar two objects are, we could talk about how dissimilar they are.

7 Given either one, we can easily define the other by applying a monotonically decreasing transformation.

8 e.g., given a similarity S(x,y):
Dissimilarity D(x,y) = 1 − S(x,y)
Dissimilarity D(x,y) = 1 / S(x,y)

9 The term distance is often used informally to refer to a dissimilarity measure.

10 The Euclidean distance: D(x,y) = sqrt( Σ_i (x_i − y_i)^2 )
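A minimal Python sketch of this formula (the function name euclidean and the sample points are illustrative, not from the slides):

import math

def euclidean(x, y):
    # Square the per-feature differences, sum them, and take the square root.
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

print(euclidean([0, 0], [3, 4]))  # 5.0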

11 It is simple and effective, and it is a metric.
Q: What happens if a dissimilarity measure is not a metric?

12 A metric is a dissimilarity measure that satisfies three conditions:

13 D(x,y) ≥ 0 and D(x,x) = 0
D(x,y) = D(y,x)
D(x,y) ≤ D(x,k) + D(k,y) (triangle inequality)

14 However, the Euclidean distance has some limitations.

15 Limit 1. What if the features were measured using different units? (e.g., length, weight)

16 We can make all features equally important by normalizing or standardizing them.
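A sketch of both options with NumPy (the data values are made up for illustration):

import numpy as np

# Hypothetical patterns: height in cm, weight in kg (very different scales).
X = np.array([[150.0, 50.0],
              [180.0, 90.0],
              [165.0, 70.0]])

# Min-max normalization: rescale each feature to the range [0, 1].
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Standardization (z-score): give each feature mean 0 and standard deviation 1.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)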

17 Or, if we have some idea of the relative importance for each feature, then we can weight them.

18 The weighted Euclidean distance: D(x,y) = sqrt( Σ_i w_i (x_i − y_i)^2 )
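The same sketch with per-feature weights (the weights here are arbitrary; in practice they come from domain knowledge):

import math

def weighted_euclidean(x, y, w):
    # w holds one importance weight per feature.
    return math.sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))

print(weighted_euclidean([1, 2], [4, 6], [0.9, 0.1]))  # feature 1 dominates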

19 Limit 2. What if the shape of each class is not a hypersphere?

20 Two pairs of data in the same class can yield different distance values.

21 The Mahalanobis distance: D(x,y) = sqrt( (x − y)^T Σ^(-1) (x − y) ), where Σ is the covariance matrix of the data.
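A NumPy sketch, assuming the covariance matrix is estimated from a small made-up sample:

import numpy as np

def mahalanobis(x, y, cov):
    # Scale the difference vector by the inverse covariance of the data.
    diff = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

X = np.array([[2.0, 2.1], [3.0, 3.2], [4.0, 3.9], [5.0, 5.1]])
cov = np.cov(X, rowvar=False)      # estimate covariance from the sample
print(mahalanobis(X[0], X[3], cov))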

22 The Euclidean distance is generalized to the Minkowski distance (L_p): D(x,y) = ( Σ_i |x_i − y_i|^p )^(1/p)

23 The Euclidean distance is the special case of p = 2.
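A sketch of the general form (the example points are illustrative):

def minkowski(x, y, p):
    # p = 1 gives the Manhattan distance; p = 2 gives the Euclidean distance.
    return sum(abs(xi - yi) ** p for xi, yi in zip(x, y)) ** (1.0 / p)

x, y = [0, 0], [3, 4]
print(minkowski(x, y, 1))  # 7.0 (Manhattan)
print(minkowski(x, y, 2))  # 5.0 (Euclidean)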

24 The sample (Pearson) correlation coefficient: r(x,y) = Σ_i (x_i − mean(x))(y_i − mean(y)) / ( sqrt( Σ_i (x_i − mean(x))^2 ) · sqrt( Σ_i (y_i − mean(y))^2 ) )

25 The Cosine distance: D(x,y) = 1 − (x · y) / ( ||x|| ||y|| )
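A quick sketch of both measures (the function names are our own). Note that a scaled copy of a vector has near-zero cosine and correlation distance even though its Euclidean distance is large:

import numpy as np

def correlation_distance(x, y):
    # Small when the two profiles rise and fall together.
    return 1.0 - float(np.corrcoef(x, y)[0, 1])

def cosine_distance(x, y):
    # Small when the two vectors point in the same direction.
    x, y = np.asarray(x, float), np.asarray(y, float)
    return 1.0 - float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

x, y = [1, 2, 3], [2, 4, 6]        # same direction, different magnitude
print(cosine_distance(x, y))        # ~0.0
print(correlation_distance(x, y))   # ~0.0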

26 Q1: Euclidean vs. Correlation vs. Cosine distance
Q2: Which one of the three is used in Google?

27 We are now discussing how to compute the similarity/distance between two categorical patterns.

28 For binary categorical data, we can count the number of features on which two patterns take the same or take different values.

29 Rather than measuring the dissimilarities between patterns, we often measure the similarities.

30 The simple matching coefficient: SMC(x,y) = (f11 + f00) / (f00 + f01 + f10 + f11), where fab is the number of features on which x takes value a and y takes value b.

31 It may be inappropriate to include the (x:0, y:0) or (x:1, y:1) matches, depending on the meaning of 0 and 1.

32 The Jaccard coefficient: J(x,y) = f11 / (f01 + f10 + f11), which ignores the (x:0, y:0) matches.
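A sketch computing both coefficients from the four counts (the binary vectors are made up):

def binary_similarities(x, y):
    # Count the four agreement/disagreement types over the features.
    f11 = sum(a == 1 and b == 1 for a, b in zip(x, y))
    f00 = sum(a == 0 and b == 0 for a, b in zip(x, y))
    f10 = sum(a == 1 and b == 0 for a, b in zip(x, y))
    f01 = sum(a == 0 and b == 1 for a, b in zip(x, y))
    smc = (f11 + f00) / (f11 + f00 + f10 + f01)
    jaccard = f11 / (f11 + f10 + f01)   # 0-0 matches are ignored
    return smc, jaccard

print(binary_similarities([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))  # (0.6, 0.5)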

33 How about categorical data in which the features have more than two categories?

34 The Hamming distance: D(x,y) = the number of features on which x and y take different values.
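A sketch for multi-category features (the example values are illustrative):

def hamming(x, y):
    # Count the features on which the two patterns disagree.
    return sum(xi != yi for xi, yi in zip(x, y))

print(hamming(["red", "small", "round"], ["green", "small", "oval"]))  # 2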

35 For example, what is your approach to mixed-type data (numeric and categorical features)?

36 You need to develop a good distance measure for mixed data.

37 Approach 1. Preprocessing

38 Preprocess the mixed data into a single-type data.

39 1. Discretize the numeric features, or
2. encode the categorical features as numeric integer values.

40 However, this approach often leads to loss of important information.

41 Approach 2. Mixed-data distance

42 Let us mix two distance measures:
D(x,y) = Euclidean(x,y) + Hamming(x,y)

43 What would the issues be?

44 D(x,y) = Normalized Euclidean(x,y) + Normalized Hamming(x,y)
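One possible sketch of such a mix, assuming each numeric feature's range is known so that both parts fall in [0, 1] (the function name and the ranges argument are our own):

import math

def mixed_distance(x_num, y_num, x_cat, y_cat, ranges):
    # Numeric part: range-normalized Euclidean distance, scaled into [0, 1].
    num = math.sqrt(sum(((a - b) / r) ** 2
                        for a, b, r in zip(x_num, y_num, ranges)) / len(x_num))
    # Categorical part: Hamming distance divided by the number of features.
    cat = sum(a != b for a, b in zip(x_cat, y_cat)) / len(x_cat)
    return num + cat

print(mixed_distance([180.0], [160.0], ["red"], ["green"], ranges=[50.0]))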

45 Tip: HVDM (Heterogeneous Value Difference Metric) is a good starting point.

46 Two values are considered closer if they have more similar classifications (labels).

47 Suppose a feature color has three values (red, green, blue), and we are identifying whether an object is an apple.

48 Red and green would be considered closer than red and blue, because the former two have similar correlations with the output class apple.
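A sketch of the Value Difference Metric for a single categorical feature, on made-up data matching the apple example (the function name vdm and the data are our own):

from collections import Counter

def vdm(values, labels, a, b, q=1):
    # Two values are close when their class-label distributions are similar.
    classes = set(labels)
    count_a = Counter(l for v, l in zip(values, labels) if v == a)
    count_b = Counter(l for v, l in zip(values, labels) if v == b)
    n_a, n_b = sum(count_a.values()), sum(count_b.values())
    return sum(abs(count_a[c] / n_a - count_b[c] / n_b) ** q for c in classes)

color = ["red", "red", "green", "green", "blue", "blue"]
apple = ["yes", "yes", "yes", "no", "no", "no"]
print(vdm(color, apple, "red", "green"))  # 1.0: closer
print(vdm(color, apple, "red", "blue"))   # 2.0: farther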

49 Q: Distance measure in Project II-2

50 The 1st trial: 1NN using Euclidean distance for fixed-length data
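A minimal 1NN sketch for that first trial (math.dist requires Python 3.8+; the training data is made up):

import math

def one_nn(query, train):
    # Return the label of the training pattern nearest to the query.
    best = min(train, key=lambda pair: math.dist(pair[0], query))
    return best[1]

train = [([0.0, 0.0], "A"), ([5.0, 5.0], "B")]
print(one_nn([1.0, 0.5], train))  # "A"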

51 Two instances of the same character can show different signal shapes in time and magnitude.

52 Is the Euclidean distance working?

53 Two signals should be aligned and stretched in each axis.


55 Of the algorithms you are familiar with, which one is best to calculate the distance between two patterns?

56 Dynamic Programming

57 Dynamic Time Warping

58 DTW is the most widely used algorithm for measuring similarity between two signals that vary in time or speed; it warps the signals non-linearly in the time dimension.

59 As expected, it is easy to implement (dynamic programming).

60 DTW for 1-D time-series is simple.
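A classic dynamic-programming sketch for the 1-D case (the test sequences are made up; extending the cost term to 3-D points is the project's exercise):

import math

def dtw(s, t):
    # D[i][j] = cost of the best warping path aligning s[:i] with t[:j].
    n, m = len(s), len(t)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i][j] = cost + min(D[i - 1][j],        # stretch s
                                 D[i][j - 1],        # stretch t
                                 D[i - 1][j - 1])    # match one-to-one
    return D[n][m]

a = [0, 0, 1, 2, 1, 0]
b = [0, 1, 2, 2, 1, 0, 0]   # same shape, shifted and stretched in time
print(dtw(a, b))            # 0.0: the shapes align perfectly after warping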

61 Tip. Project-II contains 3-D time-series.

62 We expect good work from you.

