Introduction to Machine Learning


1 Introduction to Machine Learning
Data Science Workshop: Introduction to Machine Learning
Instructor: Dr. Eamonn Keogh
Computer Science & Engineering Department, 318 Winston Chung Hall
University of California - Riverside, Riverside, CA
Get the slides now!
Some slides adapted from Tan, Steinbach and Kumar, and from Chris Clifton

2 Machine Learning
Machine learning explores the study and construction of algorithms that can learn from data. Basic Idea: Instead of trying to create a very complex program to do X, use a (relatively) simple program that can learn to do X. Example: Instead of trying to program a car to drive (If light(red) && NOT(pedestrian) || speed(X) <= 12 && …), create a program that watches humans drive, and learns how to drive*. *Currently, self-driving cars do a bit of both.

3 Why Machine Learning I Why do machine learning instead of just writing an explicit program? It is often much cheaper, faster and more accurate. It may be possible to teach a computer something that we are not sure how to program. For example: We could explicitly write a program to tell if a person is obese: If (weight_kg / (height_m * height_m)) > 30, printf("Obese"). We would find it hard to write a program to tell if a person is sad. However, we could easily obtain 1,000 photographs of sad people / not sad people, and ask a machine learning algorithm to learn to tell them apart.
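As a minimal sketch (Python, with made-up numbers), the explicit obesity rule is a one-liner, while the "sad vs. not sad" task would instead be handed to a learning algorithm trained on labeled photographs:

```python
# Explicit rule: easy to write down by hand.
def is_obese(weight_kg: float, height_m: float) -> bool:
    bmi = weight_kg / (height_m * height_m)
    return bmi > 30

print(is_obese(95.0, 1.75))  # True (BMI is about 31)

# "Sad vs. not sad" has no obvious hand-written rule, so we would instead
# collect labeled photographs and let an algorithm learn one, e.g.
# (illustrative interface only, assuming features have already been extracted):
#   model.fit(photo_features, labels)
#   model.predict(new_photo_features)
```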

4 What kind of data do you want to work with?
Insects, Stars, Books, Mice, Counties, Historical manuscripts, People (as potential terrorists, as potential voters for your candidate, as potential heart attack victims, as potential tax cheats, etc.)

5 What kind of data do you want to work with?
Insects, Stars, Books, Mice, Counties, Historical manuscripts, People (as potential terrorists, as potential voters for your candidate, as potential heart attack victims, as potential tax cheats). No matter what kind of data you want to work with, it is best if you can "massage" it into a rectangular flat file. This may be easy, or…

6 What is Data? Collection of objects and their attributes
An attribute is a property or characteristic of an object. Examples: eye color of a person, temperature, etc. Attribute is also known as variable, field, characteristic, or feature. A collection of attributes describes an object. Objects are also known as records, points, cases, samples, entities, exemplars or instances. Objects could be a customer, a patient, a car, a country, a novel, a drug, a movie, etc. [Figure: a table whose columns are attributes and whose rows are objects]

7 Data Dimensionality and Numerosity
The number of attributes is the dimensionality of a dataset. The number of objects is the numerosity (or just size) of a dataset. Some of the algorithms we want to use may scale badly in the dimensionality, or scale badly in the numerosity (or both). As we will see, reducing the dimensionality and/or numerosity of data is a common task in data mining.
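A minimal sketch of this object/attribute view in Python (the attribute names and values are invented for illustration):

```python
# Each row (dict) is an object; each key other than "id" is an attribute.
dataset = [
    {"id": 1, "eye_color": "brown", "temperature_c": 36.8},
    {"id": 2, "eye_color": "green", "temperature_c": 38.1},
    {"id": 3, "eye_color": "gray",  "temperature_c": 37.0},
]

dimensionality = len(dataset[0]) - 1   # number of attributes (ignoring the id)
numerosity = len(dataset)              # number of objects
print(dimensionality, numerosity)      # 2 3
```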

8 The Classification Problem (informal definition)
Given a collection of annotated data (in this case 5 instances of Katydids and five of Grasshoppers), decide what type of insect the unlabeled example is. Katydid or Grasshopper?

9 The Classification Problem (informal definition)
Given a collection of annotated data (in this case 3 instances of Canadian coins and 3 of American), decide what type of coin the unlabeled example is. Canadian or American?

10 For any domain of interest, we can measure features
Color {Green, Brown, Gray, Other}, Has Wings?, Abdomen Length, Thorax Length, Antennae Length, Mandible Size, Spiracle Diameter, Leg Length

11 Sidebar 1 In data mining, we usually don't have a choice of what features to measure. The data is not usually collected with data mining in mind. The features we really want may not be available: Why? ____________________ We typically have to use (a subset of) whatever data we are given.

12 Sidebar 2 In data mining, we can sometimes generate new features.
For example, Feature X = Abdomen Length / Antennae Length. [Figure: an insect with its abdomen length and antennae length marked]
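A tiny sketch of this kind of feature generation (the measurements are made-up values):

```python
# Hypothetical measurements for one insect (arbitrary units).
abdomen_length = 4.2
antennae_length = 6.1

# A new, derived feature: the ratio of two measured features.
feature_x = abdomen_length / antennae_length
print(round(feature_x, 3))  # 0.689
```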

13 We can store features in a database.
My_Collection

Insect ID | Abdomen Length | Antennae Length | Insect Class
1  | 2.7 | 5.5 | Grasshopper
2  | 8.0 | 9.1 | Katydid
3  | 0.9 | 4.7 |
4  | 1.1 | 3.1 |
5  | 5.4 | 8.5 |
6  | 2.9 | 1.9 |
7  | 6.1 | 6.6 |
8  | 0.5 | 1.0 |
9  | 8.3 |     |
10 | 8.1 |     | Katydids

The classification problem can now be expressed as: Given a training database (My_Collection), predict the class label of a previously unseen instance.
Previously unseen instance = 11 | 5.1 | 7.0 | ???????

14 Grasshoppers and Katydids
[Figure: scatterplot of the insects, Antenna Length vs. Abdomen Length]

15 Grasshoppers and Katydids
We will also use this larger dataset as a motivating example… [Figure: scatterplot, Antenna Length vs. Abdomen Length] Each of these data objects is called an exemplar, a (training) example, an instance, or a tuple.

16 We will return to the previous slide in two minutes
We will return to the previous slide in two minutes. In the meantime, we are going to play a quick game. I am going to show you some classification problems which were shown to pigeons! Let us see if you are as smart as a pigeon!

17 Pigeon Problem 1
[Figure: examples of class A and examples of class B, each shown as a pair of bars]

18 Pigeon Problem 1
What class is this object? What about this one, A or B? [Figure: two unlabeled objects shown next to the class A and class B examples]

19 Pigeon Problem 1
This is a B! Here is the rule: if the left bar is smaller than the right bar, it is an A; otherwise it is a B.

20 Pigeon Problem 2
Oh! This one's hard! Even I know this one.

21 Pigeon Problem 2
The rule is as follows: if the two bars are of equal size, it is an A. Otherwise it is a B. So this one is an A.

22 Pigeon Problem 3
This one is really hard! What is this, A or B?

23 Pigeon Problem 3
It is a B! The rule is as follows: if the square of the sum of the two bars is less than or equal to 100, it is an A. Otherwise it is a B.

24 Why did we spend so much time with this stupid game?
Because we wanted to show that almost all classification problems have a geometric interpretation; check out the next 3 slides…

25 Pigeon Problem 1
[Figure: the class A and class B examples plotted as points, Left Bar vs. Right Bar] Here is the rule again: if the left bar is smaller than the right bar, it is an A; otherwise it is a B.

26 Pigeon Problem 2
[Figure: the class A and class B examples plotted as points, Left Bar vs. Right Bar] Let me look it up… here it is: the rule is, if the two bars are of equal size, it is an A. Otherwise it is a B.

27 Pigeon Problem 3
[Figure: the class A and class B examples plotted as points, Left Bar vs. Right Bar, axes from 0 to 100] The rule again: if the square of the sum of the two bars is less than or equal to 100, it is an A. Otherwise it is a B.

28 Grasshoppers and Katydids
[Figure: scatterplot of the insects, Antenna Length vs. Abdomen Length]

29 Katydids and Grasshoppers
Previously unseen instance = 11 | 5.1 | 7.0 | ??????? We can "project" the previously unseen instance into the same space as the database. We have now abstracted away the details of our particular problem. It will be much easier to talk about points in space. [Figure: the unseen instance plotted in the Antenna Length vs. Abdomen Length space]

30 Simple Linear Classifier
R. A. Fisher. If the previously unseen instance is above the line, then class is Katydid; else class is Grasshopper. [Figure: a straight line separating the Katydids from the Grasshoppers in the Antenna Length vs. Abdomen Length space]
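A minimal sketch of a linear classifier in this two-feature space (this is not Fisher's exact procedure; it simply places a straight boundary halfway between the two class means, and the training points are hypothetical values loosely based on the earlier My_Collection table):

```python
import numpy as np

# Hypothetical training data: columns are (antenna length, abdomen length).
katydids     = np.array([[9.1, 8.0], [8.5, 5.4], [6.6, 6.1]])
grasshoppers = np.array([[5.5, 2.7], [4.7, 0.9], [1.9, 2.9]])

# Classify by which class mean is closer; this corresponds to a straight-line
# (hyperplane) decision boundary between the two classes.
w = katydids.mean(axis=0) - grasshoppers.mean(axis=0)          # boundary normal
b = (katydids.mean(axis=0) + grasshoppers.mean(axis=0)) / 2.0  # midpoint

def classify(instance):
    return "Katydid" if np.dot(w, instance - b) > 0 else "Grasshopper"

print(classify(np.array([7.0, 5.1])))  # the previously unseen instance 11 -> "Katydid"
```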

31 Simple Quadratic Classifier, Simple Cubic Classifier, Simple Quartic Classifier, Simple Quintic Classifier, Simple…
If the previously unseen instance is above the line, then class is Katydid; else class is Grasshopper. [Figure: increasingly wiggly decision boundaries separating the Katydids from the Grasshoppers]

32 The simple linear classifier is defined for higher dimensional spaces…

33 … we can visualize it as a hyperplane in the n-dimensional feature space

34 It is interesting to think about what would happen in this example if we did not have the 3rd dimension…

35 We can no longer get perfect accuracy with the simple linear classifier…
We could try to solve this problem by using a simple quadratic classifier or a simple cubic classifier… However, as we will later see, this is probably a bad idea…

36 Which of the “Pigeon Problems” can be solved by the Simple Linear Classifier?
[Figure: the three pigeon problems with a linear decision boundary; the fits are labeled Perfect, Useless, and Pretty Good] Problems that can be solved by a linear classifier are called linearly separable.

37 What would happen if we created a new feature Z, where:
Revisiting Sidebar 2: What would happen if we created a new feature Z, where Z = abs(X.value − Y.value)? [Figure: the data replotted on the new feature; all blue points are perfectly aligned, so we can only see one]
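A small sketch of this feature construction (the bar heights are made-up; class A pairs are equal by the problem's rule):

```python
# Hypothetical (left_bar, right_bar) pairs for Pigeon Problem 2.
class_a = [(3.0, 3.0), (5.5, 5.5), (7.2, 7.2)]   # equal bars   -> class A
class_b = [(2.0, 6.0), (8.0, 3.0), (4.0, 9.0)]   # unequal bars -> class B

# The new feature Z = |left - right| collapses every class A point onto 0,
# so a single threshold on Z separates the classes perfectly.
z_a = [abs(l - r) for l, r in class_a]
z_b = [abs(l - r) for l, r in class_b]
print(z_a)  # [0.0, 0.0, 0.0]
print(z_b)  # [4.0, 5.0, 5.0]
```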

38 A Famous Problem
R. A. Fisher's Iris Dataset: 3 classes, 50 of each class. The task is to classify Iris plants into one of 3 varieties using the Petal Length and Petal Width. [Figure: photographs of Iris Setosa, Iris Versicolor, and Iris Virginica]

39 We can generalize the piecewise linear classifier to N classes, by fitting N-1 lines. In this case we first learned the line to (perfectly) discriminate between Setosa and Virginica/Versicolor, then we learned to approximately discriminate between Virginica and Versicolor. [Figure: petal width vs. petal length with the two fitted lines] If petal width > – (0.325 * petal length) then class = Virginica; elseif petal width…

40 We have now seen one classification algorithm, and we are about to see more. How should we compare them?
Predictive accuracy
Speed and scalability: time to construct the model; time to use the model
Robustness: handling noise, missing values and irrelevant features, streaming data
Interpretability: understanding and insight provided by the model

41 Predictive Accuracy I Hold Out Data
How do we estimate the accuracy of our classifier? We can use Hold Out data. We divide the dataset into 2 partitions, called train and test. We build our models on train, and see how well we do on test. [Figure: My_Collection split into a train partition (instances 1-5) and a test partition (instances 6-10)]
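A minimal hold-out sketch (the numbers loosely follow the My_Collection table; the values and labels missing from that table are filled in here as assumptions, purely for illustration):

```python
import random

# (abdomen_length, antennae_length, class) -- hypothetical labeled instances.
data = [
    (2.7, 5.5, "Grasshopper"), (8.0, 9.1, "Katydid"),
    (0.9, 4.7, "Grasshopper"), (1.1, 3.1, "Grasshopper"),
    (5.4, 8.5, "Katydid"),     (2.9, 1.9, "Grasshopper"),
    (6.1, 6.6, "Katydid"),     (0.5, 1.0, "Grasshopper"),
    (8.3, 6.6, "Katydid"),     (8.1, 4.7, "Katydid"),
]

random.seed(0)
random.shuffle(data)               # avoid any ordering bias
split = len(data) // 2             # here: half train, half test
train, test = data[:split], data[split:]
print(len(train), "training instances,", len(test), "test instances")
```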

42 Predictive Accuracy II
How do we estimate the accuracy of our classifier? We can use K-fold cross validation. We divide the dataset into K equal sized sections. The algorithm is tested K times, each time leaving out one of the K sections from building the classifier, but using it to test the classifier instead.
Accuracy = (Number of correct classifications) / (Number of instances in our database)
K = 5 [Figure: My_Collection divided into 5 folds]
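A small K-fold cross-validation sketch, assuming a list of labeled instances like the hypothetical `data` above and some `classify(train, features)` function that returns a predicted label (both are stand-ins, not code from the slides):

```python
def k_fold_accuracy(data, k, classify):
    """Estimate accuracy by leaving out each of the k folds in turn."""
    folds = [data[i::k] for i in range(k)]          # k roughly equal sections
    correct = 0
    for i in range(k):
        test = folds[i]
        train = [row for j, fold in enumerate(folds) if j != i for row in fold]
        for *features, label in test:
            if classify(train, features) == label:
                correct += 1
    # Accuracy = number of correct classifications / number of instances.
    return correct / len(data)
```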

43 The Default Rate How accurate can we be if we use no features?
The answer is called the Default Rate: the size of the most common class divided by the size of the full dataset. Examples: I want to predict the sex of a pregnant friend's unborn baby. The most common class is 'boy', so I will always say 'boy'; I do just a tiny bit better than random guessing. I want to predict the sex of the nurse that will give me a flu shot next week. The most common class is 'female', so I will say 'female'.
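A one-line sketch of the default rate (the labels are invented):

```python
from collections import Counter

labels = ["boy", "girl", "boy", "boy", "girl", "boy"]   # hypothetical class labels

# Default rate = size of the most common class / size of the full dataset.
default_rate = Counter(labels).most_common(1)[0][1] / len(labels)
print(default_rate)  # 0.666... (always predicting "boy")
```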

44 Predictive Accuracy III
Using K-fold cross validation is a good way to set any parameters we may need to adjust in (any) classifier. We can do K-fold cross validation for each possible setting, and choose the model with the highest accuracy. Where there is a tie, we choose the simpler model. Actually, we should probably penalize the more complex models, even if they are more accurate, since more complex models are more likely to overfit (discussed later). [Figure: three decision boundaries of increasing complexity, with accuracies of 94%, 99%, and 100%]

45 Predictive Accuracy III
Accuracy = (Number of correct classifications) / (Number of instances in our database). Accuracy is a single number; we may be better off looking at a confusion matrix, which gives us additional useful information… [Table: a 3×3 confusion matrix, rows = true label (Cat, Dog, Pig), columns = classified as (Cat, Dog, Pig)]
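A sketch of building a confusion matrix from predictions (the labels and predictions are invented):

```python
from collections import defaultdict

true_labels = ["cat", "dog", "dog", "pig", "cat", "pig"]
predictions = ["cat", "dog", "cat", "pig", "cat", "dog"]

# confusion[true][predicted] counts how often each true class
# was classified as each predicted class.
confusion = defaultdict(lambda: defaultdict(int))
for t, p in zip(true_labels, predictions):
    confusion[t][p] += 1

for t in sorted(confusion):
    print(t, dict(confusion[t]))
# cat {'cat': 2}
# dog {'dog': 1, 'cat': 1}
# pig {'pig': 1, 'dog': 1}
```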

46 Speed and Scalability I
We need to consider the time and space requirements for the two distinct phases of classification. Time to construct the classifier: in the case of the simple linear classifier, the time taken to fit the line; this is linear in the number of instances. Time to use the model: in the case of the simple linear classifier, the time taken to test which side of the line the unlabeled instance is on; this can be done in constant time. As we shall see, some classification algorithms are very efficient in one aspect, and very poor in the other.

47 Robustness I We need to consider what happens when we have:
Noise: For example, a person's age could have been mistyped as 650 instead of 65; how does this affect our classifier? (This is important only for building the classifier; if the instance to be classified is noisy we can do nothing.) Missing values: For example, suppose we want to classify an insect, but we only know the abdomen length (X-axis), and not the antennae length (Y-axis); can we still classify the instance? [Figure: the insect scatterplot with a noisy point, and with an instance whose Y-value is unknown]

48 Robustness II We need to consider what happens when we have:
Irrelevant features: For example, suppose we want to classify people as either Suitable_Grad_Student or Unsuitable_Grad_Student, and it happens that scoring more than 5 on a particular test is a perfect indicator for this problem… If we also use "hair_length" as a feature, how will this affect our classifier? [Figure: the same data plotted with and without the irrelevant feature]

49 Robustness III We need to consider what happens when we have:
Streaming data: For many real world problems, we don't have a single fixed dataset. Instead, the data continuously arrives, potentially forever… (stock market, weather data, sensor data, etc.) Can our classifier handle streaming data?

50 Interpretability Some classifiers offer a bonus feature: the structure of the learned classifier tells us something about the domain. As a trivial example, if we try to classify people's health risks based on just their height and weight, we could gain the following insight (based on the observation that a single linear classifier does not work well, but two linear classifiers do): there are two ways to be unhealthy, being obese and being too skinny. [Figure: weight vs. height with two linear decision boundaries]

51 Nearest Neighbor Classifier
Evelyn Fix, Joe Hodges. If the nearest instance to the previously unseen instance is a Katydid, then class is Katydid; else class is Grasshopper. [Figure: the insect scatterplot, Antenna Length vs. Abdomen Length]
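A minimal 1-nearest-neighbor sketch using Euclidean distance (the generic algorithm on hypothetical insect data, not code from the slides):

```python
import math

def one_nn(train, instance):
    """Return the class of the single closest training instance."""
    def dist(row):
        return math.dist(row[:-1], instance)   # Euclidean distance on the features
    return min(train, key=dist)[-1]            # last element is the class label

train = [(2.7, 5.5, "Grasshopper"), (8.0, 9.1, "Katydid"),
         (0.9, 4.7, "Grasshopper"), (5.4, 8.5, "Katydid")]
print(one_nn(train, (5.1, 7.0)))  # "Katydid"
```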

52 We can visualize the nearest neighbor algorithm in terms of a decision surface…
Note the we don’t actually have to construct these surfaces, they are simply the implicit boundaries that divide the space into regions “belonging” to each instance. This division of space is called Dirichlet Tessellation (or Voronoi diagram, or Theissen regions).

53 The nearest neighbor algorithm is sensitive to outliers…
The solution is to…

54 We can generalize the nearest neighbor algorithm to the K-nearest neighbor (KNN) algorithm.
We measure the distance to the nearest K instances, and let them vote. K is typically chosen to be an odd number. [Figure: the same query point classified with K = 1 and with K = 3]
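A sketch extending the 1-NN idea to K neighbors with a majority vote (again on hypothetical data):

```python
import math
from collections import Counter

def knn(train, instance, k=3):
    """Classify by majority vote among the k closest training instances."""
    neighbors = sorted(train, key=lambda row: math.dist(row[:-1], instance))[:k]
    votes = Counter(label for *_, label in neighbors)
    return votes.most_common(1)[0][0]

train = [(2.7, 5.5, "Grasshopper"), (8.0, 9.1, "Katydid"),
         (0.9, 4.7, "Grasshopper"), (5.4, 8.5, "Katydid"),
         (6.1, 6.6, "Katydid")]
print(knn(train, (5.1, 7.0), k=3))  # "Katydid" (2 of the 3 nearest neighbors are Katydids)
```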

55 The nearest neighbor algorithm is sensitive to irrelevant features…
Suppose the following is true: if an insect's antenna is longer than 5.5 it is a Katydid, otherwise it is a Grasshopper. Using just the antenna length we get perfect classification! Suppose however, we add in an irrelevant feature, for example the insect's mass. Using both the antenna length and the insect's mass with the 1-NN algorithm we get the wrong classification! [Figure: the training data plotted using antenna length alone, and using antenna length together with mass]

56 How do we mitigate the nearest neighbor algorithm's sensitivity to irrelevant features?
Use more training instances
Ask an expert what features are relevant to the task
Use statistical tests to try to determine which features are useful
Search over feature subsets (in the next slide we will see why this is hard)

57 Why searching over feature subsets is hard
Suppose you have the following classification problem, with 100 features, where it happens that Features 1 and 2 (the X and Y below) give perfect classification, but all 98 of the other features are irrelevant… Using all 100 features will give poor results, but so will using only Feature 1, and so will using only Feature 2! Of the 2^100 − 1 possible subsets of the features, only one really works. [Figure: the data projected onto Feature 1 only, onto Feature 2 only, and onto both]

58 [Figure: the lattice of all subsets of the features {1, 2, 3, 4}, from single features up to {1, 2, 3, 4}] Forward Selection, Backward Elimination, Bi-directional Search
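A greedy forward-selection sketch, assuming some `score(feature_subset)` function such as cross-validated accuracy (everything here is an illustrative stand-in, not the slides' code):

```python
def forward_selection(all_features, score):
    """Greedily add the single feature that most improves the score."""
    selected, best_score = [], float("-inf")
    improved = True
    while improved:
        improved = False
        for f in all_features:
            if f in selected:
                continue
            s = score(selected + [f])
            if s > best_score:                       # found a better subset
                best_score, best_feature, improved = s, f, True
        if improved:
            selected.append(best_feature)
    return selected, best_score
```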

59 The nearest neighbor algorithm is sensitive to the units of measurement
X axis measured in centimeters, Y axis measured in dollars: the nearest neighbor to the pink unknown instance is red. X axis measured in millimeters, Y axis measured in dollars: the nearest neighbor to the pink unknown instance is blue. One solution is to normalize the units to pure numbers. Typically the features are Z-normalized to have a mean of zero and a standard deviation of one: X' = (X − mean(X)) / std(X)
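A small Z-normalization sketch with numpy (the values are invented):

```python
import numpy as np

x = np.array([120.0, 150.0, 180.0, 90.0, 160.0])   # e.g. lengths in millimeters

# Z-normalization: subtract the mean and divide by the standard deviation,
# so the feature has mean 0 and standard deviation 1 regardless of its units.
z = (x - x.mean()) / x.std()
print(np.isclose(z.mean(), 0), np.isclose(z.std(), 1))  # True True
```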

60 We can speed up the nearest neighbor algorithm by "throwing away" some data. This is called data editing.
Note that this can sometimes improve accuracy! One possible approach: delete all instances that are surrounded by members of their own class. We can also speed up classification with indexing.

61 Up to now we have assumed that the nearest neighbor algorithm uses the Euclidean distance; however, this need not be the case… [Figure: unit "circles" for different distance functions: Max (p = inf), Manhattan (p = 1), Weighted Euclidean, Mahalanobis]

62 …In fact, we can use the nearest neighbor algorithm with any distance/similarity function
For example, is "Faloutsos" Greek or Irish? We could compare the name "Faloutsos" to a database of names using string edit distance…
edit_distance(Faloutsos, Keogh) = 8
edit_distance(Faloutsos, Gunopulos) = 6
Hopefully, the similarity of the name (particularly the suffix) to other Greek names would mean the nearest neighbor is also a Greek name.
ID | Name | Class
1 | Gunopulos | Greek
2 | Papadopoulos | Greek
3 | Kollios | Greek
4 | Dardanos | Greek
5 | Keogh | Irish
6 | Gough | Irish
7 | Greenhaugh | Irish
8 | Hadleigh | Irish
Specialized distance measures exist for DNA strings, time series, images, graphs, videos, sets, fingerprints, etc…

63 Edit Distance Example
How similar are the names "Peter" and "Piotr"? Assume the following cost function: Substitution 1 unit, Insertion 1 unit, Deletion 1 unit. Then D(Peter, Piotr) is 3. It is possible to transform any string Q into string C, using only Substitution, Insertion and Deletion. Assume that each of these operators has a cost associated with it. The similarity between two strings can be defined as the cost of the cheapest transformation from Q to C. (Note that for now we have ignored the issue of how we can find this cheapest transformation.)
Peter → Piter (substitution, i for e) → Pioter (insertion, o) → Piotr (deletion, e)
[Figure: variants of the name: Piotr, Pyotr, Petros, Pietro, Pedro, Pierre, Piero, Peter]
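A standard dynamic-programming sketch of this edit distance (unit costs; this is the generic Levenshtein algorithm, not code from the slides):

```python
def edit_distance(q: str, c: str) -> int:
    """Cheapest transformation of q into c using substitution, insertion, deletion (cost 1 each)."""
    m, n = len(q), len(c)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                               # delete everything
    for j in range(n + 1):
        d[0][j] = j                               # insert everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if q[i - 1] == c[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution (or match)
    return d[m][n]

print(edit_distance("Peter", "Piotr"))  # 3
```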

64 Setting Parameters and Overfitting
You need to classify widgets; you get a training set. You could use a Linear Classifier or Nearest Neighbor…
Nearest Neighbor: You could use 1NN, 3NN, 5NN… You could use Euclidean distance, LP1, LPinf, Mahalanobis… You could do some data editing… You could do some feature weighting… You could…
"Linear Classifier": You could use a Constant classifier. You could use a Linear classifier. You could use a Quadratic classifier. You could…
Model Selection, Parameter Selection (or parameter tuning, tweaking)

65 Setting parameters and overfitting

66 Overfitting Overfitting occurs when a statistical model describes random error or noise instead of the underlying relationship. Overfitting generally occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model which has been overfit will generally have poor predictive performance, as it can exaggerate minor fluctuations in the data.

67 Suppose we need to solve a classification problem
We are not sure if we should use the simple linear classifier or the simple quadratic classifier. How do we decide which to use? We do cross validation or leave-one-out and choose the best one.

68 Simple linear classifier gets 81% accuracy
Simple quadratic classifier gets 99% accuracy. [Figure: the two decision boundaries on the training data]

69 Simple linear classifier gets 96% accuracy
Simple quadratic classifier 97% accuracy

70 This problem is greatly exacerbated by having too little data
Simple linear classifier gets 90% accuracy Simple quadratic classifier 95% accuracy

71 What happens as we have more and more training examples?
The accuracy for all models goes up! The chance of making a mistake (choosing the wrong model) goes down. Even if we make a mistake, it will not matter too much (because we would learn a degenerate quadratic that is basically a straight line).
Simple linear 70% accuracy, Simple quadratic 90% accuracy
Simple linear 90% accuracy, Simple quadratic 95% accuracy
Simple linear …% accuracy, Simple quadratic …% accuracy

72 One Solution: Charge Penalty for complex models
For example, for the simple {polynomial} classifier, we could "charge" 1% for every increase in the degree of the polynomial:
Simple linear classifier gets 90.5% accuracy, minus 0, equals 90.5%
Simple quadratic classifier gets 97.0% accuracy, minus 1, equals 96.0%
Simple cubic classifier gets 97.05% accuracy, minus 2, equals 95.05%
[Figure: the three decision boundaries, with raw accuracies of 90.5%, 97.0%, and 97.05%]

73 One Solution: Charge Penalty for complex models
For example, for the simple {polynomial} classifier, we could charge 1% for every increase in the degree of the polynomial. There are more principled ways to charge penalties; in particular, there is a technique called Minimum Description Length (MDL).

74 Appendix

75 Types of Attributes There are different types of attributes
Nominal (includes Boolean). Examples: ID numbers, eye color, zip codes, sex
Ordinal. Examples: rankings (e.g., taste of potato chips on a scale from 1-10), grades, height in {tall, medium, short}
Interval. Examples: calendar dates, temperatures in Celsius or Fahrenheit
Ratio. Examples: temperature in Kelvin, length, time, counts

76 Properties of Attribute Values
The type of an attribute depends on which of the following properties it possesses:
Distinctness: =, ≠
Order: <, >
Addition: +, −
Multiplication: *, /
Nominal attribute: distinctness. Ordinal attribute: distinctness & order. Interval attribute: distinctness, order & addition. Ratio attribute: all 4 properties.

77 Properties of Attribute Values
Nominal attribute: distinctness. Using the key Atheist: 1, Jewish: 2, Buddhist: 3, we can say Jewish = Jewish and Catholic ≠ Muslim. We cannot say Jewish < Buddhist (even though 2 < 3), nor (Jewish + Muslim)/2, nor Sqrt(Atheist) (even though Sqrt(1) is allowed).
[Table: Name, Religion (coded by the key), Ad — e.g. Joe, 1, 12; Sue, 2, 61; Bob, 3, 65; …]

78 Properties of Attribute Values
Ordinal attribute: distinctness & order. Using the ordered values {newborn, infant, toddler, child, teen, adult} (key: newborn: 1, infant: 2, toddler: 3, etc.), we can say infant = infant and newborn < toddler. We cannot say newborn + child, nor infant / newborn, nor log(child).
[Table: Name, lifestage (coded by the key), Ad — e.g. Joe, 1, 12; Sue, 2, 61; Cat, 4, 34]

79 Properties of Attribute Values
There are a handful of tricky cases… Ordinal attribute: distinctness & order. If we have {Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday}, then we can clearly say Sunday = Sunday and Sunday != Tuesday. But can we say Sunday < Tuesday? A similar problem occurs with the degree of an angle…

80 Properties of Attribute Values
Interval attribute: distinctness, order & addition. Suppose it is 10 degrees Celsius. We can say it is not 11 degrees Celsius (10 ≠ 11). We can say it is colder than 15 degrees Celsius (10 < 15). We can say closing a window will make it two degrees hotter (NewTemp = 10 + 2). We cannot say that it is twice as hot as 5 degrees Celsius (the ratio 10 / 5 is not meaningful on an interval scale).

81 Properties of Attribute Values
The type of an attribute depends on which of the following properties it possesses. Ratio attribute: all 4 properties. We can do anything! So 10 Kelvin really is twice as hot as 5 Kelvin. Of course, distinctness is tricky to define with real numbers: when do we say two measured real values are equal?

82 Attribute Type | Description | Examples | Operations
Nominal | The values of a nominal attribute are just different names, i.e., nominal attributes provide only enough information to distinguish one object from another. (=, ≠) | zip codes, employee ID numbers, eye color, sex: {male, female} | mode, entropy, contingency correlation, χ² test
Ordinal | The values of an ordinal attribute provide enough information to order objects. (<, >) | hardness of minerals, {good, better, best}, grades, street numbers | median, percentiles, rank correlation, run tests, sign tests
Interval | For interval attributes, the differences between values are meaningful, i.e., a unit of measurement exists. (+, −) | calendar dates, temperature in Celsius or Fahrenheit | mean, standard deviation, Pearson's correlation, t and F tests
Ratio | For ratio variables, both differences and ratios are meaningful. (*, /) | temperature in Kelvin, monetary quantities, counts, age, mass, length, electrical current | geometric mean, harmonic mean, percent variation

83 Attribute Level | Transformation | Comments
Nominal | Any permutation of values | If all employee ID numbers were reassigned, would it make any difference?
Ordinal | An order preserving change of values, i.e., new_value = f(old_value) where f is a monotonic function. | An attribute encompassing the notion of good, better, best can be represented equally well by the values {1, 2, 3} or by {0.5, 1, 10}, or by {A, B, C}
Interval | new_value = a * old_value + b, where a and b are constants | Thus, the Fahrenheit and Celsius temperature scales differ in terms of where their zero value is and the size of a unit (degree).
Ratio | new_value = a * old_value | Length can be measured in meters or feet.

84 Discrete and Continuous Attributes
Discrete Attribute: Has only a finite or countably infinite set of values. Examples: zip codes, counts, or the set of words in a collection of documents. Often represented as integer variables. Note: binary attributes are a special case of discrete attributes.
Continuous Attribute: Has real numbers as attribute values. Examples: temperature, height, or weight. As a practical matter, real values can only be measured and represented using a finite number of digits. Continuous attributes are typically represented as floating-point variables.

85 Discrete and Continuous Attributes
We can convert between continuous and discrete variables. For example, below we have converted real-valued heights to the ordinal set {short, medium, tall}. Conversions of discrete to continuous are less common, but possible. Why convert? Sometimes the algorithms we want to use are only defined for a certain type of data. For example, hashing or Bloom filters are best defined for discrete data. Conversion may involve making choices, for example, how many "heights", and where do we place the cutoffs (equal width, equal bin counts, etc.)? These choices may affect the performance of the algorithms. [Figure: heights such as 5'1'', 5'3'', 5'7'', 6'3'' mapped to the codes 1, 2, 3]
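A small discretization sketch (the cutoffs are arbitrary assumptions, purely for illustration):

```python
def discretize_height(height_m: float) -> str:
    """Map a real-valued height to an ordinal label using fixed cutoffs."""
    if height_m < 1.60:        # hypothetical cutoff
        return "short"
    elif height_m < 1.80:      # hypothetical cutoff
        return "medium"
    return "tall"

heights = [1.47, 1.90, 1.80, 1.62]
print([discretize_height(h) for h in heights])  # ['short', 'tall', 'tall', 'medium']
```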

86 Discrete and Continuous Attributes
We can convert between discrete and continuous variables. For example, below we have converted discrete words to a real-valued time series. There are 783,137 words in the King James Bible; there are 12,143 unique words in the King James Bible. The figure plots the local frequency of "God" in the King James Bible, annotated with the location of the first word of each chapter (Genesis starts at 1964 because there is a short preamble). [Figure: local frequency of "God" across the text, with the book boundaries from Genesis through Revelation marked]

87 Even if the data is given to you as continuous, it might really be intrinsically discrete
partID | size | Ad
12323 | 7.801 | 12
5324 | 7.802 | 61
75654 | 32.09 | 34
34523 | 32.13 | 65
424 | 47.94 | 54
25324 | 62.07 | 44

88 Even if the data is given to you as continuous, it might really be intrinsically discrete
Bing Hu, Thanawin Rakthanmanon, Yuan Hao, Scott Evans, Stefano Lonardi, and Eamonn Keogh (2011). Discovering the Intrinsic Cardinality and Dimensionality of Time Series using MDL. ICDM 2011

89 Data can be Missing Data can be missing for many reasons.
Someone may decline to state; the attribute may be the result of an expensive test; sensor failure; etc. Handling missing values: eliminate data objects; estimate missing values; ignore the missing value during analysis; replace with all possible values (weighted by their probabilities).

90 Data can be “Missing”: Special Case
In some cases we expect most of the data to be missing. Consider a dataset containing people's rankings of movies (or books, or music, etc.). The dimensionality is very high (there are lots of movies), but most people have only seen a tiny fraction of them, so the movie ranking database will be very sparse. Some platforms/languages explicitly support sparse matrices (including Matlab). Here, inferring a missing value is equivalent to asking a question like "How much would Joe like the movie MASH?" See "Collaborative filtering" / "Recommender Systems". [Table: a sparse person-by-movie ratings matrix (Jaws, E.T., MASH, Ted, Argo, Brave, OZ, Bait vs. Joe, Van, Sue, May, June), with only a handful of cells filled in]
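A sketch of storing such a sparse ratings matrix with scipy (the people, movies and ratings are invented):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Rows = people, columns = movies; 0 means "not rated" (i.e. missing).
dense = np.array([
    [4, 0, 0, 1, 0],
    [0, 3, 0, 0, 2],
    [0, 0, 0, 0, 0],
])
ratings = csr_matrix(dense)   # only the non-zero entries are stored
print(ratings.nnz, "ratings stored out of", dense.size, "cells")  # 4 ratings out of 15 cells
```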

91 Document Data is also Sparse
Each document is a 'term' vector (vector-space representation): each term is a component (attribute) of the vector, and the value of each component is the number of times the corresponding term occurs in the document. [Table: a document-term matrix over terms such as "the", "harry", "rebel", "god", "cat", "dog", "help", "near" for Doc1-Doc5, with most counts zero]

92 Graph Data is also Typically Sparse
The elements of the matrix indicate whether pairs of vertices are connected or not in the graph.

93 Not all datasets naturally fit neatly into a rectangular matrix…
We may have to deal with such data as special cases.
DNA Data. First 100 base pairs of the chimp's mitochondrial DNA: gtttatgtagcttaccccctcaaagcaatacactgaaaatgtttcgacgggtttacatcaccccataaacaaacaggtttggtcctagcctttctattag First 100 base pairs of the human's mitochondrial DNA: gatcacaggtctatcaccctattaaccactcacgggagctctccatgcatttggtattttcgtctggggggtgtgcacgcgatagcattgcgagacgctg
Other examples: Transaction Data, Spatio-Temporal Data.

94 Data Quality What kinds of data quality problems?
How can we detect problems with the data? What can we do about these problems? Examples of data quality problems: redundancy; noise and outliers; missing values; duplicate data.

95 Redundancy
Various subsets of the features are often related or correlated in some way; they are partly redundant with each other. For problems in signal processing, this redundancy is typically helpful. But for data mining, redundancy almost always hurts.
 | Height F/I | Height Meters | Weight
1 | 4'10'' | 1.47 | 166
2 | 6'3'' | 1.90 | 210
3 | 5'11'' | 1.80 | 215
4 | 5'4'' | 1.62 | 150

96 Why Redundancy Hurts
Some data mining algorithms scale poorly in dimensionality, say O(2^D). For the problem below, this means we take O(2^3) time, when we really only needed O(2^2) time. We can see some data mining algorithms as counting evidence across a row (Nearest Neighbor Classifier, Bayes Classifier, etc.). If we have redundant features, we will "overcount" evidence. It is probable that the redundant features will add errors. For example, suppose that person 1 really is exactly 4'10''. Then they are exactly 1.4732 m, but the system recorded them as 1.47 m, so we have introduced 0.0032 m of error. This is a tiny amount, but if we had 100s of such attributes, we would be introducing a lot of error. The curse of dimensionality (discussed later in the quarter). As we will see in the course, we can try to fix this issue with data aggregation, dimensionality reduction techniques, feature selection, feature generation, etc. [Table: the same Height F/I, Height Meters, Weight table as the previous slide]

97 Detecting Redundancy By creating a scatterplot of "Height F/I" vs. "Height Meters" we can see the redundancy, and measure it with correlation. However, if we have 100 features, we clearly cannot visually check 100^2 scatterplots. Note that two features can have zero correlation, but still be related/redundant. There are more sophisticated tests of "relatedness". [Table: the Height F/I, Height Meters, Weight table]
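A sketch of measuring this redundancy with correlation (numpy; the height-in-inches values are converted from the table's feet/inches entries):

```python
import numpy as np

height_m      = np.array([1.47, 1.90, 1.80, 1.62])
height_inches = np.array([58, 75, 71, 64])          # 4'10'', 6'3'', 5'11'', 5'4''
weight        = np.array([166, 210, 215, 150])

# Pearson correlation near 1.0 means the two features carry redundant information.
print(np.corrcoef(height_m, height_inches)[0, 1])   # ~1.0 (redundant)
print(np.corrcoef(height_m, weight)[0, 1])          # high, but lower (~0.8)
```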

98 Detecting Redundancy
 | Height F/I | Height Meters | Weight
1 | 4'10'' | 1.47 | 166
2 | 6'3'' | 1.90 | 210
3 | 5'11'' | 1.80 | 215
4 | 5'4'' | 1.62 | 150

99 Noise
Noise refers to modification of original values. Examples: distortion of a person's voice when talking on a poor quality phone. The two images are one man's ECGs, taken about an hour apart; the differences are mostly due to sensor noise. (MIT-BIH Atrial Fibrillation Database record afdb/08405)

