Instance-Based Classifiers
- Store the training records
- Use the training records directly to predict the class label of unseen cases
Instance-Based Classifiers
Examples:
- Rote-learner: memorizes the entire training data and classifies a record only if its attributes exactly match one of the training examples
- Nearest neighbor: uses the k "closest" points (nearest neighbors) to perform classification
Nearest Neighbor Classifiers
Basic idea: if it walks like a duck and quacks like a duck, then it's probably a duck.
(Figure: given a set of training records and a test record, compute the distances and choose the k "nearest" records.)
Nearest-Neighbor Classifiers
Requires three things:
- The set of stored records
- A distance metric to compute the distance between records
- The value of k, the number of nearest neighbors to retrieve
To classify an unknown record (a sketch follows):
- Compute its distance to the training records
- Identify the k nearest neighbors
- Use the class labels of the nearest neighbors to determine the class label of the unknown record (e.g., by taking a majority vote)
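A minimal sketch of this procedure in Python; the toy records and labels are invented for illustration:

```python
import math
from collections import Counter

def euclidean(a, b):
    # straight-line distance between two numeric records
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(train_X, train_y, query, k=3):
    # 1. compute the distance from the query to every stored record
    dists = sorted((euclidean(x, query), y) for x, y in zip(train_X, train_y))
    # 2. identify the k nearest neighbors
    neighbors = [label for _, label in dists[:k]]
    # 3. majority vote among their class labels
    return Counter(neighbors).most_common(1)[0][0]

# toy example (made-up data)
X = [(1.0, 1.0), (1.2, 0.8), (6.0, 6.0), (5.8, 6.2)]
y = ["duck", "duck", "goose", "goose"]
print(knn_classify(X, y, (1.1, 0.9), k=3))  # -> "duck"
```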
Definition of Nearest Neighbor
The k-nearest neighbors of a record x are the data points that have the k smallest distances to x.
Nearest Neighbor Classification
- Compute the distance between two points, e.g., Euclidean distance:
  $d(p, q) = \sqrt{\sum_i (p_i - q_i)^2}$
- Determine the class from the nearest-neighbor list:
  - take the majority vote of class labels among the k nearest neighbors
  - optionally weigh each vote according to distance, using weight factor $w = 1/d^2$ (see the sketch below)
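The distance-weighted vote can be sketched as follows, reusing the euclidean helper from the earlier snippet (the eps guard is my addition, to handle an exact match with distance zero):

```python
from collections import defaultdict

def knn_weighted(train_X, train_y, query, k=3, eps=1e-12):
    # nearest neighbors first, as before
    dists = sorted((euclidean(x, query), y) for x, y in zip(train_X, train_y))
    votes = defaultdict(float)
    for d, label in dists[:k]:
        votes[label] += 1.0 / (d * d + eps)  # w = 1/d^2 from the slide
    return max(votes, key=votes.get)
```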
Nearest Neighbor Classification…
Choosing the value of k:
- If k is too small, the classifier is sensitive to noise points
- If k is too large, the neighborhood may include points from other classes
Nearest Neighbor Classification…
Scaling issues:
- Attributes may have to be scaled to prevent distance measures from being dominated by one of the attributes
- Example (see the sketch below):
  - height of a person may vary from 1.5 m to 1.8 m
  - weight of a person may vary from 90 lb to 300 lb
  - income of a person may vary from $10K to $1M
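One common fix is min-max scaling of each attribute to [0, 1] before computing distances; a minimal sketch, using the attribute ranges from the slide as toy data:

```python
def min_max_scale(rows):
    # scale each column to [0, 1] so no single attribute dominates the distance
    cols = list(zip(*rows))
    los = [min(c) for c in cols]
    his = [max(c) for c in cols]
    return [
        tuple((v - lo) / (hi - lo) if hi > lo else 0.0
              for v, lo, hi in zip(row, los, his))
        for row in rows
    ]

# height (m), weight (lb), income ($): unscaled, income dominates the distance
records = [(1.5, 90, 10_000), (1.8, 300, 1_000_000), (1.7, 150, 50_000)]
print(min_max_scale(records))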
Nearest Neighbor Classification…
Problems with the Euclidean measure:
- High-dimensional data: curse of dimensionality
- Can produce counter-intuitive results, e.g., two pairs of binary vectors that differ in the same number of positions get the same distance, even when one pair shares far more attributes than the other
- Solution: normalize the vectors to unit length
Nearest Neighbor Classification…
- k-NN classifiers are lazy learners: they do not build a model explicitly
- Unlike eager learners such as decision tree induction and rule-based systems
- As a result, classifying unknown records is relatively expensive
Example: PEBLS
- PEBLS: Parallel Exemplar-Based Learning System (Cost & Salzberg)
- Works with both continuous and nominal features
- For nominal features, the distance between two nominal values is computed using the modified value difference metric (MVDM); see the formula below
- Each record is assigned a weight factor
- Number of nearest neighbors: k = 1
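For reference, MVDM is usually stated as follows (the formula does not survive in the extracted slide, so this is the standard form): the distance between two nominal values $V_1$ and $V_2$ is

$$ d(V_1, V_2) = \sum_{i} \left| \frac{n_{1i}}{n_1} - \frac{n_{2i}}{n_2} \right| $$

where $n_{1i}$ is the number of records with value $V_1$ that belong to class $i$, and $n_1$ is the total number of records with value $V_1$. Two values are close when they induce similar class distributions.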
Example of Bayes Theorem
Given:
- A doctor knows that meningitis causes stiff neck 50% of the time
- The prior probability of any patient having meningitis is 1/50,000
- The prior probability of any patient having a stiff neck is 1/20
If a patient has a stiff neck, what is the probability he/she has meningitis? (Worked out below.)
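Applying Bayes' theorem to the numbers given:

$$ P(M \mid S) = \frac{P(S \mid M)\, P(M)}{P(S)} = \frac{0.5 \times 1/50000}{1/20} = 0.0002 $$

So even given a stiff neck, the probability of meningitis is only 0.02%, because the prior P(M) is so small.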
Bayesian Classifiers
- Consider each attribute and the class label as random variables
- Given a record with attributes (A1, A2, …, An), the goal is to predict class C
- Specifically, we want to find the value of C that maximizes the posterior probability P(C | A1, A2, …, An)
- Can we estimate P(C | A1, A2, …, An) directly from data?
Bayesian Classifiers
Example: P(C | A1, A2, A3)
- P(No | Refund=Yes, Status=Single, Income=120K)
- P(Yes | Refund=Yes, Status=Single, Income=120K)
Estimate these two posterior probabilities and compare them for classification.
Bayesian Classifiers
Approach:
- Compute the posterior probability P(C | A1, A2, …, An) for all values of C using Bayes' theorem
- Choose the value of C that maximizes P(C | A1, A2, …, An)
- This is equivalent to choosing the value of C that maximizes P(A1, A2, …, An | C) P(C), since the denominator does not depend on C (see below)
- How do we estimate P(A1, A2, …, An | C)?
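The equivalence follows directly from Bayes' theorem:

$$ P(C \mid A_1, \dots, A_n) = \frac{P(A_1, \dots, A_n \mid C)\, P(C)}{P(A_1, \dots, A_n)} $$

The denominator $P(A_1, \dots, A_n)$ is the same for every class, so maximizing the posterior over C is the same as maximizing the numerator $P(A_1, \dots, A_n \mid C)\, P(C)$.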
Naïve Bayes Classifier
- Assume independence among the attributes Ai when the class is given:
  P(A1, A2, …, An | Cj) = P(A1 | Cj) P(A2 | Cj) ⋯ P(An | Cj)
- We can estimate P(Ai | Cj) for all Ai and Cj
- A new point is classified as Cj if P(Cj) ∏i P(Ai | Cj) is maximal
Example: classes C1 = Yes, C2 = No. Predict the class label of (Refund=Yes, Status=Single, Income=120K) by comparing:
- P(Yes) P(Refund=Yes | Yes) P(Status=Single | Yes) P(Income=120K | Yes)
- P(No) P(Refund=Yes | No) P(Status=Single | No) P(Income=120K | No)
How to Estimate Probabilities from Data?
- Class prior: P(C) = Nc / N, where Nc is the number of training records in class C and N is the total number of records
  e.g., P(No) = 7/10, P(Yes) = 3/10
- For discrete attributes: P(Ai | Ck) = |Aik| / Nck, where |Aik| is the number of instances having attribute value Ai that belong to class Ck, and Nck is the number of instances in class Ck
- Examples (see the sketch below): P(Status=Married | No) = 4/7, P(Refund=Yes | Yes) = 0
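These counts are easy to sketch in Python. The ten-record table below is an assumption: it is the standard loan example (Refund, Marital Status, class) that reproduces the fractions quoted on the slide:

```python
from collections import Counter

# Reconstructed training table (an assumption): (Refund, Marital Status, Class)
data = [
    ("Yes", "Single",   "No"),  ("No",  "Married", "No"),
    ("No",  "Single",   "No"),  ("Yes", "Married", "No"),
    ("No",  "Divorced", "Yes"), ("No",  "Married", "No"),
    ("Yes", "Divorced", "No"),  ("No",  "Single",  "Yes"),
    ("No",  "Married",  "No"),  ("No",  "Single",  "Yes"),
]

N = len(data)
class_counts = Counter(row[-1] for row in data)

def prior(c):
    # P(C) = Nc / N
    return class_counts[c] / N

def cond(attr_idx, value, c):
    # P(Ai | Ck) = |Aik| / Nck
    matches = sum(1 for row in data if row[attr_idx] == value and row[-1] == c)
    return matches / class_counts[c]

print(prior("No"))               # 0.7
print(cond(1, "Married", "No"))  # 4/7 ~= 0.571
print(cond(0, "Yes", "Yes"))     # 0.0
```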
How to Estimate Probabilities from Data?
For continuous attributes:
- Discretize the range into bins
  - one ordinal attribute per bin
  - violates the independence assumption
- Two-way split: (A < v) or (A > v)
  - choose only one of the two splits as the new attribute
- Probability density estimation:
  - assume the attribute follows a normal distribution
  - use the data to estimate the parameters of the distribution (e.g., mean and standard deviation)
  - once the distribution is known, use it to estimate the conditional probability P(Ai | c)
How to Estimate Probabilities from Data?
Normal distribution, one for each (Ai, cj) pair:

$$ P(A_i \mid c_j) = \frac{1}{\sqrt{2\pi\sigma_{ij}^2}} \, \exp\!\left(-\frac{(A_i - \mu_{ij})^2}{2\sigma_{ij}^2}\right) $$

For (Income, Class=No): restricting to the Class=No records, the sample mean is 110 and the sample variance is 2975.
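Plugging Income = 120 into the fitted density is straight arithmetic from the slide's mean and variance:

$$ P(\text{Income}=120 \mid \text{No}) = \frac{1}{\sqrt{2\pi (2975)}} \, \exp\!\left(-\frac{(120 - 110)^2}{2(2975)}\right) \approx 0.0072 $$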
Naïve Bayes Classifier
If one of the conditional probabilities is zero, the entire product becomes zero. Standard corrections to the probability estimate (a code sketch follows):

- Original: $P(A_i \mid C) = N_{ic} / N_c$
- Laplace: $P(A_i \mid C) = (N_{ic} + 1) / (N_c + c)$
- m-estimate: $P(A_i \mid C) = (N_{ic} + m\,p) / (N_c + m)$

where c is the number of classes, p is a prior probability estimate, and m is a parameter (the equivalent sample size).
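Continuing the counting sketch from earlier (same assumed data and class_counts), a Laplace-corrected estimate might look like this:

```python
def cond_laplace(attr_idx, value, c, num_classes=2):
    # Laplace correction from the slide: (Nic + 1) / (Nc + c), where c is the
    # number of classes (some texts use the number of distinct attribute
    # values instead)
    matches = sum(1 for row in data if row[attr_idx] == value and row[-1] == c)
    return (matches + 1) / (class_counts[c] + num_classes)

print(cond_laplace(0, "Yes", "Yes"))  # (0 + 1) / (3 + 2) = 0.2, no longer zero
```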
Example of Naïve Bayes Classifier
Given a test record with attributes A, and classes M (mammals) and N (non-mammals): compute and compare P(A | M) P(M) and P(A | N) P(N). Here P(A | M) P(M) > P(A | N) P(N), so the record is classified as a mammal.
Naïve Bayes (Summary)
- Robust to isolated noise points
- Handles missing values by ignoring the instance during probability estimate calculations
- Robust to irrelevant attributes
- The independence assumption may not hold for some attributes; in that case, use other techniques such as Bayesian Belief Networks (BBN)
Support Vector Machines
Find a linear hyperplane (decision boundary) that will separate the data.
Support Vector Machines
- We want to maximize the margin: $\text{Margin} = \frac{2}{\|\mathbf{w}\|}$
- Which is equivalent to minimizing: $L(\mathbf{w}) = \frac{\|\mathbf{w}\|^2}{2}$
- Subject to the constraints: $y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1$ for every training example $(\mathbf{x}_i, y_i)$
- This is a constrained optimization problem; it is solved with numerical approaches (e.g., quadratic programming)
Support Vector Machines
What if the problem is not linearly separable?
Support Vector Machines
What if the problem is not linearly separable?
- Introduce slack variables $\xi_i \ge 0$
- Minimize: $L(\mathbf{w}) = \frac{\|\mathbf{w}\|^2}{2} + C \sum_i \xi_i$
- Subject to: $y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1 - \xi_i$
Nonlinear Support Vector Machines
What if the decision boundary is not linear?
Nonlinear Support Vector Machines
Transform the data into a higher-dimensional space, where the classes may become linearly separable.
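A sketch of both ideas, assuming scikit-learn is available; the RBF kernel performs the higher-dimensional transformation implicitly:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# concentric circles: not linearly separable in the original 2-D space
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear", C=1.0).fit(X, y)  # soft-margin linear SVM
rbf = SVC(kernel="rbf", C=1.0).fit(X, y)        # implicit nonlinear mapping

print("linear accuracy:", linear.score(X, y))   # poor: no separating hyperplane
print("rbf accuracy:", rbf.score(X, y))         # near 1.0 after the mapping
```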
How to Construct an ROC Curve
- Use a classifier that produces a posterior probability P(+|A) for each test instance A
- Sort the instances by P(+|A) in decreasing order
- Apply a threshold at each unique value of P(+|A)
- Count the number of TP, FP, TN, FN at each threshold
  - TP rate: TPR = TP / (TP + FN)
  - FP rate: FPR = FP / (FP + TN)

Instance   P(+|A)   True Class
1          0.95     +
2          0.93     +
3          0.87     -
4          0.85     -
5          0.85     -
6          0.85     +
7          0.76     -
8          0.53     +
9          0.43     -
10         0.25     +
How to Construct an ROC Curve
(Table: TP, FP, TN, FN counts at each threshold "P(+|A) ≥ t", with the ROC curve plotted from the resulting (FPR, TPR) points.)
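A sketch that recomputes this threshold table in Python from the instance list above (class labels as reconstructed there):

```python
scores = [0.95, 0.93, 0.87, 0.85, 0.85, 0.85, 0.76, 0.53, 0.43, 0.25]
labels = ["+", "+", "-", "-", "-", "+", "-", "+", "-", "+"]

# apply a threshold at each unique score: predict + when P(+|A) >= t
for t in sorted(set(scores), reverse=True):
    preds = ["+" if s >= t else "-" for s in scores]
    tp = sum(p == "+" and l == "+" for p, l in zip(preds, labels))
    fp = sum(p == "+" and l == "-" for p, l in zip(preds, labels))
    fn = sum(p == "-" and l == "+" for p, l in zip(preds, labels))
    tn = sum(p == "-" and l == "-" for p, l in zip(preds, labels))
    tpr = tp / (tp + fn)  # TP rate
    fpr = fp / (fp + tn)  # FP rate
    print(f"t={t:.2f}  TP={tp} FP={fp} TN={tn} FN={fn}  TPR={tpr:.2f} FPR={fpr:.2f}")
```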
Precision, Recall, and F-measure
Suppose the cutoff threshold is chosen to be 0.8; in other words, any instance with posterior probability greater than 0.8 is classified as positive. Compute the precision, recall, and F-measure for the model at this threshold, using the ten instances from the table above.
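Working this out with the class labels as reconstructed in the table above: instances 1-6 are predicted positive, so TP = 3 (instances 1, 2, 6), FP = 3 (instances 3, 4, 5), TN = 2 (instances 7, 9), and FN = 2 (instances 8, 10). Then:

$$ p = \frac{TP}{TP + FP} = \frac{3}{6} = 0.5, \qquad r = \frac{TP}{TP + FN} = \frac{3}{5} = 0.6, \qquad F = \frac{2rp}{r + p} = \frac{2(0.6)(0.5)}{1.1} \approx 0.545 $$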