
1 Supervised Learning Regression, Classification Linear regression, k-NN classification Debapriyo Majumdar Data Mining – Fall 2014 Indian Statistical Institute Kolkata August 11, 2014

2 An Example: Size of Engine vs Power [Scatter plot: engine displacement (cc) on the x-axis, power (bhp) on the y-axis] • An unknown car has an engine of size 1800cc. What is likely to be the power of the engine?

3 An Example: Size of Engine vs Power [Scatter plot: engine displacement (cc) vs power (bhp); power is the target variable] • Intuitively, the two variables have a relation • Learn the relation from the given data • Predict the target variable after learning

4 Exercise: on a simpler set of data points [Table of (x, y) training points, with one x value whose y is marked "?"] • Predict y for the given x

5 Linear Regression [Scatter plot of the training set: engine displacement (cc) vs power (bhp)] • Assume: the relation is linear • Then for a given x (= 1800), predict the value of y

6 Linear Regression [Scatter plot of engine displacement (cc) vs power (bhp) with a fitted line; optional exercise with a small engine (cc) / power (bhp) table] • Linear regression • Assume y = a·x + b • Try to find suitable a and b
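To make "find suitable a and b" concrete, here is a minimal Python sketch that fits a straight line to a small training set and predicts the power of an 1800cc engine. The displacement/power values are made-up illustrative numbers (the slide's table is not reproduced in this transcript), and np.polyfit is used only as a convenient least-squares line fitter.

```python
import numpy as np

# Hypothetical training set (illustrative values only):
# engine displacement in cc and power in bhp.
displacement = np.array([800, 1000, 1200, 1500, 2000, 2500], dtype=float)
power = np.array([37, 55, 68, 85, 110, 140], dtype=float)

# Fit y = a*x + b by least squares (a degree-1 polynomial fit).
a, b = np.polyfit(displacement, power, deg=1)

# Predict the power of the unknown car with an 1800cc engine.
print(f"y = {a:.4f}*x + {b:.2f}; predicted power at 1800cc: {a * 1800 + b:.1f} bhp")
```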

7 Exercise: using Linear Regression [Table of (x, y) training points, with one x value whose y is marked "?"] • Define a regression line of your choice • Predict y for the given x

8 Choosing the parameters right • The data points: (x₁, y₁), (x₂, y₂), …, (xₘ, yₘ) • The regression line: f(x) = y = a·x + b • Least-square cost function: J = Σᵢ (f(xᵢ) − yᵢ)² • Goal: minimize J over choices of a and b [Plot of the data points and a candidate line: the goal is minimizing the deviation from the actual data points]
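As a quick illustration of the cost function, the sketch below computes J for a candidate line on a few made-up data points; the values of x, y, a and b are arbitrary and only meant to show the computation.

```python
import numpy as np

def least_square_cost(a, b, x, y):
    """J = sum_i (f(x_i) - y_i)^2 for the line f(x) = a*x + b."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum((a * x + b - y) ** 2))

# Arbitrary illustrative data and candidate parameters:
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 8.1]
print(least_square_cost(2.0, 0.0, x, y))   # cost of the line y = 2x on these points
```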

9 How to Minimize the Cost Function? • Goal: minimize J over all values of a and b • Start from some a = a₀ and b = b₀ • Compute: J(a₀, b₀) • Simultaneously change a and b towards the negative gradient and eventually hope to arrive at an optimum • Question: Can there be more than one optimum? [Surface plot of J over the (a, b) plane, with a gradient step Δ marked]
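A minimal gradient-descent sketch for this cost function is given below. The learning rate and iteration count are arbitrary choices, and the gradients follow from differentiating J = Σᵢ (a·xᵢ + b − yᵢ)² with respect to a and b.

```python
import numpy as np

def gradient_descent(x, y, a0=0.0, b0=0.0, lr=0.01, n_iters=1000):
    """Minimise J(a, b) = sum_i (a*x_i + b - y_i)^2 by stepping against the gradient."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    a, b = a0, b0
    for _ in range(n_iters):
        residual = a * x + b - y              # f(x_i) - y_i
        grad_a = 2.0 * np.sum(residual * x)   # dJ/da
        grad_b = 2.0 * np.sum(residual)       # dJ/db
        a -= lr * grad_a                      # move a and b simultaneously
        b -= lr * grad_b                      #   towards the negative gradient
    return a, b

a, b = gradient_descent([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1])
print(a, b)   # approaches the least-squares line for these points
```

For this particular J the cost surface is convex, so there is a single optimum; with non-convex cost functions gradient descent can get stuck in one of several local optima.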

10 Another example: • Given that a person's age is 24, predict if (s)he has high blood sugar • Discrete values of the target variable (Y / N) • Many ways of approaching this problem [Plot of the training set: age on the x-axis, high blood sugar (N / Y) on the y-axis]

11 Classification problem • One approach: what other data points are nearest to the new point? • Other approaches? [Plot: age on the x-axis, high blood sugar (N / Y) on the y-axis, with the point at age 24 marked "?"]

12 Classification Algorithms • The k-nearest neighbor classification • Naïve Bayes classification • Decision Tree • Linear Discriminant Analysis • Logistic Regression • Support Vector Machine

13 Classification or Regression? Given data about some cars: engine size, number of seats, petrol / diesel, has airbag or not, price • Problem 1: Given the engine size of a new car, what is likely to be the price? • Problem 2: Given the engine size of a new car, is it likely that the car runs on petrol? • Problem 3: Given the engine size, is it likely that the car has airbags?

14 Classification

15 Example: Age, Income and Owning a flat [Scatter plot of the training set: monthly income (thousand rupees) vs age, with points labelled "owns a flat" / "does not own a flat"] • Given a new person's age and income, predict – does (s)he own a flat?

16 Example: Age, Income and Owning a flat [Scatter plot of the training set: monthly income (thousand rupees) vs age, labelled owns / does not own a flat] • Nearest neighbor approach • Find nearest neighbors among the known data points and check their labels

17 Example: Age, Income and Owning a flat [Scatter plot of the training set: monthly income (thousand rupees) vs age, labelled owns / does not own a flat] • The 1-Nearest Neighbor (1-NN) Algorithm: – Find the closest point in the training set – Output the label of the nearest neighbor

18 The k-Nearest Neighbor Algorithm [Scatter plot of the training set: monthly income (thousand rupees) vs age, labelled owns / does not own a flat] • The k-Nearest Neighbor (k-NN) Algorithm: – Find the closest k points in the training set – Majority vote among the labels of the k points
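A compact sketch of the k-NN rule, using Euclidean distance and made-up (age, income) points in the spirit of this example; the data values are invented purely for illustration.

```python
import numpy as np
from collections import Counter

def knn_classify(query, X_train, y_train, k=3):
    """Label a query point by majority vote among its k nearest training points."""
    X_train = np.asarray(X_train, dtype=float)
    distances = np.linalg.norm(X_train - np.asarray(query, dtype=float), axis=1)
    nearest = np.argsort(distances)[:k]               # indices of the k closest points
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical (age, monthly income in thousand rupees) -> owns a flat (Y/N):
X = [(25, 20), (30, 60), (45, 80), (50, 30), (28, 35), (40, 90)]
y = ["N", "Y", "Y", "N", "N", "Y"]
print(knn_classify((35, 70), X, y, k=3))   # "Y" on this toy data
```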

19 Distance measures • How to measure distance to find the closest points? • Euclidean distance between vectors x = (x₁, …, xₖ) and y = (y₁, …, yₖ): d(x, y) = √( Σᵢ (xᵢ − yᵢ)² ) • Manhattan distance: d(x, y) = Σᵢ |xᵢ − yᵢ| • Generalized squared interpoint distance: d²(x, y) = (x − y)ᵀ S⁻¹ (x − y), where S is the covariance matrix – the Mahalanobis distance (1936)
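The three distance measures can be sketched as follows; the covariance matrix S used in the example call is an arbitrary illustrative value.

```python
import numpy as np

def euclidean(x, y):
    return np.sqrt(np.sum((x - y) ** 2))

def manhattan(x, y):
    return np.sum(np.abs(x - y))

def mahalanobis(x, y, S):
    """sqrt((x - y)^T S^-1 (x - y)), with S the covariance matrix of the data."""
    diff = x - y
    return np.sqrt(diff @ np.linalg.inv(S) @ diff)

x = np.array([25.0, 20.0])
y = np.array([30.0, 60.0])
S = np.array([[90.0, 10.0], [10.0, 650.0]])   # illustrative covariance matrix
print(euclidean(x, y), manhattan(x, y), mahalanobis(x, y, S))
```

The Mahalanobis distance accounts for the scale of, and correlation between, the features, which matters when attributes such as age and income live on very different scales.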

20 Classification setup • Training data / set: set of input data points and given answers for the data points • Labels: the list of possible answers • Test data / set: inputs to the classification algorithm for finding labels – Used for evaluating the algorithm in case the answers are known (but not revealed to the algorithm) • Classification task: determining labels of the data points for which the label is not known or not passed to the algorithm • Features: attributes that represent the data

21 Evaluation • Test set accuracy: the correct performance measure • Accuracy = # of correct answers / # of all answers • Need to know the true test labels – Option: use the training set itself – Parameter selection (for k-NN) by accuracy on the training set • Overfitting: a classifier performs too well on the training set compared to new (unlabeled) test data
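The accuracy measure itself is a one-liner; a tiny sketch with made-up label lists:

```python
def accuracy(predicted, true_labels):
    """# of correct answers / # of all answers."""
    correct = sum(p == t for p, t in zip(predicted, true_labels))
    return correct / len(true_labels)

print(accuracy(["Y", "N", "Y", "Y"], ["Y", "N", "N", "Y"]))   # 0.75
```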

22 Better validation methods • Leave one out: – For each data point x of the training set D – Construct training set D – {x}, test set {x} – Train on D – {x}, test on x – Overall accuracy = average over all such cases – Expensive to compute • Hold out set: – Randomly choose x% (say 25–30%) of the training data, set it aside as a test set – Train on the rest of the training data, test on the test set – Easy to compute, but tends to have higher variance
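A minimal leave-one-out sketch, written against a generic classify(query, X_train, y_train, ...) function such as the hypothetical knn_classify above; the usage line in the comment assumes that earlier sketch.

```python
def leave_one_out_accuracy(X, y, classify, **kwargs):
    """For each point, train on all the other points and test on the held-out one."""
    correct = 0
    for i in range(len(X)):
        X_rest, y_rest = X[:i] + X[i + 1:], y[:i] + y[i + 1:]   # D - {x}
        correct += (classify(X[i], X_rest, y_rest, **kwargs) == y[i])
    return correct / len(X)

# e.g. leave_one_out_accuracy(X, y, knn_classify, k=3) with the k-NN sketch above
```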

23 The k-fold Cross Validation Method • Randomly divide the training data into k partitions D₁, …, Dₖ: possibly equal division • For each fold Dᵢ – Train a classifier with training data = D – Dᵢ – Test and validate with Dᵢ • Overall accuracy: average accuracy over all cases
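A minimal k-fold cross-validation sketch along the same lines; again, classify is assumed to be a function like the hypothetical knn_classify above, and the fold assignment is a simple random split.

```python
import random

def k_fold_accuracy(X, y, classify, k_folds=5, seed=0, **kwargs):
    """Average accuracy over k folds: train on D - D_i, validate on D_i."""
    rng = random.Random(seed)
    indices = list(range(len(X)))
    rng.shuffle(indices)
    folds = [indices[i::k_folds] for i in range(k_folds)]   # roughly equal partitions
    accuracies = []
    for fold in folds:
        train_idx = [i for i in indices if i not in fold]
        X_train = [X[i] for i in train_idx]
        y_train = [y[i] for i in train_idx]
        correct = sum(classify(X[i], X_train, y_train, **kwargs) == y[i] for i in fold)
        accuracies.append(correct / len(fold))
    return sum(accuracies) / len(accuracies)

# e.g. k_fold_accuracy(X, y, knn_classify, k_folds=3, k=1) with the k-NN sketch above
```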

24 References • Lecture videos by Prof. Andrew Ng, Stanford University, available on Coursera (Course: Machine Learning) • Data Mining Map:

