1 Last lecture summary

2 Basic terminology
– tasks: classification, regression
– learner, algorithm: each has one or several parameters influencing its behavior
– model: one concrete combination of learner and parameters
– the parameters are tuned using the training set
– the generalization is assessed using the test set (previously unseen data)

3 learning (training)
– supervised: a target vector t is known; parameters are tuned to achieve the best match between the predictions and the target vector
– unsupervised: training data consist of a set of input vectors x without any corresponding target values; e.g. clustering, visualization

4 for most applications, the original input variables must be preprocessed
– feature selection
– feature extraction
[Figure: selection keeps a subset of the original variables (e.g. x5, x103, x456 out of x1 … x784); extraction maps all original variables to a new set of transformed variables (e.g. x*18, x*152, x*309, x*666).]

5 feature selection/extraction = dimensionality reduction
– generally a good thing (curse of dimensionality)
example:
– learner: polynomial regression, y = w0 + w1·x + w2·x² + w3·x³ + …
– parameters: weights (coefficients) w, order of the polynomial
– weights are adjusted so that the sum of squared errors SSE (the error function) is as small as possible: SSE = Σn (y(xn) − tn)², where y(xn) is the predicted value and tn is the known target
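A minimal sketch of this setup (assuming numpy; the data here is made up for illustration, not taken from the slides):

    # Hypothetical illustration: fit polynomials of increasing order by
    # least squares and measure the sum of squared errors (SSE).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 10)
    t = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy targets

    for order in (1, 3, 9):
        w = np.polyfit(x, t, deg=order)    # weights w minimizing SSE
        y = np.polyval(w, x)               # predictions y(x_n)
        sse = np.sum((y - t) ** 2)         # error function: sum of squared errors
        print(f"order {order}: SSE = {sse:.4f}")

Training SSE shrinks as the order grows, which is exactly why it cannot be used for model selection, as the next slide notes.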

6 order of the polynomial
– a problem of model selection
– for model comparison use MSE or the RMS error (independent of N)
– training error always goes down with increasing polynomial order
– however, test error gets worse for high orders of the polynomial (overfitting)

7 [Figure: fits evaluated on the training set vs. the test set.]

8 overfitting

9 for a given model complexity, the overfitting problem becomes less severe as the size of the data set increases [Figure: M = 9 polynomial fitted with N = 15 vs. N = 100 points]; in other words, the larger the data set, the more complex (flexible) a model can be fitted

10 Bias-variance tradeoff
– large bias: the model is not accurate enough, it is not able to represent the data accurately (large training error)
– large variance: overfitting occurs (the model's predictions depend strongly on the particular sample used to build it)
– tradeoff: low-flexibility models have large bias and low variance; high-flexibility models have low bias and large variance

11 A polynomial with too few parameters (too low a degree) will make large errors because of a large bias. A polynomial with too many parameters (too high a degree) will make large errors because of a large variance. MSE is a good error measure because MSE = variance + bias².

12 Test-data and Cross Validation

13 [Figure: a data table; columns are attributes (input/independent variables, features), rows are objects (instances, samples), and one designated column holds the class.]

14 Attribute types
discrete – has only a finite or countably infinite set of values
– nominal (also categorical): the values are just different labels (e.g. ID number, eye color); central tendency given by the mode (median and mean are not defined)
– ordinal: the values reflect an order (e.g. ranking, height in {tall, medium, short}); central tendency given by the median or mode (mean is not defined)
– binary attributes: a special case of discrete attributes
continuous (also quantitative) – has real numbers as attribute values; central tendency given by the mean (plus standard deviation, …)

15 A regression problem: y = f(x) + noise. Can we learn from this data? Consider three methods. (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

16 Linear regression. What will the regression model look like? y = ax + b, i.e. univariate linear regression with a constant term. (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

17 [Figure: the linear fit to the data.]

18 Quadratic regression. What will the regression model look like? y = ax² + bx + c. (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

19 [Figure: the quadratic fit to the data.]

20 Join-the-dots. Also known as piecewise linear nonparametric regression, if that makes you feel better. (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

21 Which is best? Why not choose the method with the best fit to the data? (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

22 What do we really want? Why not choose the method with the best fit to the data? How well are you going to predict future data? (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

23 The test set method
1. Randomly choose 30% of the data to be in the test set.
2. The remainder is the training set.
3. Perform regression on the training set.
4. Estimate future performance with the test set.
linear regression: MSE = 2.4 (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)
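A sketch of this protocol in numpy (the data-generating line is a made-up placeholder, not Moore's data set, so the MSE will differ from the 2.4 above):

    # Hypothetical sketch of the test set method: random 30% test split,
    # fit linear regression on the rest, estimate future MSE on the test set.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, size=50)
    y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)  # y = f(x) + noise

    idx = rng.permutation(x.size)
    n_test = int(0.3 * x.size)
    test, train = idx[:n_test], idx[n_test:]

    a, b = np.polyfit(x[train], y[train], deg=1)  # univariate linear regression y = ax + b
    mse_test = np.mean((a * x[test] + b - y[test]) ** 2)
    print(f"test MSE = {mse_test:.2f}")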

24 The test set method
1. Randomly choose 30% of the data to be in the test set.
2. The remainder is the training set.
3. Perform regression on the training set.
4. Estimate future performance with the test set.
quadratic regression: MSE = 0.9 (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

25 The test set method
1. Randomly choose 30% of the data to be in the test set.
2. The remainder is the training set.
3. Perform regression on the training set.
4. Estimate future performance with the test set.
join-the-dots: MSE = 2.2 (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

26 Test set method
good news
– very simple
– then just choose the method with the best score
bad news
– wastes data (we estimated the best method using 30% less data)
– if you don't have enough data, the test set may be just lucky/unlucky; the test-set estimator of performance has high variance
[Figure: the data split into Train | Test.] (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

27 [Figure: training error and testing error as functions of model complexity.] The examples above compared different algorithms; this one is about model complexity (for a given algorithm).

28 stratified division – the same class proportions are kept in the training and test sets
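A stratified split can be sketched in plain numpy like this (the function name and the binary-label assumption are mine, for illustration only):

    # Hypothetical sketch: split each class separately so the train and
    # test sets keep the same class proportions.
    import numpy as np

    def stratified_split(y, test_frac=0.3, seed=0):
        rng = np.random.default_rng(seed)
        test_idx = []
        for cls in np.unique(y):
            members = np.flatnonzero(y == cls)   # indices of this class
            rng.shuffle(members)
            test_idx.extend(members[: int(test_frac * members.size)])
        test = np.array(sorted(test_idx))
        train = np.setdiff1d(np.arange(y.size), test)
        return train, test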

29 LOOCV (Leave-one-out Cross Validation)
1. Choose one data point.
2. Remove it from the set.
3. Fit the remaining data points.
4. Note your error.
Repeat these steps for all points. When you are done, report the mean squared error. (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)
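The LOOCV loop above, sketched in numpy for the polynomial-regression case (function name is mine; deg=1 reproduces the linear case):

    # Hypothetical sketch of leave-one-out cross validation.
    import numpy as np

    def loocv_mse(x, y, deg=1):
        errors = []
        for i in range(x.size):
            mask = np.arange(x.size) != i           # remove one point
            w = np.polyfit(x[mask], y[mask], deg)   # fit the remaining points
            errors.append((np.polyval(w, x[i]) - y[i]) ** 2)  # note the error
        return np.mean(errors)                      # report the mean squared error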

30 linear regression: MSE_LOOCV = 2.12 (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

31 quadratic regression: MSE_LOOCV = 0.962 (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

32 join-the-dots: MSE_LOOCV = 3.33 (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

33 Which kind of Cross Validation? Can we get the best of both worlds? (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

34 k-fold Cross Validation
Randomly break the data set into k partitions (in our case k = 3).
– Red partition: train on all points not in the red partition; find the test-set sum of errors on the red points.
– Blue partition: train on all points not in the blue partition; find the test-set sum of errors on the blue points.
– Green partition: train on all points not in the green partition; find the test-set sum of errors on the green points.
Then report the mean error.
linear regression: MSE_3fold = 2.05 (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)
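The same procedure as a numpy sketch (function name is mine; k = 3 as in the example, deg=1 for linear regression):

    # Hypothetical sketch: random k partitions, train on k-1 of them,
    # test on the held-out one, report the mean error.
    import numpy as np

    def kfold_mse(x, y, deg=1, k=3, seed=0):
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(x.size), k)  # random partitions
        errors = []
        for fold in folds:
            train = np.setdiff1d(np.arange(x.size), fold)   # all points not in this fold
            w = np.polyfit(x[train], y[train], deg)
            errors.append(np.mean((np.polyval(w, x[fold]) - y[fold]) ** 2))
        return np.mean(errors)                              # mean error over folds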

35 [Figure: results of 3-fold Cross Validation for the three methods.] (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)

36 Which kind of Cross Validation?

37 Model selection via CV
We are trying to decide which model to use. For polynomial regression, decide the degree of the polynomial. Train each machine and make a table:
[Table: degree (1–6) vs. MSE_train, MSE_10-fold, and Choice; the numeric entries were shown in the original figure.]
Whichever model gave the best CV score: train it with all the data. That's the predictive model you'll use. (taken from the Cross Validation tutorial by Andrew Moore, http://www.autonlab.org/tutorials/overfit.html)
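Putting it together, degree selection by CV might look like the sketch below (it reuses the hypothetical kfold_mse from the k-fold slide above; k = 10 as in the table):

    # Hypothetical sketch: pick the polynomial degree with the best CV score,
    # then retrain on all the data.
    import numpy as np
    # (assumes kfold_mse from the previous sketch is in scope)

    def select_degree(x, y, degrees=range(1, 7), k=10):
        scores = {d: kfold_mse(x, y, deg=d, k=k) for d in degrees}
        best = min(scores, key=scores.get)      # whichever model gave the best CV score
        final_model = np.polyfit(x, y, best)    # train it with all the data
        return best, final_model, scores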

38 Selection and testing
The complete procedure for algorithm selection and estimation of its quality:
1. Divide the data into Train/Test.
2. By Cross Validation on the Train set, choose the algorithm.
3. Use this algorithm to construct a classifier using the Train set.
4. Estimate its quality on the Test set.
[Figure: the data split into Train | Test; within Train, further splits into Train | Val for the cross validation.]

39 Training error cannot be used as an indicator of a model's performance, due to overfitting. On the training data set, train a range of models, or a given model with a range of values for its parameters. Compare them on independent data, the validation set. If the model design is iterated many times, some overfitting to the validation data can occur, so it may be necessary to keep aside a third test set on which the performance of the selected model is finally evaluated.

40 Which class (blue or orange) would you predict for this point? And why? [Figure: two classes of points, a classification boundary, and a query point marked '?'.]

41 And now? The classification boundary is quadratic. [Figure: the same two classes with a quadratic boundary and a query point marked '?'.]

42 And now? And why? [Figure: another configuration of the two classes with a query point marked '?'.]

43 Nearest Neighbors Classification

44 instances

45 But what does "similar" mean? [Figure: objects A, B, C, D.] (source: Kardi Teknomo's Tutorials, http://people.revoledu.com/kardi/tutorial/index.html)

46 Similarity s_ij is a quantity that reflects the strength of the relationship between two objects.
– This quantity usually ranges either from −1 to +1, or is normalized into 0 to 1.
Distance d_ij measures dissimilarity.
– Dissimilarity measures the discrepancy between the two objects.
– Distance is a quantitative variable that satisfies the following conditions:
  - distance is always positive or zero (d_ij ≥ 0)
  - distance is zero if and only if it is measured from an object to itself (d_ij = 0 ⇔ i = j)
  - distance is symmetric (d_ij = d_ji)

47 In addition, if a distance satisfies the triangle inequality d_ij ≤ d_ik + d_kj (for norms: |x + y| ≤ |x| + |y|), then it is called a metric. Not all distances are metrics, but all metrics are distances.

48 Distances for binary variables
Fruit  | Sphere shape | Sweet | Sour | Crunchy
Apple  | Yes          | Yes   | Yes  | Yes
Banana | No           | Yes   | No   | No
coded as binary vectors: Apple = 1111, Banana = 0100
p – number of variables positive for both objects
q – positive for the i-th object and negative for the j-th object
r – negative for the i-th object and positive for the j-th object
s – negative for both objects
t = p + q + r + s (total number of variables)
For Apple vs. Banana: p = 1, q = 3, r = 0, s = 0.

49 Simple matching coefficient/distance: d_ij = (q + r) / t
Jaccard coefficient/distance: d_ij = (q + r) / (p + q + r) (negative matches s are ignored)
Hamming distance: d_ij = q + r
p – number of variables positive for both objects
q – positive for the i-th object and negative for the j-th object
r – negative for the i-th object and positive for the j-th object
s – negative for both objects
t = p + q + r + s (total)
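A sketch computing these counts and distances for the apple/banana coding from the previous slide (the formulas above are the standard definitions in terms of p, q, r, s):

    # Hypothetical sketch: simple matching, Jaccard, and Hamming for binary vectors.
    import numpy as np

    apple  = np.array([1, 1, 1, 1])   # sphere shape, sweet, sour, crunchy
    banana = np.array([0, 1, 0, 0])

    p = np.sum((apple == 1) & (banana == 1))   # positive for both
    q = np.sum((apple == 1) & (banana == 0))   # positive for i, negative for j
    r = np.sum((apple == 0) & (banana == 1))   # negative for i, positive for j
    s = np.sum((apple == 0) & (banana == 0))   # negative for both
    t = p + q + r + s

    smc_distance     = (q + r) / t             # simple matching distance
    jaccard_distance = (q + r) / (p + q + r)   # ignores negative matches s
    hamming          = q + r                   # number of differing positions
    print(p, q, r, s, smc_distance, jaccard_distance, hamming)  # 1 3 0 0 ...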

50 Distances for quantitative variables
Minkowski distance (L_p norm): d(x, y) = (Σ_i |x_i − y_i|^p)^(1/p)
distance matrix – matrix with all pairwise distances

51 Manhattan distance. How would you measure the distance between two bikers in Manhattan? (source: wikipedia)

52 [Figure: Manhattan distance between x = (x1, x2) and y = (y1, y2): d(x, y) = |x1 − y1| + |x2 − y2|.]

53 Euclidean distance [Figure: d(x, y) = √((x1 − y1)² + (x2 − y2)²) for x = (x1, x2), y = (y1, y2).]
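The two distances, plus the general Minkowski form they are special cases of, sketched in numpy (points chosen by me for illustration):

    # Hypothetical sketch: Minkowski distance (L_p norm); p = 1 gives the
    # Manhattan distance, p = 2 gives the Euclidean distance.
    import numpy as np

    def minkowski(x, y, p):
        return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

    x = np.array([1.0, 2.0])
    y = np.array([4.0, 6.0])
    print(minkowski(x, y, 1))  # Manhattan: |1-4| + |2-6| = 7
    print(minkowski(x, y, 2))  # Euclidean: sqrt(3^2 + 4^2) = 5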

54 Back to k-NN
– supervised learning
– the target function f may be discrete-valued (classification) or real-valued (regression)
– we assign a new point to the class of the training instance(s) most similar to it

55 Discrete-valued target function
The unknown sample x is assigned the class that is most common among the k training examples closest to x. (Tan, Steinbach, Kumar – Introduction to Data Mining)
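A minimal k-NN classifier following this rule (a brute-force sketch of my own, with Euclidean distance and a majority vote):

    # Hypothetical sketch: assign x the most common class among its k nearest
    # training examples (Euclidean distance, brute force).
    import numpy as np
    from collections import Counter

    def knn_classify(X_train, y_train, x, k=3):
        d = np.sqrt(np.sum((X_train - x) ** 2, axis=1))  # distances to all examples
        nearest = np.argsort(d)[:k]                      # indices of the k closest
        return Counter(y_train[nearest]).most_common(1)[0][0]  # majority vote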

56 k-NN never forms an explicit general hypothesis f′ regarding the target function f.
– It simply computes the classification of each new instance as needed.
Nevertheless, we can still ask what classification would be assigned if we held the training examples constant and queried the algorithm with every possible instance x.

57 1-NN … Voronoi tessellation

58 1-NN … classification boundary

59 Which k is best? [Figure: decision boundaries for k = 1 and k = 15; Hastie et al., Elements of Statistical Learning]
– k too small: fits noise and outliers, overfitting
– k too large: smooths out the distinctive behavior of the data
– choose k by cross-validation

60 Real-valued target function
The algorithm calculates the mean value of the k nearest training examples.
[Example: k = 3, neighbor values 12, 14, and 10; predicted value = (12 + 14 + 10)/3 = 12.]
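The regression variant differs from the classifier sketched earlier only in the last step: averaging instead of voting (again a brute-force sketch of my own):

    # Hypothetical sketch: k-NN regression predicts the mean of the k nearest values.
    import numpy as np

    def knn_regress(X_train, t_train, x, k=3):
        d = np.sqrt(np.sum((X_train - x) ** 2, axis=1))  # distances to all examples
        nearest = np.argsort(d)[:k]                      # indices of the k closest
        return np.mean(t_train[nearest])                 # e.g. (12 + 14 + 10) / 3 = 12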

61 Distance-weighted NN
Refinement: weight the contribution of each of the k nearest neighbors according to its distance from the query point, giving greater weight to closer neighbors.
[Example: k = 4, neighbor distances 1 and 2 (one class) vs. 4 and 5 (the other). Unweighted: 2 votes each. Weighted: 1/1² + 1/2² = 1.25 votes vs. 1/4² + 1/5² = 0.102 votes.]
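The weighted vote from the example, computed directly (the 1/d² weighting is the one used on the slide; variable names are mine):

    # Hypothetical sketch: distance-weighted voting with weights 1/d^2,
    # reproducing the slide's numbers for k = 4.
    d_class_a = [1.0, 2.0]                        # distances of class-A neighbors
    d_class_b = [4.0, 5.0]                        # distances of class-B neighbors
    votes_a = sum(1.0 / d**2 for d in d_class_a)  # 1/1^2 + 1/2^2 = 1.25
    votes_b = sum(1.0 / d**2 for d in d_class_b)  # 1/4^2 + 1/5^2 ~= 0.102
    print(votes_a, votes_b)                       # the closer neighbors dominate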

62 Euclidean distance issues
Attributes with large values can overwhelm the influence of attributes measured on a smaller scale.
Solution: normalize the values
– min-max normalization
– Z-score standardization
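Both rescalings in numpy, applied per attribute, i.e. per column (function names are mine):

    # Hypothetical sketch: min-max normalization to [0, 1] and Z-score
    # standardization, per attribute (column) of a data matrix X.
    import numpy as np

    def min_max(X):
        lo, hi = X.min(axis=0), X.max(axis=0)
        return (X - lo) / (hi - lo)                  # each column scaled to [0, 1]

    def z_score(X):
        return (X - X.mean(axis=0)) / X.std(axis=0)  # zero mean, unit stdev per column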

63 k-NN issues
The distance is calculated based on ALL attributes. Example:
– each instance is described by 20 attributes, but only 2 are relevant
– instances with identical values of the 2 relevant attributes (i.e. zero distance in the 2-D space) may still be distant in the 20-D space
– thus the similarity metric will be misleading
– this is a manifestation of the curse of dimensionality

64 k-NN issues
Significant computation may be required to process each new query: to find the nearest neighbors, one has to compute the distance from the query to every stored training example. Efficient indexing of the stored training examples helps, e.g. a kd-tree.

65 instance-based learning (memory-based learning)
– a family of learning algorithms that, instead of performing explicit generalization, compare new problem instances with instances seen in training, which have been stored in memory
– it is a kind of lazy learning
lazy learning
– generalization beyond the training data is delayed until a query is made to the system
– as opposed to eager learning, where the system tries to generalize the training data before receiving queries
– lazy learners: e.g. k-NN

