Data mining and statistical learning - lecture 13: Separating hyperplane


1 Separating hyperplane

2 Optimal separating hyperplane – the support vector classifier
Find the hyperplane that creates the largest margin between the training points of class 1 and class -1.
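A minimal sketch, not part of the original slides, of how such a maximal-margin classifier can be fitted in practice; scikit-learn, the toy data and all variable names are assumptions made purely for illustration.

import numpy as np
from sklearn.svm import SVC

# Two linearly separable groups, labelled -1 and +1 (made-up data).
X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],
              [4.0, 4.0], [5.0, 4.5], [4.5, 5.0]])
y = np.array([-1, -1, -1, 1, 1, 1])

# A very large cost C approximates the hard-margin (separable) problem.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]                   # normal vector (beta) of the hyperplane
margin = 2.0 / np.linalg.norm(w)   # width of the margin, 2 / ||beta||
print("support vectors:\n", clf.support_vectors_)
print("margin width:", margin)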

3 Formulation of the optimization problem
The signed distance to the decision border is used, with y = 1 for one of the groups and y = -1 for the other.
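The formulas themselves appear only as images in the transcript; a standard reconstruction (following Hastie, Tibshirani and Friedman) of the signed distance and the resulting maximal-margin problem is

    f(x) = \beta_0 + x^T \beta, \qquad \text{signed distance} = \frac{f(x)}{\lVert \beta \rVert},

    \max_{\beta_0,\, \beta,\, \lVert \beta \rVert = 1} M \quad \text{subject to} \quad y_i (x_i^T \beta + \beta_0) \ge M, \quad i = 1, \dots, N.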

4 Two equivalent formulations of the optimization problem
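The two formulations are not reproduced in the transcript; the standard pair (again following Hastie et al.) is

    \max_{\beta_0,\, \beta,\, \lVert \beta \rVert = 1} M \quad \text{s.t.} \quad y_i (x_i^T \beta + \beta_0) \ge M \ \text{for all } i,

which, after dropping the norm constraint and setting M = 1 / \lVert \beta \rVert, is equivalent to

    \min_{\beta_0,\, \beta} \tfrac{1}{2} \lVert \beta \rVert^2 \quad \text{s.t.} \quad y_i (x_i^T \beta + \beta_0) \ge 1 \ \text{for all } i.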

5 Optimal separating hyperplane – overlapping classes
Find the hyperplane that creates the largest margin subject to a constraint on the slack variables ξ1, ξ2, ξ3, ... labelled in the figure (see the reconstruction below).
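The constraint on the slide is shown only as an image; the standard soft-margin formulation it presumably corresponds to is

    \max M \quad \text{s.t.} \quad y_i (x_i^T \beta + \beta_0) \ge M (1 - \xi_i), \quad \xi_i \ge 0, \quad \sum_i \xi_i \le \text{constant},

or, in the equivalent penalized form,

    \min_{\beta_0,\, \beta} \tfrac{1}{2} \lVert \beta \rVert^2 + C \sum_{i=1}^N \xi_i \quad \text{s.t.} \quad y_i (x_i^T \beta + \beta_0) \ge 1 - \xi_i, \quad \xi_i \ge 0.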

6 Characteristics of the support vector classifier
- Points well inside their class boundary do not play a big role in shaping the decision border.
- Compare with linear discriminant analysis (LDA), for which the decision boundary is determined by the covariance matrices of the class distributions and their centroids.

7 Support vector machines using basis expansions (polynomials, splines)
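A brief sketch, not from the slides, of a support vector machine in an enlarged feature space; here the polynomial basis expansion is handled implicitly through a kernel, and the simulated data and scikit-learn call are assumptions for illustration only.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)  # non-linear class boundary

# Degree-3 polynomial kernel: equivalent to fitting the SVM in the enlarged
# feature space of all monomials up to degree 3.
clf = SVC(kernel="poly", degree=3, C=1.0).fit(X, y)
print("training accuracy:", clf.score(X, y))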

8 Characteristics of support vector machines
- The dimension of the enlarged feature space can be very large.
- Overfitting is prevented by a built-in shrinkage of the beta coefficients.
- Irrelevant inputs can create serious problems.

9 The SVM as a penalization method
Misclassification: f(x) < 0 when y = 1, or f(x) > 0 when y = -1.
Loss function, and loss function + penalty: see the reconstruction below.
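The loss functions referred to above appear only as images; the standard reconstruction is the hinge loss and its penalized version,

    \sum_{i=1}^N \left[ 1 - y_i f(x_i) \right]_+, \qquad \min_{\beta_0,\, \beta} \; \sum_{i=1}^N \left[ 1 - y_i f(x_i) \right]_+ + \frac{\lambda}{2} \lVert \beta \rVert^2,

whose minimizer is the support vector classifier.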

10 The SVM as a penalization method
Minimizing the loss function + penalty is equivalent to fitting a support vector machine to the data.
The penalty factor λ is a function of the constant providing an upper bound on the sum of the slack variables.

11 Some characteristics of different learning methods

Characteristic | Neural networks | Support vector machines | Trees | MARS
Natural handling of data of "mixed" type | Poor | Poor | Good | Good
Handling of missing values | Poor | Poor | Good | Good
Robustness to outliers in input space | Poor | Poor | Good | Poor
Insensitive to monotone transformations of inputs | Poor | Poor | Good | Poor
Computational scalability (large N) | Poor | Poor | Good | Good
Ability to deal with irrelevant inputs | Poor | Poor | Good | Good
Ability to extract linear combinations of features | Good | Good | Poor | Poor
Interpretability | Poor | Poor | Fair | Good
Predictive power | Good | Good | Poor | Fair

12 The ε-insensitive error function
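The error function is shown only as a figure in the original; its standard definition is

    V_\varepsilon(r) = \begin{cases} 0, & |r| < \varepsilon, \\ |r| - \varepsilon, & \text{otherwise,} \end{cases}

so residuals smaller than ε in absolute value incur no loss.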

13 SVMs for linear regression
Estimate the regression coefficients by minimizing the ε-insensitive criterion (see the reconstruction below).
(i) The fitting is less sensitive to outliers than OLS.
(ii) Errors of size less than ε are ignored.
(iii) Typically, the parameter estimates are functions of only a minor subset of the observations.
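The minimization criterion itself is not in the transcript; the standard form (Hastie et al.), built on the ε-insensitive error function above, is

    H(\beta, \beta_0) = \sum_{i=1}^N V_\varepsilon\!\left( y_i - f(x_i) \right) + \frac{\lambda}{2} \lVert \beta \rVert^2, \qquad f(x) = x^T \beta + \beta_0.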

14 Ensemble methods
- Bootstrapping (Chapter 8)
- Bagging (Chapter 8)
- Boosting (Chapter 10)
- Bagging and boosting in SAS EM

15 Major types of ensemble methods
- Manipulation of the model
- Manipulation of the data set

16 Terminology
- Bagging = manipulation of the data set
- Boosting = manipulation of the model

17 The bootstrap
We would like to determine a functional F(P) of an unknown probability distribution P.
The bootstrap: compute F(P*), where P* is an approximation of P (in practice, the empirical distribution of the observed data).

18 Resampling techniques – the bootstrap method
[Figure: a bootstrap sample is drawn by sampling with replacement from the observed data; some observations appear more than once, others not at all.]

19 The bootstrap for assessing the accuracy of an estimate or prediction
Bootstrap samples are generated by sampling with replacement from the observed data.
1. Generate N bootstrap samples and compute the statistic T_k for each sample, k = 1, ..., N.
2. Compute the sample variance of the T_k.
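A minimal sketch, not from the slides, of the two steps above; the data, the choice of statistic (the sample median) and the number of bootstrap samples are all assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=50)   # made-up observed data

def statistic(x):
    return np.median(x)                           # T = the statistic of interest

B = 1000                                          # number of bootstrap samples (N on the slide)
T = np.empty(B)
for k in range(B):
    sample = rng.choice(data, size=len(data), replace=True)  # resample with replacement
    T[k] = statistic(sample)

print("bootstrap estimate of Var(T):", T.var(ddof=1))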

20 Bagging – using the bootstrap to improve a prediction
Question: given the model Y = f(X) + ε and a set of observed values Z = {(x_i, y_i), i = 1, ..., N}, what is the expectation of the fitted function over P, where P denotes the distribution of (X, Y)?
Solution: replace P with P*: produce B bootstrap samples, compute the fitted function for each sample, and compute the sample mean by averaging over the bootstrap functions.

21 Bagging
Formula: construct the fitted function for each bootstrap sample and compute their average (see below).
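The averaging formula appears only as an image; the standard bagging estimate (Hastie et al., Chapter 8) is

    \hat{f}_{\text{bag}}(x) = \frac{1}{B} \sum_{b=1}^{B} \hat{f}^{*b}(x),

where \hat{f}^{*b} is the function fitted to the b-th bootstrap sample.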

22 Properties of bagging
- Bagging of fitted functions reduces the variance.
- Bagging makes good predictions better and bad predictions worse.
- If the fitted function is linear, it coincides asymptotically with the bagged estimate (B → ∞).

23 Bagging for classification
Given a K-class classification problem with Z = {(x_i, y_i), i = 1, ..., N} and a computed indicator function (or vector of class probabilities), we produce a bagging estimate by averaging over bootstrap samples and predict the class with the largest averaged value (majority vote); a sketch follows below.
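An illustrative sketch, not from the lecture, of bagged classification trees with majority voting; scikit-learn's BaggingClassifier (whose default base learner is a decision tree), the simulated data and all parameter values are assumptions.

from sklearn.ensemble import BaggingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each of the B = 50 base trees is fitted to a bootstrap sample of the
# training data; class predictions are combined by voting.
bag = BaggingClassifier(n_estimators=50, random_state=0)
bag.fit(X_train, y_train)
print("test accuracy:", bag.score(X_test, y_test))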

24 Boosting – basic idea
Consider a 2-class problem with Y ∈ {-1, 1} and a classifier G(X).
Produce a sequence of classifiers and combine them.
The weights of misclassified observations are increased to force the algorithm to classify them correctly at the next step.
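A minimal sketch, not from the slides, of the weight-update idea described above, in the spirit of AdaBoost.M1 with decision stumps; the simulated data, the number of rounds and all variable names are assumptions for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y01 = make_classification(n_samples=400, n_features=10, random_state=1)
y = np.where(y01 == 1, 1, -1)          # code the two classes as -1 / +1

N, M = len(y), 50                      # M boosting rounds
w = np.full(N, 1.0 / N)                # observation weights, initially equal
stumps, alphas = [], []

for m in range(M):
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w * (pred != y)) / np.sum(w)        # weighted training error
    err = np.clip(err, 1e-10, 1 - 1e-10)
    alpha = np.log((1.0 - err) / err)                # classifier weight
    w *= np.exp(alpha * (pred != y))                 # up-weight misclassified points
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# Combined classifier: weighted majority vote of the M stumps.
F = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
print("training accuracy:", np.mean(np.sign(F) == y))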

25 Boosting

26 Boosting

27 Boosting – comments
- Boosting can be modified for regression.
- AdaBoost.M1 can be modified to handle categorical output.

28 Bagging and boosting in SAS EM
- Create a diagram: Input node (define the target!) – Partition node – Group processing node – your model – Ensemble node.
- Comment: boosting works only for classification (categorical output).

29 Group processing: general
Modes:
- Unweighted resampling for bagging
- Weighted resampling for boosting

30 Group processing – unweighted resampling for bagging
Specify the sample size.

31 Group processing – weighted resampling for boosting
Specify the target.

32 Ensemble results

