An Introduction to Support Vector Machines


1 An Introduction to Support Vector Machines
Jinwei Gu 2008/10/16

2 Review: What We’ve Learned So Far
Bayesian Decision Theory
Maximum-Likelihood & Bayesian Parameter Estimation
Nonparametric Density Estimation: Parzen-Window, kn-Nearest-Neighbor
K-Nearest Neighbor Classifier
Decision Tree Classifier

3 Today: Support Vector Machine (SVM)
A classifier derived from statistical learning theory by Vapnik et al. in 1992.
SVM became famous when, using images as input, it gave accuracy comparable to neural networks with hand-designed features in a handwriting recognition task.
Currently, SVM is widely used in object detection & recognition, content-based image retrieval, text recognition, biometrics, speech recognition, etc.
It is also used for regression (not covered today).
Reading: Sections 5.1, 5.2, 5.3 (5.4*) in the textbook.
V. Vapnik

4 Outline
Linear Discriminant Function
Large Margin Linear Classifier
Nonlinear SVM: The Kernel Trick
Demo of SVM

5 Discriminant Function
Recall Chapter 2.4: the classifier is said to assign a feature vector x to class wi if gi(x) > gj(x) for all j ≠ i.
For the two-category case, define a single discriminant g(x) = g1(x) - g2(x) and decide w1 if g(x) > 0, otherwise w2.
An example we have seen before: the Minimum-Error-Rate Classifier, with gi(x) = P(wi | x).
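As a minimal sketch of this decision rule (numpy is assumed, and the gi values below are illustrative numbers rather than outputs of a real model), the classifier simply picks the class with the largest discriminant:

```python
# Assign x to class w_i with the largest discriminant g_i(x).
# The g values are made-up posteriors for a three-class example.
import numpy as np

g = np.array([0.2, 0.5, 0.3])        # assumed g_i(x), e.g. P(w_i | x)
predicted_class = int(np.argmax(g))  # decision rule: argmax_i g_i(x)
print(predicted_class)               # -> 1
```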

6 Discriminant Function
The discriminant function can be an arbitrary function of x, such as:
Nearest Neighbor
Decision Tree
Linear Functions
Nonlinear Functions

7 Linear Discriminant Function
g(x) is a linear function: g(x) = wT x + b.
The set wT x + b = 0 is a hyper-plane in the feature space; it separates the half-space where wT x + b > 0 from the half-space where wT x + b < 0.
The (unit-length) normal vector of the hyper-plane is n = w / ||w||.
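As a small sketch of these definitions (numpy is assumed; the weight vector, bias, and points are made-up values), we can evaluate g(x), classify by its sign, and compute the unit normal:

```python
# Evaluate the linear discriminant g(x) = w^T x + b for a few points,
# classify by sign(g), and compute the hyper-plane's unit normal w/||w||.
import numpy as np

w = np.array([2.0, -1.0])          # assumed weight vector
b = 0.5                            # assumed bias
points = np.array([[1.0, 1.0],
                   [-1.0, 3.0]])

g = points @ w + b                 # g(x) for each point
labels = np.sign(g)                # +1 on one side of the hyper-plane, -1 on the other
unit_normal = w / np.linalg.norm(w)
print(g, labels, unit_normal)
```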

8 Linear Discriminant Function
How would you classify these points (class +1 vs. class -1) using a linear discriminant function in order to minimize the error rate?
There are an infinite number of answers!

11 Linear Discriminant Function
How would you classify these points (class +1 vs. class -1) using a linear discriminant function in order to minimize the error rate?
There are an infinite number of answers! Which one is the best?

12 Large Margin Linear Classifier
The linear discriminant function (classifier) with the maximum margin is the best.
The margin is defined as the width by which the boundary could be increased before hitting a data point; it acts as a "safe zone" around the boundary.
Why is it the best? Because it is robust to outliers and thus has strong generalization ability.

13 Large Margin Linear Classifier
Given a set of data points {(xi, yi)}, i = 1, ..., n, with yi ∈ {+1, -1}, we want:
wT xi + b > 0 for yi = +1, and wT xi + b < 0 for yi = -1.
With a scale transformation on both w and b, the above is equivalent to:
wT xi + b >= 1 for yi = +1, and wT xi + b <= -1 for yi = -1.

14 Large Margin Linear Classifier
We know that the support vectors x+ and x- lie on the two margin planes wT x+ + b = 1 and wT x- + b = -1 (the decision boundary is wT x + b = 0).
The margin width is:
m = (x+ - x-) · n = (x+ - x-) · w / ||w|| = 2 / ||w||
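A short worked derivation of that width, using only the two margin-plane equations above (x+ and x- are the support vectors from the slide's figure):

```latex
% w^T x^+ + b = +1 and w^T x^- + b = -1  =>  w^T (x^+ - x^-) = 2.
% Projecting (x^+ - x^-) onto the unit normal n = w/||w|| gives the margin:
\[
  m \;=\; (\mathbf{x}^{+}-\mathbf{x}^{-})\cdot\frac{\mathbf{w}}{\lVert\mathbf{w}\rVert}
    \;=\; \frac{\mathbf{w}^{T}(\mathbf{x}^{+}-\mathbf{x}^{-})}{\lVert\mathbf{w}\rVert}
    \;=\; \frac{2}{\lVert\mathbf{w}\rVert}
\]
```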

15 Large Margin Linear Classifier
Formulation: maximize the margin width 2 / ||w||
such that: for yi = +1, wT xi + b >= 1; for yi = -1, wT xi + b <= -1

16 Large Margin Linear Classifier
Formulation: maximize 2 / ||w||
such that: yi (wT xi + b) >= 1

17 Large Margin Linear Classifier
Formulation (equivalently): minimize ½ ||w||²
such that: yi (wT xi + b) >= 1

18 Solving the Optimization Problem
The problem
minimize ½ ||w||²  s.t.  yi (wT xi + b) >= 1, i = 1, ..., n
is quadratic programming with linear constraints. Introducing a Lagrange multiplier αi >= 0 for each constraint gives the Lagrangian function:
L(w, b, α) = ½ ||w||² - Σi αi [ yi (wT xi + b) - 1 ]

19 Solving the Optimization Problem
Minimizing L(w, b, α) with respect to w and b (setting the partial derivatives to zero) gives:
∂L/∂w = 0  ⇒  w = Σi αi yi xi
∂L/∂b = 0  ⇒  Σi αi yi = 0

20 Solving the Optimization Problem
Substituting back yields the Lagrangian Dual Problem:
maximize  Σi αi - ½ Σi Σj αi αj yi yj xiT xj
s.t.  αi >= 0, and Σi αi yi = 0

21 Solving the Optimization Problem
From the KKT condition, we know that αi [ yi (wT xi + b) - 1 ] = 0 at the solution.
Thus, only the points lying on the planes wT x + b = 1 or wT x + b = -1 (the support vectors) have αi > 0; all other points have αi = 0.
The solution has the form:
w = Σi αi yi xi = Σ(i ∈ SV) αi yi xi
b = yk - wT xk for any support vector xk.
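To make the quadratic program concrete, here is a hedged sketch of solving the hard-margin dual numerically; it assumes the cvxopt package and a tiny made-up linearly separable dataset, neither of which is prescribed by the slides:

```python
# Solve the hard-margin dual
#   max  sum_i a_i - 1/2 sum_ij a_i a_j y_i y_j x_i^T x_j
#   s.t. a_i >= 0,  sum_i a_i y_i = 0
# with a generic QP solver, then recover w, b from the support vectors.
import numpy as np
from cvxopt import matrix, solvers

X = np.array([[2.0, 2.0], [2.5, 1.5], [0.0, 0.0], [-0.5, 0.5]])  # toy points
y = np.array([1.0, 1.0, -1.0, -1.0])                              # labels
n = len(y)

# cvxopt minimizes 1/2 a^T P a + q^T a, so negate the dual objective.
P = matrix(np.outer(y, y) * (X @ X.T) + 1e-8 * np.eye(n))  # tiny ridge for stability
q = matrix(-np.ones(n))
G = matrix(-np.eye(n))           # encodes a_i >= 0
h = matrix(np.zeros(n))
A = matrix(y.reshape(1, -1))     # encodes sum_i a_i y_i = 0
b = matrix(0.0)

solvers.options['show_progress'] = False
alphas = np.ravel(solvers.qp(P, q, G, h, A, b)['x'])

sv = alphas > 1e-6                                 # support vectors: a_i > 0
w = ((alphas * y)[:, None] * X).sum(axis=0)        # w = sum_i a_i y_i x_i
b0 = np.mean(y[sv] - X[sv] @ w)                    # b from the support vectors
print("support vectors:", np.where(sv)[0], " w =", w, " b =", b0)
```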

22 Solving the Optimization Problem
The linear discriminant function is:
g(x) = wT x + b = Σ(i ∈ SV) αi yi xiT x + b
Notice that it relies only on dot products between the test point x and the support vectors xi.
Also keep in mind that solving the optimization problem involved computing the dot products xiT xj between all pairs of training points.

23 Large Margin Linear Classifier
What if the data is not linearly separable (noisy data, outliers, etc.)?
Slack variables ξi can be added to allow mis-classification of difficult or noisy data points.

24 Large Margin Linear Classifier
Formulation (soft margin):
minimize  ½ ||w||² + C Σi ξi
such that  yi (wT xi + b) >= 1 - ξi, and ξi >= 0
The parameter C can be viewed as a way to control over-fitting: it trades off the margin width against the penalty for training errors.
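As an illustration of the role of C, a minimal sketch assuming scikit-learn and synthetic noisy data (neither is prescribed by the slides); smaller C tolerates more slack and keeps a wider margin, larger C penalizes training errors harder:

```python
# Train a linear soft-margin SVM for several values of C and report how
# many support vectors are used and the training accuracy.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)),   # class -1 cloud
               rng.normal(+1.0, 1.0, (50, 2))])  # class +1 cloud (overlapping)
y = np.array([-1] * 50 + [1] * 50)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(f"C={C:>6}: {clf.n_support_.sum()} support vectors, "
          f"train accuracy {clf.score(X, y):.2f}")
```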

25 Large Margin Linear Classifier
Formulation (Lagrangian Dual Problem):
maximize  Σi αi - ½ Σi Σj αi αj yi yj xiT xj
such that  0 <= αi <= C, and Σi αi yi = 0
The only change from the hard-margin dual is the upper bound C on each αi.

26 Non-linear SVMs
Datasets that are linearly separable (even with some noise) work out great.
But what are we going to do if the dataset is just too hard?
How about mapping the data to a higher-dimensional space, e.g. x → (x, x²)?

27 Non-linear SVMs: Feature Space
General idea: the original input space can be mapped to some higher-dimensional feature space where the training set is separable:
Φ: x → φ(x)

28 Nonlinear SVMs: The Kernel Trick
With this mapping, our discriminant function becomes:
g(x) = wT φ(x) + b = Σ(i ∈ SV) αi yi φ(xi)T φ(x) + b
There is no need to know this mapping explicitly, because we only use dot products of feature vectors in both training and testing.
A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space:
K(xi, xj) = φ(xi)T φ(xj)

29 Nonlinear SVMs: The Kernel Trick
An example: 2-dimensional vectors x = [x1 x2]; let K(xi, xj) = (1 + xiT xj)².
We need to show that K(xi, xj) = φ(xi)T φ(xj) for some mapping φ:
K(xi, xj) = (1 + xiT xj)²
= 1 + xi1² xj1² + 2 xi1 xj1 xi2 xj2 + xi2² xj2² + 2 xi1 xj1 + 2 xi2 xj2
= [1  xi1²  √2 xi1 xi2  xi2²  √2 xi1  √2 xi2]T [1  xj1²  √2 xj1 xj2  xj2²  √2 xj1  √2 xj2]
= φ(xi)T φ(xj),
where φ(x) = [1  x1²  √2 x1 x2  x2²  √2 x1  √2 x2].
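The identity can also be checked numerically; a minimal sketch assuming numpy and two arbitrary example vectors:

```python
# Verify that the polynomial kernel K(xi, xj) = (1 + xi^T xj)^2 equals the
# dot product of the explicit 6-dimensional feature maps phi(xi) and phi(xj).
import numpy as np

def phi(x):
    x1, x2 = x
    return np.array([1.0, x1**2, np.sqrt(2) * x1 * x2, x2**2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2])

xi, xj = np.array([1.0, 2.0]), np.array([3.0, -1.0])
K = (1 + xi @ xj) ** 2
print(K, phi(xi) @ phi(xj))   # both equal 4 (up to floating-point rounding)
```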

30 Nonlinear SVMs: The Kernel Trick
Examples of commonly-used kernel functions:
Linear kernel: K(xi, xj) = xiT xj
Polynomial kernel of power p: K(xi, xj) = (1 + xiT xj)^p
Gaussian (Radial-Basis Function, RBF) kernel: K(xi, xj) = exp(-||xi - xj||² / (2σ²))
Sigmoid kernel: K(xi, xj) = tanh(β0 xiT xj + β1)
In general, functions that satisfy Mercer’s condition can be kernel functions.
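One possible rendering of these kernels in code (the function names and default parameter values are illustrative, and numpy is assumed):

```python
# The four kernels listed above, each taking two 1-D numpy vectors.
import numpy as np

def linear_kernel(xi, xj):
    return xi @ xj

def polynomial_kernel(xi, xj, p=2):
    return (1 + xi @ xj) ** p

def gaussian_kernel(xi, xj, sigma=1.0):
    return np.exp(-np.sum((xi - xj) ** 2) / (2 * sigma ** 2))

def sigmoid_kernel(xi, xj, beta0=1.0, beta1=0.0):
    return np.tanh(beta0 * (xi @ xj) + beta1)
```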

31 Nonlinear SVM: Optimization
Formulation (Lagrangian Dual Problem):
maximize  Σi αi - ½ Σi Σj αi αj yi yj K(xi, xj)
such that  0 <= αi <= C, and Σi αi yi = 0
The solution of the discriminant function is:
g(x) = Σ(i ∈ SV) αi yi K(xi, x) + b
The optimization technique is the same as in the linear case.
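As a hedged consistency check of this form of g(x), assuming scikit-learn and a synthetic dataset (not part of the slides): for a trained binary SVC, the library's decision_function should match the sum over support vectors, since dual_coef_ stores the products αi·yi:

```python
# Recompute g(x) = sum_i (a_i y_i) K(x_i, x) + b by hand for an RBF SVM and
# compare it with scikit-learn's decision_function.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=100, noise=0.2, random_state=0)
gamma = 0.5
clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

x = X[:5]                                            # a few test points
# RBF kernel K(x_i, x) between every support vector and every test point
d2 = ((clf.support_vectors_[:, None, :] - x[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * d2)
manual = clf.dual_coef_ @ K + clf.intercept_         # sum_i a_i y_i K(x_i, x) + b
print(np.allclose(manual.ravel(), clf.decision_function(x)))  # True
```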

32 Support Vector Machine: Algorithm
1. Choose a kernel function.
2. Choose a value for C.
3. Solve the quadratic programming problem (many software packages are available).
4. Construct the discriminant function from the support vectors.
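The four steps above, sketched with scikit-learn's SVC as one of the "many software packages available"; the dataset, RBF kernel, gamma, and C values are illustrative choices rather than anything prescribed by the slides:

```python
# Steps 1-2: choose a kernel and C; step 3: fit() solves the QP;
# step 4: the discriminant is built from the resulting support vectors.
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

clf = SVC(kernel="rbf", gamma=1.0, C=1.0)   # steps 1-2
clf.fit(X, y)                               # step 3
print("number of support vectors:", clf.support_vectors_.shape[0])  # step 4
print("training accuracy:", clf.score(X, y))
```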

33 Some Issues
Choice of kernel:
- The Gaussian or polynomial kernel is the default; if it proves ineffective, more elaborate kernels are needed.
- Domain experts can give assistance in formulating appropriate similarity measures.
Choice of kernel parameters:
- e.g. σ in the Gaussian kernel; σ is roughly the distance between the closest points with different classifications.
- In the absence of reliable criteria, applications rely on a validation set or cross-validation to set such parameters.
Optimization criterion (hard margin vs. soft margin):
- Typically settled by a lengthy series of experiments in which various parameters are tested.
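A minimal sketch of the validation approach mentioned above, assuming scikit-learn's GridSearchCV and an arbitrary parameter grid (cross-validation picks C and the RBF width):

```python
# Use 5-fold cross-validation to choose C and gamma for an RBF SVM.
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```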

34 Summary: Support Vector Machine
1. Large Margin Classifier: better generalization ability and less over-fitting.
2. The Kernel Trick: map data points to a higher-dimensional space in order to make them linearly separable. Since only the dot product is used, we do not need to represent the mapping explicitly.

35 Additional Resource

36 Demo of LibSVM

