Presentation on theme: "An Introduction of Support Vector Machine Courtesy of Jinwei Gu."— Presentation transcript:

1 An Introduction of Support Vector Machine Courtesy of Jinwei Gu

2 Today: Support Vector Machine (SVM) A classifier derived from statistical learning theory by Vapnik et al. in 1992. SVM became famous when, using images as input, it gave accuracy comparable to neural networks with hand-designed features on a handwriting recognition task. Currently, SVMs are widely used in object detection and recognition, content-based image retrieval, text recognition, biometrics, speech recognition, etc.

3 Discriminant Function A discriminant function g(x) can be an arbitrary function of x, such as a decision tree, a linear function, or a nonlinear function.

4 Linear Discriminant Function g(x) is a linear function: g(x) = w^T x + b, where w and x are vectors. The decision boundary w^T x + b = 0 is a hyperplane in the n-dimensional feature space; points with w^T x + b > 0 fall on one side and points with w^T x + b < 0 on the other. The (unit-length) normal vector of the hyperplane is n = w / ||w||. In two dimensions the boundary is the line w_1 x_1 + w_2 x_2 + b = 0.
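A minimal sketch of how such a linear discriminant could be evaluated in code; the weight vector, bias, and test points are illustrative assumptions, not values from the slides:

```python
import numpy as np

# Illustrative parameters of a linear discriminant g(x) = w^T x + b
w = np.array([1.0, 2.0])
b = -3.0

def g(x):
    """Linear discriminant: positive on one side of the hyperplane, negative on the other."""
    return w @ x + b

for point in [np.array([2.0, 2.0]), np.array([0.5, 0.5])]:
    print(point, "-> class", int(np.sign(g(point))))   # +1 or -1 depending on the side
```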

5 Linear Discriminant Function How would you classify these points (labeled +1 and -1) using a linear discriminant function in order to minimize the error rate? There are an infinite number of answers!

6 Linear Discriminant Function How would you classify these points (labeled +1 and -1) using a linear discriminant function in order to minimize the error rate? There are an infinite number of answers!

7 Linear Discriminant Function How would you classify these points (labeled +1 and -1) using a linear discriminant function in order to minimize the error rate? There are an infinite number of answers!

8 Linear Discriminant Function How would you classify these points (labeled +1 and -1) using a linear discriminant function in order to minimize the error rate? There are an infinite number of answers! Which one is the best?

9 Large Margin Linear Classifier The linear discriminant function (classifier) with the maximum margin is the best. The margin is defined as the width by which the boundary could be increased before hitting a data point (a "safe zone" around the boundary). Why is it the best? It is robust to outliers and thus has strong generalization ability.

10 Large Margin Linear Classifier Given a set of data points {(x_i, y_i)}, i = 1, …, n, with labels y_i ∈ {+1, -1}, we want w^T x_i + b > 0 for y_i = +1 and w^T x_i + b < 0 for y_i = -1. With a scale transformation on both w and b, the above is equivalent to w^T x_i + b ≥ 1 for y_i = +1 and w^T x_i + b ≤ -1 for y_i = -1. (Note: the class labels are +1 and -1, NOT 1 and 0.)

11 Large Margin Linear Classifier We know that the points on the margin boundaries (the support vectors) satisfy w^T x+ + b = 1 and w^T x- + b = -1, where x+ and x- lie on the planes w^T x + b = 1 and w^T x + b = -1 on either side of the decision boundary w^T x + b = 0. The margin width is M = (x+ − x−) · n = (x+ − x−) · w / ||w|| = 2 / ||w||.
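A compact version of the margin derivation implied by this slide, written out step by step (a sketch using the two margin planes defined above):

```latex
% From the two margin hyperplanes:
%   w^T x_+ + b = +1  and  w^T x_- + b = -1
% Subtracting gives  w^T (x_+ - x_-) = 2.
% Projecting (x_+ - x_-) onto the unit normal n = w / ||w||:
M = (x_+ - x_-) \cdot \frac{\mathbf{w}}{\lVert \mathbf{w} \rVert}
  = \frac{\mathbf{w}^{T}(x_+ - x_-)}{\lVert \mathbf{w} \rVert}
  = \frac{2}{\lVert \mathbf{w} \rVert}
```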

12 Large Margin Linear Classifier Formulation: maximize the margin 2 / ||w|| such that w^T x_i + b ≥ 1 for y_i = +1 and w^T x_i + b ≤ -1 for y_i = -1. Remember: our objective is to find w and b such that the margin is maximized.

13 Large Margin Linear Classifier Equivalent formulation: minimize ½ ||w||² such that w^T x_i + b ≥ 1 for y_i = +1 and w^T x_i + b ≤ -1 for y_i = -1.

14 Large Margin Linear Classifier Equivalent formulation: minimize ½ ||w||² such that y_i (w^T x_i + b) ≥ 1 for all i (the two constraints are combined into one by multiplying through by the label y_i).

15 Solving the Optimization Problem Minimize ½ ||w||² s.t. y_i (w^T x_i + b) ≥ 1 for all i. This is quadratic programming with linear constraints. Introducing a Lagrange multiplier α_i ≥ 0 for each constraint gives the Lagrangian function L(w, b, α) = ½ ||w||² − Σ_i α_i [ y_i (w^T x_i + b) − 1 ].

16 Lagrangian Given a function f(x) and a set of constraints c_1, …, c_n, a Lagrangian is a function L(f, c_1, …, c_n, α_1, …, α_n) that "incorporates" the constraints into the optimization problem. The optimum is at a point where the Karush-Kuhn-Tucker (KKT) conditions hold: (1) the gradient of L with respect to x vanishes, (2) α_i ≥ 0, and (3) α_i c_i(x) = 0 for every i. The third condition is known as the complementarity condition.
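Written out, for inequality constraints of the form c_i(x) ≥ 0 (a sketch of the standard form the slide refers to):

```latex
L(x, \alpha) = f(x) - \sum_{i=1}^{n} \alpha_i \, c_i(x)
% KKT conditions at the optimum x^*:
\nabla_x L(x^*, \alpha) = 0      % (1) stationarity
\alpha_i \ge 0                   % (2) dual feasibility
\alpha_i \, c_i(x^*) = 0         % (3) complementarity
```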

17 Solving the Optimization Problem For the SVM primal, the Lagrangian is L(w, b, α) = ½ ||w||² − Σ_i α_i [ y_i (w^T x_i + b) − 1 ], s.t. α_i ≥ 0. While minimizing L with respect to w and b, we need to maximize the boxed term Σ_i α_i [ y_i (w^T x_i + b) − 1 ] with respect to α (the red box on the slide); this min-max leads to the dual problem.

18 Dual Problem Formulation Setting the derivatives of L with respect to w and b to zero gives w = Σ_i α_i y_i x_i and Σ_i α_i y_i = 0. Substituting these back into L yields the Lagrangian dual problem: maximize Q(α) = Σ_i α_i − ½ Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j s.t. α_i ≥ 0 and Σ_i α_i y_i = 0. Each non-zero α_i indicates that the corresponding x_i is a support vector (SV), because of complementarity condition (3).

19 Why only SVs? The complementarity condition (the third KKT condition, see the previous definition of the Lagrangian) implies that one of the two factors, α_i or y_i (w^T x_i + b) − 1, is zero. For non-SV x_i we have y_i (w^T x_i + b) > 1, so α_i must be zero. Therefore only data points on the margin (the SVs) can have a non-zero α_i, because they are the only ones for which y_i (w^T x_i + b) = 1.

20 Final Formulation To summarize: Find α_1, …, α_n such that Q(α) = Σ_i α_i − ½ Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j is maximized subject to (1) Σ_i α_i y_i = 0 and (2) α_i ≥ 0 for all i. Again, remember that constraint (2) can hold with strict inequality (α_i > 0) only for the SVs!
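A minimal numerical sketch of this dual problem on a toy dataset, using a general-purpose solver (SciPy's SLSQP) rather than a dedicated QP/SMO solver; the data, variable names, and tolerance are illustrative assumptions, not part of the slides:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data (illustrative only)
X = np.array([[1.0, 1.0], [2.0, 0.0], [2.0, 3.0], [3.0, 4.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
n = len(y)

# H[i, j] = y_i * y_j * x_i^T x_j
H = (y[:, None] * X) @ (y[:, None] * X).T

# Dual objective Q(alpha) = sum(alpha) - 0.5 * alpha^T H alpha (to be maximized)
def neg_Q(alpha):
    return 0.5 * alpha @ H @ alpha - alpha.sum()

constraints = [{"type": "eq", "fun": lambda a: a @ y}]   # (1) sum_i alpha_i y_i = 0
bounds = [(0.0, None)] * n                               # (2) alpha_i >= 0

res = minimize(neg_Q, np.zeros(n), method="SLSQP",
               bounds=bounds, constraints=constraints)
alpha = res.x
print("alpha =", np.round(alpha, 4))   # non-zero entries mark the support vectors
```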

21 The Optimization Problem Solution Given a solution α_1, …, α_n to the dual problem Q(α), the solution to the primal is w = Σ_i α_i y_i x_i and b = y_k − Σ_i α_i y_i x_i^T x_k for any k with α_k > 0. Each non-zero α_i indicates that the corresponding x_i is a support vector. The classifying function for a new point x is then g(x) = Σ_i α_i y_i x_i^T x + b, where the sum runs over the SVs (note that we do not need w explicitly). Notice that it relies on an inner product x_i^T x between the test point x and the support vectors x_i, so only the SVs are needed to classify new instances. Also keep in mind that solving the optimization problem involved computing the inner products x_i^T x_j between all pairs of training points.
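Continuing the toy sketch above (same assumed arrays X, y, and alpha), this is how w, b, and the classifying function g(x) could be recovered from the dual solution:

```python
# Recover the primal solution from the dual variables (continuing the sketch above)
sv = alpha > 1e-6                        # support vectors, up to a numerical tolerance
w = (alpha * y) @ X                      # w = sum_i alpha_i y_i x_i
k = np.argmax(sv)                        # any index k with alpha_k > 0
b = y[k] - (alpha * y) @ (X @ X[k])      # b = y_k - sum_i alpha_i y_i x_i^T x_k

def g(x_new):
    """Classifying function g(x) = sum_i alpha_i y_i x_i^T x + b (only SVs contribute)."""
    return (alpha * y) @ (X @ x_new) + b

print("w =", np.round(w, 3), " b =", round(b, 3))
print("sign of g([2, 2]):", np.sign(g(np.array([2.0, 2.0]))))
```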

22 Working example A 3-point training set. The maximum-margin weight vector w will be parallel to the shortest line connecting points of the two classes, that is, the line between (1,1) and (2,3) (the two support vectors).

23 Margin computation The support vectors lie on the margin planes: w^T x1 + b = +1 and w^T x2 + b = -1.
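A sketch of the computation these two equations lead to, assuming (2,3) is the positive support vector x1 and (1,1) the negative support vector x2 (the label assignment is an assumption; the slide only names the two SVs):

```latex
% w is parallel to the line from (1,1) to (2,3), so w = a\,(1,2) for some a > 0.
% Margin-plane equations:
%   w^T (2,3) + b = +1  =>  2a + 6a + b = +1  =>  8a + b = 1
%   w^T (1,1) + b = -1  =>   a + 2a + b = -1  =>  3a + b = -1
% Subtracting: 5a = 2, hence a = 2/5 and b = -11/5.
\mathbf{w} = \left(\tfrac{2}{5}, \tfrac{4}{5}\right), \qquad b = -\tfrac{11}{5}, \qquad
\text{margin} = \frac{2}{\lVert \mathbf{w} \rVert} = \frac{2}{2/\sqrt{5}} = \sqrt{5}
```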

24 Soft Margin Classification What if the training set is not linearly separable? Slack variables ξ_i can be added to allow misclassification of difficult or noisy examples; the resulting margins are called soft. The points x_i with ξ_i > 0 are the examples that violate the margin or are misclassified.

25 Large Margin Linear Classifier Soft-margin formulation: minimize ½ ||w||² + C Σ_i ξ_i such that y_i (w^T x_i + b) ≥ 1 − ξ_i and ξ_i ≥ 0 for all i. The parameter C can be viewed as a way to control over-fitting: it trades off margin width against training errors.
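A small hedged illustration of the role of C using scikit-learn's SVC (which wraps LIBSVM, mentioned at the end of the deck); the dataset and the particular values of C are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two noisy, overlapping Gaussian blobs (not perfectly separable)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    # Small C -> wide, tolerant margin (many SVs); large C -> narrow, strict margin
    print(f"C={C:>6}: {clf.n_support_.sum()} support vectors, "
          f"train accuracy {clf.score(X, y):.2f}")
```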

26 Large Margin Linear Classifier Formulation (Lagrangian dual problem): maximize Q(α) = Σ_i α_i − ½ Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j such that 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0. The only change from the hard-margin dual is the upper bound C on each α_i.

27 Non-linear SVMs Datasets that are linearly separable, even with some noise, work out great. But what are we going to do if the dataset is just too hard to separate with a line? How about mapping the data to a higher-dimensional space, e.g. mapping a 1-D input x to the pair (x, x²)?
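A tiny sketch of the idea pictured on this slide, assuming the classic 1-D example where the positive class sits between the negatives; the data values and the separating threshold are illustrative:

```python
import numpy as np

# 1-D data that no single threshold on x can separate:
# negatives on the outside, positives in the middle
x = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = np.array([-1,   -1,    1,   1,   1,  -1,  -1])

# Map each point to a higher-dimensional feature space: x -> (x, x^2)
phi = np.column_stack([x, x ** 2])

# In (x, x^2) space the classes are separable by a horizontal line, e.g. x^2 = 2.5
w, b = np.array([0.0, -1.0]), 2.5            # hyperplane: -x^2 + 2.5 = 0
pred = np.sign(phi @ w + b)
print("all correct:", np.all(pred == y))     # True: the mapped data is linearly separable
```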

28 Non-linear SVMs: Feature Space General idea: the original input space can be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x).

29 The original objects (left side of the schematic) are mapped, i.e., rearranged, using a set of mathematical functions known as kernels, into the feature space φ(x).

30 Nonlinear SVMs: The Kernel Trick With this mapping, our discriminant function becomes g(x) = Σ_i α_i y_i φ(x_i)^T φ(x) + b. There is no need to know the mapping φ explicitly, because we only use dot products of feature vectors, both in training and in testing. A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space: K(x_i, x_j) = φ(x_i)^T φ(x_j). (The original linear formulation was g(x) = Σ_i α_i y_i x_i^T x + b.)
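A hedged sketch of what the kernelized decision function looks like in code; alpha_sv, y_sv, X_sv, and b are assumed to come from an already-solved kernelized dual, and the RBF kernel here is just one illustrative choice:

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel K(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def decision_function(x_new, X_sv, y_sv, alpha_sv, b, kernel=rbf_kernel):
    """g(x) = sum_i alpha_i y_i K(x_i, x) + b, summed over the support vectors."""
    return sum(a * yi * kernel(xi, x_new)
               for a, yi, xi in zip(alpha_sv, y_sv, X_sv)) + b
```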

31 Nonlinear SVMs: The Kernel Trick An example with 2-dimensional vectors x = [x_1, x_2]. Let K(x_i, x_j) = (1 + x_i^T x_j)². We need to show that K(x_i, x_j) = φ(x_i)^T φ(x_j): K(x_i, x_j) = (1 + x_i^T x_j)² = 1 + x_i1² x_j1² + 2 x_i1 x_j1 x_i2 x_j2 + x_i2² x_j2² + 2 x_i1 x_j1 + 2 x_i2 x_j2 = [1, x_i1², √2 x_i1 x_i2, x_i2², √2 x_i1, √2 x_i2]^T [1, x_j1², √2 x_j1 x_j2, x_j2², √2 x_j1, √2 x_j2] = φ(x_i)^T φ(x_j), where φ(x) = [1, x_1², √2 x_1 x_2, x_2², √2 x_1, √2 x_2].
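A quick numerical check of this identity (the specific test vectors are arbitrary):

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 polynomial kernel (1 + x_i^T x_j)^2."""
    x1, x2 = x
    return np.array([1.0, x1 ** 2, np.sqrt(2) * x1 * x2, x2 ** 2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2])

xi, xj = np.array([1.0, 2.0]), np.array([3.0, -1.0])
K = (1.0 + xi @ xj) ** 2
print(K, phi(xi) @ phi(xj))                  # both print 4.0
assert np.isclose(K, phi(xi) @ phi(xj))
```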

32 Nonlinear SVMs: The Kernel Trick Examples of commonly used kernel functions: Linear kernel: K(x_i, x_j) = x_i^T x_j. Polynomial kernel of degree p: K(x_i, x_j) = (1 + x_i^T x_j)^p. Gaussian (Radial Basis Function, RBF) kernel: K(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²)). Sigmoid kernel: K(x_i, x_j) = tanh(β₀ x_i^T x_j + β₁). In general, functions that satisfy Mercer's condition can be kernel functions.

33 Nonlinear SVM: Optimization Formulation (Lagrangian dual problem): maximize Q(α) = Σ_i α_i − ½ Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j) such that 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0. The solution for the discriminant function is g(x) = Σ_i α_i y_i K(x_i, x) + b, with the sum running over the SVs. The optimization technique is the same as in the linear case.

34 Support Vector Machine: Algorithm 1. Choose a kernel function 2. Choose a value for C 3. Solve the quadratic programming problem (many software packages available) 4. Construct the discriminant function from the support vectors
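A minimal end-to-end sketch of these four steps using scikit-learn's SVC (a wrapper around LIBSVM); the dataset, kernel choice, and value of C are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Illustrative 2-class data: one class inside a disc, one outside
X = rng.uniform(-2, 2, size=(200, 2))
y = np.where(np.linalg.norm(X, axis=1) < 1.2, 1, -1)

# Steps 1-3: choose a kernel, choose C, and solve the QP (done internally by LIBSVM)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)

# Step 4: the discriminant function is built from the support vectors
print("number of support vectors:", clf.n_support_.sum())
print("g([0, 0]) =", clf.decision_function([[0.0, 0.0]])[0])   # positive => class +1
print("predicted class at origin:", clf.predict([[0.0, 0.0]])[0])
```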

35 Some Issues Choice of kernel: a Gaussian or polynomial kernel is the default; if it is ineffective, more elaborate kernels are needed, and domain experts can give assistance in formulating appropriate similarity measures. Choice of kernel parameters: e.g. σ in the Gaussian kernel (one heuristic sets σ to the distance between the closest points with different classifications); in the absence of reliable criteria, applications rely on a validation set or cross-validation to set such parameters. Optimization criterion (hard margin vs. soft margin): choosing it can require a lengthy series of experiments in which various parameters are tested.
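A brief sketch of the cross-validation approach mentioned above, using scikit-learn's GridSearchCV; the parameter grid and data are illustrative assumptions (note that scikit-learn's RBF kernel is parameterized by gamma, which plays the role of 1/(2σ²), rather than by σ directly):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(2, 1, (60, 2))])
y = np.array([-1] * 60 + [+1] * 60)

# Search over the soft-margin parameter C and the RBF width parameter gamma
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)

print("best parameters:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```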

36 Summary: Support Vector Machine 1. Large Margin Classifier: better generalization ability and less over-fitting. 2. The Kernel Trick: map data points to a higher-dimensional space in order to make them linearly separable; since only the dot product is used, we do not need to represent the mapping explicitly.

37 Additional Resource

38 LibSVM (best implementation for SVM)

