
1 Support Vector Machines. Reading: Textbook, Chapter 5; Ben-Hur and Weston, A User’s Guide to Support Vector Machines (linked from class web page)

2 Notation. Assume a binary classification problem.
– Instances are represented by vectors $\mathbf{x} \in \mathbb{R}^n$: $\mathbf{x} = (x_1, x_2, \ldots, x_n)$.
– Training examples: $S = \{(\mathbf{x}_1, y_1), (\mathbf{x}_2, y_2), \ldots, (\mathbf{x}_m, y_m)\}$, where $(\mathbf{x}_i, y_i) \in \mathbb{R}^n \times \{+1, -1\}$.
– Hypothesis: a function $h: \mathbb{R}^n \rightarrow \{+1, -1\}$, with $h(\mathbf{x}) = h(x_1, x_2, \ldots, x_n) \in \{+1, -1\}$.
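As a concrete illustration (a minimal sketch; the data values are made up, and the linear decision rule anticipates the hyperplanes introduced on the next slide), this notation can be encoded in NumPy:

```python
# Hypothetical encoding of the notation above: m = 4 training examples
# in R^2, labels in {+1, -1}, and a linear hypothesis h(x) = sgn(w.x + b).
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # x_i in R^n
y = np.array([1, 1, -1, -1])                                      # y_i in {+1, -1}

def h(x, w, b):
    # A hypothesis h: R^n -> {+1, -1}
    return 1 if np.dot(w, x) + b >= 0 else -1
```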

3 Here, assume positive and negative instances are to be separated by the hyperplane $\mathbf{w} \cdot \mathbf{x} + b = 0$, where $b$ is the bias. Equation of the line in two dimensions: $w_1 x_1 + w_2 x_2 + b = 0$. [Figure: positive and negative points in the $(x_1, x_2)$ plane, separated by a line.]

4 Intuition: the best hyperplane (for future generalization) will “maximally” separate the examples

5 Definition of Margin. The margin of a separating hyperplane is the distance from the hyperplane to the closest training example(s) — equivalently, the width of the largest region around the hyperplane containing no training points. [Figure lost in transcription.]

6 Vapnik (1979) showed that the hyperplane maximizing the margin of S will have minimal VC dimension in the set of all consistent hyperplanes, and will thus be optimal.

7 Definition of Margin (cont.). Finding this margin-maximizing hyperplane is an optimization problem!

8 Note that the hyperplane is defined as $\mathbf{w} \cdot \mathbf{x} + b = 0$. To make the math easier, we will rescale $\mathbf{w}$ and $b$ so that the hyperplane is halfway in between the closest positive and negative examples, with $\mathbf{w} \cdot \mathbf{x} + b = +1$ for the closest positive example(s) and $\mathbf{w} \cdot \mathbf{x} + b = -1$ for the closest negative example(s).

9 Geometry of the Margin. [Figure from Hearst et al. 1998, www.svms.org/tutorials/Hearst-etal1998.pdf: the separating hyperplane with margin boundaries passing through the closest positive and negative examples.]

10 In this case, the margin — the distance between the two boundary hyperplanes $\mathbf{w} \cdot \mathbf{x} + b = \pm 1$ — is $2 / \|\mathbf{w}\|$. So to maximize the margin, we need to minimize $\|\mathbf{w}\|$.

11 Minimizing ||w||. Find $\mathbf{w}$ and $b$ by doing the following minimization: minimize $\frac{1}{2}\|\mathbf{w}\|^2$ subject to $y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \geq 1$ for all $i$. This is a quadratic optimization problem. Use “standard optimization tools” to solve it.
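For illustration, here is a minimal sketch of this minimization using SciPy’s general-purpose SLSQP solver as a stand-in for the “standard optimization tools” (the toy data is hypothetical; real SVM packages use specialized QP solvers):

```python
# Sketch: solve  min (1/2)||w||^2  s.t.  y_i (w . x_i + b) >= 1
# with a generic constrained optimizer. Data below is hypothetical.
import numpy as np
from scipy.optimize import minimize

X = np.array([[1.0, 1.0], [2.0, 2.5], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def objective(params):
    w = params[:-1]            # last entry of params is the bias b
    return 0.5 * np.dot(w, w)  # (1/2)||w||^2

def margin_constraints(params):
    w, b = params[:-1], params[-1]
    return y * (X @ w + b) - 1.0   # each entry must be >= 0

result = minimize(objective,
                  x0=np.zeros(X.shape[1] + 1),
                  method="SLSQP",
                  constraints=[{"type": "ineq", "fun": margin_constraints}])
w, b = result.x[:-1], result.x[-1]
print("w =", w, "b =", b)
```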

12 Dual formulation: it turns out that $\mathbf{w}$ can be expressed as a linear combination of a small subset of the training examples $\mathbf{x}_i$ — those that lie exactly on the margin (at minimum distance to the hyperplane): $\mathbf{w} = \sum_i \alpha_i \mathbf{x}_i$, where $\alpha_i \neq 0$ only for the $\mathbf{x}_i$ that lie exactly on the margin. (In the convention used in this deck, the class sign $y_i$ is folded into $\alpha_i$, so $\alpha_i$ can be negative.) These training examples are called “support vectors”. They carry all relevant information about the classification problem.

13 The results of the SVM training algorithm (involving solving a quadratic programming problem) are the $\alpha_i$ and the bias $b$. The support vectors are all $\mathbf{x}_i$ such that $\alpha_i \neq 0$.

14 For a new example $\mathbf{x}$, we can now classify it using the support vectors: $h(\mathbf{x}) = \operatorname{sgn}\left(\sum_i \alpha_i (\mathbf{x}_i \cdot \mathbf{x}) + b\right)$. This is the resulting SVM classifier.

15 In-class exercise

16 SVM review. Equation of line: $w_1 x_1 + w_2 x_2 + b = 0$. Define the margin using the constraints $y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \geq 1$. Margin distance: $2 / \|\mathbf{w}\|$. To maximize the margin, we minimize $\|\mathbf{w}\|$ subject to the constraint that positive examples fall on one side of the margin, and negative examples on the other side: $y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \geq 1$ for all $i$. We can relax this constraint using “slack variables” $\xi_i \geq 0$: $y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \geq 1 - \xi_i$.

17 SVM review. To do the optimization, we use the dual formulation: maximize $\sum_i \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j (\mathbf{x}_i \cdot \mathbf{x}_j)$ subject to $\alpha_i \geq 0$ and $\sum_i \alpha_i y_i = 0$ (standard form; folding $y_i$ into $\alpha_i$ gives the signed coefficients used in the example below). The results of the optimization “black box” are the $\alpha_i$ and $b$. The support vectors are all $\mathbf{x}_i$ such that $\alpha_i \neq 0$.
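As one concrete “black box” (a sketch assuming scikit-learn is available), SVC with a linear kernel returns exactly these quantities; its dual_coef_ attribute holds $y_i \alpha_i$, i.e. the alphas with the class sign folded in, matching the convention here:

```python
# Sketch of the optimization "black box" via scikit-learn's SVC.
import numpy as np
from sklearn.svm import SVC

# Training data taken from the worked example on the slides below.
X = np.array([[1, 1], [1, 2], [2, 1], [-1, 0], [0, -1], [-1, -1]])
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6)  # very large C approximates a hard margin
clf.fit(X, y)
print("support vectors:\n", clf.support_vectors_)
print("signed alphas:", clf.dual_coef_)
print("b:", clf.intercept_)
```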

18 SVM review. Once the optimization is done, we can classify a new example $\mathbf{x}$ as follows: $h(\mathbf{x}) = \operatorname{sgn}\left(\sum_i \alpha_i (\mathbf{x}_i \cdot \mathbf{x}) + b\right)$. That is, classification is done entirely through a linear combination of dot products with training examples. This is a “kernel” method.

19 Example. [Figure: six training points in the $(x_1, x_2)$ plane, axes from −2 to 2: positives at (1, 1), (1, 2), (2, 1); negatives at (−1, 0), (0, −1), (−1, −1).]

20 Example. Input to SVM optimizer:

x₁   x₂   class
 1    1    +1
 1    2    +1
 2    1    +1
−1    0    −1
 0   −1    −1
−1   −1    −1

21 Example. Input to SVM optimizer: as above. Output from SVM optimizer:

Support vector    α
(−1, 0)          −0.208
(1, 1)            0.416
(0, −1)          −0.208

b = −0.376

22 Example (continued). Weight vector: $\mathbf{w} = \sum_i \alpha_i \mathbf{x}_i = -0.208\,(-1, 0) + 0.416\,(1, 1) - 0.208\,(0, -1) = (0.624, 0.624)$.

23 Example (continued). Separation line: $0.624\,x_1 + 0.624\,x_2 - 0.376 = 0$, i.e. $x_2 = -x_1 + 0.603$.

24 Example (continued). Classifying a new point $\mathbf{x}$: compute $h(\mathbf{x}) = \operatorname{sgn}\left(\sum_i \alpha_i (\mathbf{x}_i \cdot \mathbf{x}) + b\right)$ over the three support vectors above. [Figure: the training points and separating line, with the new point to be classified.]
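A minimal sketch of this computation in NumPy, using the support vectors, signed α’s, and bias reported above (the test point (2, 2) is a hypothetical choice, not taken from the slides):

```python
# Classify a new point using the support vectors from the example.
import numpy as np

support_vectors = np.array([[-1, 0], [1, 1], [0, -1]], dtype=float)
alphas = np.array([-0.208, 0.416, -0.208])  # class signs folded in
b = -0.376

def classify(x):
    # h(x) = sgn( sum_i alpha_i * (x_i . x) + b )
    return int(np.sign(alphas @ (support_vectors @ x) + b))

print(classify(np.array([2.0, 2.0])))  # prints 1: the positive side of the line
```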

25 Non-linearly separable training examples What if the training examples are not linearly separable?

26 Non-linearly separable training examples. What if the training examples are not linearly separable? Use an old trick: find a function that maps points to a higher-dimensional space (“feature space”) in which they are linearly separable, and do the classification in that higher-dimensional space.

27 Need to find a function $\Phi$ that will perform such a mapping: $\Phi: \mathbb{R}^n \rightarrow F$. Then we can find a hyperplane in the higher-dimensional feature space $F$, and do classification using that hyperplane. [Figure: points in the $(x_1, x_2)$ plane mapped to a space with axes $x_1, x_2, x_3$.]

28 Example. Find a 3-dimensional feature space in which XOR is linearly separable. [Figure: the XOR points (0, 1), (1, 1), (0, 0), (1, 0) in the $(x_1, x_2)$ plane, alongside empty axes $x_1, x_2, x_3$.]
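For reference, here is one mapping that works (an illustrative answer; many other choices exist):

$$\Phi(x_1, x_2) = (x_1,\ x_2,\ x_1 x_2),$$

which sends the XOR points to $(0,0,0)$, $(0,1,0)$, $(1,0,0)$, $(1,1,1)$. In this feature space the plane $-2x_1 - 2x_2 + 4x_3 + 1 = 0$ separates the two classes: the left-hand side evaluates to $+1$ on $(0,0)$ and $(1,1)$, and to $-1$ on $(0,1)$ and $(1,0)$.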

29 Problem:
– Recall that classification of an instance $\mathbf{x}$ is expressed in terms of dot products of $\mathbf{x}$ and support vectors.
– The quadratic programming problem of finding the support vectors and coefficients likewise depends only on dot products between training examples; the training examples never appear outside of dot products.

30 – So if each $\mathbf{x}_i$ is replaced by $\Phi(\mathbf{x}_i)$ in these procedures, we will have to calculate $\Phi(\mathbf{x}_i)$ for each $i$, as well as calculate a lot of dot products $\Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{x}_j)$.
– But in general, if the feature space $F$ is high-dimensional, $\Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{x}_j)$ will be expensive to compute.

31 Second trick:
– Suppose there were some magic function $k(\mathbf{x}_i, \mathbf{x}_j) = \Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{x}_j)$ such that $k$ is cheap to compute even though $\Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{x}_j)$ is expensive to compute.
– Then we wouldn’t need to compute the dot product directly; we’d just need to compute $k$ during both the training and testing phases.
– The good news is: such $k$ functions exist! They are called “kernel functions”, and come from the theory of integral operators.

32 Example: polynomial kernel. Suppose $\mathbf{x} = (x_1, x_2)$ and $\mathbf{z} = (z_1, z_2)$. Then
$k(\mathbf{x}, \mathbf{z}) = (\mathbf{x} \cdot \mathbf{z})^2 = (x_1 z_1 + x_2 z_2)^2 = x_1^2 z_1^2 + 2 x_1 z_1 x_2 z_2 + x_2^2 z_2^2 = \Phi(\mathbf{x}) \cdot \Phi(\mathbf{z})$,
where $\Phi(\mathbf{x}) = (x_1^2,\ \sqrt{2}\, x_1 x_2,\ x_2^2)$. So $k$ computes the feature-space dot product without ever forming $\Phi(\mathbf{x})$.
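A quick numerical check of this identity (the sample vectors are arbitrary):

```python
# Verify that k(x, z) = (x . z)^2 equals the dot product in the
# feature space Phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2).
import numpy as np

def k(x, z):
    return np.dot(x, z) ** 2

def phi(x):
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
print(k(x, z))                 # kernel value: 1.0
print(np.dot(phi(x), phi(z)))  # explicit feature-space dot product: also 1.0
```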

33 Example. Does the degree-2 polynomial kernel make XOR linearly separable? [Figure: the XOR points (0, 1), (1, 1), (0, 0), (1, 0) in the $(x_1, x_2)$ plane.]

34 Recap of SVM algorithm. Given training set $S = \{(\mathbf{x}_1, y_1), (\mathbf{x}_2, y_2), \ldots, (\mathbf{x}_m, y_m)\}$, $(\mathbf{x}_i, y_i) \in \mathbb{R}^n \times \{+1, -1\}$:
1. Choose a map $\Phi: \mathbb{R}^n \rightarrow F$, which maps $\mathbf{x}_i$ to a higher-dimensional feature space. (Solves the problem that $S$ might not be linearly separable in the original space.)
2. Choose a cheap-to-compute kernel function $k(\mathbf{x}, \mathbf{z}) = \Phi(\mathbf{x}) \cdot \Phi(\mathbf{z})$. (Solves the problem that in high-dimensional spaces, dot products are very expensive to compute.)

35 3. Apply the quadratic programming procedure (using the kernel function $k$) to find a hyperplane $(\mathbf{w}, b)$ with $\mathbf{w} = \sum_i \alpha_i \Phi(\mathbf{x}_i)$, where the $\Phi(\mathbf{x}_i)$’s are support vectors, the $\alpha_i$’s are coefficients, and $b$ is a bias, such that $(\mathbf{w}, b)$ is the hyperplane maximizing the margin of the training set in feature space $F$.

36 4. Now, given a new instance $\mathbf{x}$, find the classification of $\mathbf{x}$ by computing $h(\mathbf{x}) = \operatorname{sgn}\left(\sum_i \alpha_i\, k(\mathbf{x}_i, \mathbf{x}) + b\right)$.
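A hedged sketch of this kernelized classification step; the support vectors, signed α’s, and bias below are hypothetical placeholders standing in for the output of step 3:

```python
# Kernelized classification: h(x) = sgn( sum_i alpha_i * k(x_i, x) + b ).
import numpy as np

def k(x, z):
    return np.dot(x, z) ** 2  # degree-2 polynomial kernel, as above

support_vectors = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # hypothetical
alphas = np.array([-0.5, -0.5, 1.0])                              # hypothetical
b = 0.0                                                           # hypothetical

def classify(x):
    s = sum(a * k(sv, x) for a, sv in zip(alphas, support_vectors))
    return int(np.sign(s + b))

print(classify(np.array([1.0, 1.0])))  # prints 1 for these placeholder values
```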


40 HW 2

