
Slide 1: Today's Topics (CS 540 - Fall 2015, Shavlik, Lecture 22, Week 10, 11/10/15)
Support Vector Machines (SVMs) - Three Key Ideas
– Max Margins
– Allowing Misclassified Training Examples
– Kernels (for non-linear models; covered in the next lecture)

Slide 2: Three Key SVM Concepts
– Maximize the Margin: don't choose just any separating plane
– Penalize Misclassified Examples: use soft constraints and 'slack' variables
– Use the 'Kernel Trick' to get Non-Linearity: roughly like 'hardwiring' the input-to-hidden-unit portion of an ANN (so only a perceptron is needed)

Slide 3: Support Vector Machines - Maximizing the Margin between Bounding Planes
[Figure: two parallel bounding planes through the support vectors, with the separating plane between them; the width of the margin is 2 / ||w||_2]
SVMs define some inequalities we want satisfied. We then use optimization methods (e.g., linear programming) to find a satisfying solution, but in CS 540 we'll use a simpler approximation.

Slide 4: Margins and Learning Theory
Theorems exist that connect ('PAC') learning theory to the size of the margin
– Basically, the larger the margin, the better the expected future accuracy
– See, for example, Chapter 4 of Support Vector Machines by N. Cristianini & J. Shawe-Taylor, Cambridge University Press, 2000 (not an assigned reading)

Slide 5: Dealing with Data that is not Linearly Separable
[Figure: examples on both sides of the separating plane, with the support vectors and the 'slack' variables of the misclassified points labeled]
For each wrong example we pay a penalty: the distance we'd have to move it to get it on the right side of the decision boundary (i.e., the separating plane).
If we deleted any/all of the non-support vectors, we'd get the same answer!

Slide 6: SVMs and Non-Linear Separating Surfaces
[Figure: '+' and '-' examples plotted in the original (f1, f2) space, and again in a new (h(f1, f2), g(f1, f2)) space where a straight line separates them]
– Non-linearly map to a new space
– Linearly separate in the new space
– The result is a non-linear separator in the original space
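
To make the mapping idea concrete, here is a minimal Python sketch (mine, not from the slides): XOR-style data that no single line separates in the original (f1, f2) space becomes linearly separable after a hand-picked non-linear map; the particular h and g below are assumptions chosen only for illustration.

    import numpy as np

    # Four XOR-style points: the label is '+' exactly when one of the two features is 1
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    y = np.array([-1.0, 1.0, 1.0, -1.0])

    # Hand-picked non-linear map (an assumed example): h = f1 + f2, g = f1 * f2
    H = np.column_stack([X[:, 0] + X[:, 1], X[:, 0] * X[:, 1]])

    # In the new (h, g) space the single line  h - 3*g = 0.5  separates the classes,
    # even though no line in the original (f1, f2) space can
    w, theta = np.array([1.0, -3.0]), 0.5
    print(np.sign(H @ w - theta))     # [-1.  1.  1. -1.], matching y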

Slide 7: Math Review - Dot Products
X · Y ≡ X1·Y1 + X2·Y2 + … + Xn·Yn
So if X = [4, 5, -3, 7] and Y = [9, 0, -8, 2]
then X · Y = 4·9 + 5·0 + (-3)·(-8) + 7·2 = 74
(weighted sums in ANNs are dot products)
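
A quick check of this arithmetic in Python (numpy assumed; not part of the original slide):

    import numpy as np

    X = np.array([4, 5, -3, 7])
    Y = np.array([9, 0, -8, 2])
    print(np.dot(X, Y))                          # 74
    print(sum(x * y for x, y in zip(X, Y)))      # the same sum written out term by term: 74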

Slide 8: Some Equations
[Figure: '+' and '-' examples on either side of the separating plane (green line), with dashed bounding planes through the closest examples of each class]
For all positive examples:  w · x ≥ θ + 1
For all negative examples:  w · x ≤ θ - 1
where w = weights, x = input features, θ = threshold
These 1's result from dividing through by a constant for convenience (the 1 is the distance from the dashed lines to the green line).

Slide 9: Idea #1 - The Margin (derivation not on final)
Let x_A be a point on the '+' bounding plane and x_B a point on the '-' bounding plane, chosen so that x_A - x_B is perpendicular to the planes. (The green line is the set of all points satisfying (i); ditto the red line for (ii).)
(i)   w · x_A = θ + 1
(ii)  w · x_B = θ - 1
(iii) Subtracting (ii) from (i) gives  w · (x_A - x_B) = 2
(iv)  Since the planes are parallel,  x_A - x_B = margin · w / ||w||_2
Combining (iii) and (iv) we get  margin = 2 / ||w||_2
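
A small numeric sanity check of this result (my sketch, with an arbitrary made-up w and θ): starting from a point on the '-' plane and stepping 2/||w||_2 along the unit normal lands exactly on the '+' plane.

    import numpy as np

    w = np.array([3.0, 4.0])                      # arbitrary example weights, ||w||_2 = 5
    theta = 2.0

    x_B = np.array([(theta - 1) / w[0], 0.0])     # a point on the '-' plane: w.x_B = theta - 1
    margin = 2.0 / np.linalg.norm(w)              # claimed margin: 0.4
    x_A = x_B + margin * (w / np.linalg.norm(w))  # step that far along the unit normal
    print(np.dot(w, x_A), theta + 1)              # both ~3.0, so x_A lies on the '+' plane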

Slide 10: Our Initial 'Mathematical Program'
min_{w, θ}  ||w||_1
(this is the '1-norm' length of the weight vector, i.e., the sum of the absolute values of the weights; some SVMs use quadratic programs, but 1-norms have some preferred properties)
such that
  w · x_pos ≥ θ + 1   // for '+' examples
  w · x_neg ≤ θ - 1   // for '-' examples
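
For concreteness, a tiny sketch (made-up data and a made-up candidate (w, θ), not from the slides) of what the two constraint sets say:

    import numpy as np

    pos = np.array([[2.0, 2.0], [3.0, 1.0]])      # '+' examples (made up)
    neg = np.array([[0.0, 0.0], [-1.0, 1.0]])     # '-' examples (made up)
    w, theta = np.array([1.0, 1.0]), 2.0          # one candidate separator, ||w||_1 = 2

    print(pos @ w >= theta + 1)   # [ True  True ]   w.x_pos >= theta + 1
    print(neg @ w <= theta - 1)   # [ True  True ]   w.x_neg <= theta - 1
    # The program above would search over all such (w, theta) for the smallest ||w||_1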

Slide 11: The 'p' Norm - a Generalization of the Familiar Euclidean Distance (p = 2)
||x||_p = ( Σ_i |x_i|^p )^(1/p)
– p = 1 gives the sum of absolute values (the '1-norm' used above)
– p = 2 gives the ordinary Euclidean length
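
A short Python sketch of the p-norm (numpy assumed), reusing the vector from the dot-product slide:

    import numpy as np

    x = np.array([4.0, 5.0, -3.0, 7.0])
    print(np.linalg.norm(x, 1))                  # 1-norm: 19.0 (sum of absolute values)
    print(np.linalg.norm(x, 2))                  # 2-norm (Euclidean): ~9.95
    p = 3
    print(np.sum(np.abs(x) ** p) ** (1.0 / p))   # general p-norm straight from the definition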

Slide 12: Our Mathematical Program (cont.)
Note: w and θ are our adjustable parameters (we could, of course, use the ANN 'trick' and move θ to the left side of our inequalities and treat it as another weight).
We can now use existing mathematical-programming optimization software to find a solution to our current program (covered in CS 525).
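
A minimal sketch of that ANN 'trick' (my illustration, with made-up numbers): append a constant -1 input to every example so that θ becomes just another weight.

    import numpy as np

    X = np.array([[2.0, 2.0], [0.0, 0.0]])          # two made-up example inputs
    w, theta = np.array([1.0, 1.0]), 2.0

    X_aug = np.hstack([X, -np.ones((len(X), 1))])   # add a constant -1 input feature
    w_aug = np.append(w, theta)                     # theta now rides along as the last weight

    print(X @ w - theta)      # [ 2. -2.]
    print(X_aug @ w_aug)      # same numbers: w.x - theta == w_aug . x_aug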

Slide 13: Idea #2 - Dealing with Non-Separable Data
We can add what is called a 'slack' variable to each example. This variable can be viewed as
– 0 if the example is correctly separated
– otherwise, the 'distance' we need to move the example to get it correct (i.e., its distance from the decision boundary)
Note: we are NOT counting the number of misclassified examples; it would be nice to do so, but that becomes [mixed] integer programming, which is much harder.
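
A small sketch (mine, with made-up numbers) of how such a slack value could be computed for a '+' example, measured in the same w·x units the constraints use:

    import numpy as np

    w, theta = np.array([1.0, 1.0]), 2.0
    x_wrong = np.array([1.0, 0.5])    # a '+' example on the wrong side of its bounding plane
    x_ok    = np.array([3.0, 1.0])    # a '+' example that is correctly separated

    # Slack: how far w.x falls short of the required theta + 1 (0 if already satisfied)
    print(max(0.0, (theta + 1) - np.dot(w, x_wrong)))   # 1.5
    print(max(0.0, (theta + 1) - np.dot(w, x_ok)))      # 0.0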

Slide 14: The Math Program with Slack Variables
(this is the linear-programming version; there is also a quadratic-programming version - in CS 540 we won't worry about the difference)
min_{w, S, θ}  ||w||_1 + μ ||S||_1
such that
  w · x_pos_i + S_i ≥ θ + 1   // for each '+' example i
  w · x_neg_j - S_j ≤ θ - 1   // for each '-' example j
  S_k ≥ 0                     // for every example k
Here w has dimension = # of input features, S has dimension = # of training examples, θ is a scalar, and μ is a scaling constant (use a tuning set to select its value).
The S's are how far we would need to move an example in order for it to be on the proper side of the decision surface.
Notice we are solving the perceptron task with a complexity penalty (sum of weights) - Hinton's weight decay!

Slide 15: Slacks and Separability
If the training data is separable, will all S_i = 0? Not necessarily!
– We might get a larger margin by misclassifying a few examples (just like in decision-tree pruning)
– This can also happen when using gradient descent to minimize an ANN's cost function

Slide 16: Brief Intro to Linear Programs (LPs) - not on final
We need to convert our task into A z ≥ b, which is the basic form of an LP (A is a constant matrix, b is a constant vector, z is a vector of variables).
Notes:
– We can convert inequalities containing ≤ into ones using ≥ by multiplying both sides by -1 (e.g., 5x ≤ 15 is the same as -5x ≥ -15)
– LPs can also handle = (i.e., equalities); we could use ≥ and ≤ together to get =, but more efficient methods exist

Slide 17: Brief Intro to Linear Programs (cont.) - not on final
In addition, we want to minimize c · z under the linear A z ≥ b constraints. The vector c says how to penalize the settings of the variables in vector z.
Highly optimized software for solving LPs exists (e.g., CPLEX; COIN-OR is free).
[Figure: the yellow region is the set of points that satisfy the constraints; the dotted lines are iso-cost lines]
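
A tiny end-to-end LP in Python using scipy.optimize.linprog (an assumed, freely available solver, not one named on the slide). scipy expects constraints as A_ub·z ≤ b_ub, so an 'A z ≥ b' task is passed in as '-A z ≤ -b', exactly the sign-flip trick from the previous slide.

    import numpy as np
    from scipy.optimize import linprog

    # Minimize c.z  subject to  A z >= b  (two variables, two constraints, made up)
    c = np.array([1.0, 2.0])
    A = np.array([[1.0, 1.0],
                  [1.0, 0.0]])
    b = np.array([4.0, 1.0])

    # linprog takes A_ub z <= b_ub, so negate both sides to express A z >= b
    res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None), (0, None)])
    print(res.x, res.fun)    # optimum at z = [4, 0] with cost 4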

Slide 18: Review - Matrix Multiplication
A × B = C, where matrix A is M by K, matrix B is K by N, and matrix C is M by N (each entry of C is the dot product of a row of A with a column of B).
From (code also there): http://www.cedricnugteren.nl/tutorial.php?page=2
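
The same fact in Python (numpy assumed): a 2-by-3 matrix times a 3-by-4 matrix gives a 2-by-4 matrix, and each entry is a dot product of a row with a column.

    import numpy as np

    A = np.arange(6).reshape(2, 3)       # M=2 by K=3
    B = np.arange(12).reshape(3, 4)      # K=3 by N=4
    C = A @ B                            # M=2 by N=4
    print(C.shape)                       # (2, 4)
    print(C[0, 0], np.dot(A[0, :], B[:, 0]))   # 20 20: each entry of C is a dot product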

Slide 19: Aside - Our SVM as an LP (not on final)
Let A_pos = our positive training examples and A_neg = our negative training examples (assume 50% pos and 50% neg for notational simplicity); f = # of features, e = # of examples.
The variable vector is z = [ W ; S_pos ; S_neg ; θ ; Z ], with block sizes f, e/2, e/2, 1, f.
The constraints A z ≥ b in block form (the 1's below are identity matrices, often written as I):

      W      S_pos  S_neg   θ    Z            b
  [  A_pos    I      0     -1    0  ]      [ 1 ]   (e/2 rows: '+' examples)
  [ -A_neg    0      I      1    0  ]      [ 1 ]   (e/2 rows: '-' examples)
  [   0       I      0      0    0  ]  ≥   [ 0 ]   (e/2 rows: S_pos ≥ 0)
  [   0       0      I      0    0  ]      [ 0 ]   (e/2 rows: S_neg ≥ 0)
  [  -I       0      0      0    I  ]      [ 0 ]   (f rows: Z ≥ W)
  [   I       0      0      0    I  ]      [ 0 ]   (f rows: Z ≥ -W)

Slide 20: Our c Vector (determines the cost we're minimizing; also not on final)
Here S = S_pos concatenated with S_neg, so z = [ W ; S ; θ ; Z ].
min c · z = min [ 0 | μ | 0 | 1 ] · [ W ; S ; θ ; Z ] = min μ·ΣS_i + ΣZ_j
          = min μ ||S||_1 + ||W||_1,
since all the S's are non-negative and the Z's 'squeeze' the W's.
Note we minimize the Z's, not the W's, since only the Z's are constrained to be ≥ 0.
Aside: we could also penalize θ (but we would need to add more variables, since θ can be negative).
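
Putting Slides 14, 19, and 20 together, here is a sketch (my own code, not the course's) that assembles this LP for a tiny made-up data set and hands it to scipy.optimize.linprog; the block order of z is [W, S_pos, S_neg, θ, Z] as on Slide 19. Since scipy wants A_ub z ≤ b_ub and defaults variables to be non-negative, the ≥ constraints are negated and explicit bounds free up W and θ.

    import numpy as np
    from scipy.optimize import linprog

    # Tiny made-up training set: 2 '+' and 2 '-' examples, f = 2 features
    A_pos = np.array([[2.0, 2.0], [3.0, 1.0]])
    A_neg = np.array([[0.0, 0.0], [-1.0, 1.0]])
    f, half = A_pos.shape[1], len(A_pos)       # f features, e/2 = half examples per class
    mu = 1.0                                   # slack penalty (pick via a tuning set)

    # z = [ W (f) | S_pos (e/2) | S_neg (e/2) | theta (1) | Z (f) ], as on Slide 19
    I_h, I_f = np.eye(half), np.eye(f)
    Zh_f, Zh_h, Zh_1 = np.zeros((half, f)), np.zeros((half, half)), np.zeros((half, 1))
    Zf_h, Zf_1 = np.zeros((f, half)), np.zeros((f, 1))

    A_ge = np.vstack([                                                # rows of  A z >= b
        np.hstack([ A_pos, I_h,  Zh_h, -np.ones((half, 1)), Zh_f]),   # A_pos W + S_pos - theta >= 1
        np.hstack([-A_neg, Zh_h, I_h,   np.ones((half, 1)), Zh_f]),   # -A_neg W + S_neg + theta >= 1
        np.hstack([ Zh_f,  I_h,  Zh_h,  Zh_1,               Zh_f]),   # S_pos >= 0
        np.hstack([ Zh_f,  Zh_h, I_h,   Zh_1,               Zh_f]),   # S_neg >= 0
        np.hstack([-I_f,   Zf_h, Zf_h,  Zf_1,               I_f ]),   # Z >= W
        np.hstack([ I_f,   Zf_h, Zf_h,  Zf_1,               I_f ]),   # Z >= -W
    ])
    b_ge = np.concatenate([np.ones(2 * half), np.zeros(2 * half + 2 * f)])

    # Cost vector c = [ 0 | mu | mu | 0 | 1 ]  (Slide 20): min mu*||S||_1 + ||W||_1
    c = np.concatenate([np.zeros(f), mu * np.ones(2 * half), [0.0], np.ones(f)])

    # scipy minimizes c.z subject to A_ub z <= b_ub with z >= 0 by default,
    # so negate the >= constraints and explicitly allow W and theta to go negative
    bounds = [(None, None)] * f + [(0, None)] * (2 * half) + [(None, None)] + [(0, None)] * f
    res = linprog(c, A_ub=-A_ge, b_ub=-b_ge, bounds=bounds)

    W, theta = res.x[:f], res.x[f + 2 * half]
    print(W, theta)                                                  # learned weights and threshold
    print(np.sign(A_pos @ W - theta), np.sign(A_neg @ W - theta))    # [1. 1.] [-1. -1.]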

Slide 21: Where We Are So Far
We have an 'objective' function that we can optimize by linear programming
– min ||w||_1 + μ ||S||_1, subject to some constraints
– Free LP solvers exist
– CS 525 teaches linear programming
We could also use gradient descent
– Perceptron learning with 'weight decay' is quite similar, though it uses SQUARED weights and SQUARED error (the S is this error)
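
For contrast with the LP route, a short sketch (my own simplification, not the course's code) of the gradient-descent view this slide alludes to: minimize squared slack ('error') plus squared weights ('weight decay') by plain gradient steps on made-up data.

    import numpy as np

    # Made-up data: rows of X with labels y in {+1, -1}
    X = np.array([[2.0, 2.0], [3.0, 1.0], [0.0, 0.0], [-1.0, 1.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])

    w, theta = np.zeros(2), 0.0
    lam, lr = 0.01, 0.01              # weight-decay strength and learning rate (assumed values)
    for _ in range(2000):
        s = np.maximum(0.0, 1.0 - y * (X @ w - theta))    # per-example slack ('error')
        # Gradient of  sum(s^2) + lam * ||w||_2^2   (squared error + squared weights)
        grad_w = -2.0 * ((s * y) @ X) + 2.0 * lam * w
        grad_theta = 2.0 * np.sum(s * y)
        w -= lr * grad_w
        theta -= lr * grad_theta

    print(np.sign(X @ w - theta))     # should match y: [ 1.  1. -1. -1.]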

