Machine Learning Lecture 7: SVM Moshe Koppel Slides adapted from Andrew Moore Copyright © 2001, 2003, Andrew W. Moore

Nov 23rd, 2001. Copyright © 2001, 2003, Andrew W. Moore. Support Vector Machines. Andrew W. Moore, Professor, School of Computer Science, Carnegie Mellon University. Note to other teachers and users of these slides: Andrew would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew's tutorials: Comments and corrections gratefully received.

Support Vector Machines: Slide 3 Copyright © 2001, 2003, Andrew W. Moore Linear Classifiers. Estimator: x → f → y_est, with f(x,w,b) = sign(w · x - b); in the figure, one marker denotes +1 and the other denotes -1. How would you classify this data?

Support Vector Machines: Slide 4 Copyright © 2001, 2003, Andrew W. Moore Linear Classifiers. f(x,w,b) = sign(w · x - b). How would you classify this data?

Support Vector Machines: Slide 5 Copyright © 2001, 2003, Andrew W. Moore Linear Classifiers. f(x,w,b) = sign(w · x - b). How would you classify this data?

Support Vector Machines: Slide 6 Copyright © 2001, 2003, Andrew W. Moore Linear Classifiers. f(x,w,b) = sign(w · x - b). How would you classify this data?

Support Vector Machines: Slide 7 Copyright © 2001, 2003, Andrew W. Moore Linear Classifiers. f(x,w,b) = sign(w · x - b). Any of these would be fine... but which is best?

Support Vector Machines: Slide 8 Copyright © 2001, 2003, Andrew W. Moore Classifier Margin. f(x,w,b) = sign(w · x - b). Define the margin of a linear classifier as the width by which the boundary could be increased before hitting a datapoint.

Support Vector Machines: Slide 9 Copyright © 2001, 2003, Andrew W. Moore Maximum Margin. f(x,w,b) = sign(w · x - b). The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM: Linear SVM).

Support Vector Machines: Slide 10 Copyright © 2001, 2003, Andrew W. Moore Maximum Margin. f(x,w,b) = sign(w · x - b). The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM: Linear SVM). Support Vectors are those datapoints that the margin pushes up against.
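
A minimal sketch (not part of the original slides) of fitting such a maximum margin linear classifier with scikit-learn and reading off the support vectors. The toy data, and the use of a very large C to approximate the hard-margin LSVM, are my own illustrative choices.

```python
# Hedged sketch: a linear SVM on made-up 2-d data; a very large C approximates
# the hard-margin ("LSVM") classifier described on this slide.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0],        # +1 class
              [-1.0, -1.0], [-2.0, 0.0], [0.0, -2.0]])   # -1 class
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6)     # large C ~ hard margin
clf.fit(X, y)

print("w =", clf.coef_[0], " b =", clf.intercept_[0])
print("support vectors:\n", clf.support_vectors_)   # the points the margin pushes against
```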

Support Vector Machines: Slide 11 Copyright © 2001, 2003, Andrew W. Moore Why Maximum Margin? f(x,w,b) = sign(w · x - b). The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM). Support Vectors are those datapoints that the margin pushes up against.
1. Intuitively this feels safest.
2. If we've made a small error in the location of the boundary (it's been jolted in its perpendicular direction) this gives us the least chance of causing a misclassification.
3. LOOCV is easy since the model is immune to removal of any non-support-vector datapoints.
4. There's some theory (using VC dimension) that is related to (but not the same as) the proposition that this is a good thing.
5. Empirically it works very very well.

Support Vector Machines: Slide 12 Copyright © 2001, 2003, Andrew W. Moore Specifying a line and margin. How do we represent this mathematically? … in m input dimensions? (Figure labels: Plus-Plane, Minus-Plane, Classifier Boundary, "Predict Class = +1" zone, "Predict Class = -1" zone.)

Support Vector Machines: Slide 13 Copyright © 2001, 2003, Andrew W. Moore Specifying a line and margin. Plus-plane = { x : w · x + b = +1 }. Minus-plane = { x : w · x + b = -1 }. (Figure labels: Plus-Plane, Minus-Plane, Classifier Boundary, "Predict Class = +1" zone, "Predict Class = -1" zone; wx+b=1, wx+b=0, wx+b=-1.) Classify as +1 if w · x + b >= 1; as -1 if w · x + b <= -1; Universe explodes if -1 < w · x + b < 1.
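
The decision rule on this slide is a one-liner; a tiny sketch with a hypothetical w and b (using the w · x + b convention of the plus/minus planes):

```python
# Hypothetical w, b chosen for illustration only.
import numpy as np

def classify(x, w, b):
    """Return +1 or -1 according to sign(w . x + b)."""
    return 1 if np.dot(w, x) + b >= 0 else -1

w, b = np.array([1.0, 1.0]), -1.0
print(classify(np.array([2.0, 1.0]), w, b))     # +1  (w.x + b = 2)
print(classify(np.array([-2.0, -1.0]), w, b))   # -1  (w.x + b = -4)
```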

Support Vector Machines: Slide 14 Copyright © 2001, 2003, Andrew W. Moore Computing the margin width Plus-plane = { x : w. x + b = +1 } Minus-plane = { x : w. x + b = -1 } Claim: The vector w is perpendicular to the Plus Plane. Why? “Predict Class = +1” zone “Predict Class = -1” zone wx+b=1 wx+b=0 wx+b=-1 M = Margin Width How do we compute M in terms of w and b?

Support Vector Machines: Slide 15 Copyright © 2001, 2003, Andrew W. Moore Computing the margin width Plus-plane = { x : w. x + b = +1 } Minus-plane = { x : w. x + b = -1 } Claim: The vector w is perpendicular to the Plus Plane. Why? “Predict Class = +1” zone “Predict Class = -1” zone wx+b=1 wx+b=0 wx+b=-1 M = Margin Width How do we compute M in terms of w and b? Let u and v be two vectors on the Plus Plane. What is w. ( u – v ) ? And so of course the vector w is also perpendicular to the Minus Plane

Support Vector Machines: Slide 16 Copyright © 2001, 2003, Andrew W. Moore Computing the margin width. Plus-plane = { x : w · x + b = +1 }. Minus-plane = { x : w · x + b = -1 }. The vector w is perpendicular to the Plus Plane. Let x- be any point on the minus plane, and let x+ be the closest plus-plane-point to x- (any location in R^m, not necessarily a datapoint). M = Margin Width. How do we compute M in terms of w and b?

Support Vector Machines: Slide 17 Copyright © 2001, 2003, Andrew W. Moore Computing the margin width. Plus-plane = { x : w · x + b = +1 }. Minus-plane = { x : w · x + b = -1 }. The vector w is perpendicular to the Plus Plane. Let x- be any point on the minus plane, and let x+ be the closest plus-plane-point to x-. Claim: x+ = x- + λw for some value of λ. Why? M = Margin Width. How do we compute M in terms of w and b?

Support Vector Machines: Slide 18 Copyright © 2001, 2003, Andrew W. Moore Computing the margin width. Plus-plane = { x : w · x + b = +1 }. Minus-plane = { x : w · x + b = -1 }. The vector w is perpendicular to the Plus Plane. Let x- be any point on the minus plane, and let x+ be the closest plus-plane-point to x-. Claim: x+ = x- + λw for some value of λ. Why? The line from x- to x+ is perpendicular to the planes. So to get from x- to x+, travel some distance in direction w.

Support Vector Machines: Slide 19 Copyright © 2001, 2003, Andrew W. Moore Computing the margin width. What we know: w · x+ + b = +1; w · x- + b = -1; x+ = x- + λw; |x+ - x-| = M. It's now easy to get M in terms of w and b.

Support Vector Machines: Slide 20 Copyright © 2001, 2003, Andrew W. Moore Computing the margin width. What we know: w · x+ + b = +1; w · x- + b = -1; x+ = x- + λw; |x+ - x-| = M. It's now easy to get M in terms of w and b: w · (x- + λw) + b = 1 => w · x- + b + λ w·w = 1 => -1 + λ w·w = 1 => λ = 2 / (w·w).

Support Vector Machines: Slide 21 Copyright © 2001, 2003, Andrew W. Moore Computing the margin width. What we know: w · x+ + b = +1; w · x- + b = -1; x+ = x- + λw; |x+ - x-| = M; λ = 2 / (w·w). So M = |x+ - x-| = |λw| = λ |w| = λ √(w·w) = 2 √(w·w) / (w·w) = 2 / √(w·w).
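
A quick numeric check of M = 2 / √(w·w), with a made-up weight vector:

```python
import numpy as np

w = np.array([3.0, 4.0])                # hypothetical weight vector, |w| = 5
M = 2.0 / np.sqrt(np.dot(w, w))         # margin width from the formula above
print(M)                                # 0.4
```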

Support Vector Machines: Slide 22 Copyright © 2001, 2003, Andrew W. Moore Learning the Maximum Margin Classifier. M = Margin Width = 2 / √(w·w). Given a guess of w and b we can: compute whether all data points are in the correct half-planes; compute the width of the margin. So now we just need to write a program to search the space of w's and b's to find the widest margin that matches all the datapoints. How?

Support Vector Machines: Slide 23 Copyright © 2001, 2003, Andrew W. Moore Learning via Quadratic Programming QP is a well-studied class of optimization algorithms to maximize a quadratic function of some real-valued variables subject to linear constraints.

Support Vector Machines: Slide 24 Copyright © 2001, 2003, Andrew W. Moore Quadratic Programming. Find arg max over u of c + dᵀu + (1/2) uᵀRu (the quadratic criterion), subject to a set of linear inequality constraints of the form a · u <= b (n of them in all), and subject to e additional linear equality constraints.

Support Vector Machines: Slide 25 Copyright © 2001, 2003, Andrew W. Moore Quadratic Programming. Find arg max over u of c + dᵀu + (1/2) uᵀRu (the quadratic criterion), subject to n linear inequality constraints and e additional linear equality constraints. There exist algorithms for finding such constrained quadratic optima much more efficiently and reliably than gradient ascent. (But they are very fiddly… you probably don't want to write one yourself.)

Support Vector Machines: Slide 26 Copyright © 2001, 2003, Andrew W. Moore Learning the Maximum Margin Classifier. M = 2 / √(w·w). Given a guess of w, b we can: compute whether all data points are in the correct half-planes; compute the margin width. Assume R datapoints, each (x_k, y_k) where y_k = +/- 1. What should our quadratic optimization criterion be? How many constraints will we have? What should they be?

Support Vector Machines: Slide 27 Copyright © 2001, 2003, Andrew W. Moore Learning the Maximum Margin Classifier. M = 2 / √(w·w). Given a guess of w, b we can: compute whether all data points are in the correct half-planes; compute the margin width. Assume R datapoints, each (x_k, y_k) where y_k = +/- 1. What should our quadratic optimization criterion be? Minimize w·w. How many constraints will we have? R. What should they be? w · x_k + b >= 1 if y_k = +1; w · x_k + b <= -1 if y_k = -1.
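
Just to make the formulation concrete, here is a sketch that hands this exact QP (minimize w·w subject to the R constraints) to a generic constrained optimizer; real SVM implementations use specialized QP solvers instead, and the data and starting point below are made up.

```python
# Hard-margin primal QP on tiny made-up separable data, solved with SLSQP.
import numpy as np
from scipy.optimize import minimize

X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def objective(v):                       # v = (w1, w2, b); criterion: minimize w.w
    w = v[:2]
    return np.dot(w, w)

constraints = [                         # y_k (w . x_k + b) >= 1 for every k
    {"type": "ineq",
     "fun": lambda v, xk=xk, yk=yk: yk * (np.dot(v[:2], xk) + v[2]) - 1.0}
    for xk, yk in zip(X, y)
]

res = minimize(objective, x0=np.array([1.0, 1.0, 0.0]),   # feasible starting guess
               constraints=constraints, method="SLSQP")
w, b = res.x[:2], res.x[2]
print("w =", w, " b =", b, " margin =", 2.0 / np.linalg.norm(w))
```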

Support Vector Machines: Slide 28 Copyright © 2001, 2003, Andrew W. Moore Uh-oh! denotes +1 denotes -1 This is going to be a problem! What should we do?

Support Vector Machines: Slide 29 Copyright © 2001, 2003, Andrew W. Moore Learning Maximum Margin with Noise. M = 2 / √(w·w). Given a guess of w, b we can: compute the sum of distances of points to their correct zones; compute the margin width. Assume R datapoints, each (x_k, y_k) where y_k = +/- 1. What should our quadratic optimization criterion be? How many constraints will we have? What should they be?

Support Vector Machines: Slide 30 Copyright © 2001, 2003, Andrew W. Moore Learning Maximum Margin with Noise. M = 2 / √(w·w). Given a guess of w, b we can: compute the sum of distances of points to their correct zones; compute the margin width. Assume R datapoints, each (x_k, y_k) where y_k = +/- 1. What should our quadratic optimization criterion be? Minimize (1/2) w·w + C Σ_k ε_k (the figure marks the slack distances ε_2, ε_7, ε_11 of points that violate their margin). How many constraints will we have? R. What should they be? w · x_k + b >= 1 - ε_k if y_k = +1; w · x_k + b <= -1 + ε_k if y_k = -1.

Support Vector Machines: Slide 31 Copyright © 2001, 2003, Andrew W. Moore Learning Maximum Margin with Noise. Given a guess of w, b we can: compute the sum of distances of points to their correct zones; compute the margin width. Assume R datapoints, each (x_k, y_k) where y_k = +/- 1. What should our quadratic optimization criterion be? Minimize (1/2) w·w + C Σ_k ε_k. Our original (noiseless data) QP had m+1 variables: w_1, w_2, …, w_m, and b. Our new (noisy data) QP has m+1+R variables: w_1, w_2, …, w_m, b, ε_1, …, ε_R (m = # input dimensions, R = # records). How many constraints will we have? R. What should they be? w · x_k + b >= 1 - ε_k if y_k = +1; w · x_k + b <= -1 + ε_k if y_k = -1.

Support Vector Machines: Slide 32 Copyright © 2001, 2003, Andrew W. Moore Learning Maximum Margin with Noise. Given a guess of w, b we can: compute the sum of distances of points to their correct zones; compute the margin width. Assume R datapoints, each (x_k, y_k) where y_k = +/- 1. What should our quadratic optimization criterion be? Minimize (1/2) w·w + C Σ_k ε_k. How many constraints will we have? R. What should they be? w · x_k + b >= 1 - ε_k if y_k = +1; w · x_k + b <= -1 + ε_k if y_k = -1. There's a bug in this QP. Can you spot it?

Support Vector Machines: Slide 33 Copyright © 2001, 2003, Andrew W. Moore Learning Maximum Margin with Noise. Given a guess of w, b we can: compute the sum of distances of points to their correct zones; compute the margin width. Assume R datapoints, each (x_k, y_k) where y_k = +/- 1. What should our quadratic optimization criterion be? Minimize (1/2) w·w + C Σ_k ε_k. How many constraints will we have? 2R. What should they be? w · x_k + b >= 1 - ε_k if y_k = +1; w · x_k + b <= -1 + ε_k if y_k = -1; ε_k >= 0 for all k.
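
A hedged sketch of how the slack penalty behaves in practice: in scikit-learn the C parameter plays the role of the weight on Σ ε_k, so a small C tolerates margin violations and a large C approaches the hard-margin solution. The deliberately non-separable toy data is my own.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [1.0, 2.0],        # +1 cluster
              [-1.0, -1.0], [-2.0, -2.0], [-3.0, -1.0], [2.5, 2.5]])  # -1 cluster, last one an outlier
y = np.array([1, 1, 1, 1, -1, -1, -1, -1])

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_[0]
    print(f"C={C}: margin = {2.0 / np.linalg.norm(w):.3f}, "
          f"support vectors = {len(clf.support_vectors_)}")
```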

Support Vector Machines: Slide 34 Copyright © 2001, 2003, Andrew W. Moore An Equivalent QP. Maximize Σ_k α_k - (1/2) Σ_k Σ_l α_k α_l Q_kl, where Q_kl = y_k y_l (x_k · x_l). Subject to these constraints: 0 <= α_k <= C for all k, and Σ_k α_k y_k = 0. Then define: w = Σ_k α_k y_k x_k, and b = y_K (1 - ε_K) - x_K · w, where K = arg max over k of α_k. Then classify with: f(x,w,b) = sign(w · x - b).

Support Vector Machines: Slide 35 Copyright © 2001, 2003, Andrew W. Moore An Equivalent QP (same formulation as the previous slide). Datapoints with α_k > 0 will be the support vectors, so the sum defining w only needs to be over the support vectors. Then classify with: f(x,w,b) = sign(w · x - b).

Support Vector Machines: Slide 36 Copyright © 2001, 2003, Andrew W. Moore An Equivalent QP. Datapoints with α_k > 0 will be the support vectors, so the sum defining w only needs to be over the support vectors. Why did I tell you about this equivalent QP? 1. It's a formulation that QP packages can optimize more quickly. 2. Because of further jaw-dropping developments you're about to learn.
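
One reason the dual matters in practice is that libraries expose the α's. A sketch (mine, with made-up data) checking that w rebuilt from the support vectors via w = Σ α_k y_k x_k matches the directly reported coefficients; in scikit-learn, dual_coef_ stores y_k α_k for the support vectors.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0],
              [-1.0, -1.0], [-2.0, 0.0], [0.0, -2.0]])
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=10.0).fit(X, y)

# dual_coef_ holds alpha_k * y_k for each support vector, so this is
# w = sum_k alpha_k y_k x_k taken over the support vectors only.
w_from_dual = clf.dual_coef_ @ clf.support_vectors_
print(np.allclose(w_from_dual, clf.coef_))    # True: primal and dual views agree
```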

Support Vector Machines: Slide 37 Copyright © 2001, 2003, Andrew W. Moore Suppose we’re in 1-dimension What would SVMs do with this data? x=0

Support Vector Machines: Slide 38 Copyright © 2001, 2003, Andrew W. Moore Suppose we’re in 1-dimension Not a big surprise Positive “plane” Negative “plane” x=0

Support Vector Machines: Slide 39 Copyright © 2001, 2003, Andrew W. Moore Harder 1-dimensional dataset That’s wiped the smirk off SVM’s face. What can be done about this? x=0

Support Vector Machines: Slide 40 Copyright © 2001, 2003, Andrew W. Moore Harder 1-dimensional dataset Let’s map the data points into a higher dimensional space. x=0

Support Vector Machines: Slide 41 Copyright © 2001, 2003, Andrew W. Moore Harder 1-dimensional dataset x=0
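
To see the idea numerically, here is a sketch (mine, not from the slides) where no threshold on x separates the classes, but mapping each point to z = (x, x^2), one plausible choice of higher-dimensional space, makes them linearly separable:

```python
import numpy as np
from sklearn.svm import SVC

x = np.array([-4.0, -3.0, -1.0, 0.0, 1.0, 3.0, 4.0])
y = np.array([-1, -1, 1, 1, 1, -1, -1])      # +1 in the middle, -1 on both sides

Z = np.column_stack([x, x ** 2])             # lift each 1-d point to 2-d
clf = SVC(kernel="linear", C=1e6).fit(Z, y)
print(clf.predict(Z))                        # all labels recovered
```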

Support Vector Machines: Slide 42 Copyright © 2001, 2003, Andrew W. Moore Common SVM basis functions z k = ( polynomial terms of x k of degree 1 to q ) z k = ( radial basis functions of x k ) z k = ( sigmoid functions of x k ) This is sensible. Is that the end of the story? No…there’s one more trick!

Support Vector Machines: Slide 43 Copyright © 2001, 2003, Andrew W. Moore Quadratic Basis Functions. Φ(x) = (constant term: 1; linear terms: √2 x_1, …, √2 x_m; pure quadratic terms: x_1^2, …, x_m^2; quadratic cross-terms: √2 x_1 x_2, √2 x_1 x_3, …, √2 x_{m-1} x_m). Number of terms (assuming m input dimensions) = (m+2)-choose-2 = (m+2)(m+1)/2 = (as near as makes no difference) m^2/2. You may be wondering what those √2's are doing. You should be happy that they do no harm. You'll find out why they're there soon.

Support Vector Machines: Slide 44 Copyright © 2001, 2003, Andrew W. Moore QP with basis functions. Maximize Σ_k α_k - (1/2) Σ_k Σ_l α_k α_l Q_kl, where Q_kl = y_k y_l (Φ(x_k) · Φ(x_l)). Subject to these constraints: 0 <= α_k <= C for all k, and Σ_k α_k y_k = 0. Then define: w = Σ_{k: α_k > 0} α_k y_k Φ(x_k), and b = y_K (1 - ε_K) - Φ(x_K) · w, where K = arg max over k of α_k. Then classify with: f(x,w,b) = sign(w · Φ(x) - b).

Support Vector Machines: Slide 45 Copyright © 2001, 2003, Andrew W. Moore QP with basis functions (the same dual QP, with Q_kl = y_k y_l (Φ(x_k) · Φ(x_l))). Then classify with: f(x,w,b) = sign(w · Φ(x) - b). We must do R^2/2 dot products to get this matrix ready. Each dot product requires m^2/2 additions and multiplications. The whole thing costs R^2 m^2/4. Yeeks! … or does it?

Support Vector Machines: Slide 46 Copyright © 2001, 2003, Andrew W. Moore Quadratic Dot Products. Φ(a) · Φ(b) = 1 + 2 Σ_i a_i b_i + Σ_i a_i^2 b_i^2 + 2 Σ_{i<j} a_i a_j b_i b_j (the constant term + the linear terms + the pure quadratic terms + the cross-terms).

Support Vector Machines: Slide 47 Copyright © 2001, 2003, Andrew W. Moore Quadratic Dot Products. Just out of casual, innocent, interest, let's look at another function of a and b: (a · b + 1)^2 = (a · b)^2 + 2 (a · b) + 1 = Σ_i a_i^2 b_i^2 + 2 Σ_{i<j} a_i a_j b_i b_j + 2 Σ_i a_i b_i + 1.

Support Vector Machines: Slide 48 Copyright © 2001, 2003, Andrew W. Moore Quadratic Dot Products. Just out of casual, innocent, interest, let's look at another function of a and b: (a · b + 1)^2. They're the same! And this is only O(m) to compute!
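
A numeric check of the identity behind the last two slides: with the √2 factors in the quadratic feature map, Φ(a) · Φ(b) equals (a · b + 1)^2, computed in O(m) rather than O(m^2). The vectors below are arbitrary.

```python
import numpy as np
from itertools import combinations

def phi(x):
    """Quadratic feature map: 1, sqrt(2)*linear terms, squares, sqrt(2)*cross terms."""
    cross = [np.sqrt(2) * x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate([[1.0], np.sqrt(2) * x, x ** 2, cross])

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, -1.0, 2.0])

print(np.dot(phi(a), phi(b)))       # explicit ~m^2/2-dimensional dot product
print((np.dot(a, b) + 1.0) ** 2)    # kernel shortcut -- the same number (30.25)
```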

Support Vector Machines: Slide 49 Copyright © 2001, 2003, Andrew W. Moore QP with Quadratic basis functions. The same dual QP, but now each Q_kl = y_k y_l (x_k · x_l + 1)^2. Then classify with: f(x,w,b) = sign(w · Φ(x) - b). We must do R^2/2 dot products to get this matrix ready, but each dot product now only requires m additions and multiplications.

Support Vector Machines: Slide 50 Copyright © 2001, 2003, Andrew W. Moore Higher Order Polynomials.
Quadratic: Φ(x) = all m^2/2 terms up to degree 2; cost to build the Q_kl matrix traditionally = m^2 R^2/4 (2,500 R^2 with 100 inputs); Φ(a)·Φ(b) = (a·b+1)^2; cost to build Q_kl sneakily = m R^2/2 (50 R^2 with 100 inputs).
Cubic: Φ(x) = all m^3/6 terms up to degree 3; traditional cost = m^3 R^2/12 (83,000 R^2 with 100 inputs); Φ(a)·Φ(b) = (a·b+1)^3; sneaky cost = m R^2/2 (50 R^2 with 100 inputs).
Quartic: Φ(x) = all m^4/24 terms up to degree 4; traditional cost = m^4 R^2/48 (1,960,000 R^2 with 100 inputs); Φ(a)·Φ(b) = (a·b+1)^4; sneaky cost = m R^2/2 (50 R^2 with 100 inputs).

Support Vector Machines: Slide 51 Copyright © 2001, 2003, Andrew W. Moore SVM Kernel Functions. K(a,b) = (a · b + 1)^d is an example of an SVM Kernel Function. Beyond polynomials there are other very high dimensional basis functions that can be made practical by finding the right Kernel Function. Radial-Basis-style Kernel Function: K(a,b) = exp(-(a - b)·(a - b) / (2 σ^2)). Neural-net-style Kernel Function: K(a,b) = tanh(κ a · b - δ). σ, κ and δ are magic parameters that must be chosen by a model selection method such as CV or VCSRM* (*see last lecture).
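
In practice these kernels are a one-line choice in most libraries; a sketch with scikit-learn (where the RBF parameter is gamma, roughly 1/(2σ^2)) on a standard non-linear toy dataset:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)   # illustrative data

for kernel, params in [("poly", {"degree": 3}), ("rbf", {"gamma": 1.0})]:
    clf = SVC(kernel=kernel, C=1.0, **params)
    scores = cross_val_score(clf, X, y, cv=5)                 # 5-fold CV accuracy
    print(kernel, round(scores.mean(), 3))
```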

Support Vector Machines: Slide 52 Copyright © 2001, 2003, Andrew W. Moore VC-dimension of an SVM. Very very very loosely speaking there is some theory which, under some different assumptions, puts an upper bound on the VC dimension of roughly (Diameter / Margin)^2, where Diameter is the diameter of the smallest sphere that can enclose all the high-dimensional term-vectors derived from the training set, and Margin is the smallest margin we'll let the SVM use. This can be used in SRM (Structural Risk Minimization) for choosing the polynomial degree, RBF σ, etc. But most people just use Cross-Validation.
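
"Most people just use Cross-Validation": a minimal grid-search sketch for picking C and the RBF width; the grid values and dataset are illustrative only.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [0.1, 1, 10, 100],
                                "gamma": [0.01, 0.1, 1, 10]},
                    cv=5)                      # choose parameters by 5-fold CV
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```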

Support Vector Machines: Slide 53 Copyright © 2001, 2003, Andrew W. Moore What You Should Know:
- Linear SVMs
- The definition of a maximum margin classifier
- What QP can do for you (but, for this class, you don't need to know how it does it)
- How Maximum Margin can be turned into a QP problem
- How we deal with noisy (non-separable) data
- How we permit non-linear boundaries
- How SVM Kernel functions permit us to pretend we're working with ultra-high-dimensional basis-function terms