Presentation on theme: "Support Vector Machines"— Presentation transcript:
2 Support Vector Machines
An elegant combination of statistical learning theory and machine learning – Vapnik
Good empirical results
Non-trivial implementation
Can be slow and memory intensive
Binary classifier
Much current work
Do the Quadric review first (including that each layer is just a linear divider); it will help in seeing polynomial kernel issues
3 SVM Comparisons
To have a natural way to compare and gain intuition on SVMs, we will first do a brief review of:
Quadric/higher-order machines
Radial basis function networks
Note that BP follows a similar strategy: a non-linear map of the input features into a new feature space which is then linearly separable
But BP learns the non-linear mapping
Perceptron slide – higher order; (BP slides 8-10) – multi-layer learning (Rosenblatt's simple perceptron); BP slide 3 – linearly separable for the top layer; IBL slide – RBF
4 SVM Overview
Non-linear mapping from the input space into a higher-dimensional feature space
Non-adaptive – user-defined by the kernel function
A linear decision surface (hyperplane) is sufficient in the high-dimensional feature space
Note that this is the same as we do with a standard MLP/BP
Avoid the complexity of the high-dimensional feature space with kernel functions, which allow computations to take place in the input space while giving the power of being in the feature space
Get improved generalization by placing the hyperplane at the maximum margin
7 Standard (Primal) Perceptron Algorithm
Target minus output is not used. Just add (or subtract) a portion (multiplied by the learning rate) of the current pattern to the weight vector
If the weight vector starts at 0 then the learning rate can just be 1
R could also be 1 for this discussion
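The primal update above can be sketched as follows (a minimal sketch, assuming learning rate 1 and a zero initial weight vector as the slide allows; a bias feature, if wanted, is assumed to be folded into the patterns):

```python
# Primal perceptron sketch: on a mistake, add y * x to the weight vector.
# Learning rate is 1 and weights start at zero, per the slide's simplification.
def primal_perceptron(patterns, targets, epochs=100):
    w = [0.0] * len(patterns[0])
    for _ in range(epochs):
        mistakes = 0
        for x, y in zip(patterns, targets):          # y in {-1, +1}
            activation = sum(wi * xi for wi, xi in zip(w, x))
            if y * activation <= 0:                  # misclassified (or on boundary)
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:                            # converged on separable data
            break
    return w
```

On separable data this terminates with a weight vector that classifies every training pattern correctly.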
8 Dual and Primal Equivalence
Note that the final weight vector is a linear combination of the training patterns: w = Σi αi yi xi
The basic decision function (primal and dual) is
f(x) = sign(w·x + b) = sign(Σi αi yi (xi·x) + b)
How do we obtain the coefficients αi?
9 Dual Perceptron Training Algorithm
Assume an initial weight vector of 0
10 Dual vs. Primal Form
Gram matrix: all (xi·xj) pairs – computed once and stored (can be large)
αi: one for each pattern in the training set. Incremented each time the pattern is misclassified, which would have led to a weight change in the primal form
The magnitude of αi is an indicator of the pattern's effect on the weights (embedding strength)
Note that patterns on the borders have large αi while easy patterns never affect the weights
We could have trained with just the subset of patterns with αi > 0 (the support vectors) and ignored the others
Can train in the dual. How about execution? Either way (dual can be efficient if the support vectors are few)
What if the transformed feature space is still not linearly separable? The αi would keep growing. Could do early stopping or bound the αi with some maximum C, thus allowing outliers.
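The dual training loop described above can be sketched as follows (a minimal sketch; the Gram matrix is computed once, and each αi simply counts that pattern's misclassifications):

```python
# Dual perceptron sketch: the weight vector is implicit in the alpha counts.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dual_perceptron(patterns, targets, epochs=100):
    n = len(patterns)
    # Gram matrix of all pairwise dot products, done once and stored.
    gram = [[dot(patterns[i], patterns[j]) for j in range(n)] for i in range(n)]
    alpha = [0] * n
    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            # f(x_i) = sum_j alpha_j y_j (x_j . x_i)
            f = sum(alpha[j] * targets[j] * gram[j][i] for j in range(n))
            if targets[i] * f <= 0:
                alpha[i] += 1        # embedding strength grows on each mistake
                mistakes += 1
        if mistakes == 0:
            break
    return alpha
```

Patterns ending with alpha > 0 are the support vectors; the primal weights can be recovered as w = Σi αi yi xi.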
11 Feature Space and Kernel Functions
Since most problems require a non-linear decision surface, we do a non-linear map Φ(x) = (Φ1(x), Φ2(x), …, ΦN(x)) from the input space to the feature space
The feature space can be of very high (even infinite) dimensionality
By choosing a proper kernel function/feature space, the high dimensionality can be avoided in computation but effectively used for the decision surface to solve complex problems – the "kernel trick"
A kernel is appropriate if the matrix of all K(xi, xj) values is positive semi-definite (has non-negative eigenvalues). Even when this is not satisfied, many kernels still work in practice (e.g. the sigmoid kernel).
12 Basic Kernel Execution
Primal: f(x) = w·Φ(x)
Dual: f(x) = Σi αi yi (Φ(xi)·Φ(x))
Kernel version: f(x) = Σi αi yi K(xi, x)
Now we see the real advantage of working in the dual form
F is a space made from inner products of the original feature space X
We just show f(x) for the output rather than sign(f(x)); we can always take the sign, but f(x) carries more information
Note the intuition of execution: the Gaussian (and other) kernels behave like a reduced, weighted k-nearest neighbor
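The kernel version of the dual decision function can be sketched directly (a minimal sketch; the Gaussian kernel and sigma default are illustrative choices, and the bias is omitted for simplicity):

```python
import math

# Kernel decision function: f(x) = sum_i alpha_i y_i K(x_i, x)
def gaussian_kernel(a, b, sigma=1.0):
    sq_dist = sum((u - v) ** 2 for u, v in zip(a, b))
    return math.exp(-sq_dist / (2 * sigma ** 2))

def kernel_decision(x, support_vectors, targets, alphas, kernel=gaussian_kernel):
    # Nearby support vectors (large K) dominate the sum, so this behaves
    # like a reduced, weighted nearest-neighbor vote.
    return sum(a * y * kernel(sv, x)
               for a, y, sv in zip(alphas, targets, support_vectors))
```

With a Gaussian kernel, a query point close to a positive support vector gets a positive f(x), which is exactly the weighted nearest-neighbor intuition above.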
14 Polynomial Kernels
For greater dimensionality we can use K(x, z) = (x·z)^d
Walk through with two vectors x = (x1, x2) and z = (z1, z2) (next slide)
15 Polynomial Kernel Example
Assume a simple 2-d feature vector x = (x1, x2)
Note that a new instance x will be paired with training vectors xi from the training set using K(x, xi). We'll call these x and z for this example.
K(x, z) = (x·z)² = (x1z1 + x2z2)² = x1²z1² + 2x1x2z1z2 + x2²z2²
Note that in the input space x we are getting the 2nd-order terms: x1², x2², and x1x2
16 Polynomial Kernel Example
The following is the 3rd-degree polynomial kernel: K(x, z) = (x·z)³
17 Polynomial Kernel Example
Note that for the 2nd-degree polynomial with two variables we get the 2nd-order terms x1², x2², and x1x2
Compare with the quadric machine. We also add a bias weight with the SVM.
For the 2nd-degree polynomial with three variables we would get the 2nd-order terms x1², x2², x3², x1x2, x1x3, and x2x3
Note that we only get the dth-degree terms. However, with some kernel creation/manipulation we could also include the lower-order terms
This is only a subset of the quadric terms, but we can get the others.
18 SVM Execution
Assume a novel instance x = <.4, .8> and training set vectors (with bias = 0):
<.5, .3>  y = -1  α = 1
<-.2, .7>  y = 1  α = 2
What is the output for this case?
Show the higher-order and kernel computations
19 SVM Execution
Assume a novel instance x = <.4, .8> and training set vectors (with bias = 0):
<.5, .3>  y = -1  α = 1
<-.2, .7>  y = 1  α = 2
Kernel version:
1·(-1)·(<.5,.3>·<.4,.8>)² + 2·1·(<-.2,.7>·<.4,.8>)² = -1·(.44)² + 2·(.48)² = -.1936 + .4608 = .2672
This is the kernel version; what about the higher-order version?
20 SVM Execution
Assume a novel instance x = <.4, .8> and training set vectors (with bias = 0):
<.5, .3>  y = -1  α = 1
<-.2, .7>  y = 1  α = 2
Kernel version:
1·(-1)·(<.5,.3>·<.4,.8>)² + 2·1·(<-.2,.7>·<.4,.8>)² = -.1936 + .4608 = .2672
Higher-order (Φ-space) version, with each term shown separately:
1·(-1)·(.5²·.4² + 2·.5·.3·.4·.8 + .3²·.8²) = -1·(.04 + .096 + .0576) = -.1936
2·1·((-.2)²·.4² + 2·(-.2)·.7·.4·.8 + .7²·.8²) = 2·(.0064 - .0896 + .3136) = 2·(.2304) = .4608
Sum: -.1936 + .4608 = .2672
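The worked example above can be checked both ways in a few lines (a sketch; the explicit feature map Φ(x) = (x1², x2², √2·x1x2) is the standard one for the degree-2 kernel with no lower-order terms):

```python
import math

# Verify the slide's example in kernel form, K(z, x) = (z . x)^2,
# and in explicit phi-space form, phi(x) = (x1^2, x2^2, sqrt(2) x1 x2).
def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def phi(v):
    return (v[0] ** 2, v[1] ** 2, math.sqrt(2) * v[0] * v[1])

x = (0.4, 0.8)
train = [((0.5, 0.3), -1, 1), ((-0.2, 0.7), 1, 2)]   # (vector, y, alpha)

kernel_out = sum(a * y * dot(z, x) ** 2 for z, y, a in train)
phi_out = sum(a * y * dot(phi(z), phi(x)) for z, y, a in train)
# Both forms give 0.2672 (up to floating-point rounding).
```

Getting identical outputs from the two forms is the point of the kernel trick: the kernel version never builds the 2nd-order feature vectors.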
21 Kernel Trick
So are we getting the same power as the quadric machine without having to directly calculate the 2nd-order terms?
22 Kernel Trick
So are we getting the same power as the quadric machine without having to directly calculate the 2nd-order terms?
No. With the SVM we weight the scalar result of the kernel, which has constant coefficients in the 2nd-order equation.
With the quadric machine we can have a separate learned weight (coefficient) for each term
Although we do get to individually weight each support vector
Assume that the 3rd term above was really irrelevant. How would the quadric machine/SVM deal with that?
The quadric machine would weight the term as 0. The SVM can't – the term comes with the entire equation. The SVM can weight the equation (but not individual terms within it). But combinations of weightings of the different equations still let us separate lots of things (ala RBF – are there enough support vectors to work from to get our desired classification?)
23 Kernel Trick Realities
Polynomial kernel – all monomials of degree 2:
K(x,y) = <Φ(x)·Φ(y)> = … + (x1x3)(y1y3) + (x3x3)(y3y3) + … (all 2nd-order terms)
A lot of stuff is represented with just one <x·y>²
However, in a full higher-order solution we would like adaptive coefficients for each of these higher-order terms (i.e. -2x1 + 3·x1x2 + …)
The SVM does a weighted (embedding coefficients) sum of them all, with individual constant internal coefficients
Thus, it is not as powerful as a higher-order system with arbitrary weighting
The more desirable arbitrary weighting can be done in an MLP because learning is done in the layers between the inputs and the hidden nodes
The SVM map from features to higher-order features is a fixed mapping. There is no learning at that level.
Of course, individual weighting requires a theoretically exponential increase in the number of terms/hidden nodes for which we need to find weights as the polynomial degree increases. We also need learning algorithms which can actually find these most salient higher-order features.
Do the Quadric review first; it will help in seeing polynomial kernel issues
Remember, we don't learn weights (per se) even for the combined kernel values (just embedding strengths), but the strengths are based on set coefficients.
25 SVM vs RBF Comparison
The SVM commonly uses a Gaussian kernel
The kernel is a distance metric (ala k-nearest neighbor)
How does the SVM differ from RBF?
26 SVM vs RBF Comparison
The SVM commonly uses a Gaussian kernel. How does the SVM differ from RBF?
The SVM will automatically discover which training instances to use as prototypes during the quadratic optimization
The SVM works only in the kernel space while RBF calculates values in the much larger exploded space
Both weight the different prototypes
RBF uses a perceptron-style learning algorithm to create weights between prototypes and output classes
RBF supports multiple output classes and nodes have a vote for each output class, whereas SVM support vectors can only vote for two classes: 1 or -1
The SVM will create a maximum-margin hyperplane decision surface
Since the internal feature coefficients are constants in the Gaussian distance kernel (for both SVM and RBF), the SVM will suffer from fixed/irrelevant features just like RBF/k-nearest neighbor
Same kernel-trick weakness – no learning in the kernel map from the input to the higher-order feature space
Discuss the irrelevant-feature weakness of k-nn and the similar issue of the SVM not being able to weight individual terms
27 Choosing a Kernel
Can start from a desired feature space and try to construct a kernel
More often one starts from a reasonable kernel and may not analyze the feature space
Some kernels are a better fit for certain problems; domain knowledge can be helpful
Common kernels:
Polynomial
Gaussian
Sigmoidal
Application specific
28 Maximum Margin
Maximum margin can lead to overfit due to noise
The problem may not be linearly separable even in the transformed feature space
A soft margin is a common solution; it allows slack variables
αi is constrained to be ≥ 0 and less than C. The C allows outliers.
How to pick C? Can try different values for the particular application to see which works best.
30 Quadratic Optimization
Optimizing the margin in the higher-order feature space is convex, and thus there is one guaranteed solution at the minimum (or maximum)
The SVM optimizes the dual representation (avoiding the higher-order feature space) with variations on the following:
maximize W(α) = Σi αi − ½ Σi Σj αi αj yi yj K(xi, xj), subject to Σi αi yi = 0 and 0 ≤ αi ≤ C
Maximizing Σαi tends towards larger α, subject to Σαiyi = 0 and α ≤ C (which both tend towards smaller α)
Without this term, α = 0 could suffice
The 2nd term minimizes the number of support vectors, since:
Two positive (or negative) instances that are similar (high kernel result) would increase the size of the term. Note that both (or either) of the instances are usually not needed.
Two non-matched instances which are similar should have larger α, since they are likely support vectors at the decision boundary (the negative term helps maximize)
The optimization is quadratic in the αi terms and linear in the constraints – can drop the C maximum for a non-soft margin
While quite solvable, it requires complex code and is usually done with a numerical-methods software package – quadratic programming
This differs from straight training-set accuracy due to the αj term
31 Execution
Typically use the dual form, which can take advantage of kernel efficiency
If the number of support vectors is small then the dual is fast
In cases of low-dimensional feature spaces, could derive the weights from the αi and use normal primal execution
Can get a speed-up by dropping support vectors with embedding coefficients below some threshold
32 Standard SVM Approach
Select a 2-class training set, a kernel function (calculate the Gram matrix), and choose the C value (soft-margin parameter)
Pass these to a quadratic optimization package, which will return an α for each training pattern based on a variation of the following (non-bias version):
maximize W(α) = Σi αi − ½ Σi Σj αi αj yi yj K(xi, xj), subject to Σi αi yi = 0 and 0 ≤ αi ≤ C
Patterns with non-zero α are the support vectors for the maximum-margin SVM classifier.
Execute by using the support vectors
Key slide
33 A Simple On-Line Alternative
Stochastic on-line gradient ascent
Can be effective
This version assumes no bias
Sensitive to the learning rate
The stopping criterion tests whether it is an appropriate solution – can just go until little change is occurring, or can test the optimization conditions directly
Can be quite slow, and usually quadratic programming is used to get an exact solution
Newton and conjugate-gradient techniques are also used – they can work well since it is a guaranteed convex surface – bowl shaped
Could skip these next 2 slides
Maintains a margin of 1 (typical in standard SVM implementations), which can always be done by equivalently scaling w and b
This is done with the (1 - actual) term below, which can update even when the pattern is correct, as it tries to make the distance of the support vectors to the decision surface be exactly 1
If the parenthetical term < 0 (i.e. the current instance is correct and beyond the margin), then don't update
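This on-line update can be sketched roughly as follows (essentially a kernel-adatron-style update; the learning rate eta, the bound C, and the choice to clip α at 0 rather than skip the negative update are implementation assumptions, not taken from the slides):

```python
# Stochastic gradient ascent on the dual, no bias.
# Update: alpha_i += eta * (1 - y_i * f(x_i)), then clip to the box [0, C].
# The (1 - actual) term pushes support vectors toward margin exactly 1.
def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def linear_kernel(a, b):
    return dot(a, b)

def train_sgd(patterns, targets, kernel=linear_kernel, eta=0.1, C=10.0, epochs=200):
    n = len(patterns)
    K = [[kernel(patterns[i], patterns[j]) for j in range(n)] for i in range(n)]
    alpha = [0.0] * n
    for _ in range(epochs):
        for i in range(n):
            f_i = sum(alpha[j] * targets[j] * K[j][i] for j in range(n))
            alpha[i] += eta * (1 - targets[i] * f_i)
            alpha[i] = min(max(alpha[i], 0.0), C)    # soft-margin box constraint
    return alpha
```

Since the dual objective is concave (a bowl-shaped surface), this simple ascent converges on small separable problems, though quadratic programming is still the usual exact method.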
35 Large Training Sets
A big problem, since the Gram matrix (all (xi·xj) pairs) is O(n²) for n data patterns
10^6 data patterns require 10^12 memory items
Can't keep them in memory
Also makes for a huge inner loop in dual training
Key insight: most of the data patterns will not be support vectors, so they are not needed
36 Chunking
Start with a reasonably sized subset of the data set (one that fits in memory and does not take too long during training)
Train on this subset and keep just the support vectors, or the m patterns with the highest αi values
Grab another subset, add the current support vectors to it, and continue training
Note that this training may allow previous support vectors to be dropped as better ones are discovered
Repeat until all data is used and no new support vectors are added, or until some other stopping criterion is fulfilled
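The chunking loop above can be sketched as follows (a sketch only; `train_dual` is a hypothetical stand-in for any dual trainer that returns one α per pattern, such as a QP solver or the on-line method from the previous slides):

```python
# Chunking sketch: train on a chunk plus the current support vectors,
# keep only patterns with alpha > 0, and repeat until the data is exhausted.
def chunking(patterns, targets, train_dual, chunk_size=1000):
    support = []                       # (x, y) pairs kept between chunks
    for start in range(0, len(patterns), chunk_size):
        chunk = list(zip(patterns[start:start + chunk_size],
                         targets[start:start + chunk_size]))
        working = support + chunk      # old support vectors rejoin training
        xs = [x for x, _ in working]
        ys = [y for _, y in working]
        alphas = train_dual(xs, ys)    # hypothetical dual trainer
        # Previous support vectors may be dropped if their alpha falls to 0.
        support = [(x, y) for (x, y), a in zip(working, alphas) if a > 0]
    return support
```

Only the working set (one chunk plus the surviving support vectors) ever needs its Gram matrix in memory, which is the point of the technique.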
37 SVM Issues
Excellent empirical and theoretical potential
Multi-class problems are not handled naturally. The basic model classifies into just two classes. Can do one model for each class (class i is 1 and all else 0) and then decide between conflicting models using confidence, etc.
How to choose the kernel – the main learning parameter other than the margin penalty C. The kernel choice will include other parameters to be defined (degree of polynomials, variance of Gaussians, etc.)
Speed and size: both training and testing; how to handle very large training sets (millions of patterns and/or support vectors) is not yet solved
Adaptive kernels: trained during learning?