1 Classification via Mathematical Programming Based Support Vector Machines Glenn M. Fung Computer Sciences Dept. University of Wisconsin - Madison November 26, 2002

2 Outline of Talk  (Standard) Support vector machines (SVM)  Classify by halfspaces  Proximal support vector machines (PSVM)  Classify by proximity to planes  Numerical experiments  Incremental PSVM classifiers  Synthetic dataset consisting of 1 billion points in 10-dimensional input space classified in less than 2 hours and 26 minutes  Knowledge based linear SVMs  Incorporating knowledge sets into a classifier  Numerical experiments

3 Support Vector Machines Maximizing the Margin between Bounding Planes (figure: classes A+ and A-, with the support vectors lying on the two bounding planes)

4 Standard Support Vector Machine Algebra of the 2-Category Linearly Separable Case  Given m points in n-dimensional space  Represented by an m-by-n matrix A  Membership of each point A_i in class +1 or -1 specified by an m-by-m diagonal matrix D with +1 and -1 entries  Separate by two bounding planes, x'w = γ + 1 and x'w = γ - 1: A_i w ≥ γ + 1 for D_ii = +1, and A_i w ≤ γ - 1 for D_ii = -1  More succinctly: D(Aw - eγ) ≥ e, where e is a vector of ones.

5 Standard Support Vector Machine Formulation  Solve the quadratic program for some ν > 0: min_{w,γ,y} ν e'y + (1/2)||w||²  s.t.  D(Aw - eγ) + y ≥ e, y ≥ 0  (QP), where D_ii = ±1 denotes A+ or A- membership  The margin between the bounding planes is maximized by minimizing (1/2)||w||²
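
As a hedged illustration only (not part of the talk), the quadratic program above could be set up for MATLAB's quadprog roughly as follows; the function name svm_qp and the variable ordering z = [w; gamma; y] are my own choices:

function [w, gamma] = svm_qp(A, d, nu)
% Sketch of the standard linear SVM QP: min nu*e'*y + 0.5*w'*w
% s.t. D*(A*w - e*gamma) + y >= e, y >= 0, with d = diag(D).
[m, n] = size(A); e = ones(m,1); D = spdiags(d, 0, m, m);
Q = sparse(1:n, 1:n, 1, n+1+m, n+1+m);       % quadratic term acts on w only
f = [zeros(n+1,1); nu*e];                    % linear term: nu*e'*y
Aineq = [-D*A, d, -speye(m)];                % -(D*A)*w + d*gamma - y <= -e
bineq = -e;
lb = [-inf(n+1,1); zeros(m,1)];              % only y is bounded below by 0
z = quadprog(Q, f, Aineq, bineq, [], [], lb, []);
w = z(1:n); gamma = z(n+1);                  % classifier: sign(x'*w - gamma)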

6 Proximal Support Vector Machines (KDD 2002) Fitting the Data Using Two Parallel Bounding Planes A+ A-

7 PSVM Formulation We have from the QP SVM formulation: min_{w,γ,y} ν e'y + (1/2)||w||²  s.t.  D(Aw - eγ) + y ≥ e, y ≥ 0  (QP). Change e'y to (1/2)||y||², make the inequality constraint an equality, and add (1/2)γ² to the objective: min_{w,γ,y} (ν/2)||y||² + (1/2)||[w; γ]||²  s.t.  D(Aw - eγ) + y = e. This simple but critical modification changes the nature of the optimization problem tremendously! Solving for y in terms of w and γ gives the unconstrained problem: min_{w,γ} (ν/2)||D(Aw - eγ) - e||² + (1/2)||[w; γ]||²

8 Advantages of New Formulation  Objective function remains strongly convex  An explicit exact solution can be written in terms of the problem data  PSVM classifier is obtained by solving a single system of linear equations in the usually small dimensional input space  Exact leave-one-out-correctness can be obtained in terms of problem data

9 Linear PSVM We want to solve: min_{w,γ} (ν/2)||D(Aw - eγ) - e||² + (1/2)||[w; γ]||²  Setting the gradient with respect to [w; γ] equal to zero gives a nonsingular system of linear equations  Solution of the system gives the desired PSVM classifier

10 Linear PSVM Solution [w; γ] = (I/ν + H'H)⁻¹ H'De, where H = [A -e]  The linear system to solve depends on H'H, which is of size (n+1) x (n+1)  n+1 is usually much smaller than m

11 Linear Proximal SVM Algorithm
 Input: A, D, ν
 Define: H = [A -e]
 Solve: (I/ν + H'H) r = H'De
 Calculate: [w; γ] = r
 Classifier: sign(x'w - γ)

12 Nonlinear PSVM Formulation  Linear PSVM (linear separating surface x'w = γ): min_{w,γ,y} (ν/2)||y||² + (1/2)||[w; γ]||²  s.t.  D(Aw - eγ) + y = e  By QP "duality", w = A'Du. Maximizing the margin in the "dual space" gives: min_{u,γ,y} (ν/2)||y||² + (1/2)||[u; γ]||²  s.t.  D(AA'Du - eγ) + y = e  Replace AA' by a nonlinear kernel K(A, A'): min_{u,γ,y} (ν/2)||y||² + (1/2)||[u; γ]||²  s.t.  D(K(A, A')Du - eγ) + y = e

13 The Nonlinear Classifier  The nonlinear classifier: sign(K(x', A')Du - γ)  Where K is a nonlinear kernel, e.g. the Gaussian (radial basis) kernel: K(A, A')_ij = exp(-μ||A_i - A_j||²)  The ij-entry of K(A, A') represents the "similarity" of data points A_i and A_j
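
A minimal sketch (my own, not from the talk) of how this kernel matrix could be computed in MATLAB; the function name gaussian_kernel and the width parameter mu are assumed names:

function K = gaussian_kernel(A, B, mu)
% K(i,j) = exp(-mu*||A(i,:) - B(j,:)||^2): Gaussian (radial basis) kernel.
% Uses implicit expansion (MATLAB R2016b or later).
sqdist = sum(A.^2, 2) + sum(B.^2, 2)' - 2*(A*B');
K = exp(-mu*sqdist);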

14 Nonlinear PSVM  Defining v = Du (note v'v = u'u since D² = I) and H = [K(A, A') -e], and setting the gradient equal to zero as in the linear case, we obtain: [v; γ] = (I/ν + H'H)⁻¹ H'De  Here the linear system to solve is of size (m+1) x (m+1)  However, reduced kernel techniques (RSVM) can be used to reduce dimensionality.
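
A hedged sketch of this nonlinear solve, mirroring the linear MATLAB code on slide 16 but with the kernel block (the function name psvm_nl is my own, and it assumes the gaussian_kernel helper sketched above):

function [v, gamma] = psvm_nl(A, d, nu, mu)
% Nonlinear PSVM sketch: G = [K(A,A') -e], solve for [v; gamma],
% where v plays the role of D*u (v'*v = u'*u since D^2 = I).
[m, ~] = size(A); e = ones(m,1);
K = gaussian_kernel(A, A, mu);          % kernel matrix, assumed helper
G = [K -e];
r = (speye(m+1)/nu + G'*G) \ (d'*G)';   % solve (I/nu + G'*G) r = G'*D*e
v = r(1:m); gamma = r(m+1);
% classify new rows Xnew: sign(gaussian_kernel(Xnew, A, mu)*v - gamma)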

15 Nonlinear Proximal SVM Algorithm
 Input: A, D, ν
 Define: K = K(A, A'), H = [K -e]
 Solve: (I/ν + H'H) r = H'De
 Calculate: [v; γ] = r
 Classifier: sign(K(x', A')v - γ)

16 Linear & Nonlinear PSVM MATLAB Code
function [w, gamma] = psvm(A,d,nu)
% PSVM: linear and nonlinear classification
% INPUT: A, d=diag(D), nu. OUTPUT: w, gamma
% [w, gamma] = psvm(A,d,nu);
[m,n]=size(A); e=ones(m,1); H=[A -e];
v=(d'*H)';                    % v = H'*D*e
r=(speye(n+1)/nu+H'*H)\v;     % solve (I/nu + H'*H) r = v
w=r(1:n); gamma=r(n+1);       % getting w, gamma from r
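
A hypothetical usage sketch of the function above (the synthetic two-cloud data and the values of m, n, nu are my own illustration, not from the talk):

m = 200; n = 2; nu = 1;
A = [randn(m/2,n)+1; randn(m/2,n)-1];            % two Gaussian clouds in 2-D
d = [ones(m/2,1); -ones(m/2,1)];                 % class labels +1 / -1
[w, gamma] = psvm(A, d, nu);
trainCorrectness = mean(sign(A*w - gamma) == d)  % classifier: sign(x'*w - gamma)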

17 Linear PSVM Comparisons with Other SVMs: Much Faster, Comparable Correctness
(each cell: ten-fold test correctness % / time in seconds)
Data Set (m x n)            PSVM           SSVM           SVM
WPBC (60 mo.) 110 x 32      68.5 / 0.02    68.5 / 0.17    62.7 / 3.85
Ionosphere 351 x 34         87.3 / 0.17    88.7 / 1.23    88.0 / 2.19
Cleveland Heart 297 x 13    85.9 / 0.01    86.2 / 0.70    86.5 / 1.44
Pima Indians 768 x 8        77.5 / 0.02    77.6 / 0.78    76.4 / 37.00
BUPA Liver 345 x 6          69.4 / 0.02    70.0 / 0.78    69.5 / 6.65
Galaxy Dim 4192 x 14        93.5 / 0.34    95.0 / 5.21    94.1 / 28.33

18 Linear PSVM vs LSVM on a 2-Million-Point Dataset: Over 30 Times Faster
Dataset       Method   Training Correctness %   Testing Correctness %   Time (sec.)
NDC "Easy"    LSVM     90.86                    91.23                   658.5
              PSVM     90.80                    91.13                    20.8
NDC "Hard"    LSVM     69.80                    69.44                   655.6
              PSVM     69.84                    69.52                    20.6

19 Nonlinear PSVM: Spiral Dataset 94 Red Dots & 94 White Dots

20 Nonlinear PSVM Comparisons
(each cell: ten-fold test correctness % / time in seconds)
Data Set (m x n)        PSVM            SSVM             LSVM
Ionosphere 351 x 34     95.2 / 4.60     95.8 / 25.25     95.8 / 14.58
BUPA Liver 345 x 6      73.6 / 4.34     73.7 / 20.65     73.7 / 30.75
Tic-Tac-Toe 958 x 9     98.4 / 74.95    98.4 / 395.30    94.7 / 350.64
Mushroom * 8124 x 22    88.0 / 35.50    88.8 / 307.66    87.8 / 503.74
* A rectangular kernel of size 8124 x 215 was used

21 Conclusion  PSVM is an extremely simple procedure for generating linear and nonlinear classifiers  A linear PSVM classifier is obtained by solving a single system of linear equations in the usually small-dimensional input space  Comparable test set correctness to standard SVM  Much faster than standard SVMs: typically an order of magnitude less time

22 Incremental PSVM Classification (Second SIAM Data Mining Conference)  Suppose we have two "blocks" of data: A = [A1; A2], with H1 = [A1 -e] and H2 = [A2 -e]  The linear system to solve depends only on the compressed blocks H'H = H1'H1 + H2'H2 and H'De = H1'D1e + H2'D2e, which are of size (n+1) x (n+1) and (n+1) x 1

23 Linear Incremental Proximal SVM Algorithm (flowchart)
 Initialization: set the running sums of H_i'H_i and H_i'D_ie to zero
 Read a block A_i, D_i from disk
 Compute H_i'H_i and H_i'D_ie and update the sums stored in memory
 Discard: A_i, D_i, H_i.  Keep: the (n+1) x (n+1) and (n+1) x 1 sums
 More blocks? Yes: read the next block from disk. No: solve the linear system and compute the output w, γ
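
A minimal sketch of the loop above (my own illustration; has_next_block and read_next_block are hypothetical helpers standing in for the disk reader):

M = zeros(n+1, n+1); q = zeros(n+1, 1);       % running sums kept in memory
while has_next_block()                        % hypothetical: any blocks left on disk?
    [Ai, di] = read_next_block();             % hypothetical reader: mi x n data, mi x 1 labels
    Hi = [Ai -ones(size(Ai,1),1)];
    M = M + Hi'*Hi;                           % update compressed block H'*H
    q = q + (di'*Hi)';                        % update H'*D*e
end                                           % each Ai, di, Hi is discarded after use
r = (eye(n+1)/nu + M) \ q;
w = r(1:n); gamma = r(n+1);                   % classifier: sign(x'*w - gamma)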

24 Linear Incremental Proximal SVM: Adding and Retiring Data  Capable of modifying an existing linear classifier by both adding and retiring data  Option of retiring old data, handled much like adding new data (e.g. financial data, where old data becomes obsolete)  Option of keeping old data and merging it with the new data (e.g. medical data, where old data does not become obsolete)

25 Numerical Experiments One-Billion Two-Class Dataset  Synthetic dataset consisting of 1 billion points in 10-dimensional input space  Generated by the NDC (Normally Distributed Clusters) dataset generator  Dataset divided into 500 blocks of 2 million points each  Solution obtained in less than 2 hours and 26 minutes  About 30% of the time was spent reading data from disk  Testing set correctness: 90.79%

26 Numerical Experiments Simulation of a Two-Month, 60-Million-Point Dataset  Synthetic dataset consisting of 60 million points (1 million per day) in 10-dimensional input space  Generated using NDC  At the beginning, we only have data corresponding to the first month  Every day: the oldest block of data (1 million points) is retired and a new block (1 million points) is added  A new linear classifier is calculated daily  Only an 11 x 11 matrix is kept in memory at the end of each day; all other data is purged
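
The daily update described on this slide could look roughly like the following sketch, a continuation of the incremental sketch above (Hold, dold, Hnew, dnew are my own names for the retired and newly added daily blocks):

M = M - Hold'*Hold + Hnew'*Hnew;        % retire the oldest block, add the newest
q = q - (dold'*Hold)' + (dnew'*Hnew)';
r = (eye(n+1)/nu + M) \ q;              % only an (n+1) x (n+1) system, 11 x 11 here
w = r(1:n); gamma = r(n+1);             % the new daily classifier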

27 Numerical experiments Separator changing through time

28 Numerical experiments Normals to the separating hyperplanes corresponding to 5-day intervals

29 Conclusion  Proposed algorithm is an extremely simple procedure for generating linear classifiers in an incremental fashion for huge datasets.  The linear classifier is obtained by solving a single system of linear equations in the small dimensional input space.  The proposed algorithm has the ability to retire old data and add new data in a very simple manner.  Only a matrix of the size of the input space is kept in memory at any time

30 Support Vector Machines Linear Programming Formulation  Use the 1-norm instead of the 2-norm: min_{w,γ,y} ν e'y + ||w||_1  s.t.  D(Aw - eγ) + y ≥ e, y ≥ 0  This is equivalent to the following linear program: min_{w,γ,y,s} ν e'y + e's  s.t.  D(Aw - eγ) + y ≥ e, -s ≤ w ≤ s, y ≥ 0
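
As a hedged illustration only (not from the talk), this LP could be posed for MATLAB's linprog roughly as follows; the function name svm_lp and the variable ordering z = [w; gamma; y; s] are my own choices:

function [w, gamma] = svm_lp(A, d, nu)
% Sketch of the 1-norm SVM LP: min nu*e'*y + e'*s
% s.t. D*(A*w - e*gamma) + y >= e, -s <= w <= s, y >= 0, with d = diag(D).
[m, n] = size(A); e = ones(m,1); D = spdiags(d, 0, m, m);
f = [zeros(n+1,1); nu*e; ones(n,1)];
Aineq = [ -D*A,      d,            -speye(m),   sparse(m,n);    % classification constraints
           speye(n), sparse(n,1),  sparse(n,m), -speye(n);      %  w - s <= 0
          -speye(n), sparse(n,1),  sparse(n,m), -speye(n)];     % -w - s <= 0
bineq = [-e; zeros(2*n,1)];
lb = [-inf(n+1,1); zeros(m,1); zeros(n,1)];                     % y >= 0, s >= 0
z = linprog(f, Aineq, bineq, [], [], lb, []);
w = z(1:n); gamma = z(n+1);                                     % classifier: sign(x'*w - gamma)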

31 Conventional Data-Based SVM

32 -20-15-10-505 -45 -40 -35 -30 -25 -20 -15 {x | B 1 x  b 1 } x'w=   +1 {x | C 1 x  c 1 } 2 x  c 2 } x'w=  A - A + Knowledge-Based SVM via Polyhedral Knowledge Sets (NIPS 2002)

33 Incorporating Knowledge Sets Into an SVM Classifier  Suppose that the knowledge set {x | Bx ≤ b} belongs to the class A+. Hence it must lie in the halfspace {x | x'w ≥ γ + 1}  We therefore have the implication: Bx ≤ b ⟹ x'w ≥ γ + 1  Will show that this implication is equivalent to a set of constraints that can be imposed on the classification problem.

34 Knowledge Set Equivalence Theorem
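
A hedged reconstruction of the equivalence (my recollection of the NIPS 2002 knowledge-based SVM result; the slide's own statement is an assumption here, not a verbatim copy):

\{x \mid Bx \le b\} \neq \emptyset \ \text{and} \ \{x \mid Bx \le b\} \subseteq \{x \mid x'w \ge \gamma + 1\}
\;\Longleftrightarrow\;
\exists\, u \ge 0 :\; B'u + w = 0,\;\; b'u + \gamma + 1 \le 0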

35 Proof of Equivalence Theorem (Via Nonhomogeneous Farkas or LP Duality) Proof: By LP duality:

36 Knowledge-Based SVM Classification

37  Adding one set of constraints for each knowledge set to the 1-norm SVM LP, we have:

38 Parametrized Knowledge-Based LP

39 Numerical Testing The Promoter Recognition Dataset  Promoter: Short DNA sequence that precedes a gene sequence.  A promoter consists of 57 consecutive DNA nucleotides belonging to {A,G,C,T}.  Important to distinguish between promoters and nonpromoters  This distinction identifies starting locations of genes in long uncharacterized DNA sequences.

40 The Promoter Recognition Dataset Numerical Representation  Simple "1 of N" mapping scheme for converting nominal attributes into a real-valued representation  Not the most economical representation, but commonly used.

41 The Promoter Recognition Dataset Numerical Representation  Feature space is mapped from the 57-dimensional nominal space to a real-valued 57 x 4 = 228 dimensional space: 57 nominal values become 57 x 4 = 228 binary values
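
An illustrative sketch of the "1 of N" mapping for one sequence (my own example; the ordering A, G, C, T and the placeholder sequence are assumptions):

seq = repmat('AGCT', 1, 15); seq = seq(1:57);  % placeholder 57-nucleotide string
alphabet = 'AGCT';                             % assumed ordering of the 4 values
X = zeros(numel(seq), 4);                      % one row of 4 binaries per nucleotide
for i = 1:numel(seq)
    X(i, alphabet == seq(i)) = 1;              % e.g. A -> 1 0 0 0, G -> 0 1 0 0
end
x = reshape(X', 1, []);                        % 1 x 228 real-valued feature vector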

42 Promoter Recognition Dataset Prior Knowledge Rules  Prior knowledge consists of the following 64 rules:

43 Promoter Recognition Dataset Sample Rules  Each symbol denotes the position of a nucleotide with respect to a meaningful reference point, with the sequence starting and ending at fixed positions. Then:

44 The Promoter Recognition Dataset Comparative Algorithms  KBANN Knowledge-based artificial neural network [Shavlik et al]  BP: Standard back propagation for neural networks [Rumelhart et al]  O’Neill’s Method Empirical method suggested by biologist O’Neill [O’Neill]  NN: Nearest neighbor with k=3 [Cost et al]  ID3: Quinlan’s decision tree builder[Quinlan]  SVM1: Standard 1-norm SVM [Bradley et al]

45 The Promoter Recognition Dataset Comparative Test Results

46 Wisconsin Breast Cancer Prognosis Dataset Description of the data  110 instances corresponding to 41 patients whose cancer had recurred and 69 patients whose cancer had not recurred  32 numerical features  The domain theory: two simple rules used by doctors:

47 Wisconsin Breast Cancer Prognosis Dataset Numerical Testing Results  Doctor's rules are applicable to only 32 out of the 110 patients  Only 22 of these 32 patients are classified correctly by the rules (20% correctness over all 110 patients)  The KSVM linear classifier is applicable to all patients, with correctness of 66.4%  Correctness comparable to the best available results using conventional SVMs  KSVM can generate classifiers from knowledge alone, without using any data

48 Conclusion  Prior knowledge easily incorporated into classifiers through polyhedral knowledge sets.  Resulting problem is a simple LP.  Knowledge sets can be used with or without conventional labeled data.  In either case KSVM is better than most knowledge based classifiers.

49 Breast Cancer Treatment Response Joint with ExonHit ( French BioTech)  35 patients treated by a drug cocktail  9 partial responders; 26 nonresponders  25 gene expression measurements made on each patient  1-Norm SVM classifier selected: 12 out of 25 genes  Combinatorially selected 6 genes out of 12  Separating plane obtained: 2.7915 T11 + 0.13436 S24 -1.0269 U23 -2.8108 Z23 -1.8668 A19 -1.5177 X05 +2899.1 = 0.  Leave-one-out-error: 1 out of 35 (97.1% correctness)

50 Other papers:  A Fast and Global Two-Point Low-Storage Optimization Technique for Tracing Rays in 2D and 3D Isotropic Media (Journal of Applied Geophysics)  Semi-Supervised Support Vector Machines for Unlabeled Data Classification (Optimization Methods and Software)  Select a small subset of an unlabeled dataset to be labeled by an oracle or expert  Use the new labeled data and the remaining unlabeled data to train an SVM classifier

51 Other papers:  Multicategory Proximal SVM Classifiers  Fast multicategory algorithm based on PSVM  Newton refinement step proposed  Data Selection for SVM Classifiers (KDD 2000)  Reduce the number of support vectors of a linear SVM  Minimal Kernel Classifiers (JMLR)  Use a concave minimization formulation to reduce the SVM model complexity.  Useful for online testing where testing time is an issue.

52 Other papers:  A Feature Selection Newton Method for SVM Classification  LP SVM solved using a Newton method  Very sparse solutions are obtained  Finite Newton Method for Lagrangian SVM Classifiers (Neurocomputing Journal)  Very fast performance, especially when n > m

