Support Vector Regression David R. Musicant and O.L. Mangasarian International Symposium on Mathematical Programming Thursday, August 10, 2000


1 Support Vector Regression David R. Musicant and O.L. Mangasarian International Symposium on Mathematical Programming Thursday, August 10, 2000 http://www.cs.wisc.edu/~musicant

2 Outline
- Robust Regression
  - Huber M-estimator loss function
  - New quadratic programming formulation
  - Numerical comparisons
  - Nonlinear kernels
- Tolerant Regression
  - New formulation of Support Vector Regression (SVR)
  - Numerical comparisons
  - Massive regression: row-column chunking
- Conclusions & Future Work

3 Focus 1: Robust Regression (a.k.a. Huber Regression)

4 "Standard" Linear Regression
Given m points in R^n, represented by an m x n matrix A, and y in R^m, the vector to be approximated, find w in R^n and b in R such that Aw + be ≈ y (e a vector of ones).
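As a toy illustration of this setup (not from the slides: a single-feature closed form rather than the m x n matrix formulation), ordinary least squares fits w and b directly:

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature: find w, b minimizing
    sum (w*x_i + b - y_i)^2.  A single-column stand-in for the A matrix."""
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    w = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    b = ybar - w * xbar
    return w, b
```

For example, fitting the points (0,1), (1,3), (2,5) recovers the exact line y = 2x + 1.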

5 Optimization problem
- Find w, b such that Aw + be ≈ y
- Bound the error by s: -s ≤ Aw + be - y ≤ s
- Minimize the error (traditional approach: minimize the squared error, min ||s||^2)

6 Examining the loss function
- Standard regression uses a squared-error loss function
  - Points which are far from the predicted line (outliers) are overemphasized

7 Alternative loss function
- Instead of squared error, try the absolute value of the error: min ||s||_1
This is the 1-norm loss function.
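A small numeric illustration (ours, not from the slides) of how the two losses weigh a single outlier residual differently:

```python
def squared_loss(residuals):
    """Sum of squared residuals: outliers dominate quadratically."""
    return sum(r * r for r in residuals)

def one_norm_loss(residuals):
    """Sum of absolute residuals: every unit of error counts equally."""
    return sum(abs(r) for r in residuals)

residuals = [0.5, -0.5, 0.5, 10.0]  # three small errors, one outlier

# The outlier contributes 100 of the 100.75 total squared loss (~99%)...
print(squared_loss(residuals))   # 100.75
# ...but only 10 of the 11.5 total 1-norm loss (~87%).
print(one_norm_loss(residuals))  # 11.5
```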

8 1-Norm Problems and Solution
- Problem: overemphasizes error on points close to the predicted line
- Solution: the Huber loss function, a hybrid approach (quadratic for small errors, linear for large ones)
Many practitioners prefer the Huber loss function.

9 Mathematical Formulation
ρ(t) = t^2/2 for |t| ≤ γ;  ρ(t) = γ|t| - γ^2/2 for |t| > γ
γ indicates the switchover from quadratic to linear; larger γ means "more quadratic."
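The piecewise loss above translates directly to code (a sketch of the loss function itself, not of the talk's quadratic-programming solver):

```python
def huber_loss(t, gamma):
    """Huber M-estimator loss: quadratic for |t| <= gamma, linear beyond.
    gamma marks the switchover from quadratic to linear growth."""
    if abs(t) <= gamma:
        return 0.5 * t * t
    return gamma * abs(t) - 0.5 * gamma * gamma
```

Note the two branches agree at |t| = gamma (both give gamma^2/2), so the loss is continuous, and its derivative is continuous there as well.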

10 Regression Approach Summary
- Quadratic loss function
  - Standard method in statistics
  - Overemphasizes outliers
- Linear loss function (1-norm)
  - Formulates well as a linear program
  - Overemphasizes small errors
- Huber loss function (hybrid approach)
  - Appropriate emphasis on both large and small errors

11 Previous approaches were complicated
- Earlier efforts to solve Huber regression:
  - Huber: Gauss-Seidel method
  - Madsen/Nielsen: Newton method
  - Li: conjugate gradient method
  - Smola: dual quadratic program
- Our new approach: a convex quadratic program
Our new approach is simpler and faster.

12 Experimental Results: Census20k (20,000 points, 11 features)
[Timing chart, CPU seconds: our method is faster.]

13 Experimental Results: CPUSmall (8,192 points, 12 features)
[Timing chart, CPU seconds: our method is faster.]

14 Introduce nonlinear kernel
- Begin with the previous formulation
- Substitute w = A'α and minimize α instead
- Substitute K(A,A') for AA'
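A minimal sketch of what this substitution buys (ours, not from the slides; the RBF kernel is one common nonlinear choice, assumed here): with w = A'α, the linear predictor w·x + b becomes sum_i α_i K(a_i, x) + b, so swapping the kernel changes the surface without changing the code path.

```python
import math

def linear_kernel(u, v):
    """Plain inner product: K(u, v) = u . v, recovering the linear model."""
    return sum(ui * vi for ui, vi in zip(u, v))

def rbf_kernel(u, v, sigma=1.0):
    """Gaussian (RBF) kernel, an assumed example of a nonlinear kernel."""
    d2 = sum((ui - vi) ** 2 for ui, vi in zip(u, v))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def predict(x, A, alpha, b, kernel):
    """With w = A' alpha, the prediction w.x + b is expressed purely
    through kernel evaluations: sum_i alpha_i * K(a_i, x) + b."""
    return sum(a * kernel(row, x) for a, row in zip(alpha, A)) + b
```

With the linear kernel and A the identity, alpha plays the role of w directly: predict([4, 5], [[1, 0], [0, 1]], [2, 3], 1, linear_kernel) equals 2*4 + 3*5 + 1 = 24.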

15 Nonlinear results: nonlinear kernels improve accuracy.

16 Focus 2: Support Vector Tolerant Regression

17 Regression Approach Summary (recap)
- Quadratic loss function: standard method in statistics; overemphasizes outliers
- Linear loss function (1-norm): formulates well as a linear program; overemphasizes small errors
- Huber loss function (hybrid approach): appropriate emphasis on both large and small errors

18 Optimization problem
- Find w, b such that Aw + be ≈ y
- Bound the error by s: -s ≤ Aw + be - y ≤ s
- Minimize the error: min ||s||_1, i.e., minimize the magnitude of the error

19 The overfitting issue
- Noisy training data can be fitted "too well", leading to poor generalization on future data
- Prefer simpler regressions, i.e. where
  - some w coefficients are zero
  - the line is "flatter"

20 Reducing overfitting
- To achieve both goals, also minimize the magnitude of the w vector
- C is a parameter that balances the two goals, chosen by experimentation
- This reduces overfitting due to points far from the surface
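A 1-norm sketch of the combined objective (ours; the talk's exact formulation is a linear program, and the placement of C varies by convention):

```python
def regularized_objective(w, errors, C):
    """Trade off data error against the magnitude of w (a flatter surface).
    C balances the two goals: large C emphasizes fitting the data,
    small C emphasizes flatness.  Both terms use the 1-norm here."""
    data_term = sum(abs(e) for e in errors)
    flatness_term = sum(abs(wi) for wi in w)
    return C * data_term + flatness_term
```

For instance, with w = [1, -2], errors = [0.5, 0.5], and C = 10, the objective is 10 * 1.0 + 3.0 = 13.0; shrinking C shifts the preference toward smaller w.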

21 Overfitting again: "close" points
- "Close" points may be wrong due to noise only
  - The line should be influenced by "real" data, not noise
- Ignore errors from those points which are close!

22 Tolerant regression
- Allow an interval of size ε with uniform error
- How large should ε be? As large as possible, while preserving accuracy
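The tolerance interval corresponds to the standard ε-insensitive loss of support vector regression; a minimal sketch (ours, not from the slides):

```python
def eps_insensitive_loss(t, eps):
    """Errors inside the tolerance interval of size eps cost nothing;
    beyond the interval, cost grows linearly with the excess."""
    return max(0.0, abs(t) - eps)
```

So a residual of 0.3 with eps = 0.5 is tolerated for free, while a residual of 1.5 costs only the 1.0 by which it exceeds the interval.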

23 How about a nonlinear surface?

24 Introduce nonlinear kernel
- Begin with the previous formulation
- Substitute w = A'α and minimize α instead
- Substitute K(A,A') for AA', where K(A,A') is a nonlinear kernel function

25 Equivalent to Smola, Schölkopf, Rätsch (SSR) Formulation
- Our formulation: a single error bound, with the tolerance as a constraint

26 The Smola, Schölkopf, Rätsch formulation: multiple error bounds

27 Reduction in:
- Variables: 4m+2 -> 3m+2
- Solution time

29 Natural interpretation for μ
- Our linear program is equivalent to the classical stabilized least 1-norm approximation problem
- Perturbation theory results show there exists a fixed μ̄ such that for all μ in (0, μ̄]:
  - we solve the above stabilized least 1-norm problem
  - additionally, we maximize ε, the least error component
- As μ goes from 0 to 1, the least error component ε is a monotonically nondecreasing function of μ

30 Numerical Testing
- Two sets of tests:
  - Compare computational times of our method (MM) and the SSR method
  - Row-column chunking for massive datasets
- Datasets:
  - US Census Bureau Adult dataset: 300,000 points in R^11
  - Delve Comp-Activ dataset: 8,192 points in R^13
  - UCI Boston Housing dataset: 506 points in R^13
  - Gaussian noise was added to each of these datasets
- Hardware: Locop2, a Dell PowerEdge 6300 server with:
  - four gigabytes of memory, 36 gigabytes of disk space
  - Windows NT Server 4.0
  - CPLEX 6.5 solver

31 Experimental Process
- μ is a parameter which needs to be determined experimentally
- Use a hold-out tuning set to determine the optimal value for μ
- Algorithm:

    μ = 0
    while (tuning set accuracy continues to improve) {
        solve LP
        μ = μ + 0.1
    }

- Run for both our method and the SSR method and compare times
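The tuning loop above can be sketched as follows (ours; `solve_lp` and `tuning_error` are hypothetical stand-ins for the LP solver and the hold-out evaluation):

```python
def tune_mu(solve_lp, tuning_error, step=0.1):
    """Increase mu from 0 in increments of `step` while hold-out accuracy
    keeps improving.  `solve_lp(mu)` returns a fitted model;
    `tuning_error(model)` returns its error on the tuning set."""
    mu = 0.0
    best_mu = 0.0
    best_err = float("inf")
    while True:
        err = tuning_error(solve_lp(mu))
        if err >= best_err:          # accuracy stopped improving
            break
        best_err, best_mu = err, mu
        mu += step
    return best_mu, best_err
```

For example, if the tuning error happens to be minimized near mu = 0.3, the loop stops at the first step where the error fails to improve and reports the best mu seen.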

32 Comparison Results
[Timing table not captured in the transcript.]

33 Linear Programming Row Chunking
- Basic approach (PSB/OLM) for classification problems
- The classification problem is solved for a subset, or chunk, of constraints (data points)
- Constraints with positive multipliers (support vectors) are preserved and integrated into the next chunk
- The objective function is monotonically nondecreasing
- The dataset is repeatedly scanned until the objective function stops increasing

34 Innovation: Simultaneous Row-Column Chunking
- Row chunking
  - Cannot handle problems with large numbers of variables
  - Therefore: linear kernel only
- Row-column chunking
  - New data increase the dimensionality of K(A,A') by adding both rows and columns (variables) to the problem
  - We handle this with row-column chunking
  - General nonlinear kernel

35 Row-Column Chunking Algorithm

    while (problem termination criteria not satisfied) {
        choose set of rows as row chunk
        while (row chunk termination criteria not satisfied) {
            from row chunk, select set of columns
            solve LP allowing only these columns to vary
            add columns with nonzero values to next column chunk
        }
        add rows with nonzero multipliers to next row chunk
    }

36 Row-Column Chunking Diagram

[Diagram animation continues through slide 42.]

43 Chunking Experimental Results

44 Objective Value & Tuning Set Error for a Billion-Element Matrix

45 Conclusions and Future Work
- Conclusions
  - Robust regression can be modeled simply and efficiently as a quadratic program
  - Tolerant regression can be handled more efficiently using improvements on previous formulations
  - Row-column chunking is a new approach which can handle massive regression problems
- Future work
  - Chunking via parallel and distributed approaches
  - Scaling Huber regression to larger problems

46 Questions?

47 LP Perturbation Regime #1
- Our LP is as formulated above.
- When μ = 0, the solution is the stabilized least 1-norm solution.
- Therefore, by LP perturbation theory, there exists a fixed μ̄ > 0 such that the solution to the LP with μ in (0, μ̄] is a solution to the least 1-norm problem that also maximizes ε.

48 LP Perturbation Regime #2
- Our LP can be rewritten in an equivalent form.
- Similarly, by LP perturbation theory, there exists a fixed threshold such that, for μ beyond it, the solution to the LP is the one that minimizes the least error (ε) among all minimizers of the average tolerated error.

49 Motivation for dual variable substitution
- Primal and dual formulations [equations not captured in the transcript]

