
1 Second order cone programming approaches for handling missing and uncertain data P. K. Shivaswamy, C. Bhattacharyya and A. J. Smola Discussion led by Qi An, Mar 30th, 2007

2 Outline Missing and uncertain data problem Problem formulation Classification with uncertainty Extensions Experimental results Conclusions

3 Missing data problem Consider a classification or regression problem with missing data (missing values occur only in the features, not the labels). –Traditional method: simple imputation –Proposed method: robust estimation The classification or regression problem can be formulated as an optimization problem as long as we have information on the first and second moments of the data.

4 Classification problem with missing data Compute the sample mean and covariance for each class (binary classification here) from the available observations. Impute the missing data with their conditional means. The classification problem can then be reformulated as an optimization problem, as shown on the next slide.

5 Classification problem with missing data Suppose we are given n data points, where the first c points have complete feature vectors and the last (n − c) points have feature vectors with missing values. First, compute the sample mean and covariance for each class (binary here) from the available observations. Then, decomposing the sample mean and covariance into observed and missing blocks, we can compute an imputed mean and covariance for any data vector with missing values, as sketched below.
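A minimal sketch of this imputation step under a Gaussian model (this code is not from the paper; the helper name and the NaN convention for missing entries are my own, but the block decomposition follows the standard Gaussian conditioning formulas):

```python
import numpy as np

def conditional_impute(x, mu, Sigma):
    """Fill missing entries (np.nan) of x with their conditional mean given
    the observed entries, and return the conditional covariance of the
    missing block:  x_m | x_o ~ N(mu_m + S_mo S_oo^{-1} (x_o - mu_o),
                                  S_mm - S_mo S_oo^{-1} S_om)."""
    m = np.isnan(x)                       # mask of missing entries
    o = ~m                                # mask of observed entries
    S_oo = Sigma[np.ix_(o, o)]
    S_mo = Sigma[np.ix_(m, o)]
    S_mm = Sigma[np.ix_(m, m)]
    A = S_mo @ np.linalg.inv(S_oo)        # regression of missing on observed
    x_imp = x.copy()
    x_imp[m] = mu[m] + A @ (x[o] - mu[o])
    cov_m = S_mm - A @ S_mo.T             # conditional covariance
    return x_imp, cov_m
```

The returned conditional covariance is exactly the Σ_i that enters the robust constraint on the later slides.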

6 Then the classification problem is equivalent to the following optimization problem.
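The equation on this slide was an image; the following is a hedged reconstruction of the resulting second order cone program (the formulation whose constraint 10(b) slide 13 refers to), written with the imputed means x̄_i and covariances Σ_i from slide 5:

```latex
\begin{aligned}
\min_{w,\,b,\,\xi}\quad & \sum_{i=1}^{n} \xi_i \\
\text{s.t.}\quad & y_i\,\big(w^\top \bar{x}_i + b\big) \;\ge\; 1 - \xi_i + \gamma_i\,\big\lVert \Sigma_i^{1/2} w \big\rVert_2, \qquad i = 1,\dots,n, \\
& \xi_i \ge 0, \qquad \lVert w \rVert_2 \le W .
\end{aligned}
```

For the c complete points the covariance term vanishes (Σ_i = 0), recovering the ordinary SVM slack constraint; the multipliers γ_i are defined on slide 11.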

7 Once we have estimated the weights w and b from the dataset, we can make a prediction for any new data vector x (a code sketch follows below):
1. If x has no missing values, go to step 4.
2. If x has missing values, fill the missing values x_m using the parameters of each class to get two imputed vectors x+ and x−.
3. Compute the distances d+ and d− of the imputed vectors from the hyperplane, and keep the imputed sample with the larger distance.
4. Classify the data using sgn(w'x + b).
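A minimal sketch of this prediction rule (not from the paper; conditional_impute is the hypothetical helper sketched under slide 5, and mu_pos/Sigma_pos, mu_neg/Sigma_neg are assumed to be the per-class sample moments):

```python
import numpy as np

def predict(x, w, b, mu_pos, Sigma_pos, mu_neg, Sigma_neg):
    """Classify x, imputing any missing entries (np.nan) with each class's
    conditional mean and keeping the imputation that lies farther from the
    separating hyperplane."""
    if not np.isnan(x).any():                                 # step 1
        return np.sign(w @ x + b)                             # step 4
    x_pos, _ = conditional_impute(x, mu_pos, Sigma_pos)       # step 2
    x_neg, _ = conditional_impute(x, mu_neg, Sigma_neg)
    d_pos = abs(w @ x_pos + b) / np.linalg.norm(w)            # step 3
    d_neg = abs(w @ x_neg + b) / np.linalg.norm(w)
    x_best = x_pos if d_pos >= d_neg else x_neg
    return np.sign(w @ x_best + b)                            # step 4
```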

8 Classification with certainty (SVM) Linearly separable case. The classification problem can be solved via the following optimization problem.
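The slide's equation was an image; it refers to the standard hard-margin SVM:

```latex
\begin{aligned}
\min_{w,\,b}\quad & \tfrac{1}{2}\,\lVert w \rVert_2^{2} \\
\text{s.t.}\quad & y_i\,\big(w^\top x_i + b\big) \;\ge\; 1, \qquad i = 1,\dots,n .
\end{aligned}
```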

9 Classification with certainty (SVM) Linearly non-separable case. Two equivalent formulations: the usual penalized soft-margin problem with parameter C, and a variant that minimizes the total slack subject to a second order cone constraint ||w|| ≤ W (see below). The two formulations are equivalent for suitably chosen C and W.
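The slide's equations were images; up to notation, the two formulations being compared are:

```latex
% Penalized soft-margin SVM
\begin{aligned}
\min_{w,\,b,\,\xi}\quad & \tfrac{1}{2}\lVert w \rVert_2^{2} + C \sum_{i=1}^{n} \xi_i
\quad\text{s.t.}\quad y_i\big(w^\top x_i + b\big) \ge 1 - \xi_i,\ \ \xi_i \ge 0 .
\end{aligned}

% Equivalent form with a second order cone constraint on w
\begin{aligned}
\min_{w,\,b,\,\xi}\quad & \sum_{i=1}^{n} \xi_i
\quad\text{s.t.}\quad y_i\big(w^\top x_i + b\big) \ge 1 - \xi_i,\ \ \xi_i \ge 0,\ \ \lVert w \rVert_2 \le W .
\end{aligned}
```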

10 Classification with uncertainty Here, uncertainty means that for each pair (x_i, y_i) we only have a distribution over x_i instead of a single value; x_i is therefore a random variable. In this case, we rewrite the constraint in probabilistic form: we require that the random variable x_i lie on the correct side of the decision hyperplane with probability greater than a pre-set threshold κ_i.
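Written out, the chance constraint (up to notation) is:

```latex
\Pr\big\{\, y_i\,\big(w^\top x_i + b\big) \;\ge\; 1 - \xi_i \,\big\} \;\ge\; \kappa_i , \qquad i = 1,\dots,n .
```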

11 Robust formulation: assume each x_i has mean x̄_i and covariance Σ_i, and require correct classification even for the worst distribution with those moments; the previous chance constraint then becomes a deterministic one. Normal formulation: assume each x_i has mean x̄_i and covariance Σ_i and follows a normal distribution; since we have perfect knowledge of how x_i is distributed, this should allow tighter bounds, and the chance constraint again becomes deterministic. Using the multivariate Chebyshev inequality, it can be proven that both formulations lead to the same form of optimization problem; this is summarized in Theorem 1 (see the sketch below).
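The deterministic constraints were equation images on the slide; a hedged reconstruction is that both cases yield the same second order cone constraint, differing only in the multiplier γ_i:

```latex
y_i\,\big(w^\top \bar{x}_i + b\big) \;\ge\; 1 - \xi_i + \gamma_i\,\big\lVert \Sigma_i^{1/2} w \big\rVert_2 ,
\qquad
\gamma_i =
\begin{cases}
\sqrt{\dfrac{\kappa_i}{1 - \kappa_i}} & \text{robust (Chebyshev) formulation,}\\[6pt]
\Phi^{-1}(\kappa_i) & \text{normal formulation,}
\end{cases}
```

where Φ is the standard normal CDF. Since Φ^{-1}(κ) ≤ sqrt(κ/(1 − κ)), the normal assumption indeed gives the less conservative (tighter) constraint.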

12 The resulting second order cone program can be solved efficiently with standard convex optimization methods, e.g., interior-point SOCP solvers.
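As an illustration, a minimal sketch of the robust formulation solved with cvxpy (this code is not from the paper; the function name, the small ridge added before the Cholesky factorization, and the default W are my own choices):

```python
import numpy as np
import cvxpy as cp

def robust_socp_svm(X_mean, Sigmas, y, kappa, W=10.0):
    """Solve  min sum(xi)  subject to
    y_i (w' x_i + b) >= 1 - xi_i + gamma_i ||Sigma_i^{1/2} w||_2,
    xi >= 0, ||w||_2 <= W,  with gamma_i = sqrt(kappa_i / (1 - kappa_i))."""
    n, d = X_mean.shape
    gamma = np.sqrt(kappa / (1.0 - kappa))     # Chebyshev multipliers
    w, b = cp.Variable(d), cp.Variable()
    xi = cp.Variable(n, nonneg=True)
    constraints = [cp.norm(w, 2) <= W]
    for i in range(n):
        # Upper-triangular factor S with S'S = Sigma_i (ridge for stability)
        S = np.linalg.cholesky(Sigmas[i] + 1e-9 * np.eye(d)).T
        constraints.append(
            y[i] * (X_mean[i] @ w + b) >= 1 - xi[i] + gamma[i] * cp.norm(S @ w, 2)
        )
    cp.Problem(cp.Minimize(cp.sum(xi)), constraints).solve()
    return w.value, b.value
```

For complete data points one can pass Σ_i = 0 (the norm term vanishes), recovering the ordinary slack constraint of the SOC-constrained SVM from slide 9.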

13 Geometric interpretation of constraint Constraint 10(b) can be interpreted geometrically: –If we assume x_i takes values in an ellipsoid centered at the mean x̄_i with shape Σ_i –then the robustness constraint 10(b) is equivalent to requiring the entire ellipsoid to lie on the correct side of the hyperplane (see below).
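In symbols (a hedged reconstruction of the slide's equation images): if x_i ranges over the ellipsoid E_i, the worst point of the ellipsoid determines the constraint, since

```latex
\mathcal{E}_i = \big\{\, x : (x - \bar{x}_i)^\top \Sigma_i^{-1} (x - \bar{x}_i) \le \gamma_i^{2} \,\big\},
\qquad
\min_{x \in \mathcal{E}_i} \, y_i\big(w^\top x + b\big)
= y_i\big(w^\top \bar{x}_i + b\big) - \gamma_i \big\lVert \Sigma_i^{1/2} w \big\rVert_2 ,
```

so requiring y_i(w'x + b) ≥ 1 − ξ_i for every x in the ellipsoid is exactly constraint 10(b).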

14 Error measures When classifying a point: –Worst-case error –Expected error

15 Extensions The optimization problem can be extended to: –Regression problems –Multi-class classification/regression –Different constraint sets –A kernelized formulation We now go back to the missing-feature problem.

16 Experiments On an OCR data recognition problem, SVM with simple imputation is compared against the proposed approach. Some samples were misclassified by the SVM but correctly classified by the robust classifier.

17 Ionosphere regression problem

18 Conclusions This paper proposes a second order cone programming formulation for designing robust linear prediction functions. The approach can handle uncertainty in the data vectors in both classification and regression settings. It is applicable to any uncertainty distribution, provided the first two moments are computable.

