
1 Efficient Computation of Robust Low-Rank Matrix Approximations in the Presence of Missing Data using the L1 Norm. Anders Eriksson and Anton van den Hengel, CVPR 2010.

2 Usual low-rank approximation under the L2 norm: the SVD. Low-rank approximation under the L2 norm with missing data: the Wiberg algorithm. This paper: "robust" low-rank approximation in the presence of – missing data – outliers, using the L1 norm, via a generalization of the Wiberg algorithm. Throughout, Y ≈ UV, where Y is M×N, U is M×R, and V is R×N.

3 Problem. Find the low-rank factorization minimizing the L1 error over the known entries:

\min_{U,V} \; \| W \odot (Y - UV) \|_{1}

where \odot denotes the elementwise (Hadamard) product and W is the indicator matrix: w_ij = 1 if y_ij is known, else 0.
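For concreteness, a minimal NumPy sketch of this masked objective (the function name is illustrative, not from the paper):

```python
import numpy as np

def l1_masked_residual(Y, W, U, V):
    """L1 norm of the residual over the known entries of Y.

    Y : (m, n) measurement matrix (unknown entries may hold any value)
    W : (m, n) indicator matrix, 1 where y_ij is known, 0 otherwise
    U : (m, r), V : (r, n) low-rank factors
    """
    return np.abs(W * (Y - U @ V)).sum()
```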

4 Wiberg Algorithm. The matrix W indicates the presence/absence of elements. From: "On the Wiberg algorithm for matrix factorization in the presence of missing components", Okatani et al., IJCV 2006.

5 Alternating Least Squares. To find the minimum of φ, set its derivatives with respect to u and v to zero and consider the two resulting equations independently. Starting from initial estimates u0 and v0, alternately update u from v and v from u. Converges very slowly, especially with missing components and strong noise. From: "On the Wiberg algorithm for matrix factorization in the presence of missing components", Okatani et al., IJCV 2006.
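A minimal sketch of such an alternating scheme under the masked L2 objective (illustrative, not the paper's code; each update is an ordinary least-squares solve over the observed entries of one row or column):

```python
import numpy as np

def als_masked(Y, W, r, iters=100, seed=0):
    """Alternating least squares for min ||W * (Y - U V)||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((r, n))
    for _ in range(iters):
        # Update each column of V with U fixed: a small least-squares
        # problem over the rows observed in that column.
        for j in range(n):
            obs = W[:, j] > 0
            if obs.any():
                V[:, j], *_ = np.linalg.lstsq(U[obs], Y[obs, j], rcond=None)
        # Update each row of U with V fixed, symmetrically.
        for i in range(m):
            obs = W[i, :] > 0
            if obs.any():
                U[i, :], *_ = np.linalg.lstsq(V[:, obs].T, Y[i, obs], rcond=None)
    return U, V
```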

6 Back to Wiberg. In non-linear least-squares problems with multiple groups of parameters, fixing one group often makes minimization over the remaining parameters a simple problem, e.g., a linear one, so a closed-form solution can be found. Wiberg applied this idea to the factorization of a matrix with missing components. From: "On the Wiberg algorithm for matrix factorization in the presence of missing components", Okatani et al., IJCV 2006.

7 Back to Wiberg. For a fixed u, the L2-norm objective becomes a linear least-squares problem in v: compute the optimal v*(u) in closed form. Then apply the Gauss-Newton method to the resulting non-linear least-squares problem in u alone to find the optimal u*. The derivative is easy to compute because of the L2 norm. From: "On the Wiberg algorithm for matrix factorization in the presence of missing components", Okatani et al., IJCV 2006.
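A sketch of this "projected" structure (illustrative; the real Wiberg algorithm derives the Jacobian of the reduced residual analytically, while this sketch only shows the closed-form inner solve and the reduced residual that Gauss-Newton operates on):

```python
import numpy as np

def v_star(Y, W, U):
    """Closed-form optimal V for fixed U under the masked L2 objective."""
    r, n = U.shape[1], Y.shape[1]
    V = np.zeros((r, n))
    for j in range(n):
        obs = W[:, j] > 0
        if obs.any():
            V[:, j], *_ = np.linalg.lstsq(U[obs], Y[obs, j], rcond=None)
    return V

def reduced_residual(Y, W, U):
    """Residual of the reduced problem g(U) = W * (Y - U V*(U)),
    the function to which Gauss-Newton is applied in Wiberg's method."""
    return (W * (Y - U @ v_star(Y, W, U))).ravel()
```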

8 Linear Programming and Definitions
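(The slide's definitions were shown as images. As context, a reconstruction of the standard form they build on, not a transcription of the slide: a linear program minimizes a linear cost subject to linear constraints, and an L1 residual minimization can be cast in that form with slack variables.)

```latex
% Standard-form linear program:
%   min_x  c^T x   subject to   A x = b,  x >= 0.
% An L1 minimization  min_v ||A v - b||_1  becomes an LP by introducing
% slacks t, bounding the residual by -t <= A v - b <= t, and minimizing:
\min_{v,\,t} \; \mathbf{1}^{\top} t
\quad \text{s.t.} \quad
-t \le Av - b \le t .
```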

9 (image-only slide; content not transcribed)

10 L1-Wiberg Algorithm. State the minimization problem in terms of the L1 norm:

\min_{U,V} \; \| W \odot (Y - UV) \|_{1}

Split it into problems in v and u independently: with u fixed, compute the optimal V*(U); then substitute V*(U) back to obtain a problem in u alone, \min_{U} \| W \odot (Y - U V^{*}(U)) \|_{1}.
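A minimal sketch of the inner L1 solve as a linear program, using scipy.optimize.linprog (the use of SciPy is an assumption of this sketch; the paper additionally differentiates the LP's optimal basic solution):

```python
import numpy as np
from scipy.optimize import linprog

def l1_solve(A, b):
    """Solve min_v ||A v - b||_1 as a linear program.

    Variables are x = [v, t]; minimize sum(t) subject to
      A v - b <= t   and   -(A v - b) <= t.
    """
    m, r = A.shape
    c = np.concatenate([np.zeros(r), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)],
                     [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * r + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:r]
```

Applied column by column with A = U[obs, :] and b = Y[obs, j], this yields the optimal basic solution V*(U) for the masked problem.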

11 Comparing to L2-Wiberg. V*(U) is not easily differentiable, and the objective f(u, v*(u)) is not a least-squares problem, so Gauss-Newton cannot be applied directly. Idea: let V*(U) denote the optimal basic solution of the linear program. Assuming the problem is feasible, V*(U) is differentiable, by the fundamental theorem on the differentiability of linear programs. The Jacobian needed for Gauss-Newton is therefore the derivative of the solution of a linear programming problem.
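As a reminder of why that derivative exists (a standard linear-programming fact, stated here as context rather than transcribed from the slide): at a non-degenerate optimal basic solution the active basis is locally constant, so the solution is a linear, hence differentiable, function of the problem data.

```latex
% LP:  min_x c^T x  s.t.  A x = b,  x >= 0,  with optimal basis B.
% Locally (while B remains the optimal basis) the basic solution is
x_B(b) = A_B^{-1} b, \qquad x_N = 0,
% so its derivative with respect to the right-hand side is simply
\frac{\partial x_B}{\partial b} = A_B^{-1}.
```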

12 ≈ (equation shown as an image). Add an additional term to the objective function and minimize the value of that term?

13 Results. Tested on synthetic data: randomly generated measurement matrices Y with entries drawn from a uniform distribution on [-1, 1]; 20% of the entries missing; 10% of the entries corrupted by noise drawn from [-5, 5]. Real data: the dinosaur sequence from Oxford VGG.
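A sketch of that synthetic setup (my reading of the protocol; the matrix sizes and the choice to treat "noise" as entrywise replacement are assumptions):

```python
import numpy as np

def make_synthetic(m=20, n=30, p_missing=0.2, p_noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Y = rng.uniform(-1.0, 1.0, size=(m, n))      # measurements in [-1, 1]
    W = (rng.random((m, n)) >= p_missing) * 1.0  # 20% of entries missing
    noisy = rng.random((m, n)) < p_noise         # 10% corrupted in [-5, 5]
    Y[noisy] = rng.uniform(-5.0, 5.0, size=noisy.sum())
    return Y, W
```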

14 (image-only slide; content not transcribed)

15 Structure from motion. Projections of 319 points tracked over 36 views, with noise added to 10% of the points. Full 3D reconstruction amounts to a low-rank matrix approximation. The figure above shows the residuals for the visible points: under the L2 norm the reconstruction error is distributed evenly among all elements of the residual, whereas under the L1 norm the error is concentrated on a few elements.
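That contrast is easy to see in one dimension (an illustrative toy example, not from the paper): fitting a constant to data with one outlier, the L2 solution (the mean) spreads the error over every residual, while the L1 solution (the median) leaves it concentrated on the outlier.

```python
import numpy as np

data = np.array([1.0, 1.1, 0.9, 1.0, 10.0])  # one gross outlier

l2_fit = data.mean()      # argmin of sum (x - c)^2
l1_fit = np.median(data)  # argmin of sum |x - c|

print("L2 residuals:", data - l2_fit)  # error smeared over all points
print("L1 residuals:", data - l1_fit)  # error isolated on the outlier
```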

16 (image-only slide; content not transcribed)

