Page 1
Hyper Least Squares and Its Applications
Dr. Kenichi Kanatani, Dr. Hirotaka Niitsuma, Dr. Yasuyuki Sugaya, Prasanna Rangarajan
Technical Summary
–NEW least squares estimator that maximizes the accuracy of the estimate by removing statistical bias up to second-order noise terms
–Perfect candidate for initializing a maximum likelihood (ML) estimator
–Provides estimates in large-noise situations, where ML computation fails

Page 2
Least Squares Parameter Estimation: "Big Picture"
–a standard task in science and engineering
–the vast majority of such tasks can be formulated as the solution of a set of implicit functions that are linear in the unknown parameter (measurement, unknown parameter, carrier)
–Example 1: conic fitting
–Example 2: estimating a homography (or perspective warp), useful for creating panoramas

Page 3
Least Squares Parameter Estimation: Mathematical formulation
–Given a set of noisy measurements, find the parameter
–Example: ellipse fitting (standard least squares)
–objective: how well does the parametric surface fit the data?
–constraint: avoids the trivial solution
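The standard LS formulation on this slide can be sketched in a few lines of Python. This is a minimal illustration with NumPy, assuming the conic carrier ξ = (x², xy, y², x, y, 1) and the unit-norm constraint ‖θ‖ = 1; the function name and variable names are mine, not from the slides:

```python
import numpy as np

def fit_conic_ls(x, y):
    """Standard LS conic fit: minimize sum((xi . theta)^2) s.t. ||theta|| = 1."""
    # Carrier vector xi = (x^2, xy, y^2, x, y, 1) for each measurement
    xi = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    M = xi.T @ xi / len(x)        # moment matrix of the carriers
    # The constrained minimizer is the unit eigenvector of M
    # belonging to the smallest eigenvalue.
    _, V = np.linalg.eigh(M)      # eigh orders eigenvalues ascending
    return V[:, 0]
```

For noise-free points on a conic, the returned θ satisfies ξᵀθ ≈ 0 at every measurement.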

Page 4
Least Squares Parameter Estimation: Mathematical formulation
–Given a set of noisy measurements, find the parameter
–Example: ellipse fitting (Bookstein, CGIP 1979)
–objective: how well does the parametric surface fit the data?
–constraint: avoids the trivial solution

Page 5
Least Squares Parameter Estimation: Mathematical formulation
–Given a set of noisy measurements, find the parameter
–Example: ellipse fitting (Fitzgibbon et al., TPAMI 1999)
–objective: how well does the parametric surface fit the data?
–constraint: avoids the trivial solution
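The Fitzgibbon et al. method constrains the conic coefficients by 4ac − b² = 1, which makes the normalization matrix indefinite but forces the solution to be an ellipse. A hedged sketch of the resulting generalized eigenvalue problem (function and variable names are mine):

```python
import numpy as np
from scipy.linalg import eig

def fit_ellipse_direct(x, y):
    """Direct ellipse fit (Fitzgibbon-style): constraint 4ac - b^2 = 1."""
    xi = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    S = xi.T @ xi                 # scatter matrix of the carriers
    C = np.zeros((6, 6))          # indefinite constraint matrix: a^T C a = 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    w, V = eig(S, C)              # GEVP  S a = lambda C a
    for i in range(6):
        a = np.real(V[:, i])
        # exactly one generalized eigenvector lies on the ellipse branch
        if np.isfinite(w[i].real) and a @ C @ a > 0:
            return a / np.linalg.norm(a)
    raise RuntimeError("no ellipse solution found")
```

The selection by aᵀCa > 0 is what guarantees an ellipse rather than a hyperbola or parabola.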

Page 6
Least Squares Parameter Estimation: Mathematical formulation
–Given a set of noisy measurements, find the parameter
–Example: ellipse fitting (Taubin, TPAMI 1991)
–objective: how well does the parametric surface fit the data?
–constraint: avoids the trivial solution
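Taubin's method replaces the unit-norm constraint with a gradient-weighted normalization built from the Jacobian of the carrier with respect to the point coordinates. A minimal sketch of this idea (my own names, not the slides' notation):

```python
import numpy as np
from scipy.linalg import eig

def fit_conic_taubin(x, y):
    """Taubin-style conic fit: gradient-weighted normalization matrix."""
    n = len(x)
    one, zero = np.ones(n), np.zeros(n)
    xi = np.column_stack([x**2, x * y, y**2, x, y, one])
    M = xi.T @ xi / n
    # Partial derivatives of the carrier wrt the point coordinates
    xi_x = np.column_stack([2 * x, y, zero, one, zero, zero])
    xi_y = np.column_stack([zero, x, 2 * y, zero, one, zero])
    N = (xi_x.T @ xi_x + xi_y.T @ xi_y) / n   # Taubin normalization (PSD, singular)
    w, V = eig(M, N)                           # GEVP  M theta = lambda N theta
    w = np.where(np.isfinite(w.real), np.abs(w.real), np.inf)
    theta = np.real(V[:, np.argmin(w)])        # smallest finite eigenvalue
    return theta / np.linalg.norm(theta)
```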

Page 7
Least Squares Parameter Estimation: Mathematical formulation
–Given a set of noisy measurements, find the parameter
Advantages
–the estimate is obtained as the solution to a generalized eigenvalue problem (GEVP)
Disadvantages
–produces biased estimates that depend heavily on the choice of normalization matrix
–the algebraic distance is neither geometrically nor statistically meaningful
–objective: algebraic distance (how well does the parametric surface fit the data?)
–constraint: parameter normalization (avoids the trivial solution)

Page 8
Maximum Likelihood Parameter Estimation: Mathematical formulation
–Given a set of noisy measurements, find the parameter
–Example: ellipse fitting (Mahalanobis distance)
TODO: insert picture of orthogonal ellipse fitting
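To first order in the noise, the Mahalanobis-distance objective for conic fitting reduces to the Sampson error. Here is a sketch of evaluating it for a candidate parameter; a minimal illustration under that first-order approximation, not necessarily the slides' exact formulation:

```python
import numpy as np

def sampson_error(theta, x, y):
    """First-order (Sampson) approximation of the Mahalanobis distance
    between measured points and the conic xi(p) . theta = 0."""
    one, zero = np.ones_like(x), np.zeros_like(x)
    xi = np.column_stack([x**2, x * y, y**2, x, y, one])
    # Jacobian of the carrier wrt the point coordinates
    xi_x = np.column_stack([2 * x, y, zero, one, zero, zero])
    xi_y = np.column_stack([zero, x, 2 * y, zero, one, zero])
    num = (xi @ theta) ** 2
    den = (xi_x @ theta) ** 2 + (xi_y @ theta) ** 2
    return float(np.sum(num / den))
```

An ML estimator minimizes this objective over θ iteratively, which is exactly where schemes such as FNS and HEIV come in.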

Page 9
Maximum Likelihood Parameter Estimation: Mathematical formulation
–Given a set of noisy measurements, find the parameter
Advantages
–geometrically and statistically meaningful distance measure (Mahalanobis distance)
–highly accurate estimates that nearly achieve the lower bound on MSE
Disadvantages
–iterative and converges to a local minimum, so it requires a good initial estimate
–FNS (Chojnacki et al., TPAMI 2000, 2008)
–HEIV (Leedan & Meer, IJCV 2000; Matei & Meer, TPAMI 2006)
–Projective Newton iterations (Kanatani & Sugaya, CSDA 2007)

Page 10
Proposed Work: A different take on LS parameter estimation
–the objective (how well does the parametric surface fit the data?) is common to all LS estimators
–the normalization (how good is the estimate?) is unique to each LS estimator
1. How does the choice of the matrix affect the accuracy of the LS estimate?
2. What choice of matrix yields an LS estimate with the best accuracy?
Contribution of the proposed work: a statistical basis for LS parameter estimation

Page 11
Contributions of Proposed Work: Statistically motivated LS parameter estimation
1. How does the choice of the matrix affect the accuracy of the LS estimate?
Perturbation analysis of the GEVP
–the measurements are perturbations of the true measurements
–propagate perturbations in the measurements to the carrier
–propagate perturbations in the carrier to the estimate

Page 12
Contributions of Proposed Work: Statistically motivated LS parameter estimation
1. How does the choice of the matrix affect the accuracy of the LS estimate?
Perturbation analysis of the GEVP: expression for the mean squared error of the estimator
–one term does not depend on the matrix; the other depends on the matrix and governs accuracy

Page 13
Contributions of Proposed Work: Statistically motivated LS parameter estimation
2. What choice of matrix yields an LS estimate with the best accuracy?
–a symmetric matrix: the 2nd-order bias matrix for the Taubin estimator

Page 14
hyper Least Squares estimator
–Optimization problem solved by the hyper least squares estimator
–Numerical computation of the hyper LS estimate
–The normalization matrix is symmetric but not necessarily positive definite; the moment matrix is symmetric and positive definite for noisy measurements
–Use standard linear algebra routines to solve the GEVP for the eigenvector corresponding to the largest eigenvalue
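The numerical recipe on this slide can be sketched as follows: since the normalization matrix N may be indefinite while the moment matrix M is positive definite for noisy data, rewrite M θ = λ N θ as N θ = μ M θ with μ = 1/λ and take the eigenvector of the largest μ. The matrix names are mine, and the hyper-LS construction of N itself is omitted here:

```python
import numpy as np
from scipy.linalg import eigh

def solve_gevp_largest(N, M):
    """Solve N theta = mu M theta for the eigenvector of the largest mu.
    M must be symmetric positive definite; N symmetric, possibly indefinite."""
    mu, V = eigh(N, M)     # symmetric-definite GEVP; eigenvalues ascending
    theta = V[:, -1]       # eigenvector of the largest eigenvalue
    return theta / np.linalg.norm(theta)
```

Formulating the GEVP this way keeps standard symmetric-definite eigensolvers applicable even though N alone is not positive definite.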

Page 15
How well does the hyper LS estimator work? Ellipse fitting (single implicit equation)
–Zero-mean Gaussian noise with a given standard deviation is added to 31 points on the ellipse
–10000 independent trials
1. Standard LS estimator
2. Taubin estimator
3. hyper LS estimator
4. ML estimator
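A sketch of the kind of simulation described on this slide, with the standard LS estimator only and fewer trials for brevity; the ellipse, noise level, and trial count here are my own choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# True ellipse x^2/4 + y^2 = 1 in carrier form (x^2, xy, y^2, x, y, 1)
theta_true = np.array([1.0, 0.0, 4.0, 0.0, 0.0, -4.0])
theta_true /= np.linalg.norm(theta_true)

t = np.linspace(0.0, 2.0 * np.pi, 31, endpoint=False)
x0, y0 = 2.0 * np.cos(t), np.sin(t)        # 31 noise-free points

def fit_conic_ls(x, y):
    xi = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, V = np.linalg.eigh(xi.T @ xi)
    return V[:, 0]

sigma, trials = 0.02, 1000                 # the slides use 10000 trials
sq_errs = []
for _ in range(trials):
    x = x0 + sigma * rng.standard_normal(31)
    y = y0 + sigma * rng.standard_normal(31)
    th = fit_conic_ls(x, y)
    th = th if th @ theta_true >= 0 else -th   # resolve the sign ambiguity
    sq_errs.append(np.sum((th - theta_true) ** 2))
rms_error = float(np.sqrt(np.mean(sq_errs)))
```

Swapping `fit_conic_ls` for a Taubin or hyper-LS fit in the same loop is how the estimators on this slide would be compared.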

Page 16
How well does the hyperaccurate LS estimator work? Ellipse fitting
[Plots versus the standard deviation of the added noise: standard LS, Taubin, hyper LS, ML estimator, and the KCR lower bound]
–ML has the highest accuracy but fails to converge for large noise
–The bias of the hyperaccurate LS estimator is smaller than that of the ML estimator

Page 17
How well does the hyperaccurate LS estimator work? Homography estimation
–Zero-mean Gaussian noise with a given standard deviation is added to a grid of points on a plane
–10000 independent trials
1. Standard LS estimator
2. Taubin estimator
3. hyper LS estimator
4. ML estimator
[Figures: View-1 and View-2; plot versus the standard deviation of the added noise]
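For reference, the standard LS homography estimate can be sketched with the classical Direct Linear Transform; this is a common baseline formulation and not necessarily the slides' exact one (names mine):

```python
import numpy as np

def homography_dlt(src, dst):
    """LS homography via DLT: each correspondence (x, y) -> (u, v) gives
    two equations linear in the 9 entries of H; the minimizer under
    ||h|| = 1 is the right singular vector of the smallest singular value."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]       # fix the overall scale
```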

Page 18
How well does the hyperaccurate LS estimator work? Fundamental matrix estimation
–Zero-mean Gaussian noise with a given standard deviation is added to a curved grid of points
–10000 independent trials
1. Standard LS estimator
2. Taubin estimator
3. hyper LS estimator
4. ML estimator
[Figures: View-1 and View-2; plot versus the standard deviation of the added noise]
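The standard LS baseline for fundamental matrix estimation is the eight-point algorithm; a minimal unnormalized sketch (a common textbook formulation, not necessarily the slides' exact one):

```python
import numpy as np

def fundamental_8pt(p1, p2):
    """Unnormalized eight-point algorithm: the epipolar constraint
    x2^T F x1 = 0 is linear in the 9 entries of F."""
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, s, Vt2 = np.linalg.svd(F)   # enforce the rank-2 constraint
    s[2] = 0.0
    return U @ np.diag(s) @ Vt2
```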

Page 19
Closing Thoughts
–The proposed hyper least squares estimator is statistically motivated: it is designed to have the highest accuracy among existing LS estimators and provides a parameter estimate at large noise levels, when the ML estimator fails
–The proposed hyper least squares estimator is the best candidate for initializing ML iterations
–Open problems: is there a matrix that directly minimizes the MSE instead of just the bias?
–one term does NOT depend on the matrix; the other depends on the matrix
