Hyper Least Squares and its applications
Dr. Kenichi Kanatani, Dr. Hirotaka Niitsuma, Dr. Yasuyuki Sugaya, Prasanna Rangarajan


Hyper Least Squares and its applications
Dr. Kenichi Kanatani, Dr. Hirotaka Niitsuma, Dr. Yasuyuki Sugaya, Prasanna Rangarajan
Technical Summary
– a NEW Least Squares estimator that maximizes the accuracy of the estimate by removing statistical bias up to second-order noise terms
– a perfect candidate for initializing a Maximum Likelihood estimator
– provides estimates in large-noise situations, where ML computation fails

Least Squares Parameter Estimation: "Big Picture"
– a standard task in science and engineering
– the vast majority of such problems can be formulated as the solution of a set of implicit functions that are linear in the unknown parameter: ξ(x) · θ = 0 (measurement x, unknown parameter θ, carrier ξ(x))
– Example 1: conic fitting
– Example 2: estimating a homography (or perspective warp), useful for creating panoramas
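Written out, the implicit linear form and a concrete carrier look as follows (a reconstruction of the slide's formula images; the conic carrier shown is one common convention, without the factors of 2 some authors use):

```latex
% General implicit form: linear in theta, nonlinear in the measurement x
\xi(x)^{\top}\,\theta = 0
% Example: the conic  A x^2 + B xy + C y^2 + D x + E y + F = 0  becomes
\xi(x) = \bigl(x^2,\; xy,\; y^2,\; x,\; y,\; 1\bigr)^{\top},\qquad
\theta  = \bigl(A,\; B,\; C,\; D,\; E,\; F\bigr)^{\top}
```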

Page  3 –Given a set of noisy measurements find parameter –Example : Ellipse fitting ( Standard Least Squares ) Least Squares Parameter Estimation Mathematical formulation how well does the parameteric surface fit the data ? avoids trivial solution

Page  4 –Given a set of noisy measurements find parameter –Example : Ellipse fitting ( Bookstein, CGIP 1979 ) Least Squares Parameter Estimation Mathematical formulation how well does the parameteric surface fit the data ? avoids trivial solution

Page  5 –Example : Ellipse fitting ( Fitzgibbon et.al, TPAMI 1999 ) Least Squares Parameter Estimation Mathematical formulation –Given a set of noisy measurements find parameter how well does the parameteric surface fit the data ? avoids trivial solution

Page  6 –Given a set of noisy measurements find parameter –Example : Ellipse fitting ( Taubin, TPAMI 1991 ) Least Squares Parameter Estimation Mathematical formulation how well does the parameteric surface fit the data ? avoids trivial solution

Page  7 Least Squares Parameter Estimation Mathematical formulation –Given a set of noisy measurements find parameter Advantages – is obtained as solution to the Generalized Eigenvalue problem Disadvantages –produces biased estimates that heavily depend on choice of –algebraic distance is neither geometrically / statistically meaningful algebraic distance how well does the parameteric surface fit the data ? parameter normalization avoids trivial solution

Page  8 Maximum Likelihood Parameter Estimation Mathematical formulation –Given a set of noisy measurements find parameter –Example : Ellipse fitting Mahalanobis distance TODO : insert picture of orthogonal ellipse fitting

Page  9 Maximum Likelihood Parameter Estimation Mathematical formulation –Given a set of noisy measurements find parameter Advantages –geometrically / statistically meaningful distance measure –highly accurate estimates that nearly achieve the lower bound on MSE Disadvantages –Iterative & converges to local minimum….requires good initial estimate FNS ( Chojnacki et.al, TPAMI 2000, et.al 2008 ) HEIV ( Leedan & Meer, IJCV 200, Matei & Meer, TPAMI 2006 ) Projective Newton Iterations ( Kanatani & Sugaya CSDA 2007 ) Mahalanobis distance

Page  10 Proposed Work A different take on LS parameter estimation how well does the parameteric surface fit the data ? common to all LS estimators how good is the estimate ? unique to each LS estimator 1.How does the choice of the matrix affect the acuracy of the LS estimate ? 2.What chocie of matrix yields a LS estimate with the best accuracy ? Contributions of proposed work Statistical Basis for LS parameter estimation

Page  11 Contributions of Proposed Work Statistically motivated LS Parameter Estimation 1.How does the choice of the matrix affect the acuracy of the LS estimate ? Perturbation Analysis of the GEVP measurements are perturbations of true measurements propagate perturbations in measurements to carrier propagate perturbations in carrier to

Page  12 does not depend on matrixdepends on matrixaccuracy Contributions of Proposed Work Statistically motivated LS Parameter Estimation 1.How does the choice of the matrix affect the acuracy of the LS estimate ? Perturbation Analysis of the GEVP expression for mean squared error of estimator

Page  13 Contributions of Proposed Work Statistically motivated LS Parameter Estimation 2.What chocie of matrix yields a LS estimate with the best accuracy ? symmetric matrix 2 nd order bias matrix for taubin estimator

Page  14 hyper Least Square estimator Optimization Problem solved by the hyper Least Squares estimator Numerical computation of the hyper LS estimate The matrix is symmetric but not necessarily positive definite. The matrix is symmetric &positive definite for noisy measurements Use standard linear algebra routines to solve the GEVP for the eigenvector corresponding to the largest eigenvalue

Page  15 How well does the hyper LS estimator work ? Ellipse Fitting ( single implicit equation ) –Zero mean Gaussian noise with standard deviation is added to 31 points on the ellipse –10000 independent trials 1.Standard LS estimator 2.Taubin estimator 3.hyper LS estimator 4.ML estimator

Page  16 standard deviation of added noise hyper LS KCR lower bound Standard LS standard deviation of added noise How well does the hyperaccurate LS estimator work ? Ellipse Fitting ML estimator –ML has the highest accuracy but fails to converge for large noise –The bias of the hyperaccurate LS estimator < ML estimator Taubin

Page  17 How well does the hyperaccurate LS estimator work ? Homography estimation –Zero mean Gaussian noise with standard deviation is added to a grid of points on a plane –10000 independent trials 1.Standard LS estimator 2.Taubin estimator 3.hyper LS estimator 4.ML estimator View-1View-2 standard deviation of added noise

Page  18 View-2View-1 –Zero mean Gaussian noise with standard deviation is added to a curved grid of points –10000 independent trials 1.Standard LS estimator 2.Taubin estimator 3.hyper LS estimator 4.ML estimator standard deviation of added noise How well does the hyperaccurate LS estimator work ? Fundamental matrix estimation

Page  19 –Proposed hyper Least Squares estimator is statistically motivated designed to have the highest accuracy among existing LS estimators provides a parameter estimate for large noise levels, when the ML estimator fails –The proposed hyper Least Squares estimator is the best candidate for initializing ML iterations –Open Problems Is there a matrix that directly minimizes the mse instead of just the bias Closing Thoughts does NOT depend on matrixdepends on matrix