Multidimensional scaling

Multidimensional scaling. 048921 Advanced topics in vision: Processing and Analysis of Geometric Shapes, EE Technion, Spring 2010. © Alexander & Michael Bronstein, 2006-2009; © Michael Bronstein, 2010. tosca.cs.technion.ac.il/book

If it doesn’t fit, you must acquit! Image: Associated Press

Metric model: shape = metric space; similarity = isometry w.r.t. EXTRINSIC GEOMETRY (the Euclidean metric), which is invariant to rigid motion. Problem: extrinsic geometry is not invariant under inelastic deformations.

Metric model: shape = metric space; similarity = isometry w.r.t. either EXTRINSIC GEOMETRY (the Euclidean metric, invariant to rigid motion) or INTRINSIC GEOMETRY (the geodesic metric, invariant to inelastic deformation).

Intrinsic vs. extrinsic similarity. EXTRINSIC SIMILARITY: the two shapes are compared as parts of the same metric space. INTRINSIC SIMILARITY: the two shapes are two different metric spaces. SOLUTION: find a representation of both shapes in a common metric space.

Canonical forms Isometric embedding

Canonical form distance: compute the canonical forms of the two shapes; then EXTRINSIC SIMILARITY OF THE CANONICAL FORMS = INTRINSIC SIMILARITY of the shapes.

Mapmaker's problem: can the surface of a sphere be mapped onto a plane without distorting the distances?

Mapmaker's problem: a sphere has non-zero Gaussian curvature and is therefore not isometric to the plane (a consequence of Gauss's Theorema Egregium). Carl Friedrich Gauss (1777-1855).

Linial's example: consider a metric on four points A, B, C, D in which B is at distance 1 from each of A, C, D, and the points A, C, D are at distance 2 from one another (the shortest-path metric of a star graph). In a Euclidean space each triple (A, B, C), (A, B, D), (C, B, D) would have to lie on a straight line, forcing A, C, D to coincide, a contradiction. Conclusion: generally, an isometric embedding does not exist! This does not contradict the Nash embedding theorem: Nash guarantees that any Riemannian structure can be realized as a length metric induced by the Euclidean metric; here we try to realize it using the restricted Euclidean metric.

Minimum distortion embedding

Multidimensional scaling. Find an embedding into $\mathbb{R}^m$ that distorts the geodesic distances the least by solving the optimization problem $\min_{z_1,\dots,z_N \in \mathbb{R}^m} \sum_{i>j} \left( \|z_i - z_j\|_2 - d_X(x_i, x_j) \right)^2$, where $z_1,\dots,z_N$ are the coordinates of the canonical form. The function measuring the distortion of distances is called the stress. The problem is called multidimensional scaling (MDS).

Stress. The stress is a function of the canonical form coordinates $Z = (z_1,\dots,z_N)$ and the geodesic distances $d_{ij} = d_X(x_i, x_j)$:
L2-stress: $\sigma_2(Z) = \sum_{i>j} \left( \|z_i - z_j\|_2 - d_{ij} \right)^2$
Lp-stress: $\sigma_p(Z) = \sum_{i>j} \left| \|z_i - z_j\|_2 - d_{ij} \right|^p$
L∞-stress (distortion): $\sigma_\infty(Z) = \max_{i>j} \left| \|z_i - z_j\|_2 - d_{ij} \right|$
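
A minimal Python/NumPy sketch of the three stress variants (not part of the original slides; it assumes D is a symmetric, zero-diagonal matrix of target geodesic distances, and the names are illustrative):

```python
import numpy as np

def pairwise_dists(Z):
    """Euclidean distance matrix d_ij(Z) = ||z_i - z_j||_2 for an N x m configuration Z."""
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def stress(Z, D, p=2):
    """Lp-stress over the pairs i > j; p=np.inf gives the L-infinity stress (distortion)."""
    R = np.abs(pairwise_dists(Z) - D)          # residuals |d_ij(Z) - d_ij|
    iu = np.triu_indices_from(R, k=1)          # count each pair once
    return R[iu].max() if np.isinf(p) else (R[iu] ** p).sum()
```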

Matrix expression of the L2-stress. Some notation: $Z$ is an $N \times m$ matrix of canonical form coordinates (each row corresponds to a point); shorthand notation $d_{ij}(Z) = \|z_i - z_j\|_2$ for the Euclidean distances. Write the stress as $\sigma_2(Z) = \sum_{i>j} d_{ij}^2(Z) - 2 \sum_{i>j} d_{ij}(Z)\, d_{ij} + \sum_{i>j} d_{ij}^2$: a term (1) quadratic in $Z$, a term (2) nonlinear in $Z$, and a constant term.

Term 1: $\sum_{i>j} d_{ij}^2(Z) = \sum_{i>j} (z_i - z_j)^\top (z_i - z_j) = \mathrm{tr}(Z^\top V Z)$ (using the fact that matrices commute under the trace operator), where $V$ is an $N \times N$ matrix with elements $v_{ij} = -1$ for $i \neq j$ and $v_{ii} = N - 1$.

Term 2: $\sum_{i>j} d_{ij}(Z)\, d_{ij} = \mathrm{tr}(Z^\top B(Z) Z)$, where $B(Z)$ is an $N \times N$ matrix-valued function with elements $b_{ij}(Z) = -d_{ij}/d_{ij}(Z)$ for $i \neq j$ and $d_{ij}(Z) \neq 0$, zero otherwise, and diagonal elements $b_{ii}(Z) = -\sum_{j \neq i} b_{ij}(Z)$.
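
A sketch of these two matrices in code (my notation, unweighted case): $V$ has $-1$ off the diagonal and $N-1$ on it, while $B(Z)$ has $-d_{ij}/d_{ij}(Z)$ off the diagonal (zero where $d_{ij}(Z)=0$) and a diagonal chosen so that its rows sum to zero.

```python
import numpy as np

def make_V(N):
    """V = N*I - 1*1^T: v_ij = -1 for i != j, v_ii = N - 1."""
    return N * np.eye(N) - np.ones((N, N))

def make_B(Z, D):
    """B(Z) with b_ij = -d_ij / d_ij(Z) off-diagonal and zero row sums."""
    DZ = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))   # d_ij(Z)
    with np.errstate(divide="ignore", invalid="ignore"):
        B = np.where(DZ > 0, -D / DZ, 0.0)
    np.fill_diagonal(B, 0.0)
    np.fill_diagonal(B, -B.sum(axis=1))        # b_ii = -sum_{j != i} b_ij
    return B
```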

LS-MDS. Minimization of the L2-stress is called least-squares MDS (LS-MDS): $N \times m$ variables; non-linear; non-convex (thus liable to convergence to a local minimum); the optimum is defined only up to a Euclidean transformation (rigid motion).

Gradient of the L2-stress: $\nabla \sigma_2(Z) = 2 V Z - 2 B(Z) Z$. The term quadratic in $Z$ contributes $2VZ$, the term nonlinear in $Z$ contributes $-2B(Z)Z$, and the constant term contributes nothing (recall the exercise in the optimization lecture).
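
For concreteness, a self-contained sketch of this gradient written directly in terms of the pairwise differences (equivalent to $2VZ - 2B(Z)Z$; D is the target distance matrix, names are illustrative):

```python
import numpy as np

def l2_stress_grad(Z, D):
    """Row i equals 2 * sum_j (1 - d_ij / d_ij(Z)) (z_i - z_j)."""
    diff = Z[:, None, :] - Z[None, :, :]               # z_i - z_j
    DZ = np.sqrt((diff ** 2).sum(-1))                   # d_ij(Z)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(DZ > 0, D / DZ, 0.0)            # d_ij / d_ij(Z)
    return 2.0 * ((1.0 - ratio)[:, :, None] * diff).sum(axis=1)
```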

Complexity. Stress computation complexity: $O(N^2)$. Stress gradient computation complexity: $O(N^2)$. Steepest descent with line search may be prohibitively expensive (each iteration requires multiple evaluations of the stress and the gradient).

Observation. By the Cauchy-Schwarz inequality, $d_{ij}(Z) = \|z_i - z_j\|_2 \geq \dfrac{(z_i - z_j)^\top (z'_i - z'_j)}{d_{ij}(Z')}$ for all $Z$ and $Z'$ with $d_{ij}(Z') \neq 0$; equality is achieved for $Z = Z'$. Write it differently: $-d_{ij}(Z) \leq -\dfrac{(z_i - z_j)^\top (z'_i - z'_j)}{d_{ij}(Z')}$.

Observation. Consider the nonlinear term in the stress: $-2\sum_{i>j} d_{ij}\, d_{ij}(Z) = -2\,\mathrm{tr}(Z^\top B(Z) Z) \leq -2\,\mathrm{tr}(Z^\top B(Z') Z')$ for all $Z'$, i.e. the nonlinear term is bounded from above by a function linear in $Z$.

Majorizing inequality [de Leeuw, 1977]: $\sigma_2(Z) \leq h(Z, Z') = \sum_{i>j} d_{ij}^2 + \mathrm{tr}(Z^\top V Z) - 2\,\mathrm{tr}(Z^\top B(Z') Z')$ for all $Z'$; equality is achieved for $Z = Z'$. We have a quadratic majorizing function to use in an iterative majorization algorithm! (Jan de Leeuw)

Iterative majorization. Construct a majorizing function $h(Z, Z')$ satisfying $h(Z', Z') = \sigma(Z')$ and the majorizing inequality $\sigma(Z) \leq h(Z, Z')$ for all $Z$; $h(Z, Z')$ is convex (here quadratic) in $Z$.

Iterative majorization. Start with some $Z^{(0)}$. Repeat: find $Z^*$ such that $h(Z^*, Z^{(k)}) \leq h(Z^{(k)}, Z^{(k)}) = \sigma(Z^{(k)})$, e.g. $Z^* = \arg\min_Z h(Z, Z^{(k)})$; update the iterate $Z^{(k+1)} = Z^*$; increment the iteration counter $k \leftarrow k+1$; until convergence. Solution: $Z^* = Z^{(k)}$.

Iterative majorization. The majorizing function is quadratic, so its minimum has an analytic expression: $Z^* = \arg\min_Z h(Z, Z') = V^{\dagger} B(Z') Z'$. Analytic expression for the pseudoinverse: $V^{\dagger} = \frac{1}{N}\left(I - \frac{1}{N}\mathbf{1}\mathbf{1}^\top\right)$.

SMACOF algorithm. Start with some $Z^{(0)}$. Repeat: update the iterate $Z^{(k+1)} = V^{\dagger} B(Z^{(k)}) Z^{(k)}$; increment the iteration counter $k \leftarrow k+1$; until convergence. Solution: $Z^* = Z^{(k)}$. The algorithm is called SMACOF (Scaling by Minimizing A COnvex Function).
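
To make the update concrete, here is a minimal SMACOF sketch in Python/NumPy (not the lecture's MATLAB demo). It assumes a symmetric, zero-diagonal matrix D of target geodesic distances and uses the unweighted Guttman transform $Z \leftarrow \frac{1}{N} B(Z) Z$, which coincides with the pseudoinverse update above:

```python
import numpy as np

def smacof(D, m=3, n_iter=200, tol=1e-6, seed=0):
    """Least-squares MDS by iterative majorization (unweighted SMACOF)."""
    N = D.shape[0]
    Z = np.random.default_rng(seed).standard_normal((N, m))
    prev = np.inf
    for _ in range(n_iter):
        DZ = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))   # d_ij(Z)
        iu = np.triu_indices(N, k=1)
        s = ((DZ - D)[iu] ** 2).sum()            # current L2-stress
        if prev - s < tol:                       # monotone decrease guaranteed
            break
        prev = s
        with np.errstate(divide="ignore", invalid="ignore"):
            B = np.where(DZ > 0, -D / DZ, 0.0)   # b_ij = -d_ij / d_ij(Z)
        np.fill_diagonal(B, 0.0)
        np.fill_diagonal(B, -B.sum(axis=1))      # zero row sums
        Z = B @ Z / N                            # majorization (Guttman) update
    return Z
```

For example, calling smacof(D, m=2) on a geodesic distance matrix D produces a planar canonical form.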

SMACOF algorithm vs. steepest descent: SMACOF is a constant-step gradient descent! Majorization guarantees a monotonically decreasing sequence of stress values (untypical behavior for constant-step steepest descent). There is no guarantee of global convergence. Iteration cost: $O(N^2)$.

MATLAB® intermezzo: SMACOF algorithm, canonical forms.

Examples of canonical forms: near-isometric deformations of a shape and their canonical forms.

Examples of canonical forms: visualization of intrinsic similarity.

Weighted stress: $\sigma_2(Z) = \sum_{i>j} w_{ij}\left(d_{ij}(Z) - d_{ij}\right)^2$, where $w_{ij} \geq 0$ are some fixed weights. Matrix expression: as before, where $V$ is an $N \times N$ matrix with elements $v_{ij} = -w_{ij}$ for $i \neq j$ and $v_{ii} = \sum_{j \neq i} w_{ij}$, and $b_{ij}(Z) = -w_{ij} d_{ij}/d_{ij}(Z)$. The SMACOF algorithm can be used to minimize the weighted stress!

Variations on the stress theme. Generic form of the stress: $\sigma(Z) = \sum_{i>j} \rho\left(d_{ij}(Z) - d_{ij}\right)$, where $\rho$ is some penalty (norm). For example, $\rho(t) = |t|^p$ gives the Lp-stress. The necessary condition for $Z^*$ to be the minimizer of $\sigma$ is $\nabla\sigma(Z^*) = 0$.

Variations on the stress theme. Idea: minimize a weighted L2-stress instead of the generic stress, and choose the weights such that the two are equivalent. If the weights are selected as $w_{ij} = \dfrac{\rho\left(d_{ij}(Z^*) - d_{ij}\right)}{\left(d_{ij}(Z^*) - d_{ij}\right)^2}$, the minimizers of the weighted stress and of the generic stress coincide. Since $Z^*$ is unknown in advance, iterate!

Iteratively reweighted LS. Start with some $Z^{(0)}$ and initial weights $w_{ij}^{(0)}$. Repeat: find $Z^{(k+1)}$ by minimizing the weighted stress with weights $w_{ij}^{(k)}$, using $Z^{(k)}$ as an initialization; update the weights $w_{ij}^{(k+1)} = \dfrac{\rho\left(d_{ij}(Z^{(k+1)}) - d_{ij}\right)}{\left(d_{ij}(Z^{(k+1)}) - d_{ij}\right)^2}$; increment the iteration counter; until convergence. The algorithm is called iteratively reweighted least squares (IRLS).
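
A hedged IRLS sketch (my own Python, not the course code): the weights follow the reweighting rule above with $\rho(t) = |t|^p$, each outer step runs a few weighted-SMACOF updates $Z \leftarrow V^{\dagger} B(Z) Z$ with the weighted $V$ and $B$, and eps is a small constant added purely for numerical stability:

```python
import numpy as np

def irls_mds(D, m=3, p=1.0, n_outer=20, n_inner=10, eps=1e-8, seed=0):
    """MDS with Lp-stress via IRLS around weighted SMACOF."""
    N = D.shape[0]
    Z = np.random.default_rng(seed).standard_normal((N, m))
    W = np.ones((N, N))                                       # initial weights
    for _ in range(n_outer):
        for _ in range(n_inner):                              # weighted SMACOF updates
            DZ = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
            with np.errstate(divide="ignore", invalid="ignore"):
                B = np.where(DZ > 0, -W * D / DZ, 0.0)        # b_ij = -w_ij d_ij / d_ij(Z)
            np.fill_diagonal(B, 0.0); np.fill_diagonal(B, -B.sum(1))
            V = -W.copy(); np.fill_diagonal(V, 0.0); np.fill_diagonal(V, -V.sum(1))
            Z = np.linalg.pinv(V) @ B @ Z                     # Z <- V^+ B(Z) Z
        DZ = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
        R = np.abs(DZ - D) + eps                              # residuals |d_ij(Z) - d_ij|
        W = R ** (p - 2.0)                                    # w_ij = rho(r)/r^2 = |r|^(p-2)
        np.fill_diagonal(W, 0.0)
    return Z
```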

L-stress Optimization problem can be written equivalently as a constrained problem is called an artificial variable

Complexity, bis. MDS with N = 1000 points: taking 5x fewer points gives 25x lower complexity (the number of stress terms is $O(N^2)$).

Multiresolution MDS. Bottom-up approach: solve the coarse-level MDS problem to initialize the fine-level problem. Benefits: reduced complexity (fewer fine-level iterations) and reduced risk of local convergence (good initialization). Can be performed on multiple resolution levels.

Two-resolution MDS. Grid: a coarse subset of the points and the full fine set. Data: the corresponding coarse- and fine-level distance matrices. Interpolation operator: transfers a solution from the coarse level to the fine level; it can be represented as a sparse matrix. Algorithm: solve the coarse-level problem, interpolate, solve the fine-level problem.

Multiresolution MDS: solve the coarsest-level problem, interpolate, solve the next (finer) level problem, interpolate, and so on, until the finest-level problem is solved.

Multiresolution MDS algorithm. Given: a hierarchy of grids $X_L \subset \dots \subset X_1 = X$, a hierarchy of data (distance matrices) $D_L, \dots, D_1$, and interpolation operators $P_l$ transferring a solution from level $l$ to level $l-1$. Start with a coarsest-level initialization $Z_L^{(0)}$. For $l = L, \dots, 2$: solve the $l$-th level MDS problem using $Z_l^{(0)}$ as initialization, obtaining $Z_l^*$; interpolate to the next level, $Z_{l-1}^{(0)} = P_l Z_l^*$. Solve the finest-level problem using $Z_1^{(0)}$ as initialization.
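
A minimal coarse-to-fine sketch under these assumptions (illustrative names, not the authors' code): D is the full N x N geodesic distance matrix, `levels` is a list of index arrays ordered from coarsest to finest (the last covering all points), and interpolation is the simplest piecewise-constant operator that initializes each fine point from its geodesically nearest coarse point:

```python
import numpy as np

def smacof_init(D, Z0, n_iter=100):
    """Unweighted SMACOF started from a given configuration Z0."""
    Z, N = Z0.copy(), D.shape[0]
    for _ in range(n_iter):
        DZ = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
        with np.errstate(divide="ignore", invalid="ignore"):
            B = np.where(DZ > 0, -D / DZ, 0.0)
        np.fill_diagonal(B, 0.0); np.fill_diagonal(B, -B.sum(1))
        Z = B @ Z / N
    return Z

def multires_mds(D, levels, m=3, n_iter=100, seed=0):
    Z = None
    for idx in levels:
        D_l = D[np.ix_(idx, idx)]                  # l-th level distance data
        if Z is None:                              # coarsest level: random initialization
            Z0 = np.random.default_rng(seed).standard_normal((len(idx), m))
        else:                                      # interpolate from the coarser level
            nearest = D[np.ix_(idx, prev_idx)].argmin(axis=1)
            Z0 = Z[nearest]
        Z = smacof_init(D_l, Z0, n_iter=n_iter)    # solve the l-th level MDS problem
        prev_idx = idx
    return Z
```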

Top-down approach: start with a fine-level initialization; decimate it to the coarse grid; solve the coarse-level MDS problem using the decimated solution as initialization; improve the fine-level solution by propagating the coarse-level error. Typically the coarse grid contains far fewer points than the fine grid, so the coarse-level problem is much cheaper to solve.

A problem [figure: fine-level vs. coarse-level stress].

Correction. The fine- and the coarse-level minimizers do not coincide! The difference between them is called a residual. Force consistency of the coarse- and fine-level problems by introducing a linear correction term into the coarse-level stress, $\tilde\sigma_c(\tilde Z) = \sigma_c(\tilde Z) - \mathrm{tr}(\tilde Z^\top T)$, which gives the gradient $\nabla\tilde\sigma_c(\tilde Z) = \nabla\sigma_c(\tilde Z) - T$.

Correction [figure: fine-level vs. coarse-level stress with correction].

Correction. In order to guarantee consistency, the corrected coarse-level gradient must satisfy $\nabla\tilde\sigma_c(\tilde Z^{(0)}) = P^\top \nabla\sigma(Z^{(0)})$ at the decimated fine-level iterate $\tilde Z^{(0)}$ (the chain rule applied to $\sigma(P\tilde Z)$ produces the factor $P^\top$), which gives the correction term $T = \nabla\sigma_c(\tilde Z^{(0)}) - P^\top \nabla\sigma(Z^{(0)})$. Verify: if $Z^{(0)}$ minimizes the fine-level stress, then $\tilde Z^{(0)}$ is a stationary point of the coarse-level problem with correction.

Modified stress. Another problem: the corrected coarse-grid problem is unbounded; since the correction term is linear and the stress is translation-invariant, the corrected stress can decrease arbitrarily under translations of $\tilde Z$. Fix the translation ambiguity by adding a quadratic penalty to the stress; the result is called the modified stress [Bronstein et al., 2005]. The modified coarse-grid problem is bounded.

Two-grid MDS. Iteratively improve the fine-level solution using the coarse-grid residual; an external iteration is called a cycle. Cycle: decimate the fine-level solution, solve the corrected coarse-level problem, interpolate, and correct the fine-level solution.

Two-grid MDS with relaxation: perform a few optimization iterations at the fine level between cycles (relaxation). Cycle: relax, decimate, solve the corrected coarse-level problem, interpolate, and correct the fine-level solution.

Two-grid MDS algorithm. Start with a fine-level initialization $Z^{(0)}$. Repeat: produce an improved fine-level solution $\bar Z$ by making a few relaxation iterations initialized with $Z^{(k)}$; decimate the fine-level solution to the coarse grid, obtaining $\tilde Z$; compute the correction term $T = \nabla\sigma_c(\tilde Z) - P^\top \nabla\sigma(\bar Z)$; solve the coarse-level corrected MDS problem using $\tilde Z$ as initialization, obtaining $\tilde Z^*$; transfer the residual back to the fine level, $Z^{(k+1)} = \bar Z + P(\tilde Z^* - \tilde Z)$; increment the iteration counter; until convergence. Solution: $Z^* = Z^{(k)}$.
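
The cycle above, written as a hedged sketch: every ingredient (the fine/coarse stress gradients `grad_f`/`grad_c`, the corrected coarse solver `solve_corrected`, the relaxation routine `relax`, and the interpolation/decimation matrices `P`/`Dec`) is a placeholder argument, so this only fixes the data flow of one cycle, not the authors' implementation:

```python
import numpy as np

def two_grid_cycle(Z, grad_f, grad_c, solve_corrected, relax, P, Dec, n_relax=3):
    """One two-grid MDS cycle: relax, decimate, solve corrected coarse problem, correct."""
    Z = relax(Z, n_relax)                   # a few fine-level optimization iterations
    Zc = Dec @ Z                            # decimate to the coarse grid
    T = grad_c(Zc) - P.T @ grad_f(Z)        # correction term: gradient consistency
    Zc_new = solve_corrected(Zc, T)         # minimize sigma_c(Y) - trace(Y.T @ T) (+ penalty)
    return Z + P @ (Zc_new - Zc)            # interpolate the coarse update, correct fine level
```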

Multigrid MDS: apply the two-grid algorithm recursively; different cycles are possible, e.g. the V-cycle: relax and decimate down through the levels, solve the coarsest-level problem, then interpolate, correct, and relax back up through the levels.

MATLAB® intermezzo: multigrid MDS.

Convergence example [figure: stress vs. time (sec) for SMACOF and multigrid MDS, N = 2145].

How to accelerate convergence? Let $Z^{(0)}, Z^{(1)}, \dots$ be a sequence of iterates produced by some optimization algorithm, converging to $Z^*$. Denote by $E^{(k)} = Z^{(k)} - Z^*$ the remainder. Construct a new sequence $\hat Z^{(k)}$ converging faster to $Z^*$ in the sense $\lim_{k\to\infty} \dfrac{\|\hat Z^{(k)} - Z^*\|}{\|Z^{(k)} - Z^*\|} = 0$.

Vector extrapolation. Construct the new sequence as a transformation of the last $K+1$ iterates. Particular choice: a linear combination $\hat Z = \sum_{j=0}^{K} \gamma_j Z^{(j)}$ with $\sum_j \gamma_j = 1$. Question: how to choose the coefficients $\gamma_j$? Ideally, $\sum_j \gamma_j E^{(j)} = 0$, hence $\hat Z = \sum_j \gamma_j (Z^* + E^{(j)}) = Z^*$.

Reduced rank extrapolation (RRE). Problem: the remainders $E^{(j)}$ depend on the unknown $Z^*$. Eliminate this dependence by considering the differences $\Delta Z^{(j)} = Z^{(j+1)} - Z^{(j)}$, whose combination $\sum_j \gamma_j \Delta Z^{(j)}$ ideally should vanish. Result: solve the constrained linear system $\sum_j \gamma_j \Delta Z^{(j)} = 0$, $\sum_j \gamma_j = 1$, of $Nm$ equations in $K+1$ variables. Typically $K+1 \ll Nm$, hence the system is over-determined and must be solved approximately using least squares.
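
A small RRE sketch (Python, my notation): each iterate is flattened to a vector, the $K+1$ differences form the columns of a matrix $U$, and the constrained least-squares problem $\min_\gamma \|U\gamma\|$ s.t. $\sum_j \gamma_j = 1$ is solved in closed form as $\gamma \propto (U^\top U)^{-1}\mathbf{1}$; the tiny ridge term only guards against a singular $U^\top U$:

```python
import numpy as np

def rre(iterates):
    """iterates: list of K+2 arrays of equal shape; returns the extrapolated point."""
    Zs = [z.ravel() for z in iterates]
    U = np.stack([Zs[j + 1] - Zs[j] for j in range(len(Zs) - 1)], axis=1)  # differences
    K1 = U.shape[1]                                    # K + 1 coefficients
    G = U.T @ U
    gamma = np.linalg.solve(G + 1e-12 * np.eye(K1), np.ones(K1))
    gamma /= gamma.sum()                               # enforce sum(gamma) = 1
    z_hat = sum(g * z for g, z in zip(gamma, Zs[:K1])) # linear combination of iterates
    return z_hat.reshape(iterates[0].shape)
```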

Optimization with vector extrapolation. Start with some $Z^{(0)}$ and $k = 0$. Repeat: generate $K+1$ iterates using the optimization algorithm initialized by $Z^{(k)}$; extrapolate $\hat Z$ from these iterates; update $Z^{(k+1)} = \hat Z$; increment the iteration counter; until convergence.

Safeguarded optimization with vector extrapolation. Start with some $Z^{(0)}$ and $k = 0$. Repeat: generate $K+1$ iterates using the optimization algorithm initialized by $Z^{(k)}$; extrapolate $\hat Z$ from these iterates; if $\sigma(\hat Z)$ is smaller than the stress of the last iterate, set $Z^{(k+1)} = \hat Z$, else keep the last iterate; increment the iteration counter; until convergence.
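
A safeguarded wrapper sketch around the rre() helper above: `step` and `stress` are placeholder callables (e.g. one SMACOF update and the L2-stress evaluator), and the extrapolated point is accepted only if it actually lowers the stress, otherwise the plain iterate is kept:

```python
def safeguarded_rre(Z0, step, stress, K=8, n_cycles=20):
    """step(Z): one optimization iteration; stress(Z): stress value; rre() as above."""
    Z = Z0
    for _ in range(n_cycles):
        iterates = [Z]
        for _ in range(K + 1):                 # generate iterates by ordinary optimization
            iterates.append(step(iterates[-1]))
        Z_hat = rre(iterates)                  # extrapolate
        # safeguard: keep the extrapolated point only if it decreases the stress
        Z = Z_hat if stress(Z_hat) < stress(iterates[-1]) else iterates[-1]
    return Z
```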

MATLAB® intermezzo: RRE MDS.