On the ICP Algorithm
Esther Ezra, Micha Sharir, Alon Efrat

The problem: Pattern matching
Input: A = {a_1, …, a_m}, B = {b_1, …, b_n}, with A, B ⊆ R^d.
Goal: translate A by a vector t ∈ R^d such that the (1-directional) cost function φ(A+t, B) is minimized. Here N_B(a) denotes the nearest neighbor of a in B, and φ measures the resemblance of A+t to B.

The algorithm [Besl & McKay 92]
Use local improvements. Set t_0 = 0, and repeat: at each iteration i, with cumulative translation t_{i-1} (the overall translation vector previously computed),
1. Assign each point a + t_{i-1} ∈ A + t_{i-1} to its nearest neighbor b = N_B(a + t_{i-1}). (Some of the points of A may acquire a new NN in B.)
2. Compute the relative translation Δt_i that minimizes φ with respect to this fixed (frozen) NN-assignment.
3. Translate the points of A by Δt_i (to be aligned to B), and set t_i = t_{i-1} + Δt_i.
Stop when the value of φ does not decrease.
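
The loop above can be sketched in Python for the translation-only, averaged-squared-cost setting (a sketch with illustrative names, not the paper's implementation; it uses the fact that, under a frozen NN-assignment, the averaged squared cost is minimized by the mean residual):

```python
def icp_translate(A, B, max_iter=100):
    """Sketch of ICP restricted to translations, under the averaged
    squared cost phi(A+t, B) = (1/m) * sum_a |a+t - N_B(a+t)|^2.
    Points are d-dimensional tuples."""
    d = len(A[0])
    t = [0.0] * d                        # cumulative translation, t_0 = 0
    for _ in range(max_iter):
        shifted = [tuple(x + ti for x, ti in zip(a, t)) for a in A]
        # Frozen NN-assignment: nearest neighbor in B of each shifted point.
        nn = [min(B, key=lambda b: sum((x - y) ** 2 for x, y in zip(p, b)))
              for p in shifted]
        # For the frozen assignment, phi is minimized by the mean residual.
        dt = [sum(b[k] - p[k] for p, b in zip(shifted, nn)) / len(A)
              for k in range(d)]
        if all(abs(x) < 1e-12 for x in dt):   # phi no longer decreases
            break
        t = [ti + x for ti, x in zip(t, dt)]
    return t
```

For example, `icp_translate([(0.0,), (3.0,)], [(5.0,), (7.0,)])` converges in two moves to the translation t = 4.5, at which each point sits at distance 1/2 from its nearest neighbor.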

The algorithm: Convergence [Besl & McKay 92]
1. The value of φ = φ_2 (the RMS cost) decreases at each iteration.
2. The algorithm monotonically converges to a local minimum.
This applies for φ = φ_∞ (the Hausdorff cost) as well.
A local minimum of the ICP algorithm (A = {a_1, a_2, a_3}, B = {b_1, b_2, b_3, b_4}): the global minimum is attained when a_1, a_2, a_3 are aligned on top of b_2, b_3, b_4.

NN and Voronoi diagrams
Each NN-assignment can be interpreted via the Voronoi diagram V(B) of B: b is the NN of a if and only if a is contained in the Voronoi cell V(b) of V(B).

The problem under the RMS measure

The algorithm: An example in R^1 (m = 4, n = 2)
B: b_1 = 0, b_2 = 4; bisector (b_1 + b_2)/2 = 2.
A: a_1 = -3-ε (for a small ε > 0), a_2 = -1, a_3 = 1, a_4 = 3.
Here φ(A+t, B) = RMS(t), the averaged squared NN-distance; initially φ(A, B) ≈ 3.
Iteration 1: solving the equation RMS'(t) = 0 for the frozen NN-assignment gives Δt_1 ≈ 1.

An example in R^1: Cont.
Iteration 2: b_1 = 0, b_2 = 4; a_3 + Δt_1 and a_4 + Δt_1 have crossed the bisector (b_1 + b_2)/2 = 2 into V(b_2), while a_1 + Δt_1 and a_2 + Δt_1 remain in V(b_1). Now φ(A+Δt_1, B) ≈ 2, and the new frozen assignment yields Δt_2 = 1.
Iteration 3: the NN-assignment of a_1 + Δt_1 + Δt_2, …, a_4 + Δt_1 + Δt_2 does not change, so Δt_3 = 0; the algorithm stops with φ(A+Δt_1+Δt_2, B) ≈ 1.
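
The example above can be replayed numerically (a sketch; the exact offset ε of the slides is not recoverable, so a small value is assumed here):

```python
# Replaying the R^1 example: B = {0, 4}, A = {-3-eps, -1, 1, 3}.
eps = 0.25                          # assumed small offset, for illustration
A = [-3 - eps, -1.0, 1.0, 3.0]      # a_1, a_2, a_3, a_4
B = [0.0, 4.0]                      # b_1, b_2; bisector at 2

t, dts = 0.0, []
for _ in range(10):
    nn = [min(B, key=lambda b: abs(a + t - b)) for a in A]
    # RMS'(t) = 0 for the frozen assignment: dt is the mean residual.
    dt = sum(b - (a + t) for a, b in zip(A, nn)) / len(A)
    dts.append(dt)
    t += dt
    if dt == 0.0:
        break
# dts == [1 + eps/4, 1.0, 0.0]: three iterations, as on the slides.
```

The relative translations come out to Δt_1 = 1 + ε/4 ≈ 1, Δt_2 = 1, Δt_3 = 0, matching the three iterations described above.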

Number of iterations: An upper bound
The value of φ is reduced at each iteration, since
1. The relative translation Δt minimizes φ with respect to the frozen NN-assignment.
2. The value of φ can only decrease with respect to the new NN-assignment (after translating by Δt).
Hence no NN-assignment arises more than once, and #iterations ≤ #NN-assignments (that the algorithm reaches).

Number of iterations: A quadratic upper bound in R^1
Critical event: the NN of some point a_i changes, i.e., a_i crosses (under the translation Δt) into a new Voronoi cell. Two "consecutive" NN-assignments differ by one pair (a_i, b_j), so
#NN-assignments ≤ |{(a_i, b_j) | a_i ∈ A, b_j ∈ B}| = nm.
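
The counting argument can be checked directly on the line: the NN-assignment changes only at translations where some a_i + t lands on a bisector of consecutive points of B, so the number of distinct assignments over all t is the number of such critical values plus one (a sketch, assuming generic position; the function name is illustrative):

```python
def nn_assignment_count_1d(A, B):
    """Number of distinct NN-assignments of A+t to B over all t in R,
    for points on the line. Each critical translation places some a_i
    on a bisector (b_j + b_{j+1})/2, changing the assignment of one pair."""
    B = sorted(B)
    critical = {(B[j] + B[j + 1]) / 2 - a
                for a in A for j in range(len(B) - 1)}
    # One assignment per open interval between consecutive critical values.
    return len(critical) + 1
```

Since there are at most m(n-1) critical pairs, the count never exceeds the quadratic bound nm (up to the +1 for the two unbounded intervals).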

The bound in R^1
Is the quadratic upper bound tight? A linear lower bound construction for n = 2 (m = 2k+2): b_1 = 0, b_2 = 4k, bisector (b_1 + b_2)/2 = 2k; a_1 = -2k-1, a_2 = -2k+1, …, a_{m-1} = 2k-1, a_m = 2k+1. Using induction on i = 1, …, k/2, the algorithm performs Ω(m) iterations, so the bound is tight when n = 2.

Structural properties in R^1
Lemma: at each iteration i ≥ 2 of the algorithm, Δt_i has the same sign as Δt_{i-1} (by induction on i).
Corollary (Monotonicity): the ICP algorithm always moves A in the same (left or right) direction; either Δt_i ≥ 0 for each i ≥ 0, or Δt_i ≤ 0 for each i ≥ 0.

A super-linear lower bound construction in R^1 (m = n)
The construction: a_1 = -n - ε(n-1), a_i = (i-1)/n - 1/2 + ε, b_i = i-1, for i = 1, …, n (for a small ε > 0); the first bisector is β_1 = (b_1 + b_2)/2 = 1/2.
Theorem: the number of iterations of the ICP algorithm under the above construction is Ω(n log n).

Iterations 2 and 3 (b_2 = 1, b_3 = 2, b_4 = 3): points of A cross the corresponding bisectors. At iteration 4, the NN of a_2 remains unchanged (b_3): only n-2 points of A cross into V(b_4).

The lower bound construction in R^1
Round j: the n-1 rightmost points of A are assigned to b_{n-j+1} and b_{n-j+2}. The round consists of steps in which points of A cross the bisector β_{n-j+1} = (b_{n-j+1} + b_{n-j+2}) / 2.
Claim: at each such step i, the number of points of A that cross β_{n-j+1} is exactly j (except for the last step); at the last such step, the number of points of A that cross β_{n-j+1} or β_{n-j+2} is exactly j-1. Hence the number of steps in round j is about n/j, and in the next round about n/(j-1).

The lower bound construction in R^1
The overall number of steps is therefore Σ_j Ω(n/j) = Ω(n · Σ_j 1/j) = Ω(n log n).

The lower bound construction in R^1: Proof of the claim
By induction on j. Consider the last step i of round j, at which l < j points have not yet crossed β_{n-j+1}. The points a_2, …, a_{l+1} cross β_{n-j+1}, and a_{n-(j-l-2)}, …, a_n cross β_{n-j+2}; the remaining points still stay in V(b_{n-j+2}). The overall number of crossing points is l + (j - l - 1) = j - 1. QED

General structural properties in R^d
Theorem: let A = {a_1, …, a_m}, B = {b_1, …, b_n} be two point sets in R^d. Then the overall number of NN-assignments, over all translated copies of A, is O(m^d n^d), and this bound is worst-case tight.
Proof: a critical event occurs when the NN of a_i + t changes from b to b', i.e., when t lies on the common boundary of the cells V(b - a_i) and V(b' - a_i) of the shifted Voronoi diagram V(B - a_i).

# NN-assignments in R^d
The critical translations t for a fixed a_i are the cell boundaries of V(B - a_i). Hence the set of all critical translations is the union of all Voronoi edges, i.e., the cell boundaries of the overlay M(A,B) of V(B - a_1), …, V(B - a_m). Each cell of M(A,B) consists of translations with a common NN-assignment, so #NN-assignments = #cells in M(A,B).

# NN-assignments in R^d
Claim: the overall number of cells of M(A,B) is O(m^d n^d).
Proof: charge each cell of M(A,B) to some vertex v of it (each v is charged at most 2^d times). Each vertex v arises as a vertex of the overlay M_d of only d of the diagrams, and the complexity of M_d is O(n^d) [Koltun & Sharir 05]. There are O(m^d) d-tuples of the diagrams V(B - a_1), …, V(B - a_m), and the bound follows. QED

Lower bound construction for M(A,B), d = 2 (the figure shows A and B placed in a unit square with spacing 1/m).
Horizontal cells: Minkowski sums of horizontal cells of V(B) and (reflected) vertical points of A. Vertical cells: Minkowski sums of vertical cells of V(B) and (reflected) horizontal points of A. M(A,B) contains Ω(nm) horizontal cells that intersect Ω(nm) vertical cells: Ω(n^2 m^2) cells.

# NN-assignments in R^d
The lower bound construction can be extended to any dimension d.
For d = 1: upper bound O(n^2); lower bound Ω(n log n).
Open problem: can the ICP algorithm step through all (or many) NN-assignments in a single execution? Probably not.

Monotonicity
d = 1: the ICP algorithm always moves A in the same (left or right) direction.
Generalization to d ≥ 2: let π be the connected path obtained by concatenating the ICP relative translations Δt (π starts at the origin). Monotonicity: π does not intersect itself.

Monotonicity
Theorem: let Δt be a move of the ICP algorithm from translation t_0 to t_0 + Δt. Then RMS(t_0 + λ·Δt) is a strictly decreasing function of λ ∈ [0, 1]; that is, the cost function is monotone decreasing along Δt. Consequently, the path π of relative translations does not intersect itself.

Proof of the theorem
For a ∈ A, S_{B-a}(t) = min_{b∈B} ||a+t-b||^2 = min_{b∈B} (||t||^2 + 2t·(a-b) + ||a-b||^2) is the Voronoi surface whose minimization diagram is V(B-a). Hence S_{B-a}(t) - ||t||^2 is the lower envelope of n hyperplanes, i.e., the boundary of a concave polyhedron. Here RMS(t) denotes the average of the m Voronoi surfaces S_{B-a}(t) over a ∈ A, so Q(t) = RMS(t) - ||t||^2 is the average of the functions S_{B-a}(t) - ||t||^2, and is likewise the boundary of a concave polyhedron.

Proof of the theorem
The NN-assignment at t_0 defines a facet f(t) of Q(t) which contains the point (t_0, Q(t_0)). Replace f(t) by the hyperplane h(t) containing it; h(t) corresponds to the frozen NN-assignment. The value of h(t) + ||t||^2 along Δt is monotone decreasing. Since Q(t) is concave, and h(t) is tangent to Q(t) at t_0, the value of Q(t) + ||t||^2 along Δt decreases faster than the value of h(t) + ||t||^2. QED
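
The structural fact driving the proof, that Q(t) = RMS(t) - ||t||^2 is an average of lower envelopes of hyperplanes and hence concave, can be checked numerically (a sketch; RMS here is the averaged squared cost, as in the proof):

```python
import random

def Q(t, A, B):
    """Q(t) = (1/m) * sum_a min_b ||a+t-b||^2  -  ||t||^2. Expanding
    ||a+t-b||^2 = ||t||^2 + 2t.(a-b) + ||a-b||^2 shows each summand
    minus ||t||^2 is a min of affine functions of t, so Q is concave."""
    m = len(A)
    env = sum(min(sum((ai + ti - bi) ** 2 for ai, ti, bi in zip(a, t, b))
                  for b in B) for a in A) / m
    return env - sum(ti * ti for ti in t)

rng = random.Random(0)
pt = lambda: (rng.uniform(-1, 1), rng.uniform(-1, 1))
A, B = [pt() for _ in range(5)], [pt() for _ in range(4)]
t1, t2 = pt(), pt()
mid = tuple((x + y) / 2 for x, y in zip(t1, t2))
# Midpoint concavity: Q at the midpoint dominates the average of the ends.
assert Q(mid, A, B) >= (Q(t1, A, B) + Q(t2, A, B)) / 2 - 1e-12
```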

More structural properties of π
Corollary: the angle between any two consecutive edges of π is obtuse.
Proof: let Δt_k, Δt_{k+1} be two consecutive edges of π; we claim Δt_{k+1} · Δt_k ≥ 0. Consider a point a with b = N_B(a + t_{k-1}) and b' = N_B(a + t_k); since b' lies further ahead (in the direction of Δt_k), a must cross the bisector of b and b'. QED

More structural properties: Potential
Lemma: at each iteration i ≥ 1 of the algorithm, (i) the frozen cost decreases by exactly ||Δt_i||^2 along Δt_i, and (ii) RMS(t_i) ≤ RMS(t_{i-1}) - ||Δt_i||^2.
Corollary: let Δt_1, …, Δt_k be the relative translations computed by the algorithm. Then, due to (ii), Σ_{i=1}^k ||Δt_i||^2 ≤ RMS(t_0) by telescoping, and hence, by the Cauchy-Schwarz inequality, Σ_{i=1}^k ||Δt_i|| ≤ (k · RMS(t_0))^{1/2}.

More structural properties: Potential
Corollary: for d = 1 the consequence is trivial; for d ≥ 2, given that ||Δt_i|| ≥ ρ for each i ≥ 1, the number of iterations k satisfies k ≤ RMS(t_0)/ρ^2. In particular, the path π does not get too close to itself.

The problem under the Hausdorff measure

The relative translation
Lemma: let D_{i-1} be the smallest enclosing ball of {a + t_{i-1} - N_B(a + t_{i-1}) | a ∈ A}. Then Δt_i moves the center of D_{i-1} to the origin.
Proof: the radius of D determines the (frozen) cost after the translation, and any infinitesimal move of D increases the (frozen) cost. QED

No monotonicity (d ≥ 2)
Lemma: in dimension d ≥ 2 the cost function does not necessarily decrease along Δt.
Proof (by example): take points a_1, a_2, a_3 and b, b' with, initially, ||a_1 - b|| = max_i ||a_i - b|| > r, and final distances ||a_2 + Δt - b|| = ||a_3 + Δt - b|| = r and ||a_1 + Δt - b'|| < r. Along Δt the distances of a_2, a_3 from b start increasing, while the distance of a_1 from its NN always decreases; hence the (Hausdorff) cost is not monotone along Δt. QED

The one-dimensional problem
Lemma (Monotonicity): the algorithm always moves A in the same (left or right) direction.
Proof: let |a* - b*| = max_{a∈A} |N_B(a) - a|, and suppose a* < b*. Then a* - b* is the left endpoint of the initial minimum enclosing "ball" (interval) D_0 of the residuals a - N_B(a), and its center c < 0, so Δt_1 > 0 and |Δt_1| < |a* - b*|. Hence a* + Δt_1 < b*, so b* is still the NN of a* + Δt_1, and |a* + Δt_1 - b*| = max_{a∈A} |a + Δt_1 - N_B(a)| ≥ |a + Δt_1 - N_B(a + Δt_1)|. Thus a* + Δt_1 - b* is the left endpoint of D_1, and a*, b* still determine the next relative translation. The lemma follows by induction. QED
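
The one-dimensional Hausdorff variant can be sketched directly from the lemma on the relative translation: for a frozen assignment the optimal move centers the smallest enclosing "ball" (here, interval) of the residuals at the origin, i.e. Δt = -(min + max)/2 (illustrative code, not the paper's implementation):

```python
def icp_hausdorff_1d(A, B, max_iter=100):
    """ICP on the line under the Hausdorff cost max_a |a+t - N_B(a+t)|.
    For a frozen NN-assignment the optimal move centers the residual
    interval at the origin: dt = -(min(res) + max(res)) / 2."""
    t, moves = 0.0, []
    for _ in range(max_iter):
        res = [a + t - min(B, key=lambda b: abs(a + t - b)) for a in A]
        dt = -(min(res) + max(res)) / 2
        if dt == 0.0:
            break
        moves.append(dt)
        t += dt
    return t, moves
```

On any one-dimensional input, the recorded moves all carry the same sign, in line with the monotonicity lemma; e.g. `icp_hausdorff_1d([1.0, 3.0], [0.0, 2.0])` returns t = -1 after a single move.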

Number of iterations: An upper bound
Theorem: let Φ_B be the spread of B, i.e., the ratio between the diameter of B and the distance between its closest pair. Then the number of iterations that the ICP algorithm executes is O((m+n) log Φ_B / log n).
Corollary: the number of iterations is O(m+n) when Φ_B is polynomial in n.

The upper bound: Proof sketch
Use the fact that the pair a*, b* satisfying |a* - b*| = max_{a∈A} |N_B(a) - a| always determines the left endpoint of D_i. Put I_k = |b* - (a* + t_k)|, and classify each relative translation Δt_k as short or long, according to its length relative to I_{k-1}. The long relative translations are easy to handle: overall, they number O(n log Φ_B / log n).

The upper bound: Proof sketch
Short relative translations: if a ∈ A is involved in a short relative translation at the k-th iteration, then |a + t_{k-1} - N_B(a + t_{k-1})| is large. If a is involved in an additional short relative translation, it has to be moved further by |a + t_{k-1} - N_B(a + t_{k-1})|, so I_k = b* - (a* + t_k) must decrease significantly.

Number of iterations: A lower bound construction
Theorem: the number of iterations of the ICP algorithm under the following construction (m = n) is Ω(n).
The construction places a_1, …, a_n and b_1, …, b_n on the line so that at the i-th iteration:
1. Δt_i = 1/2^i, for i = 1, …, n-2.
2. Only a_{i+1} crosses into the next cell V(b_{i+2}).
The overall translation length is < 1.

Many open problems:
Bounds on the number of iterations.
How many local minima?
Other cost functions.
More structural properties.
Analyze / improve the running time, vs. the running time of directly computing the minimizing translation.
And so on…