On the ICP Algorithm. Esther Ezra, Micha Sharir, Alon Efrat.

1 On the ICP Algorithm. Esther Ezra, Micha Sharir, Alon Efrat.

2 The problem: Pattern Matching. Input: two point sets A = {a_1, …, a_m}, B = {b_1, …, b_n}, with A, B ⊂ R^d. Goal: translate A by a vector t ∈ R^d so that the (1-directional) cost function Φ(A+t, B) is minimized. Φ measures the resemblance between A+t and B, and is defined in terms of N_B(a), the nearest neighbor of a in B.

3 The algorithm [Besl & McKay 92]. Use local improvements. Set t_0 = 0 and repeat; at each iteration i, with cumulative translation t_{i-1} (the overall translation vector previously computed):
1. Assign each point a + t_{i-1} ∈ A + t_{i-1} to its nearest neighbor b = N_B(a + t_{i-1}); some of the points of A may acquire a new NN in B.
2. Compute the relative translation Δt_i that minimizes Φ with respect to this fixed (frozen) NN-assignment.
3. Translate the points of A by Δt_i (to be aligned to B), and set t_i ← t_{i-1} + Δt_i.
Stop when the value of Φ does not decrease.
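The loop above can be sketched in code. This is a minimal brute-force version for the RMS measure Φ(A+t, B) = (1/m) Σ_{a∈A} min_{b∈B} ||a + t − b||^2, under which the frozen minimizer Δt_i is the mean of the residuals b − (a + t); the helper names icp_rms and nearest are ours, not from the talk:

```python
def nearest(p, B):
    """Nearest neighbor of the point p (a d-tuple) in the set B."""
    return min(B, key=lambda b: sum((pi - bi) ** 2 for pi, bi in zip(p, b)))

def icp_rms(A, B, max_iter=1000):
    """ICP under the RMS measure. Returns the cumulative translation t and
    the list of relative translations dt_i. Points are d-tuples."""
    d = len(A[0])
    t = (0.0,) * d                       # t_0 = 0
    history = []
    for _ in range(max_iter):
        # Step 1: freeze the NN-assignment at the current translation.
        pairs = [(a, nearest(tuple(ai + ti for ai, ti in zip(a, t)), B))
                 for a in A]
        # Step 2: for the frozen assignment, the minimizing relative
        # translation is the mean of the residuals b - (a + t).
        dt = tuple(sum(b[j] - (a[j] + t[j]) for a, b in pairs) / len(A)
                   for j in range(d))
        if all(abs(c) < 1e-12 for c in dt):   # Phi no longer decreases
            break
        # Step 3: translate A by dt and accumulate.
        history.append(dt)
        t = tuple(ti + ci for ti, ci in zip(t, dt))
    return t, history
```

For example, icp_rms([(1.0, 2.0), (2.0, 3.0)], [(4.0, 6.0), (5.0, 7.0)]) stops after one move at t = (2.5, 3.5), a local minimum (the global one is t = (3, 4)), illustrating the convergence caveat of the next slide.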

4 The algorithm: Convergence [Besl & McKay 92]. 1. The value of Φ = Φ_2 decreases at each iteration. 2. The algorithm monotonically converges to a local minimum. This applies for Φ = Φ_∞ as well. (Figure: a local minimum of the ICP algorithm, with points a_1, a_2, a_3 and b_1, …, b_4; the global minimum is attained when a_1, a_2, a_3 are aligned on top of b_2, b_3, b_4.)

5 NN and Voronoi diagrams. Each NN-assignment can be interpreted via the Voronoi diagram V(B) of B: b is the NN of a if and only if a is contained in the Voronoi cell V(b) of V(B).

6 The problem under the RMS measure

7 The algorithm: An example in R^1 (m = 4, n = 2). A = {a_1 = −3, a_2 = −1, a_3 = 1, a_4 = 3}, B = {b_1 = 0, b_2 = 4}; the bisector of b_1, b_2 is (b_1 + b_2)/2 = 2. The cost is Φ(A+t, B) = RMS(t), the mean squared distance from each a + t to its NN in B; initially Φ(A, B) = 3. Iteration 1: a_1, a_2, a_3 are assigned to b_1 and a_4 to b_2; solving the equation RMS′(t) = 0 for this frozen assignment yields Δt_1 = 1.

8 An example in R^1, cont. Iteration 2: the translated points a_i + Δt_1 are −2, 0, 2, 4; now a_3 + Δt_1 = 2 lies on the bisector (b_1 + b_2)/2 = 2 and is assigned, together with a_4 + Δt_1, to b_2; solving again yields Δt_2 = 1, and Φ(A + Δt_1, B) = 2. Iteration 3: the points a_i + Δt_1 + Δt_2 are −1, 1, 3, 5; the NN-assignment does not change, so Δt_3 = 0 and the algorithm stops, with Φ(A + Δt_1 + Δt_2, B) = 1.
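The two iterations above can be replayed in a few lines. One subtlety: at iteration 2 the point a_3 + Δt_1 = 2 lies exactly on the bisector, and the slide breaks the tie toward b_2; the sketch below (our own code, not the authors') does the same explicitly:

```python
def rms(A, B, t):
    """Phi(A+t, B): mean squared distance from each a + t to its NN in B."""
    return sum(min((a + t - b) ** 2 for b in B) for a in A) / len(A)

A = [-3.0, -1.0, 1.0, 3.0]
B = [0.0, 4.0]
t, moves, costs = 0.0, [], [rms(A, B, 0.0)]
while True:
    # Frozen NN-assignment; a point lying on a bisector is sent to the
    # larger b, matching the tie-break used on the slide.
    nn = [max(B, key=lambda b: (-(a + t - b) ** 2, b)) for a in A]
    dt = sum(b - (a + t) for a, b in zip(A, nn)) / len(A)  # RMS'(t) = 0
    if dt == 0:
        break
    t += dt
    moves.append(dt)
    costs.append(rms(A, B, t))
```

Running it gives moves = [1.0, 1.0], costs = [3.0, 2.0, 1.0], and the final translation t = 2.0, matching the slides.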

9 Number of iterations: An upper bound. The value of Φ is reduced at each iteration, since: 1. the relative translation Δt minimizes Φ with respect to the frozen NN-assignment; 2. the value of Φ can only decrease with respect to the new NN-assignment (after translating by Δt). Hence no NN-assignment arises more than once, and #iterations = #NN-assignments that the algorithm reaches.

10 Number of iterations: A quadratic upper bound in R^1. Critical event: the NN of some point a_i changes, i.e., a_i crosses into a new Voronoi cell under the translation Δt. Two "consecutive" NN-assignments differ by one pair (a_i, b_j), so #NN-assignments ≤ |{(a_i, b_j) | a_i ∈ A, b_j ∈ B}| = nm.
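In R^1 this count can be checked directly: as t grows, the assignment changes only when some a_i + t crosses one of the n − 1 bisectors of consecutive points of B, so the number of distinct assignments is the number of distinct critical values of t plus one, at most m(n − 1) + 1 = O(nm). A small sketch (function name and data are ours):

```python
def nn_assignment_count(A, B):
    """Distinct NN-assignments of A + t to B over all t in R^1.  The
    assignment changes only when some a_i + t crosses a bisector of two
    consecutive points of B, so the count is #distinct critical t-values + 1."""
    Bs = sorted(B)
    bisectors = [(Bs[j] + Bs[j + 1]) / 2 for j in range(len(Bs) - 1)]
    critical = {beta - a for a in A for beta in bisectors}
    return len(critical) + 1

A = [0.0, 0.25, 0.8]
B = [0.0, 1.0, 3.0]
count = nn_assignment_count(A, B)   # at most m(n-1) + 1 = O(nm)
```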

11 The bound in R^1. Is the quadratic upper bound tight? A linear lower bound construction of Ω(m) iterations, with n = 2 and m = 2k + 2: B = {b_1 = 0, b_2 = 4k}, whose bisector is (b_1 + b_2)/2 = 2k, and A = {a_1 = −2k−1, a_2 = −2k+1, …, a_{m−1} = 2k−1, a_m = 2k+1}. The bound follows by induction on i = 1, …, k/2. Hence the linear bound is tight when n = 2.

12 Structural properties in R^1. Lemma: at each iteration i ≥ 2 of the algorithm, Δt_i has the same sign as Δt_{i−1} (proved by induction on i). Corollary (Monotonicity): the ICP algorithm always moves A in the same (left or right) direction: either Δt_i ≥ 0 for each i ≥ 0, or Δt_i ≤ 0 for each i ≥ 0.

13 A super-linear lower bound construction in R^1 (m = n). The construction: b_i = i − 1 for i = 1, …, n (so b_1 = 0, b_2 = 1, with bisector β_1 = 1/2), a_1 = −n − ε(n − 1), and a_i = (i − 1)/n − 1/2 + ε for i = 2, …, n, for a suitably small ε > 0. Theorem: the number of iterations of the ICP algorithm under the above construction is Θ(n log n).

14 (Figure: iterations 2 and 3 of the construction, around b_2 = 1, b_3 = 2, b_4 = 3.) At iteration 4, the NN of a_2 remains unchanged (b_3): only n − 2 points of A cross into V(b_4).

15 The lower bound construction in R^1. Round j: the n − 1 rightmost points of A are assigned to b_{n−j+1} and b_{n−j+2}. The round consists of steps in which points of A cross the bisector β_{n−j+1} = (b_{n−j+1} + b_{n−j+2})/2. Claim: at each such step i, the number of points of A that cross β_{n−j+1} is exactly j, except for the last step, at which the number of points of A that cross β_{n−j+1} or β_{n−j+2} is exactly j − 1. Hence the number of steps in the round is about n/j, and at the next round it is about n/(j − 1).

16 The lower bound construction in R^1. The overall number of steps is therefore Σ_j n/j = Θ(n log n).

17 The lower bound construction in R^1: Proof of the claim. Induction on j. Consider the last step i of round j (around b_{n−j+1}, b_{n−j+2}, b_{n−j+3}): l < j points have not yet crossed β_{n−j+1}; the points a_2, …, a_{l+1} cross β_{n−j+1}, and a_{n−(j−l−2)}, …, a_n cross β_{n−j+2}, while the remaining points still remain in V(b_{n−j+2}). The overall number of crossing points is l + (j − l − 1) = j − 1. QED

18 General structural properties in R^d. Theorem: Let A = {a_1, …, a_m}, B = {b_1, …, b_n} be two point sets in R^d. Then the overall number of NN-assignments, over all translated copies of A, is O(m^d n^d), and this bound is worst-case tight. Proof: A critical event occurs when the NN of a_i + t changes from b to b′; then t lies on the common boundary of V(b − a_i) and V(b′ − a_i), the Voronoi cells of the shifted diagram V(B − a_i).

19 # NN-assignments in R^d. The critical translations t for a fixed a_i are the cell boundaries of V(B − a_i). The set of all critical translations is the union of all Voronoi edges, i.e., the cell boundaries of the overlay M(A, B) of V(B − a_1), …, V(B − a_m). Each cell of M(A, B) consists of translations with a common NN-assignment, so #NN-assignments = #cells of M(A, B).

20 # NN-assignments in R^d. Claim: the overall number of cells of M(A, B) is O(m^d n^d). Proof: Charge each cell of M(A, B) to some vertex v of it; each v is charged at most 2^d times. The vertex v arises as a vertex of the overlay M_d of only d of the diagrams, and the complexity of M_d is O(n^d) [Koltun & Sharir 05]. There are O(m^d) d-tuples of the diagrams V(B − a_1), …, V(B − a_m). QED

21 Lower bound construction for M(A, B) (d = 2). Horizontal cells: Minkowski sums of horizontal cells of V(B) and (reflected) vertical points of A. Vertical cells: Minkowski sums of vertical cells of V(B) and (reflected) horizontal points of A. M(A, B) contains Ω(nm) horizontal cells that intersect Ω(nm) vertical cells, giving Ω(n^2 m^2) cells.

22 # NN-assignments in R^d. The lower bound construction can be extended to any dimension d. Open problem: can the ICP algorithm step through all (or many) NN-assignments in a single execution? Probably not. For d = 1 the gap remains: upper bound O(n^2), lower bound Ω(n log n).

23 Monotonicity. d = 1: the ICP algorithm always moves A in the same (left or right) direction. Generalization to d ≥ 2: let π be the connected path obtained by concatenating the ICP relative translations Δt_i, starting at the origin. Monotonicity: π does not intersect itself.

24 Monotonicity. Theorem: Let Δt be a move of the ICP algorithm from translation t_0 to t_0 + Δt. Then RMS(t_0 + λΔt) is a strictly decreasing function of λ ∈ [0, 1]. Hence the cost function is monotone decreasing along Δt, and π does not intersect itself.

25 Proof of the theorem. For a ∈ A, S_{B−a}(t) = min_{b∈B} ||a + t − b||^2 = min_{b∈B} (||t||^2 + 2t·(a − b) + ||a − b||^2) is the Voronoi surface whose minimization diagram is V(B − a). Thus S_{B−a}(t) − ||t||^2 is the lower envelope of n hyperplanes, i.e., the boundary of a concave polyhedron. RMS(t) is the average of the m Voronoi surfaces S_{B−a}(t), so Q(t) = RMS(t) − ||t||^2 is the average of {S_{B−a}(t) − ||t||^2}_{a∈A}, and is therefore also the boundary of a concave polyhedron.

26 Proof of the theorem. The NN-assignment at t_0 defines a facet f(t) of Q(t) that contains the point (t_0, Q(t_0)). Replace f(t) by the hyperplane h(t) containing it; h(t) corresponds to the frozen NN-assignment. The value of h(t) + ||t||^2 along Δt is monotone decreasing, since Δt minimizes the frozen cost, which is a convex quadratic with minimum at t_0 + Δt. Since Q(t) is concave and h(t) is tangent to Q(t) at t_0, the value of Q(t) + ||t||^2 along Δt decreases faster than the value of h(t) + ||t||^2. QED
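The concavity of Q(t) = RMS(t) − ||t||^2 that drives this argument can be verified numerically in R^1 on the example of slides 7 and 8; this check (sampling grid ours) assumes RMS(t) denotes the mean squared NN-distance, as on slide 25:

```python
def q(t, A, B):
    """Q(t) = RMS(t) - t^2 in R^1, with RMS(t) the mean squared NN-distance.
    Each summand min_b(...) - t^2 is a lower envelope of lines, so Q is concave."""
    return sum(min((a + t - b) ** 2 for b in B) for a in A) / len(A) - t * t

A = [-3.0, -1.0, 1.0, 3.0]
B = [0.0, 4.0]
xs = [i / 10.0 for i in range(-60, 61)]
# Midpoint concavity on all sampled pairs (up to rounding error).
concave = all(q((x + y) / 2, A, B) >= (q(x, A, B) + q(y, A, B)) / 2 - 1e-9
              for x in xs for y in xs)
```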

27 More structural properties of π. Corollary: the angle between any two consecutive edges of π is obtuse. Proof: Let Δt_k, Δt_{k+1} be two consecutive edges of π. Claim: Δt_{k+1} · Δt_k ≥ 0. Indeed, for a point a with b = N_B(a + t_{k−1}) and b′ = N_B(a + t_k), a must cross the bisector of b, b′, and b′ lies further ahead in the direction of Δt_k. QED

28 More structural properties: Potential. Lemma: at each iteration i ≥ 1 of the algorithm, two inequalities (i) and (ii) hold. Corollary: let Δt_1, …, Δt_k be the relative translations computed by the algorithm; then a bound on their total length follows, due to (i), due to (ii), and by the Cauchy-Schwarz inequality.

29 More structural properties: Potential. Corollary: for d = 1 this is trivial; for d ≥ 2, given that ||Δt_i|| ≥ ε for each i ≥ 1, the corollary bounds the number of iterations, since π does not get too close to itself.

30 The problem under the Hausdorff measure

31 The relative translation. Lemma: Let D_{i−1} be the smallest enclosing ball of {a + t_{i−1} − N_B(a + t_{i−1}) | a ∈ A}. Then Δt_i moves the center of D_{i−1} to the origin. Proof: The radius of D_{i−1} determines the (frozen) cost after the translation, and any infinitesimal move of D_{i−1} away from this position increases the (frozen) cost. QED
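In R^1 the "ball" D_{i−1} is just the smallest interval containing the residuals, so Δt_i = −(min + max)/2. A quick sketch (the example data is ours) confirms that this choice minimizes the frozen max-cost:

```python
def hausdorff_step(residuals):
    """Relative translation under the Hausdorff measure in R^1: move the
    center of the smallest interval enclosing the residuals to the origin."""
    lo, hi = min(residuals), max(residuals)
    return -(lo + hi) / 2.0

A = [-3.0, -1.0, 1.0, 3.5]
B = [0.0, 4.0]
res = [a - min(B, key=lambda b: abs(a - b)) for a in A]   # a - N_B(a) at t = 0
dt = hausdorff_step(res)
frozen = lambda s: max(abs(r + s) for r in res)           # frozen max-cost
# dt minimizes the frozen cost over a sampled range of translations.
optimal = all(frozen(dt) <= frozen(k / 50.0 - 5.0) + 1e-12 for k in range(501))
```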

32 No monotonicity (d ≥ 2). Lemma: in dimension d ≥ 2 the cost function does not necessarily decrease along Δt. Proof (by the example in the figure, with points a_1, a_2, a_3 and b, b′): initially ||a_1 − b|| = max_i ||a_i − b|| > r, for i = 1, 2, 3, and the final distances are ||a_2 + Δt − b|| = ||a_3 + Δt − b|| = r and ||a_1 + Δt − b′|| < r. Thus the distances of a_2, a_3 from b start increasing along Δt, while the distance of a_1 from its NN always decreases. QED

33 The one-dimensional problem. Lemma (Monotonicity): the algorithm always moves A in the same (left or right) direction. Proof: Let |a* − b*| = max_{a∈A} |N_B(a) − a|, and suppose a* < b*. Then a* − b* is the left endpoint of the initial minimum enclosing "ball" D_0 (the interval spanned by the residuals a − N_B(a)), whose center c < 0; hence Δt_1 = −c > 0 and |Δt_1| < |a* − b*|. Since a* + Δt_1 < b*, b* is still the NN of a* + Δt_1, and |a* + Δt_1 − b*| = max_{a∈A} |a + Δt_1 − N_B(a)| ≥ |a + Δt_1 − N_B(a + Δt_1)|, so a* + Δt_1 − b* is the left endpoint of D_1, i.e., a*, b* still determine the next relative translation. The lemma follows by induction. QED
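The monotonicity can be observed by running the one-dimensional Hausdorff variant; in the small run below (the data is ours), all relative translations point the same way:

```python
def icp_hausdorff_1d(A, B, max_iter=100):
    """ICP under the Hausdorff (max) measure in R^1; returns the list of
    relative translations. Each step re-centers the interval of residuals."""
    t, moves = 0.0, []
    for _ in range(max_iter):
        res = [a + t - min(B, key=lambda b: abs(a + t - b)) for a in A]
        dt = -(min(res) + max(res)) / 2.0
        if dt == 0.0:
            break
        moves.append(dt)
        t += dt
    return moves

moves = icp_hausdorff_1d([-8.0, 4.5], [0.0, 10.0])
same_direction = all(m > 0 for m in moves) or all(m < 0 for m in moves)
```

Here the second move is triggered by the right point crossing the bisector of B, yet both moves go right, as the lemma predicts.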

34 Number of iterations: An upper bound. Theorem: Let Δ_B be the spread of B, i.e., the ratio between the diameter of B and the distance between its closest pair. Then the number of iterations that the ICP algorithm executes is O((m + n) log Δ_B / log n). Corollary: the number of iterations is O(m + n) when Δ_B is polynomial in n.

35 The upper bound: Proof sketch. Use the fact that the pair a*, b* satisfying |a* − b*| = max_{a∈A} |N_B(a) − a| always determines the left endpoint of D_i. Put I_k = |b* − (a* + t_k)|, and classify each relative translation Δt_k as short or long, according to a threshold on its length. The long translations are easy to handle; overall, the bound O(n log Δ_B / log n) follows.

36 The upper bound: Proof sketch. Short relative translations: if a ∈ A is involved in a short relative translation at the k-th iteration, then |a + t_{k−1} − N_B(a + t_{k−1})| is large. If a is involved in an additional short relative translation, then a has to be moved further by |a + t_{k−1} − N_B(a + t_{k−1})|, so I_k = b* − (a* + t_k) must decrease significantly.

37 Number of iterations: A lower bound construction. Theorem: the number of iterations of the ICP algorithm under the following construction (with m = n) is Ω(n). At the i-th iteration: 1. Δt_i = 1/2^i, for i = 1, …, n − 2; 2. only a_{i+1} crosses into the next cell V(b_{i+2}). The overall translation length is < 1.

38 Many open problems: bounds on the number of iterations; how many local minima are there?; other cost functions; more structural properties; analyzing or improving the running time, vs. the running time of directly computing the minimizing translation; and so on…

