
1 Lucas-Kanade Image Alignment Iain Matthews

2 Paper Reading Simon Baker and Iain Matthews, "Lucas-Kanade 20 Years On: A Unifying Framework", Part 1 http://www.ri.cmu.edu/pub_files/pub3/baker_simon_2002_3/baker_simon_2002_3.pdf and the Project 3 description

3 Recall - Image Processing Lecture Some operations preserve the range of f but change its domain: what kinds of operations can these be? Still other operations act on both the domain and the range of f.
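As a small illustration (not from the slides): a range operation changes pixel values while leaving coordinates alone, and a domain operation resamples the image at transformed coordinates. A sketch in NumPy/SciPy:

```python
import numpy as np
from scipy.ndimage import map_coordinates

f = np.random.rand(64, 64)                     # an example grayscale image f(x)

# Range operation: pixel values change, the domain does not
g_range = 0.5 * f + 0.2

# Domain operation: g(x) = f(W(x)) for a translation warp W(x) = x + t
ys, xs = np.mgrid[0:64, 0:64].astype(float)
t = (3.0, -2.0)                                # (dy, dx) shift
g_domain = map_coordinates(f, [ys + t[0], xs + t[1]], order=1)
```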

4 Face Morphing

5

6

7 Applications of Image Alignment A ubiquitous computer vision technique: mosaicing, tracking, parametric and layered motion estimation, image registration and alignment, face coding / parameterization, super-resolution.

8 Generative Model for an Image Parameterized model: parameters (shape, appearance) → image.

9 Fitting a Model to an Image What are the best model parameters (shape, appearance) to match an image? This is a nonlinear optimization problem.

10 Active Appearance Model Cootes, Edwards, Taylor, 1998. Landmarks define the shape; the region of interest is warped to a reference frame to give the appearance.

11 Image Alignment Image I(x), warp W(x;p), template T(x). Image coordinates x = (x, y)ᵀ; warp parameters p = (p_1, p_2, …, p_n)ᵀ. The warp estimate is refined as W(x; p + Δp).

12 Want to: Minimize the Error Warp the image to get I(W(x;p)), then compute the error image T(x) - I(W(x;p)) against the template T(x).
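A minimal sketch of this step, assuming grayscale float NumPy images I and T of the same size and a simple translation warp W(x;p) = x + p (the slides use more general warps); the helper name is illustrative:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def error_image(I, T, p):
    """Error image T(x) - I(W(x;p)) for a translation warp W(x;p) = x + p."""
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Warp I: sample it at the translated coordinates (bilinear interpolation)
    I_warped = map_coordinates(I, [ys + p[1], xs + p[0]], order=1)
    return T - I_warped
```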

13 How to: Minimize the Error Minimise the SSD with respect to p: Σ_x [I(W(x;p)) - T(x)]². This is generally a nonlinear optimisation problem, so solve for increments Δp to the current estimate: minimise Σ_x [I(W(x;p+Δp)) - T(x)]² and update p ← p + Δp.

14 Linearize Taylor series expansion: linearize a function f about x₀, f(x) ≈ f(x₀) + f′(x₀)(x - x₀). For image alignment: I(W(x;p+Δp)) ≈ I(W(x;p)) + ∇I (∂W/∂p) Δp.

15 Gradient Descent Solution The linearized problem is a least squares problem: solve for Δp minimising Σ_x [I(W(x;p)) + ∇I (∂W/∂p) Δp - T(x)]². Solution: Δp = H⁻¹ Σ_x [∇I ∂W/∂p]ᵀ [T(x) - I(W(x;p))], where ∇I is the image gradient, ∂W/∂p is the warp Jacobian, T(x) - I(W(x;p)) is the error image, and H = Σ_x [∇I ∂W/∂p]ᵀ [∇I ∂W/∂p] is the (Gauss-Newton) Hessian.
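A small sketch of how this solution might be computed numerically, assuming the steepest-descent images ∇I ∂W/∂p have already been stacked into a matrix (variable names are illustrative, not from the paper):

```python
import numpy as np

def gauss_newton_step(sd, err):
    """One Gauss-Newton step.
    sd:  (N, n) steepest-descent images, one row per pixel, n warp parameters.
    err: (N,) flattened error image T(x) - I(W(x;p)).
    Returns delta p of length n."""
    H = sd.T @ sd                    # Gauss-Newton Hessian
    b = sd.T @ err                   # steepest-descent parameter updates
    return np.linalg.solve(H, b)
```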

16 Gradient Images Compute the image gradients I_x and I_y of I and warp them with W(x;p), just as the image itself is warped to give I(W(x;p)).
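A short sketch of computing the gradients with finite differences and warping them, again assuming a translation warp for simplicity:

```python
import numpy as np
from scipy.ndimage import map_coordinates

I = np.random.rand(64, 64)            # stand-in for the input image
Iy, Ix = np.gradient(I)               # np.gradient returns the y (row) derivative first

# Warp the gradients with W(x;p), here a translation p = (px, py)
p = (1.5, -0.5)
ys, xs = np.mgrid[0:64, 0:64].astype(float)
coords = [ys + p[1], xs + p[0]]
Ix_warped = map_coordinates(Ix, coords, order=1)
Iy_warped = map_coordinates(Iy, coords, order=1)
```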

17 Jacobian Compute the Jacobian ∂W/∂p of the warp. Here the warp is a mesh parameterization: image coordinates x = (x, y)ᵀ, warp parameters p = (p_1, p_2, …, p_n)ᵀ = (dx_1, dy_1, …, dx_n, dy_n)ᵀ, the displacements of the mesh vertices that carry the template T(x) onto the image I(x).
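The mesh-warp Jacobian itself is not reproduced here; as a simpler concrete case, a sketch of the Jacobian of the affine warp W(x;p) = ((1+p1)x + p3·y + p5, p2·x + (1+p4)·y + p6), the affine parameterization used in the Baker-Matthews paper:

```python
import numpy as np

def affine_jacobian(x, y):
    """Jacobian dW/dp of the affine warp
    W(x;p) = ((1+p1)*x + p3*y + p5,  p2*x + (1+p4)*y + p6),
    evaluated at pixel (x, y). Shape (2, 6)."""
    return np.array([[x, 0, y, 0, 1, 0],
                     [0, x, 0, y, 0, 1]], dtype=float)
```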

18 Lucas-Kanade Algorithm
1. Warp I with W(x;p) to give I(W(x;p))
2. Compute error image T(x) - I(W(x;p))
3. Warp gradient of I to compute ∇I
4. Evaluate Jacobian ∂W/∂p
5. Compute Hessian H
6. Compute Δp
7. Update parameters p ← p + Δp
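A minimal, illustrative implementation of these seven steps for a translation-only warp W(x;p) = x + p, for which the Jacobian is the 2x2 identity; a sketch under those assumptions, not the authors' code:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def lucas_kanade_translation(I, T, p=np.zeros(2), n_iters=50, tol=1e-4):
    """Forwards-additive Lucas-Kanade for a translation warp W(x;p) = x + p.
    I, T: grayscale float images of the same shape. Returns the estimated p."""
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    Iy, Ix = np.gradient(I)                       # gradients of the input image
    p = np.asarray(p, dtype=float)
    for _ in range(n_iters):
        # 1. Warp I with W(x;p): sample I at x + p
        coords = [ys + p[1], xs + p[0]]
        Iw = map_coordinates(I, coords, order=1)
        # 2. Error image T(x) - I(W(x;p))
        err = (T - Iw).ravel()
        # 3. Warp the gradient of I
        Ixw = map_coordinates(Ix, coords, order=1)
        Iyw = map_coordinates(Iy, coords, order=1)
        # 4. Jacobian of a translation warp is the identity, so the
        #    steepest-descent images are just the warped gradients
        sd = np.stack([Ixw.ravel(), Iyw.ravel()], axis=1)   # (N, 2)
        # 5. Hessian
        H = sd.T @ sd
        # 6. Delta p
        dp = np.linalg.solve(H, sd.T @ err)
        # 7. Additive update p <- p + dp
        p = p + dp
        if np.linalg.norm(dp) < tol:
            break
    return p
```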

19 Fast Gradient Descent? To reduce Hessian computation: 1.Make Jacobian simple (or constant) 2.Avoid computing gradients on I

20 Shum-Szeliski Image Alignment Additive image alignment (Lucas, Kanade): the warp is updated as W(x; p + Δp). Compositional alignment (Shum, Szeliski): the warp is updated as W(x;p) ∘ W(x;Δp), where the incremental warp is W(x; 0 + Δp) = W(x;Δp).

21 Compositional Image Alignment Minimise Σ_x [I(W(W(x;Δp);p)) - T(x)]² with respect to Δp. The Jacobian is constant, evaluated at (x, 0) → “simple”.

22 Compositional Algorithm
1. Warp I with W(x;p) to give I(W(x;p))
2. Compute error image T(x) - I(W(x;p))
3. Warp gradient of I to compute ∇I
4. Evaluate Jacobian ∂W/∂p at (x;0)
5. Compute Hessian H
6. Compute Δp
7. Update W(x;p) ← W(x;p) ∘ W(x;Δp)
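A small sketch of the compositional update in step 7, assuming the affine parameterization above and representing warps as 3x3 matrices in homogeneous coordinates:

```python
import numpy as np

def affine_matrix(p):
    """3x3 homogeneous matrix for W(x;p) = ((1+p1)*x + p3*y + p5, p2*x + (1+p4)*y + p6)."""
    return np.array([[1 + p[0], p[2],     p[4]],
                     [p[1],     1 + p[3], p[5]],
                     [0.0,      0.0,      1.0]])

def compose(p, dp):
    """Compositional update W(x;p) <- W(x;p) o W(x;dp), returned as warp parameters."""
    A = affine_matrix(p) @ affine_matrix(dp)
    return np.array([A[0, 0] - 1, A[1, 0], A[0, 1], A[1, 1] - 1, A[0, 2], A[1, 2]])
```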

23 Inverse Compositional Why compute updates on I? Can we reverse the roles of the images? Yes! [Baker, Matthews CMU-RI-TR-01-03], with a proof that the algorithms take the same steps (to first order).

24 Inverse Compositional Forwards compositional: the incremental warp is computed from I(W(x;p)) and composed as W(x;p) ∘ W(x;Δp). Inverse compositional: the incremental warp is computed on the template T and its inverse is composed, W(x;p) ∘ W(x;Δp)⁻¹.

25 Inverse Compositional Minimise Σ_x [T(W(x;Δp)) - I(W(x;p))]² with respect to Δp. Solution: Δp = H⁻¹ Σ_x [∇T ∂W/∂p]ᵀ [I(W(x;p)) - T(x)]. Update: W(x;p) ← W(x;p) ∘ W(x;Δp)⁻¹.

26 Inverse Compositional The Jacobian is constant (evaluated at (x, 0)), the gradient of the template is constant, and the Hessian is constant: everything but the error image can be pre-computed!

27 Inverse Compositional Algorithm
1. Warp I with W(x;p) to give I(W(x;p))
2. Compute error image T(x) - I(W(x;p))
3. Compute gradient ∇T of the template (pre-computed)
4. Evaluate Jacobian ∂W/∂p at (x;0) (pre-computed)
5. Compute Hessian H (pre-computed)
6. Compute Δp
7. Update W(x;p) ← W(x;p) ∘ W(x;Δp)⁻¹
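A minimal sketch of the inverse compositional loop for the same translation warp as before, showing that the template gradient, Jacobian, and Hessian are computed once up front (illustrative only):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_compositional_translation(I, T, p=np.zeros(2), n_iters=50, tol=1e-4):
    """Inverse compositional alignment for a translation warp W(x;p) = x + p.
    I, T: grayscale float images of the same shape. Returns the estimated p."""
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)

    # Pre-compute: template gradient, steepest-descent images and Hessian
    # (the Jacobian of a translation warp is the identity)
    Ty, Tx = np.gradient(T)
    sd = np.stack([Tx.ravel(), Ty.ravel()], axis=1)      # (N, 2)
    H_inv = np.linalg.inv(sd.T @ sd)

    p = np.asarray(p, dtype=float)
    for _ in range(n_iters):
        # 1. Warp I with W(x;p);  2. error image I(W(x;p)) - T(x)
        Iw = map_coordinates(I, [ys + p[1], xs + p[0]], order=1)
        err = (Iw - T).ravel()
        # 6. Compute delta p from the pre-computed pieces
        dp = H_inv @ (sd.T @ err)
        # 7. Update W(x;p) <- W(x;p) o W(x;dp)^-1
        #    For a translation warp this composition is simply p <- p - dp.
        p = p - dp
        if np.linalg.norm(dp) < tol:
            break
    return p
```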

28 Framework Baker and Matthews 2003: formulated the framework, proved equivalence.
Algorithm               | Can be applied to | Efficient? | Authors
Forwards Additive       | Any               | No         | Lucas, Kanade
Forwards Compositional  | Any semi-group    | No         | Shum, Szeliski
Inverse Compositional   | Any group         | Yes        | Baker, Matthews
Inverse Additive        | Simple linear 2D+ | Yes        | Hager, Belhumeur

29 Example

