Intelligent Scissors for Image Composition Anthony Dotterer 01/17/2006.

Presentation transcript:

1 Intelligent Scissors for Image Composition Anthony Dotterer 01/17/2006

2 Citation
Title
–Intelligent Scissors for Image Composition
Authors
–Eric N. Mortensen
–William A. Barrett
Publication
–SIGGRAPH '95 (1995)

3 Intelligent Scissors Tool
Interactive image segmentation and composition tool
–Easy to use
–Quick
–Quality output
Features
–Best path along image edges
–Cooling
–On-the-fly training
–Source-to-destination warping and composition
–Destination matching

4 Intelligent Scissors
Need
–Optimal path along edges starting at a ‘seed’ point
–Optimal path creation must be quick
Solution
–Use dynamic programming to create a path reference
–Local cost definition
–Path reference creation

5 Local Cost Definition
Define l(p, q) as the cost of going from pixel p to pixel q
Incorporate several edge functions into the cost
–Laplacian zero-crossing, f_Z(q)
–Gradient magnitude, f_G(q)
–Gradient direction, f_D(p, q)
Relate the edge functions to the cost function
–Use ω_Z, ω_D, ω_G as constants to weight the features
l(p, q) = ω_Z · f_Z(q) + ω_D · f_D(p, q) + ω_G · f_G(q)
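The weighted sum above can be sketched directly in Python. The slide gives no weight values, so the defaults below are illustrative assumptions, not the paper's empirical choices:

```python
def local_cost(f_z_q, f_d_pq, f_g_q,
               w_z=0.43, w_d=0.43, w_g=0.14):
    """Weighted local link cost l(p, q) from the three edge features.

    The default weights are illustrative placeholders; any non-negative
    weights that sum to 1 keep the cost in [0, 1] when the features do.
    """
    return w_z * f_z_q + w_d * f_d_pq + w_g * f_g_q
```

With all three features at their extremes, the cost spans the full range: all-zero features give cost 0, all-one features give the sum of the weights.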

6 Laplacian Zero-Crossing
Properties
–Approximates the 2nd partial derivative of the image
–Zero-crossings represent maxima and minima: good image edges
Cost
–Define I_L(q) as the Laplacian at pixel q
–Get low cost by defining the Laplacian cost as binary:
f_Z(q) = 0 if I_L(q) = 0; 1 if I_L(q) ≠ 0
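A minimal sketch of the binary Laplacian cost, using a hand-rolled 4-neighbor Laplacian kernel (an implementation choice, not specified on the slide):

```python
import numpy as np

def laplacian(img):
    """4-neighbor discrete Laplacian; borders are left at zero."""
    img = img.astype(float)
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4.0 * img[1:-1, 1:-1])
    return out

def f_z(img):
    """Binary Laplacian cost: 0 where the Laplacian is (near) zero,
    1 elsewhere. A tolerance stands in for the 'closest to zero'
    refinement on the next slide."""
    return (np.abs(laplacian(img)) > 1e-9).astype(float)
```

On a constant image the Laplacian vanishes everywhere, so the cost is zero; next to a step edge it is nonzero.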

7 Laplacian Zero-Crossing (cont.)
Issue
–Zeros rarely occur exactly
Solution
–Use the pixel closest to zero
Examples
–Image (top)
–Laplacian (bottom)

8 Gradient Magnitude
Properties
–Magnitude of the 1st partial derivatives of the image
–Direct correlation between edge strength and local cost
Cost
–Define G as the gradient magnitude: G = √(I_x² + I_y²)
–Get low cost by inverting and scaling: f_G = 1 − G / max(G)
–Also factor in Euclidean distance
–Scale diagonal links by 1, horizontal/vertical links by 1/√2
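A sketch of the inverted, scaled gradient cost with the distance weighting (here diagonal links are scaled by 1 and horizontal/vertical links by 1/√2, following the paper's Euclidean-distance weighting; the gradient operator itself is an assumption, via central differences):

```python
import numpy as np

def f_g(img):
    """Inverted, scaled gradient-magnitude cost: strong edges -> low cost."""
    iy, ix = np.gradient(img.astype(float))   # central differences
    g = np.sqrt(ix ** 2 + iy ** 2)
    m = g.max()
    return 1.0 - g / (m if m > 0 else 1.0)    # guard constant images

def link_cost_g(fg, p, q):
    """Gradient cost of the link p -> q among 8-neighbors, scaled for
    Euclidean distance: 1 for diagonal links, 1/sqrt(2) otherwise."""
    diagonal = p[0] != q[0] and p[1] != q[1]
    return fg[q] * (1.0 if diagonal else 1.0 / np.sqrt(2))
```

For a vertical step edge, the cost drops to 0 along the edge and stays at 1 in the flat regions.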

9 Gradient Magnitude (cont.)
Examples
–Original (top left)
–Gradient magnitude (top right)
–Inverted & scaled gradient magnitude (bottom)

10 Gradient Direction
Properties
–Vectors created by the 1st derivatives of the image
–High cost for sharp changes in boundary direction: adds a smoothing constraint
Cost
–Give low cost to gradients in the same direction
–Define D(p) as the unit vector perpendicular to the gradient vector at point p: D(p) = norm(I_y(p), −I_x(p))

11 Gradient Direction (cont.)
Cost
–Define L(p, q) to be the link between points p and q, such that
L(p, q) = q − p if D(p) · (q − p) ≥ 0; p − q if D(p) · (q − p) < 0
–Define d_p(p, q) and d_q(p, q) as follows:
d_p(p, q) = D(p) · L(p, q)
d_q(p, q) = L(p, q) · D(q)
–Finally, the cost function:
f_D(p, q) = (1/π) (cos⁻¹(d_p(p, q)) + cos⁻¹(d_q(p, q)))

12 Gradient Direction (cont.)
Example 1
–Let p = (3, 3), q = (3, 4), D(p) = (0, 1), D(q) = (0, 1)
–L(p, q) = (3, 4) − (3, 3) = (0, 1)
–d_p(p, q) = (0, 1) · (0, 1) = 1
–d_q(p, q) = (0, 1) · (0, 1) = 1
–f_D(p, q) = (1/π)(0 + 0) = 0
–Low cost!
Example 2
–Let p = (3, 3), q = (4, 3), D(p) = (0, 1), D(q) = (0, 1)
–L(p, q) = (4, 3) − (3, 3) = (1, 0)
–d_p(p, q) = (0, 1) · (1, 0) = 0
–d_q(p, q) = (1, 0) · (0, 1) = 0
–f_D(p, q) = (1/π)(π/2 + π/2) = 1
–High cost!
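The direction cost and both worked examples can be checked in a few lines. The link vector is normalized here so the dot products stay in arccos's domain for diagonal neighbors (an assumption; the slide's unit-length examples are unaffected):

```python
import numpy as np

def f_d(p, q, Dp, Dq):
    """Gradient-direction cost with the 1/pi scaling from the slides.
    Dp, Dq are unit vectors perpendicular to the gradient at p and q."""
    p, q, Dp, Dq = (np.asarray(v, dtype=float) for v in (p, q, Dp, Dq))
    # Orient the link so it agrees with D(p), then normalize it.
    link = q - p if np.dot(Dp, q - p) >= 0 else p - q
    link = link / np.linalg.norm(link)
    dp = np.clip(np.dot(Dp, link), -1.0, 1.0)
    dq = np.clip(np.dot(link, Dq), -1.0, 1.0)
    return (np.arccos(dp) + np.arccos(dq)) / np.pi
```

Example 1 (link parallel to D) yields cost 0; example 2 (link perpendicular to D) yields cost 1.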

13 Path Reference Creation
Differs from the method studied in class
–No stages
–Link cost between nodes changes
–No destination
Inputs
–Seed point, s
–Local cost function, l(q, r)

14 Path Reference Creation (cont.)
Data structures
–Sorted list of active pixels, L
–Neighborhood of pixel q, N(q)
–Flag map of expanded pixels, e(q)
–Cumulative cost from the seed point, g(q)
Output
–Path reference map, p

15 Path Reference Creation (cont.)
Start at the seed point
–Cost is adjusted for Euclidean distance
Put all neighbor pixels into the active list
–No other pixel has been expanded yet
Set pointers for all neighbors to the seed point
[Grid figure: cumulative costs of the seed’s neighbors; active list L = (4,8), (3,7), …]

16 Path Reference Creation (cont.)
Expand to the least-cost node
–Remove that node from the active list
Calculate the cumulative cost of all neighbor pixels
–Excludes the seed point
Change the pointers of neighbor pixels
–Only if the new cost is smaller and the pixel is not expanded
Add or replace neighbor pixels in the active list
–Do nothing if the pointer was not updated
[Grid figure: costs after the first expansion; active list L = (3,7), (2,8), (5,7), …]

17 Path Reference Creation (cont.)
Expand to the least-cost node
–Remove that node from the active list
Calculate the cumulative cost of all neighbor pixels
–Excludes expanded pixels
Change the pointers of neighbor pixels
–Only if the new cost is smaller and the pixel is not expanded
Add or replace neighbor pixels in the active list
–Do nothing if the pointer was not updated
[Grid figure: costs after further expansions; active list L = (2,6), (3,6), (4,9), …]

18 Path Reference Creation (cont.)
Finished when
–No more pixels to expand
–No more pixels on the active list
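The expansion loop on the slides above is essentially Dijkstra's algorithm over an 8-connected pixel graph. A minimal sketch, with the link cost simplified to the destination pixel's value scaled by Euclidean distance (an assumption standing in for the full l(p, q)):

```python
import heapq
import numpy as np

def path_map(cost, seed):
    """Expand outward from seed, always taking the least-cost active
    pixel, and record a back-pointer map. Returns (g, parent):
    cumulative costs and pointers toward the seed."""
    h, w = cost.shape
    g = np.full((h, w), np.inf)
    parent = {}
    expanded = np.zeros((h, w), dtype=bool)
    g[seed] = 0.0
    heap = [(0.0, seed)]                      # sorted active list L
    while heap:
        gq, q = heapq.heappop(heap)           # least-cost active pixel
        if expanded[q]:
            continue                          # stale heap entry
        expanded[q] = True
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                r = (q[0] + dy, q[1] + dx)
                if (dy == dx == 0 or not (0 <= r[0] < h and 0 <= r[1] < w)
                        or expanded[r]):
                    continue
                gr = gq + np.hypot(dy, dx) * cost[r]   # distance-scaled
                if gr < g[r]:                 # cheaper path found:
                    g[r] = gr                 # update cost and pointer
                    parent[r] = q
                    heapq.heappush(heap, (gr, r))
    return g, parent

def trace(parent, seed, goal):
    """Follow back-pointers from goal to the seed (the live-wire path)."""
    path = [goal]
    while path[-1] != seed:
        path.append(parent[path[-1]])
    return path[::-1]
```

On a uniform-cost grid the optimal path to a pixel two steps to the right is the straight horizontal path, with cumulative cost 2.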

19 ‘Live-Wire’ Action
The mouse constantly redraws the optimal path
–The wire ‘snaps’ to objects within an image
New seed points
–New seed points must be placed to surround an object
–Points ‘snap’ to the nearest edge

20 Cooling
Problem
–All seeds must be selected manually
–Complex objects may require many seed points
Solution
–Apply automatic seed points
–As the user wraps around the object, a common path is formed
–Let the common path ‘cool’ into a new seed point

21 Cooling (cont.)
Examples
–Manual seed points (bottom left)
–Automatic seed points via cooling (bottom right)

22 Interactive Dynamic Training
Problem
–Some objects have stronger edges than others
–If the desired edge is weaker than a nearby edge, the path ‘jumps’ over to the stronger edge
Solution
–Train the gradient magnitude cost to favor the weaker edge
–Use a sample of good path to train the gradient magnitude
–Update the sample as the path moves along the desired edge
–Allow the user to enable and disable training as needed
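One simple way to realize the training idea is to histogram the gradient magnitudes seen along the sampled good path and remap the cost so those values become cheap. This is only a sketch of the concept; it does not reproduce the paper's exact weighting scheme:

```python
import numpy as np

def trained_f_g(grad_mag, sample_values, bins=64):
    """Remap gradient magnitude to a cost that favors the values seen
    along a sampled 'good' path segment (sample_values must be
    non-empty). Frequently sampled values get cost near 0."""
    hist, edges = np.histogram(sample_values, bins=bins,
                               range=(0.0, grad_mag.max()))
    hist = hist / hist.max()                  # normalize counts to [0, 1]
    idx = np.clip(np.digitize(grad_mag, edges) - 1, 0, bins - 1)
    return 1.0 - hist[idx]                    # frequent values -> low cost
```

After training on samples near a weak edge's gradient value, that value's cost drops to 0 while an untrained strong edge keeps cost 1, so the path stays on the weaker edge.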

23 Interactive Dynamic Training
Examples
–Path segment jumps without training (top)
–Path segment follows the trained edge (middle)
Cost f_G
–Normal response without training (lower left)
–Trained response from the edge sample (lower right)

24 Image Composition
Need
–Source objects need to blend in with a new background
–Background may need to be in front of objects
Solution
–Allow 2-D transformations on source objects
–Use low-pass filters to blend the object into the destination scene
–Mask background objects so they appear in front of the source object

25 Image Composition (cont.)

26 Critique
Paper
–Describes a tool that selects image objects quickly and easily and provides the means to manipulate and paste them into different images
Abstract
–Brief mention of the need
–List of abilities for a tool called ‘Intelligent Scissors’
Introduction
–Defines the need
–Claims current methods are not enough
–Claims this tool addresses the problem
–Gives a small background on similar segmentation tools and their flaws

27 Critique (cont.)
Algorithms
–The paper does a good job of explaining how dynamic programming is used
–‘Cooling’ was explained well, but no suggested times were given
–The section on ‘Dynamic Training’ could be explained further for better understanding
–Spatial frequency and contrast matching need more explanation

28 Critique (cont.)
Dynamic Programming
–Used as the main driving force of this tool
–The authors spend a lot of time on the dynamic programming section, but not gratuitously
–Cost must be correctly attributed to the different edge features to take advantage of dynamic programming
–The optimal path is truly optimal, not just a local answer

29 Questions?

