Edge Detection

Edge detection: convert a 2D image into a set of curves. Extracts the salient features of the scene; more compact than pixels.

Origin of Edges. Edges are caused by a variety of factors: depth discontinuities, surface color discontinuities, illumination discontinuities, and surface normal discontinuities.

Edge detection How can you tell that a pixel is on an edge?

Profiles of image intensity edges

Edge detection: 1. Detection of short linear edge segments (edgels). 2. Aggregation of edgels into extended edges (possibly with a parametric description).

Edgel detection Difference operators Parametric-model matchers

Edge is where change occurs. In 1D, change is measured by the derivative: the biggest change is where the derivative has maximum magnitude, or where the 2nd derivative is zero.

Edge Detection. Possible detectors: gradient operators (Roberts, Prewitt, Sobel), gradient of Gaussian (Canny), Laplacian of Gaussian (Marr–Hildreth), and the facet-model-based edge detector (Haralick).

Edge Detection Using the Gradient. Definition of the gradient: ∇I = (∂I/∂x, ∂I/∂y). Its magnitude is |∇I| = sqrt((∂I/∂x)² + (∂I/∂y)²); to save computations, the magnitude of the gradient is usually approximated by |∇I| ≈ |∂I/∂x| + |∂I/∂y|.

Image gradient. The gradient of an image is ∇I = (∂I/∂x, ∂I/∂y). The gradient points in the direction of most rapid change in intensity. The gradient direction is given by θ = atan2(∂I/∂y, ∂I/∂x); how does this relate to the direction of the edge? The edge strength is given by the gradient magnitude |∇I|.

Edge Detection Using the Gradient. Properties of the gradient: the magnitude of the gradient provides information about the strength of the edge; the direction of the gradient is always perpendicular to the direction of the edge. Main idea: compute the derivatives in the x and y directions, find the gradient magnitude, and threshold the gradient magnitude.
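A minimal sketch of this three-step idea, assuming a grayscale NumPy array and central differences for the derivatives (the function name and the threshold value are illustrative, not from the slides):

```python
import numpy as np

def gradient_edges(image, threshold):
    """Toy gradient-based edge detector: derivatives -> magnitude -> threshold."""
    I = image.astype(float)
    # Central differences approximate dI/dx (along columns) and dI/dy (along rows).
    Ix = np.zeros_like(I)
    Iy = np.zeros_like(I)
    Ix[:, 1:-1] = (I[:, 2:] - I[:, :-2]) / 2.0
    Iy[1:-1, :] = (I[2:, :] - I[:-2, :]) / 2.0
    # Edge strength = gradient magnitude.
    magnitude = np.hypot(Ix, Iy)
    # Keep the pixels whose edge strength exceeds the threshold.
    return magnitude > threshold
```

For an 8-bit image, a threshold somewhere in the tens (e.g. gradient_edges(img, 30)) is a reasonable starting point; in practice the image would be smoothed first, as the later slides explain.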

Edge Detection Using the Gradient. Estimating the gradient with finite differences. Approximation by finite differences: ∂I/∂x ≈ I(x+1, y) − I(x, y) and ∂I/∂y ≈ I(x, y+1) − I(x, y).

Edge Detection Using the Gradient Using pixel-coordinate notation (remember: j corresponds to the x direction and i to the negative y direction):

Edge Detection Using the Gradient. Example: suppose we want to approximate the gradient magnitude at z_5. We can implement ∂I/∂x and ∂I/∂y using the following masks. Note: M_x is the approximation at (i, j + 1/2) and M_y is the approximation at (i + 1/2, j).

Edge Detection Using the Gradient. The Roberts edge detector: this approximation can be implemented by the following masks. Note: M_x and M_y are approximations at (i + 1/2, j + 1/2).

Edge Detection Using the Gradient. The Prewitt edge detector: consider the arrangement of pixels about the pixel (i, j). The partial derivatives can be computed by masks M_x and M_y, which are approximations at (i, j). The constant c reflects the emphasis given to pixels closer to the center of the mask. Setting c = 1, we get the Prewitt operator.

Edge Detection Using the Gradient. The Sobel edge detector: M_x and M_y are again approximations at (i, j). Setting c = 2, we get the Sobel operator (see the sketch below).
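To make the role of c concrete, here is a hedged sketch that builds the 3x3 masks for an arbitrary c (c = 1 gives Prewitt, c = 2 gives Sobel) and applies them with a small cross-correlation; the helper names are my own, and the sign/orientation of M_y depends on the chosen axis convention:

```python
import numpy as np

def directional_masks(c):
    """3x3 derivative masks M_x, M_y: c = 1 gives Prewitt, c = 2 gives Sobel."""
    Mx = np.array([[-1, 0, 1],
                   [-c, 0, c],
                   [-1, 0, 1]], dtype=float)
    # The sign of M_y depends on whether the row index runs with or against the y axis.
    My = np.array([[ 1,  c,  1],
                   [ 0,  0,  0],
                   [-1, -c, -1]], dtype=float)
    return Mx, My

def correlate3x3(image, mask):
    """Cross-correlation with a 3x3 mask, computed on the valid interior only."""
    I = image.astype(float)
    out = np.zeros_like(I)
    H, W = I.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out[1:-1, 1:-1] += mask[di + 1, dj + 1] * I[1 + di:H - 1 + di, 1 + dj:W - 1 + dj]
    return out

# Example: Sobel gradient magnitude.
# Mx, My = directional_masks(c=2)
# Gx, Gy = correlate3x3(img, Mx), correlate3x3(img, My)
# magnitude = np.hypot(Gx, Gy)
```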

Edge Detection Using the Gradient. Main steps in edge detection using masks: smooth the image, convolve it with M_x and M_y, compute the gradient magnitude (and direction), and threshold the magnitude.

Edge Detection Using the Gradient (an example using the Prewitt edge detector - don’t divide by 2)

Edge Detection Using the Gradient Example:

Edge Detection Using the Gradient Example – cont.:

Edge Detection Using the Gradient Example – cont.:

Edge Detection Using the Gradient

Isotropic property of gradient magnitude: the magnitude of the gradient is an isotropic operator (it detects edges in any direction!).

The discrete gradient How can we differentiate a digital image f[x,y]? Option 1: reconstruct a continuous image, then take gradient Option 2: take discrete derivative (finite difference) How would you implement this as a cross-correlation?

The Sobel operator. Better approximations of the derivatives exist; the Sobel operators below are very commonly used. The standard definition of the Sobel operator omits the 1/8 term: this doesn't make a difference for edge detection, but the 1/8 term is needed to get the right gradient value.

Basic Relationship of Pixels. Conventional indexing method: the figure shows the pixel (x, y) with the origin (0,0) at the corner of the image, together with its horizontal, vertical, and diagonal neighbors (x±1, y), (x, y±1), and (x±1, y±1).

Neighbors of a Pixel. The 4-neighbors of p = (x, y) are N_4(p) = {(x−1, y), (x+1, y), (x, y−1), (x, y+1)}. The neighborhood relation is used to tell which pixels are adjacent and is useful for analyzing regions. Note: q ∈ N_4(p) implies p ∈ N_4(q). The 4-neighborhood relation considers only vertical and horizontal neighbors.

Neighbors of a Pixel (cont.) The 8-neighbors of p are N_8(p) = {(x−1, y−1), (x, y−1), (x+1, y−1), (x−1, y), (x+1, y), (x−1, y+1), (x, y+1), (x+1, y+1)}. The 8-neighborhood relation considers all neighboring pixels.

Neighbors of a Pixel (cont.) The diagonal neighbors of p are N_D(p) = {(x−1, y−1), (x+1, y−1), (x−1, y+1), (x+1, y+1)}. The diagonal-neighborhood relation considers only the diagonal neighbor pixels.
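The three neighborhoods translate directly into code; a tiny sketch (boundary checks against the image size are deliberately omitted, and the function names are mine):

```python
def n4(x, y):
    """4-neighborhood N_4(p): horizontal and vertical neighbors of p = (x, y)."""
    return [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]

def nd(x, y):
    """Diagonal neighborhood N_D(p)."""
    return [(x - 1, y - 1), (x + 1, y - 1), (x - 1, y + 1), (x + 1, y + 1)]

def n8(x, y):
    """8-neighborhood N_8(p) = N_4(p) union N_D(p)."""
    return n4(x, y) + nd(x, y)
```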

Template, Window, and Mask Operation. Sometimes we need to manipulate values obtained from neighboring pixels. Example: how can we compute the average value of the pixels in a 3x3 region centered at a pixel z?

Template, Window, and Mask Operation (cont.) Step 1: Select only the needed pixels (the 3x3 neighborhood of pixel z).

Template, Window, and Mask Operation (cont.) Step 2: Multiply every pixel by 1/9 and then sum up the values. The 3x3 array of coefficients is called the mask (or window, or template).

Template, Window, and Mask Operation (cont.) Question: how do we compute the 3x3 average value at every pixel? Solution: imagine a 3x3 masking window that can be placed anywhere on the image.

Template, Window, and Mask Operation (cont.) Step 1: Move the window to the first location where we want to compute the average value, and select only the pixels inside the window. Step 2: Compute the average value of the subimage. Step 3: Place the result at the corresponding pixel of the output image. Step 4: Move the window to the next location and go to Step 2.

Template, Window, and Mask Operation (cont.) The 3x3 averaging method is one example of a mask operation, or spatial filtering. The mask operation has a corresponding mask (sometimes called a window or template) whose coefficients w(i, j) are multiplied with the pixel values. Example: the mask of the 3x3 moving-average filter has all coefficients equal to 1/9.

Template, Window, and Mask Operation (cont.) The mask operation at each point is performed by: 1. Move the reference point (center) of the mask to the location to be computed. 2. Compute the sum of products between the mask coefficients w(i, j) and the pixels of the subimage under the mask.

Template, Window, and Mask Operation (cont.) The mask operation on the whole image is given by: 1. Move the mask over the image to each location. 2. Compute the sum of products between the mask coefficients and the pixels of the subimage under the mask. 3. Store the result at the corresponding pixel of the output image. 4. Move the mask to the next location and go to step 2, until all pixel locations have been used. (A sketch of this procedure follows.)
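A direct, unoptimized translation of these four steps into NumPy (border pixels are handled here by replicating the edge values, which is one of several possible choices; the function name is illustrative):

```python
import numpy as np

def mask_operation(image, mask):
    """Slide the mask over the image; at each location store the sum of products
    of the mask coefficients and the pixels of the subimage under the mask."""
    I = image.astype(float)
    mh, mw = mask.shape
    ph, pw = mh // 2, mw // 2                       # half-sizes of the mask
    padded = np.pad(I, ((ph, ph), (pw, pw)), mode='edge')
    out = np.zeros_like(I)
    for i in range(I.shape[0]):
        for j in range(I.shape[1]):
            sub = padded[i:i + mh, j:j + mw]        # subimage under the mask
            out[i, j] = np.sum(sub * mask)          # sum of products
    return out

# 3x3 moving-average filter: all coefficients equal to 1/9.
# smoothed = mask_operation(img, np.ones((3, 3)) / 9.0)
```

As written this is a cross-correlation; for symmetric masks such as the moving average it coincides with convolution.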

Template, Window, and Mask Operation (cont.) Examples of masks: the Sobel operators, the 3x3 moving-average filter, and a 3x3 sharpening filter.

Gradient operators: (a) Roberts' cross operator; (b) 3x3 Prewitt operator; (c) Sobel operator; (d) 4x4 Prewitt operator.

Effects of noise Consider a single row or column of the image Plotting intensity as a function of position gives a signal Where is the edge?

Solution: smooth first. Look for peaks in d/dx (f ∗ g).

Derivative theorem of convolution: d/dx (f ∗ g) = f ∗ (d/dx g). This saves us one operation.
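A small numerical check of this identity on a made-up noisy 1D step edge (signal length, noise level, and σ are arbitrary choices):

```python
import numpy as np

sigma = 3.0
x = np.arange(-15, 16, dtype=float)
g = np.exp(-x**2 / (2 * sigma**2))
g /= g.sum()                            # normalized smoothing kernel g
dg = -x / sigma**2 * g                  # analytic derivative of the Gaussian, d/dx g

f = np.zeros(200)
f[100:] = 1.0                           # step edge
f += 0.05 * np.random.randn(200)        # additive noise

# Two operations: smooth, then differentiate ...
d_smoothed = np.gradient(np.convolve(f, g, mode='same'))
# ... versus one operation: convolve directly with the derivative of the Gaussian.
one_shot = np.convolve(f, dg, mode='same')
# Up to discretization error, both curves peak at the edge location (index 100).
```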

Laplacian of Gaussian. Consider the Laplacian of Gaussian operator. Where is the edge? At the zero-crossings of the bottom graph.

2D edge detection filters: the Laplacian of Gaussian, the Gaussian, and the derivative of Gaussian, where ∇² is the Laplacian operator: ∇²f = ∂²f/∂x² + ∂²f/∂y².
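For reference, a sketch that builds these three 2D kernels explicitly (kernel radius and σ are arbitrary; the LoG is given up to its usual 1/(2πσ²) normalization):

```python
import numpy as np

def gaussian_2d(sigma, radius):
    """2D Gaussian kernel, normalized to sum to 1."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()

def dgaussian_dx(sigma, radius):
    """x-derivative of the 2D Gaussian (unnormalized)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    return -x / sigma**2 * np.exp(-(x**2 + y**2) / (2 * sigma**2))

def log_2d(sigma, radius):
    """Laplacian of Gaussian ('Mexican hat') kernel, up to normalization."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    r2 = x**2 + y**2
    return (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
```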

Edge Detection. Practical issues: differential masks act as high-pass filters and tend to amplify noise; to reduce the effects of noise, first smooth with a low-pass filter. The noise-suppression/localization tradeoff: a larger filter reduces noise but worsens localization (i.e., it adds uncertainty to the location of the edge), and vice versa.

Edge Detection How should we choose the threshold?

Edge Detection Edge thinning and linking required to obtain good contours

Edge Detection. Criteria for optimal edge detection: Good detection: the optimal detector must minimize the probability of false positives (detecting spurious edges caused by noise) as well as of false negatives (missing real edges). Good localization: the edges detected must be as close as possible to the true edges. Single response constraint: the detector must return only one point for each true edge point; that is, it must minimize the number of local maxima around the true edge.

Edge Detection. Examples: the true edge versus detectors with poor robustness to noise, poor localization, and too many responses.

The Canny Edge Detector. This is probably the most widely used edge detector in computer vision. Canny has shown that the first derivative of the Gaussian closely approximates the operator that optimizes the product of signal-to-noise ratio and localization. His analysis is based on "step edges" corrupted by additive Gaussian noise.

The Canny Edge Detector

The derivative of the Gaussian: for G(x) = exp(−x²/(2σ²)) / (√(2π)·σ), we have G′(x) = −(x/σ²)·G(x).

The Canny Edge Detector Canny – smoothing and derivatives:

The Canny Edge Detector. Canny – gradient magnitude: the image and its gradient magnitude.

Edge Detection. Non-maxima suppression: to find the edge points, we need to find the local maxima of the gradient magnitude. Broad ridges must be thinned so that only the magnitudes at the points of greatest local change remain; all values along the direction of the gradient that are not peak values of a ridge are suppressed.

Edge Detection Non-maxima suppression – cont.

Edge Detection. Non-maxima suppression – cont. What are the neighbors? Look along the gradient direction (the normal to the edge). Quantization of the normal directions (see the sketch below).
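A minimal sketch of non-maxima suppression with the gradient direction quantized into four bins; the exact pairing of the diagonal bins with pixel offsets depends on whether the row axis is taken to point with or against the y axis, so treat the offsets below as one possible convention:

```python
import numpy as np

def nonmax_suppress(magnitude, angle):
    """Keep a pixel only if its gradient magnitude is >= that of both neighbors
    along the (quantized) gradient direction; angle is in radians."""
    H, W = magnitude.shape
    out = np.zeros_like(magnitude)
    ang = np.rad2deg(angle) % 180                      # fold directions into [0, 180)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:                 # ~0 degrees: horizontal gradient
                di, dj = 0, 1
            elif a < 67.5:                             # ~45 degrees
                di, dj = 1, 1
            elif a < 112.5:                            # ~90 degrees: vertical gradient
                di, dj = 1, 0
            else:                                      # ~135 degrees
                di, dj = 1, -1
            m = magnitude[i, j]
            if m >= magnitude[i + di, j + dj] and m >= magnitude[i - di, j - dj]:
                out[i, j] = m
    return out
```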

Edge Detection. Canny – non-maxima suppression: the gradient magnitude and its thinned version.

Edge Detection. Hysteresis thresholding / edge linking: the output of non-maxima suppression still contains local maxima created by noise. Can we get rid of them just by using a single threshold? If we set a low threshold, some noisy maxima will be accepted too; if we set a high threshold, true maxima might be missed (the value of a true maximum will fluctuate above and below the threshold, fragmenting the edge). A more effective scheme is to use two thresholds: a low threshold t_l and a high threshold t_h, usually with t_h ≈ 2·t_l.
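One common way to implement the two-threshold scheme is a flood fill from the strong pixels; a hedged sketch (the connectivity choice and function name are mine):

```python
import numpy as np
from collections import deque

def hysteresis(magnitude, t_low, t_high):
    """Keep pixels above t_high, plus any pixels above t_low that are connected
    to them (8-connectivity), which links weak edge fragments to strong ones."""
    strong = magnitude > t_high
    weak = magnitude > t_low
    H, W = magnitude.shape
    edges = strong.copy()
    queue = deque(zip(*np.nonzero(strong)))
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W and weak[ni, nj] and not edges[ni, nj]:
                    edges[ni, nj] = True
                    queue.append((ni, nj))
    return edges

# Typical usage, following the t_h ~ 2 t_l rule of thumb:
# final_edges = hysteresis(thinned_magnitude, t_low=10, t_high=20)
```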

Optimal Edge Detection: Canny Assume: Linear filtering Additive iid Gaussian noise Edge detector should have: Good Detection. Filter responds to edge, not noise. Good Localization: detected edge near true edge. Single Response: one per edge.

Optimal Edge Detection: Canny (continued). The optimal detector is approximately the derivative of a Gaussian. Detection/localization trade-off: more smoothing improves detection and hurts localization. This is what you might guess from (detect change) + (remove noise).

The Canny edge detector original image (Lena)

The Canny edge detector norm of the gradient

The Canny edge detector thresholding

The Canny edge detector thinning (non-maximum suppression)

Non-maximum suppression. Check whether the pixel is a local maximum along the gradient direction; this requires checking the interpolated pixels p and r.

Predicting the next edge point Assume the marked point is an edge point. Then we construct the tangent to the edge curve (which is normal to the gradient at that point) and use this to predict the next points (here either r or s). (Forsyth & Ponce)

Hysteresis. Check that the maximum gradient value is sufficiently large. Drop-outs? Use hysteresis: a high threshold to start edge curves and a low threshold to continue them.

Effect of σ (Gaussian kernel size). Canny applied to the original image: the choice of σ depends on the desired behavior; a large σ detects large-scale edges, a small σ detects fine features.

(Forsyth & Ponce) Scale Smoothing Eliminates noise edges. Makes edges smoother. Removes fine detail.

fine scale, high threshold

coarse scale, high threshold

coarse scale, low threshold

Scale space (Witkin 83). Properties of scale space (with Gaussian smoothing): edge position may shift with increasing scale (σ); two edges may merge with increasing scale; an edge may not split into two with increasing scale. (Figure: the signal filtered with larger and larger Gaussians, and the corresponding first-derivative peaks.)

Edge detection by subtraction original

Edge detection by subtraction smoothed (5x5 Gaussian)

Edge detection by subtraction: smoothed – original (scaled by 4, offset +128). Why does this work?

Gaussian − image as a filter: smoothed − original = (Gaussian − delta function) ∗ image, and the Gaussian-minus-delta-function kernel closely resembles a Laplacian of Gaussian.
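A quick 1D sanity check of this claim (the signal, σ, and kernel radius are arbitrary):

```python
import numpy as np

def gaussian_1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

g = gaussian_1d(sigma=2.0, radius=8)
delta = np.zeros_like(g)
delta[len(delta) // 2] = 1.0                 # discrete delta function (identity filter)
dog_like = g - delta                         # Gaussian minus delta: center-surround shape

f = np.random.rand(100)                      # stand-in for one image row
lhs = np.convolve(f, g, mode='same') - f     # smoothed minus original
rhs = np.convolve(f, dog_like, mode='same')  # a single filter doing the same job
assert np.allclose(lhs, rhs)                 # identical, by linearity of convolution
```

Plotting dog_like shows a negative center with a positive surround, i.e. roughly the Mexican-hat shape of a Laplacian of Gaussian.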

An edge is not a line... How can we detect lines?

Finding lines in an image Option 1: Search for the line at every possible position/orientation What is the cost of this operation? Option 2: Use a voting scheme: Hough transform

Finding lines in an image. Connection between image (x, y) space and Hough (m, b) space: a line y = m_0·x + b_0 in the image corresponds to a single point (m_0, b_0) in Hough space. To go from image space to Hough space: given a set of points (x, y), find all (m, b) such that y = mx + b.

Finding lines in an image (cont.) A line in the image corresponds to a point in Hough space. What does a point (x_0, y_0) in image space map to? Answer: the solutions of b = −x_0·m + y_0; this is a line in Hough space.

Hough transform algorithm. Typically a different parameterization is used: d is the perpendicular distance from the line to the origin, and θ is the angle this perpendicular makes with the x axis. Why?

Hough transform algorithm. With the (d, θ) parameterization, a line satisfies d = x·cos θ + y·sin θ. Basic Hough transform algorithm: 1. Initialize H[d, θ] = 0. 2. For each edge point I[x, y] in the image, for θ = 0 to 180: compute d = x·cos θ + y·sin θ and increment H[d, θ] += 1. 3. Find the value(s) of (d, θ) where H[d, θ] is maximum. 4. The detected line in the image is given by d = x·cos θ + y·sin θ. What's the running time (measured in number of votes)?
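A compact sketch of the accumulator, using 1-degree steps in θ and 1-pixel steps in d (these resolutions, and the function name, are arbitrary choices):

```python
import numpy as np

def hough_lines(edge_map, n_theta=180):
    """Vote in H[d, theta] for every edge pixel; strong lines are maxima of H
    and satisfy d = x*cos(theta) + y*sin(theta)."""
    H_img, W_img = edge_map.shape
    d_max = int(np.ceil(np.hypot(H_img, W_img)))
    thetas = np.deg2rad(np.arange(n_theta))            # theta = 0 .. 179 degrees
    H = np.zeros((2 * d_max + 1, n_theta), dtype=int)  # d may be negative, so shift it
    ys, xs = np.nonzero(edge_map)
    for x, y in zip(xs, ys):
        for t, theta in enumerate(thetas):
            d = int(round(x * np.cos(theta) + y * np.sin(theta)))
            H[d + d_max, t] += 1
    return H, d_max, thetas

# Strongest line:
# d_idx, t_idx = np.unravel_index(np.argmax(H), H.shape)
# d, theta = d_idx - d_max, thetas[t_idx]
```

Counting votes, the inner loop runs (number of edge points) × (number of θ samples) times, which answers the running-time question above; Extension 1 on the next slide removes the θ loop by using the gradient direction.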

Extensions. Extension 1: use the image gradient: 1. same; 2. for each edge point I[x, y] in the image, compute a unique (d, θ) based on the image gradient at (x, y) and increment H[d, θ] += 1; 3. same; 4. same. What's the running time measured in votes? Extension 2: give more votes to stronger edges. Extension 3: change the sampling of (d, θ) to give more or less resolution. Extension 4: the same procedure can be used with circles, squares, or any other shape.

Hough Transform for Curves. The H.T. can be generalized to detect any curve that can be expressed in parametric form y = f(x, a_1, a_2, …, a_p), where a_1, a_2, …, a_p are the parameters. The parameter space is p-dimensional, so the accumulator array is LARGE!

Generalizing the H.T. The H.T. can be used even if the curve does not have a simple analytic form! 1. Pick a reference point (x_c, y_c). 2. For i = 1, …, n: (a) draw a segment to the boundary point P_i; (b) measure its length r_i and its orientation α_i; (c) write the coordinates of (x_c, y_c) as a function of r_i and α_i: x_c = x_i + r_i·cos(α_i), y_c = y_i + r_i·sin(α_i); (d) record the gradient orientation φ_i at P_i. 3. Build a table with the data, indexed by φ_i.

Generalizing the H.T. Suppose there were m different gradient orientations (m ≤ n). The H.T. table is indexed by the gradient orientation φ_j; each entry stores all displacement pairs (r, α) observed for that orientation:
φ_1 : (r_1,1, α_1,1), (r_1,2, α_1,2), …, (r_1,n1, α_1,n1)
φ_2 : (r_2,1, α_2,1), …, (r_2,n2, α_2,n2)
…
φ_m : (r_m,1, α_m,1), …, (r_m,nm, α_m,nm)
As before, x_c = x_i + r_i·cos(α_i) and y_c = y_i + r_i·sin(α_i).

Generalized H.T. Algorithm: finds a rotated, scaled, and translated version of the curve. 1. Form an accumulator array A of possible reference points (x_c, y_c), scaling factors S, and rotation angles θ. 2. For each edge point (x, y) in the image: (a) compute the gradient orientation φ(x, y); (b) for each (r, α) corresponding to φ(x, y), and for each S and θ: x_c = x + r·S·cos(α + θ), y_c = y + r·S·sin(α + θ), and increment A(x_c, y_c, S, θ). 3. Find the maxima of A. (A simplified sketch follows.)
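For flavor, here is a deliberately simplified sketch that builds the R-table and votes for translation only (fixed S = 1 and θ = 0); all names are illustrative and gradient orientations are quantized to whole degrees:

```python
import numpy as np
from collections import defaultdict

def build_r_table(boundary_points, gradient_dirs, reference):
    """R-table: for each quantized gradient orientation phi, store the
    displacements (r, alpha) from the boundary point to the reference point."""
    xc, yc = reference
    table = defaultdict(list)
    for (x, y), phi in zip(boundary_points, gradient_dirs):
        r = np.hypot(xc - x, yc - y)
        alpha = np.arctan2(yc - y, xc - x)
        table[int(np.round(np.degrees(phi))) % 360].append((r, alpha))
    return table

def vote(edge_points, gradient_dirs, table, shape):
    """Translation-only voting: each edge point votes for candidate reference points."""
    A = np.zeros(shape, dtype=int)
    for (x, y), phi in zip(edge_points, gradient_dirs):
        for r, alpha in table.get(int(np.round(np.degrees(phi))) % 360, []):
            xc = int(round(x + r * np.cos(alpha)))
            yc = int(round(y + r * np.sin(alpha)))
            if 0 <= yc < shape[0] and 0 <= xc < shape[1]:
                A[yc, xc] += 1
    return A        # maxima of A are candidate locations of the reference point
```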

H.T. Summary. The H.T. is a "voting" scheme: points vote for a set of parameters describing a line or curve. The more votes a particular set receives, the more evidence that the corresponding curve is present in the image. It can detect MULTIPLE curves in one shot. The computational cost increases with the number of parameters describing the curve.

Corner detection. Corners contain more edges than lines. A point on a line is hard to match.

Corners contain more edges than lines. A corner is easier to match.

Edge Detectors Tend to Fail at Corners

Finding Corners. Intuition: right at a corner, the gradient is ill defined; near a corner, the gradient has two different values.

Formula for Finding Corners. We look at the matrix C = Σ [ I_x², I_x·I_y ; I_x·I_y, I_y² ], where the sum is taken over a small region (the hypothetical corner), I_x is the gradient with respect to x, and I_y is the gradient with respect to y. The matrix is symmetric. WHY THIS?

First, consider the case where C is diagonal, C = [ λ_1, 0 ; 0, λ_2 ]. This means all gradients in the neighborhood are (k, 0) or (0, c) or (0, 0) (or the off-diagonal terms cancel). What is the region like if: λ_1 = 0 and λ_2 = 0? λ_1 = 0 and λ_2 is large? λ_1 and λ_2 are both large?

General case: from linear algebra, because C is symmetric it follows that C = R⁻¹ [ λ_1, 0 ; 0, λ_2 ] R, with R a rotation matrix. So every case is like one on the last slide.

So, to detect corners: filter the image; compute the magnitude of the gradient everywhere; construct C in a window around each pixel; use linear algebra to find λ_1 and λ_2; if they are both big, we have a corner.
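A slow but direct sketch of this recipe, using the smaller eigenvalue as the corner score (a Shi–Tomasi-style choice on my part; the slides leave the exact "both λ big" test open, and a practical implementation would weight the window with a Gaussian):

```python
import numpy as np

def corner_score(image, window=5):
    """Smaller eigenvalue of C = sum over a window of [[Ix^2, IxIy], [IxIy, Iy^2]];
    the score is large only where both eigenvalues are large, i.e. at corners."""
    I = image.astype(float)
    Ix = np.zeros_like(I); Iy = np.zeros_like(I)
    Ix[:, 1:-1] = (I[:, 2:] - I[:, :-2]) / 2.0      # central differences
    Iy[1:-1, :] = (I[2:, :] - I[:-2, :]) / 2.0
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    H, W = I.shape
    r = window // 2
    score = np.zeros_like(I)
    for i in range(r, H - r):
        for j in range(r, W - r):
            C = np.array([[Ixx[i - r:i + r + 1, j - r:j + r + 1].sum(),
                           Ixy[i - r:i + r + 1, j - r:j + r + 1].sum()],
                          [Ixy[i - r:i + r + 1, j - r:j + r + 1].sum(),
                           Iyy[i - r:i + r + 1, j - r:j + r + 1].sum()]])
            score[i, j] = np.linalg.eigvalsh(C).min()   # smaller eigenvalue lambda_min
    return score

# corners = corner_score(img) > some_threshold
```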

Convolution-based Edge Detection. Why do edge detection? What is an edge? What is a good edge? How do you find an edge?

Laplacian

Why the Laplacian? Because it is zero wherever the intensity in the image plane is constant or changes linearly (i.e., the rate of intensity change is zero or constant).

LoG filters = Gaussian prefilter + Laplacian.

Mexican Hat (LoG Kernel)

The LoG kernel

Oriented Filters are steerable

Changing Scale at 0 Degrees

Changing Phase at 0 degrees