
1 Edge Detection & Image Segmentation Dr. Md. Altab Hossain Associate Professor Dept. of Computer Science & Engineering, RU 1

2 2 Edge Detection

3 Preprocess Image acquisition, restoration, and enhancement Intermediate process Image segmentation and feature extraction High level process Image interpretation and recognition Element of Image Analysis 3

4 1. Similarity properties of pixels inside the object are used to group pixels into the same set. 2. Discontinuity of pixel properties at the boundary between object and background is used to distinguish between pixels belonging to the object and those of background. Discontinuity: Intensity change at boundary Point, Line, Edge Similarity: Internal pixels share the same intensity Image Attributes for Image Segmentation 4

5 Point Detection  We can use Laplacian masks for point detection.  A Laplacian mask has the largest coefficient at its center, while the neighboring coefficients have the opposite sign.  Such a mask gives a high response to objects that have a shape similar to the mask, such as isolated points.  Notice that the sum of all coefficients of the mask is equal to zero. This is required because the response of the filter must be zero inside a constant-intensity area. 5
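The point-detection scheme above can be sketched in a few lines of Python. The mask coefficients follow the zero-sum Laplacian described on the slide; the image and threshold values are illustrative assumptions:

```python
# Sketch of point detection: Laplacian mask response + thresholding.

def convolve3x3(image, mask):
    """Correlate a 3x3 mask with a 2D list image (border pixels left 0)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(mask[a][b] * image[i - 1 + a][j - 1 + b]
                            for a in range(3) for b in range(3))
    return out

# Laplacian mask: largest coefficient at the center, opposite-sign
# neighbours, coefficients summing to zero.
LAPLACIAN = [[-1, -1, -1],
             [-1,  8, -1],
             [-1, -1, -1]]

def detect_points(image, threshold):
    response = convolve3x3(image, LAPLACIAN)
    return [(i, j) for i, row in enumerate(response)
            for j, r in enumerate(row) if abs(r) > threshold]

# A single bright pixel on a constant background is an isolated point.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 200
print(detect_points(img, threshold=1000))  # → [(2, 2)]
```

A high threshold (here 1000) matters: the bright pixel also produces smaller opposite-sign responses at its eight neighbours, which a low threshold would wrongly report as points.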

6 Point Detection X-ray image of the turbine blade with porosity: Laplacian image, result after thresholding, and location of the porosity.  Point detection can be done by applying a thresholding function to the absolute value of the Laplacian response. (Images from Rafael C. Gonzalez and Richard E. Wood, Digital Image Processing, 2nd Edition.) 6

7 Line Detection  Similar to point detection, line detection can be performed using a mask whose shape looks like a part of a line.  A line in a digital image can run in several directions.  For simple line detection, the 4 most commonly used directions are horizontal, +45 degree, vertical, and –45 degree. (Images from Rafael C. Gonzalez and Richard E. Wood, Digital Image Processing, 2nd Edition.) Line detection masks 7
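The four directional masks can be written out directly. A minimal sketch follows; the test image and the `response_at` helper are illustrative additions, not from the slides:

```python
# The four classic 3x3 line-detection masks and their responses.

LINE_MASKS = {
    "horizontal": [[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]],
    "+45":        [[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]],
    "vertical":   [[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]],
    "-45":        [[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]],
}

def response_at(image, mask, i, j):
    """Absolute mask response centred on pixel (i, j)."""
    return abs(sum(mask[a][b] * image[i - 1 + a][j - 1 + b]
                   for a in range(3) for b in range(3)))

# A one-pixel-wide vertical line through column 2.
img = [[100 if j == 2 else 0 for j in range(5)] for _ in range(5)]
scores = {name: m and response_at(img, m, 2, 2) for name, m in LINE_MASKS.items()}
print(max(scores, key=scores.get))  # → vertical
```

The vertical mask scores 600 on this image while the other three score 0, which is exactly the directional selectivity the slide describes.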

8 Line Detection Example Binary wire-bond mask image; absolute value of the result after processing with the –45 degree line detector; result after thresholding. (Images from Rafael C. Gonzalez and Richard E. Wood, Digital Image Processing, 2nd Edition.) Notice that the detector is most sensitive to –45 degree lines. 8

9 Definition of Edges  Edges are significant local changes of intensity in an image.  Edges typically occur on the boundary between two different regions in an image. 9

10 What Causes Intensity Changes?  Geometric events  surface orientation (boundary) discontinuities  depth discontinuities  color and texture discontinuities  Non-geometric events  illumination changes  specularities  shadows  inter-reflections depth discontinuity color discontinuity illumination discontinuity surface normal discontinuity 10

11 Why is Edge Detection Useful?  Important features can be extracted from the edges of an image (e.g., corners, lines, curves).  These features are used by higher-level computer vision algorithms (e.g., recognition). 11

12 Where are the edges? 12

13 13  Edge normal: unit vector in the direction of maximum intensity change.  Edge direction: unit vector perpendicular to the edge normal.  Edge position or center: the image position at which the edge is located.  Edge strength: related to the local image contrast along the normal. Edge Descriptors

14 Modeling Intensity Changes  Step edge: the image intensity abruptly changes from one value on one side of the discontinuity to a different value on the opposite side.  Ramp edge: a step edge where the intensity change is not instantaneous but occurs over a finite distance. 14

15 Modeling Intensity Changes (cont’d)  Ridge edge: the image intensity abruptly changes value but then returns to the starting value within some short distance (i.e., usually generated by lines).  Roof edge: a ridge edge where the intensity change is not instantaneous but occurs over a finite distance (i.e., usually generated by the intersection of two surfaces). 15

16 Main Steps in Edge Detection (1) Smoothing: suppress as much noise as possible, without destroying true edges. (2) Enhancement: apply differentiation to enhance the quality of edges (i.e., sharpening). (3) Thresholding: determine which edge pixels should be discarded as noise and which should be retained (i.e., threshold the edge magnitude). (4) Localization: determine the exact edge location; sub-pixel resolution might be required for some applications to estimate the location of an edge to better than the spacing between pixels. 16
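The four steps can be sketched on a 1D signal. The [1, 2, 1]/4 smoothing kernel, the threshold value, and the test signal below are illustrative assumptions:

```python
# Sketch of the four edge-detection steps on a 1D signal.

def detect_edges_1d(signal, threshold):
    n = len(signal)
    # 1. Smoothing: [1, 2, 1]/4 kernel suppresses noise (ends copied).
    s = list(signal)
    for i in range(1, n - 1):
        s[i] = 0.25 * signal[i - 1] + 0.5 * signal[i] + 0.25 * signal[i + 1]
    # 2. Enhancement: first difference approximates the derivative.
    d = [s[i + 1] - s[i] for i in range(n - 1)]
    # 3. Thresholding: keep positions where |derivative| is large.
    candidates = [i for i, v in enumerate(d) if abs(v) > threshold]
    # 4. Localization: keep only local maxima of |derivative|.
    return [i for i in candidates
            if abs(d[i]) >= max(abs(d[j])
                                for j in range(max(0, i - 1),
                                               min(len(d), i + 2)))]

step = [10, 10, 10, 10, 50, 50, 50, 50]
print(detect_edges_1d(step, threshold=5))  # → [3], the step between index 3 and 4
```

Without step 4, indices 2, 3 and 4 would all pass the threshold; the local-maximum check gives the single response that step 4 is for.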

17 Edge Detection Using Derivatives  Often, points that lie on an edge are detected by: (1) Detecting the local maxima or minima of the first derivative. (2) Detecting the zero-crossings of the second derivative. [Figure: 1st and 2nd derivative profiles across an edge.] 17

18 Smoothed Step Edge and Its Derivatives [Figure: gray-level profile of a smoothed step edge; the 1st derivative has a maximum at the edge; the 2nd derivative crosses zero at the edge, positive on one side and negative on the other.] 18

19 Image Derivatives  How can we differentiate a digital image?  Option 1: reconstruct a continuous image, f(x,y), then compute the derivative.  Option 2: take discrete derivative (i.e., finite differences) Consider this case first! 19

20 Edge Detection Using First Derivative 1D functions: for an (upward) step edge centered at x, the forward difference f′(x) ≈ f(x+1) − f(x) approximates the first derivative, but it is not centered at x. 20

21 Edge Detection Using Second Derivative  Approximate finding the maxima/minima of the gradient magnitude by finding places where the second derivative is zero.  We can’t always find discrete pixels where the second derivative is exactly zero – look for zero-crossings instead. 21

22 Edge Detection Using Second Derivative (cont’d) 1D functions: differencing the forward differences gives f″(x) ≈ f(x+2) − 2 f(x+1) + f(x), which is centered at x+1. Replace x+1 with x (i.e., centered at x): f″(x) ≈ f(x+1) − 2 f(x) + f(x−1). 22
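A short sketch of zero-crossing detection with the centered second difference f″(x) ≈ f(x+1) − 2 f(x) + f(x−1); the test signal (an illustrative smoothed step) is an assumption:

```python
# Sketch: step-edge detection via zero-crossings of the second difference.

def second_difference(signal):
    return [signal[i + 1] - 2 * signal[i] + signal[i - 1]
            for i in range(1, len(signal) - 1)]

def zero_crossings(d2):
    # A sign change between neighbouring samples marks an edge.
    return [i for i in range(len(d2) - 1) if d2[i] * d2[i + 1] < 0]

smoothed_step = [10, 10, 10, 20, 40, 50, 50, 50]
d2 = second_difference(smoothed_step)
print(d2)                  # → [0, 10, 10, -10, -10, 0]
print(zero_crossings(d2))  # → [2]: sign change at the centre of the edge
```

The second difference is positive where the profile bends upward, negative where it bends back down, and the sign change between those lobes localizes the edge.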

23 Edge Detection Using Second Derivative (cont’d) 23

24 Edge Detection Using Second Derivative (cont’d) 24

25 Noise Effect on Image Edges  First column: images and gray-level profiles of a ramp edge corrupted by random Gaussian noise of mean 0 and σ = 0.0, 0.1, 1.0 and 10.0, respectively.  Second column: first-derivative images and gray-level profiles.  Third column: second-derivative images and gray-level profiles. 25

26 Edge Detection Using First Derivative (Gradient)  The first derivative of an image can be computed using the gradient: 2D functions: ∇f = (∂f/∂x, ∂f/∂y). 26

27  The gradient is a vector which has magnitude and direction:  Magnitude: indicates edge strength; |∇f| = sqrt((∂f/∂x)² + (∂f/∂y)²), or (approximation) |∂f/∂x| + |∂f/∂y|.  Direction: indicates the direction of maximum intensity change, i.e., perpendicular to the edge direction. Gradient Representation 27

28 Approximate Gradient  Approximate the gradient using finite differences: ∂f/∂x ≈ f(x+1, y) − f(x, y), ∂f/∂y ≈ f(x, y+1) − f(x, y). 28
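A minimal sketch of these finite-difference approximations; the image and the chosen pixel are illustrative:

```python
import math

# Sketch of the gradient via finite differences:
# f_x ≈ f(x+1, y) - f(x, y), f_y ≈ f(x, y+1) - f(x, y).

def gradient(image, i, j):
    gx = image[i][j + 1] - image[i][j]   # horizontal difference
    gy = image[i + 1][j] - image[i][j]   # vertical difference
    magnitude = math.hypot(gx, gy)       # edge strength
    direction = math.atan2(gy, gx)       # radians; normal to the edge
    return magnitude, direction

# Vertical step edge: left half dark, right half bright.
img = [[0, 0, 100, 100] for _ in range(4)]
mag, ang = gradient(img, 1, 1)
print(round(mag), round(math.degrees(ang)))  # → 100 0: gradient points across the edge
```

The direction of 0 degrees (pointing right, across the vertical edge) illustrates that the gradient is perpendicular to the edge itself.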

29 Approximate Gradient (cont’d)  Cartesian vs pixel-coordinates: - j corresponds to x direction - i to -y direction 29

30 Approximating Gradient (cont’d)  We can implement ∂f/∂x and ∂f/∂y using 2-point difference masks; these give good approximations at (x+1/2, y) and (x, y+1/2) respectively, rather than at (x, y). 30

31 Approximating Gradient (cont’d)  A different approximation of the gradient:  3 x 3 neighborhood: 31

32 Approximating Gradient (cont’d)  and can be implemented using the following masks: 32

33 Another Approximation  Consider the arrangement of pixels about the pixel (i, j) in a 3 x 3 neighborhood:  The partial derivatives can be computed by:  The constant c determines the emphasis given to pixels closer to the center of the mask. 33

34 Prewitt Operator  Setting c = 1, we get the Prewitt operator: 34

35 Sobel Operator  Setting c = 2, we get the Sobel operator: 35
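The Sobel masks (c = 2) can be applied directly; the vertical step-edge image below is an illustrative assumption:

```python
# Sobel masks (c = 2) applied at a single pixel.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1],
           [ 0,  0,  0],
           [ 1,  2,  1]]

def apply_mask(image, mask, i, j):
    """Mask response centred on pixel (i, j)."""
    return sum(mask[a][b] * image[i - 1 + a][j - 1 + b]
               for a in range(3) for b in range(3))

img = [[0, 0, 100, 100] for _ in range(4)]   # vertical step edge
gx = apply_mask(img, SOBEL_X, 1, 1)
gy = apply_mask(img, SOBEL_Y, 1, 1)
print(gx, gy, abs(gx) + abs(gy))  # → 400 0 400: a strong vertical edge
```

The |gx| + |gy| sum is the cheap magnitude approximation mentioned earlier; the center row/column weight of 2 gives nearby pixels extra emphasis and some noise smoothing compared with Prewitt.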

36 Edge Detection Steps Using Gradient 36

37 An Example 37

38 An Example (cont’d) 38

39 Comparison of Gradient-based Operators 39

40 40 The Canny Edge Detector

41 41 The Canny Edge Detector  Canny – smoothing and derivatives:

42 42 The Canny Edge Detector  Canny – gradient magnitude: imagegradient magnitude

43 Practical Issues  Choice of threshold. [Figure: gradient magnitude thresholded with a low threshold vs. a high threshold.] 43

44 Practical Issues (cont’d) Edge thinning and linking. 44

45 Criteria for Optimal Edge Detection  (1) Good detection  Minimize the probability of false positives (i.e., spurious edges).  Minimize the probability of false negatives (i.e., missing real edges).  (2) Good localization  Detected edges must be as close as possible to the true edges.  (3) Single response  Minimize the number of local maxima around the true edge. 45

46 46 Edge Contour Extraction  Two main categories of methods:  local methods (extend edges by seeking the most "compatible" candidate edge in a neighborhood).  global methods (more computationally expensive; domain knowledge can be incorporated in their cost function).  Edge detectors typically produce short, disjoint edge segments.  These segments are generally of little use until they are aggregated into extended edges.  We assume that edge thinning has already been done (e.g., non-maxima suppression).

47 47 Edge Contour Extraction

48 48 Local Processing Methods  Edge linking using neighboring pixels

49 49 Local Processing Methods  Contour extraction using heuristic search  A more comprehensive approach to contour extraction is based on graph searching.  Graph representation of edge points:  (1) Edge points at position p i correspond to graph nodes.  (2) The nodes are connected to each other if local edge linking rules are satisfied.

50 50 Local Processing Methods  Contour extraction using heuristic search – cont.  The generation of a contour (if any) from pixel p A to pixel p B corresponds to the generation of a minimum-cost path in the directed graph.  A cost function for a path connecting nodes p 1 = p A to p N = p B could be defined as follows:  Finding a minimum-cost path is not trivial in terms of computation.

51 51 Local Processing Methods  Contour extraction using dynamic programming  Dynamic programming is an optimization method that searches for optima of functions in which not all the variables are simultaneously interrelated.  It subdivides a problem recursively into smaller sub-problems that may need to be solved in the future, solving each sub-problem – proceeding from the smaller ones to the larger ones – and storing the solutions in a table that can be looked up when the need arises.  Principle of optimality (applied to the case of graph searching): the optimal path between two nodes p A and p B can be split into two optimal sub-paths p A p i and p i p B for any p i lying on the optimal path p A p B.

52 52 Global Processing Methods  If the gaps between pixels are very large, local processing methods are not effective.  Global methods are more effective in this case !!  Hough Transform can be used to determine whether points lie on a curve of a specified shape (model-based method).  Deformable Models (Snakes) can be used to extract the boundaries of objects having arbitrary shapes.  Grouping can be used to decide which groups of features are likely to be part of the same object.

53 53 Image segmentation

54 Definition of Segmentation  Segmentation: Image segmentation refers to the partition of an image into a set of regions that have a strong correlation with objects or areas of the real world.  Complete segmentation (high-level): Region(s) correspond(s) uniquely with image object(s);  not an easy task;  requires domain-specific knowledge; the context is known and a priori knowledge is available and used for segmentation.  Partial segmentation (low-level): Regions do not always correspond directly with image objects. The image is divided into separate regions that are homogeneous with respect to a chosen property such as brightness, color, texture, motion, etc. 54

55 Example-Complete Segmentation  The image is segmented into two regions: face and background.  We know that the human face is of some certain color.  Several other heuristics using the knowledge of human face can be used.  Small holes and isolated regions are eliminated. 55

56 Example-Partial Segmentation  Regions do not directly correspond to objects, but to regions that are similar in color  Needs further processing 56

57 57 Segmentation Criteria  Segmentation of an image I into a set of regions S i should satisfy: 1. ⋃ S i = S: the partition covers the whole image. 2. S i ∩ S j = ∅ for i ≠ j: no regions intersect. 3. ∀ S i : P(S i ) = true: the homogeneity predicate is satisfied by each region. 4. P(S i ∪ S j ) = false for i ≠ j, S i adjacent to S j : the union of adjacent regions does not satisfy it.

58 58 Image segmentation  So, all we have to do is to define and implement the similarity predicate.  But, what do we want to be similar in each region?  Is there any property that will cause the regions to be meaningful objects?  Example approaches:  Histogram-based  Clustering-based  Region growing  Split-and-merge  Graph-based

59 Histogram-based segmentation Global Thresholding: choose a threshold T that separates the object from the background. 59
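One common way to choose T automatically is an iterative (isodata-style) scheme: split the pixels at T, average the two class means, and repeat until T stabilizes. This particular selection rule is an illustrative addition, not taken from the slides:

```python
# Sketch of iterative (isodata-style) global threshold selection.

def iterative_threshold(pixels, eps=0.5):
    t = sum(pixels) / len(pixels)          # start from the global mean
    while True:
        low = [p for p in pixels if p <= t]
        high = [p for p in pixels if p > t]
        new_t = 0.5 * (sum(low) / len(low) + sum(high) / len(high))
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

# Bimodal data: dark background around 20, bright object around 200.
pixels = [18, 20, 22, 19, 21] * 10 + [198, 200, 202, 199, 201] * 3
print(iterative_threshold(pixels))  # → 110.0, midway between the two modes
```

On a histogram with two well-separated modes, T converges to the midpoint of the class means; Otsu's method is the other standard automatic choice.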

60  Simple thresholding is not always possible:  Many objects at different gray levels.  Variations in background gray level.  Noise in image. Histogram-based segmentation 60

61 Local Thresholding (4 Thresholds)  Divide the image into regions. Perform thresholding independently in each region. 61

62 Adaptive Thresholding  Every pixel in the image is thresholded according to the histogram of the pixel’s neighborhood. 62
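A sketch of adaptive thresholding that compares each pixel with the mean of its (2r+1)×(2r+1) neighborhood; the window radius, offset, and test image (a bright spot on a background whose brightness drifts left to right) are illustrative assumptions:

```python
# Sketch of adaptive thresholding against a per-pixel neighbourhood mean.

def adaptive_threshold(image, r=1, offset=0):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Neighbourhood clipped at the image borders.
            block = [image[a][b]
                     for a in range(max(0, i - r), min(h, i + r + 1))
                     for b in range(max(0, j - r), min(w, j + r + 1))]
            mean = sum(block) / len(block)
            out[i][j] = 1 if image[i][j] > mean + offset else 0
    return out

# A global threshold would struggle with this drifting background.
img = [[10, 20, 30, 40],
       [10, 90, 30, 40],
       [10, 20, 30, 40]]
print(adaptive_threshold(img, r=1, offset=10))
# → [[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
```

The `offset` margin keeps pixels on the smooth background gradient from being marked, so only the genuinely anomalous bright pixel survives.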

63  Any feature vector that can be associated with a pixel can be used to group pixels into clusters. Once pixels have been grouped into clusters, it is easy to find connected regions using connected components labeling. Clustering-based segmentation  K-means algorithm can be used for clustering 63
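A minimal 1D K-means sketch on pixel intensities (k = 2); the initial centers and data are illustrative, and a production version would guard against empty clusters:

```python
# Sketch of k-means (k = 2) on scalar pixel intensities.

def kmeans_1d(values, c1, c2, iters=10):
    for _ in range(iters):
        # Assignment step: each value joins its nearest center.
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        # Update step: centers move to the cluster means.
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return c1, c2

values = [12, 15, 14, 200, 205, 198, 13, 202]
c1, c2 = kmeans_1d(values, c1=0, c2=255)
print(round(c1), round(c2))  # → 14 201: the two intensity clusters
```

With a color image the same loop runs on 3-vectors instead of scalars; connected-components labeling then turns each cluster's pixels into spatial regions, as the slide notes.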

64 64 Clustering-based segmentation K-means clustering of color.

65 65 Clustering-based segmentation  Clustering can also be used with other features (e.g., texture) in addition to color. Original Images Color Regions Texture Regions

66 Region Growing 66

67 67 Region growing  Usually a statistical test is used to decide which pixels can be added to a region.  The region is a population with similar statistics.  Use a statistical test to see if a neighbor on the border fits into the region population.  Let R be the N-pixel region grown so far, with pixel gray levels x 1, …, x N, and let p be a neighboring pixel with gray level y.  Define the mean X̄ and scatter S² (sample variance) by X̄ = (1/N) Σ x i and S² = (1/N) Σ (x i − X̄)².

68 68 Region growing  The T statistic for a pixel p is defined by T = [ (N−1)N / (N+1) ]^1/2 · |y − X̄| / S.  It has a T N−1 distribution if all the pixels in R and the test pixel p are independent and identically distributed Gaussians (i.i.d. assumption).

69 69 Region growing  For the T distribution, statistical tables give us the probability Pr(T ≤ t) for a given degrees of freedom and a confidence level. From this, pick a suitable threshold t.  If the computed T ≤ t for desired confidence level, add p to region R and update the mean and scatter using p.  If T is too high, the value p is not likely to have arisen from the population of pixels in R. Start a new region.  Many other statistical and distance-based methods have also been proposed for region growing.
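The accept/reject test above can be sketched as follows. The form of the statistic used here, T = [(N−1)N/(N+1)]^1/2 · |y − X̄| / S, is a standard choice for this region-growing test but should be checked against the course text; the region data and threshold t are illustrative:

```python
import math

# Sketch of the region-growing T test: does candidate gray level y
# plausibly come from the same population as region R?

def t_statistic(region, y):
    n = len(region)
    mean = sum(region) / n                               # X̄
    s2 = sum((x - mean) ** 2 for x in region) / n        # scatter S²
    return math.sqrt((n - 1) * n / (n + 1)) * abs(y - mean) / math.sqrt(s2)

region = [100, 102, 98, 101, 99]       # grown region so far (mean 100)
for candidate in (103, 140):
    T = t_statistic(region, candidate)
    print(candidate, round(T, 2), "add" if T <= 6.0 else "reject")
# → 103 3.87 add
# → 140 51.64 reject
```

After adding an accepted pixel, the mean and scatter are updated before testing the next border neighbor, exactly as the slide describes.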

70 70 Region growing image segmentation

71 Region growing 71

72 Split and Merge Segmentation  2 Stage Algorithm:  Stage 1: Split  Split image into regions using a Quad Tree representation.  Stage 2: Merge  Merge "leaves" of the Quad Tree which are neighboring and "similar". 72

73 Quad Tree Representation 73

74 Split and Merge Example 74

75 Graph Based Segmentation  Segmentation can be viewed as a graph partitioning problem 75

76 Forming the Graph  Each pixel is treated as a node in a graph.  Each pixel has an edge to its neighbours (e.g., 8 adjacent neighbours in 2D).  The edge weight w(i, j) is a function of the similarity between nodes i and j.  Nodes joined by high-weight edges are probably part of the same element.  Nodes joined by low-weight edges are probably part of different elements. 76

77 Images as graphs  A node for every pixel.  An edge between every pair of pixels (or every pair of “sufficiently close” pixels).  Each edge (i, j) is weighted by the affinity or similarity w ij of the two nodes. 77

78 Constructing a Graph from an Image  Two kinds of vertices  Two kinds of edges  Cut-Segmentation 78

79 What is a “cut”?  A graph G = (V, E) can be partitioned into two disjoint sets A and B by simply removing the edges connecting the two parts.  The degree of dissimilarity between these two pieces can be computed as the total weight of the edges that have been removed. In graph-theoretic language it is called the cut: cut(A, B) = Σ u∈A, v∈B w(u, v). 79
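The cut cost can be computed directly from an edge-weight table; the tiny weighted graph below is an illustrative assumption:

```python
# Sketch of cut(A, B): total weight of edges crossing the partition.

def cut_cost(weights, A, B):
    return sum(w for (u, v), w in weights.items()
               if (u in A and v in B) or (u in B and v in A))

# Undirected graph: two tightly-coupled pairs joined by one weak edge.
weights = {("a", "b"): 0.9, ("c", "d"): 0.8, ("b", "c"): 0.1}
print(cut_cost(weights, A={"a", "b"}, B={"c", "d"}))  # → 0.1: the cheap cut
```

Cutting the single weak edge costs 0.1, while separating either tight pair would cost at least 0.8; minimizing this quantity is what graph-cut segmentation does (normalized cut then corrects its bias toward cutting off small sets).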

80 Example cut 80

81 Graph Cut  Each cut corresponds to a cost, cut(A, B): the sum of the weights of the edges that have been removed. 81

82 Graph Based Algorithms  Graph Cut-Wu and Leahy 1993  Segmentation by normalized-cut 82

83 Interactive Image Segmentation Magic Wand (198?) Intelligent Scissors [Mortensen and Barrett, 1995] GrabCut [Rother et al., 2004] Graph Cuts [Boykov and Jolly, 2001] LazySnapping [Li et al., 2004] 83

84 Thank You 84

