Image Segmentation Chapter 10

Image Segmentation Image segmentation subdivides an image into its constituent regions or objects. The level of detail to which the subdivision is carried depends on the problem being solved; that is, segmentation should stop when the objects or regions of interest in an application have been detected. Segmentation accuracy determines the eventual success or failure of computerized analysis procedures. For this reason, special care should be taken to improve the probability of accurate segmentation.

Image Segmentation Most of the segmentation algorithms used in this chapter are based on one of two basic properties of intensity values: discontinuity and similarity. In the first category, the approach is to partition the image based on abrupt changes in intensity, such as edges. The principal approaches in the second category are based on partitioning an image into regions that are similar according to a set of predefined criteria. Thresholding, region growing, and region splitting and merging are examples of methods in this category. In this chapter, we discuss and illustrate a number of these approaches and show that improvements in segmentation performance can be achieved by combining methods from distinct categories, such as techniques in which edge detection is combined with thresholding.

Fundamentals Let R represent the entire spatial region occupied by an image. Image segmentation partitions R into n subregions R1, R2, ..., Rn, such that the following five conditions hold:
(a) R1 ∪ R2 ∪ ... ∪ Rn = R.
(b) Ri is a connected set, for i = 1, 2, ..., n.
(c) Ri ∩ Rj = Ø for all i and j, i ≠ j.
(d) Q(Ri) = TRUE for i = 1, 2, ..., n.
(e) Q(Ri ∪ Rj) = FALSE for any adjacent regions Ri and Rj.
Here, Q(Rk) is a logical predicate defined over the points in set Rk, and Ø is the null set. The symbols ∪ and ∩ represent set union and intersection, respectively. Two regions Ri and Rj are said to be adjacent if their union forms a connected set.

Fundamentals Condition (a) indicates that the segmentation must be complete; that is, every pixel must be in a region. Condition (b) requires that points in a region be connected in some predefined sense (e.g., the points must be 4- or 8-connected). Condition (c) indicates that the regions must be disjoint. Condition (d) deals with the properties that must be satisfied by the pixels in a segmented region; for example, Q(Ri) = TRUE if all pixels in Ri have the same intensity level. Condition (e) indicates that two adjacent regions Ri and Rj must be different in the sense of predicate Q.
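
As an illustration, here is a minimal sketch (in Python with NumPy/SciPy, which this text does not prescribe; the label-image representation is an assumption) that checks conditions (a) through (c) for a segmentation stored as an integer label image:

```python
import numpy as np
from scipy import ndimage as ndi

def check_segmentation(labels):
    """Check conditions (a)-(c) for a label image in which 0 means
    'unassigned' and each positive integer identifies one region Ri."""
    # (a) completeness: every pixel must belong to some region
    complete = bool(np.all(labels > 0))
    # (b) connectivity: each region must form a single connected component
    connected = all(
        ndi.label(labels == r)[1] == 1          # component count of region r
        for r in np.unique(labels) if r > 0
    )
    # (c) disjointness holds by construction: each pixel stores one label
    return complete, connected

labels = np.array([[1, 1, 2],
                   [1, 2, 2],
                   [1, 1, 2]])
print(check_segmentation(labels))   # (True, True)
```

Conditions (d) and (e) depend on the choice of predicate Q, so they must be checked per application.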

Fundamentals Thus we see that the fundamental problem in segmentation is to partition an image into regions that satisfy the preceding conditions. Segmentation algorithms for monochrome images generally are based on one of two basic categories dealing with properties of intensity values: discontinuity and similarity. In the first category, the assumption is that boundaries of regions are sufficiently different from each other, and from the background, to allow boundary detection based on local discontinuities in intensity. Edge-based segmentation is the principal approach used in this category. Region-based segmentation, the second category, partitions an image into regions that are similar according to a set of predefined criteria.

Fig 10.1 Image (a) shows a region of constant intensity superimposed on a darker background, also of constant intensity. These two regions comprise the overall image. Image (b) shows the result of computing the boundary of the inner region based on intensity discontinuities. Points on the inside and outside of the boundary are black (zero) because there are no discontinuities in intensity in those regions. To segment the image, we assign one level (say, white) to the pixels on or interior to the boundary, and another level (say, black) to all points exterior to the boundary. Image (c) shows the result of such a procedure. The predicate of condition (d) here is: if a pixel is on or inside the boundary, label it white; otherwise, label it black. We see that this predicate is TRUE for the points labeled black and white in Fig. 10.1(c).

Fig 10.1 Similarly, the two segmented regions (object and background) satisfy condition (e). The next three images illustrate region-based segmentation. Image (d) is similar to (a), but the intensities of the inner region form a textured pattern. Image (e) shows the result of computing the edges of this image; clearly, it is difficult to identify a unique boundary because many of the non-zero intensity changes are connected to the boundary, so edge-based segmentation is not a suitable approach. To solve this problem, we need a predicate that differentiates between textured and constant regions. The standard deviation is used for this purpose, because it is non-zero in the textured region (so the predicate is TRUE there) and zero otherwise. Finally, note that these results also satisfy the five conditions stated at the beginning of this section.
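
A minimal sketch of this predicate (the window size is an illustrative choice; in practice a small tolerance would replace the exact zero test for noisy images):

```python
import numpy as np
from scipy import ndimage as ndi

def texture_predicate(image, size=5):
    """Per-pixel standard deviation over a size x size window:
    non-zero marks textured regions, zero marks constant regions."""
    f = image.astype(float)
    mean = ndi.uniform_filter(f, size)            # local mean
    mean_sq = ndi.uniform_filter(f * f, size)     # local mean of squares
    std = np.sqrt(np.maximum(mean_sq - mean**2, 0.0))
    return std > 0                                # Q = TRUE where textured
```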

Point, Line, and Edge Detection The focus in this section is on segmentation methods that are based on detecting sharp, local changes in intensity. The three types of image features in which we are interested are isolated points, lines, and edges. Edge pixels are pixels at which the intensity of an image function changes abruptly (suddenly), and edges or edge segments are sets of connected edge pixels. Edge detectors are local image processing methods designed to detect edge pixels. A line may be viewed as an edge segment in which the intensity of the background on either side of the line is either much higher or much lower than the intensity of the line pixels.

Background Local averaging smooths an image. Given that averaging is analogous to integration, it is logical to expect that abrupt, local changes in intensity can be detected using derivatives; first- and second-order derivatives are best suited for this purpose. Any approximation of the first derivative:
Must be zero in areas of constant intensity.
Must be nonzero at the onset of an intensity step or ramp.
Must be nonzero at points along an intensity ramp.
Any approximation of the second derivative:
Must be zero in areas of constant intensity.
Must be nonzero at the onset and end of an intensity step or ramp.
Must be zero along intensity ramps of constant slope.
Both approximations are illustrated in the sketch below. To highlight the fundamental similarities and differences between first and second derivatives in the context of image processing, consider Fig. 10.2.
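
A minimal sketch of the standard finite-difference approximations that satisfy these requirements, f'(x) ≈ f(x+1) − f(x) and f''(x) ≈ f(x+1) + f(x−1) − 2f(x), applied to a 1-D scan line (the profile values are illustrative):

```python
import numpy as np

# An illustrative 1-D intensity profile: a constant area, a downward ramp,
# another constant area, then an upward step.
f = np.array([6, 6, 6, 5, 4, 3, 2, 1, 1, 1, 6, 6, 6], dtype=float)

first = f[1:] - f[:-1]                  # f'(x) ~ f(x+1) - f(x)
second = f[2:] + f[:-2] - 2 * f[1:-1]   # f''(x) ~ f(x+1) + f(x-1) - 2f(x)

print(first)   # nonzero all along the ramp and at the step; zero in constant areas
print(second)  # nonzero only at the onset and end of the ramp, with a
               # double (positive/negative) response at the step
```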

Fig 10.2.

Fig 10.2. Image (a) shows an image that contains various solid objects, a line, and a single noise point. Image (b) shows a horizontal intensity profile (scan line) of the image, approximately through its center, including the isolated point. Transitions in intensity between the solid objects and the background along the scan line show two types of edges: ramp edges (on the left) and step edges (on the right); intensity transitions involving thin objects such as lines often are referred to as roof edges. Image (c) shows a simplification of the profile, with just enough points to make it possible to analyze numerically how the first- and second-order derivatives behave as they encounter a noise point, a line, and the edges of objects.

1st and 2nd order derivative summary First-order derivatives generally produce thicker edges in an image. Second-order derivatives have a stronger response to fine detail, such as thin lines, isolated points, and noise. Second-order derivatives produce a double-edge response at ramp and step transitions in intensity. The sign of the second derivative can be used to determine whether a transition into an edge is from light to dark or from dark to light. The approach of choice for computing first and second derivatives at every pixel location in an image is to use spatial filters.

Detection of Isolated Points Based on the previous conclusions, we know that point detection should be based on the second derivative. This implies using the Laplacian filter.
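
A minimal sketch of Laplacian-based point detection (the kernel is the standard 8-neighbour Laplacian; the threshold T is an assumption to tune per image):

```python
import numpy as np
from scipy import ndimage as ndi

LAPLACIAN = np.array([[1,  1, 1],
                      [1, -8, 1],
                      [1,  1, 1]], dtype=float)

def detect_points(image, T):
    """Mark pixels whose Laplacian response magnitude is at least T;
    isolated points produce the strongest second-derivative response."""
    response = ndi.convolve(image.astype(float), LAPLACIAN)
    return np.abs(response) >= T
```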

Line Detection The next level of complexity is line detection. Based on the previous conclusions, we know that for line detection we can expect second derivatives to result in a stronger response and to produce thinner lines than first derivatives; thus, we can use the same Laplacian mask as for point detection. Also keep in mind that the double-line effect of the second derivative must be handled properly. The following example illustrates the procedure in Fig. 10.5.

Line Detection

Fig.10.5. Image (a) shows a 486 x 486 binary portion of an image. Image (b) shows its Laplacian image. Since the Laplacian contains negative values, scaling is necessary for display. As the magnified section shows, mid gray represents zero, darker gray shades represent negative values, and lighter shades represent positive values. The double-line effect is clearly visible in the magnified region. At first, it might appear that the negative values can be handled simply by taking the absolute value of the Laplacian image. However, as image (c) shows, this approach doubles the thickness of the lines. A more suitable approach is to use only the positive values of the Laplacian image. As image (d) shows, this approach results in thinner lines, which are considerably more useful.
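
A minimal sketch contrasting the two options discussed above (reusing the 8-neighbour Laplacian kernel; display scaling is omitted):

```python
import numpy as np
from scipy import ndimage as ndi

LAPLACIAN = np.array([[1,  1, 1],
                      [1, -8, 1],
                      [1,  1, 1]], dtype=float)

def line_responses(image):
    response = ndi.convolve(image.astype(float), LAPLACIAN)
    doubled = np.abs(response)         # absolute value: doubles line thickness
    thin = np.maximum(response, 0.0)   # positive values only: thinner lines
    return doubled, thin
```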

Edge Models Edge detection is the approach used most frequently for segmenting images based on abrupt (local) changes in intensity. Edge models are classified according to their intensity profiles: A step edge involves a transition between two intensity levels occurring ideally over the distance of 1 pixel. Step edges occur, for example, in images generated by a computer for use in areas such as solid modeling and animation. These clean, ideal edges can occur over the distance of 1 pixel, provided that no additional processing (such as smoothing) is used to make them look "real".

Edge Models Digital step edges are used frequently as edge models in algorithm development. For example, the Canny edge detection algorithm was derived using a step-edge model. In practice, digital images have edges that are blurred and noisy, with the degree of blurring determined by limitations in the focusing mechanism (e.g., lenses in the case of optical images). In such situations, edges are more closely modeled as having an intensity ramp profile. A third model of an edge is the so-called roof edge, having the characteristics illustrated in Fig. 10.8(c). Roof edges are models of lines through a region, with the base (width) of the roof edge determined by the thickness and sharpness of the line.
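
To make the three profiles concrete, here is a minimal sketch that generates idealized 1-D versions of each (the lengths, slopes, and levels are arbitrary illustrative choices):

```python
import numpy as np

x = np.arange(20)
step = np.where(x < 10, 0.0, 1.0)                    # step edge: 1-pixel transition
ramp = np.clip((x - 7) / 6.0, 0.0, 1.0)              # ramp edge: gradual transition
roof = np.maximum(1.0 - np.abs(x - 10) / 4.0, 0.0)   # roof edge: a thin line's profile
```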

Edge Models

Edge Models We conclude the edge models section by noting that there are three fundamental steps performed in edge detection: 1. Image smoothing for noise reduction: this step reduces noise in the image, since derivative operators are sensitive to it. 2. Detection of edge points: a local operation that extracts from the image all points that are potential candidates to become edge points. 3. Edge localization: the objective of this step is to select from the candidate edge points only those that are true members of the set of points comprising an edge. A sketch of this pipeline follows.
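
A minimal sketch of the three-step pipeline (Gaussian smoothing plus a Sobel gradient; sigma and the threshold T are illustrative values to tune per image):

```python
import numpy as np
from scipy import ndimage as ndi

def edge_map(image, sigma=2.0, T=50.0):
    # 1. Image smoothing for noise reduction
    smoothed = ndi.gaussian_filter(image.astype(float), sigma)
    # 2. Detection of candidate edge points via the gradient magnitude
    gx = ndi.sobel(smoothed, axis=1)      # horizontal derivative
    gy = ndi.sobel(smoothed, axis=0)      # vertical derivative
    magnitude = np.hypot(gx, gy)
    # 3. Edge localization: keep only strong responses
    return magnitude >= T
```

A full detector such as Canny refines step 3 with nonmaxima suppression and hysteresis thresholding; here a single threshold stands in for localization.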

Edge Detection

Edge Detection

Edge Detection

Thresholding In this section, images are partitioned directly into regions based on intensity values. The basics of intensity thresholding: suppose that the intensity histogram in Fig. 10.35(a) corresponds to an image, f(x,y), composed of light objects on a dark background, in such a way that object and background pixels have intensity values grouped into two dominant modes. One obvious way to extract the objects from the background is to select a threshold, T, that separates these modes. Then, any point (x,y) in the image at which f(x,y) > T is called an object point; otherwise, the point is called a background point. A sketch of this procedure follows.
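
A minimal sketch of this procedure, including the classic iterative way to choose T (the convergence tolerance is an illustrative value; the routine assumes a grayscale image whose histogram has two modes):

```python
import numpy as np

def global_threshold(image, tol=0.5):
    """Iteratively place T midway between the two class means,
    then label each pixel as object (True) or background (False)."""
    f = image.astype(float)
    T = f.mean()                      # initial estimate of the threshold
    while True:
        m1 = f[f > T].mean()          # mean of the candidate object pixels
        m2 = f[f <= T].mean()         # mean of the candidate background pixels
        T_new = 0.5 * (m1 + m2)
        if abs(T_new - T) < tol:
            return f > T_new          # f(x,y) > T marks an object point
        T = T_new
```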

Segmentation Using Morphological Watersheds: Separating touching objects in an image is one of the more difficult image processing operations, and the watershed transform is often applied to this problem. The watershed transform finds "watershed ridge lines" in an image by treating it as a surface where light pixels are high and dark pixels are low. Segmentation using the watershed transform works better if you can identify, or "mark," foreground objects and background locations. Refer to: http://www.mathworks.com/products/image/demos.html?file=/products/demos/shipping/images/ipexwatershed.html
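
The linked demo uses MATLAB's Image Processing Toolbox. As a rough equivalent, here is a minimal marker-based watershed sketch in Python with SciPy and scikit-image (a named substitute, not the demo's code; min_distance is an illustrative value, and the input is assumed to be a boolean mask of touching objects):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_objects(binary):
    """Separate touching objects by flooding the negated distance
    transform from one marker per object."""
    distance = ndi.distance_transform_edt(binary)   # high at object centers
    coords = peak_local_max(distance, min_distance=10, labels=binary)
    seeds = np.zeros(distance.shape, dtype=bool)
    seeds[tuple(coords.T)] = True
    markers, _ = ndi.label(seeds)                   # one integer label per seed
    # Flooding -distance turns object centers into basins; the watershed
    # ridge lines between basins split the touching objects.
    return watershed(-distance, markers, mask=binary)
```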