Presentation transcript:

Joonas Vanninen, Antonio Palomino Alarcos

 The detection of regions of interest is one of the objectives of biomedical image analysis
 The characteristics of the regions are examined later in detail
 Segmentation is the process of dividing an image into different parts based on
 Discontinuity
 Similarity
 Many of the methods can be used more generally in the detection of features

 If the gray levels of the objects of interest are known, the image can be thresholded to include only those levels
 Thresholding alone doesn't generally produce uniform regions
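A minimal NumPy sketch of this kind of band thresholding, assuming the gray-level bounds lo and hi of the objects of interest are known (both names are illustrative, not from the source):

```python
import numpy as np

def threshold_band(image: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Return a binary mask of the pixels whose gray level lies in [lo, hi]."""
    return (image >= lo) & (image <= hi)

# Example: mask = threshold_band(img, 100, 150)
# As noted above, the mask is rarely a set of uniform, connected regions.
```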

 Can be useful in noise removal and in the analysis of particles
 Isolated points can be detected with a Laplacian-type convolution mask (the original mask figure is lost; the standard 3×3 mask is shown in the sketch below), and the response can then be thresholded
 Straight lines can be detected with similar directional masks
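A sketch of the mask-and-threshold step with SciPy; the threshold T is a free parameter, and the mask shown is the standard point-detection mask, which may differ from the one on the original slide:

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 point-detection (Laplacian-type) mask.
POINT_MASK = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]])

def detect_points(image: np.ndarray, T: float) -> np.ndarray:
    """Mark pixels whose absolute mask response exceeds the threshold T."""
    response = convolve(image.astype(float), POINT_MASK)
    return np.abs(response) >= T
```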

 An edge:
 A large change in the gray level
 The change is in a particular direction, depending upon the orientation of the edge
 Can be measured, for example, with derivatives or gradients

 Derivatives and gradients can be approximated by differences, computed with convolution masks
 Prewitt operators (the mask figures are lost; these are the standard 3×3 Prewitt masks):
   Gx = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
   Gy = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

 Sobel operators have larger weights for the pixels in the same row/column as the center pixel:
   Gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
   Gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
 Roberts operators use 2×2 neighborhoods to compute cross-differences

 With two masks we get a vector value for the gradient (the equation figure is lost; the standard relations are): magnitude |∇f| = sqrt(Gx² + Gy²), direction θ = arctan(Gy / Gx)
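A sketch combining the last few slides: convolve with the two Sobel masks to get a gradient vector per pixel, then its magnitude and direction:

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def gradient(image: np.ndarray):
    gx = convolve(image.astype(float), SOBEL_X)
    gy = convolve(image.astype(float), SOBEL_Y)
    magnitude = np.hypot(gx, gy)      # edge strength
    direction = np.arctan2(gy, gx)    # orientation of the edge normal
    return magnitude, direction
```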

 The Laplacian is a second-order difference operator
 Omnidirectional: sensitive to edges in all directions, but cannot detect the direction of an edge
 Sensitive to noise, since there is no averaging
 Produces positive and negative values on either side of each edge
 The zero crossings in between can be used to find the local maxima of the first-order gradients

 The noise in an image can be reduced by first convolving it with a Gaussian G before applying the Laplacian
 The order of the operators can be changed (the equation figure is lost; the standard identity is): ∇²(G * f) = (∇²G) * f

 The result is called the Laplacian of Gaussian operator, LoG
 Often referred to as the Mexican hat function
 Can be approximated by the difference between two Gaussians, the DoG operator
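A sketch with SciPy's built-in operators; the value of σ and the DoG scale ratio k ≈ 1.6 are illustrative choices, not from the source:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, gaussian_filter

def log_response(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Laplacian of Gaussian (LoG) response."""
    return gaussian_laplace(image.astype(float), sigma)

def dog_response(image: np.ndarray, sigma: float = 2.0, k: float = 1.6) -> np.ndarray:
    """Difference of Gaussians (DoG) approximation to the LoG."""
    f = image.astype(float)
    return gaussian_filter(f, sigma) - gaussian_filter(f, k * sigma)
```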

 Uses zero crossings of the image convolved with the LoG operator to represent edges
 Problems:
 If the edges are not well separated, zero crossings may also represent local minima (false zero crossings)
 The edge localization may be poor
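A minimal sketch of locating zero crossings by testing for sign changes between horizontal and vertical neighbors of the LoG response (a simple scheme chosen for illustration; it does not filter out the false crossings mentioned above):

```python
import numpy as np

def zero_crossings(log_img: np.ndarray) -> np.ndarray:
    """Mark pixels where the LoG response changes sign along rows or columns."""
    s = np.sign(log_img)
    zc = np.zeros(log_img.shape, dtype=bool)
    zc[:, :-1] |= (s[:, :-1] * s[:, 1:]) < 0   # horizontal sign changes
    zc[:-1, :] |= (s[:-1, :] * s[1:, :]) < 0   # vertical sign changes
    return zc
```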

 Different structures are visible at different scales, controlled by the parameter σ of the Gaussian
 Ideally, an edge would be seen at as many scales as possible
 A stability map measures the persistence of boundaries over a range of filter scales

 The Canny detector is designed as the ideal detector for step-type edges corrupted by additive white noise, according to three criteria:
 Detection: no false or missing edges
 Localization: detected edges are spatially near the real ones
 A single output for a single edge
 The image is convolved with a Gaussian
 The gradient is estimated for each pixel: the direction of the gradient is the normal of the edge, and the amplitude of the gradient is the strength of the edge

 Non-maximum suppression: gradient values that are not local maxima are set to zero
 The gradients are hysteresis thresholded: a pixel is considered an edge pixel if
 it has a gradient value larger than the higher threshold, or
 it has a gradient value larger than the lower threshold and is spatially connected to another edge pixel
 The zero crossings of the second derivative in the direction of the normal can also be used; this allows sub-pixel accuracy
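scikit-image implements this pipeline (Gaussian smoothing, gradient estimation, non-maximum suppression, hysteresis thresholding); a hedged sketch, where σ and both thresholds are illustrative values:

```python
import numpy as np
from skimage import feature

def canny_edges(image: np.ndarray) -> np.ndarray:
    """Binary edge map from the Canny detector."""
    return feature.canny(image, sigma=2.0,
                         low_threshold=0.1, high_threshold=0.3)
```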

 Highpass filters in the Fourier domain can be used to find edges
 High-frequency noise → use a bandpass filter instead
 The LoG filter combines a high-frequency-emphasizing Laplacian with a Gaussian lowpass filter
 Use of the frequency domain may be computationally advantageous if the LoG is specified with a large array (large σ)
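A hedged sketch of bandpass filtering in the Fourier domain, using the difference of two Gaussian lowpass transfer functions (a DoG-style band; the transfer function form and both σ values are assumptions for illustration, not the book's filter):

```python
import numpy as np

def fourier_bandpass(image: np.ndarray,
                     sigma_lo: float = 1.0, sigma_hi: float = 4.0) -> np.ndarray:
    """Filter in the frequency domain with a Gaussian bandpass."""
    f = np.fft.fft2(image.astype(float))
    ny, nx = image.shape
    u = np.fft.fftfreq(nx)          # horizontal frequencies (cycles/pixel)
    v = np.fft.fftfreq(ny)          # vertical frequencies
    U, V = np.meshgrid(u, v)
    r2 = U**2 + V**2
    # Difference of two Gaussian lowpass transfer functions passes a band.
    H = (np.exp(-2 * (np.pi * sigma_lo)**2 * r2)
         - np.exp(-2 * (np.pi * sigma_hi)**2 * r2))
    return np.real(np.fft.ifft2(f * H))
```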

 Detected edge pixels are usually not linked into contours
 The similarity of edge pixels can be measured by:
 The strength of the gradient
 The direction of the gradient
 The most similar pixels should be used to link edges to each other

 Dividing the image into regions that could correspond to ROIs is an important prerequisite for applying image analysis techniques
 Computer analysis of images usually starts with segmentation
 It reduces pixel data to region-based information about the objects present in the image

 Thresholding techniques
 Assumption: all pixels whose values lie within a certain range belong to the same class
 The threshold may be determined from the histogram of the image (a sketch of one common choice follows this list)
 Boundary-based methods
 Assumption: pixel values change rapidly at the boundaries between regions
 Intensity discontinuities lying at the boundaries between objects and background must be detected
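One common histogram-based choice is Otsu's method, named here as a concrete example (the slides do not specify which method). A minimal sketch with scikit-image:

```python
import numpy as np
from skimage import filters

def histogram_threshold(image: np.ndarray) -> np.ndarray:
    """Binarize using a threshold derived from the image histogram (Otsu)."""
    t = filters.threshold_otsu(image)
    return image > t
```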

 Region-based methods
 Assumption: neighboring pixels within a region have similar values
 May be divided into two groups:
 Region splitting and merging
 Region growing
 Hybrid techniques
 Combine boundary and region criteria

 Noise modifies the gray levels of the two classes into distributions represented by Gaussian PDFs p1(z) and p2(z), with prior probabilities P1 and P2
 The probability of erroneous classification for a threshold T (with class 1 below and class 2 above the threshold) is E(T) = P2 ∫₋∞^T p2(z) dz + P1 ∫_T^∞ p1(z) dz
 Differentiating with respect to T, equating the result to zero, and simplifying with σ1 = σ2 = σ gives the optimal threshold: T = (μ1 + μ2)/2 + [σ² / (μ1 − μ2)] ln(P2 / P1)
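A direct transcription of the equal-variance result above; the class means, common σ, and priors are assumed to be known or estimated from the histogram:

```python
import numpy as np

def optimal_threshold(mu1: float, mu2: float, sigma: float,
                      p1: float, p2: float) -> float:
    """Optimal threshold for two Gaussian classes with equal variance."""
    return 0.5 * (mu1 + mu2) + (sigma**2 / (mu1 - mu2)) * np.log(p2 / p1)
```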

 The method partitions R (the entire space of the given image) into n subregions R1, R2, …, Rn such that (the condition figures are lost; these are the standard requirements):
 ∪ᵢ Ri = R
 Ri is a connected region for i = 1, 2, …, n
 Ri ∩ Rj = ∅ for all i ≠ j
 P(Ri) = TRUE for i = 1, 2, …, n
 P(Ri ∪ Rj) = FALSE for adjacent regions Ri and Rj
 Results are highly dependent upon the procedure used to select the seed pixels and the inclusion criteria used

 Initially, divide the given image arbitrarily into a set of disjoint quadrants
 If F(Ri) = FALSE for any quadrant, subdivide that quadrant into subquadrants
 Iterate the procedure until no further changes are made
 The splitting procedure can result in adjacent regions that are similar, so a merging step is required: merge adjacent regions Ri and Rk if F(Ri ∪ Rk) = TRUE
 Iterate until no further merging is possible

 A neighboring pixel f(m, n) is appended to the region if (the condition figure is lost; the implied test is) |f(m, n) − (seed value)| ≤ T
 T ≡ 'additive tolerance level'
 Problem: the size and shape of the region depend on the seed pixel selected
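A minimal sketch of seed-based growing with the additive tolerance level T, using 4-connectivity and comparison against the original seed value (the variants on the next slide change only the reference value):

```python
import numpy as np
from collections import deque

def region_grow(image: np.ndarray, seed: tuple, T: float) -> np.ndarray:
    """Grow a region from `seed` (row, col) with additive tolerance T."""
    seed_val = float(image[seed])
    region = np.zeros(image.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-connectivity
            m, n = r + dr, c + dc
            if (0 <= m < image.shape[0] and 0 <= n < image.shape[1]
                    and not region[m, n]
                    and abs(float(image[m, n]) - seed_val) <= T):
                region[m, n] = True
                queue.append((m, n))
    return region
```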

 Running-mean algorithm: the new pixel is compared with the mean gray level (running mean) of the region being grown
 'Current center pixel' method: after a pixel C is appended to the region, its 4- (or 8-) connected neighbours are checked for inclusion with the same tolerance test, comparing against f(C)

 A relative difference, based upon a 'multiplicative tolerance level' τ, could be employed (the condition figure is lost; the implied test is) |f(m, n) − μ| / μ ≤ τ, where
 f(m, n) ≡ gray level of the pixel being checked
 μ ≡ the reference value: the original seed pixel value, the current center pixel value, or the running-mean gray level
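A sketch of the multiplicative tolerance test together with the running-mean bookkeeping it is often paired with (function names are illustrative):

```python
def accept(pixel_value: float, mu: float, tau: float) -> bool:
    """Multiplicative tolerance test against the reference value mu."""
    return abs(pixel_value - mu) / mu <= tau

def update_running_mean(mu: float, region_size: int, pixel_value: float) -> float:
    """Incorporate a newly accepted pixel into the region's running mean."""
    return (mu * region_size + pixel_value) / (region_size + 1)
```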

 The previous methods present difficulties in the selection of the range of the tolerance value
 Possible solution: make use of some characteristics of the human visual system (HVS)
 A new parameter, the 'just-noticeable difference' (JND), is used: JND = L · C_T
 L ≡ background luminance
 C_T ≡ threshold contrast

 Applying this method requires determining the JND as a function of the background gray level
 This relationship can be determined from psychophysical experiments

1. Start with a 4-connected neighbor-pixel grouping; the condition figure is lost, but the implied test is |f(m, n) − μR| ≤ JND
2. Remove small regions
3. Merge connected regions if any two neighboring regions meet the JND condition
4. Iterate the procedure until no neighboring region satisfies the JND condition
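A sketch of the inclusion test in step 1, assuming a hypothetical lookup function jnd(L) obtained from the psychophysical data described above (both names are assumptions for illustration):

```python
from typing import Callable

def jnd_condition(pixel_value: float, region_mean: float,
                  jnd: Callable[[float], float]) -> bool:
    """Accept a pixel if its difference from the region mean is below the
    just-noticeable difference at that background level."""
    return abs(pixel_value - region_mean) <= jnd(region_mean)
```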

 Hough domain: straight lines y = mx + c are characterized by the pair of parameters (m, c)
 m is the slope
 c is the intercept (position)
 Disadvantage: m and c have unbounded ranges
 Parametric (normal) representation: ρ = x cos θ + y sin θ
 θ limited to [0, π] (or to [0, 2π])
 ρ limited by the size of the image
 The limits of (ρ, θ) are affected by the choice of the origin

 If the normal parameters of a line are (ρ0, θ0), every point (x, y) on it satisfies ρ0 = x cos θ0 + y sin θ0
 Derived properties:
 A point in (x, y) space corresponds to a sinusoidal curve in (ρ, θ) space
 A point in (ρ, θ) space corresponds to a straight line in (x, y) space
 Points on the same straight line in (x, y) space correspond to curves through a common point in (ρ, θ) space
 Points on the same curve in the parameter space correspond to lines through a common point in (x, y) space

1. Discretize the (ρ, θ) space into accumulator cells by quantizing ρ and θ
2. For each pixel with a value of 1, increase by one every accumulator cell along the curve ρ = x(n) cos θ + y(n) sin θ
3. The coordinates of the points of intersection of the curves in the parameter space provide the parameters of the lines
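A minimal accumulator sketch for a binary edge image, following steps 1 and 2; the quantization resolutions n_theta and n_rho are illustrative choices:

```python
import numpy as np

def hough_lines(edges: np.ndarray, n_theta: int = 180, n_rho: int = 400):
    """Accumulate votes in the (rho, theta) space for a binary edge image."""
    ny, nx = edges.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(nx, ny)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # Each edge pixel traces one sinusoid through the parameter space.
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# Peaks of `acc` (step 3) give the (rho, theta) parameters of detected lines.
```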

[Figure omitted. Image: Wikipedia]

 Any circle in (x, y) space, (x − a)² + (y − b)² = c², is represented by a single point in the 3D (a, b, c) parameter space
 The points along the perimeter of a circle describe a circular cone in (a, b, c) space
 The algorithm for the detection of straight lines may be extended to the detection of circles

 The methods presented here are quite general; their applications are not limited to segmentation
 The purpose of any image processing method is to obtain a result that can be presented to humans or used in further analysis
 The result should be consistent with a human observer's assessment
 A priori information about the shapes and features in an image is important in segmentation