CGMB424: IMAGE PROCESSING AND COMPUTER VISION

CGMB424: IMAGE PROCESSING AND COMPUTER VISION image analysis (part 2) edge/line detection

Overview
Most edge and line detectors are implemented with convolution masks, and most are based on discrete approximations to differential operators → they measure the rate of change in the image brightness function.
A large change in image brightness over a short spatial distance indicates the presence of an edge.
To detect lines, we first detect edges:
Mark edge points
Merge edge points to form lines and object outlines
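
All of the operators below are applied by sliding a small convolution mask over the image. As a point of reference, here is a minimal NumPy sketch of that basic operation (the function name is illustrative; borders are handled with zero padding, which is only one of several common choices):

```python
import numpy as np

def convolve2d(image, mask):
    """Convolve a grey-level image with a small mask (minimal sketch)."""
    img = np.asarray(image, dtype=float)
    k = np.flipud(np.fliplr(np.asarray(mask, dtype=float)))  # flip mask for true convolution
    pr, pc = k.shape[0] // 2, k.shape[1] // 2
    padded = np.pad(img, ((pr, pr), (pc, pc)), mode="constant")  # zero padding at the borders
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            # Sum of element-wise products of the mask and the neighbourhood
            out[r, c] = np.sum(padded[r:r + k.shape[0], c:c + k.shape[1]] * k)
    return out
```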

Problems in Edge and Line Detection
Noise in an image can create problems → eliminate, or at least minimize, noise in the image.
There is a tradeoff between the sensitivity and the accuracy of an edge detector:
Too sensitive → detects more noise points
Less sensitive → might miss valid edges
Larger mask → less sensitive to noise
Higher grey-level threshold → reduces noise effects

Edge Detection
Edge detection is based on the relationship a pixel has with its neighbors:
If a pixel's grey-level value is similar to those around it → most probably there is no edge.
If a pixel's neighbors differ widely in grey-level value → it may represent an edge.

Edge Detection
Types of edges:
Ideal edge → has abrupt changes in brightness; the edge appears very distinct.
Real edge → brightness changes gradually; the gradual change is a minor form of blurring caused by the imaging system → imaging device, lenses, lighting, etc.

Edge Detection
[Figure: brightness plotted against spatial coordinates for an ideal edge (abrupt step) and a real edge (gradual transition)]

Edge Detection: Roberts Operator
Marks edge points only; does not return any information about the edge orientation.
The simplest and oldest edge detection method.
Works best with binary images.
There are two forms for calculating the edge magnitude:
The square root of the sum of the squared differences of the diagonal neighbors.
The sum of the magnitudes of the differences of the diagonal neighbors.
The second form is often used since it is computationally efficient.
The disadvantage of the Roberts operator is that it is very sensitive to noise.

Edge Detection: Roberts Operator
The first form is √([I(r,c) - I(r-1,c-1)]² + [I(r,c-1) - I(r-1,c)]²)
The second form is |I(r,c) - I(r-1,c-1)| + |I(r,c-1) - I(r-1,c)|
where I(r,c) denotes the grey level of the pixel at row r, column c.
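
A minimal NumPy sketch of the second (absolute-value) form above; the function name is illustrative, and the output is one row and one column smaller than the input because the diagonal neighbours I(r-1, c-1) and I(r-1, c) are needed:

```python
import numpy as np

def roberts_magnitude(image):
    """Roberts edge magnitude, second form: |I(r,c)-I(r-1,c-1)| + |I(r,c-1)-I(r-1,c)|."""
    img = np.asarray(image, dtype=float)
    d1 = np.abs(img[1:, 1:] - img[:-1, :-1])   # I(r,c) - I(r-1,c-1)
    d2 = np.abs(img[1:, :-1] - img[:-1, 1:])   # I(r,c-1) - I(r-1,c)
    return d1 + d2
```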

Edge Detection: Sobel Operator
This operator looks for edges in both the horizontal and vertical directions and then combines the information into a single metric.
The Sobel operator is sensitive to high-frequency noise.
The masks:
Row mask:
-1 -2 -1
 0  0  0
 1  2  1
Column mask:
-1  0  1
-2  0  2
-1  0  1

Edge Detection: Sobel Operator
After the convolution we get two values:
s1 → result from the row mask
s2 → result from the column mask
Edge magnitude: √(s1² + s2²)
Edge direction: tan⁻¹(s1/s2)
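
A minimal sketch of the Sobel magnitude and direction using SciPy's ndimage convolution (a choice of this sketch; any 2-D convolution routine would do). The function name is illustrative, and arctan2 is used so the direction stays defined where s2 is zero:

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_ROW = np.array([[-1, -2, -1],
                      [ 0,  0,  0],
                      [ 1,  2,  1]], dtype=float)
SOBEL_COL = np.array([[-1, 0, 1],
                      [-2, 0, 2],
                      [-1, 0, 1]], dtype=float)

def sobel_edges(image):
    img = np.asarray(image, dtype=float)
    s1 = convolve(img, SOBEL_ROW)              # response of the row mask
    s2 = convolve(img, SOBEL_COL)              # response of the column mask
    magnitude = np.sqrt(s1 ** 2 + s2 ** 2)     # sqrt(s1^2 + s2^2)
    direction = np.arctan2(s1, s2)             # tan^-1(s1/s2), in radians
    return magnitude, direction
```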

Edge Detection: Prewitt Operator
This operator works the same way as the Sobel operator, but uses different mask coefficients.
The masks:
Row mask:
-1 -1 -1
 0  0  0
 1  1  1
Column mask:
-1  0  1
-1  0  1
-1  0  1
[Figure: effect of the horizontal Prewitt mask, effect of the vertical Prewitt mask, and the Prewitt edge magnitude. Image taken from http://ari.cankaya.edu.tr/~reza/ImLab4.htm]

Edge Detection: Prewitt Operator
After the convolution we get two values:
p1 → result from the row mask
p2 → result from the column mask
Edge magnitude: √(p1² + p2²)
Edge direction: tan⁻¹(p1/p2)
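
The Sobel sketch above carries over directly to the Prewitt operator; only the mask coefficients change (again an illustrative function name):

```python
import numpy as np
from scipy.ndimage import convolve

PREWITT_ROW = np.array([[-1, -1, -1],
                        [ 0,  0,  0],
                        [ 1,  1,  1]], dtype=float)
PREWITT_COL = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)

def prewitt_edges(image):
    img = np.asarray(image, dtype=float)
    p1 = convolve(img, PREWITT_ROW)            # response of the row mask
    p2 = convolve(img, PREWITT_COL)            # response of the column mask
    return np.sqrt(p1 ** 2 + p2 ** 2), np.arctan2(p1, p2)
```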

Edge Detection: Kirsch Compass Masks
These are called compass masks because they are defined by taking a single mask and rotating it to the eight major compass orientations.
The masks are as follows (each row of three coefficients is separated by a semicolon):
k0 = [ -3 -3  5 ; -3  0  5 ; -3 -3  5 ]
k1 = [ -3  5  5 ; -3  0  5 ; -3 -3 -3 ]
k2 = [  5  5  5 ; -3  0 -3 ; -3 -3 -3 ]
k3 = [  5  5 -3 ;  5  0 -3 ; -3 -3 -3 ]
k4 = [  5 -3 -3 ;  5  0 -3 ;  5 -3 -3 ]
k5 = [ -3 -3 -3 ;  5  0 -3 ;  5  5 -3 ]
k6 = [ -3 -3 -3 ; -3  0 -3 ;  5  5  5 ]
k7 = [ -3 -3 -3 ; -3  0  5 ; -3  5  5 ]

Edge Detection: Kirsch Compass Masks
The edge magnitude is defined as the maximum value found by convolving each of the masks with the image.
The edge direction is defined by the mask that produces the maximum magnitude; e.g. k2 corresponds to a horizontal edge.
The last four masks are actually the same as the first four, but flipped about a central axis.
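
A sketch of the compass-mask idea: rotate the ring of outer coefficients of a single base mask to obtain the eight masks, convolve with each, and take the maximum response. How the rotation index maps onto the k0 to k7 labels above is a convention of this sketch, so treat the returned direction index as illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

# Outer positions of a 3x3 mask, listed clockwise from the top-left corner.
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
BASE = [-3, -3, 5, 5, 5, -3, -3, -3]   # outer coefficients of one Kirsch mask (5s on the right column)

def kirsch_masks():
    masks = []
    for k in range(8):
        vals = BASE[k:] + BASE[:k]      # rotate the ring of coefficients by k steps
        m = np.zeros((3, 3))
        for (r, c), v in zip(RING, vals):
            m[r, c] = v
        masks.append(m)
    return masks

def kirsch_edges(image):
    img = np.asarray(image, dtype=float)
    responses = np.stack([convolve(img, m) for m in kirsch_masks()])
    magnitude = responses.max(axis=0)     # edge magnitude: maximum over the eight masks
    direction = responses.argmax(axis=0)  # index of the mask giving the maximum
    return magnitude, direction
```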

Edge Detection: Robinson Compass Masks
These are used in a similar manner to the Kirsch compass masks → easier to compute because they rely only on coefficients of 0, 1 and 2.
The masks are symmetrical about their directional axis (the axis of zeros).
We only need to compute the results of four of the masks; the other four can be obtained by negating the results of the first four.

Edge Detection: Robinson Compass Masks
The masks:
r0 = [ -1  0  1 ; -2  0  2 ; -1  0  1 ]
r1 = [  0  1  2 ; -1  0  1 ; -2 -1  0 ]
r2 = [  1  2  1 ;  0  0  0 ; -1 -2 -1 ]
r3 = [  2  1  0 ;  1  0 -1 ;  0 -1 -2 ]
r4 = [  1  0 -1 ;  2  0 -2 ;  1  0 -1 ]
r5 = [  0 -1 -2 ;  1  0 -1 ;  2  1  0 ]
r6 = [ -1 -2 -1 ;  0  0  0 ;  1  2  1 ]
r7 = [ -2 -1  0 ; -1  0  1 ;  0  1  2 ]

Edge Detection: Robinson Compass Masks
The edge magnitude is defined as the maximum value found by convolving each of the masks with the image.
The edge direction is defined by the mask that produces the maximum magnitude.
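
A sketch exploiting the negation property: only four convolutions are computed, and the other four responses are their negatives. The assignment of array indices to the r0 to r7 labels above is again a convention of this sketch.

```python
import numpy as np
from scipy.ndimage import convolve

R0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
R1 = np.array([[ 0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=float)
R2 = np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]], dtype=float)
R3 = np.array([[ 2, 1, 0], [ 1, 0, -1], [ 0, -1, -2]], dtype=float)

def robinson_edges(image):
    img = np.asarray(image, dtype=float)
    base = np.stack([convolve(img, m) for m in (R0, R1, R2, R3)])
    responses = np.concatenate([base, -base])   # r4..r7 are the negations of r0..r3
    magnitude = responses.max(axis=0)
    direction = responses.argmax(axis=0)
    return magnitude, direction
```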

Edge Detection: Laplacian Operators
There are three Laplacian masks, each representing a different approximation of the Laplacian operator.
The masks are rotationally symmetric.
They are applied by selecting one mask and convolving it with the image.
The sign of the result (positive/negative) at two adjacent pixel locations provides directional information and also tells us which side of the edge is brighter.

Edge Detection: Laplacian Operators
The three Laplacian masks:
 0 -1  0      1 -2  1     -1 -1 -1
-1  4 -1     -2  4 -2     -1  8 -1
 0 -1  0      1 -2  1     -1 -1 -1
The coefficients of each mask sum to 0, which ensures that a region of interest with constant value returns 0.
If the centre coefficient is increased by 1, the original grey level is retained in the result; the larger this sum, the less the processed image is changed from the original image.
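
A minimal sketch applying the third mask above; since the Laplacian produces signed values, the magnitude (absolute value) is what would typically be kept for display, as noted later:

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)   # coefficients sum to 0

def laplacian_edges(image):
    img = np.asarray(image, dtype=float)
    response = convolve(img, LAPLACIAN)   # signed result: the sign tells which side of the edge is brighter
    return np.abs(response)               # magnitude only, e.g. for display
```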

Edge Detection: Frei-Chen Masks
The masks form a complete set of basis vectors → we can represent any 3×3 sub-image as a weighted sum of the nine Frei-Chen masks.
The weights are found by projecting the 3×3 sub-image onto each of the masks; the projection is similar to a convolution.

Edge Detection: Frei-Chen Masks
The masks (each scaled by the factor shown; rows separated by semicolons):
f1 = 1/(2√2) × [ 1 √2 1 ; 0 0 0 ; -1 -√2 -1 ]
f2 = 1/(2√2) × [ 1 0 -1 ; √2 0 -√2 ; 1 0 -1 ]
f3 = 1/(2√2) × [ 0 -1 √2 ; 1 0 -1 ; -√2 1 0 ]
f4 = 1/(2√2) × [ √2 -1 0 ; -1 0 1 ; 0 1 -√2 ]
f5 = 1/2 × [ 0 1 0 ; -1 0 -1 ; 0 1 0 ]
f6 = 1/2 × [ -1 0 1 ; 0 0 0 ; 1 0 -1 ]
f7 = 1/6 × [ 1 -2 1 ; -2 4 -2 ; 1 -2 1 ]
f8 = 1/6 × [ -2 1 -2 ; 1 4 1 ; -2 1 -2 ]
f9 = 1/3 × [ 1 1 1 ; 1 1 1 ; 1 1 1 ]

Edge Detection: Frei-Chen Masks
Say we have the following sub-image:
Is = [ 1 0 1 ; 1 0 1 ; 1 0 1 ]
Start projecting the sub-image onto the masks:
Overlay the sub-image on the mask (in this case f1).
Do a normal convolution (sum of the element-wise products).
Multiply the result by the 1/(2√2) factor:
1/(2√2) [1(1) + 0(√2) + 1(1) + 1(0) + 0(0) + 1(0) + 1(-1) + 0(-√2) + 1(-1)] = 0
Continue calculating through f9, giving f1→0, f2→0, f3→0, f4→0, f5→-1, f6→0, f7→0, f8→-1, f9→2.
Take the non-zero weights, multiply each by its mask, and sum the corresponding values.

Edge Detection: Frei-Chen Masks
Referring to the example, the non-zero weights are at f5, f8 and f9, so the sub-image is reconstructed as
Is = (-1)(1/2) × [ 0 1 0 ; -1 0 -1 ; 0 1 0 ]
   + (-1)(1/6) × [ -2 1 -2 ; 1 4 1 ; -2 1 -2 ]
   + (2)(1/3) × [ 1 1 1 ; 1 1 1 ; 1 1 1 ]
To use the Frei-Chen masks for edge detection, group them into a set of 4 masks for an edge subspace (f1–f4), 4 masks for a line subspace (f5–f8) and 1 mask for an average subspace (f9).

Edge Detection: Frei-Chen Masks
To use the masks for edge detection, select a particular subspace of interest and find the relative projection of the image onto that subspace using the equation
cos Θ = √(M / S)
where
M = Σ (Is, fk)²  for k ∈ {e}
S = Σ (Is, fk)²  for k = 1 to 9
The set {e} consists of the masks of interest.
The (Is, fk) notation refers to overlaying the mask on the sub-image, multiplying the coincident terms and summing the results.
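
A sketch of the Frei-Chen projection, assuming the mask set listed above with the edge subspace taken as f1 to f4 (function and parameter names are illustrative). With the worked example Is = [ 1 0 1 ; 1 0 1 ; 1 0 1 ], the projections come out as -1, -1 and 2 on f5, f8 and f9 and zero elsewhere, so cos Θ for the edge subspace is 0.

```python
import numpy as np

S2 = np.sqrt(2.0)

# The nine Frei-Chen masks f1..f9, including their scale factors.
FREI_CHEN = [
    np.array([[1, S2, 1], [0, 0, 0], [-1, -S2, -1]]) / (2 * S2),
    np.array([[1, 0, -1], [S2, 0, -S2], [1, 0, -1]]) / (2 * S2),
    np.array([[0, -1, S2], [1, 0, -1], [-S2, 1, 0]]) / (2 * S2),
    np.array([[S2, -1, 0], [-1, 0, 1], [0, 1, -S2]]) / (2 * S2),
    np.array([[0, 1, 0], [-1, 0, -1], [0, 1, 0]]) / 2,
    np.array([[-1, 0, 1], [0, 0, 0], [1, 0, -1]]) / 2,
    np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]]) / 6,
    np.array([[-2, 1, -2], [1, 4, 1], [-2, 1, -2]]) / 6,
    np.ones((3, 3)) / 3,
]

def frei_chen_projection(sub_image, subspace=(0, 1, 2, 3)):
    """Relative projection cos(theta) = sqrt(M/S) of a 3x3 sub-image onto a subspace.

    (Is, fk) is the sum of element-wise products of the sub-image and mask fk.
    `subspace` holds the indices of the masks of interest {e}; default is the edge subspace f1..f4.
    """
    Is = np.asarray(sub_image, dtype=float)
    proj = np.array([np.sum(Is * f) for f in FREI_CHEN])  # the nine projections (Is, fk)
    M = np.sum(proj[list(subspace)] ** 2)
    S = np.sum(proj ** 2)
    return np.sqrt(M / S) if S > 0 else 0.0
```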

Edge Operators Performance
To develop a performance metric for an edge detector, we need to define:
What constitutes success
What types of errors can occur
The types of error that can occur in edge detection are:
Missing valid edge points
Classifying noise pulses as valid edge points
Smearing edges

Edge Operators Performance
[Figure: a real image and edge results showing missed edge points, noise misclassified as edge points, and smeared edges]

Edge Operators Performance: Pratt Figure of Merit Rating Factor
R = (1/IN) · Σ 1/(1 + α·di²)   (sum over the IF found edge points, i = 1 to IF)
IN = the maximum of II and IF
II = the number of ideal edge points in the image
IF = the number of edge points found by the edge detector
α = a scaling constant that can be adjusted to set the penalty for offset edges
d = the distance of a found edge point from an ideal edge point

Edge Operators Performance: Pratt Figure of Merit Rating Factor
For this metric, R will be 1 for a perfect edge.
In general, the metric assigns a better rating to smeared edges than to offset or missing edges, because techniques exist to thin smeared edges, whereas it is difficult to determine when an edge has been missed.
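
A minimal sketch of the rating factor for lists of edge coordinates, taking d as the distance from each found edge point to the nearest ideal edge point; the default α = 1/9 is a commonly used value, not something fixed by the slide, and the function name is illustrative:

```python
import numpy as np

def pratt_rating(ideal_points, found_points, alpha=1.0 / 9.0):
    """Pratt Figure of Merit: R = (1/I_N) * sum over found points of 1 / (1 + alpha * d^2)."""
    I_I, I_F = len(ideal_points), len(found_points)
    I_N = max(I_I, I_F)                          # normalising count: max of ideal and found points
    ideal = np.asarray(ideal_points, dtype=float)
    total = 0.0
    for r, c in found_points:
        # Squared distance from this found point to the nearest ideal edge point
        d2 = np.min((ideal[:, 0] - r) ** 2 + (ideal[:, 1] - c) ** 2)
        total += 1.0 / (1.0 + alpha * d2)
    return total / I_N
```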

Edge Operators Performance: Pratt Figure of Merit Rating Factor
Objective metrics are often of limited use in practical applications.
Subjective evaluation by the human visual system is still superior to evaluation by any computer vision system.

Edge Operators Performance: Edge Detection Examples
[Figure: original image and the results of the Sobel, Prewitt, Laplacian and Roberts operators]

Edge Operators Performance
All the operators return similar results except the Laplacian.
The Laplacian returns positive and negative numbers, which get linearly remapped to the range 0 to 255; the background value of 0 is therefore mapped to some intermediate grey level.
Only the magnitude is used for displaying the results.

Edge Operators Performance
If we add noise to the image, the results are not as good. There are a number of ways to deal with this:
Preprocess the image with spatial filters to remove the noise.
Expand the edge detection operators to larger masks.

Edge Operators Performance
[Figure: the extended Prewitt edge detection mask and the extended Sobel edge detection mask]

Edge Operators Performance
We can also define a truncated pyramid operator, whose weights decrease as we move away from the center pixel.

Edge Operators Performance: Edge Detection Examples with Noise
[Figure: edge detection results on a noisy image]

Edge Operators Performance
On noisy images, the extended operators perform better than the 3×3 masks. However, they have a number of drawbacks:
They require more computation.
They tend to slightly blur the edges.

Hough Transform
The Hough transform is used to find lines. A line is defined as a collection of edge points that are adjacent and have the same direction.
The Hough transform takes a collection of edge points found by an edge detector and finds all the lines on which those edge points lie.

Hough Transform
To understand the Hough transform, consider the normal (perpendicular) representation of a line:
ρ = r cos θ + c sin θ
Each (ρ, θ) pair corresponds to a possible line equation.
The range of θ is ±90°.
The range of ρ is from 0 to √2·N (where N is the image size).

Hough Transform
A line is a path whose points share the same direction (the same line equation).
The idea is to count the number of points that lie on each possible line in the image plane; the line that accommodates the most points is the candidate line.

Hough Transform
In practice there tend to be many lines, so a line is selected if the number of points that lie on it exceeds a certain user-defined threshold.
We cannot have infinite precision for ρ and θ, or we would have infinitely many line equations; therefore we need to quantize the ρθ parameter space.

Hough Transform
Each block in the quantized space corresponds to a line, or to a group of possible lines.
The Hough transform algorithm consists of three primary steps:
1. Define the desired increments for ρ and θ, Δρ and Δθ, and quantize the space accordingly.

Hough Transform
2. For every point of interest (edge point), plug the values of r and c into the line equation ρ = r cos θ + c sin θ; then, for each value of θ in the quantized space, solve for ρ.
3. For each ρθ pair from step 2, record the rc pair in the corresponding block of the quantized space. This constitutes a hit for that block.

Hough Transform
When this process is complete, the number of hits in each block corresponds to the number of pixels on the line defined by the ρ and θ values of that block.
Next, choose a threshold and select the quantization blocks that contain more points than the threshold.

Hough Transform
Finally, the selected lines are marked in the output image.
There is a tradeoff in choosing the size of the quantization blocks:
Large blocks reduce the search time.
However, they may reduce the line resolution in the image space.
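
A minimal sketch of the three-step accumulation described above (illustrative function name; the increments Δρ and Δθ, the image size N and the threshold are the user-defined values just discussed):

```python
import numpy as np

def hough_lines(edge_points, image_size, d_rho=1.0, d_theta=1.0, threshold=10):
    """Accumulate edge points in the quantised (rho, theta) space and return candidate lines.

    edge_points: iterable of (r, c) coordinates from an edge detector.
    image_size:  N for an N x N image, so |rho| is at most sqrt(2) * N.
    Returns the (rho, theta-in-degrees) pairs whose cells exceed `threshold` hits.
    """
    thetas = np.deg2rad(np.arange(-90.0, 90.0, d_theta))      # quantised theta values
    max_rho = np.sqrt(2.0) * image_size
    rhos = np.arange(-max_rho, max_rho, d_rho)                # quantised rho values
    accumulator = np.zeros((len(rhos), len(thetas)), dtype=int)

    for r, c in edge_points:
        rho_vals = r * np.cos(thetas) + c * np.sin(thetas)    # rho = r cos(theta) + c sin(theta)
        rho_idx = np.digitize(rho_vals, rhos) - 1             # which rho cell each value falls into
        accumulator[rho_idx, np.arange(len(thetas))] += 1     # one hit per (rho, theta) block

    lines = []
    for i, j in zip(*np.nonzero(accumulator > threshold)):    # blocks with more hits than the threshold
        lines.append((rhos[i], np.rad2deg(thetas[j])))
    return lines
```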

Hough Transform Example Consider three data points, shown here as black dots.

Hough Transform Example For each data point, a number of lines are plotted going through it, all at different angles. These are shown here as solid lines. For each solid line a line is plotted which is perpendicular to it and which intersects the origin. These are shown as dashed lines. The length and angle of each dashed line are measured. In the diagram above, the results are shown in tables. This is repeated for each data point. A graph of length against angle, known as a Hough space graph, is then created.

Hough Transform Example The point where the lines intersect gives a distance and angle. This distance and angle indicate the line which bisects the points being tested. In the graph shown, the lines intersect at the purple point; this corresponds to the solid purple line in the diagrams above, which bisects the three points.