
Image Analysis Manipulate an image to extract information to help solve a problem. Preprocessing - get rid of unnecessary information. Data reduction - transform the image to a usable form. Feature analysis - make inferences about the image.

Image Analysis Preprocessing Noise reduction Gray level quantization Spatial quantization Finding regions of interest Reducing the number of bits

Image Analysis Data reduction process, in which the image(s) are transformed into a more convenient form. RGB → HSL Image subtraction Histogram Feature extraction

Image Analysis Feature analysis - Specific results Blood cell counting Tumor size and location 3D model

Image Analysis Geometric - Resizing, Rotating. Noise Reduction - Image Smoothing, Median Filtering. Edge Detection - Searching for discontinuities, Histogram slicing, Blob detection.

Geometric Transformation

Interpolation is used to simplify data for further processing: it determines image values at the integer pixel locations of the transformed image. Numerous interpolation schemes exist; two simple methods are: Nearest neighbor - assign the value of the nearest known point. Bilinear interpolation - example on the next slide.

Nearest Neighbor Interpolation: the point to be interpolated is assigned the value of its nearest known pixel (figure).

Linear Interpolation

Bi-linear Interpolation Given the four neighbors I(x,y), I(x+1,y), I(x,y+1), I(x+1,y+1) and fractional offsets 0 ≤ a, b ≤ 1 with x' = x + a and y' = y + b:
I(x',y) = (1-a)I(x,y) + (a)I(x+1,y)
I(x',y+1) = (1-a)I(x,y+1) + (a)I(x+1,y+1)
I(x',y') = (1-b)I(x',y) + (b)I(x',y+1)
I(x',y') = (1-b)(1-a)I(x,y) + (1-b)(a)I(x+1,y) + (b)(1-a)I(x,y+1) + (a)(b)I(x+1,y+1)
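To make the formulas concrete, here is a minimal Python sketch (the function name and test values are illustrative, not from the slides) that samples an image at a non-integer location using exactly the three-step bilinear scheme above.

```python
import numpy as np

def bilinear_sample(I, xp, yp):
    """Return I(x', y') for float coordinates; I is indexed as I[y, x]."""
    x, y = int(np.floor(xp)), int(np.floor(yp))
    a, b = xp - x, yp - y                               # fractional offsets, 0 <= a, b < 1
    x1, y1 = min(x + 1, I.shape[1] - 1), min(y + 1, I.shape[0] - 1)
    top    = (1 - a) * I[y,  x] + a * I[y,  x1]         # I(x', y)
    bottom = (1 - a) * I[y1, x] + a * I[y1, x1]         # I(x', y+1)
    return (1 - b) * top + b * bottom                   # I(x', y')

img = np.arange(16, dtype=float).reshape(4, 4)          # small test image
print(bilinear_sample(img, 1.5, 2.25))                  # value blended from the four neighbours
```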

Bi-linear Interpolation I(x’,y’)

Neighborhood Operations

Objectives Why are neighborhoods important? What is linear convolution? discrete templates, masks or filters algorithm mechanics graphical interpretation Describe non-linear operators maximum minimum median What is tiling?

Why are neighbourhoods important? pixel

Because… Provide context for individual pixels. Relationships between neighbors determine image features. Neighborhood operations: noise reduction edge enhancement zooming

Noise reduction Edge Enhancement Zooming

Neighbourhood Operations Linear convolution (*) A*B*C*D = B*C*D*A = …. Non-linear operators median, max, min, ...

Convolution versus Spectral We learned two methods of processing images: convolution and spectral. We analyzed and demonstrated how to build a processor (systolic, pipelined, parallel, cellular automaton) for 1D convolution. 1D convolution is used in speech processing and in polynomial multiplication. We will now use visualized animations to show in more detail how 2D convolution works for images. This should convince you how important it is to do convolution quickly in modern architectures, especially for 3D data and beyond.

2D Convolution We will show more examples of convolution now, especially for 2D data. It consists of filtering an image A using a filter (mask, template) B. The mask is a small image whose pixel values are called weights; the weights modify relationships between pixels.

A ⊛ B = C : A is the input image (4×4), B is the filter, mask or template (2×2), and C is the convolved image (3×3).

The 2×2 mask B slides over the 4×4 image A. At the first position it covers A1,1, A1,2, A2,1, A2,2:
C1,1 = A1,1·B1,1 + A1,2·B1,2 + A2,1·B2,1 + A2,2·B2,2

Shifting one column to the right, the mask covers A1,2, A1,3, A2,2, A2,3:
C1,2 = A1,2·B1,1 + A1,3·B1,2 + A2,2·B2,1 + A2,3·B2,2

Shifting again:
C1,3 = A1,3·B1,1 + A1,4·B1,2 + A2,3·B2,1 + A2,4·B2,2

Moving down one row, the mask covers A2,1, A2,2, A3,1, A3,2:
C2,1 = A2,1·B1,1 + A2,2·B1,2 + A3,1·B2,1 + A3,2·B2,2

(The mask continues sliding over every position of A.)

Mathematical Notation: C1,1 = A1,1·B1,1 + A1,2·B1,2 + A2,1·B2,1 + A2,2·B2,2

Convolution, numeric example: a 4×4 input image A convolved with a 2×2 filter (mask, template) B gives a 3×3 convolved image C (worked numbers shown on the slide).

Convolution size Image size = M1 × N1, mask size = M2 × N2. The convolved image has size (M1 - M2 + 1) × (N1 - N2 + 1). Typical mask sizes: 3×3, 5×5, 7×7, 9×9, 11×11. What is the convolved image size for a 128 × 128 image and a 7 × 7 mask? (128 - 7 + 1 = 122, so 122 × 122.)
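A small sketch of the sliding-window filtering described above, written in plain NumPy (names and test values are illustrative). Like the slides, it slides the mask without flipping it (i.e. it computes a cross-correlation) and produces the "valid" output size (M1 - M2 + 1) × (N1 - N2 + 1).

```python
import numpy as np

def filter2d_valid(A, B):
    """Slide mask B over image A and return the 'valid' filtered image C."""
    M1, N1 = A.shape
    M2, N2 = B.shape
    C = np.zeros((M1 - M2 + 1, N1 - N2 + 1))
    for i in range(C.shape[0]):
        for j in range(C.shape[1]):
            C[i, j] = np.sum(A[i:i + M2, j:j + N2] * B)   # weighted sum of the window
    return C

A = np.arange(16, dtype=float).reshape(4, 4)              # 4x4 input image
B = np.array([[1.0, 2.0], [3.0, 4.0]])                    # 2x2 mask
print(filter2d_valid(A, B).shape)                         # (3, 3); a 128x128 image with a 7x7 mask gives 122x122
```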

We convolve the image with a 9×9 averaging filter (all mask weights equal to 1).
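A hedged sketch of this averaging filter, assuming SciPy is available; the 9×9 all-ones mask is normalised by its weight (81) so the overall brightness is preserved while detail is blurred. The test image is a random stand-in, not the slide's photo.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
img = rng.random((64, 64))                      # stand-in grayscale image
kernel = np.ones((9, 9)) / 81.0                 # all-ones mask, normalised by its weight
smoothed = convolve2d(img, kernel, mode='same', boundary='symm')
print(img.mean(), smoothed.mean())              # means stay close; fine detail is averaged away
```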

Nonlinear Neighbourhood Operations Maximum Minimum Median We have already discussed the sorter architecture (three variants: pipelined, butterfly combinational, and sequential controller). It can be used for all of these operations, and also for other non-linear operators.

Max and Min Operations: slide the window over the image and replace the centre pixel with the maximum or minimum value found in its neighbourhood (worked 3×3 example on the slide: 63 = max, 59 = min).

Median Operation: sort the neighbourhood values by rank and take the middle value. In the slide's worked 3×3 example the nine sorted values have 59 in the middle, so C1,2 = 59.
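A minimal sketch of these non-linear neighbourhood operators (the helper name and sample values are illustrative, not the slide's data): the centre pixel of each 3×3 window is replaced by the window's maximum, minimum or median; the border is left untouched for simplicity.

```python
import numpy as np

def rank_filter(img, op=np.median, size=3):
    """Replace each interior pixel by op() of its size x size neighbourhood."""
    r = size // 2
    out = img.copy().astype(float)
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            window = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = op(window)              # np.max, np.min or np.median
    return out

img = np.array([[61, 62, 57, 60],
                [59, 65, 63, 56],
                [55, 58, 49, 53],
                [45,  1, 60, 62]], dtype=float)
print(rank_filter(img, np.median)[1, 1])        # median of the 3x3 window around (1,1) -> 59.0
```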

9x9 Median

Edge Detection What do we mean by edge detection? What is an edge?

What is Edge Detection? Detects large intensity transitions between pixels and redraws the image with only the edges showing (small worked example of pixel values on the slide).

What is an Edge? Edge easy to find

What is an Edge? Where is the edge? Single pixel wide or multiple pixels?

What is an Edge? Noise: we have to distinguish noise from an actual edge.

What is an Edge? Is this one edge or two?

What is an Edge? Texture discontinuity

Edge Detection – so what is an edge to be detected? A large change in image brightness over a short spatial distance. Edge strength = (I(x,y) - I(x+dx,y)) / dx. But this general definition still allows for many theories, software implementations and hardware architectures.
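As a tiny illustration of this definition with dx = 1, the edge strength along one image row is just the absolute difference of adjacent pixels (the values below are made up):

```python
import numpy as np

row = np.array([10, 10, 11, 10, 80, 82, 81, 80], dtype=float)   # 1-D intensity profile
edge_strength = np.abs(np.diff(row))            # |I(x+1) - I(x)| / 1
print(edge_strength)                            # the large value (70) marks the edge
```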

Now we will discuss and illustrate various kinds of filter operators

Edge Detection Filters High-pass filtering eliminates uniform regions (low frequencies): edge "detection" or "enhancement".

Edge Detection Filters

Edge Detection Filters (continued) Sum of kernel coefficients = 0: the differences in signs emphasize differences in pixel values and reduce the average image intensity. Negative pixel values in the output?

Edge Direction vertical horizontal diagonal

Directional High Pass Filters

Convolution Edge Detection using Sobel and similar operators

Example of Sobel Operator

Sobel Edge Detection

Convolution Application Examples We apply the Sobel operator for edge detection.
Column mask:
-1 -2 -1
 0  0  0
 1  2  1
Row mask:
-1  0  1
-2  0  2
-1  0  1
Applying the column mask to a 3×3 sub-field of a picture
p0 p1 p2
p3 p4 p5
p6 p7 p8
gives (p6 - p0) + 2(p7 - p1) + (p8 - p2). The final step of the convolution equation, dividing by the weight, must be ignored here (the weight is 0; see the next slide). The result of this calculation with the column mask is the horizontal difference; with the row mask we get the vertical difference.

Convolution Application Examples -- Edge Detection with the Sobel Operator The weight of a mask determines the grey level of the image after convolution. The weight W of the Sobel mask is W = (-1) + (-2) + (-1) + 0 + 0 + 0 + 1 + 2 + 1 = 0, so the resulting image loses its overall lightness and appears dark except at edges.

Sobel Operator

Sobel Operator
S1 =
-1 -2 -1
 0  0  0
 1  2  1
S2 =
-1  0  1
-2  0  2
-1  0  1
Edge magnitude = sqrt(s1² + s2²), edge direction = tan⁻¹(s1 / s2), where s1 and s2 are the responses to S1 and S2.
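A hedged sketch of applying the two Sobel masks and combining them into edge magnitude and direction, using SciPy's correlate (which, like the slides, slides the mask without flipping it). The test image and names are illustrative; the Prewitt operator works the same way with its own masks.

```python
import numpy as np
from scipy.ndimage import correlate

S1 = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
S2 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

img = np.zeros((8, 8)); img[:, 4:] = 100.0      # synthetic image with one vertical edge
s1 = correlate(img, S1)                         # response to S1
s2 = correlate(img, S2)                         # response to S2
magnitude = np.sqrt(s1**2 + s2**2)              # edge magnitude
direction = np.arctan2(s1, s2)                  # edge direction, tan^-1(s1/s2) as on the slide
print(magnitude.max())                          # strongest response lies along the vertical edge
```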

Comparison of Edge Detection Algorithms Sobel Canny Prewitt Ticbetts

Edge Direction Asymmetric kernels detect edges from specific directions.
NorthEast:
 1 -1 -1
 1 -2 -1
 1  1 -1
East:
 1  1 -1
 1 -2 -1
 1  1 -1
North:
-1 -1 -1
 1 -2  1
 1  1  1

Robinson Operator

Robinson Compass Masks - eight 45-degree rotations of the same kernel (the arrows on the slide show the edge direction each mask responds to):
[-1 0 1; -2 0 2; -1 0 1]   [0 1 2; -1 0 1; -2 -1 0]   [1 2 1; 0 0 0; -1 -2 -1]   [2 1 0; 1 0 -1; 0 -1 -2]
[1 0 -1; 2 0 -2; 1 0 -1]   [0 -1 -2; 1 0 -1; 2 1 0]   [-1 -2 -1; 0 0 0; 1 2 1]   [-2 -1 0; -1 0 1; 0 1 2]
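A sketch of compass-mask edge detection, assuming the eight masks are 45-degree rotations of the first one: apply all eight and keep, per pixel, the strongest response and the index of the winning mask (the "arrow"). Names and the test image are illustrative.

```python
import numpy as np
from scipy.ndimage import correlate

# Outer-ring positions of a 3x3 mask, in clockwise order around the centre.
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def rotate45(mask):
    """Shift the eight outer entries one position around the centre (45 degrees)."""
    out = mask.copy()
    vals = [mask[r, c] for r, c in RING]
    for (r, c), v in zip(RING, vals[-1:] + vals[:-1]):
        out[r, c] = v
    return out

base = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
masks = [base]
for _ in range(7):                              # the remaining seven rotations
    masks.append(rotate45(masks[-1]))

img = np.zeros((8, 8)); img[4:, :] = 50.0       # synthetic image with one horizontal edge
responses = np.stack([correlate(img, m) for m in masks])
edge_strength = responses.max(axis=0)           # strongest compass response per pixel
edge_direction = responses.argmax(axis=0)       # index of the winning mask (the "arrow")
print(edge_strength.max(), np.unique(edge_direction))
```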

Roberts Operator

Roberts Operator
[1 0; 0 -1]   or   [0 1; -1 0]
Does not return any information about the orientation of the edge.

Prewitt Operator
P1 =
-1 -1 -1
 0  0  0
 1  1  1
P2 =
-1  0  1
-1  0  1
-1  0  1
Edge magnitude = sqrt(p1² + p2²), edge direction = tan⁻¹(p1 / p2), where p1 and p2 are the responses to P1 and P2.

Edge Detection Filters Prewitt Row

Two of the compass masks, [0 1 2; -1 0 1; -2 -1 0] and [1 2 1; 0 0 0; -1 -2 -1], applied to an image: original and filtered cow.

Edge Detection Filters: compare Prewitt and Sobel Edge Detection (continued) First-order (gradient) kernels: Prewitt row mask [1 0 -1; 1 0 -1; 1 0 -1], Sobel row mask [1 0 -1; 2 0 -2; 1 0 -1]. Combine row and column operators.

1D Laplacian Operator (figure: an intensity profile with its first derivative and its second derivative).

2D Laplacian Operator Convolution masks approximating a Laplacian:
 0 -1  0      1 -2  1     -1 -1 -1
-1  4 -1     -2  4 -2     -1  8 -1
 0 -1  0      1 -2  1     -1 -1 -1
These are just a few examples of the Laplacian; we can use much larger windows.
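A minimal sketch applying the first of these Laplacian masks to a synthetic step edge; the response changes sign (a zero crossing) across the edge. The test image is illustrative.

```python
import numpy as np
from scipy.ndimage import correlate

lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)

img = np.zeros((8, 8)); img[:, 4:] = 100.0      # step edge between columns 3 and 4
response = correlate(img, lap)
print(response[4, 2:6])                         # [0, -100, 100, 0]: a negative/positive pair straddles the edge
```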

Example: input image, Laplacian mask [0 -1 0; -1 4 -1; 0 -1 0], output image.

Image Processing Operations for Early Vision: Edge Detection

Reminder: Effect of Filters low high

Edges … are the important part of images. From simplest and least robust to most difficult and most robust: intensity / color edges, textures, contours, condensation... Example: there are many letters B occluded by a black shape here. How do we find them?

Image Processing Operations Edge Detection Edges are curves in the image plane across which there is a “significant” change in image brightness. The goal of edge detection is the construction of an idealized line drawing

Pixels on edges

Edges found

Edge effects: rarely ideal edges Not all information is created equal...

Causes of edges Depth discontinuity: one surface occludes another. Surface orientation discontinuity: the edge of a block. Reflectance discontinuity: texture or color changes. Illumination discontinuity: shadows.

Edges: causes What are they? Why? four physical events that cause image edges...

Edges: causes What are they? Why? Four physical events cause image edges: discontinuities in surface color/intensity, surface normal, depth, and lighting (specularities).

Edges: causes Edges are image locations with a local maximum in image gradient in the direction of that gradient (steepness)

Edge Detection Finding simple descriptions of objects in complex images find edges interrelate edges

Examples of edges

Edges are not ideal... One idea to detect edges is to differentiate the image and look for places where the brightness undergoes a sharp change. Consider a 1-D example. Below is an intensity profile for a 1-D image.

Edge Detection Below we have the derivative of the previous graph. There are peaks at x = 18, x = 50 and x = 75; the peaks at x = 18 and x = 75 are errors due to the presence of noise in the image.

Finding Edges Image intensity along a line, its first derivative, and the derivative smoothed by convolving with a Gaussian.

Edge Detection This problem is countered by convolving a smoothing function along with the differentiation operation. The mathematical concept of convolution allows us to perform many useful image-processing operations.

Image Processing Operations One standard form of smoothing is to use a Gaussian function. Now using the idea of convolving with the Gaussian function we can revisit the 1-D example.

Convolving to find edges With the smoothing convolution applied, we can more easily see the edge at x = 50. Convolution lets us discover where edges are located, which allows us to make an accurate line drawing.
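A synthetic 1-D illustration of this idea (the step position, noise level and kernel below are made up, not the slide's data): smooth a noisy step profile with a Gaussian, then differentiate; the derivative of the smoothed profile has a clean peak at the true edge.

```python
import numpy as np

x = np.arange(100)
signal = np.where(x < 50, 10.0, 60.0)             # step edge at x = 50
signal += np.random.default_rng(1).normal(0, 2.0, size=x.size)   # additive noise

g = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)  # Gaussian kernel, sigma = 2
g /= g.sum()
smoothed = np.convolve(signal, g, mode='same')

deriv_raw = np.diff(signal)                       # derivative of the raw profile: noise adds spurious fluctuations
deriv_smooth = np.diff(smoothed)                  # derivative of the smoothed profile: one clean peak
print(np.argmax(np.abs(deriv_smooth)))            # ~49, the location of the edge
```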

Edge Detection by Convolution Here is an example of using convolution on a 2-D picture of the Mona Lisa.

Zero Crossing At an edge, the first derivative of intensity has a peak and the second derivative crosses zero.

Edge

Edge Parameters

Reminder: How the Point Detection Mask operates on one color image

Edge Detection uses Convolution Goal: to find regions of an image with locally maximal gradient magnitude. (1) Smooth the image to reduce the effects of noise: replace each pixel by a weighted sum of its neighbors,
I_new(x,y) = Σ_{i,j = -1..1} w_ij I_old(x+i, y+j)
using the (scaled) weight mask
1 2 1
2 4 2
1 2 1
(the slide shows the old and new pixel values for a small example image).

Edge Detection uses Convolution Goal: to find regions of an image with locally maximal gradient magnitude. (1) Smooth the image to reduce the effects of noise, I_new(x,y) = Σ_{i,j = -1..1} w_ij I_old(x+i, y+j), with the same 1-2-1 / 2-4-2 / 1-2-1 weight mask (the slide shows the smoothed pixel values). It's possible to do this one dimension at a time: original image, smoothed vertically, smoothed horizontally.

Edge Detection uses Convolution Goal: to find regions of an image with locally maximal gradient magnitude. (1) Smooth the image to reduce the effects of noise: I_new(x,y) = Σ_{i,j = -1..1} w_ij I_old(x+i, y+j). (2) Estimate the gradient at each pixel: the same procedure (convolution), but with another mask of weights that approximates taking derivatives. In the x direction the mask is [-1 1]; in the y direction it is the same mask turned vertically. These approximate dI/dx = Ix and dI/dy = Iy.

Edge Detection uses Convolution Goal: to find regions of an image with locally maximal gradient magnitude. (1) Smooth the image to reduce the effects of noise. (2) Estimate the gradient at each pixel with the derivative masks ([-1 1] in x, vertically in y). (3) Find the gradient magnitude and thin the resulting lines: |∇I(x,y)| = sqrt(Ix² + Iy²), θ = atan2(Ix, Iy). Seek out maxima in the gradient direction θ.

Edge Detection uses Convolution Goal: to find regions of an image with locally maximal gradient magnitude. (1) Smooth the image to reduce the effects of noise: I_new(x,y) = Σ_{i,j = -1..1} w_ij I_old(x+i, y+j). (2) Estimate the gradient at each pixel with the derivative masks ([-1 1] in x, vertically in y). (3) Find the gradient magnitude and thin the resulting information: in the gradient direction θ = atan2(Ix, Iy), look for local maxima of |∇I(x,y)| = sqrt(Ix² + Iy²). (4) Choose a threshold: any gradients above it are classified as edges.
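Putting the four steps together, here is a compact, hedged sketch of the whole pipeline (smoothing, derivative masks, gradient magnitude, threshold); the non-maximum suppression / thinning of step (3) is omitted and all names and test values are illustrative.

```python
import numpy as np
from scipy.ndimage import correlate

def simple_edge_detector(img, threshold):
    smooth = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    blurred = correlate(img, smooth)                      # (1) smooth to reduce noise
    Ix = correlate(blurred, np.array([[-1.0, 1.0]]))      # (2) derivative mask in x
    Iy = correlate(blurred, np.array([[-1.0], [1.0]]))    #     derivative mask in y
    magnitude = np.sqrt(Ix**2 + Iy**2)                    # (3) gradient magnitude (thinning omitted)
    theta = np.arctan2(Ix, Iy)                            #     gradient direction (slide's convention)
    return magnitude > threshold, theta                   # (4) threshold classifies edges

img = np.zeros((16, 16)); img[:, 8:] = 200.0              # synthetic image with one vertical edge
edges, theta = simple_edge_detector(img, threshold=50.0)
print(int(edges.sum()))                                   # number of pixels classified as edge
```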

Theory of Gradient Based Edge Detection

Formal Model of Edge

Formal Model of Edge

Formal Model of Edge (cont)

Formal Model of Edge (cont)

Formal Model of Edge (cont)

Formal Model of Edge (cont): Roberts

Formal Model of Edge (cont): Laplacian and Marr-Hildreth

Formal Model of Edge (cont)

Formal Model of Edge (cont)

Thresholds Thresholds are important, done before or during edge detection. original image very high threshold

Thresholds original image very high threshold

Thresholds original image very high threshold

Thresholds original image very high threshold reasonable

Thresholds original image: very high threshold, reasonable, too low! This all takes time...