Image Visualization

Outline
9.1. Image Data Representation
9.2. Image Processing and Visualization
9.3. Basic Imaging Algorithms: Contrast Enhancement, Histogram Equalization, Gaussian Smoothing, Edge Detection
9.4. Shape Representation and Analysis: Segmentation, Connected Components, Morphological Operations, Distance Transforms, Skeletonization

Image Data Representation. What is an image? An image is a well-behaved uniform dataset: a two-dimensional array (matrix) of pixels, e.g., bitmaps, pixmaps, RGB images. A pixel is square-shaped and has a constant value over the entire pixel surface. The value is typically encoded as an 8-bit integer.

Image Processing and Visualization. Image processing can follow the visualization pipeline, e.g., image contrast enhancement applied after the rendering operation. Image processing may also follow every step of the visualization pipeline.

Image Processing and Visualization

Basic Image Processing. Image enhancement applies a transfer function to the pixel luminance values. The transfer function is usually based on image histogram analysis: a high-slope function enhances image contrast, a low-slope function attenuates it.
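
As an illustration (not from the slides), a minimal NumPy sketch of a linear transfer function applied to 8-bit luminance values; the function name and the window limits low/high are assumed for this example:

```python
import numpy as np

def enhance_contrast(image, low, high):
    """Linear transfer function: map the luminance range [low, high] onto [0, 255].

    Values below `low` clip to 0, values above `high` clip to 255.
    """
    stretched = (image.astype(np.float64) - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# Stretch an 8-bit image whose useful luminance lies in a narrow window.
img = np.random.randint(60, 181, size=(8, 8), dtype=np.uint8)
print(enhance_contrast(img, low=60, high=180))
```

A narrow [low, high] window yields a high-slope transfer function (strong contrast enhancement); a wide window yields a low-slope function.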

(Continued) Image Visualization Chap. 9 November 12, 2009

Basic Image Processing. The most basic image processing operation is contrast enhancement by applying a transfer function.

Image Enhancement (figure): linear transfer vs. non-linear transfer.

Histogram Equalization. Goal: every luminance value covers roughly the same number of pixels. The histogram equalization method computes a transfer function such that the resulting image has a near-constant histogram.

Histogram Equalization. Histogram equalization is a technique for adjusting image intensities to enhance contrast. In general, a histogram is an estimate of the probability distribution of a particular type of data. An image histogram is a histogram that gives a graphical representation of the tonal distribution of the gray values in a digital image. By viewing the image's histogram, we can analyze the frequency of appearance of the different gray levels contained in the image.
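
A minimal sketch (assuming 8-bit grayscale input; not taken from the slides) of histogram equalization, where the transfer function is the scaled cumulative histogram:

```python
import numpy as np

def equalize_histogram(image):
    """Histogram equalization of an 8-bit grayscale image.

    The transfer function is the scaled cumulative distribution of the gray
    values, which spreads the pixel counts so the output histogram is
    approximately constant.
    """
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                                   # normalize the CDF to [0, 1]
    transfer = np.round(cdf * 255).astype(np.uint8)
    return transfer[image]                           # apply the transfer function per pixel

low_contrast = np.random.randint(100, 140, size=(64, 64), dtype=np.uint8)
equalized = equalize_histogram(low_contrast)         # gray values now span roughly [0, 255]
```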

Histogram Equalization (figure): original image vs. after equalization.

Smoothing. How to remove noise? Fact: images are noisy. Noise is anything in the image that we are not interested in; it can be described as rapid variation of high amplitude, or as regions where high-order derivatives of f have large values. Smoothing is often used to reduce noise within an image.

Smoothing (figure): noisy image vs. after filtering.

Fourier Transform. The Fourier transform is an important image processing tool used to decompose an image into its sine and cosine components. The output of the transformation represents the image in the Fourier (frequency) domain, while the input image is its spatial-domain equivalent. In the Fourier-domain image, each point represents a particular frequency contained in the spatial-domain image.

Spatial Domain vs. Frequency Domain. 1. Spatial domain (image enhancement): manipulating or changing an image representing an object in space to enhance the image for a given application. Techniques are based on direct manipulation of pixels in an image; used for filtering basics, smoothing filters, sharpening filters, unsharp masking, and the Laplacian. 2. Frequency domain: techniques are based on modifying the spectral transform of an image. Transform the image to its frequency representation, perform the processing there, then compute the inverse transform back to the spatial domain.

Frequency Filtering. 1. Compute the Fourier transform F(ωx, ωy) of f(x, y). 2. Multiply F by the transfer function Φ to obtain a new function G, e.g., so that high-frequency components are removed or attenuated. 3. Compute the inverse Fourier transform of G to get the filtered version of f.
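
A sketch of these three steps in NumPy (the ideal low-pass mask and the cutoff radius are illustrative choices, not prescribed by the slides):

```python
import numpy as np

def lowpass_filter(f, cutoff):
    """Ideal low-pass filtering of a 2-D image f in the frequency domain."""
    F = np.fft.fftshift(np.fft.fft2(f))                    # 1. Fourier transform, DC term centered
    ny, nx = f.shape
    wy, wx = np.meshgrid(np.arange(ny) - ny // 2,
                         np.arange(nx) - nx // 2, indexing="ij")
    transfer = np.hypot(wx, wy) <= cutoff                  # 2. transfer function: keep only low frequencies
    G = F * transfer
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))      # 3. inverse transform back to the spatial domain

noisy = np.random.rand(128, 128)
smoothed = lowpass_filter(noisy, cutoff=10)                # only frequencies within radius 10 survive
```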

Frequency Filtering. The filter transfer function Φ can be classified into three types: 1. Low-pass filter: passes signals with a frequency lower than a certain cutoff frequency and attenuates frequencies higher than the cutoff. 2. High-pass filter: passes frequencies higher than the cutoff and attenuates frequencies lower than it. 3. Band-pass filter: passes frequencies within a certain range and rejects (attenuates) frequencies outside that range. To remove noise, a low-pass filter is used.

Gaussian Filter. In image processing, a Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image with a Gaussian function. It is a widely used effect in graphics software, typically to reduce image noise and detail. Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms to enhance image structures at different scales. The Gaussian outputs a weighted average of each pixel's neighborhood, with the weights biased toward the value of the central pixels. This is in contrast to the mean filter's uniformly weighted average; because of this, a Gaussian provides gentler smoothing and preserves edges better than a similarly sized mean filter. To remove noise, a low-pass filter is used.
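
A minimal spatial-domain sketch of Gaussian smoothing (kernel radius and sigma are illustrative; in practice a library routine such as a dedicated Gaussian filter would normally be used):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """2-D Gaussian weights, normalized to sum to 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(image, sigma=1.5, radius=3):
    """Replace each pixel by the Gaussian-weighted average of its neighborhood."""
    kernel = gaussian_kernel(sigma, radius)
    padded = np.pad(image.astype(np.float64), radius, mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    h, w = image.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            weight = kernel[dy + radius, dx + radius]
            out += weight * padded[radius + dy : radius + dy + h,
                                   radius + dx : radius + dx + w]
    return out

img = np.random.rand(64, 64)
blurred = gaussian_smooth(img)      # gentler than a mean filter of the same size
```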

Edge Detection. Edge detection is an image processing technique for finding the boundaries of objects within images; it works by detecting discontinuities in brightness. Edge detection is used for image segmentation and data extraction in areas such as image processing, computer vision, and machine vision. Edges are curves that separate image regions of different luminance. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges.

Origin of Edges. Edges are caused by a variety of factors: depth discontinuity, surface color discontinuity, illumination discontinuity, and surface normal discontinuity.

Edge Detection (figure): original image vs. detected edges.

First Order Derivative Edge Detection. Generally, first-order derivative operators are very sensitive to noise and produce thicker edges. a.1) Roberts filter: diagonal edge gradients, susceptible to fluctuations; gives no information about edge orientation and works best with binary images. a.2) Prewitt filter: a discrete differentiation operator that functions similarly to the Sobel operator by computing the gradient of the image intensity function; it uses the maximum directional gradient. Compared to Sobel, the Prewitt masks are simpler to implement but more sensitive to noise. a.3) Sobel filter: detects edges where the gradient magnitude is high; the Sobel edge detector is more sensitive to diagonal edges than to horizontal and vertical edges.
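
A sketch of Sobel edge detection by thresholding the gradient magnitude (the threshold fraction is an assumed parameter; SciPy's convolve is assumed available):

```python
import numpy as np
from scipy.ndimage import convolve   # SciPy assumed available

def sobel_edges(image, threshold=0.25):
    """Binary edge map where the Sobel gradient magnitude is high."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)   # horizontal derivative mask
    ky = kx.T                                       # vertical derivative mask
    gx = convolve(image.astype(np.float64), kx)
    gy = convolve(image.astype(np.float64), ky)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold * magnitude.max()

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0     # bright square on a dark background
edges = sobel_edges(img)                            # True along the square's boundary
```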

Second Order Derivative Edge Detection. If there is a significant spatial change in the second derivative, an edge is detected; this is good at producing thinner edges. Second-order derivative operators are more sophisticated methods towards automated edge detection, but are still very noise-sensitive. As differentiation amplifies noise, smoothing is suggested prior to applying the Laplacian. Typical examples of second-order derivative edge detection are the Difference of Gaussians (DoG) and the Laplacian of Gaussian (LoG).
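
A sketch of the Difference of Gaussians approach with zero-crossing detection (the scale ratio k = 1.6 is a common choice assumed here; SciPy's gaussian_filter is assumed available):

```python
import numpy as np
from scipy.ndimage import gaussian_filter   # SciPy assumed available

def dog_response(image, sigma=1.0, k=1.6):
    """Difference of Gaussians: smooth at two scales and subtract.

    Approximates the Laplacian of Gaussian, a second-order derivative operator.
    """
    img = image.astype(np.float64)
    return gaussian_filter(img, sigma) - gaussian_filter(img, k * sigma)

def zero_crossings(response):
    """Edges are where the second-derivative response changes sign."""
    sign = np.sign(response)
    zc = np.zeros(response.shape, dtype=bool)
    zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]   # horizontal sign flips
    zc[:-1, :] |= sign[:-1, :] != sign[1:, :]   # vertical sign flips
    return zc

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
edges = zero_crossings(dog_response(img))       # thin contour around the square
```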

(Continued) Image Visualization Chap. 9 November 19, 2009

Shape Representation and Analysis (figure): the shape analysis pipeline.

Shape Representation and Analysis. The goal is to filter high-volume, low-level datasets into low-volume datasets containing high amounts of information. A shape is defined as a compact subset of a given image, characterized by a boundary and an interior. Shape properties include geometry (form, aspect ratio, roundness, or squareness), topology (type, kind, number), and texture (luminance, shading).

Segmentation. Segment, or classify, the image pixels into those belonging to the shape of interest, called foreground pixels, and the remainder, called background pixels. Segmentation results in a binary image. Segmentation is related to the operation of selection, i.e., thresholding.
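
A minimal thresholding sketch (the [low, high] range is an assumed selection criterion):

```python
import numpy as np

def threshold_segment(image, low, high):
    """Classify pixels: True (foreground) inside [low, high], False (background) elsewhere."""
    return (image >= low) & (image <= high)

# Example: select a mid-intensity band, e.g. "soft tissue" in a CT-like image.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
foreground = threshold_segment(img, low=80, high=160)   # binary segmentation result
```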

Segmentation examples (figure): find soft tissue vs. find hard tissue.

Connected Components. Connected components capture non-local properties. Algorithm: start from a given foreground pixel and find all foreground pixels that are directly or indirectly neighbored to it.
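
A sketch of this flood-fill style labeling with 4-connectivity (function and variable names are illustrative; a library routine such as scipy.ndimage.label does the same job):

```python
import numpy as np
from collections import deque

def connected_components(binary):
    """Label 4-connected foreground components with a breadth-first flood fill."""
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    for y, x in zip(*np.nonzero(binary)):
        if labels[y, x]:
            continue                              # pixel already belongs to a component
        count += 1
        labels[y, x] = count
        queue = deque([(y, x)])
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count        # directly or indirectly neighbored
                    queue.append((ny, nx))
    return labels, count

mask = np.zeros((16, 16), dtype=bool)
mask[2:6, 2:6] = True
mask[10:14, 9:15] = True
labels, n = connected_components(mask)            # n == 2 separate components
```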

Morphological Operations. Used to close holes and remove islands in segmented images. (Figure: a: original image, b: segmentation, c: close holes, d: remove islands.)

Introduction. Morphology: a branch of image processing that deals with the form and structure of an object. Morphological image processing is used to extract image components for the representation and description of region shape, such as boundaries, skeletons, and the shape of the image.

Structuring Element (Kernel). Structuring elements can have varying sizes. Usually, element values are 0, 1, and none; empty spots in the structuring element are "don't cares". Structuring elements have an origin. For thinning, other values are possible. (Figure: examples of structuring elements, e.g., box and disc.)

Dilation & Erosion Basic operations Are dual to each other: –Erosion –Erosion shrinks foreground, enlarges Background –Dilation enlarges foreground, shrinks background 5-Mar-1636

Erosion. Erosion is the set of all points in the image where the structuring element "fits into" the foreground. Consider each foreground pixel in the input image: if the structuring element fits in, write a "1" at the origin of the structuring element. This is a simple application of pattern matching. Input: a binary image (or gray values) and a structuring element containing only 1s.

Erosion Operations (cont.) (figure): structuring element (B), original image (A), intersected pixels, and center pixel.

Erosion Operations (cont.) (figure): result of erosion; the boundary of the "center pixels" where B is inside A.

Dilation. Dilation is the set of all points in the image where the structuring element "touches" the foreground. Consider each pixel in the input image: if the structuring element touches the foreground, write a "1" at the origin of the structuring element. Input: a binary image and a structuring element containing only 1s.
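
A didactic sketch of both definitions ("fits into" for erosion, "touches" for dilation) on binary images with a symmetric 3x3 box element, so reflection of the element can be ignored; the function names and the helper _scan are illustrative:

```python
import numpy as np

def _scan(binary, se, require_all):
    """Slide the structuring element over every pixel and test the covered window."""
    ry, rx = se.shape[0] // 2, se.shape[1] // 2              # origin at the element center
    padded = np.pad(binary, ((ry, ry), (rx, rx)), constant_values=False)
    out = np.zeros(binary.shape, dtype=bool)
    for y in range(binary.shape[0]):
        for x in range(binary.shape[1]):
            window = padded[y:y + se.shape[0], x:x + se.shape[1]]
            covered = window[se]
            out[y, x] = covered.all() if require_all else covered.any()
    return out

def erode(binary, se):
    """Keep a pixel only where the structuring element fits entirely into the foreground."""
    return _scan(binary, se, require_all=True)

def dilate(binary, se):
    """Mark a pixel wherever the structuring element touches the foreground."""
    return _scan(binary, se, require_all=False)

se = np.ones((3, 3), dtype=bool)                              # symmetric 3x3 box element
img = np.zeros((10, 10), dtype=bool); img[3:7, 3:7] = True
print(erode(img, se).sum(), dilate(img, se).sum())            # foreground shrinks vs. grows
```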

Dilation Operations (cont.) (figure): structuring element (B), its reflection, original image (A), intersected pixels, and center pixel.

Dilation Operations (cont.) (figure): result of dilation; the boundary of the "center pixels" where the reflected B intersects A.

Opening & Closing Important operations Derived from the fundamental operations –Dilatation –Erosion Usually applied to binary images, but gray value images are also possible Opening and closing are dual operations 5-Mar-1643

Morphological Operations. Morphological closing: a dilation followed by an erosion. Morphological opening: an erosion followed by a dilation.
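
A sketch of opening and closing as compositions of erosion and dilation, here using SciPy's binary morphology routines (assumed available), with the same structuring element for both steps:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation   # SciPy assumed available

def opening(binary, se):
    """Opening: erosion followed by dilation (removes small islands and spurs)."""
    return binary_dilation(binary_erosion(binary, structure=se), structure=se)

def closing(binary, se):
    """Closing: dilation followed by erosion (fills small holes and gaps)."""
    return binary_erosion(binary_dilation(binary, structure=se), structure=se)

se = np.ones((3, 3), dtype=bool)
img = np.zeros((12, 12), dtype=bool)
img[3:9, 3:9] = True
img[5, 5] = False                    # small hole: removed by closing
img[0, 0] = True                     # small island: removed by opening
print(opening(img, se)[0, 0], closing(img, se)[5, 5])   # False True
```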

Opening. Similar to erosion: spot and noise removal, but less destructive. Opening is an erosion followed by a dilation, using the same structuring element for both operations. Input: a binary image and a structuring element containing only 1s.

Opening: an erosion followed by a dilation. Take the structuring element (SE) and slide it around inside each foreground region. All pixels which can be covered by the SE, with the SE entirely within the foreground region, are preserved. All foreground pixels which cannot be reached by the structuring element without it lapping over the edge of the foreground object are eroded away.

Opening (figure): structuring element is a 3x3 square.

Opening Example (figure): opening with an 11-pixel-diameter disc.

Closing. Similar to dilation: removal of holes; tends to enlarge regions and shrink the background. Closing is defined as a dilation followed by an erosion, using the same structuring element for both operations. Input: a binary image and a structuring element containing only 1s.

Closing: a dilation followed by an erosion. Take the structuring element (SE) and slide it around outside each foreground region. All background pixels which can be covered by the SE, with the SE entirely within the background region, are preserved. All background pixels which cannot be reached by the structuring element without it lapping over the edge of the foreground object are turned into foreground.

Closing (figure): structuring element is a 3x3 square.

Closing Example (figure): closing with a 22-pixel disc closes small holes in the foreground.

Closing Example (figure): 1. threshold; 2. closing with a disc of size 20. Panels: thresholded vs. closed.

Opening. Erosion and dilation are not inverse transforms: an erosion followed by a dilation leads to an interesting morphological operation.

Closing. Closing is a dilation followed by an erosion.

Distance Transform

Distance Transform. The distance transform DT of a binary image I is a scalar field that contains, at every pixel of I, the minimal distance to the boundary ∂Ω of the foreground of I.
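
A short sketch using SciPy's Euclidean distance transform (assumed available) to compute DT for a binary foreground and extract a contour line C(δ):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt   # SciPy assumed available

# Binary image I: the foreground is a filled square.
I = np.zeros((64, 64), dtype=bool)
I[16:48, 16:48] = True

# DT: at every foreground pixel, the Euclidean distance to the nearest
# background pixel, i.e. to the foreground boundary.
DT = distance_transform_edt(I)
print(DT.max())                      # deepest interior point of the square

# A contour line C(delta): pixels at (approximately) distance delta from the boundary.
delta = 5
contour = np.isclose(DT, delta, atol=0.5)
```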

Distance Transform. The distance transform can be used for morphological operations. Consider a contour line C(δ) of DT. (Figure: contours for δ = 0, δ > 0, and δ < 0.)

Distance Transform. The contour lines of DT are also called level sets. (Figure: shape, level sets, elevation plot.)

Feature Transform. The feature transform finds the closest boundary points, the so-called feature points. (Figure: given a, the feature point is b; given p, the feature points are q1 and q2.)

Skeletonization

Skeletonization: the Goals. Geometric analysis: aspect ratio, eccentricity, curvature, and elongation. Topological analysis: genus. Retrieval: find the shape matching a source shape. Classification: partition the shapes into classes. Matching: find the similarity between two shapes.

Skeletonization. Skeletons are the medial axes. Equivalently, the skeleton S(Ω) is the set of points that are centers of maximally inscribed discs in Ω; or, skeletons are the set of points situated at equal distance from at least two boundary feature points of the given shape.

Skeleton Computation. Feature transform method: select those points whose feature transform contains two or more boundary points. This works well on continuous data but fails on discrete data.

Skeleton Computation. Using distance-field singularities: skeleton points are the local maxima of the distance transform.
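
A rough sketch of this idea (local maxima of the distance transform; the 3x3 maximum filter and the lack of pruning are simplifications, so the result only approximates a true skeleton):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter   # SciPy assumed available

def skeleton_from_dt(binary):
    """Approximate skeleton: foreground pixels that are local maxima of the DT."""
    dt = distance_transform_edt(binary)
    local_max = dt == maximum_filter(dt, size=3)   # compare each pixel to its 3x3 neighborhood
    return local_max & binary

img = np.zeros((40, 40), dtype=bool)
img[10:30, 5:35] = True                            # a wide rectangle
skeleton = skeleton_from_dt(img)                   # roughly the rectangle's medial axis
```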

End of Chap. 9. Note: all sections are covered except skeletons in 3D.