Topic 10 - Image Analysis DIGITAL IMAGE PROCESSING Course 3624 Department of Physics and Astronomy Professor Bob Warwick.



10. Image Analysis
Mapping, filtering and restoration techniques are all aimed at producing an "enhanced" image as the input to IMAGE ANALYSIS. At one extreme, image analysis can be carried out by visual inspection, perhaps supported by a number of interactive software tools. At the other extreme, it may involve automated processing utilising very complex computer algorithms. Here we consider a few relevant techniques under the headings:
10.1 Simple interactive techniques
10.2 Image segmentation
10.3 Feature description & recognition
10.4 Pattern recognition via cross-correlation

10.1 Simple Interactive Tools
(i) Description of a Point (selected via a mouse/cursor): Position = (x, y) (and via the calibration → scene coordinates θ, Φ); Gray level = f_xy (and via the calibration → flux density in W/m²).
(ii) Description of an Extended Object: Position = (x̄, ȳ), the centroid within the defined object region, weighted by gray level: x̄ = Σ x f_xy / Σ f_xy, ȳ = Σ y f_xy / Σ f_xy. Average gray level = f̄ = (1/N) Σ f_xy over the N pixels of the object region.
(iii) Subtraction of a Background Signal: f_net = f̄_object − f̄_background, where f̄_background is the average gray level measured in a nearby source-free region.
(iv) Distribution along a 1-d Cut through the Image.
(v) Distribution Radially and Azimuthally around a Point.
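The point and extended-object measurements above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming the image is a 2-d array and the object/background regions are boolean masks; the function names are chosen for the example, not part of the course material:

```python
import numpy as np

def centroid(image, mask):
    """Gray-level-weighted centroid (x̄, ȳ) of the pixels selected by a boolean mask."""
    ys, xs = np.nonzero(mask)
    weights = image[ys, xs].astype(float)
    total = weights.sum()
    xbar = (xs * weights).sum() / total
    ybar = (ys * weights).sum() / total
    return xbar, ybar

def net_level(image, object_mask, background_mask):
    """Average gray level of the object minus the average background level."""
    return image[object_mask].mean() - image[background_mask].mean()

# A 5x5 test image: a single bright 'source' pixel on a flat background of 10
img = np.full((5, 5), 10.0)
img[2, 3] = 50.0
obj = np.zeros_like(img, dtype=bool)
obj[2, 3] = True
bkg = ~obj
print(centroid(img, obj))        # (3.0, 2.0): the bright pixel's (x, y)
print(net_level(img, obj, bkg))  # 50 - 10 = 40.0
```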

10.2 Image Segmentation
This is the process of sub-dividing an image into its constituent parts (i.e. into the features or objects which comprise the image). Segmentation is often the first step in an automated "feature-identification" procedure. Segmentation often relies on either:
(i) The identification of the gray-level range which "characterises" the features/objects of interest (against the confusion of the surrounding scene).
(ii) The identification of edges/discontinuities which suitably delineate the features/objects (against the confusion of the surrounding scene).
Method (i) often reduces simply to defining a suitable threshold in the gray-level distribution. Method (ii) often involves the use of an edge-detection filter.

Segmentation by applying a gray-level threshold - I. Example 1: the detection of bright point sources in an image. (Figure: image profile with the source-detection threshold marked.)

Segmentation by applying a gray-level threshold - II. Example 2: the detection of a set of bright point sources against a varying background in an image.

Segmentation by applying a gray-level threshold - III. Example 3: how well might the rectangles be "identified" by a suitable choice of gray-level threshold? (Figure: examples ranging from EASY! to HARD!)
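Threshold segmentation of the "easy" kind reduces to a single comparison per pixel. A minimal NumPy sketch (the image and threshold value are made up for the example):

```python
import numpy as np

def threshold_segment(image, level):
    """Binary segmentation: pixels above the gray-level threshold are 'object'."""
    return image > level

# Easy case: a bright rectangle well separated from a flat background
img = np.full((6, 8), 20.0)
img[2:4, 3:6] = 200.0          # 2x3 bright rectangle
mask = threshold_segment(img, 100.0)
print(mask.sum())              # 6: the rectangle's pixels
```

For the "hard" cases (e.g. a varying background, as in Example 2), the fixed level would first be replaced by a locally estimated background before thresholding.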

Segmentation via Edge Detection - I. Discontinuities/edges in images can be detected by the use of either "gradient" or "Laplacian" spatial filters. Example 1: using the Sobel filters.
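A minimal sketch of gradient-based edge detection with the 3x3 Sobel masks, assuming NumPy. The direct sliding-window filter below is written for clarity rather than speed, and the mask sign convention is one common choice:

```python
import numpy as np

# Sobel masks for the horizontal and vertical gradient components
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def correlate2d(image, kernel):
    """Direct 'valid' sliding-window filtering with a 3x3 kernel."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (image[i:i+3, j:j+3] * kernel).sum()
    return out

def sobel_edges(image):
    gx = correlate2d(image, SOBEL_X)
    gy = correlate2d(image, SOBEL_Y)
    return np.hypot(gx, gy)   # gradient magnitude

# A vertical step edge: the gradient magnitude peaks along the step
img = np.zeros((5, 6))
img[:, 3:] = 100.0
mag = sobel_edges(img)
print(mag)  # non-zero only in the two columns straddling the step
```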

Segmentation via Edge Detection - II. Example 2: using a Laplacian mask and a "zero-crossing" algorithm.

A Full Segmentation Process
1. Compute a gradient image or "zero-crossing" image.
2. Apply a threshold to remove clutter and noise.
3. Apply "edge growing" and "edge thinning" algorithms.
4. Apply "edge linking" and "stray-filament removal" algorithms.

Image Segmentation - Example

10.3 Feature Description and Recognition
Once an image has been segmented into a set of individual objects or features, the next step is to characterize the image in terms of these components. The possibilities range from:
Object Identification – how many objects are there of a given type? (relatively easy)
Scene Analysis – what is the inter-relation of the objects? (hard)
A common requirement is to identify the individual objects against a specified (and restricted) set of possibilities – i.e., a RECOGNITION problem. How should the objects be represented? The two possibilities are:
In terms of their (external) boundary characteristics, i.e. the morphology (shape) of the object.
In terms of the (internal) object pixel characteristics, i.e. the gray level, colour or texture of the object.
A quantitative representation of the object characteristics is then provided by its descriptor values. Once measured, the object descriptor values are compared to the possible values for known types of object (held in a look-up table) so as to search for a match.

Some Object Descriptors
1. Compactness Parameter: C = (perimeter)² / (4π × area), normalized so that C = 1 for a circle and C > 1 for less compact shapes.
2. pq'th Central Moment: μ_pq = Σ (x − x̄)^p (y − ȳ)^q f_xy, computed over the object region.
3. Texture: statistical measures of the gray-level variation within the object.
Ideally descriptors are insensitive to variations in the object size, translation and rotation.
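The first two descriptors can be sketched directly from their definitions. A minimal NumPy illustration (function names and test shapes are chosen for the example):

```python
import numpy as np

def compactness(area, perimeter):
    """Compactness parameter P^2 / (4*pi*A): 1 for a circle, larger otherwise."""
    return perimeter**2 / (4.0 * np.pi * area)

def central_moment(image, mask, p, q):
    """pq'th central moment of the object region, weighted by gray level."""
    ys, xs = np.nonzero(mask)
    w = image[ys, xs].astype(float)
    xbar = (xs * w).sum() / w.sum()
    ybar = (ys * w).sum() / w.sum()
    return ((xs - xbar)**p * (ys - ybar)**q * w).sum()

# Compactness: a circle of radius 1 versus a unit square
print(compactness(np.pi, 2 * np.pi))   # 1.0 for the circle
print(compactness(1.0, 4.0))           # 4/pi ~ 1.27 for the square

# mu_20 of a uniform 2x2 block: four pixels at (x - 0.5)^2 = 0.25 each
im = np.ones((2, 2))
msk = np.ones((2, 2), bool)
print(central_moment(im, msk, 2, 0))   # 1.0
```

Note that compactness is already size-, translation- and rotation-invariant, as the slide's closing remark requires; the raw central moments are translation-invariant but would still need normalization against size.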

Shape Recognition via Fourier Descriptors
1. Consider the set of (x, y) values of the boundary pixels as a set of complex numbers x + iy.
2. Compute the DFT of this set of complex numbers.
3. Extract a subset of the Fourier values and use these as descriptors.
4. As a check, apply the inverse DFT to the restricted set (setting the non-descriptors to zero).
5. Compare the limited set of descriptors with look-up tables for the "target" objects.
(Figure: aircraft silhouettes reconstructed from 32 DFT descriptor values.)
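The steps above can be sketched with NumPy's FFT routines. This is an illustrative implementation under one particular choice of which coefficients to keep (the lowest-frequency terms); the course slides do not specify the selection rule:

```python
import numpy as np

def fourier_descriptors(boundary_xy, n_keep):
    """DFT of the boundary treated as complex numbers x + iy; keep n_keep terms."""
    z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]
    F = np.fft.fft(z)
    kept = np.zeros_like(F)
    # keep the n_keep lowest-frequency coefficients (positive and negative)
    order = np.argsort(np.abs(np.fft.fftfreq(len(F))))
    kept[order[:n_keep]] = F[order[:n_keep]]
    return F, kept

def reconstruct(coeffs):
    """Inverse DFT of a (possibly truncated) descriptor set -> boundary points."""
    z = np.fft.ifft(coeffs)
    return np.column_stack([z.real, z.imag])

# A square boundary sampled at 8 points, traversed in order
pts = np.array([[0, 0], [1, 0], [2, 0], [2, 1],
                [2, 2], [1, 2], [0, 2], [0, 1]], float)
F, kept = fourier_descriptors(pts, 4)
approx = reconstruct(kept)   # smoothed approximation of the square
```

Keeping all coefficients reconstructs the boundary exactly; truncation (as in the aircraft-silhouette figure) gives a progressively smoothed shape.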

10.4 Pattern recognition via cross-correlation
Convolution: g(x, y) = Σ_s Σ_t f(s, t) h(x − s, y − t)
Cross-correlation: c(x, y) = Σ_s Σ_t f(s, t) h(x + s, y + t)

Pattern Recognition via Cross-Correlation (cont.) Cross-correlation techniques are very powerful when searching for objects/patterns of fixed size and orientation.

Pattern Recognition via Cross-Correlation (cont.) In 2-d the process involves cross-correlating the image with a sub-image (i.e. the template).
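A sketch of 2-d template matching by cross-correlation. The normalized (zero-mean) form used here is one common variant, chosen so that the peak value is 1 at an exact match; the course slides do not specify a normalization:

```python
import numpy as np

def cross_correlate(image, template):
    """Normalized cross-correlation of a template at every valid image position."""
    th, tw = template.shape
    t = template - template.mean()
    h, w = image.shape
    out = np.full((h - th + 1, w - tw + 1), -1.0)
    for i in range(h - th + 1):
        for j in range(w - tw + 1):
            win = image[i:i+th, j:j+tw] - image[i:i+th, j:j+tw].mean()
            denom = np.sqrt((win**2).sum() * (t**2).sum())
            if denom > 0:
                out[i, j] = (win * t).sum() / denom
    return out

# Hide a small pattern in a larger image and locate it at the correlation peak
template = np.array([[0., 9.], [9., 0.]])
img = np.zeros((6, 6))
img[3:5, 2:4] = template
c = cross_correlate(img, template)
peak = np.unravel_index(np.argmax(c), c.shape)
print(peak)  # (3, 2): the top-left corner of the embedded template
```

As the slide notes, this works well for a fixed size and orientation; a rotated or rescaled object would need a bank of transformed templates.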