Embedded Vision Feature Extraction

Embedded Vision Feature Extraction: Statistical Feature Extraction (Texture) and Geometry Feature Extraction (Fingerprint)

Texture Feature Extraction Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image.

Natural Textures: example images of grass and leaves.

Texture as Statistical Feature Segmenting out texels is difficult or impossible in real images. Numeric quantities or statistics that describe a texture can be computed from the gray tones (or colors) alone. This approach is less intuitive, but is computationally efficient. It can be used for both classification and segmentation.

Simple Statistical Texture Measures 1. Edge Density and Direction: Use an edge detector as the first step in texture analysis. The number of edge pixels in a fixed-size region tells us how busy that region is. The directions of the edges also help characterize the texture.

Two Edge-based Texture Measures: (1) edgeness per unit area and (2) edge magnitude and direction histograms.
Fedgeness = |{ p | gradient_magnitude(p) ≥ threshold }| / N, where N is the size of the unit area.
Fmagdir = ( Hmagnitude, Hdirection ), where these are the normalized histograms of gradient magnitudes and gradient directions, respectively.
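
These two measures translate directly into code. Below is a minimal sketch using a Sobel gradient from SciPy; the function names, the 50.0 threshold, and the bin counts are illustrative assumptions, not values from the slides.

```python
import numpy as np
from scipy import ndimage

def edgeness_per_unit_area(region, threshold=50.0):
    """F_edgeness = |{ p : gradient_magnitude(p) >= threshold }| / N
    for a fixed-size region of N pixels (threshold value is illustrative)."""
    img = region.astype(float)
    gx = ndimage.sobel(img, axis=1)          # horizontal gradient
    gy = ndimage.sobel(img, axis=0)          # vertical gradient
    mag = np.hypot(gx, gy)                   # gradient magnitude per pixel
    return np.count_nonzero(mag >= threshold) / mag.size

def magnitude_direction_histograms(region, mag_bins=8, dir_bins=8):
    """F_magdir = (H_magnitude, H_direction): normalized histograms of
    gradient magnitudes and gradient directions."""
    img = region.astype(float)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    h_mag, _ = np.histogram(np.hypot(gx, gy), bins=mag_bins, density=True)
    h_dir, _ = np.histogram(np.arctan2(gy, gx), bins=dir_bins,
                            range=(-np.pi, np.pi), density=True)
    return h_mag, h_dir
```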

Example (figure): original image, Frei-Chen edge image, and thresholded edge image.

Local Binary Pattern For each pixel p, create an 8-bit number b1 b2 b3 b4 b5 b6 b7 b8, where bi = 0 if neighbor i has a value less than or equal to p’s value and 1 otherwise. Represent the texture in the image (or a region) by the histogram of these numbers. Example: for the 3x3 neighborhood 100 101 103 / 40 50 80 / 50 60 90 (center value 50), visiting the neighbors clockwise from the top-left gives the code 1 1 1 1 1 1 0 0.
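
A minimal sketch of this operator in NumPy, assuming a grayscale image array; the clockwise neighbor ordering mirrors the example above, and the function name is an assumption.

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbor LBP: bit i is 1 if neighbor i is greater than the
    center pixel, 0 otherwise; the texture descriptor is the 256-bin
    histogram of the resulting codes."""
    g = gray.astype(int)
    center = g[1:-1, 1:-1]
    # neighbors visited clockwise from the top-left corner (bits b1 .. b8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for dr, dc in offsets:
        neighbor = g[1 + dr:g.shape[0] - 1 + dr, 1 + dc:g.shape[1] - 1 + dc]
        codes = (codes << 1) | (neighbor > center)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
    return hist
```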

Co-occurrence Matrix Features A co-occurrence matrix is a 2D array C in which both the rows and columns represent a set of possible image values. C(i, j) indicates how many times value i co-occurs with value j in a particular spatial relationship d. The spatial relationship is specified by a vector d = (dr, dc).

Co-occurrence Matrix example (figure): a gray-tone image and its co-occurrence matrix Cd for d = (3, 1). From Cd we can compute Nd, the normalized co-occurrence matrix, where each value is divided by the sum of all the values.
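
A direct sketch of building Cd and Nd from the definition, assuming a small integer-valued gray-tone image; the loop form stays close to the definition rather than being optimized.

```python
import numpy as np

def cooccurrence(gray, d=(3, 1), levels=None):
    """Return (C_d, N_d).  C_d[i, j] counts how many times value i at pixel
    (r, c) co-occurs with value j at pixel (r + dr, c + dc); N_d is C_d
    divided by the sum of all of its entries."""
    g = np.asarray(gray, dtype=int)
    if levels is None:
        levels = int(g.max()) + 1
    dr, dc = d
    rows, cols = g.shape
    C = np.zeros((levels, levels), dtype=int)
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                C[g[r, c], g[r2, c2]] += 1
    return C, C / max(C.sum(), 1)
```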

Co-occurrence Features
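
The equations on this slide are not in the transcript; the features most commonly listed here (energy, entropy, contrast, homogeneity, correlation) can be computed from the normalized matrix Nd as in the sketch below, under that assumption.

```python
import numpy as np

def cooccurrence_features(N):
    """Common texture features computed from a normalized co-occurrence
    matrix N_d (entries sum to 1)."""
    i, j = np.indices(N.shape)
    energy = np.sum(N ** 2)
    entropy = -np.sum(N[N > 0] * np.log2(N[N > 0]))
    contrast = np.sum((i - j) ** 2 * N)
    homogeneity = np.sum(N / (1.0 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * N), np.sum(j * N)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * N))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * N))
    correlation = np.sum((i - mu_i) * (j - mu_j) * N) / (sd_i * sd_j)
    return dict(energy=energy, entropy=entropy, contrast=contrast,
                homogeneity=homogeneity, correlation=correlation)
```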

Laws’ Texture Energy Features The Laws algorithm: filter the input image using Laws’ texture filters, compute texture energy by summing the absolute values of the filtering results in local neighborhoods around each pixel, and combine features to achieve rotational invariance.

Laws’ texture masks (1)

Laws’ texture masks (2) Creation of 2D masks: e.g. the 1D masks E5 and L5 combine to give the 2D mask E5L5 (see the sketch below).
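
The mask values themselves are only shown as images on these slides; the sketch below assumes the standard 1D Laws masks (L5, E5, S5, R5) and builds the 16 2D masks as outer products.

```python
import numpy as np

# The four standard 1-D Laws masks (assumed; the slide figure is not in the transcript)
ONE_D = {
    'L5': np.array([ 1,  4, 6,  4,  1], dtype=float),   # level (local average)
    'E5': np.array([-1, -2, 0,  2,  1], dtype=float),   # edge
    'S5': np.array([-1,  0, 2,  0, -1], dtype=float),   # spot
    'R5': np.array([ 1, -4, 6, -4,  1], dtype=float),   # ripple
}

# Each 2-D 5x5 mask is the outer product of two 1-D masks, e.g. E5L5 = outer(E5, L5).
MASKS_2D = {a + b: np.outer(va, vb)
            for a, va in ONE_D.items()
            for b, vb in ONE_D.items()}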

9D feature vector for each pixel (see the sketch after this list):
1. Subtract the mean neighborhood intensity from the (center) pixel.
2. Apply the 16 5x5 masks to get 16 filtered images Fk, k = 1 to 16.
3. Produce 16 texture energy maps using 15x15 windows: Ek[r,c] = Σ |Fk[i,j]|, summed over the window centered at [r,c].
4. Replace each distinct symmetric pair with its average map, giving 9 features (9 filtered images).
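
A sketch of these four steps, reusing the MASKS_2D dictionary from the previous sketch; dropping L5L5 and averaging symmetric pairs (e.g. E5L5 with L5E5) to reach 9 maps is an assumption consistent with the usual presentation of Laws’ method.

```python
import numpy as np
from scipy import ndimage

def laws_9d_features(gray, masks=MASKS_2D, window=15):
    """Per-pixel 9-D Laws texture features, returned as an H x W x 9 array."""
    img = gray.astype(float)
    # 1. subtract the mean neighborhood intensity
    img = img - ndimage.uniform_filter(img, size=window)
    # 2.-3. filter with the 16 masks, then sum |response| over 15x15 windows
    energy = {name: window ** 2 * ndimage.uniform_filter(
                  np.abs(ndimage.convolve(img, m)), size=window)
              for name, m in masks.items()}
    # 4. average symmetric pairs and drop L5L5 -> 9 energy maps
    kept = ['L5E5', 'L5S5', 'L5R5', 'E5S5', 'E5R5', 'S5R5',
            'E5E5', 'S5S5', 'R5R5']
    maps = [(energy[n] + energy[n[2:] + n[:2]]) / 2.0 if n[:2] != n[2:]
            else energy[n] for n in kept]
    return np.stack(maps, axis=-1)
```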

Laws’ Filters

Example: Using Laws’ features to cluster texture images (water, tiger, fence, flag, grass, small flowers, big flowers).

Features from sample images

Autocorrelation function The autocorrelation function can detect repetitive patterns and also characterizes the fineness/coarseness of the texture. It compares the dot product (energy) of the non-shifted image with that of a shifted copy of the image.
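
A minimal sketch of this comparison for a single shift (dr, dc), normalized by the image energy; the function name and the normalization choice are assumptions.

```python
import numpy as np

def autocorrelation(gray, dr, dc):
    """Dot product of the image with a copy of itself shifted by (dr, dc),
    divided by the dot product of the image with itself (its energy)."""
    img = gray.astype(float)
    rows, cols = img.shape
    a = img[max(0, -dr):rows - max(0, dr), max(0, -dc):cols - max(0, dc)]
    b = img[max(0, dr):rows - max(0, -dr), max(0, dc):cols - max(0, -dc)]
    return float(np.sum(a * b) / np.sum(img * img))
```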

Interpreting autocorrelation:
- Coarse texture → the function drops off slowly.
- Fine texture → the function drops off rapidly.
- The drop-off can differ for the row and column directions (r and c).
- Regular textures → the function has peaks and valleys; peaks can repeat far away from [0, 0].
- Random textures → only a peak at [0, 0]; the breadth of the peak gives the size of the texture.

Fourier power spectrum: high-frequency power → fine texture; concentrated power → regularity; directionality in the spectrum → directional texture.
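
These properties can be read off the centered power spectrum; a minimal sketch:

```python
import numpy as np

def power_spectrum(gray):
    """Centered Fourier power spectrum |F(u, v)|^2.  Power far from the
    center means fine texture; isolated peaks mean regularity; power
    concentrated along one orientation means directional texture."""
    F = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    return np.abs(F) ** 2
```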

Feature Extraction Process. Input: an image. Output: pixel features (color features, texture features, position features). Algorithm: select an appropriate scale for each pixel and extract features for that pixel at the selected scale. (Figure: original image → feature extraction → pixel features: polarity, anisotropy, texture contrast.)

Geometry Feature Extraction (Fingerprint)

Image Enhancement - Gabor Filtering
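
The slides do not give the filter parameters; below is a sketch of one even-symmetric Gabor kernel tuned to a local ridge orientation and frequency, with illustrative (assumed) values.

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(theta, freq, sigma=4.0, size=21):
    """Even-symmetric Gabor kernel for ridge orientation `theta` (radians)
    and ridge frequency `freq` (cycles per pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates so x_t runs across the ridges
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + y_t ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * freq * x_t)

# Illustrative use on one image block with a locally estimated
# orientation and frequency (the values here are placeholders):
# enhanced = ndimage.convolve(block.astype(float),
#                             gabor_kernel(theta=0.5, freq=0.1))
```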

Image Processing (2.2): raw image → enhanced image → skeleton image.
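
The slides do not say which thinning algorithm produces the skeleton image; a common stand-in is morphological skeletonization of the binarized enhanced image, sketched here with scikit-image (an assumption, not the original pipeline).

```python
import numpy as np
from skimage.morphology import skeletonize

def to_skeleton(enhanced, threshold=0.0):
    """Binarize the Gabor-enhanced image and thin the ridges to
    one-pixel-wide lines (the skeleton image)."""
    binary = enhanced > threshold        # threshold value is illustrative
    return skeletonize(binary).astype(np.uint8)
```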

Fingerprint Feature: Minutiae (ending points and bifurcation points).

Minutia Extraction
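
The slides do not name the extraction method; the usual choice on a skeleton image is the crossing-number test (CN = 1 at a ridge ending, CN = 3 at a bifurcation), sketched below under that assumption.

```python
import numpy as np

def extract_minutiae(skeleton):
    """Crossing-number minutia detection on a binary skeleton image
    (1 = ridge pixel).  CN == 1 -> ridge ending, CN == 3 -> bifurcation."""
    sk = (skeleton > 0).astype(int)
    endings, bifurcations = [], []
    # neighbors listed in cyclic order around the center pixel
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    rows, cols = sk.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if sk[r, c] != 1:
                continue
            p = [sk[r + dr, c + dc] for dr, dc in nbrs]
            cn = sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations
```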