Lecture 19 Representation and description II


Lecture 19 Representation and description II
1. Regional descriptors
2. Principal components
3. Representation with Matlab
4. Feature selection

Regional descriptors
- Area
- Perimeter
- Compactness
- Topological descriptors
- Texture

Simple descriptors
- Area = the number of pixels in the region
- Perimeter = the length of its boundary
- Compactness = (perimeter)²/area
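A minimal sketch of these three descriptors in Python, assuming scikit-image is available (its regionprops perimeter is an estimate of boundary length; the function name is illustrative):

```python
import numpy as np
from skimage.measure import label, regionprops

def simple_descriptors(mask):
    """Return (area, perimeter, compactness) of the largest region in mask."""
    regions = regionprops(label(mask.astype(np.uint8)))
    r = max(regions, key=lambda p: p.area)       # largest connected region
    # compactness is dimensionless; it is minimal (4*pi) for a disc
    return r.area, r.perimeter, r.perimeter ** 2 / r.area

mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                        # a filled 32x32 square
print(simple_descriptors(mask))                  # compactness near the ideal 16
```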

Topological descriptors
Features that do not change under (rubber-sheet) deformation.
E = C - H
E = Euler number, C = number of connected regions, H = number of holes
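A quick numerical check of E = C - H, using scikit-image's regionprops, whose euler_number property implements exactly this per-region count:

```python
import numpy as np
from skimage.measure import label, regionprops

img = np.ones((8, 8), dtype=bool)
img[2:6, 2:6] = False                       # one region containing one hole
props = regionprops(label(img))
C = len(props)                              # connected regions: 1
E = sum(p.euler_number for p in props)      # Euler number: 1 - 1 = 0
print(E, C, C - E)                          # holes H = C - E = 1
```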

Example: straight-line segments (polygonal networks)
V - Q + F = C - H = E
V = number of vertices, Q = number of edges, F = number of faces
Here: 7 - 11 + 2 = 1 - 3 = -2

Texture description
Texture features measure smoothness, coarseness, and regularity. There are three approaches: statistical, structural, and spectral.
- The statistical approach yields characterizations of texture as smooth, coarse, grainy, etc.
- The structural approach deals with arrangements of image primitives, such as regularly spaced parallel lines.
- The spectral approach is based on properties of the Fourier spectrum; it detects global periodicity in an image by identifying high-energy, narrow peaks in the spectrum.

Statistical approaches
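The usual statistical measures are moments of the gray-level histogram. A NumPy sketch of the descriptor set used in Gonzalez & Woods (mean, standard deviation, smoothness R, third moment, uniformity, entropy); the (L-1)² normalizations follow that text:

```python
import numpy as np

def statistical_texture(gray, levels=256):
    """Histogram-moment texture descriptors of a gray-level image/region."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()                        # normalized histogram p(z_i)
    z = np.arange(levels, dtype=float)
    m = (z * p).sum()                            # mean intensity
    var = ((z - m) ** 2 * p).sum()               # variance
    R = 1 - 1 / (1 + var / (levels - 1) ** 2)    # smoothness (0 = constant)
    mu3 = ((z - m) ** 3 * p).sum() / (levels - 1) ** 2  # histogram skew
    U = (p ** 2).sum()                           # uniformity
    H = -(p[p > 0] * np.log2(p[p > 0])).sum()    # entropy in bits
    return m, np.sqrt(var), R, mu3, U, H
```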

Co-occurrence matrix and descriptors
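A compact illustration, assuming scikit-image's graycomatrix/graycoprops API (spelled greycomatrix/greycoprops in releases before 0.19):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(64, 64), dtype=np.uint8)   # 8 gray levels

# GLCM for pixel pairs at distance 1, angle 0 (horizontal neighbors)
P = graycomatrix(img, distances=[1], angles=[0], levels=8,
                 symmetric=True, normed=True)

for prop in ("contrast", "correlation", "energy", "homogeneity"):
    print(prop, graycoprops(P, prop)[0, 0])
```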

Co-occurrence at larger distances
Example: the correlation descriptor plotted as a function of offset distance; for a periodic texture the curve itself is periodic in the offset.
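A synthetic stand-in for that experiment, reusing the assumed graycomatrix/graycoprops API on a texture with a horizontal period of 8 pixels; the correlation peaks recur at multiples of the period:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

row = (np.sin(2 * np.pi * np.arange(64) / 8) * 3.5 + 3.5).astype(np.uint8)
img = np.tile(row, (64, 1))                    # periodic texture, levels 0..7

P = graycomatrix(img, distances=list(range(1, 33)), angles=[0],
                 levels=8, symmetric=True, normed=True)
corr = graycoprops(P, "correlation")[:, 0]     # correlation vs. offset distance
print(int(np.argmax(corr[4:])) + 5)            # first recurrence at d = 8
```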

Structural approaches
The texture pattern is generated by grammar rules. Examples (see the sketch below):
- S → aS, where a represents a circle
- S → aS, S → bA, A → cA, A → c, A → bS, S → a, where b represents a circle down and c a circle on the left
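A toy sketch of the idea, encoding the second rule set and generating a random derivation (the dictionary encoding is an illustrative choice, not from the lecture):

```python
import random

# Right-linear rules of the second grammar: S -> aS | bA | a,  A -> cA | c | bS
RULES = {"S": ["aS", "bA", "a"], "A": ["cA", "c", "bS"]}

def derive(max_steps=20, seed=0):
    rng, symbol, out = random.Random(seed), "S", []
    while symbol and max_steps > 0:
        production = rng.choice(RULES[symbol])
        out.append(production[0])       # emit the terminal (a, b, or c)
        symbol = production[1:]         # remaining nonterminal, or "" to stop
        max_steps -= 1
    return "".join(out)

print(derive())                         # a string of a/b/c circle placements
```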

Spectral approaches
Consider the Fourier transform F(u, v) of the region, or represent F(u, v) in polar coordinates as S(r, θ).
- Ray descriptor: S(θ) = Σ_r S(r, θ), the spectral energy along each direction θ
- Ring descriptor: S(r) = Σ_θ S(r, θ), the spectral energy within each frequency band r
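A NumPy sketch of both descriptors, binning the magnitude spectrum over rings and rays (the bin counts are arbitrary choices):

```python
import numpy as np

def spectral_descriptors(img, n_r=32, n_t=32):
    """Ring profile S(r) and ray profile S(theta) of the Fourier spectrum."""
    S = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    y, x = np.indices(S.shape)
    cy, cx = S.shape[0] // 2, S.shape[1] // 2
    r = np.hypot(y - cy, x - cx)
    t = np.mod(np.arctan2(y - cy, x - cx), np.pi)   # fold angles to [0, pi)
    r_edges = np.linspace(0, r.max() * (1 + 1e-9), n_r + 1)
    t_edges = np.linspace(0, np.pi, n_t + 1)
    S_r = np.array([S[(r >= r_edges[i]) & (r < r_edges[i + 1])].sum()
                    for i in range(n_r)])           # energy per ring
    S_t = np.array([S[(t >= t_edges[i]) & (t < t_edges[i + 1])].sum()
                    for i in range(n_t)])           # energy per ray
    return S_r, S_t
```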

Principal components
Suppose we are given n components of an image (e.g. n = 3 for an RGB image), written as a vector x = (x1, x2, ..., xn)^T at each pixel.
The mean vector is m = E{x}, estimated as m = (1/K) Σ_k x_k over the K samples.
The covariance matrix is C = E{(x - m)(x - m)^T}, estimated as C = (1/K) Σ_k x_k x_k^T - m m^T.
C is real and symmetric.

Eigenvalues, eigenvectors, and the Hotelling transform
Let A be the matrix whose rows are the eigenvectors of C, ordered by decreasing eigenvalue. The Hotelling (principal-components) transform y = A(x - m) decorrelates the components: the covariance of y is A C A^T = diag(λ1, ..., λn).
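A minimal NumPy sketch of the transform, with one n-dimensional sample per row (the function name is illustrative):

```python
import numpy as np

def hotelling(X):
    """Hotelling transform of a K x n data matrix (one sample per row)."""
    m = X.mean(axis=0)
    C = np.cov(X, rowvar=False)            # n x n, real and symmetric
    vals, vecs = np.linalg.eigh(C)         # eigh: ascending eigenvalues
    order = np.argsort(vals)[::-1]         # reorder: decreasing eigenvalue
    A = vecs[:, order].T                   # rows of A = eigenvectors of C
    Y = (X - m) @ A.T                      # y = A (x - m), per sample
    return Y, vals[order], A, m
```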

Example 2
Let x = (x1, x2)^T be the coordinates of the pixels of a region or a boundary. The eigenvalues of the coordinate covariance matrix are insensitive to rotation, and their ratio is insensitive to size as well, so they serve as shape descriptors.
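A sketch for a binary region mask; eigvalsh returns the eigenvalues in ascending order:

```python
import numpy as np

def axis_descriptors(mask):
    """Eigenvalues of the coordinate covariance of a binary region."""
    coords = np.argwhere(mask).astype(float)     # (row, col) of each pixel
    vals = np.linalg.eigvalsh(np.cov(coords, rowvar=False))
    lam2, lam1 = vals                            # ascending, so lam1 >= lam2
    return lam1, lam2, lam1 / lam2               # ratio: size/rotation invariant
```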

Lecture 19 Part II: Feature selection
Feature selection is the task of selecting a subset of features from a larger pool of available features. The goal is to select those that are rich in discriminatory information with respect to the classification problem at hand.

Some housekeeping techniques
- Outlier removal: an outlier is a point that lies far away from the mean value of the corresponding random variable; e.g., for normally distributed data, a threshold of 1, 2, or 3 times the standard deviation is used to define outliers.
- Data normalization: restrict the values of all features to predetermined ranges, e.g., transform each feature to a standard normal distribution, rescale its range to [-1, 1], or apply softmax scaling y = 1/(1 + exp(-(x - x̄)/(r·σ))), where r is a user-defined parameter.
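Minimal NumPy sketches of both housekeeping steps (the function names and the 3σ default are illustrative):

```python
import numpy as np

def softmax_scale(X, r=1.0):
    """Softmax scaling: squashes every feature smoothly into (0, 1)."""
    z = (X - X.mean(axis=0)) / (r * X.std(axis=0))
    return 1.0 / (1.0 + np.exp(-z))

def zscore_drop_outliers(X, k=3.0):
    """Standardize features, then drop samples beyond k standard deviations."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    keep = (np.abs(Z) <= k).all(axis=1)   # a row survives if no feature is outlying
    return Z[keep], keep
```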

Informative or not, by hypothesis testing
Is a feature informative or not? Statistical tests are commonly used: the idea is to test whether the mean values of a feature in the two classes differ significantly.
H1: the mean values of the feature in the two classes are different (alternative hypothesis).
H0: the mean values of the feature in the two classes are equal (null hypothesis).
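A per-feature two-sample t-test, assuming SciPy is available; small p-values reject H0, i.e., the feature is informative:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
class0 = rng.normal(0.0, 1.0, size=(100, 4))                   # 4 candidate features
class1 = rng.normal([0.0, 0.1, 0.5, 2.0], 1.0, size=(100, 4))  # shifted means

t, p = ttest_ind(class0, class1, axis=0)   # one test per feature
print(p.round(4), p < 0.05)                # small p: reject H0, means differ
```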

Receiver operating characteristic
Measures the overlap between the pdfs describing the distribution of a feature in the two classes. This overlap is quantified by an area derived from the ROC curve, commonly the AUC (area under the receiver operating characteristic curve).
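A sketch using scikit-learn's roc_auc_score, treating a single feature as the score that separates the two classes:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1, 200), rng.normal(1.5, 1, 200)])
y = np.concatenate([np.zeros(200), np.ones(200)])   # class labels

print(roc_auc_score(y, x))   # 0.5 = complete overlap, 1.0 = perfect separation
```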

Fisher’s discriminant ratio
Fisher’s discriminant ratio (FDR) is commonly employed to quantify the discriminatory power of individual features between two equiprobable classes:
FDR = (μ1 - μ2)² / (σ1² + σ2²)
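A one-function NumPy version of the ratio above:

```python
import numpy as np

def fdr(x0, x1):
    """Fisher's discriminant ratio per feature (one sample per row)."""
    num = (x0.mean(axis=0) - x1.mean(axis=0)) ** 2
    return num / (x0.var(axis=0) + x1.var(axis=0))
```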

Combination of features: divergence
For Gaussian classes, the divergence between classes 1 and 2 is
d12 = (1/2) trace(S1⁻¹S2 + S2⁻¹S1 - 2I) + (1/2)(μ1 - μ2)^T (S1⁻¹ + S2⁻¹)(μ1 - μ2),
where Si is the covariance matrix and μi the mean vector of class i.

Bhattacharyya distance and Chernoff bound
For Gaussian classes, the Bhattacharyya distance is
B = (1/8)(μ1 - μ2)^T [(S1 + S2)/2]⁻¹ (μ1 - μ2) + (1/2) ln[ det((S1 + S2)/2) / sqrt(det(S1) det(S2)) ],
and the Chernoff bound on the classification error is P_e ≤ sqrt(P(ω1) P(ω2)) · exp(-B).
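A NumPy sketch of the Gaussian Bhattacharyya distance written above:

```python
import numpy as np

def bhattacharyya(m1, S1, m2, S2):
    """Bhattacharyya distance between two Gaussian class models."""
    S = (S1 + S2) / 2
    dm = m1 - m2
    term1 = dm @ np.linalg.solve(S, dm) / 8       # mean-separation term
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2
```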

Measures based on scatter matrices
With the within-class scatter matrix Sw, the between-class scatter matrix Sb, and the mixture scatter matrix Sm = Sw + Sb, typical criteria are J1 = trace(Sm)/trace(Sw), J2 = det(Sm)/det(Sw), and J3 = trace(Sw⁻¹Sm). Large values of J1, J2, and J3 indicate that data points in the respective feature space have small within-class variance and large between-class distance.
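A NumPy sketch computing Sw, Sb and the criteria J1 and J3, weighting classes by their empirical priors:

```python
import numpy as np

def scatter_criteria(X, y):
    """J1 and J3 from within-class (Sw) and mixture (Sm) scatter matrices."""
    classes, n = np.unique(y), X.shape[1]
    m = X.mean(axis=0)
    Sw = np.zeros((n, n))
    Sb = np.zeros((n, n))
    for c in classes:
        Xc = X[y == c]
        P = len(Xc) / len(X)                       # empirical class prior
        Sw += P * np.cov(Xc, rowvar=False, bias=True)
        d = (Xc.mean(axis=0) - m)[:, None]
        Sb += P * (d @ d.T)
    Sm = Sw + Sb
    J1 = np.trace(Sm) / np.trace(Sw)
    J3 = np.trace(np.linalg.solve(Sw, Sm))         # trace(Sw^-1 Sm)
    return J1, J3
```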

Feature subset selection
1. Reduce the number of features by discarding the less informative ones, using scalar feature selection.
2. Consider the features that survive the previous step in different combinations, in order to keep the “best” combination.

Feature ranking
1. Features are ranked in descending order according to some criterion C.
2. Let i1 be the index of the best one. Next, the cross-correlations ρ between the first (top-ranked) feature and each of the remaining features are computed.
3. The index i2 of the second most important feature, x_{i2}, is computed as
   i2 = argmax_{j ≠ i1} { a1 C(j) - a2 |ρ_{i1, j}| },
   with user-chosen weights a1, a2.
4. In general, the k-th feature is chosen as
   ik = argmax_j { a1 C(j) - (a2/(k - 1)) Σ_{r=1}^{k-1} |ρ_{ir, j}| },
   the maximum taken over the features not selected so far.
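A sketch of this ranking procedure; the default weights a1, a2 are arbitrary, and the mean absolute correlation with the already-selected features plays the role of the penalty sum above:

```python
import numpy as np

def rank_features(X, score, a1=0.7, a2=0.3):
    """Greedy scalar ranking: high criterion C(j), low |rho| with those chosen."""
    score = np.asarray(score, dtype=float)
    R = np.abs(np.corrcoef(X, rowvar=False))        # |rho| between feature pairs
    selected = [int(np.argmax(score))]              # i1: best single feature
    while len(selected) < X.shape[1]:
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        penalty = R[np.ix_(selected, remaining)].mean(axis=0)
        merit = a1 * score[remaining] - a2 * penalty
        selected.append(remaining[int(np.argmax(merit))])
    return selected
```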

Feature vector selection
Find the “best” combination of features. Examining all possible combinations of the m features is computationally prohibitive, so suboptimal searching techniques are used instead, e.g. sequential forward selection (SFS).
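A minimal SFS sketch; `criterion` is any separability measure over a candidate feature subset, for instance the J1 scatter measure sketched earlier:

```python
import numpy as np

def sfs(X, y, criterion, k):
    """Sequential forward selection: grow the feature set greedily, adding the
    feature whose inclusion maximizes criterion(X[:, subset], y)."""
    selected = []
    while len(selected) < k:
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        scores = [criterion(X[:, selected + [j]], y) for j in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected
```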