Computer Vision Lecture 16: Texture II


The most fundamental question is: How can we “measure” texture, i.e., how can we quantitatively distinguish between different textures? Of course, it is not enough to look at the intensity of individual pixels. Since the repetitive local arrangement of intensities determines the texture, we have to analyze neighborhoods of pixels to measure texture properties.
April 10, 2018 Computer Vision Lecture 16: Texture II

Frequency Descriptors
One possible approach is to perform local Fourier transforms of the image. Then we can derive information about the contribution of different spatial frequencies and the dominant orientation(s) in the local texture. For both kinds of information, only the power (magnitude) spectrum needs to be analyzed.

Frequency Descriptors
Prior to the Fourier transform, multiply the image with a Gaussian function to avoid horizontal and vertical “phantom” lines. In the power spectrum, use ring filters of different radii to extract the frequency band contributions. Also in the power spectrum, apply wedge filters at different angles to obtain information on the dominant orientation of edges in the texture.
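The pipeline described above might be sketched as follows; the window width, the number of rings and wedges, and the function name are illustrative assumptions, not part of the lecture.

```python
import numpy as np

def frequency_descriptors(patch, n_rings=4, n_wedges=4):
    """Ring (frequency-band) and wedge (orientation) energies of a square patch."""
    n = patch.shape[0]
    y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    # Gaussian window: suppresses the horizontal/vertical "phantom" lines
    # caused by the implicit periodic repetition of the patch.
    window = np.exp(-(x**2 + y**2) / (2 * (n / 4.0)**2))
    power = np.abs(np.fft.fftshift(np.fft.fft2(patch * window)))**2
    radius = np.sqrt(x**2 + y**2)
    angle = np.mod(np.arctan2(y, x), np.pi)      # orientation is 180°-periodic
    r_max = radius.max()
    # Ring filters: sum the power in concentric frequency bands.
    rings = np.array([power[(radius >= k * r_max / n_rings) &
                            (radius < (k + 1) * r_max / n_rings)].sum()
                      for k in range(n_rings)])
    # Wedge filters: sum the power in angular sectors (dominant orientations).
    wedges = np.array([power[(angle >= k * np.pi / n_wedges) &
                             (angle < (k + 1) * np.pi / n_wedges)].sum()
                       for k in range(n_wedges)])
    # Normalize so each descriptor sums to 1 (turns it into a histogram).
    return rings / rings.sum(), wedges / wedges.sum()
```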

Frequency Descriptors
The resulting frequency and orientation data can be normalized, for example, so that the sum across frequency or orientation bands is 1. This effectively turns them into histograms, which are less affected by monotonic gray-level changes caused by shading etc. However, it is recommended to combine frequency-based approaches with space-based approaches.


Co-Occurrence Matrices
A simple and popular method for this kind of analysis is the computation of gray-level co-occurrence matrices. To compute such a matrix, we first quantize the intensities in the image into a small number of levels. For example, by dividing the usual brightness values ranging from 0 to 255 by 64, we create the levels 0, 1, 2, and 3.

Co-Occurrence Matrices
Then we choose a displacement vector d = [di, dj]. The gray-level co-occurrence matrix P(a, b) is obtained by counting all pairs of pixels separated by d that have gray levels a and b. Afterwards, to normalize the matrix, we determine the sum across all entries and divide each entry by this sum. This co-occurrence matrix contains important information about the texture in the examined area of the image.
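A minimal sketch of this procedure, assuming an 8-bit image and uniform quantization (the function name is an illustrative choice):

```python
import numpy as np

def cooccurrence_matrix(image, d, levels=4):
    """Normalized gray-level co-occurrence matrix for displacement d = (di, dj)."""
    # Quantize, e.g. 0..255 -> levels 0..3 by integer division by 64.
    q = (np.asarray(image) // (256 // levels)).astype(int)
    di, dj = d
    h, w = q.shape
    # a holds the first pixel of each pair, b the pixel displaced by d.
    a = q[max(0, -di):h - max(0, di), max(0, -dj):w - max(0, dj)]
    b = q[max(0, di):h - max(0, -di), max(0, dj):w - max(0, -dj)]
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)   # count pairs with levels (a, b)
    return P / P.sum()                        # divide by the sum of all entries
```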

Co-Occurrence Matrices
Example (2 gray levels): for a small local texture patch and the displacement vector d = (1, 1), counting all 25 pixel pairs separated by d gives a 2 × 2 co-occurrence matrix with entry counts 2, 9, 10, and 4, which are then normalized by the factor 1/25.

Co-Occurrence Matrices
It is often a good idea to use more than one displacement vector, resulting in multiple co-occurrence matrices. The more similar the matrices of two textures are, the more similar the textures themselves usually are. This means that the difference between corresponding elements of these matrices can be taken as a similarity measure for textures. Based on such measures, we can use texture information to enhance the detection of regions and contours in images.
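Such an element-wise comparison might be as simple as an L1 distance between two normalized matrices; the function name and the choice of norm are illustrative.

```python
import numpy as np

def texture_distance(P1, P2):
    """Sum of absolute differences between corresponding entries of two
    normalized co-occurrence matrices; smaller values mean more similar
    textures."""
    return float(np.abs(P1 - P2).sum())
```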

Co-Occurrence Matrices
For a given co-occurrence matrix P(a, b), we can compute the following six important characteristics:
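The formulas themselves appeared on a slide image that is not part of this transcript; descriptors commonly computed from a normalized co-occurrence matrix P include energy, entropy, maximum probability, contrast, and homogeneity (inverse difference moment). A hedged sketch of these common choices:

```python
import numpy as np

def cooccurrence_features(P):
    """A few standard descriptors of a normalized co-occurrence matrix P."""
    a, b = np.indices(P.shape)          # gray-level pair (a, b) of each entry
    nz = P[P > 0]                       # skip zero entries in the entropy sum
    return {
        "energy": float((P**2).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum()),
        "max_probability": float(P.max()),
        "contrast": float(((a - b)**2 * P).sum()),
        "homogeneity": float((P / (1.0 + np.abs(a - b))).sum()),
    }
```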


Co-Occurrence Matrices
You should compute these six characteristics for multiple displacement vectors, including different directions. The maximum length of your displacement vectors depends on the size of the texture elements.

Classification Performance

Laws’ Texture Energy Measures
Laws’ measures use a set of convolution filters to assess gray level, edges, spots, ripples, and waves in textures. This method starts with three basic filters:
averaging: L3 = (1, 2, 1)
first derivative (edges): E3 = (-1, 0, 1)
second derivative (curvature): S3 = (-1, 2, -1)

Laws’ Texture Energy Measures
Convolving these filters with themselves and with each other results in five new filters:
L5 = (1, 4, 6, 4, 1)
E5 = (-1, -2, 0, 2, 1)
S5 = (-1, 0, 2, 0, -1)
R5 = (1, -4, 6, -4, 1)
W5 = (-1, 2, 0, -2, 1)

Laws’ Texture Energy Measures
Now we can multiply any two of these vectors, using the first one as a column vector and the second one as a row vector, resulting in the 25 different 5 × 5 Laws’ masks. For example, using E5 as the column vector and L5 as the row vector gives a mask that differentiates vertically while smoothing horizontally.
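The outer-product construction can be sketched as follows; the dictionary keys such as "E5L5" are an illustrative naming convention.

```python
import numpy as np

L5 = np.array([1, 4, 6, 4, 1])       # level (averaging)
E5 = np.array([-1, -2, 0, 2, 1])     # edge
S5 = np.array([-1, 0, 2, 0, -1])     # spot
R5 = np.array([1, -4, 6, -4, 1])     # ripple
W5 = np.array([-1, 2, 0, -2, 1])     # wave

vectors = {"L5": L5, "E5": E5, "S5": S5, "R5": R5, "W5": W5}

# Each 5x5 mask is the outer product of a column vector and a row vector,
# giving 5 * 5 = 25 masks in total.
masks = {c + r: np.outer(vectors[c], vectors[r])
         for c in vectors for r in vectors}
```

For instance, masks["E5L5"] differentiates in the vertical direction (E5 down the columns) while averaging in the horizontal direction (L5 along the rows).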

Laws’ Texture Energy Measures
Now you can apply the resulting 25 convolution filters to a given image. The 25 resulting values at each position in the image are useful descriptors of the local texture. Laws’ texture energy measures are easy to apply, efficient, and give good results for most texture types. However, co-occurrence matrices are more flexible; for example, they can be scaled to account for coarse-grained textures.
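Applying one mask and averaging its absolute responses over a local window (the usual "texture energy" step) might look like this; the window size and function names are illustrative, and a practical implementation would use an optimized convolution routine.

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """Plain 'valid' 2-D convolution (no padding), for illustration only."""
    kh, kw = kernel.shape
    k = kernel[::-1, ::-1]                       # flip for true convolution
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * k).sum()
    return out

def texture_energy(image, mask, window=15):
    """Average absolute response of one 5x5 mask over a local window."""
    response = np.abs(convolve2d_valid(image, mask))
    averager = np.ones((window, window)) / window**2
    return convolve2d_valid(response, averager)
```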

Local Binary Patterns
Clearly, local intensity gradients are important in describing textures. However, the orientation of these gradients may be more important than their magnitude (steepness). After all, changing the contrast of a picture does not fundamentally change the texture, while it modifies the magnitude of all gradients. The technique of local binary patterns uses histograms to represent the relative frequencies of different gradients.

Local Binary Patterns
Here, the gradient at a given pixel p is computed by comparing its intensity with that of a number of neighboring pixels. Those could be immediate neighbors or be located at larger distances. Such neighborhoods are characterized by the number of neighbors and their distance from p (“radius”). For example, a popular choice is an (8, 2) neighborhood, which uses 8 neighbors at a radius of 2 pixels.
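For the simplest case of the 8 immediate neighbors, i.e. an (8, 1) neighborhood, the comparison step might look like this; the clockwise bit ordering is an arbitrary but fixed illustrative choice, and larger radii would interpolate the sample points.

```python
import numpy as np

def lbp_code(image, i, j):
    """8-bit local binary pattern at pixel (i, j): each neighbor that is
    at least as bright as the center contributes a 1 bit."""
    # Neighbors read clockwise, starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = image[i, j]
    bits = [1 if image[i + di, j + dj] >= center else 0 for di, dj in offsets]
    return int("".join(map(str, bits)), 2)
```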


Local Binary Patterns
When characterizing gradients, we are typically only interested in uniform binary patterns. These are the patterns that contain at most two transitions from 0 to 1 or vice versa when we read them along the full circle. The example pattern 10000011 is uniform, because it has only two transitions, between positions 1 and 2 and between positions 6 and 7 in the binary string. The pattern 11001100, on the other hand, has four transitions and is thus not uniform.

Local Binary Patterns
The total number of uniform patterns is 58. We build a histogram with 59 bins, with each of the first 58 bins indicating how often the corresponding uniform pattern was found within the image area P that we want to analyze. The 59th bin counts how many non-uniform patterns were encountered. We normalize the histogram by dividing each of its 59 entries by |P|. The resulting 59-dimensional vector is the LBP descriptor of the texture patch.
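The uniformity test and the 59-bin histogram can be sketched in a few lines of pure Python (names are illustrative):

```python
def transitions(code):
    """Number of 0-1 or 1-0 transitions when reading the 8-bit pattern
    along the full circle (the last bit wraps around to the first)."""
    bits = [(code >> k) & 1 for k in range(8)]
    return sum(bits[k] != bits[(k + 1) % 8] for k in range(8))

def is_uniform(code):
    return transitions(code) <= 2

# The 58 uniform 8-bit patterns in a fixed order; everything else shares bin 58.
uniform = [c for c in range(256) if is_uniform(c)]
bin_of = {c: k for k, c in enumerate(uniform)}

def lbp_histogram(codes):
    """59-bin LBP histogram of the codes from area P, normalized by |P|."""
    hist = [0.0] * 59
    for c in codes:
        hist[bin_of.get(c, 58)] += 1
    return [h / len(codes) for h in hist]
```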

Classification Performance

Texture Segmentation Benchmarks
Benchmark image for texture segmentation – an ideal segmentation algorithm would divide this image into five segments. For example, a texture-descriptor-based variant of split-and-merge may be able to achieve good results.