Computer Vision Lecture 12: Texture

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 1: Signature
Another popular method of representing shape is called the signature. To compute the signature of a contour, we choose an (arbitrary) starting point A on it. Then we measure the shortest distance from A to any other point on the contour, measured perpendicular to the tangent at A. We then keep moving along the contour while measuring this distance. The resulting distance as a function of path length is the signature of the contour.

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 2: Signature
Notice that if you normalize the path length, the signature representation is position, rotation, and scale invariant. Smoothing the contour before computing its signature may be necessary to obtain a useful estimate of the tangent.
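As a minimal sketch of the signature idea, the Python snippet below computes the common centroid-distance variant (distance from the centroid as a function of normalized path length) rather than the perpendicular-width variant described on the slide, which would additionally require tangent estimation. The function name and normalization choices are illustrative, not from the slides.

```python
import numpy as np

def centroid_signature(contour):
    """Centroid-distance signature of a closed contour.
    contour: (N, 2) array of (row, col) points ordered along the boundary."""
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    # Distance of each boundary point from the centroid.
    dist = np.linalg.norm(pts - centroid, axis=1)
    # Cumulative path length along the contour, normalized to [0, 1].
    steps = np.linalg.norm(np.diff(pts, axis=0, prepend=pts[:1]), axis=1)
    s = np.cumsum(steps)
    s /= s[-1]
    # Normalizing the distances as well makes the signature scale invariant.
    return s, dist / dist.max()
```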

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 3: Fourier Transform of Boundaries
As we discussed before, a 2D contour can be represented by two 1D functions of a parameter s. Let S be our (arbitrary) starting point on the contour. By convention, we move in counterclockwise direction (for example, we can adapt the boundary-following algorithm to do that). In our discrete (pixel) space, the points of the contour can then be specified by [v[s], h[s]], where s is the distance from S (measured along the contour) and v and h are the functions describing our vertical and horizontal position on the contour for a given s.

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 4: Fourier Transform of Boundaries (figure)

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 5: Fourier Transform of Boundaries (figure)

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 6: Fourier Transform of Boundaries
Obviously, both v[s] and h[s] are periodic functions: as we keep moving along the contour, we pass the same points again after every full traversal. This suggests an idea: why not represent the contour in Fourier space? In fact, this can be done. We simply compute separate 1D Fourier transforms for v[s] and h[s]. When we convert the result into the magnitude/phase representation, the transform gives us the functions Mv[l], Mh[l], φv[l], and φh[l] for frequency l ranging from 0 (the offset) to s/2.

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 7: Fourier Transform of Boundaries
As with images, higher frequencies usually do not contribute much to a function and can be disregarded without any noticeable consequences. Let us see what happens when we delete all coefficients above frequency l_max and perform the inverse Fourier transform. Will the original contour be preserved even for small values of l_max?
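A minimal sketch of this low-pass reconstruction, assuming NumPy. The function name is illustrative; keeping the symmetric negative-frequency partners is an implementation detail needed for a real-valued inverse transform.

```python
import numpy as np

def reconstruct_contour(v, h, l_max):
    """Discard all Fourier coefficients above frequency l_max and invert."""
    V, H = np.fft.fft(v), np.fft.fft(h)
    keepV, keepH = np.zeros_like(V), np.zeros_like(H)
    # Keep frequencies 0..l_max and their negative-frequency partners,
    # which is required for the inverse transform to be real-valued.
    keepV[:l_max + 1], keepH[:l_max + 1] = V[:l_max + 1], H[:l_max + 1]
    if l_max > 0:
        keepV[-l_max:], keepH[-l_max:] = V[-l_max:], H[-l_max:]
    return np.fft.ifft(keepV).real, np.fft.ifft(keepH).real
```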

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 8: Fourier Transform of Boundaries (figure)

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 9: Fourier Transform of Boundaries (figure)

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 10: Fourier Transform of Boundaries
The phase spectrum of the Fourier description usually depends on our chosen starting point. The power (magnitude) spectrum, however, is invariant to this choice if we combine the vertical and horizontal magnitudes, for example as M[l] = sqrt(Mv[l]² + Mh[l]²). We can cut off higher-frequency descriptors and still have a precise and compact contour representation. If we normalize the values of all M[l] so that they add up to 1, we obtain scale-invariant descriptors.
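A sketch of these invariant descriptors, assuming the sum-of-squares combination given above (the slide's exact formula was shown as an image). Skipping l = 0 removes the position offset, and normalizing to unit sum gives the scale invariance mentioned on the slide.

```python
import numpy as np

def fourier_descriptors(v, h, l_max):
    """Combined, normalized magnitude descriptors M[l] for l = 1..l_max."""
    Mv = np.abs(np.fft.fft(v))
    Mh = np.abs(np.fft.fft(h))
    M = np.sqrt(Mv**2 + Mh**2)[1:l_max + 1]  # skip l = 0 (position offset)
    return M / M.sum()  # unit sum yields scale-invariant descriptors
```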

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 11 (figure)

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 12: Our next topic is… Texture

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 13: Texture (figure)

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 14: Texture
Texture is an important cue that biological vision systems use to estimate the boundaries of objects. Texture gradient is also used to estimate the orientation of surfaces. For example, on a perfect lawn the grass texture is the same everywhere. However, the farther away we look, the finer this texture appears in the image; this change is called texture gradient. For the same reasons, texture is also a useful feature for computer vision systems.

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 15: Texture Gradient (figure)

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 16: Texture
The most fundamental question is: how can we "measure" texture, i.e., how can we quantitatively distinguish between different textures? It is clearly not enough to look at the intensity of individual pixels. Since texture is determined by the repetitive local arrangement of intensities, we have to analyze neighborhoods of pixels to measure texture properties.

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 17: Frequency Descriptors
One possible approach is to perform local Fourier transforms of the image. Then we can derive information on the contribution of different spatial frequencies and on the dominant orientation(s) in the local texture. For both kinds of information, only the power (magnitude) spectrum needs to be analyzed.

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 18: Frequency Descriptors
Prior to the Fourier transform, apply a Gaussian window to the local patch to avoid horizontal and vertical "phantom" lines in the spectrum, which arise from the implicit periodic repetition of the patch. In the power spectrum, use ring filters of different radii to extract the contributions of different frequency bands. Also in the power spectrum, apply wedge filters at different angles to obtain information on the dominant orientation of edges in the texture. A sketch of this procedure follows below.
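A minimal sketch of the ring/wedge analysis, assuming NumPy. The Hann window stands in for the Gaussian taper, and the band and wedge counts (n_rings, n_wedges) are illustrative parameters, not values from the slides.

```python
import numpy as np

def ring_wedge_features(patch, n_rings=4, n_wedges=4):
    """Energy in concentric frequency rings and in orientation wedges
    of the local power spectrum."""
    n = patch.shape[0]  # assumes a square grayscale patch
    window = np.outer(np.hanning(n), np.hanning(n))  # taper against phantom lines
    power = np.abs(np.fft.fftshift(np.fft.fft2(patch * window))) ** 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - n // 2, x - n // 2)                 # radial frequency
    theta = np.arctan2(y - n // 2, x - n // 2) % np.pi   # orientation in [0, pi)
    r_step = (n / 2) / n_rings
    rings = np.array([power[(r >= i * r_step) & (r < (i + 1) * r_step)].sum()
                      for i in range(n_rings)])
    w_step = np.pi / n_wedges
    wedges = np.array([power[(theta >= i * w_step) & (theta < (i + 1) * w_step)].sum()
                       for i in range(n_wedges)])
    # Normalize to unit sum, turning the energies into histograms.
    return rings / rings.sum(), wedges / wedges.sum()
```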

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 19: Frequency Descriptors
The resulting frequency and orientation data can be normalized, for example so that the sum across frequency or orientation bands is 1. This effectively turns them into histograms that are less affected by monotonic gray-level changes caused by shading etc. However, it is recommended to combine frequency-based approaches with space-based approaches.

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 20: Co-Occurrence Matrices
A simple and popular method for such a space-based analysis is the computation of gray-level co-occurrence matrices. To compute such a matrix, we first quantize the intensities in the image into a small number of levels. For example, by dividing (integer division) the usual brightness values ranging from 0 to 255 by 64, we create the levels 0, 1, 2, and 3.

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 21: Co-Occurrence Matrices
Then we choose a displacement vector d = [di, dj]. The gray-level co-occurrence matrix P(a, b) is obtained by counting all pairs of pixels separated by d in which the first pixel has gray level a and the second has gray level b. Afterwards, to normalize the matrix, we determine the sum of all entries and divide each entry by this sum. This co-occurrence matrix contains important information about the texture in the examined area of the image.
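A minimal sketch of this computation, assuming an 8-bit NumPy image; the function name and the simple double loop are illustrative choices.

```python
import numpy as np

def cooccurrence(img, d, levels=4):
    """Normalized gray-level co-occurrence matrix for displacement d = (di, dj)."""
    q = (img // (256 // levels)).astype(int)  # quantize 0..255 into `levels` levels
    di, dj = d
    rows, cols = q.shape
    P = np.zeros((levels, levels))
    # Count all pairs of pixels separated by the displacement vector.
    for i in range(max(0, -di), min(rows, rows - di)):
        for j in range(max(0, -dj), min(cols, cols - dj)):
            P[q[i, j], q[i + di, j + dj]] += 1
    return P / P.sum()  # normalize so that all entries sum to 1
```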

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 22: Co-Occurrence Matrices
Example (2 gray levels): a local texture patch, the displacement vector d = (1, 1), and the resulting co-occurrence matrix with normalization factor 1/25 were shown as a figure.

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 23: Co-Occurrence Matrices
It is often a good idea to use more than one displacement vector, resulting in multiple co-occurrence matrices. The more similar the matrices of two textures are, the more similar the textures themselves usually are. This means that the differences between corresponding elements of these matrices can be used as a (dis)similarity measure for textures. Based on such measures, we can use texture information to enhance the detection of regions and contours in images.

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 24: Co-Occurrence Matrices
For a given co-occurrence matrix P(a, b), we can compute the following six important characteristics (their formulas were shown on the next slide; see the sketch below):

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 25: Co-Occurrence Matrices (formulas shown as a figure)
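The slide's formulas did not survive extraction. The six characteristics commonly listed in this context (e.g., in Shapiro & Stockman's textbook) are energy, entropy, contrast, homogeneity, correlation, and maximum probability; the sketch below assumes these six and should be read as an illustration, not a transcription of the slide.

```python
import numpy as np

def glcm_features(P):
    """Six common characteristics of a normalized co-occurrence matrix P."""
    a, b = np.indices(P.shape)  # gray levels of the first and second pixel
    eps = 1e-12                 # avoids log(0) and division by zero
    mu_a, mu_b = (a * P).sum(), (b * P).sum()
    sd_a = np.sqrt(((a - mu_a) ** 2 * P).sum())
    sd_b = np.sqrt(((b - mu_b) ** 2 * P).sum())
    return {
        "energy": (P ** 2).sum(),
        "entropy": -(P * np.log2(P + eps)).sum(),
        "contrast": ((a - b) ** 2 * P).sum(),
        "homogeneity": (P / (1 + np.abs(a - b))).sum(),
        "correlation": ((a - mu_a) * (b - mu_b) * P).sum() / (sd_a * sd_b + eps),
        "max_probability": P.max(),
    }
```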

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 26: Co-Occurrence Matrices
You should compute these six characteristics for multiple displacement vectors, including different directions. For example, you could use the following vectors:
(-3, 0), (-2, 0), (-1, 0), (1, 0), (2, 0), (3, 0)
(0, -3), (0, -2), (0, -1), (0, 1), (0, 2), (0, 3)
(-3, -3), (-2, -2), (-1, -1), (1, 1), (2, 2), (3, 3)
(-3, 3), (-2, 2), (-1, 1), (1, -1), (2, -2), (3, -3)
The maximum length of your displacement vectors depends on the size of the texture elements.

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 27: Laws' Texture Energy Measures
Laws' measures use a set of convolution filters to assess gray level, edges, spots, ripples, and waves in textures. The method starts with three basic filters:
averaging: L3 = (1, 2, 1)
first derivative (edges): E3 = (-1, 0, 1)
second derivative (curvature): S3 = (-1, 2, -1)

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 28: Laws' Texture Energy Measures
Convolving these filters with themselves and with each other results in five new filters:
L5 = (1, 4, 6, 4, 1)
E5 = (-1, -2, 0, 2, 1)
S5 = (-1, 0, 2, 0, -1)
R5 = (1, -4, 6, -4, 1)
W5 = (-1, 2, 0, -2, 1)

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 29: Laws' Texture Energy Measures
Now we can multiply any two of these vectors, using the first one as a column vector and the second one as a row vector, resulting in 5×5 Laws' masks. For an example, see the sketch below.
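A minimal sketch of building all 25 masks via outer products, assuming NumPy; the dictionary-based naming of the masks is illustrative.

```python
import numpy as np

# The five 1D Laws filters from the previous slide.
vectors = {
    "L5": np.array([ 1,  4, 6,  4,  1]),
    "E5": np.array([-1, -2, 0,  2,  1]),
    "S5": np.array([-1,  0, 2,  0, -1]),
    "R5": np.array([ 1, -4, 6, -4,  1]),
    "W5": np.array([-1,  2, 0, -2,  1]),
}

# The outer product of a column vector and a row vector gives a 5x5 mask;
# all 5 x 5 = 25 combinations yield the full set of Laws' masks.
masks = {a + b: np.outer(va, vb)
         for a, va in vectors.items()
         for b, vb in vectors.items()}

print(masks["E5L5"])  # e.g., edge detection vertically, averaging horizontally
```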

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 30: Laws' Texture Energy Measures
Now you can apply the resulting 25 convolution filters to a given image. The 25 resulting values at each position in the image are useful descriptors of the local texture. Laws' texture energy measures are easy to apply and give good results for most texture types. However, co-occurrence matrices are more flexible; for example, they can be scaled to account for coarse-grained textures.

November 12, 2013. Computer Vision Lecture 12: Texture. Slide 31: Co-Occurrence Matrices
Benchmark image for texture segmentation: an ideal segmentation algorithm would divide this image into five segments. For example, a texture-descriptor-based variant of split-and-merge may be able to achieve good results.