EECS 274 Computer Vision Pyramid and Texture

Filter, pyramid, and texture: frequency domain, Fourier transform, Gaussian pyramid, wavelets, texture. Reading: FP Chapters 8 and 9, S Chapters 3 and 4

Scaled representations Big bars (resp. spots, hands, etc.) and little bars are both interesting –Stripes and hairs, say Inefficient to detect big bars with big filters –And there is superfluous detail in the filter kernel Alternative: –Apply filters of fixed size to images of different sizes –Typically, a collection of images whose edge length changes by a factor of 2 (or root 2) –This is a pyramid (or Gaussian pyramid) by visual analogy

A bar in the big images is a hair on the zebra’s nose; in smaller images, a stripe; in the smallest, the animal’s nose

Gaussian pre-filtering (Gaussian-blurred images at 1/2, 1/4, and 1/8 size). Solution: filter the image, then subsample. Filter size should double for each ½ size reduction. Why?

Subsampling with Gaussian pre-filtering (results at 1/2, 1/4, and 1/8 size). Solution: filter the image, then subsample. Filter size should double for each ½ size reduction. Why? How can we speed this up?
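
A minimal sketch of the two options, assuming SciPy; the sigma values are illustrative, not the ones used to produce the lecture figures:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subsample_naive(img):
    """Take every second pixel; prone to aliasing."""
    return img[::2, ::2]

def subsample_prefiltered(img, sigma=1.0):
    """Low-pass filter first, then take every second pixel."""
    return gaussian_filter(img, sigma)[::2, ::2]

img = np.random.rand(256, 256)            # stand-in for a real image
half = subsample_prefiltered(img)         # 1/2 size
quarter = subsample_prefiltered(half)     # 1/4 size
```

Because each blur is applied to an already halved image, the effective filter measured against the original image doubles at every level; reusing the previous blurred-and-subsampled level, instead of re-filtering the full-resolution image with ever larger kernels, is what makes the construction fast.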

Compare with naive subsampling: 1/2, 1/4 (2x zoom), 1/8 (4x zoom). Why does this look so crufty?

Aliasing Can’t shrink an image by taking every second pixel If we do, characteristic errors appear –In the next few slides –Typically, small phenomena look bigger; fast phenomena can look slower –Common examples: wagon wheels rolling the wrong way in movies, checkerboards misrepresented in ray tracing, striped shirts looking funny on colour television

Resample the checkerboard by taking one sample at each circle. In the case of the top left board, new representation is reasonable. Top right also yields a reasonable representation. Bottom left is all black (dubious) and bottom right has checks that are too big. Sampling scheme is crucially related to frequency

Constructing a pyramid by taking every second pixel leads to layers that badly misrepresent the top layer

Open questions What causes the tendency of differentiation to emphasize noise? In what precise respects are discrete images different from continuous images? How do we avoid aliasing? General thread: a language for fast changes --- The Fourier Transform

The Fourier transform Represent the function on a new basis –Think of functions as vectors, with many components –We now apply a linear transformation to change basis: take the dot product with each basis element In the expression, u and v select the basis element, so a function of x and y becomes a function of u and v; the basis elements are complex sinusoids of the form below Measures the amount of the sinusoid with a given frequency and orientation in the signal
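
For reference, the standard continuous 2D transform the slide describes, in the cycles-per-unit convention (the lecture may instead use the angular-frequency form):

```latex
F(u,v) \;=\; \int\!\!\int f(x,y)\, e^{-i 2\pi (u x + v y)}\, dx\, dy,
\qquad \text{basis elements: } e^{\,i 2\pi (u x + v y)} .
```

F(u,v) is the dot product of f with the basis element selected by (u,v).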

Fourier transform Convolve a sinusoid s(x) with a filter whose impulse response is h(x): the result is another sinusoid of the same frequency, but with magnitude A and phase ϕ The Fourier transform is a tabulation of the magnitude and phase response at each frequency –A: how much of that frequency component is present –ϕ: where that frequency component sits (its phase)
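
A minimal statement of this fact for a real impulse response h, where H(ω) is its Fourier transform:

```latex
s(x) = \sin(\omega x)
\quad\Longrightarrow\quad
(h * s)(x) = A(\omega)\,\sin\!\big(\omega x + \phi(\omega)\big),
\qquad
A(\omega)\, e^{\,i \phi(\omega)} = H(\omega) = \int h(x)\, e^{-i\omega x}\, dx .
```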

To get some sense of what basis elements look like, we plot a basis element --- or rather, its real part --- as a function of x,y for some fixed u, v. We get a function that is constant when (ux+vy) is constant, e.g., (u,v)=(1,2) The magnitude of the vector (u, v) gives a frequency, and its direction gives an orientation. The function is a sinusoid with this frequency along the direction, and constant perpendicular to the direction. Spatial frequency component

Here u and v are larger than in the previous slide, (u,v)=(0,0.4)

And larger still...

Phase and magnitude Fourier transform of a real function is complex –difficult to plot, visualize –instead, we can think of the phase and magnitude of the transform Phase is the phase of the complex transform Magnitude is the magnitude of the complex transform Curious fact –all natural images have about the same magnitude transform –hence, phase seems to matter, but magnitude largely doesn’t Demonstration –Take two pictures, swap the phase transforms, compute the inverse - what does the result look like?
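
A minimal sketch of the swap demonstration, assuming two equally sized grayscale float arrays `cheetah` and `zebra` (hypothetical names) have already been loaded:

```python
import numpy as np

def swap_phase_magnitude(mag_src, phase_src):
    """Combine the magnitude spectrum of one image with the phase of another."""
    magnitude = np.abs(np.fft.fft2(mag_src))
    phase = np.angle(np.fft.fft2(phase_src))
    return np.real(np.fft.ifft2(magnitude * np.exp(1j * phase)))

# zebra phase + cheetah magnitude: the reconstruction looks like the zebra,
# suggesting that phase carries most of the structural information.
# result = swap_phase_magnitude(cheetah, zebra)
```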

This is the magnitude transform of the cheetah pic

This is the phase transform of the cheetah pic

This is the magnitude transform of the zebra pic

This is the phase transform of the zebra pic

Reconstruction with zebra phase, cheetah magnitude The magnitude spectrum of an image is rather uninformative

Reconstruction with cheetah phase, zebra magnitude

Various Fourier Transform Pairs Important facts –The Fourier transform is linear –There is an inverse FT –If you scale the function’s argument, then the transform’s argument scales the other way. This makes sense --- stretching a function pushes its high frequencies toward low frequencies, so the transform compresses correspondingly –The FT of a Gaussian is a Gaussian. The convolution theorem –The Fourier transform of the convolution of two functions is the product of their Fourier transforms –The inverse Fourier transform of the product of two Fourier transforms is the convolution of the two inverse Fourier transforms There’s a table in the book.
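
A minimal numeric check of the convolution theorem, assuming periodic (circular) boundaries so the spatial and frequency computations agree exactly:

```python
import numpy as np

def circular_convolve(f, h):
    """Direct circular convolution: sum of copies of f shifted by the kernel offsets."""
    out = np.zeros_like(f)
    for dy in range(h.shape[0]):
        for dx in range(h.shape[1]):
            out += h[dy, dx] * np.roll(np.roll(f, dy, axis=0), dx, axis=1)
    return out

img = np.random.rand(64, 64)
box = np.full((3, 3), 1.0 / 9.0)          # small box filter
padded = np.zeros_like(img)
padded[:3, :3] = box                      # same kernel, zero-padded to image size

spatial = circular_convolve(img, box)
frequency = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))
assert np.allclose(spatial, frequency)    # convolution <-> pointwise product of transforms
```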

Sampling in 1D takes a continuous function and replaces it with a vector of values, consisting of the function’s values at a set of sample points. We’ll assume that these sample points are on a regular grid, and can place one at each integer for convenience. Sampling

Sampling in 2D does the same thing, only in 2D. We’ll assume that these sample points are on a regular grid, and can place one at each integer point for convenience. Sampling in 2D

A continuous model for a sampled function We want to be able to approximate integrals sensibly Leads to –the delta function –the following model: a weighted grid of delta functions, zero everywhere except at integer points (see below)
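
The model the slide refers to is, in the standard treatment, the product of f with a grid of delta functions:

```latex
\operatorname{sample}_{2D}\{f\}(x,y)
\;=\; f(x,y)\sum_{i=-\infty}^{\infty}\sum_{j=-\infty}^{\infty}\delta(x-i,\,y-j)
\;=\; \sum_{i,j} f(i,j)\,\delta(x-i,\,y-j).
```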

The Fourier transform of a sampled signal. Here F(u,v) is the Fourier transform of f(x,y)

Example: the transform of a box filter is a sinc; the transform of a Gaussian is a Gaussian (Trucco and Verri)

Fourier transform of the sampled signal consists of a sum of copies of the Fourier transform of the original signal, shifted with respect to each other by the sampling frequency. If the shifted copies do not overlap, the original signal can be reconstructed from the sampled signal. If they overlap, we cannot obtain a separate copy of the Fourier transform, and the signal is aliased
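
In symbols, assuming unit sample spacing (so the sampling frequency is 1 in each direction, cycles-per-unit convention):

```latex
\mathcal{F}\big\{\operatorname{sample}_{2D}\{f\}\big\}(u,v)
\;=\; \sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty} F(u - m,\; v - n),
```

so the copies do not overlap, and f can be recovered exactly, when F(u,v) = 0 for |u| ≥ 1/2 or |v| ≥ 1/2 (half the sampling frequency).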

Nyquist theorem: the sampling rate must be at least twice the cut-off frequency for perfect reconstruction

Smoothing as low-pass filtering The message of the FT is that high frequencies lead to trouble with sampling. Solution: suppress high frequencies before sampling –multiply the FT of the signal with something that suppresses high frequencies –or convolve with a low-pass filter A filter whose FT is a box is bad, because the filter kernel has infinite support Common solution: use a Gaussian –multiplying FT by Gaussian is equivalent to convolving image with Gaussian.

Sampling without smoothing. Top row shows the images, sampled at every second pixel to get the next; bottom row shows the magnitude spectrum of these images. Substantial aliasing results

Sampling with smoothing (small σ). Top row shows the images. We get the next image by smoothing with a Gaussian with σ=1 pixel, then sampling at every second pixel; bottom row shows the magnitude spectrum of these images. The low-pass filter suppresses high-frequency components, reducing aliasing

Sampling with smoothing (large σ). Top row shows the images. We get the next image by smoothing with a Gaussian with σ=2 pixels, then sampling at every second pixel; bottom row shows the magnitude spectrum of these images. A large σ gives less aliasing, but little detail survives; the Gaussian is not an ideal low-pass filter, so details are lost

Decimation and interpolation Decimation (downsampling): Use an “ideal” anti-aliasing filter h(n) In practice, use FIR (finite impulse response) filter Interpolation (upsampling): Use an “ideal” interpolation filter h(n) In practice, use FIR (finite impulse response) filter
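
A minimal 1-D sketch; the 3-tap filters below are the simple choices named later for the Gaussian pyramid, used here as stand-ins for an "ideal" anti-aliasing / interpolation filter:

```python
import numpy as np

h_dec = np.array([0.25, 0.5, 0.25])    # anti-aliasing filter applied before downsampling
h_int = np.array([0.5, 1.0, 0.5])      # interpolation filter applied after upsampling

def decimate(x):
    """Low-pass filter, then keep every second sample."""
    return np.convolve(x, h_dec, mode='same')[::2]

def interpolate(x):
    """Insert zeros between samples, then low-pass filter to fill them in."""
    up = np.zeros(2 * len(x))
    up[::2] = x
    return np.convolve(up, h_int, mode='same')
```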

Applications of scaled representations Search for correspondence –look at coarse scales, then refine with finer scales, coarse-to-fine matching –4 × 4, 16 × 16, …, 1024 × 1024 versions of images Edge tracking –a “good” edge at a fine scale has parents at a coarser scale Control of detail and computational cost in matching –e.g. finding stripes –terribly important in texture representation

The Gaussian pyramid Smooth with Gaussians, because –a Gaussian convolved with a Gaussian is another Gaussian –the Gaussian filter operates as a low-pass filter Synthesis –smooth and sample Analysis –take the top image Use a 3-tap FIR, h(n)=(1/4,1/2,1/4), for downsampling and a 3-tap FIR, h(n)=(1/2,1,1/2), for interpolation
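
A minimal sketch of pyramid synthesis using the separable 3-tap filter h(n) = (1/4, 1/2, 1/4) from the slide; SciPy's convolve1d and the 'mirror' boundary handling are illustrative choices:

```python
import numpy as np
from scipy.ndimage import convolve1d

h = np.array([0.25, 0.5, 0.25])        # the 3-tap smoothing filter from the slide

def smooth(img):
    """Separable smoothing: filter the rows, then the columns."""
    return convolve1d(convolve1d(img, h, axis=0, mode='mirror'), h, axis=1, mode='mirror')

def gaussian_pyramid(img, levels):
    """Synthesis: repeatedly smooth and subsample; analysis just takes the top image."""
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        pyr.append(smooth(pyr[-1])[::2, ::2])
    return pyr
```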

The Laplacian pyramid Analysis –preserve difference between upsampled Gaussian pyramid level and Gaussian pyramid level –band pass filter - each level represents spatial frequencies (largely) unrepresented at other levels Synthesis –reconstruct Gaussian pyramid, take top layer
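
A minimal sketch built on the Gaussian pyramid above, using zero-insertion followed by the (1/2, 1, 1/2) filter for upsampling; this is one common construction, not necessarily the exact one used in the lecture:

```python
import numpy as np
from scipy.ndimage import convolve1d

h_int = np.array([0.5, 1.0, 0.5])      # interpolation filter from the previous slide

def upsample(img, shape):
    """Double resolution: insert zeros, then apply the separable interpolation filter."""
    up = np.zeros(shape)
    up[::2, ::2] = img
    up = convolve1d(up, h_int, axis=0, mode='mirror')
    return convolve1d(up, h_int, axis=1, mode='mirror')

def laplacian_pyramid(gauss_pyr):
    """Each level stores the band-pass residual the coarser level cannot represent."""
    residuals = [g - upsample(g_next, g.shape)
                 for g, g_next in zip(gauss_pyr[:-1], gauss_pyr[1:])]
    return residuals + [gauss_pyr[-1]]   # keep the coarsest Gaussian level as-is

def reconstruct(lap_pyr):
    """Synthesis: start from the coarsest level and add the residuals back in."""
    img = lap_pyr[-1]
    for residual in reversed(lap_pyr[:-1]):
        img = residual + upsample(img, residual.shape)
    return img
```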

LoG and DoG DoG (difference of Gaussian) is widely used as an approximation to LoG (Laplacian of Gaussian)
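
A minimal sketch of the approximation; the scale ratio k ≈ 1.6 is a common choice, not one specified on the slide:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def dog(img, sigma, k=1.6):
    """Difference of two Gaussian blurs, roughly proportional to the LoG response."""
    return gaussian_filter(img, k * sigma) - gaussian_filter(img, sigma)

img = np.random.rand(128, 128)
approx = dog(img, sigma=2.0)
exact = gaussian_laplace(img, sigma=2.0)   # LoG, for comparison (agrees up to sign/scale)
```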

Laplacian pyramid Different levels represent different spatial frequencies

Stripes give stronger response at particular scales as each layer corresponds to the band-pass filter output

Filters in the spatial frequency domain The Fourier transform of a Gaussian with std σ is a Gaussian with std 1/σ It falls off quickly in the frequency domain and operates as a low-pass filter Convolving an image with a Gaussian of small σ preserves all but the highest frequencies Convolving an image with a Gaussian of large σ acts like an average of the image
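
A minimal statement of the transform pair the slide uses (angular-frequency convention):

```latex
g_\sigma(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/2\sigma^2}
\quad\Longrightarrow\quad
G_\sigma(\omega) = \int g_\sigma(x)\, e^{-i\omega x}\, dx = e^{-\sigma^2 \omega^2 / 2},
```

a Gaussian in ω with standard deviation 1/σ: a wide spatial Gaussian passes only a narrow band of low frequencies, and vice versa.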

Example use: smoothing/blurring We want a smoothed version of f(x). Use a Gaussian kernel h(x); its transform H(u) attenuates the high frequencies in F(u) (a low-pass filter)

Band-pass filter and orientation selective operators Respond strongly to signals of particular range of spatial frequencies and orientations

Oriented pyramids Laplacian pyramid is orientation independent Apply an oriented filter to determine orientations at each layer –by clever filter design, we can simplify synthesis –this represents image information at a particular scale and orientation

Reprinted from “Shiftable Multiscale Transforms,” by Simoncelli et al., IEEE Transactions on Information Theory, 1992; copyright 1992, IEEE

Oriented pyramids: analysis and synthesis

Gabor filters Similar to 2D receptive fields Self-similar Band-pass filters Good spatial locality Sensitive to orientation Each kernel is a product of a Gaussian envelope and a complex plane wave Often use Gabor wavelets at m scales and n orientations Successfully used in iris recognition
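
A minimal sketch of a complex Gabor kernel (isotropic Gaussian envelope times a complex plane wave); size, sigma, wavelength, and theta are illustrative parameters, and a small bank is built by varying scale and orientation:

```python
import numpy as np

def gabor_kernel(size, sigma, wavelength, theta):
    """Complex Gabor kernel: Gaussian envelope times a complex plane wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)        # coordinate along the wave direction
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))   # isotropic envelope, for simplicity
    carrier = np.exp(1j * 2.0 * np.pi * x_theta / wavelength)
    return envelope * carrier   # real part = even (symmetric) filter, imaginary part = odd

# a small bank: 2 scales x 4 orientations
bank = [gabor_kernel(size=31, sigma=4.0 * s, wavelength=8.0 * s, theta=np.pi * k / 4)
        for s in (1, 2) for k in range(4)]
```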

Wavelets (subband decomposition into LL, LH, HL, and HH bands)

Texture Key issue: representing texture –Texture-based matching: little is known –Texture segmentation: the key issue is representing texture –Texture synthesis: useful; also gives some insight into the quality of a representation –Shape from texture: covered superficially

Representing textures Textures are made up of quite stylised subelements, repeated in meaningful ways Representation: –find the subelements, and represent their statistics But what are the subelements, and how do we find them? –recall normalized correlation –find subelements by applying filters, looking at the magnitude of the response What filters? –experience suggests spots and oriented bars at a variety of different scales –details probably don’t matter What statistics? –within reason, the more the merrier. –At least, mean and standard deviation –better, various conditional histograms.

Spots and bars Two spots: a weighted sum of 3 concentric Gaussians, and a weighted sum of 2 concentric Gaussians Six bars: weighted sums of 3 oriented Gaussians Filter response at fine scale

Filter response at coarse scale

Gabor filters at different scales and spatial frequencies. The top row shows the anti-symmetric (or odd) filters, the bottom row the symmetric (or even) filters.

Final texture representation Form an oriented pyramid (or equivalent set of responses to filters at different scales and orientations). Square the output Take statistics of responses –e.g. mean of each filter output (are there lots of spots) –std of each filter output –mean of one scale conditioned on other scale having a particular range of values (e.g. are the spots in straight rows?)
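
A minimal sketch of this recipe, assuming `bank` is a list of real-valued kernels (for example the real and imaginary parts of the Gabor kernels sketched earlier):

```python
import numpy as np
from scipy.ndimage import convolve

def texture_features(img, bank):
    """Filter with each kernel, square the response, and keep simple statistics."""
    feats = []
    for kernel in bank:
        response = convolve(img, kernel, mode='reflect') ** 2   # squared filter output
        feats.extend([response.mean(), response.std()])         # per-filter mean and std
    return np.array(feats)

# real_bank = [np.real(k) for k in bank] + [np.imag(k) for k in bank]
# descriptor = texture_features(img, real_bank)
```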

Texture synthesis Use image as a source of probability model Choose pixel values by matching neighborhood, then filling in Matching process –look at pixel differences –count only synthesized pixels
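
A drastically simplified and slow sketch in the spirit of neighborhood matching, not the published algorithm: pixels are visited in scanline order and each one is copied from the source location whose causal neighborhood (rows above plus pixels to the left) matches best.

```python
import numpy as np

def synthesize(source, out_h, out_w, half=3, seed=0):
    """Scanline-order synthesis: copy each pixel from the source location whose
    causal neighborhood (rows above plus pixels to the left) matches best."""
    rng = np.random.default_rng(seed)
    sh, sw = source.shape
    # initialize with random source pixels so every neighborhood is defined;
    # border pixels keep this initialization in this simplified sketch
    out = source[rng.integers(0, sh, (out_h, out_w)),
                 rng.integers(0, sw, (out_h, out_w))].astype(float)

    # causal mask: ignore the current pixel and everything to its right
    mask = np.ones((half + 1, 2 * half + 1), dtype=bool)
    mask[-1, half:] = False

    for y in range(half, out_h):
        for x in range(half, out_w - half):
            patch = out[y - half:y + 1, x - half:x + half + 1]
            best_cost, best_val = np.inf, out[y, x]
            for sy in range(half, sh):
                for sx in range(half, sw - half):
                    cand = source[sy - half:sy + 1, sx - half:sx + half + 1]
                    cost = np.sum(((patch - cand) ** 2)[mask])   # compare already-filled pixels only
                    if cost < best_cost:
                        best_cost, best_val = cost, source[sy, sx]
            out[y, x] = best_val
    return out
```

As the slide notes, the actual matching process counts only synthesized pixels and the method grows the texture outward from a seed, choosing randomly among the near-best matches rather than always the single best.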

Figure from Texture Synthesis by Non-parametric Sampling, A. Efros and T.K. Leung, Proc. Int. Conf. Computer Vision, 1999 copyright 1999, IEEE

FRAME Model textures with filters, random fields, and maximum entropy A set of filters is selected from a general filter bank to capture features of the texture, and their response histograms are stored The maximum entropy principle is employed to derive a distribution p(I) A stepwise algorithm is proposed to choose the filters from the general filter bank The resulting model is a Markov random field (MRF) model, but with a much enriched vocabulary and hence much stronger descriptive ability than earlier MRF models A Gibbs sampler is adopted to synthesize texture images by drawing typical samples from p(I)
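
As I recall the FRAME formulation (Zhu, Wu, and Mumford), the maximum-entropy distribution has the Gibbs form below, where H^{(α)}(I) is the histogram of responses of the α-th selected filter and the λ^{(α)} are chosen so the model's expected histograms match those observed in the example texture; treat this as a sketch of the published model rather than the lecture's exact notation:

```latex
p(I;\Lambda) \;=\; \frac{1}{Z(\Lambda)}\,
\exp\Big\{-\sum_{\alpha=1}^{K}\big\langle \lambda^{(\alpha)},\, H^{(\alpha)}(I)\big\rangle\Big\}.
```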

Variations Texture synthesis at multiple scales Texture synthesis on surfaces Texture synthesis by tiles “Analogous” texture synthesis

Dynamic texture Model the underlying scene dynamics –Linear dynamic systems (linear model) –Manifold learning (nonlinear model) Used for synthesis of video textures