Johann Radon Institute for Computational and Applied Mathematics: www.ricam.oeaw.ac.at
Mathematische Grundlagen in Vision & Grafik (710.100)

1 Mathematische Grundlagen in Vision & Grafik (710.100)
More appropriate title: Scale Space and PDE Methods in Image Analysis and Processing
Arjan Kuijper
Johann Radon Institute for Computational and Applied Mathematics (RICAM)
Austrian Academy of Sciences
Altenbergerstraße 56, A-4040 Linz, Austria

2 Contents
Image analysis & processing deals with the investigation of images and with performing specific tasks on them, such as enhancement, denoising, deblurring, and segmentation. Commonly used mathematical methods are presented and discussed. The focus is on the axiomatic choice of the models, their mathematical properties, and their practical use.

3 Image analysis & processing
As image analysis and processing is a mixture of several disciplines, such as physics, mathematics, vision, computer science, and engineering, this course is aimed at a broad audience. Only basic knowledge of analysis is assumed, and the necessary mathematical tools will be outlined during the meetings.
Related fields: Computer Vision, Human Perception, Mathematics, Medicine, Computer Science, Electrical Engineering, …

4 Some key words
Images & Observations:
–Scale space, regularization, distributions.
Filtering:
–Edge detection, enhancement, Wiener, Fourier, …
Objects:
–Differential structure, invariants, feature detection.
Deep structure:
–Catastrophes & multi-scale hierarchy.
Variational Methods & Partial Differential Equation Methods:
–Perona-Malik, Anisotropic Diffusion, Total Variation, Mumford-Shah.
Curve Evolution:
–Normal Motion, Mean Curvature Motion, Euclidean Shortening Flow.

5 Contents
–Introduction, Axioms, Gaussian kernel
–Derivatives, deblurring, Differential structure I
–Differential structure II, invariants, Perona-Malik, Total Variation
–Deep structure
–Mean Curvature Motion, Mumford-Shah
–Presentations (three sessions)

6 Examination
Investigation and public presentation of recent work in image analysis, drawn from the course literature:
–Front-End Vision and Multi-scale Image Analysis, B. M. ter Haar Romeny, Kluwer Academic Publishers:
Chapter 17: Optic Flow
Chapter 18: Color Differential Structure
Chapter 19: Steerable Kernels
–Handbook of Mathematical Models in Computer Vision, edited by N. Paragios, Y. Chen and O. Faugeras, Springer, 2005:
Chapter 1: Diffusion Filters and Wavelets
Chapter 2: Total Variation Image Restoration
Chapter 3: PDE-Based Image and Surface Inpainting
–…
A written exam (questions) on the contents of the course.

7 Introduction, Axioms

8 Introduction
Apertures and the notion of scale:
–Observations and the size of apertures
–Mathematics, physics, and vision
–We blur by looking
–A critical view on observations
Taken from B. M. ter Haar Romeny, Front-End Vision and Multi-scale Image Analysis, Dordrecht: Kluwer Academic Publishers, Chapter 1.

9 Observations and the size of apertures
What is a cloud? Observations are always made by integrating some physical property with a measurement device.

10 Measurements
A typical image:

11 Mathematics, physics, and vision
Observations: math vs. physics.
–Objects have a size. Points don't exist in reality.
–Objects live on a range of sizes: they contain several scales.
–Objects are measured by some device: cameras, the eye, …
–Devices are finite. They have a minimum and a maximum detection range: the inner and outer scale. These determine the spatial resolution.
–The device measures a hierarchy of structures.

12 From Wikipedia: Powers of Ten
“Powers of Ten is a 1977 short documentary film which depicts the relative scale of the Universe in factors of ten (see also logarithmic scale and order of magnitude). It was written and directed by Charles and Ray Eames. The idea for the film appears to have come from the 1957 book Cosmic View by Kees Boeke.”

13 We blur by looking

14 The visual system
We see multi-scale:
–The images only contain two values (black and white).
–We regard them as grey-level images, or see structure.

15 A critical view on observations
–Infinite resolution is impossible: we cannot measure at infinite resolution.
–Take uncommitted observations: no bias, no knowledge, no memory. “We know nothing”, at least at the first stage; refine later on.
–Allow different scales: there is more than just pixels.
–View all scales: there is no preferred size.
–Noise is part of the measurement: in a measurement, noise can only be separated from the observation if we have a model of the structures in the image, a model of the noise, or a model of both.

16 Spurious resolution
Don't trust the grid size.

17 You don't see what you see
Don't trust the resolution or nearest-neighbor interpolation.
–What does a detector of 3-pixel circular size detect?
–Do you see the image as it is? Or did you see it in a modified way, with a different intrinsic size?
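The "square pixel" effect of nearest-neighbor zooming can be sketched in a few lines. This is our illustration, not from the slides; the 1-D ramp and the helper names `upsample_nearest` and `upsample_linear` are illustrative choices.

```python
def upsample_nearest(samples, factor):
    # Square-pixel zoom: repeat each sample.
    return [s for s in samples for _ in range(factor)]

def upsample_linear(samples, factor):
    # Piecewise-linear interpolation between samples.
    out = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        out.extend(a + (b - a) * k / factor for k in range(factor))
    out.append(samples[-1])
    return out

ramp = [0.0, 1.0, 2.0, 3.0]                  # a smooth intensity ramp
nn = upsample_nearest(ramp, 4)
li = upsample_linear(ramp, 4)
grad_nn = [nn[i + 1] - nn[i] for i in range(len(nn) - 1)]
grad_li = [li[i + 1] - li[i] for i in range(len(li) - 1)]
# Nearest neighbor turns the ramp into flat plateaus broken by jumps
# (gradients of 0 and 1 instead of a constant 0.25): edges that were
# never present in the data, i.e. spurious resolution.
```

Linear interpolation keeps the gradient constant; the nearest-neighbor staircase is exactly the artificial edge structure the slide warns about.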

18 Summary
Observations are necessarily made through a finite aperture.
–Making this aperture infinitesimally small is not physically realistic.
–The size of the aperture determines a hierarchy of structures, which occur naturally in (natural) images.
–The visual system exploits a wide range of such observation apertures in the front-end simultaneously, in order to capture the information at all scales.
Observed noise is part of the observation.
–There is no way to separate the noise from the data if a model of the data, a model of the noise, or a model of both is absent.
The aperture cannot take just any form.
–An example of a wrong aperture is the square pixel so often used when zooming in on images.
–Such a representation gives rise to edges that were never present in the original image. This artificial extra information is called 'spurious resolution'.

19 Introduction, Axioms
(Let's have a short break first.)
(What about the official Powers of Ten movie?)

20 Axioms
Foundations of scale space:
–Constraints for an uncommitted front-end
–Axioms of a visual front-end
–Axiomatic derivation of the Gaussian kernel
–Scale space from causality
–Scale space from entropy maximization
–Derivatives of sampled, observed data
–Scale space stack
Taken from B. M. ter Haar Romeny, Front-End Vision and Multi-scale Image Analysis, Dordrecht: Kluwer Academic Publishers, Chapter 2.

21 Constraints for an uncommitted front-end
The different axiomatic scale-space formulations (columns I1, I2, I3, O, K, Y, B, L1, F1, A, P, N, L2, F2 of the original table, one per derivation, each valid for a particular dimension: 1, 2, 1 and 2, >1, or N) each select a subset of the following constraints:
–Convolution kernel
–Semigroup property
–Locality
–Regularity
–Infinitesimal generator
–Maximum loss principle
–Causality
–Nonnegativity
–Tikhonov regularization
–Average grey-level invariance
–Flat kernel for t → ∞
–Isometry invariance
–Homogeneity & isotropy
–Separability
–Scale invariance

22 Axioms of a visual front-end
Uncommitted assumptions:
1. Scale invariance (no preferred scale or size)
2. Spatial shift invariance (no preferred location)
3. Isotropy (no preferred orientation)
4. Linearity (no memory or model)
5. Separability (for the sake of computational ease)

23 Axioms of a visual front-end
Physical properties: luminance L [candela/meter²] versus position x [meters]; intensity and spatial extent carry different physical dimensions.
Pi-theorem: physical laws must be independent of the choice of the fundamental parameters.
1. Scale invariance: the filter may only depend on a dimensionless combination of spatial frequency ω and aperture size σ; in the Fourier domain,
L̂(ω)/L̂₀(ω) = Ĝ(ωσ).

24 An uncommitted front-end
2. Linear shift invariance: the observation is a convolution, L(x) = (L₀ * G)(x), which in the Fourier domain equals multiplication: L̂(ω) = L̂₀(ω) Ĝ(ω).
3. Isotropy: consider only the length of ωσ, so Ĝ = Ĝ(|ωσ|).
4. Linearity: blurring a blurred image must again be a blurring of the original, Ĝ(ωσ₁) Ĝ(ωσ₂) = Ĝ(ωσ₃), which implies an exponential form: Ĝ(ωσ) = exp(αₚ (ωσ)ᵖ).
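The Fourier-domain statement can be checked numerically. The sketch below (our illustration; the midpoint-rule quadrature and truncation at ten standard deviations are assumptions) verifies that the transform of the normalized Gaussian depends only on the dimensionless product ωσ and, with the α₂ = −1/2 normalization of the next slide, equals exp(−(ωσ)²/2).

```python
import math

def gauss(x, sigma):
    # Normalized Gaussian kernel.
    return math.exp(-x * x / (2 * sigma * sigma)) / math.sqrt(2 * math.pi * sigma * sigma)

def gauss_hat(omega, sigma, n=4000):
    # Midpoint-rule Fourier integral of the (even, real) Gaussian,
    # truncated at 10 standard deviations.
    a = 10.0 * sigma
    dx = 2 * a / n
    return sum(gauss(-a + (k + 0.5) * dx, sigma)
               * math.cos(omega * (-a + (k + 0.5) * dx))
               for k in range(n)) * dx

# Scale invariance in the Fourier domain: gauss_hat depends only on the
# product omega * sigma, and equals exp(-(omega * sigma)**2 / 2).
```

For example, (ω, σ) = (2, 1) and (1, 2) give the same value, exp(−2), which is the scale-invariance property the derivation rests on.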

25 An uncommitted front-end
5. Separability: in D dimensions the kernel must factor into 1-D kernels, which forces p = 2.
–Outer scale (image averages): the kernel must decay for large ωσ, so α₂ < 0, say −1/2 for later convenience.
This gives Ĝ(ωσ) = exp(−(ωσ)²/2).

26 An uncommitted front-end
Back to the spatial domain, normalizing the kernel:
G(x; σ) = exp(−x²/(2σ²)) / √(2πσ²).
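A minimal sketch of sampling and normalizing this kernel (our code, not from the slides; the 4σ truncation radius and the periodic test signal are assumptions). Normalization is exactly the average grey-level invariance constraint: the weights sum to one, so blurring preserves the mean.

```python
import math

def gaussian_kernel(sigma, radius=None):
    # Sample the Gaussian on a unit grid and normalize so the weights
    # sum to one (average grey-level invariance).
    if radius is None:
        radius = int(math.ceil(4 * sigma))
    w = [math.exp(-i * i / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def blur(signal, sigma):
    # Circular (periodic) convolution with the normalized kernel.
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    n = len(signal)
    return [sum(k[j + r] * signal[(i + j) % n] for j in range(-r, r + 1))
            for i in range(n)]

sig = [0.0, 0.0, 5.0, 1.0, 0.0, 2.0, 0.0, 0.0]
out = blur(sig, 1.5)
# The kernel sums to 1, so the total (and mean) grey value is unchanged.
```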

27 Scale space from causality
Whatever you do to this image, you don't want white regions introduced inside the black ones. No new level lines are to be created: causality.

28 Scale space from causality
Causality: non-enhancement of local extrema.
Let ΔL = L_xx + L_yy; ΔL equals the sum of the eigenvalues of the Hessian.
At a spatial maximum ΔL < 0 and causality demands L_t < 0; at a minimum ΔL > 0 and L_t > 0. So ΔL · L_t > 0.
Choose L_t = α ΔL, α > 0. With α = 1: L_t = ΔL.
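Non-enhancement can be demonstrated with one explicit Euler step of L_t = ΔL on a tiny 2-D image (our sketch; unit grid spacing and the step size dt = 0.2 are assumptions): the value at an isolated maximum must go down, its neighbours up.

```python
def laplacian(img, i, j):
    # Five-point stencil for L_xx + L_yy on a unit grid.
    return (img[i - 1][j] + img[i + 1][j] + img[i][j - 1] + img[i][j + 1]
            - 4 * img[i][j])

def diffuse_step(img, dt=0.2):
    # One explicit Euler step of L_t = laplacian(L) on the interior.
    out = [row[:] for row in img]
    for i in range(1, len(img) - 1):
        for j in range(1, len(img[0]) - 1):
            out[i][j] = img[i][j] + dt * laplacian(img, i, j)
    return out

peak = [[0.0] * 5 for _ in range(5)]
peak[2][2] = 1.0                      # isolated spatial maximum
after = diffuse_step(peak)
# At the maximum the Laplacian is negative, so the value decreases;
# the mass flows to the neighbours, which increase.
```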

29 Scale space from causality
L_t(x,y;t) = ΔL(x,y;t), with initial condition L(x,y;0) = L₀(x,y).
The general solution (Green's function) of this diffusion equation is convolution of the original image with a Gaussian:
G(x,y;t) = exp(−(x²+y²)/(4t)) / (4πt).
Note: here one uses 4t rather than 2σ².
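This equivalence can be checked numerically in 1-D: time-stepping L_t = L_xx from an impulse should reproduce the Green's function G(x;t) = exp(−x²/(4t))/√(4πt). The finite-difference scheme, grid, and step sizes below are our illustrative choices, so only approximate agreement is expected.

```python
import math

def heat_steps(signal, dt, steps):
    # Explicit Euler time stepping of L_t = L_xx on a unit grid
    # (borders handled by replication).
    s = signal[:]
    n = len(s)
    for _ in range(steps):
        s = [s[i] + dt * (s[max(i - 1, 0)] - 2 * s[i] + s[min(i + 1, n - 1)])
             for i in range(n)]
    return s

n = 81
impulse = [0.0] * n
impulse[n // 2] = 1.0
t = 5.0
numeric = heat_steps(impulse, dt=0.1, steps=50)          # 50 * 0.1 = t
green = [math.exp(-(i - n // 2) ** 2 / (4 * t)) / math.sqrt(4 * math.pi * t)
         for i in range(n)]
err = max(abs(a - b) for a, b in zip(numeric, green))
# err is small: the discrete heat flow approximates Gaussian convolution.
```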

30 Scale space from entropy maximization
A statistical measure for the disorder of the filter g is given by its entropy (1-D for simplicity):
E = −∫ g(x) log g(x) dx.
If the entropy is maximized, this states something like "there is nothing ordered" (we know nothing). Obviously, there are some constraints.

31 Scale space from entropy maximization
Constraints:
–The function must be normalized; no global enhancement: ∫ g(x) dx = 1.
–The mean of the measurement is at the location where we measure, say 0: ∫ x g(x) dx = 0.
–There is a standard deviation, say σ: ∫ x² g(x) dx = σ².
–The function is positive; it's a real object: g(x) > 0.

32 Scale space from entropy maximization
Maximize the entropy with Lagrange multipliers for the constraints:
E = −∫ g log g dx + λ₁ (∫ g dx − 1) + λ₂ ∫ x g dx + λ₃ (∫ x² g dx − σ²).
Set the variational derivative with respect to g(x) equal to zero:
−1 − log g(x) + λ₁ + λ₂ x + λ₃ x² = 0,
so
g(x) = exp(−1 + λ₁ + λ₂ x + λ₃ x²).

33 Scale space from entropy maximization
∫ x g(x) dx = 0 => λ₂ = 0
∫ x² g(x) dx = σ² => λ₃ = −1/(2σ²)
g(x) > 0 => OK
∫ g(x) dx = 1 => λ₁ = log(e/√(2πσ²))
=> g(x) = exp(−1 + 1 − log √(2πσ²) − x²/(2σ²)) = exp(−x²/(2σ²)) / √(2πσ²)
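That the Gaussian wins this maximization can be spot-checked numerically: among densities with the same mean and variance, any other choice has lower entropy. The comparison density (a uniform with variance 1) and the quadrature grid below are our choices, not from the slides.

```python
import math

def entropy(density, a, b, n=200000):
    # -int g log g via midpoint rule on [a, b]; assumes g >= 0 there.
    dx = (b - a) / n
    h = 0.0
    for k in range(n):
        g = density(a + (k + 0.5) * dx)
        if g > 0:
            h -= g * math.log(g) * dx
    return h

gauss = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # sigma = 1
# Uniform density with the same mean (0) and variance (1): width 2*sqrt(3).
w = 2 * math.sqrt(3)
uniform = lambda x: 1.0 / w if abs(x) <= w / 2 else 0.0

h_gauss = entropy(gauss, -10, 10)
h_unif = entropy(uniform, -10, 10)
# h_gauss matches the closed form 0.5*log(2*pi*e*sigma**2) and exceeds h_unif.
```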

34 Derivatives of sampled, observed data
The Gaussian kernel and all of its partial derivatives form the unique set of kernels for a front-end visual system that satisfies the constraints: no preference for location, scale, or orientation, and linearity. It is a one-parameter family of kernels, where the scale is the free parameter.
The derivative of the observed data is given by
∂/∂x (L₀ * G)(x; σ),
which equals
(L₀ * ∂G/∂x)(x; σ),
since differentiation commutes with convolution.
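The commutation can be verified numerically by taking the data itself to be a Gaussian blob, so the blurred derivative is known in closed form (scales add in quadrature under Gaussian convolution). The scales s1, s2 and the truncation radius are our illustrative choices.

```python
import math

def g(x, s):
    # Normalized 1-D Gaussian of scale s.
    return math.exp(-x * x / (2 * s * s)) / math.sqrt(2 * math.pi * s * s)

def gx(x, s):
    # Its first derivative.
    return -x / (s * s) * g(x, s)

s1, s2 = 2.0, 1.5
s3 = math.sqrt(s1 * s1 + s2 * s2)   # scales add in quadrature

def deriv_of_observed(x0):
    # (L0 * dG/dx)(x0): convolve the data with the derivative of the kernel.
    return sum(g(x0 - u, s1) * gx(u, s2) for u in range(-15, 16))

err = max(abs(deriv_of_observed(x0) - gx(x0, s3)) for x0 in range(-6, 7))
# err is tiny: differentiating the kernel differentiates the observation.
```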

35 Derivatives of sampled, observed data
Derivatives of a Gaussian: the first-order derivative of an image gives edges.
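A hypothetical 1-D sketch of this edge detector (the step signal, scale, truncation radius, and border replication are our assumptions): convolving a step edge with the Gaussian derivative produces a response whose extremum sits at the edge.

```python
import math

def dgauss(x, sigma):
    # Derivative of the normalized Gaussian: the edge-detection kernel.
    return (-x / sigma**2 * math.exp(-x * x / (2 * sigma**2))
            / math.sqrt(2 * math.pi * sigma**2))

step = [0.0] * 20 + [1.0] * 20       # step edge between index 19 and 20
sigma, r = 2.0, 8
response = [sum(step[min(max(i - u, 0), len(step) - 1)] * dgauss(u, sigma)
                for u in range(-r, r + 1))
            for i in range(len(step))]
edge = max(range(len(response)), key=lambda i: abs(response[i]))
# The strongest response is localized at the step: a detected edge.
```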

36 Gaussian scale space
L(x; σ) = L₀(x) * exp(−‖x‖²/(2σ²)) / √((2πσ²)^D), with D the spatial dimension.
L(x; σ) is called the Gaussian scale space image.
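The steps above can be sketched as a small 1-D scale-space stack (our code; the periodic test signal and the scale ladder 1, 2, 4, 8 are illustrative). With a normalized kernel every blurred value is a convex combination of the input, so, in line with causality, the maximum never grows toward coarser scales.

```python
import math

def kernel(sigma):
    # Sampled, normalized Gaussian (truncated at 4 sigma).
    r = int(math.ceil(4 * sigma))
    w = [math.exp(-j * j / (2.0 * sigma * sigma)) for j in range(-r, r + 1)]
    s = sum(w)
    return [v / s for v in w]

def blur(signal, sigma):
    # Circular convolution: each output is a convex combination of inputs.
    k = kernel(sigma)
    r = len(k) // 2
    n = len(signal)
    return [sum(k[j + r] * signal[(i + j) % n] for j in range(-r, r + 1))
            for i in range(n)]

n = 120
base = [math.sin(math.pi * i / 12) + 0.5 * math.sin(math.pi * i / 3)
        for i in range(n)]
stack = [base] + [blur(base, s) for s in (1, 2, 4, 8)]   # the scale space stack
peaks = [max(level) for level in stack]
# Causality: coarser levels of the stack never have a higher maximum.
```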

37 Summary
We have specific physical constraints for the early-vision front-end kernel. We are able to set up a 'first principles' framework from which the exact sensitivity function of the measurement aperture can be derived. There exist many such derivations for an uncommitted kernel, all leading to the same unique result: the Gaussian kernel.
–The assumptions of linearity, isotropy, homogeneity, and scale invariance;
–The principle of causality;
–Maximization of the entropy.
Differentiation of discrete data is done by convolution with the derivative of the observation kernel. This means that differentiation can never be done without blurring the data somewhat.

38 Powers of Ten revisited
Short popularized version.

