Johann Radon Institute for Computational and Applied Mathematics: 1/33 Signal- und Bildverarbeitung, Image Analysis and Processing Arjan Kuijper Johann Radon Institute for Computational and Applied Mathematics (RICAM) Austrian Academy of Sciences Altenbergerstraße 56 A-4040 Linz, Austria

Summary of the previous weeks
Invariant differential feature detectors are special, mostly polynomial, combinations of image derivatives that are invariant under some chosen group of transformations. The derivatives are easily calculated from the image with the multi-scale Gaussian derivative kernels. The notion of invariance is crucial for geometric relevance: non-invariant properties have no value in general feature detection tasks. A convenient paradigm for calculating features invariant under Euclidean coordinate transformations is the notion of gauge coordinates (v, w). Any combination of derivatives with respect to v and w is invariant under Euclidean transformations.

Today
The differential structure of images
– Third order image structure: T-junction detection
– Fourth order image structure: junction detection
– Scale invariance and natural coordinates
– Irreducible invariants
Geometry-driven diffusion
– Adaptive smoothing and image evolution
– Nonlinear diffusion equations
– The Perona & Malik equation
– Scale-space implementation of the P&M equation
– The P&M equation is ill-posed
Taken from B. M. ter Haar Romeny, Front-End Vision and Multi-scale Image Analysis, Dordrecht: Kluwer Academic Publishers, Chapters 6/21.

Gauge coordinates
Recall:
\[
\frac{\partial}{\partial w} = \frac{L_x\,\partial_x + L_y\,\partial_y}{\sqrt{L_x^2 + L_y^2}}
= \Big( \tfrac{L_x}{\sqrt{L_x^2+L_y^2}},\; \tfrac{L_y}{\sqrt{L_x^2+L_y^2}} \Big)\cdot\nabla ,
\qquad
\frac{\partial}{\partial v} = \frac{L_y\,\partial_x - L_x\,\partial_y}{\sqrt{L_x^2 + L_y^2}}
= \Big( \tfrac{L_y}{\sqrt{L_x^2+L_y^2}},\; -\tfrac{L_x}{\sqrt{L_x^2+L_y^2}} \Big)\cdot\nabla .
\]
Do the trick: with
\[
\cos\phi = \frac{L_x}{\sqrt{L_x^2+L_y^2}}, \qquad \sin\phi = \frac{L_y}{\sqrt{L_x^2+L_y^2}},
\]
this becomes
\[
\partial_v = (\sin\phi,\; -\cos\phi)\cdot\nabla, \qquad \partial_w = (\cos\phi,\; \sin\phi)\cdot\nabla .
\]
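As an illustration, the second-order gauge derivatives used throughout these slides can be computed from Gaussian derivative kernels. This is a minimal sketch, not code from the slides: the function name, the default scale, and the use of SciPy's `gaussian_filter` are our choices; the Cartesian expressions for L_vv, L_vw, L_ww follow from the definitions of v and w above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gauge_second_order(L, sigma=2.0):
    """Second-order gauge derivatives L_vv, L_vw, L_ww computed from
    multi-scale Gaussian derivatives (axis 0 is y, axis 1 is x)."""
    d = lambda oy, ox: gaussian_filter(L, sigma, order=(oy, ox))
    Lx, Ly = d(0, 1), d(1, 0)
    Lxx, Lxy, Lyy = d(0, 2), d(1, 1), d(2, 0)
    g = Lx**2 + Ly**2 + 1e-12                      # |grad L|^2, guarded
    Lvv = (Ly**2 * Lxx - 2 * Lx * Ly * Lxy + Lx**2 * Lyy) / g
    Lvw = (Lx * Ly * (Lxx - Lyy) + (Ly**2 - Lx**2) * Lxy) / g
    Lww = (Lx**2 * Lxx + 2 * Lx * Ly * Lxy + Ly**2 * Lyy) / g
    return Lvv, Lvw, Lww
```

On a radially symmetric image such as L = x² + y², L_vv is the tangential second derivative along the circular isophotes and L_ww the radial one, while L_vw vanishes; that makes a convenient sanity check.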

Third order image structure: T-junction detection
T-junctions in the intensity landscape of natural images typically occur at occlusion points. Occlusion points are those points where a contour ends or emerges because another object lies in front of it.

Third order image structure: T-junction detection
When we zoom in on the T-junction of an observed image and locally inspect the isophote structure, we see that at a T-junction the derivative of the isophote curvature κ in the direction perpendicular to the isophotes (the w direction) is high.

Third order image structure: T-junction detection
When we study the curvature of the isophotes in the middle of the image, at the location of the T-junction, we see the isophotes 'sweep' from highly curved to almost straight for decreasing intensity. So the geometric reasoning is: "the isophote curvature changes a lot when we traverse the image in the w direction". It seems to make sense to study
\[
\frac{\partial \kappa}{\partial w} = \frac{-\,L_{vvw}\,L_w + L_{vv}\,L_{ww} + 2\,L_{vw}^2}{L_w^2}.
\]

Third order image structure: T-junction detection
To avoid singularities at vanishing gradients, caused by the division by L_w^2, we multiply through by L_w^2 and use as our T-junction detector
\[
T = \frac{\partial \kappa}{\partial w}\, L_w^2 = -\,L_{vvw}\,L_w + L_{vv}\,L_{ww} + 2\,L_{vw}^2 .
\]
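A numerical sketch of this detector (our construction, not the slides' code): compute the isophote curvature κ = −L_vv/L_w from Gaussian derivatives, take its derivative in the w direction numerically instead of via the closed-form third-order expression, and multiply by L_w² to remove the singularity. The function name and scale are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def t_junction_detector(L, sigma=2.0):
    """kappa_w * L_w^2, with kappa_w taken as a numerical directional
    derivative of the curvature field along w = grad L / |grad L|."""
    d = lambda oy, ox: gaussian_filter(L, sigma, order=(oy, ox))
    Lx, Ly = d(0, 1), d(1, 0)
    Lxx, Lxy, Lyy = d(0, 2), d(1, 1), d(2, 0)
    g = Lx**2 + Ly**2 + 1e-12
    Lw = np.sqrt(g)
    Lvv = (Ly**2 * Lxx - 2 * Lx * Ly * Lxy + Lx**2 * Lyy) / g
    kappa = -Lvv / Lw                       # isophote curvature
    # directional derivative of kappa along the gradient direction w
    kx = gaussian_filter(kappa, sigma, order=(0, 1))
    ky = gaussian_filter(kappa, sigma, order=(1, 0))
    kappa_w = (kx * Lx + ky * Ly) / Lw
    return kappa_w * Lw**2                  # singularity-free detector
```

Taking the derivative of the κ field numerically avoids expanding the third-order gauge derivative L_vvw into Cartesian derivatives by hand, at the cost of one extra smoothing pass.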

Fourth order image structure: junction detection
Yet another order higher: find in a checkerboard the crossings where 4 edges meet. When we study the fourth order local image structure, we consider the fourth order polynomial terms from the local Taylor expansion. The fundamental theorem of algebra states that a polynomial is fully described by its roots. How well all roots coincide, given by the discriminant, is a particular invariant condition. The discriminant of the second order image structure is just the determinant of the Hessian matrix, i.e. the Gaussian curvature.

Fourth order image structure: junction detection
The fourth order discriminant is slightly more complicated:

Scale invariance and natural coordinates
The dimensionless coordinate \(\tilde{x} = x/\sigma\) is termed the natural coordinate. This implies that the derivative operator in natural coordinates carries a scaling factor:
\[
\frac{\partial}{\partial \tilde{x}} = \sigma\, \frac{\partial}{\partial x}.
\]
Compare L_w and the natural \(L_{\tilde{w}} = \sigma L_w\):

Irreducible invariants
It has been shown by Hilbert that any invariant of finite order can be expressed as a polynomial function of a set of irreducible invariants. For e.g. scalar images these invariants form the fundamental set of image primitives in which all local intrinsic properties can be described. There are only a small number of irreducible invariants for low order: e.g. for 2D images up to second order there are only 5 such irreducibles. One mechanism to find the irreducible set is gauge coordinates: up to second order these are L, L_w, L_vv, L_vw and L_ww.

Tensors
There are many ways to set up an irreducible basis. In tensor notation, tensor indices denote partial derivatives and run over the dimensions, so \(L_i L_i = L_x^2 + L_y^2\) and \(L_{ii} = L_{xx} + L_{yy}\). When indices come in pairs, summation over the dimensions is implied (the so-called Einstein summation convention, or contraction).
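The tensor contractions can be spelled out directly with `np.einsum`, which implements exactly this summation convention. A sketch under our own naming; the returned quantities, together with L itself, are the usual five second-order irreducibles in 2D.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def irreducibles_2d(L, sigma=2.0):
    """Second-order irreducible invariants via Einstein summation:
    L_i L_i, L_ii, L_i L_ij L_j and L_ij L_ij."""
    d = lambda oy, ox: gaussian_filter(L, sigma, order=(oy, ox))
    Li = np.stack([d(0, 1), d(1, 0)])             # shape (2, H, W)
    Lij = np.array([[d(0, 2), d(1, 1)],
                    [d(1, 1), d(2, 0)]])          # shape (2, 2, H, W)
    LiLi = np.einsum('ihw,ihw->hw', Li, Li)       # gradient magnitude^2
    Lii = np.einsum('iihw->hw', Lij)              # Laplacian (trace)
    LiLijLj = np.einsum('ihw,ijhw,jhw->hw', Li, Lij, Li)
    LijLij = np.einsum('ijhw,ijhw->hw', Lij, Lij)
    return LiLi, Lii, LiLijLj, LijLij
```

Each einsum string mirrors the index notation on the slide: repeated indices (i, j) are contracted, the free pixel indices (h, w) survive.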

Tensors
Each of these irreducible invariants cannot be expressed in the others. Any invariant property up to some finite order can be expressed as a combination of these irreducibles. Isophote curvature, a second order local invariant feature, is expressed as \(\kappa = -L_{vv}/L_w\). These irreducibles form a basis for the differential invariant structure. The set of 5 irreducible grayvalue invariants in 2D images has been exploited to classify local image structure for statistical object recognition.

Geometry-driven diffusion

Adaptive Smoothing and Image Evolution
Calculate edges and other differential invariants at a range of scales.
– Select a fine or a coarse scale?
– Larger scale: improved reduction of noise and the appearance of more prominent structure, but reduced localization accuracy.
– Linear, isotropic diffusion cannot preserve the position of the differential invariant features over scale.
Make the diffusion (blurring) locally adaptive to the structure of the image:
– preserve edges
– reduce the noise

Adaptive Smoothing and Image Evolution
This adaptive filtering is possible via three classes of (all nonlinear) mathematical approaches, which are in essence equivalent:
1. Nonlinear partial differential equations (PDEs), i.e. nonlinear diffusion equations which evolve the luminance function as some function of a flow. This is known as the 'nonlinear PDE approach';
2. Curve evolution of the isophotes (curves in 2D, surfaces in 3D) in the image. This is known as the 'curve evolution approach';
3. Variational methods that minimize some energy functional on the image. This is known as the 'energy minimization approach' or 'variational approach'.

Geometric reasoning
The word 'nonlinear' implies the inclusion of a nonlinearity in the algorithm. This can be done in an infinite variety of ways, and it takes geometric reasoning to come up with the right nonlinearity for the task. We can include knowledge about
– a preferred direction of diffusion,
– reduced diffusion at edges,
– reduced diffusion at points of high curvature,
– etc.

Nonlinear Diffusion Equations
The introduction of a conductivity coefficient c in the diffusion equation makes it possible to make the diffusion adaptive to local image structure:
\[
\frac{\partial L}{\partial s} = \nabla \cdot \left( c\, \nabla L \right),
\]
where the function c is a function of local image differential structure, i.e. depends on local partial derivatives. The change of luminance with increasing scale is a divergence (\(\nabla\cdot\)) of some flow \(c\,\nabla L\), or flux.

The Perona & Malik Equation
Perona and Malik [1990] proposed to make c a function of the gradient magnitude in order to reduce the diffusion at the location of edges:
\[
\frac{\partial L}{\partial s} = \nabla \cdot \left( c(|\nabla L|)\, \nabla L \right),
\]
with two possible choices for c:
\[
c_1 = e^{-\frac{|\nabla L|^2}{k^2}}, \qquad
c_2 = \frac{1}{1 + \frac{|\nabla L|^2}{k^2}}.
\]
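The two conductivity choices translate directly into code; a minimal sketch (function names are ours), taking the gradient magnitude and the parameter k:

```python
import numpy as np

def c1(grad, k):
    """First Perona-Malik conductivity: exp(-|grad L|^2 / k^2)."""
    return np.exp(-(grad / k) ** 2)

def c2(grad, k):
    """Second Perona-Malik conductivity: 1 / (1 + |grad L|^2 / k^2)."""
    return 1.0 / (1.0 + (grad / k) ** 2)
```

Both equal 1 at zero gradient (pure diffusion in flat regions) and decay monotonically with gradient strength, which is what suppresses blurring across edges.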

Example
The conductivity coefficient in the Perona & Malik equation as a function of the parameter k. Gradient scale s = 2 pixels, image resolution 256 × 256. For higher k, only larger gradients are taken into account:

PDE formulation
The complete PDEs for the two choices of c:
\[
c_1:\quad L_t = \nabla \cdot \left( e^{-\frac{|\nabla L|^2}{k^2}}\, \nabla L \right), \qquad
c_2:\quad L_t = \nabla \cdot \left( \frac{\nabla L}{1 + \frac{|\nabla L|^2}{k^2}} \right).
\]

PDE formulation
Using gauge coordinates (note \(|\nabla L| = L_w\) and \(\Delta L = L_{vv} + L_{ww}\)):
\[
c_1:\quad L_t = \frac{1}{k^2}\, e^{-\frac{L_w^2}{k^2}} \left( k^2\, \Delta L - 2\, L_w^2\, L_{ww} \right), \qquad
c_2:\quad L_t = k^2\, \frac{k^2\, \Delta L + L_w^2 \left( L_{vv} - L_{ww} \right)}{\left( k^2 + L_w^2 \right)^2}.
\]
They arise from minimizing the functionals
\[
E_{PM1}(L) = -\int \tfrac{1}{2}\, k^2\, e^{-\frac{|\nabla L|^2}{k^2}}\, d\Omega, \qquad
E_{PM2}(L) = \int \tfrac{1}{2}\, k^2 \log \left( k^2 + L_w^2 \right) d\Omega.
\]
The limit \(k \to \infty\) yields the heat equation \(L_t = \Delta L\).
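The gauge-coordinate form can be spot-checked symbolically. The following sketch (our verification, using SymPy) expands both the divergence form \(\nabla\cdot(c\,\nabla L)\) for \(c_2 = k^2/(k^2+|\nabla L|^2)\) and the gauge-coordinate expression stated on this slide, for one arbitrary smooth test function:

```python
import sympy as sp

x, y = sp.symbols('x y')
k = sp.Integer(2)
L = sp.sin(x) * sp.cos(2 * y)                     # arbitrary smooth L

Lx, Ly = sp.diff(L, x), sp.diff(L, y)
Lxx, Lxy, Lyy = sp.diff(L, x, 2), sp.diff(L, x, y), sp.diff(L, y, 2)
g = Lx**2 + Ly**2                                 # L_w^2

c = k**2 / (k**2 + g)                             # conductivity c2
lhs = sp.diff(c * Lx, x) + sp.diff(c * Ly, y)     # div(c grad L)

# second-order gauge derivatives in Cartesian form
Lww = (Lx**2 * Lxx + 2 * Lx * Ly * Lxy + Ly**2 * Lyy) / g
Lvv = (Ly**2 * Lxx - 2 * Lx * Ly * Lxy + Lx**2 * Lyy) / g
rhs = k**2 * (k**2 * (Lvv + Lww) + g * (Lvv - Lww)) / (k**2 + g) ** 2
```

Evaluating `lhs - rhs` at any point where the gradient does not vanish should give zero to machine precision.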

Scale-space implementation of the P&M equation
There is no analytical solution for these PDEs, so we rely on numerical methods to approximate the solution. There are many efficient and stable numerical schemes for the time evolution of an image governed by this type of divergence-of-a-flow PDE. The most straightforward numerical approximation of \(\partial L/\partial s = \nabla\cdot(c\,\nabla L)\) is the forward-Euler approximation
\[
L(s + ds) = L(s) + ds\, \nabla\cdot\left(c\,\nabla L\right),
\]
where \(dL = ds\,\nabla\cdot(c\,\nabla L)\) is the increment in L and ds is the (typically small) step size in scale: the evolution step size. Through iteration we can calculate the image at the required level of evolution, i.e. at the required level of adaptive blurring. (This evolution scale is the first of two scales involved.)
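The forward-Euler iteration above can be sketched as follows. This is our illustration, not the slides' code: the gradients entering the conductivity are regularized with Gaussian derivatives at an operator scale `sigma` (the regularization attributed to Catté et al. later in the lecture), and the divergence is taken with central differences.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perona_malik(L, k=10.0, ds=0.1, n_iter=10, sigma=1.0):
    """Forward-Euler evolution of L_s = div(c(|grad L|) grad L),
    with conductivity c1 = exp(-|grad L|^2 / k^2)."""
    L = np.asarray(L, dtype=float).copy()
    for _ in range(n_iter):
        Lx = gaussian_filter(L, sigma, order=(0, 1))   # dL/dx (axis 1)
        Ly = gaussian_filter(L, sigma, order=(1, 0))   # dL/dy (axis 0)
        c = np.exp(-(Lx**2 + Ly**2) / k**2)            # conductivity c1
        # divergence of the flux (c*Lx, c*Ly) via central differences
        div = np.gradient(c * Lx, axis=1) + np.gradient(c * Ly, axis=0)
        L += ds * div
    return L
```

Since c ≤ 1, the explicit step ds = 0.1 stays well below the stability limit of 0.25 for 2D diffusion on a unit grid.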

Scale-space implementation of the P&M equation
Obviously, the derivatives are computed by convolution with Gaussian derivative kernels. (This operator scale is the second scale involved.) A rule for the choice of k is difficult to give.
– It depends on the choice of which edges have to be enhanced and which have to be canceled.
– The histogram of gradient values (at s = 1) may give some clue to how much 'relative edge strength' is present in the image:

Scale-space implementation of the P&M equation
k determines the 'turnover' point of edge reduction versus enhancement. Four examples with operator scale s = 0.8 pixels, time step ds = 0.1, number of iterations = 10. From left to right: original, k = 5, k = 25, k = 75, k = 150.

Scale-space implementation of the P&M equation
We can define a contrast-to-noise ratio (CNR) for this particular image by taking two square (16 × 16) areas, one in the middle of the black disk and one in the lower left corner of the background. The CNR is defined as the difference of the signal-to-noise ratios (the mean divided by the variance of the intensity values) of the two representative areas:
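A minimal sketch of this measure, using the slide's definition of SNR as mean over variance (note many texts use mean over standard deviation instead); the box-tuple convention is ours:

```python
import numpy as np

def cnr(img, box_a, box_b):
    """CNR as defined on the slide: difference of the two regions'
    signal-to-noise ratios.  Boxes are (row0, row1, col0, col1)."""
    def snr(r0, r1, c0, c1):
        region = img[r0:r1, c0:c1]
        return region.mean() / region.var()
    return snr(*box_a) - snr(*box_b)
```

Tracking this number over the evolution iterations reproduces the experiment on the next slide: it first rises, then falls again when the diffusion runs too long.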

Don't run it too long
Clearly, the signal-to-noise ratio increases substantially during the evolution. But this cannot continue, of course, for physical reasons. When we continue the evolution until t = 20 (in units of iterations), we see that the gain is lost again. We need a stopping time!

The P&M equation is ill-posed
It is instructive to study the P&M equation in somewhat more detail. Let us look at how the diffusion process depends on the gradient strength, so we consider (in 1D for simplicity):
\[
L_t = \frac{\partial}{\partial x} \left( c(L_x)\, L_x \right).
\]
Suppose that the flow (or flux function) \(f(L_x) = c(L_x)\, L_x\) is decreasing with respect to \(L_x\) at some point \(x_0\); then, with \(a = -\partial f/\partial L_x > 0\), locally
\[
L_t = -a\, L_{xx}.
\]
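The flux and its turnover point can be checked numerically; a short sketch for the second conductivity choice (the variable names and the grid are ours):

```python
import numpy as np

# Flux f(Lx) = c(Lx) * Lx for c2 = 1 / (1 + Lx^2 / k^2).
# Where f increases in Lx the equation blurs; where f decreases it
# acts locally as an inverse (deblurring) heat equation.
k = 2.0
s = np.linspace(0.0, 10.0, 2001)        # gradient strength |L_x|
flux = s / (1.0 + (s / k) ** 2)
turnover = s[np.argmax(flux)]           # where f'(s) changes sign
```

For c2 the flux peaks exactly at |L_x| = k, with peak value k/2, confirming k's role as the turnover parameter.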

Deblurring
Locally we have an inverse heat equation, which is well known to be ill-posed. This heat equation locally blurs or deblurs, depending on the sign of the flux derivative. The flux function \(c(L_x)\,L_x\) increases for gradients below the turnover point and decreases above it (for \(c_2\) the turnover lies at \(|L_x| = k\), for \(c_1\) at \(|L_x| = k/\sqrt{2}\)). This implies that with k we can adjust the turnover point in the gradient strength, below which we have blurring, and above which we have deblurring.

Deblurring
The graphs of the flux \(c(L_x)\,L_x\) and of its derivative, for both choices of c, with k = 2. Note: the original formulation by Perona and Malik employed nearest-neighbor differences in 4 directions to calculate the local gradient strength. This introduces artifacts because there is a bias for direction. We now understand that the Gaussian derivative kernel is the appropriate regularized differential operator, which does not introduce a directional bias. This was first introduced by Catté, Lions, Morel and Coll [1992].

Summary
The diffusion can be made locally adaptive to image structure. Three mathematical approaches are discussed:
1. PDE-based nonlinear diffusion, where the luminance function evolves as the divergence of some flow;
2. Evolution of the isophotes, as an example of curve evolution;
3. Variational methods, minimizing an energy functional defined on the image.
The nonlinear PDEs involve local image derivatives and cannot be solved analytically. Adaptive smoothing requires geometric reasoning to define the influence of image structure on the diffusivity coefficient. The simplest equation is the one proposed by Perona & Malik, where the variable conduction is a function of the local edge strength. Strong gradient magnitudes prevent the blurring locally; the effect is edge-preserving smoothing. The Perona & Malik equation leads to deblurring (enhancing edges) for edges stronger than the turnover point set by k, and blurs weaker edges.

Next week
Non-linear diffusion: Total Variation
– Rudin–Osher(–Fatemi) model
– Denoising
– Edge preserving
– Energy minimizing
– Bounded variation
