
1 Edge Preserving Image Restoration Using the L1 Norm. Vivek Agarwal, The University of Tennessee, Knoxville

2 Outline
Introduction
Regularization based image restoration
– L2 norm regularization
– L1 norm regularization
Tikhonov regularization
Total Variation regularization
Least Absolute Shrinkage and Selection Operator (LASSO)
Results
Conclusion and future work

3 Introduction: Physics of Image Formation
[Block diagram: the true scene f(x', y') passes through the imaging system K(x, y, x', y') to produce g(x, y); the registration system adds noise, giving g(x, y) + noise. The forward process runs from cause to effect; restoration is the reverse process.]
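The forward process can be sketched in a few lines of Python. This is an illustration, not the talk's code: the image size, PSF width, and noise level are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
f = np.zeros((64, 64))
f[24:40, 24:40] = 1.0                               # true scene f(x', y'): a bright square

g = gaussian_filter(f, sigma=2.0)                   # imaging system K applied to f
g_noisy = g + 0.02 * rng.standard_normal(g.shape)   # registration noise added to g(x, y)
```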

4 Image Restoration
Image restoration is a subset of image processing, and it is a highly ill-posed problem.
Most image restoration algorithms use least squares.
L2 norm based algorithms produce smooth restorations, which are inaccurate if the image contains edges.
L1 norm algorithms preserve the edge information in the restored images, but they are slow.

5 Well-Posed Problem
In 1923, the French mathematician Hadamard introduced the notion of well-posed problems. According to Hadamard, a problem is called well-posed if:
1. A solution for the problem exists (existence).
2. This solution is unique (uniqueness).
3. This unique solution is stable under small perturbations in the data; in other words, small perturbations in the data cause only small perturbations in the solution (stability).
If at least one of these conditions fails, the problem is called ill-posed (or incorrectly posed) and demands special consideration.

6 Existence
To deal with non-existence we have to enlarge the domain where the solution is sought.
Example: a quadratic equation ax^2 + bx + c = 0 in general form has two solutions:
x = (-b ± √(b^2 - 4ac)) / (2a)
If the discriminant b^2 - 4ac is negative, there is no solution in the real domain, but there is a solution in the complex domain. Non-existence is harmful.
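A tiny illustration of this domain enlargement using Python's cmath; the coefficients are assumed for the example:

```python
import cmath

a, b, c = 1.0, 0.0, 1.0                  # x**2 + 1 = 0: no real root
disc = cmath.sqrt(b * b - 4 * a * c)     # complex square root restores existence
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)                             # (1j, -1j)
```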

7 Uniqueness
Non-uniqueness is usually caused by a lack or absence of information about the underlying model.
Example: neural networks. The error surface has multiple local minima, and many of these minima fit the training data very well; however, the generalization capabilities of these different solutions (predictive models) can be very different, ranging from poor to excellent. How do we pick a model that is going to generalize well?
[Figure: three candidate solutions, each labeled "Bad or good?"]

8 Uniqueness
Non-uniqueness is not always harmful; it depends on what we are looking for. If we are looking for a desired effect, that is, we know what a good solution looks like, then we can be happy with multiple solutions and simply pick a good one from the variety. Non-uniqueness is harmful if we are looking for an observed effect, that is, we do not know what a good solution looks like. The best way to combat non-uniqueness is to specify a model using prior knowledge of the domain, or at least to restrict the space where the desired model is searched.

9 Instability
Instability is caused by an attempt to reverse cause-effect relationships. Nature always solves only the forward problem, because of the arrow of time: cause always goes before effect. In practice we very often have to reverse the relationship, that is, to go from effect to cause.
Examples: convolution-deconvolution, Fredholm integral equations of the first kind.
[Diagram: the forward operation maps cause to effect; the inverse problem runs from effect back to cause.]
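A small numpy sketch of this instability, taking convolution-deconvolution as the example: naive Fourier deconvolution divides by near-zero PSF frequencies, so tiny noise in the effect produces a wild estimate of the cause. The signal, PSF, and noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
f = np.sin(np.linspace(0, 8 * np.pi, n))                    # cause
psf = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0)
psf /= psf.sum()                                            # normalized Gaussian PSF
g = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(psf)))   # effect (forward problem)
g += 1e-6 * rng.standard_normal(n)                          # tiny measurement noise

f_hat = np.real(np.fft.ifft(np.fft.fft(g) / np.fft.fft(psf)))  # naive reversal
print(np.abs(f_hat).max())   # large: tiny noise in g yields a wildly unstable f_hat
```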

10 L1 and L2 Norms
The general expression for the Lp norm is ||x||_p = (Σ_i |x_i|^p)^(1/p).
L2 norm: ||x||_2 = √(Σ_i x_i^2) is the Euclidean distance (vector distance).
L1 norm: ||x||_1 = Σ_i |x_i| is also known as the Manhattan norm because it corresponds to the sum of the distances along the coordinate axes.
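A quick numeric illustration (the vector is an assumed example):

```python
import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x, 1))   # L1 (Manhattan): |3| + |-4| = 7
print(np.linalg.norm(x, 2))   # L2 (Euclidean): sqrt(3**2 + 4**2) = 5
```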

11 Why Regularization?
Most restoration is based on least squares: minimize ||Kf - g||_2^2. But if the problem is ill-posed, the least squares method fails, because inverting the nearly singular imaging operator amplifies the noise in the data.
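A hedged numpy sketch of this failure, using a Gaussian blurring matrix as an assumed stand-in for the imaging operator K:

```python
import numpy as np
from scipy.linalg import toeplitz

n = 100
col = np.exp(-0.5 * (np.arange(n) / 2.0) ** 2)
K = toeplitz(col / col.sum())            # symmetric Gaussian blurring operator
print(np.linalg.cond(K))                 # enormous: the problem is ill-posed

rng = np.random.default_rng(0)
f = np.ones(n)                           # true image (a flat profile)
g = K @ f + 1e-6 * rng.standard_normal(n)   # blurred data with tiny noise
f_ls = np.linalg.solve(K, g)             # naive unregularized solution
print(np.linalg.norm(f_ls - f))          # large: the noise is wildly amplified
```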

12 Regularization
The general formulation for regularization techniques is
min_f ||Kf - g||_2^2 + λ J(f)
where ||Kf - g||_2^2 is the error term, λ is the regularization parameter, and J(f) is the penalty term.

13 Tikhonov Regularization
Tikhonov is an L2 norm (classical) regularization technique. It produces a smoothing effect on the restored image.
In zero-order Tikhonov regularization, the regularization operator L is the identity matrix. The Tikhonov solution can be computed as
f_λ = (K^T K + λ L^T L)^(-1) K^T g
In higher-order Tikhonov, L is a first-order or second-order differentiation matrix.
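A minimal sketch of zero-order Tikhonov restoration in numpy, assuming the same kind of Gaussian blurring matrix as in the earlier sketch; the value of λ is hand-picked for illustration rather than tuned:

```python
import numpy as np
from scipy.linalg import toeplitz

n = 100
col = np.exp(-0.5 * (np.arange(n) / 2.0) ** 2)
K = toeplitz(col / col.sum())                 # assumed blurring operator
rng = np.random.default_rng(1)
f = np.ones(n)
g = K @ f + 1e-4 * rng.standard_normal(n)

lam = 1e-3                                    # illustrative choice of lambda
L = np.eye(n)                                 # zero order: L is the identity
f_tik = np.linalg.solve(K.T @ K + lam * L.T @ L, K.T @ g)
print(np.linalg.norm(f_tik - f))              # small: the solve is stabilized
```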

14 Tikhonov Regularization
[Figures: original image and blurred image]

15 Tikhonov Regularization: Restoration
[Figure: Tikhonov-restored image]

16 Total Variation
Total Variation is a deterministic approach. This regularization method preserves the edge information in the restored images. The TV penalty function obeys the L1 norm. The mathematical expression for the TV penalty is
TV(f) = ∫ |∇f| dx dy
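A short numpy sketch of the discretized (and slightly smoothed) TV penalty; the test image and the smoothing constant β are assumptions:

```python
import numpy as np

def total_variation(f, beta=1e-3):
    """Isotropic TV with smoothing beta to avoid the kink at zero gradient."""
    dx = np.diff(f, axis=1)[:-1, :]      # horizontal forward differences
    dy = np.diff(f, axis=0)[:, :-1]      # vertical forward differences
    return np.sum(np.sqrt(dx**2 + dy**2 + beta**2))

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                    # a sharp-edged square
print(total_variation(img))              # roughly the edge perimeter (about 64)
```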

17 Difference between Tikhonov Regularization and Total Variation
1. Penalty term: Tikhonov uses the L2 penalty ||Lf||_2^2; TV uses the L1-type penalty ∫ |∇f| dx dy.
2. Tikhonov assumes smooth and continuous information; TV does not assume smoothness.
3. Tikhonov is computationally less complex; TV is computationally more complex.
4. The Tikhonov restored image is smooth; the TV restored image is blocky and preserves the edges.

18 Computation Challenges
The gradient of the Total Variation functional,
∇TV(f) = -∇ · ( ∇f / |∇f| )
leads to a non-linear PDE when the Euler-Lagrange condition is set to zero.

19 Computation Challenges (Contd.)
An iterative method is necessary to solve the problem.
The TV function is non-differentiable at zero.
The operator -∇ · ( ∇f / |∇f| ) is non-linear.
The ill-conditioning of the operator causes numerical difficulties, so good preconditioning is required.

20 Computation of the Regularization Operator
Total Variation restoration is computed by minimizing the least squares term together with the Total Variation penalty function (L):
min_f ||Kf - g||_2^2 + λ TV(f)

21 Computation of the Regularization Operator
Discretization of the Total Variation function:
TV(f) ≈ Σ_i √( (D_x f)_i^2 + (D_y f)_i^2 + β^2 )
where D_x and D_y are discrete difference operators and β > 0 smooths the non-differentiability at zero.
The gradient of the discretized Total Variation is given by ∇TV(f) = L(f) f, with the operator L(f) defined on the next slide.
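A 1-D numpy sketch of this discretized gradient, with D the forward-difference operator and β the smoothing parameter; sizes and values are illustrative assumptions:

```python
import numpy as np

def tv_gradient(f, beta=1e-3):
    """Gradient of sum_i sqrt((Df)_i**2 + beta**2), i.e. D^T applied to the
    normalized differences Df / sqrt((Df)**2 + beta**2)."""
    df = np.diff(f)                         # D f
    w = df / np.sqrt(df**2 + beta**2)       # normalized differences
    g = np.zeros_like(f)
    g[:-1] -= w                             # apply D^T to w
    g[1:] += w
    return g

f = np.r_[np.zeros(5), np.ones(5)]          # a step edge
print(tv_gradient(f))                       # nonzero only at the jump
```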

22 Regularization Operator
The regularization operator is computed using the expression
L(f) = D^T diag(w_i) D, where w_i = 1 / √( (Df)_i^2 + β^2 )
so that the gradient of the discretized TV penalty is ∇TV(f) = L(f) f.

23 Lasso Regression
Lasso, for "Least Absolute Shrinkage and Selection Operator", is a shrinkage and selection method for linear regression introduced by Tibshirani.
It minimizes the usual sum of squared errors, with a bound on the sum of the absolute values of the coefficients:
min_β ||y - Xβ||_2^2 subject to Σ_j |β_j| ≤ t
Computing the Lasso solution is a quadratic programming problem that is best solved by the least angle regression (LARS) algorithm. Lasso also uses the L1 penalty norm.
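A hedged sketch using scikit-learn's LassoLars solver, which implements the least angle regression approach the slide cites; the design matrix, sparsity pattern, and penalty weight are assumptions, not the talk's own setup:

```python
import numpy as np
from sklearn.linear_model import LassoLars

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))              # assumed design matrix
beta_true = np.zeros(20)
beta_true[[2, 7, 11]] = [1.5, -2.0, 1.0]       # sparse ground truth
y = X @ beta_true + 0.05 * rng.standard_normal(50)

model = LassoLars(alpha=0.01)                  # L1 penalty via LARS
model.fit(X, y)
print(np.flatnonzero(model.coef_))             # mostly recovers {2, 7, 11}
```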

24 Ridge Regression and Lasso Equivalence
The cost function of ridge regression is
min_β ||y - Xβ||_2^2 + λ Σ_j β_j^2
Ridge regression is identical to zero-order Tikhonov regularization, and the analytical solutions of ridge and Tikhonov are the same.
The bias introduced favors solutions with small weights, and the effect is to smooth the output function.
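The equivalence is easy to check numerically. In this sketch (with assumed data), scikit-learn's Ridge with no intercept matches the zero-order Tikhonov closed form:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5))
y = rng.standard_normal(30)
lam = 0.5

ridge = Ridge(alpha=lam, fit_intercept=False).fit(X, y)
tikhonov = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)   # (X^T X + lam I)^-1 X^T y
print(np.allclose(ridge.coef_, tikhonov))                        # True
```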

25 Ridge Regression and Lasso Equivalence
Instead of a single value of λ, different values of λ can be used for different pixels. This should provide the same solution as lasso regression (regularization). Thus we establish a relation between lasso and zero-order Tikhonov; and since both lasso and Total Variation are L1 norm penalties, there is a relation between Total Variation and lasso.
[Diagram: relations among Tikhonov, Lasso, and Total Variation: the Tikhonov-Lasso link is proved; Lasso and Total Variation are both L1 norm penalties; the remaining link is our aim to prove.]

26 L1 Norm Regularization: Restoration
[Figures, synthetic images: input image; blurred and noisy image]

27 L1 Norm Regularization: Restoration
[Figures: LASSO restoration; Total Variation restoration]

28 L1 Norm Regularization: Restoration
[Figures: blurred and noisy images at three degrees of blur (I, II, III), with the Total Variation and LASSO restorations of each]

29 L1 Norm Regularization: Restoration
[Figures: blurred and noisy images at three levels of noise (I, II, III), with the Total Variation and LASSO restorations of each]

30 Cross Section of Restoration
[Plots: cross sections of the Total Variation and LASSO restorations for different degrees of blurring]

31 Cross Section of Restoration
[Plots: cross sections of the Total Variation and LASSO restorations for different levels of noise]

32 Comparison of Algorithms
[Figures: original image; LASSO restoration; Total Variation restoration; Tikhonov restoration]

33 Effect of Different Levels of Noise and Blurring
[Figures: blurred and noisy image; LASSO restoration; Total Variation restoration; Tikhonov restoration]

34 Numerical Analysis of Results: Airplane
[Tables for the first and second noise levels comparing Total Variation, LASSO regression, and Tikhonov regularization on the airplane image. Columns: PD iterations, CG iterations, lambda, blurring error (%), residual error (%), restoration time (min). The numeric entries are not recoverable from the transcript.]

35 Numerical Analysis of Results
[Tables for the shelves and plane images comparing Total Variation and LASSO regression. Columns: PD iterations, CG iterations, lambda, blurring error (%), residual error (%), restoration time (min). The numeric entries are not recoverable from the transcript.]

36 Graphical Representation: 5 Real Images
[Plots: restoration time and residual error versus different degrees of blur]

37 Graphical Representation: 5 Real Images
[Plots: restoration time and residual error versus different levels of noise]

38 Effect of Blurring and Noise
[Figures: combined effect of blurring and noise on the restorations]

39 Conclusion
The Total Variation method preserves the edge information in the restored image, but the restoration time of Total Variation regularization is high.
LASSO provides an impressive alternative to TV regularization.
The restoration time of LASSO regularization is about half the restoration time of TV regularization.
The restoration quality of LASSO is better than or equal to the restoration quality of TV regularization.

40 Conclusion
Both LASSO and TV regularization fail to suppress the noise in the restored images.
Analysis shows that an increase in the degree of blur increases the restoration error.
An increase in the noise level does not have a significant influence on the restoration time, but it affects the residual error.

