
1 Pixel Recovery via $\ell_1$ Minimization in the Wavelet Domain. Ivan W. Selesnick*, Richard Van Slyke*, and Onur G. Guleryuz# (presenting author). *: Polytechnic University, Brooklyn, NY. #: DoCoMo Communications Laboratories USA, Inc., San Jose, CA.

2 Overview (the presentation is much more detailed than the paper; some software is available, please check the paper.)
- Problem statement: estimation/recovery of missing data.
- Formulation as a linear expansion over an overcomplete basis.
- Expansions that minimize the $\ell_1$ norm. Why do this?
- Connections to adaptive linear estimators and sparsity.
- Connections to recent results and to statistics.
- Simulation results and comparisons to our earlier work.
- Why not to do this: analysis of what is going on.
- Conclusion and ways of modifying the solutions for better results.

3 Problem Statement. [Figure: an image with a lost block.] The original patch $x$ consists of available pixels and lost pixels (assume zero mean). 1. Take the image with the lost block. 2. Derive the estimate $y$ over the block. 3. Use the predicted values to fill in the lost pixels.
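To make the setup concrete, here is a minimal sketch in Python; the names (`mask`, `x_avail`) and the 8x8 block size are illustrative assumptions, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))   # stand-in zero-mean image patch
mask = np.ones_like(x, dtype=bool)  # True where the pixel is available
mask[4:12, 4:12] = False            # an 8x8 lost block

x_avail = np.where(mask, x, 0.0)    # what the recovery algorithm observes
# Task: estimate x[~mask] from x[mask].
```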

4 Formulation. 1. Take an $N \times M$ matrix $H$ whose columns form an overcomplete basis ($M > N$). 2. Write $y$ in terms of the basis: $y = Hc$. 3. Find the expansion coefficients $c$ (two ways), subject to the projection of $y$ onto the available pixels agreeing with the available data.
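A sketch of the expansion with a toy two-times overcomplete basis. The identity-plus-DCT dictionary below is a stand-in assumption (the talk uses a dual-tree DWT), chosen only so that $y = Hc$ is easy to verify.

```python
import numpy as np
from scipy.fft import dct

N = 64                                  # length of the vectorized patch
H = np.hstack([np.eye(N),
               dct(np.eye(N), axis=0, norm='ortho')])  # N x M, with M = 2N

c = np.zeros(2 * N)
c[[3, 70]] = 1.0                        # a sparse coefficient vector
y = H @ c                               # y expressed over the basis
```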

5 Find the expansion coefficients to minimize the $\ell_1$ norm: $\min_c \|c\|_1$ subject to the available-data constraint. The $\ell_1$ norm of the expansion coefficients is the regularization; agreement with the available data is the constraint.
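One standard way to solve this is to cast $\min \|c\|_1$ subject to the data constraint as a linear program via the split $c = u - v$ with $u, v \ge 0$. The sketch below uses `scipy.optimize.linprog` and is an assumption about implementation, not the solver used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(H, x_flat, avail):
    """min ||c||_1 s.t. rows of H at available pixels reproduce the data."""
    A = H[avail, :]                    # keep only rows at available pixels
    b = x_flat[avail]                  # the available data
    M = A.shape[1]
    cost = np.ones(2 * M)              # sum(u) + sum(v) = ||c||_1 at optimum
    res = linprog(cost, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
    c = res.x[:M] - res.x[M:]
    return H @ c                       # recovered signal over the whole patch
```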

6 Why minimize the $\ell_1$ norm? Bogus reason: under an i.i.d. Laplacian model for the coefficient probabilities, $\min \|c\|_1$ subject to the data constraint is the MAP estimate. Real reason: sparse decompositions.
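The "bogus reason" in one worked line: under the Laplacian prior, MAP estimation reduces to $\ell_1$ minimization, which by itself says nothing about whether the model fits.

```latex
p(c) \propto \prod_i e^{-\lambda |c_i|}
\;\Longrightarrow\;
\hat{c} = \arg\max_c \, p(c)
        = \arg\min_c \, \lambda \sum_i |c_i|
        = \arg\min_c \, \|c\|_1
\quad \text{(subject to the available-data constraint).}
```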

7 What does sparsity have to do with estimation/recovery? 1. Any such decomposition builds an adaptive linear estimate. 2. In fact, "any" estimate can be written in this form. Onur G. Guleryuz, "Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising: Part I - Theory," IEEE Trans. on IP, in review. http://eeweb.poly.edu/~onur (google: onur guleryuz).

8 The recovered signal must be sparse. 3. The recovered signal becomes $y = Hc$. Since $H$ is overcomplete, the coefficients have a null space of dimension $M - N$; for the decomposition to single out an estimate, $y$ has to be sparse. (Guleryuz, Part I - Theory, op. cit.)

9 Who cares about $y$, what about the original $x$? 1. Predictable $\Rightarrow$ sparse: if successful prediction is possible, i.e., if $\|x - y\|$ is small, then $x$ also has to be $\sim$sparse. 2. Sparsity of $x$ is not a bad leap of faith to make in estimation: if $x$ is not sparse, we cannot estimate it well anyway. (Caveat: the data may be sparse, but not in the given basis.) (Guleryuz, Part I - Theory, op. cit.)
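The step from predictability to sparsity of $x$ in point 1, written out:

```latex
y = Hc \ \text{with few nonzero } c_i
\quad\text{and}\quad
\|x - y\|_2 \ \text{small}
\;\Longrightarrow\;
\|x - Hc\|_2 = \|x - y\|_2 \ \text{small},
```

so $x$ lies within a small error of a sparse expansion, i.e., $x$ is $\sim$sparse in $H$.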

10 Why minimize the $\ell_1$ norm? Under certain conditions the convex problem $\min_c \|c\|_1$ subject to the data constraint gives the solution to the combinatorial problem $\min_c \|c\|_0$ subject to the same constraint: find the "most predictable"/sparsest expansion that agrees with the data (solving a convex problem, not a combinatorial one). D. Donoho, M. Elad, and V. Temlyakov, "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise." http://www-stat.stanford.edu/~donoho/reports.html

11 Why minimize the $\ell_1$ norm? Experience from the statistics literature: the "lasso" ($\min \|y - Hc\|_2^2$ subject to $\|c\|_1 \le t$) is known to generate sparse expansions. R. Tibshirani, "Regression shrinkage and selection via the lasso," J. Royal Statist. Soc. B, vol. 58, no. 1, pp. 267-288, 1996.
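A sketch of this behavior, assuming scikit-learn's `Lasso` as a stand-in solver; the random matrix below plays the role of the overcomplete basis.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 128))       # "basis" with more columns than rows
c_true = np.zeros(128)
c_true[[5, 40, 90]] = [2.0, -1.5, 1.0]   # a genuinely sparse coefficient vector
b = A @ c_true

model = Lasso(alpha=0.05, max_iter=10000).fit(A, b)
print(np.count_nonzero(model.coef_))     # few nonzeros: a sparse expansion
```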

12 Simulation Results. We compare $\min \|c\|_1$ subject to the available-data constraint vs. Iterated Denoising (ID) with no layering and no selective thresholding: Onur G. Guleryuz, "Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising: Part II - Adaptive Algorithms," IEEE Trans. on IP, in review. http://eeweb.poly.edu/~onur (google: onur guleryuz). $H$: two-times expansive ($M = 2N$), real, dual-tree DWT; the real part of: N. G. Kingsbury, "Complex wavelets for shift invariant analysis and filtering of signals," Appl. Comput. Harmon. Anal., 10(3):234-253, May 2001.

13 Simulation Results

14 Sparse Modeling Generates Non-Convex Problems. [Figure: a "two pixel" image shown in pixel coordinates and in transform coordinates; one pixel available, one missing; the available-pixel constraint appears as a line in each coordinate system.]

15 [Figure: example images.] "Sparse = non-convex," who cares. What about reality, natural images?

16 Geometry. [Figure: the $\ell_1$ ball against the available-pixel constraint in transform coordinates; Case 1, Case 2, Case 3. In Case 3 the solution is not sparse, another sign that the Laplacian justification is a bogus reason.]

17 Why not to minimize the $\ell_1$ norm? What about all the optimality/sparsest results? Results such as D. Donoho et al., "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise" are very impressive, but they are tied closely to $H$ providing the sparsest decomposition for $x$. Here the noise is overwhelming: modeling error plus the error due to missing data.

18 Why not to minimize the $\ell_1$ norm? In $\min \|c\|_1$ subject to the available-data constraint, the optimization only sees the cropped basis. $H$ may be a "nice," "decoherent" basis, but the cropped basis (due to cropping out the missing pixels) may become very "coherent" (the problem is due to the projection onto the available data).
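The coherence claim can be checked numerically. The sketch below is an illustration (`mutual_coherence` is a hypothetical helper, not from the paper), comparing a basis with its cropped version.

```python
import numpy as np

def mutual_coherence(B):
    """Largest normalized inner product between distinct columns of B."""
    norms = np.linalg.norm(B, axis=0)
    Bn = B[:, norms > 1e-12] / norms[norms > 1e-12]  # drop dead columns, normalize
    G = np.abs(Bn.T @ Bn)
    np.fill_diagonal(G, 0.0)
    return G.max()

# With the toy H = [I, DCT] from slide 4 and a boolean mask `avail` of kept
# pixels: mutual_coherence(H) stays well below 1, while
# mutual_coherence(H[avail, :]) climbs toward 1 as more pixels are removed.
```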

19 Examples. The full basis is orthonormal: coherency = 0. After cropping: unnormalized coherency = ..., normalized coherency = 1 (the worst possible). 1. The optimal solution sometimes tries to make the coefficients of scaling functions zero. 2. The $\ell_1$ solution never sees the actual problem.

20 What does ID do? [Figure: Progression 1, Progression 2, ...] ID decomposes the big problem into many progressions: it uses correctly modeled components to reduce the overwhelming errors/"noise," and arrives at the final complex problem by solving much simpler problems. $\ell_1$ minimization is conceptually a single-step, greedy version of ID.
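For contrast with the single-step $\ell_1$ solve, here is a greatly simplified sketch of the ID loop described above: hard-threshold in a transform domain, re-impose the known pixels, and lower the threshold across progressions. A 2D DCT stands in for the overcomplete DWT, and there is no layering or selective thresholding, so this illustrates the structure only.

```python
import numpy as np
from scipy.fft import dct, idct

def iterated_denoising(x0, mask, T0=2.0, Tmin=0.1, decay=0.8):
    y = x0.copy()                        # lost pixels start at 0 (zero mean)
    T = T0
    while T > Tmin:                      # progressions: easy -> hard
        C = dct(dct(y, axis=0, norm='ortho'), axis=1, norm='ortho')
        C[np.abs(C) < T] = 0.0           # hard-threshold: keep modeled components
        y = idct(idct(C, axis=0, norm='ortho'), axis=1, norm='ortho')
        y[mask] = x0[mask]               # re-impose the available data
        T *= decay                       # lower the threshold
    return y
```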

21 ID is all about robustly selecting sparsity: it tries to be sparse, not the sparsest; it is robust to model failures; and other constraints are easy to incorporate.

22 Conclusion. 1. Have to be more agnostic than smoothest, sharpest, smallest, sparsest, *est: minimum MSE is not necessarily the sparsest. 2. Have to be more robust to modeling errors: when a convex approximation to the underlying non-convex problem is possible, great, but one has to make sure the assumptions are not violated. 3. Is it still possible to use $\ell_1$, but with ID principles? Yes. For lasso/$\ell_1$ fans:

23 For lasso/$\ell_1$ fans: solve $\min \|c\|_1$ subject to the available data progressively. 1. It's not about the lasso or how you tighten the lasso, it's about what (plural) you tighten the lasso to. After each progression ask: do you think you reduced the MSE? No: you shouldn't have done this. Yes: do it again. But you must ensure there are no Case 3 problems (ID stays away from those). 2. This is not "LASSO," "LARS," ...; this is Iterated Denoising (use hard thresholding!).

