
1 Single Image Interpolation via Adaptive Non-Local Sparsity-Based Modeling. Yaniv Romano (The Electrical Engineering Department), Matan Protter (The Computer Science Department), Michael Elad (The Computer Science Department). SIAM Conference on Imaging Science, May 2014, Hong Kong. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program, ERC grant agreement no. 320649, and from the Intel Collaborative Research Institute for Computational Intelligence.

2 The Interpolation Problem  Given a Low-Resolution (LR) image, recover the High-Resolution (HR) image.  The LR image is obtained by an operator that decimates the HR image by a factor of L along the horizontal and vertical dimensions (without blurring).
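Since the degradation is plain decimation with no pre-blur, it amounts to keeping every L-th pixel along each axis. A minimal illustrative sketch (NumPy, not the authors' code):

```python
import numpy as np

def decimate(img, L):
    """Keep every L-th pixel along each axis (no pre-blur),
    matching the problem statement's degradation model."""
    return img[::L, ::L]
```

For example, decimating a 4x4 image by L=2 leaves a 2x2 image of the pixels at even coordinates.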

3 Background

4 Sparsity Model – The Basics  We assume the existence of a dictionary D ∈ ℝ^(d×n) whose columns are the atom signals.  Signals are modeled as sparse linear combinations of the dictionary atoms: x = Dα, where α is sparse, meaning that it is assumed to contain mostly zeros.  The computation of α from x (or its noisy/decimated versions) is called sparse coding.  OMP is a popular sparse-coding technique, especially for low-dimensional signals.
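As a concrete illustration of sparse coding, here is a minimal textbook OMP sketch in NumPy (greedy atom selection plus a least-squares fit on the chosen support; this is an illustrative version, not the optimized implementation used in the experiments):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: find a k-sparse alpha with D @ alpha ~= x.
    D: (d, n) dictionary with unit-norm columns; x: (d,) signal."""
    residual = x.copy()
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the chosen support, then update the residual
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        alpha[:] = 0
        alpha[support] = coeffs
        residual = x - D @ alpha
    return alpha
```

With an orthonormal dictionary and a truly k-sparse signal, this recovers the representation exactly.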

5 K-SVD Image Denoising [Elad & Aharon ('06)]  Pipeline: start from the noisy image and an initial dictionary; denoise each patch using OMP; update the dictionary using K-SVD; reconstruct the image.  The above is one of a large family of patch-based algorithms.  Effective for denoising, deblurring, inpainting, tomographic reconstruction, etc.
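The patch-based scheme rests on extracting all overlapping patches, processing each one, and averaging the overlaps back into the image. A minimal sketch of that extract/average machinery (the per-patch denoiser itself is omitted; identity processing just reproduces the input):

```python
import numpy as np

def extract_patches(img, p):
    """All overlapping p-by-p patches with their top-left positions."""
    H, W = img.shape
    patches, positions = [], []
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            patches.append(img[i:i+p, j:j+p].copy())
            positions.append((i, j))
    return patches, positions

def patch_average_reconstruct(patches, positions, shape, p):
    """Place processed patches back and average the overlapping contributions."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for patch, (i, j) in zip(patches, positions):
        acc[i:i+p, j:j+p] += patch
        cnt[i:i+p, j:j+p] += 1
    return acc / np.maximum(cnt, 1)
```

In a full denoiser, each extracted patch would pass through sparse coding over the learned dictionary before the averaging step.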

6 Interpolation – Starting Point  Adapt the denoising scheme to interpolation: start from the zero-filled image and an initial dictionary; interpolate each patch using weighted OMP*; update the dictionary using a weighted K-SVD*; reconstruct the image (*sparse coding with high/low weights for known/missing pixels).  Drawbacks:  Each patch is interpolated independently.  Sparse coding tends to err due to the small number of existing pixels.  The result is not so impressive.
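The weighted OMP mentioned above can be mimicked by row-scaling the system: known pixels get weight 1 and missing pixels a near-zero weight, so the fit is driven by the known samples while D @ alpha fills in the rest. A sketch under those assumptions (the exact weighting scheme of the slide is not specified here):

```python
import numpy as np

def weighted_omp(D, x, w, k):
    """OMP on a row-weighted system: fit mostly where w is large (known pixels).
    Weights near 0 make missing pixels barely affect the fit."""
    Dw = D * w[:, None]   # scale each row of the dictionary
    xw = x * w
    support, alpha = [], np.zeros(D.shape[1])
    residual = xw.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(Dw.T @ residual)))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(Dw[:, support], xw, rcond=None)
        alpha[:] = 0
        alpha[support] = coeffs
        residual = xw - Dw @ alpha
    return alpha  # D @ alpha then estimates the missing pixels too
```

For a flat signal with one zero-filled missing pixel and a constant atom in the dictionary, the fit on the known pixels recovers the missing one.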

7 The Basic Idea  The more known pixels within a patch, the better the restoration.  The number of known pixels depends on the patch's location: patches aligned with the sampling grid are "strong", while the rest are "weak".  We suggest "increasing" the number of known pixels based on the "self-similarity" assumption.
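The "strong"/"weak" distinction can be made concrete by counting the sampled pixels a patch contains: when the HR grid keeps every L-th pixel, the count depends only on the patch's offset relative to the sampling grid. A small illustrative helper (the grid convention, samples at coordinates divisible by L, is an assumption):

```python
def known_pixel_count(i, j, p, L):
    """Number of known (sampled) pixels inside the p-by-p patch whose
    top-left corner is (i, j), when every L-th HR pixel is known."""
    rows = sum(1 for r in range(i, i + p) if r % L == 0)
    cols = sum(1 for c in range(j, j + p) if c % L == 0)
    return rows * cols
```

For L = 2 and 3x3 patches, a grid-aligned patch holds 4 known pixels while an off-grid one holds only 1, which is exactly the strong/weak gap the slide refers to.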

8 Past Work  LSSC – A non-local sparse model for image restoration [Mairal, Bach, Ponce, Sapiro & Zisserman ('09)]: applies joint sparse coding (Simultaneous OMP) instead of OMP.  NARM – Combines the sparsity prior with non-local autoregressive modeling [Dong, Zhang, Lukac & Shi ('13)]: embeds the autoregressive model (i.e., connecting a missing pixel with its non-local neighbors) into the conventional sparsity data-fidelity term.  PLE – A framework for solving inverse problems based on a Gaussian mixture model [Yu, Sapiro & Mallat ('12)]: very effective for denoising, inpainting, interpolation and deblurring.

9 The Algorithm

10 Outline  From the LR input image, compute a basic interpolated image; group similar patches together; interpolate each group jointly; update the dictionary; and reconstruct the image.

11 Grouping  Combine the "self-similarity" assumption and the sparsity prior.  Group similar patches together across the estimated HR image (guided by the LR input).
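Grouping can be as simple as a brute-force K-nearest-neighbor search over patch vectors; an illustrative sketch (real implementations typically restrict the search to a local window, which is omitted here):

```python
import numpy as np

def k_nearest_patches(patches, ref_idx, K):
    """Indices of the K patches closest (in squared L2 distance) to a
    reference patch; includes the reference itself (distance 0)."""
    ref = patches[ref_idx]
    dists = np.array([np.sum((p - ref) ** 2) for p in patches])
    return list(np.argsort(dists)[:K])
```

The returned group is what the joint sparse-coding step (next slides) operates on.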

12 Sparse-Coding  Combine the "self-similarity" assumption and the sparsity prior.  Use Simultaneous OMP (SOMP) instead of OMP [J. A. Tropp et al. ('06)]. [Figure: the patches in a group are coded over the same atoms, each with its own coefficients α1, α2.]
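SOMP codes a whole group of similar patches over one shared support: the atom choice maximizes the summed correlation with all residuals, while each patch keeps its own coefficients. A minimal sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def somp(D, X, k):
    """Simultaneous OMP: one shared k-atom support for all columns of X
    (the group of similar patches); per-column coefficients."""
    R = X.copy()                      # residuals, one column per patch
    support = []
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(k):
        # atom most correlated with the group of residuals as a whole
        j = int(np.argmax(np.sum(np.abs(D.T @ R), axis=1)))
        if j not in support:
            support.append(j)
        C, *_ = np.linalg.lstsq(D[:, support], X, rcond=None)
        A[:] = 0
        A[support, :] = C
        R = X - D @ A
    return A
```

Note the only change from plain OMP is the atom-selection rule; the shared support is what couples the patches.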

13 Sparse-Coding  Combine the "self-similarity" assumption and the sparsity prior.  Use Simultaneous OMP (SOMP) instead of OMP [J. A. Tropp et al. ('06)].  Improve the sparse coding by finding the joint representation of a non-weighted version of the reference patch (the "stabilizer") and its K-NN. [Figure: the stabilizer joins the group and shares its support.]

14 Dictionary Learning  Combine the "self-similarity" assumption and the sparsity prior.  Use Simultaneous OMP (SOMP) instead of OMP [J. A. Tropp et al. ('06)].  Improve the sparse coding by finding the joint representation of a non-weighted version of the reference patch (the "stabilizer") and its K-NN.  Given the representations, we update the dictionary using a weighted version of the K-SVD.
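The (unweighted) K-SVD atom update is a rank-1 SVD of the representation residual restricted to the signals that actually use the atom; the weighted variant mentioned on the slide adds per-pixel weights, which this sketch omits:

```python
import numpy as np

def ksvd_atom_update(D, A, X, j):
    """K-SVD update of atom j (column of D) and its coefficient row in A,
    via a rank-1 SVD of the residual over the signals that use atom j."""
    users = np.nonzero(A[j, :])[0]
    if users.size == 0:
        return D, A                    # unused atom: nothing to update
    # residual with atom j's contribution removed, on its users only
    E = X[:, users] - D @ A[:, users] + np.outer(D[:, j], A[j, users])
    U, S, Vt = np.linalg.svd(E, full_matrices=False)
    D, A = D.copy(), A.copy()
    D[:, j] = U[:, 0]                  # best rank-1 atom (unit norm)
    A[j, users] = S[0] * Vt[0, :]      # matching coefficients
    return D, A
```

Sweeping this update over all atoms, alternated with sparse coding, is one K-SVD iteration.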

15 Obtaining the HR Image  Combine the "self-similarity" assumption and the sparsity prior.  Use Simultaneous OMP (SOMP) instead of OMP [J. A. Tropp et al. ('06)].  Improve the sparse coding by finding the joint representation of a non-weighted version of the reference patch (the "stabilizer") and its K-NN.  Given the representations, we update the dictionary using a weighted version of the K-SVD.  Reconstruct the HR image.  How?

16 The Proposed Method  A two-stage algorithm: 1. The first stage exploits "strong" patches.  Pipeline: start from an initial cubic HR estimate; for each patch, find its K-nearest "strong" neighbors; interpolate each group jointly (using the "stabilizer"); update the dictionary; reconstruct the image by exploiting the "strong" patches.

17 The Proposed Method  A two-stage algorithm: 1. The first stage exploits "strong" patches. 2. The second stage obtains the final HR image.  Pipeline: start from the first stage's HR image and the first stage's dictionary; for each patch, find its K-nearest neighbors; interpolate each group jointly (using the "stabilizer"); reconstruct the image.

18 The Proposed Method  A two-stage algorithm: 1. The first stage exploits "strong" patches. 2. The second stage obtains the final HR image. [Figure: the "general" method vs. the proposed two-stage method.]

19 Experiments

20 Simulation Setup  We test the proposed algorithm on 18 well-known images.  Each image was decimated (without blurring) by a factor of 2 or 3 along each axis.  We compared the PSNR of the proposed algorithm to the current state-of-the-art methods.
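PSNR here is the usual peak-signal-to-noise ratio between the ground-truth HR image and the interpolated one; for reference (peak value of 255 assumed for 8-bit images):

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

An error equal to the full dynamic range everywhere gives 0 dB; the ~29-30 dB figures in the next slides correspond to small average errors.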

21 Quantitative Results [dB]  Average PSNR over 18 well-known images (higher is better):

Method                            | x2    | x3
Cubic                             | 28.98 | 25.52
SAI [Zhang & Wu ('08)]            | 29.51 | 25.83
SME [Mallat & Yu ('10)]           | 29.62 | 25.95
PLE [Yu, Sapiro & Mallat ('12)]   | 29.62 | 26.08
NARM [Dong et al. ('13)]          | 29.98 | 26.21

22 Quantitative Results [dB]  Average PSNR over 18 well-known images (higher is better):

Method                            | x2    | x3
Cubic                             | 28.98 | 25.52
SAI [Zhang & Wu ('08)]            | 29.51 | 25.83
SME [Mallat & Yu ('10)]           | 29.62 | 25.95
PLE [Yu, Sapiro & Mallat ('12)]   | 29.62 | 26.08
NARM [Dong et al. ('13)]          | 29.98 | 26.21
Proposed                          | 30.09 | 26.44
Diff. to best previous result     | 0.11  | 0.23

23 Visual Comparison (X2) [Image comparison: Original, SAI, SME, PLE, NARM, Proposed]

24 Visual Comparison (X2) [Zoom: Original]

25 Visual Comparison (X2) [Zoom: NARM]

26 Visual Comparison (X2) [Zoom: Proposed]

27 Visual Comparison (X2) [Image comparison: Original, SAI, SME, PLE, NARM, Proposed]

28 Visual Comparison (X2) [Image comparison: Original, SAI, SME, PLE, NARM, Proposed]

29 Visual Comparison (X2) [Zoom: Original]

30 Visual Comparison (X2) [Zoom: NARM]

31 Visual Comparison (X2) [Zoom: Proposed]

32 Visual Comparison (X3) [Image comparison: Original, SAI, SME, PLE, NARM, Proposed]

33 Visual Comparison (X3) [Zoom: Original]

34 Visual Comparison (X3) [Zoom: NARM]

35 Visual Comparison (X3) [Zoom: Proposed]

36 Visual Comparison (X3) [Image comparison: Original, SAI, SME, PLE, NARM, Proposed]

37 Visual Comparison (X3) [Zoom: Original]

38 Visual Comparison (X3) [Zoom: NARM]

39 Visual Comparison (X3) [Zoom: Proposed]

40 Conclusions  Interpolating an HR image that was decimated in a regular pattern by plain K-SVD inpainting isn't competitive.  We propose a two-stage algorithm that combines the non-local proximity and sparsity priors.  The 1st stage trains a dictionary and obtains an HR image by exploiting the "strong" patches, both in the sparse-coding and the reconstruction steps.  The 2nd stage builds upon the 1st stage's results and obtains the final image.  This leads to state-of-the-art results.

41 We Are Done  Questions?  Thank You

