
Single Image Interpolation via Adaptive Non-Local Sparsity-Based Modeling
Yaniv Romano (The Electrical Engineering Department), Matan Protter (The Computer Science Department), Michael Elad (The Computer Science Department)
SIAM Conference on Imaging Science, May 2014, Hong Kong
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program, ERC Grant agreement no. , and by the Intel Collaborative Research Institute for Computational Intelligence.

The Interpolation Problem 2
Given a Low-Resolution (LR) image, recover the High-Resolution (HR) image. The LR image is produced by decimating the HR image by a factor of L along the horizontal and vertical dimensions (without blurring).
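For concreteness, here is a minimal NumPy sketch of this degradation model (decimation by a factor L with no pre-blurring) and of the zero-filled image used later as a starting point; the function names (decimate, zero_fill) are illustrative, not the authors' code.

```python
import numpy as np

def decimate(hr, L=2):
    """Decimate an HR image by a factor L along each axis, without pre-blurring,
    as assumed by the interpolation problem above."""
    return hr[::L, ::L]

def zero_fill(lr, L=2):
    """Place the LR samples back on the HR grid, leaving the missing pixels at
    zero; this zero-filled image is the usual starting point for patch-based
    interpolation."""
    h, w = lr.shape
    out = np.zeros((h * L, w * L), dtype=lr.dtype)
    out[::L, ::L] = lr
    return out
```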

Background 3

Sparsity Model – The Basics 4
• We assume the existence of a dictionary D ∈ ℝ^(d×n) whose columns are the atom signals.
• Signals are modeled as sparse linear combinations of the dictionary atoms: x = Dα, where α is sparse, meaning that it is assumed to contain mostly zeros.
• The computation of α from x (or its noisy/decimated versions) is called sparse-coding.
• OMP (Orthogonal Matching Pursuit) is a popular sparse-coding technique, especially for low-dimensional signals.
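As an illustration of the sparse-coding step, a minimal NumPy sketch of OMP, under the assumption that the dictionary atoms are l2-normalized; the function name and signature are illustrative only.

```python
import numpy as np

def omp(D, x, k):
    """Greedy Orthogonal Matching Pursuit: approximate x ~ D @ alpha with at most
    k nonzeros. Assumes the columns (atoms) of D are l2-normalized."""
    residual = x.copy()
    support = []
    alpha = np.zeros(D.shape[1])
    coeffs = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of x over the selected atoms
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    if support:
        alpha[support] = coeffs
    return alpha
```

The cost is roughly O(k·d·n) per signal (k correlation sweeps plus small least-squares solves), which is why the slide notes that OMP is attractive mainly for low-dimensional signals such as image patches.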

K-SVD Image Denoising [Elad & Aharon ('06)] 5
Noisy image → denoise each patch using OMP (starting from an initial dictionary) → update the dictionary using K-SVD → reconstructed image.
• The above is one in the large family of patch-based algorithms.
• Effective for denoising, deblurring, inpainting, tomographic reconstruction, etc.
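For reference, a minimal sketch of the generic K-SVD dictionary-update sweep under the notation X ≈ DA (one patch per column of X, one sparse code per column of A); this illustrates the textbook step, not the authors' implementation.

```python
import numpy as np

def ksvd_dictionary_update(D, A, X):
    """One K-SVD sweep (generic version): update every atom, and the coefficients
    that use it, via a rank-1 SVD of the representation error restricted to the
    patches on that atom's support.  X: d x N patches, D: d x n, A: n x N codes."""
    for k in range(D.shape[1]):
        users = np.nonzero(A[k, :])[0]           # patches that currently use atom k
        if users.size == 0:
            continue
        # error on those patches, with atom k's contribution removed
        E = X[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                        # new unit-norm atom
        A[k, users] = s[0] * Vt[0, :]            # matching coefficient updates
    return D, A
```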

Interpolation – Starting Point
Zero-filled image → interpolate each patch using weighted OMP* → update the dictionary using weighted K-SVD* → reconstructed image (not so impressive).
*Sparse-coding using high/low weights for the known/missing pixels.
• Drawbacks:
• Each patch is interpolated independently.
• Sparse-coding tends to err due to the small number of existing pixels.
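A hedged sketch of the per-patch interpolation step: sparse-code the patch over its known pixels only and reconstruct the full patch with the complete dictionary. A hard mask is used here in place of the slide's high/low weights, the names are illustrative, and the omp() sketch from the sparsity-model slide above is reused.

```python
import numpy as np

def interpolate_patch(D, y, mask, k):
    """Sparse-code a patch y over its known pixels only, then reconstruct the full
    patch.  'mask' is a boolean vector marking the known pixels; a hard mask is
    used here instead of the slide's high/low weights.  Uses the omp() sketch
    from the sparsity-model slide above."""
    Dm = D[mask, :]
    norms = np.linalg.norm(Dm, axis=0) + 1e-12   # renormalize the masked atoms
    alpha = omp(Dm / norms, y[mask], k)
    return D @ (alpha / norms)                   # estimate of all pixels, known and missing
```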

The Basic Idea 7
• The more known pixels within a patch, the better the restoration.
• The number of known pixels in a patch depends on its location on the sampling grid: some locations give “strong” patches (many known pixels), others give “weak” patches (few known pixels).
• We suggest “increasing” the number of known pixels based on the “self-similarity” assumption.
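To make the “strong”/“weak” distinction concrete, a small sketch that counts, for every patch location, how many of its pixels survive decimation by L; the patch size and function name are illustrative assumptions.

```python
import numpy as np

def known_pixel_counts(hr_shape, L=2, patch=8):
    """For every patch location on the HR grid, count how many of its pixels
    survive decimation by L.  Locations with many known pixels give the
    'strong' patches; those with few give the 'weak' ones."""
    mask = np.zeros(hr_shape, dtype=int)
    mask[::L, ::L] = 1                           # the pixels that are actually known
    H, W = hr_shape
    counts = np.empty((H - patch + 1, W - patch + 1), dtype=int)
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            counts[i, j] = mask[i:i + patch, j:j + patch].sum()
    return counts
```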

Past Work 8
• LSSC – a non-local sparse model for image restoration [Mairal, Bach, Ponce, Sapiro and Zisserman ('09)].
  Applies joint sparse-coding (Simultaneous OMP) instead of OMP.
• NARM – combines the sparsity prior and non-local autoregressive modeling [Dong, Zhang, Lukac and Shi ('13)].
  Embeds the autoregressive model (i.e. connecting a missing pixel with its non-local neighbors) into the conventional sparsity data-fidelity term.
• PLE – a framework for solving inverse problems based on a Gaussian mixture model [Yu, Sapiro and Mallat ('12)].
  Very effective for denoising, inpainting, interpolation and deblurring.

The Algorithm 9

Outline
LR input image → basic interpolated image → group similar patches together → interpolate each group jointly → update the dictionary → reconstruct the image.

Grouping 11
• Combine the “self-similarity” assumption and the sparsity prior.
• Group similar patches together over the current estimated HR image (figure: the LR image, the estimated HR image, and a grouping of similar patches).
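A minimal brute-force sketch of the grouping step: find the K nearest neighbors of a reference patch by Euclidean distance over vectorized patches of the current HR estimate. The names and the exhaustive search are illustrative; an actual implementation would typically restrict the search to a window around the reference patch.

```python
import numpy as np

def k_nearest_patches(patches, ref_idx, K=8):
    """Group the reference patch with its K most similar patches (Euclidean
    distance over vectorized patches of the current HR estimate).
    'patches' is an N x d matrix, one patch per row; brute force for clarity."""
    dists = np.sum((patches - patches[ref_idx]) ** 2, axis=1)
    order = np.argsort(dists)
    return order[:K + 1]                         # the group, including the reference itself
```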

Sparse-Coding 12
• Combine the “self-similarity” assumption and the sparsity prior.
• Use Simultaneous OMP (SOMP) instead of OMP [J. A. Tropp et al. ('06)]: the patches of each group are coded jointly over a shared support (figure: grouped patches and their shared representations).
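For reference, a minimal NumPy sketch of SOMP: all patches in a group are forced to share one support, and at each step the atom with the largest summed correlation over the group's residuals is selected. It assumes l2-normalized atoms and does not reproduce the weights or the “stabilizer” of the authors' variant.

```python
import numpy as np

def somp(D, Y, k):
    """Simultaneous OMP: the columns of Y (a d x K group of patches) are forced
    to share a single support of size <= k, while each patch keeps its own
    coefficients.  Atoms are selected by their summed correlation with all the
    residuals.  Assumes l2-normalized atoms."""
    R = Y.copy()                                 # residuals, one column per patch
    support = []
    A = np.zeros((D.shape[1], Y.shape[1]))
    coeffs = np.zeros((0, Y.shape[1]))
    for _ in range(k):
        # atom that best explains the whole group at once
        j = int(np.argmax(np.sum(np.abs(D.T @ R), axis=1)))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        R = Y - D[:, support] @ coeffs
    if support:
        A[support, :] = coeffs
    return A
```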

Sparse-Coding 13
• Combine the “self-similarity” assumption and the sparsity prior.
• Use Simultaneous OMP (SOMP) instead of OMP [J. A. Tropp et al. ('06)].
• Improve the sparse-coding by finding the joint representation of a non-weighted version of the reference patch (the “stabilizer”) and its K-nearest neighbors.

Dictionary Learning 14
• Combine the “self-similarity” assumption and the sparsity prior.
• Use Simultaneous OMP (SOMP) instead of OMP [J. A. Tropp et al. ('06)]: find the joint representation of a non-weighted version of the reference patch (the “stabilizer”) and its K-nearest neighbors.
• Given the representations, we update the dictionary using a weighted version of K-SVD.

Obtaining the HR Image 15
• Combine the “self-similarity” assumption and the sparsity prior.
• Use Simultaneous OMP (SOMP) instead of OMP [J. A. Tropp et al. ('06)]: find the joint representation of a non-weighted version of the reference patch (the “stabilizer”) and its K-nearest neighbors.
• Given the representations, we update the dictionary using a weighted version of K-SVD.
• Reconstruct the HR image. How?

The Proposed Method 16
• A two-stage algorithm:
• 1. The first stage exploits the “strong” patches.
Initial cubic HR estimation → per each patch: find its K-nearest “strong” neighbors → interpolate each group jointly (using the “stabilizer”) → update the dictionary → reconstruct the image by exploiting the “strong” patches.

The Proposed Method 17
• A two-stage algorithm:
• 1. The first stage exploits the “strong” patches.
• 2. The second stage obtains the final HR image.
First-stage HR image and first-stage dictionary → per each patch: find its K-nearest neighbors → interpolate each group jointly (using the “stabilizer”) → reconstruct the final image.

The Proposed Method 18
• A two-stage algorithm:
• 1. The first stage exploits the “strong” patches.
• 2. The second stage obtains the final HR image.
(Figure: the “general” one-stage method vs. the proposed two-stage method.)

Experiments 19

Simulation Setup 20
• We test the proposed algorithm on 18 well-known images.
• Each image was decimated (without blurring) by a factor of 2 or 3 along each axis.
• We compare the PSNR* of the proposed algorithm to the current state-of-the-art methods.
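For completeness, the standard PSNR computation assumed in the comparison (for 8-bit images); the PSNR* of the slides may additionally crop borders or operate on the luminance channel only, which is not reflected in this sketch.

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Standard PSNR in dB for 8-bit images.  The PSNR* used on the slides may
    additionally crop borders or use the luminance channel only; that detail is
    not reproduced here."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```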

Quantitative Results [dB] 21
• Average PSNR* over 18 well-known images (higher is better).
(Table: PSNR of Cubic, SAI [Zhang & Wu ('08)], SME [Mallat & Yu ('10)], PLE [Yu, Sapiro & Mallat ('12)] and NARM [Dong et al. ('13)] for scaling factors 2 and 3.)

Quantitative Results [dB] 22
• Average PSNR* over 18 well-known images (higher is better).
(Table: PSNR of Cubic, SAI [Zhang & Wu ('08)], SME [Mallat & Yu ('10)], PLE [Yu, Sapiro & Mallat ('12)], NARM [Dong et al. ('13)] and the Proposed method for scaling factors 2 and 3, together with the difference of the Proposed method to the best competing result.)

Visual Comparison (X2), slides 23–31: cropped results comparing Original, SAI, SME, PLE, NARM and the Proposed method.

Visual Comparison (X3), slides 32–39: cropped results comparing Original, SAI, SME, PLE, NARM and the Proposed method.

Conclusions 40
• Given an HR image decimated in a regular pattern, image interpolation by plain K-SVD inpainting isn't competitive.
• We therefore use a two-stage algorithm that combines the non-local proximity and sparsity priors.
• The 1st stage trains a dictionary and obtains an HR image by exploiting the “strong” patches, both in the sparse-coding and in the reconstruction steps.
• The 2nd stage builds upon the 1st-stage results and obtains the final image.
• This leads to state-of-the-art results.

We Are Done
Questions? Thank You 41