An Introduction to Sparse Representation and the K-SVD Algorithm


1 An Introduction to Sparse Representation and the K-SVD Algorithm
Ron Rubinstein
The Computer Science Department, The Technion – Israel Institute of Technology, Haifa 32000, Israel
University of Erlangen-Nürnberg, April 2008

2 Noise Removal
Our story begins with image denoising: given a measured image, remove the additive noise. This is both a practical application and a convenient platform (being the simplest inverse problem) for testing basic ideas in image processing.

3 Denoising by Energy Minimization
Many of the proposed denoising algorithms are related to the minimization of an energy function of the form
  f(x) = ½‖x − y‖² + Pr(x)
where the first term measures sanity (relation to the measurements) and the second is the prior (regularization). Here y is the given measurements and x is the unknown to be recovered. This is in fact a Bayesian point of view, adopting Maximum A-Posteriori Probability (MAP) estimation. Clearly, the wisdom in such an approach lies in the choice of the prior – modeling the images of interest.
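To make the energy-minimization recipe concrete, here is a minimal sketch (my own illustration, not from the talk) using the simplest quadratic smoothness prior, Pr(x) = ½λ‖Lx‖² with a finite-difference operator L, for which the minimizer has a closed form:

```python
import numpy as np

# Energy: f(x) = 0.5*||x - y||^2 + 0.5*lam*||L x||^2 (quadratic smoothness
# prior), minimized in closed form by solving (I + lam * L^T L) x = y.
n, lam = 50, 10.0
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0.0, 3.0, n)) + 0.1 * rng.standard_normal(n)  # noisy 1D signal

L = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # finite differences: (Lx)_i = x_{i+1} - x_i
x = np.linalg.solve(np.eye(n) + lam * L.T @ L, y)

grad = (x - y) + lam * L.T @ (L @ x)           # gradient of f at the minimizer
```

The prior trades fidelity to y against smoothness of x; sparse priors replace the quadratic term but keep this same two-term structure.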

4 The Evolution of Pr(x)
During the past several decades we have made all sorts of guesses about the prior Pr(x) for images: energy, smoothness, adaptive smoothness, robust statistics, total variation, wavelet sparsity, the bilateral filter, and – our focus here – sparse & redundant representations.

5 Agenda
1. A Visit to Sparseland – introducing sparsity & overcompleteness
2. Transforms & Regularizations – how & why should this work?
3. What about the dictionary? – the quest for the origin of signals
4. Putting it all together – image filling, denoising, compression, …
Welcome to Sparseland!

6 M Generating Signals in Sparseland
K N Fixed Dictionary Every column in D (dictionary) is a prototype signal (atom). M N The vector  is generated randomly with few non-zeros in random locations and with random values. Sparse & random vector An Introduction to Sparse Representation And the K-SVD Algorithm Ron Rubinstein

7 M Sparseland Signals Are Special Multiply by D
Simple: Every signal is built as a linear combination of a few atoms from the dictionary D. Effective: Recent works adopt this model and successfully deploy it to applications. Empirically established: Neurological studies show similarity between this model and early vision processes. [Olshausen & Field (’96)] Multiply by D M An Introduction to Sparse Representation And the K-SVD Algorithm Ron Rubinstein

8 M T Transforms in Sparseland ? M M
Assume that x is known to emerge from M How about "Given x, find the α that generated it in " ? M T An Introduction to Sparse Representation And the K-SVD Algorithm Ron Rubinstein

9 Problem Statement We need to solve an under-determined linear system of equations Dα = x (D known). Among all (infinitely many) possible solutions we want the sparsest! We will measure sparsity using the L0 "norm": ‖α‖₀ = #{i : αᵢ ≠ 0}.

10 Measure of Sparsity? Consider the Lp measures ‖α‖ₚᵖ = Σᵢ |αᵢ|ᵖ. As p → 0, each term |αᵢ|ᵖ tends to 1 for αᵢ ≠ 0 and to 0 for αᵢ = 0, so we get a count of the non-zeros in the vector.
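A quick numerical check of this limit, on a made-up sparse vector:

```python
import numpy as np

# A sparse vector: 3 non-zeros out of 10 entries.
alpha = np.array([0.0, 1.5, 0.0, -0.2, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0])

def lp_power(v, p):
    """Return ||v||_p^p = sum_i |v_i|^p (only over the non-zero entries)."""
    nz = v[v != 0]
    return float(np.sum(np.abs(nz) ** p))

# As p shrinks toward 0, ||v||_p^p approaches the number of non-zeros.
for p in (2.0, 1.0, 0.5, 0.1, 0.01):
    print(p, lp_power(alpha, p))

nnz = np.count_nonzero(alpha)   # the L0 "norm" itself
```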

11 Where We Are
A sparse & random vector α generates the signal x = Dα; from x we try to recover the representation by solving (P0): min ‖α‖₀ s.t. Dα = x. Three major questions:
– Uniqueness: is the solution of (P0) equal to the original α?
– The problem is NP-hard: are there practical ways to approximate its solution?
– How do we know D?

12 M Q Inverse Problems in Sparseland ? M
Assume that x is known to emerge from . M M Suppose we observe , a degraded and noisy version of x with How do we recover x? How about "find the α that generated y " ? Noise Q An Introduction to Sparse Representation And the K-SVD Algorithm Ron Rubinstein

13 Inverse Problem Statement
A sparse & random vector α is multiplied by D and then "blurred" by H, giving y = HDα + v. Three major questions (again!):
– Is the recovered representation close to the original α?
– How can we compute it?
– What D should we use?

14 Agenda
1. A Visit to Sparseland – introducing sparsity & overcompleteness
2. Transforms & Regularizations – how & why should this work?
3. What about the dictionary? – the quest for the origin of signals
4. Putting it all together – image filling, denoising, compression, …

15 The Sparse Coding Problem
Our dream for now: find the sparsest solution to Dα = x, with both D and x known. Put formally: (P0): min ‖α‖₀ subject to Dα = x.

16 Question 1 – Uniqueness?
Suppose we can solve (P0) exactly. Why should we necessarily get the original α? It might happen that eventually we find a different, even sparser, representation of the same x.

17 Matrix "Spark" Definition [Donoho & Elad ('02)]: given a matrix D, σ = Spark{D} is the smallest number of columns that are linearly dependent. By definition, if Dv = 0 then ‖v‖₀ ≥ σ. Say I have a representation α₁ and you have α₂, and the two are different representations of the same x. Then D(α₁ − α₂) = 0, and therefore ‖α₁‖₀ + ‖α₂‖₀ ≥ ‖α₁ − α₂‖₀ ≥ σ.
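Computing the spark is itself combinatorial, but for a tiny matrix it can be found by brute force. A sketch (the 2×4 matrix below is made up for illustration):

```python
import itertools
import numpy as np

def spark(D, tol=1e-10):
    """Smallest number of linearly dependent columns of D.
    Brute force -- exponential in the number of columns, tiny examples only."""
    n_cols = D.shape[1]
    for k in range(1, n_cols + 1):
        for cols in itertools.combinations(range(n_cols), k):
            if np.linalg.matrix_rank(D[:, cols], tol=tol) < k:
                return k
    return n_cols + 1   # no dependent subset: all columns independent

# Columns 0 and 3 are parallel, so two columns already suffice: spark = 2.
D = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 0.0]])
print(spark(D))   # -> 2
```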

18 Uniqueness Rule
Uniqueness [Donoho & Elad ('02)]: if we have a representation that satisfies ‖α‖₀ < σ/2, then necessarily it is the sparsest. Now, what if my candidate solution also satisfies this bound? The rule implies that it must equal the original α! So, if the model generates signals using "sparse enough" α, the solution of (P0) will find them exactly.

19 M Question 2 – Practical P0 Solver?
Are there reasonable ways to find ? Multiply by D An Introduction to Sparse Representation And the K-SVD Algorithm Ron Rubinstein

20 Matching Pursuit (MP) [Mallat & Zhang (1993)] MP is a greedy algorithm that finds one atom at a time. Step 1: find the one atom that best matches the signal. Next steps: given the previously found atoms, find the next one that best fits the residual. The Orthogonal MP (OMP) is an improved version that re-evaluates all the coefficients after each round.
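A minimal OMP sketch in NumPy (a standard textbook formulation, not the original code; the toy 3×4 dictionary is made up for illustration):

```python
import numpy as np

def omp(D, x, n_atoms):
    """Orthogonal Matching Pursuit: greedily pick the atom best matching the
    residual, then re-fit ALL chosen coefficients by least squares
    (the 'orthogonal' re-evaluation step)."""
    residual, support = x.astype(float).copy(), []
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if k not in support:
            support.append(k)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coeffs = np.zeros(D.shape[1])
    coeffs[support] = sol
    return coeffs

# Overcomplete 3x4 dictionary: the standard basis plus one diagonal atom.
D = np.array([[1.0, 0.0, 0.0, 1.0 / np.sqrt(3)],
              [0.0, 1.0, 0.0, 1.0 / np.sqrt(3)],
              [0.0, 0.0, 1.0, 1.0 / np.sqrt(3)]])
x = 2.0 * D[:, 0] + 1.0 * D[:, 3]     # a 2-sparse signal
alpha = omp(D, x, n_atoms=2)
print(alpha)                           # -> approximately [2, 0, 0, 1]
```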

21 Basis Pursuit (BP) [Chen, Donoho & Saunders (1995)]
Instead of solving (P0): min ‖α‖₀ s.t. Dα = x, solve (P1): min ‖α‖₁ s.t. Dα = x. The newly defined problem is convex (it can be cast as a linear program). Very efficient solvers can be deployed: interior point methods [Chen, Donoho & Saunders ('95)], iterated shrinkage [Figueiredo & Nowak ('03), Daubechies, Defrise & De Mol ('04), Elad ('05), Elad, Matalon & Zibulevsky ('06)].
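Iterated shrinkage is easy to sketch for the penalized (Lagrangian) form of (P1), min ½‖Dα − x‖² + λ‖α‖₁. The ISTA-style loop below is my own illustration, not the cited solvers, on a made-up toy dictionary:

```python
import numpy as np

def ista(D, x, lam=0.01, n_iter=2000):
    """Iterated soft-shrinkage (ISTA) for min 0.5*||D a - x||^2 + lam*||a||_1.
    Each iteration: a gradient step on the quadratic term, then a
    soft-threshold, which is the proximal map of the L1 penalty."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1/L, L = ||D||_2^2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - step * (D.T @ (D @ a - x))       # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return a

# Toy overcomplete dictionary: the standard basis plus one diagonal atom.
D = np.array([[1.0, 0.0, 0.0, 1.0 / np.sqrt(3)],
              [0.0, 1.0, 0.0, 1.0 / np.sqrt(3)],
              [0.0, 0.0, 1.0, 1.0 / np.sqrt(3)]])
x = 2.0 * D[:, 0] + 1.0 * D[:, 3]
a = ista(D, x, lam=0.01)
print(a)   # close to [2, 0, 0, 1] for small lam (L1 shrinks slightly)
```

The soft-threshold is where sparsity comes from: small coefficients are driven to exactly zero at every iteration.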

22 M Question 3 – Approx. Quality? MP/BP How effective are MP/BP ?
Multiply by D MP/BP How effective are MP/BP ? An Introduction to Sparse Representation And the K-SVD Algorithm Ron Rubinstein

23 BP and MP Performance Given a signal x with a representation x = Dα, if ‖α‖₀ < ½(1 + 1/μ(D)), where μ(D) is the mutual coherence of D (the largest absolute inner product between two distinct normalized atoms), then BP and MP are guaranteed to find it [Donoho & Elad ('02), Gribonval & Nielsen ('03), Tropp ('03), Temlyakov ('03)]. MP and BP are different in general (hard to say which is better). The above results correspond to the worst case. Average-performance results are available too, showing much better bounds [Donoho ('04), Candes et al. ('04), Tanner et al. ('05), Tropp et al. ('06)]. Similar results exist for general inverse problems [Donoho, Elad & Temlyakov ('04), Tropp ('04), Fuchs ('04), Gribonval et al. ('05)].

24 Agenda
1. A Visit to Sparseland – introducing sparsity & overcompleteness
2. Transforms & Regularizations – how & why should this work?
3. What about the dictionary? – the quest for the origin of signals
4. Putting it all together – image filling, denoising, compression, …

25 M Problem Setting Multiply by D
Given these P examples and a fixed size (NK) dictionary, how would we find D? An Introduction to Sparse Representation And the K-SVD Algorithm Ron Rubinstein

26 The Objective Function
The examples X are linear combinations of atoms from D: X ≈ DA. Each example has a sparse representation with no more than L atoms:
  min over D, A of ‖X − DA‖F²  s.t. ‖αⱼ‖₀ ≤ L for every example j
(N, K and L are assumed known; D has normalized columns).

27 K-SVD – An Overview [Aharon, Elad & Bruckstein ('04)]
Initialize D, then iterate two stages:
– Sparse coding: fix D and code the examples X, using MP or BP.
– Dictionary update: update D column by column, by an SVD computation.

28 K-SVD: Sparse Coding Stage
With D fixed, for the j-th example we solve min ‖xⱼ − Dαⱼ‖₂² s.t. ‖αⱼ‖₀ ≤ L. This is ordinary sparse coding!

29 K-SVD: Dictionary Update Stage
For the k-th atom we solve min over dₖ of ‖Eₖ − dₖaₖ‖F², where aₖ is the k-th row of coefficients and Eₖ = X − Σⱼ≠ₖ dⱼaⱼ is the residual without atom k.

30 K-SVD: Dictionary Update Stage
A plain rank-1 approximation of Eₖ would give the new dₖ and aₖ. But wait! What about sparsity? The new row aₖ would generally be dense. We can do better than this.

31 K-SVD: Dictionary Update Stage
We want to solve the same rank-1 problem while preserving sparsity. Only some of the examples use column dₖ! When updating dₖ and aₖ, recompute only the coefficients corresponding to those examples: restrict Eₖ (and aₖ) to the examples that use dₖ, and solve the restricted rank-1 problem with the SVD.

32 The K-SVD Algorithm – Summary
Initialize D, then repeat: sparse coding of X (using MP or BP), followed by a dictionary update, column by column, by SVD computation.
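The whole loop fits in a few lines of NumPy. A compact sketch (not the authors' implementation; it uses OMP as the coding stage and adds the common practice of re-seeding unused atoms, and the toy training data are made up):

```python
import numpy as np

def omp1(D, x, L):
    """Greedy OMP used as the sparse-coding stage (at most L atoms)."""
    residual, support = x.copy(), []
    for _ in range(L):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    a = np.zeros(D.shape[1])
    a[support] = sol
    return a

def ksvd(X, n_atoms, L, n_iter=15, seed=0):
    """K-SVD sketch: alternate OMP sparse coding with column-by-column
    rank-1 SVD updates of the dictionary."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((n_atoms, X.shape[1]))
    for _ in range(n_iter):
        A = np.column_stack([omp1(D, x, L) for x in X.T])   # sparse coding
        for k in range(n_atoms):                            # dictionary update
            users = np.nonzero(A[k])[0]     # only the examples using atom k
            if users.size == 0:             # unused atom: re-seed it from the
                j = int(np.argmax(np.linalg.norm(X - D @ A, axis=0)))
                D[:, k] = X[:, j] / np.linalg.norm(X[:, j])
                continue
            A[k, users] = 0.0
            E = X[:, users] - D @ A[:, users]   # residual without atom k
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                   # best rank-1 fit: new atom
            A[k, users] = s[0] * Vt[0]          # and its (still sparse) coeffs
    return D, A

# Recover a hidden 2-atom dictionary from 1-sparse training signals.
rng = np.random.default_rng(1)
D_true = np.linalg.qr(rng.standard_normal((8, 2)))[0]   # two orthonormal atoms
idx = rng.integers(0, 2, size=200)
X = D_true[:, idx] * rng.standard_normal(200)
D, A = ksvd(X, n_atoms=2, L=1)
rel_err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
```

Note how the update writes the new coefficients only into the `users` columns, which is exactly the sparsity-preserving restriction of the previous slide.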

33 Agenda
1. A Visit to Sparseland – introducing sparsity & overcompleteness
2. Transforms & Regularizations – how & why should this work?
3. What about the dictionary? – the quest for the origin of signals
4. Putting it all together – image filling, denoising, compression, …

34 Image Inpainting: Theory
Assumption: the signal x was created by x = Dα₀ with a very sparse α₀. Missing values in x imply missing rows in this linear system. By removing these rows we get a reduced system x̃ = D̃α₀. Now solve min ‖α‖₀ s.t. x̃ = D̃α. If α₀ was sparse enough, it will be the solution of this reduced problem as well! Thus, computing Dα₀ recovers x perfectly.

35 Inpainting: The Practice
We define a diagonal mask operator W representing the lost samples, so that y = Wx. Given y, we try to recover the representation of x by solving min ‖α‖₀ s.t. ‖WDα − y‖₂ ≤ ε. We use a dictionary that combines two dictionaries, to get an effective representation of both the texture and the cartoon contents. This also leads to image separation [Elad, Starck & Donoho ('05)].
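The theory slide can be checked numerically on a toy system: remove rows of D and x, sparse-code what remains, and multiply by the full D. A minimal sketch (the 4×5 dictionary is made up; greedy OMP stands in for the exact P0 solver, and the surviving columns are renormalized before coding):

```python
import numpy as np

def omp(D, x, L):
    """Greedy OMP: sparse-code x with at most L atoms."""
    residual, support = x.copy(), []
    for _ in range(L):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    a = np.zeros(D.shape[1])
    a[support] = sol
    return a

# Overcomplete 4x5 dictionary: the standard basis plus one extra atom.
d5 = np.array([1.0, 2.0, 3.0, 4.0]) / np.sqrt(30.0)
D = np.column_stack([np.eye(4), d5])
x = 2.0 * D[:, 0] + 1.0 * D[:, 4]            # x = D @ alpha0, alpha0 2-sparse

keep = np.array([True, True, True, False])   # the last sample of x is missing

# Remove the corresponding rows, renormalize the surviving columns, code.
D_sub = D[keep]
norms = np.linalg.norm(D_sub, axis=0)
norms[norms == 0] = 1.0                      # guard fully-masked atoms
alpha = omp(D_sub / norms, x[keep], L=2) / norms   # undo the renormalization

x_hat = D @ alpha                            # recovers x on ALL four samples
```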

36 Inpainting Results Dictionary: curvelet (cartoon) + global DCT (texture). Source vs. outcome.

37 Inpainting Results Dictionary: curvelet (cartoon) + overlapped DCT (texture). Source vs. outcome.

38 Inpainting Results Reconstructions from images with 20%, 50% and 80% of the samples missing.

39 Denoising: Theory and Practice
Given a noisy image y, we can clean it by solving min ‖α‖₀ s.t. ‖Dα − y‖₂ ≤ ε. Can we use the K-SVD dictionary? With K-SVD, we cannot train a dictionary for an entire image. How do we go from local treatment of patches to a global prior? Solution: force shift-invariant sparsity on each N×N patch of the image, including overlaps.

40 From Local to Global Treatment
For patches, our MAP penalty becomes
  min over x, {αᵢⱼ} of  λ‖x − y‖₂² + Σᵢⱼ μᵢⱼ‖αᵢⱼ‖₀ + Σᵢⱼ ‖Dαᵢⱼ − Rᵢⱼx‖₂²
where Rᵢⱼ is the operator that extracts the (i,j)-th patch; the patch terms play the role of our prior.

41 What Data to Train On?
Option 1: use a database of images. Works quite well (~0.5–1 dB below the state of the art).
Option 2: use the corrupted image itself! Simply sweep through all N×N patches (with overlaps) and use them to train. An image of size 1000×1000 pixels supplies on the order of 10⁶ examples – more than enough. This works much better!

42 Image Denoising: The Algorithm
We minimize the penalty above by alternating over its unknowns:
– With x and D known, compute the αᵢⱼ per patch using matching pursuit.
– With x and the αᵢⱼ known, compute D to minimize the penalty using the SVD, updating one column at a time (K-SVD).
– With D and the αᵢⱼ known, compute x by x = (λI + Σᵢⱼ RᵢⱼᵀRᵢⱼ)⁻¹(λy + Σᵢⱼ RᵢⱼᵀDαᵢⱼ), which is a simple averaging of the denoised, shifted patches with the noisy image.
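The third step is pixelwise, since Σᵢⱼ RᵢⱼᵀRᵢⱼ is diagonal: each entry just counts how many patches cover that pixel. A minimal sketch of only this averaging step (the function name, λ value and toy data are illustrative, not from the paper):

```python
import numpy as np

def average_patches(y, denoised_patches, n, lam=0.5):
    """Fuse overlapping denoised n-x-n patches back into an image:
    x = (lam*y + sum of shifted-back patches) / (lam + per-pixel patch count).
    denoised_patches[i, j] is the cleaned patch with top-left corner (i, j)."""
    accum = np.zeros_like(y, dtype=float)
    count = np.zeros_like(y, dtype=float)
    H, W = y.shape
    for i in range(H - n + 1):
        for j in range(W - n + 1):
            accum[i:i + n, j:j + n] += denoised_patches[i, j]
            count[i:i + n, j:j + n] += 1.0
    return (lam * y + accum) / (lam + count)

# Sanity check: if every "denoised" patch equals the corresponding patch of a
# clean image c, the fusion returns a blend of y and c -- and exactly c as lam -> 0.
c = np.arange(36, dtype=float).reshape(6, 6)    # clean toy image
y = c + 1.0                                      # "noisy" observation
patches = {(i, j): c[i:i + 3, j:j + 3] for i in range(4) for j in range(4)}
x = average_patches(y, patches, n=3, lam=1e-9)
```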

43 Denoising Results Shown: the source image, the noisy image, the initial dictionary (overcomplete DCT, 64×256), the dictionary obtained after 10 iterations, and the resulting denoised image.

44 Denoising Results: 3D Source: Visible Male head (slice #137). Noisy data: PSNR = 12 dB. 2D-KSVD result: PSNR = 27.3 dB. 3D-KSVD result: PSNR = 32.4 dB.

45 Image Compression
Problem: compressing photo-ID images. General-purpose methods (JPEG, JPEG 2000) do not take the specific image family into account. By adapting to the image content, better results can be obtained.

46 Compression: The Algorithm
Training (on a set of 2500 images): detect the main features and align the images to a common reference (20 parameters); divide each image into disjoint 15×15 patches, and for each patch location compute a unique dictionary.
Compression (per new image): detect the features and align; divide into the disjoint patches and sparse-code each patch; quantize and entropy-code.

47 Compression Results
Results for 820 bytes per image; the values are RMSE, for three test images.
           JPEG    JPEG 2000   PCA     K-SVD
Image 1:   11.99   10.49       8.81    5.56
Image 2:   10.83    8.92       7.89    4.82
Image 3:   10.93    8.71       8.61    5.58

48 Compression Results
Results for 550 bytes per image; the values are RMSE, for three test images.
           JPEG    JPEG 2000   PCA     K-SVD
Image 1:   15.81   13.89      10.66    6.60
Image 2:   14.67   12.41       9.44    5.49
Image 3:   15.30   12.57      10.27    6.36

49 Today We Have Discussed
1. A Visit to Sparseland – introducing sparsity & overcompleteness
2. Transforms & Regularizations – how & why should this work?
3. What about the dictionary? – the quest for the origin of signals
4. Putting it all together – image filling, denoising, compression, …

50 Summary
– Sparsity and over-completeness are important ideas for designing better tools in signal and image processing.
– Coping with an NP-hard problem: approximation algorithms can be used; they are theoretically established and work well in practice.
– What dictionary to use? Several dictionaries already exist, and we have shown how to practically train D using the K-SVD.
– How is all this used? We have seen inpainting, denoising and compression algorithms.
– What next? (a) Generalizations: multiscale, non-negative, …; (b) speed-ups and improved algorithms; (c) deployment in other applications.

51 Why Over-Completeness?
[Figure: DCT coefficient magnitudes |T{x₁ + 0.3x₂}| for a signal built from two atoms (weights 1.0 and 0.3), and |T{x₁ + 0.3x₂ + 0.5x₃ + 0.05x₄}| after adding two more components (weights 0.5 and 0.05); the added components destroy the sparsity of the DCT representation.]

52 Desired Decomposition
[Figure: with a combined dictionary, the same signal decomposes into a few DCT coefficients plus a few spike (identity) coefficients – a sparse representation overall.]

53 Inpainting Results
70% missing samples: DCT (RMSE = 0.04), Haar (RMSE = 0.045), K-SVD (RMSE = 0.03).
90% missing samples: DCT (RMSE = 0.085), Haar (RMSE = 0.07), K-SVD (RMSE = 0.06).

