
1 Model-Based Compressive Sensing
Presenter: Jason David Bonior, ECE / CMR, Tennessee Technological University
November 5, 2010 Reading Group
(Richard G. Baraniuk, Volkan Cevher, Marco F. Duarte, Chinmay Hegde)

2 Outline
■ Introduction
■ Compressive Sensing
■ Beyond Sparse and Compressible Signals
■ Model-Based Signal Recovery Algorithms
■ Example: Wavelet Tree Model
■ Example: Block-Sparse Signals and Signal Ensembles
■ Conclusions

3 Introduction
■ Shannon/Nyquist sampling
  □ The sampling rate must be twice the Fourier bandwidth
  □ Not always feasible
■ Dimensionality can be reduced by representing the signal as a sparse set of coefficients in a basis expansion
  □ Sparse means that only K << N coefficients are nonzero and need to be transmitted/stored/etc.
■ Compressive sensing can be used instead of Nyquist sampling when the signal is known to be sparse or compressible

4 Background on Compressive Sensing: Sparse Signals
■ We can represent any signal x in terms of the coefficients of a basis expansion: x = Ψα
■ A signal is K-sparse iff only K << N of its coefficients are nonzero
■ The support of x, supp(x), is the list of indices of the nonzero entries
■ The set of all K-sparse signals is the union of the (N choose K) K-dimensional subspaces aligned with the coordinate axes of R^N
  □ Denote this union of subspaces by Σ_K

5 Background on Compressive Sensing: Compressible Signals
■ Many signals are not exactly sparse but can be well approximated by sparse signals
  □ Called "compressible signals"
■ A signal is compressible if its coefficients, when sorted in order of decreasing magnitude, decay according to a power law (the i-th largest magnitude is on the order of i^(-1/r))
  □ Because of this rapid decay, such signals can be approximated as K-sparse
    ▪ The error of such a K-term approximation, ||x - x_K||, also decays as a power of K

6 Background on Compressive Sensing: Compressible Signals
■ Approximating a compressible signal by a K-sparse signal is the basis of transform coding:
  □ Record the signal's full N samples
  □ Express the signal in terms of basis coefficients
  □ Discard all but the K largest coefficients
  □ Encode those coefficients and their locations
■ Transform coding has drawbacks:
  □ Must start with the full N samples
  □ Must compute all N coefficients
  □ Must encode the locations of the coefficients we keep
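The transform-coding steps above can be made concrete with a small NumPy sketch (illustrative only, not from the slides; the basis Psi, the test signal, and the sparsity level K are placeholder choices):

```python
import numpy as np

def transform_code(x, Psi, K):
    """Toy transform coder: compute all N coefficients in the basis Psi,
    keep only the K largest in magnitude, and record their locations."""
    alpha = Psi.T @ x                        # full set of N coefficients
    idx = np.argsort(np.abs(alpha))[-K:]     # locations of the K largest
    return alpha[idx], idx                   # kept coefficients + their locations

def transform_decode(coeffs, idx, Psi):
    """Rebuild the K-term approximation x_K from the kept coefficients."""
    alpha_K = np.zeros(Psi.shape[1])
    alpha_K[idx] = coeffs
    return Psi @ alpha_K

# Example: identity basis, so the coefficients are the samples themselves.
N, K = 32, 4
x = np.zeros(N)
x[[3, 7, 20, 25]] = [5.0, -2.0, 1.5, 0.7]
coeffs, idx = transform_code(x, np.eye(N), K)
x_K = transform_decode(coeffs, idx, np.eye(N))
print(np.linalg.norm(x - x_K))   # 0.0 for an exactly 4-sparse signal
```

Note that all N samples must be acquired and all N coefficients computed before anything is discarded, which is exactly the inefficiency compressive sensing avoids.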

7 Background on Compressive Sensing: Restricted Isometry Property (RIP)
■ Compressive sensing combines signal acquisition and compression by taking M < N linear measurements y = Φx with a measurement matrix Φ
■ To recover a good estimate of the signal x from the M compressive measurements, the measurement matrix must satisfy the Restricted Isometry Property: it must approximately preserve the length of every K-sparse signal
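The RIP is combinatorial to verify exactly, but its meaning is easy to probe numerically. A minimal Monte Carlo sketch (assumed setup: Gaussian Φ scaled by 1/√M and random K-sparse test vectors; the sampled extremes only give a lower bound on the isometry constant, not a certificate):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K, trials = 256, 80, 5, 2000

# i.i.d. Gaussian measurement matrix, scaled so that E||Phi x||^2 = ||x||^2
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

ratios = []
for _ in range(trials):
    x = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)   # random K-sparse support
    x[support] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)

# If Phi satisfies the RIP with constant delta_K, every ratio lies in
# [1 - delta_K, 1 + delta_K]; the sampled extremes lower-bound delta_K.
print(min(ratios), max(ratios))
```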

8 Background on Compressive Sensing: Recovery Algorithms
■ Infinitely many coefficient vectors produce the same set of compressive measurements y = Φx
■ If we seek the sparsest x consistent with y, we can recover a K-sparse signal from M = 2K compressive measurements
  □ But this is a combinatorial, NP-complete problem and is not stable in the presence of noise
  □ We need another way to solve the problem
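To see why seeking the sparsest consistent x is combinatorial, here is a brute-force ℓ0 sketch (illustrative only) that enumerates every size-K support and solves a restricted least-squares problem on each; the number of supports, C(N, K), is what makes this infeasible at realistic sizes:

```python
import numpy as np
from itertools import combinations

def l0_brute_force(y, Phi, K):
    """Search all C(N, K) supports for the K-sparse x most consistent with y = Phi x."""
    N = Phi.shape[1]
    best_x, best_residual = None, np.inf
    for support in combinations(range(N), K):          # exponentially many supports
        cols = Phi[:, list(support)]
        coeffs, *_ = np.linalg.lstsq(cols, y, rcond=None)
        residual = np.linalg.norm(y - cols @ coeffs)
        if residual < best_residual:
            best_residual = residual
            best_x = np.zeros(N)
            best_x[list(support)] = coeffs
    return best_x

# Tiny example only: the loop has C(20, 2) = 190 iterations here,
# but C(1000, 10) would already be roughly 2.6e23.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((8, 20)) / np.sqrt(8)
x_true = np.zeros(20)
x_true[[3, 11]] = [1.0, -2.0]
print(np.allclose(l0_brute_force(Phi @ x_true, Phi, 2), x_true, atol=1e-8))
```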

9 Background on Compressive Sensing: Recovery Algorithms
■ Convex optimization
  □ A linear program, solvable in polynomial time
  □ Adaptations exist to handle noise
    ▪ Basis Pursuit with Denoising (BPDN), complexity-based regularization, and the Dantzig selector
■ Greedy search
  □ Matching Pursuit, Orthogonal Matching Pursuit, StOMP, Iterative Hard Thresholding (IHT), CoSaMP, Subspace Pursuit (SP)
    ▪ All use a best L-term approximation for the estimated signal
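Iterative Hard Thresholding is the simplest of the greedy methods listed above. A minimal sketch, assuming a fixed conservative step size and iteration count (practical implementations add step-size selection and stopping rules):

```python
import numpy as np

def hard_threshold(v, K):
    """Best K-term approximation: keep the K largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-K:]
    out[idx] = v[idx]
    return out

def iht(y, Phi, K, iters=300):
    """Iterative Hard Thresholding: gradient step on ||y - Phi x||^2, then project onto Sigma_K."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # conservative fixed step size
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * Phi.T @ (y - Phi @ x), K)
    return x

# Noise-free demo with an i.i.d. Gaussian measurement matrix.
rng = np.random.default_rng(2)
N, M, K = 256, 80, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x_hat = iht(Phi @ x_true, Phi, K)
print(np.linalg.norm(x_hat - x_true))          # should be small when M is large enough
```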

10 Background on Compressive Sensing: Performance Bounds on Signal Recovery
■ For compressive measurements with Φ satisfying the RIP:
  □ All ℓ1 techniques and the iterative techniques CoSaMP, SP, and IHT offer stable recovery with performance close to the optimal K-term approximation
  □ With a random Φ, all results hold with high probability
    ▪ In a noise-free setting these offer perfect recovery
    ▪ In the presence of noise, the mean-square error is bounded by a constant times the norm of the noise
    ▪ For an s-compressible signal with noise of bounded norm, the mean-square error additionally includes the K-term approximation error, which decays like K^(-s)

11 Beyond Sparse and Compressible Signals
■ The coefficients of both natural and man-made signals often exhibit interdependency
  □ We can model this structure in order to:
    ▪ Reduce the degrees of freedom
    ▪ Reduce the number of compressive measurements needed to reconstruct the signal

12 Beyond Sparse and Compressible Signals: Model-Sparse Signals

13 Beyond Sparse and Compressible Signals: Model-Based RIP
■ If x is known to lie in a structured sparsity model M_K, we can relax the RIP constraint on Φ: it need only hold for signals in the model rather than for all K-sparse signals

14 Beyond Sparse and Compressible Signals: Model-Compressible Signals

15 Beyond Sparse and Compressible Signals
■ Nested Model Approximations and Residual Subspaces
■ Restricted Amplification Property (RAmP)
  □ The number of compressive measurements M required for a random matrix to have the M_K-RIP is determined by the number of canonical subspaces m_K; this result does not extend to model-compressible signals
  □ We can instead analyze robustness by treating the part of the signal outside its K-term approximation as noise

16 Beyond Sparse and Compressible Signals
■ Restricted Amplification Property (RAmP)
  □ A matrix Φ has the (ε_K, r)-RAmP for the residual subspaces R_(j,K) of model M if it does not amplify the energy of any signal in those subspaces by more than a factor (1 + ε_K) j^(2r)
■ From this property we can determine the number of measurements M required for a random measurement matrix Φ to have the RAmP with high probability

17 Model-Based Signal Recovery Algorithms
■ For greedy algorithms, we simply replace the best K-term approximation step with the corresponding best K-term model-based approximation (see the sketch below)
■ These algorithms then have fewer subspaces to search, so fewer measurements are required to obtain the same accuracy as conventional CS
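One way to picture the substitution is to take a generic IHT-style iteration and swap its hard-thresholding step for a model-approximation callback. A minimal sketch; `model_approx` is a placeholder for whatever structured approximation the chosen model provides (for example the tree-based or block-based approximations discussed later):

```python
import numpy as np

def model_based_iht(y, Phi, model_approx, iters=300, step=None):
    """IHT with the generic K-term thresholding replaced by a model-based approximation.

    model_approx(v) must return the best approximation of v within the signal
    model (e.g., a rooted-subtree or block-sparse approximation)."""
    if step is None:
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # conservative fixed step size
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = model_approx(x + step * Phi.T @ (y - Phi @ x))
    return x
```

Passing plain K-term hard thresholding as `model_approx` recovers conventional IHT; the paper's model-based CoSaMP makes the analogous substitution inside CoSaMP's support-identification and pruning steps.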

18 Model-Based Signal Recovery Algorithms: Model-Based CoSaMP
■ CoSaMP was chosen because:
  □ It offers robust recovery on par with the best convex-optimization approaches
  □ It has a simple iterative greedy structure which can be easily modified for the model-based case

19 Model-Based Signal Recovery Algorithms: Performance of Model-Sparse Signal Recovery

20 Model-Based Signal Recovery Algorithms: Performance of Model-Compressible Signal Recovery
■ We use the RAmP as a condition on the measurement matrix Φ to obtain a robustness guarantee for signal recovery from noisy measurements

21 Model-Based Signal Recovery Algorithms: Robustness to Model Mismatch
■ Model mismatch occurs when the chosen model does not exactly match the signal we are trying to recover
■ Best case: the signal is close to the model
  □ Model-based CoSaMP still admits recovery guarantees under a slight sparsity mismatch
  □ And under a slight compressibility mismatch
■ Worst case: we end up requiring the same number of measurements as conventional CS

22 Model-Based Signal Recovery Algorithms: Computational Complexity of Model-Based Recovery
■ Model-based algorithms differ from the standard forms of the algorithms in two ways:
  □ There is a reduction in the number of required measurements, which reduces the computational complexity
  □ The K-term approximation can be implemented using a simple sorting algorithm (a low-cost implementation)

23 Example: Wavelet Tree Model
■ Wavelet coefficients can be naturally organized into a tree structure, with the largest coefficients clustering together along the branches of the tree
  □ This motivated the authors to adopt a connected tree model for the wavelet coefficients
    ▪ Previous tree-based work did not provide bounds on the number of compressive measurements

24 Example: Wavelet Tree Model - Tree-Sparse Signals
■ The wavelet representation of a signal x consists of a coarse scaling coefficient plus wavelet coefficients at a nested hierarchy of scales
■ The nested supports create a parent/child relationship between the wavelet coefficients at adjacent scales
■ Discontinuities in the signal create large coefficients that chain from the root down to a leaf
  □ This relationship has been exploited in many wavelet processing and compression algorithms
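For a 1-D dyadic wavelet decomposition, the parent/child relationship can be read directly off the coefficient indices. A small sketch, assuming the common binary-tree layout in which the root has index 1 and node i has children 2i and 2i+1:

```python
def parent(i):
    """Parent of wavelet coefficient i in a binary tree (root has index 1)."""
    return i // 2

def children(i, N):
    """Children of coefficient i, if they exist within an N-coefficient tree."""
    return [c for c in (2 * i, 2 * i + 1) if c < N]

# A coefficient at index 13 sits below 6, then 3, then finally the root 1:
i = 13
while i > 1:
    print(i, "-> parent", parent(i))
    i = parent(i)
```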

25 Example: Wavelet Tree Model - Tree-Sparse Signals

26 Example: Wavelet Tree Model - Tree-Based Approximation
■ The optimal tree-based approximation keeps the best rooted, connected subtree of K wavelet coefficients
  □ An efficient algorithm exists: the Condensing Sort and Select Algorithm (CSSA)
    ▪ CSSA solves the problem by condensing nonmonotonic segments of the branches using an iterative sort and average
  □ Subtree approximations coincide with plain K-term approximations when the wavelet coefficients are monotonically non-increasing along the tree branches out from the root

27 Example: Wavelet Tree Model - Tree-Based Approximation
■ CSSA solves the tree-approximation problem by condensing nonmonotonic segments of the branches using an iterative sort and average
  □ The condensed groups of nodes are called supernodes
  □ The algorithm can also be implemented as a greedy search among nodes (a simplified sketch follows below):
    ▪ For each node, calculate the average wavelet coefficient over the subtrees rooted at that node
    ▪ Record the largest such average as the energy of that node
    ▪ Repeatedly select the unselected node with the largest energy and add the corresponding subtree to the estimated support as a supernode
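The full CSSA, with its supernode condensing, is more involved than fits here. Below is a deliberately simplified greedy sketch that only enforces the rooted connected-subtree constraint by always adding the frontier node with the largest magnitude; it coincides with the optimal subtree (and plain K-term) approximation in the monotone-decay case mentioned on the previous slide, but it is not the full algorithm:

```python
import heapq
import numpy as np

def greedy_tree_support(w, K):
    """Greedy connected-subtree support for tree-ordered wavelet coefficients w.

    w[1..N-1] are the coefficients (root at index 1, children of node i at
    2i and 2i+1; index 0 is unused).  Grows the support from the root,
    always adding the frontier node with the largest magnitude."""
    N = len(w)
    support = {1}
    frontier = [(-abs(w[c]), c) for c in (2, 3) if c < N]   # max-heap via negated keys
    heapq.heapify(frontier)
    while len(support) < K and frontier:
        _, i = heapq.heappop(frontier)
        support.add(i)
        for c in (2 * i, 2 * i + 1):
            if c < N:
                heapq.heappush(frontier, (-abs(w[c]), c))
    return sorted(support)

# Toy example: 14 coefficients with magnitudes decaying down the tree.
w = np.array([0, 9.0, 5.0, 4.0, 2.0, 1.0, 3.0, 0.5,
              0.2, 0.1, 0.1, 0.1, 0.4, 0.3, 0.1])
print(greedy_tree_support(w, 5))   # a rooted, connected subtree of 5 nodes
```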

28 Example: Wavelet Tree Model - Tree-Based Approximation

29 Example: Wavelet Tree Model - Tree-Compressible Signals
■ Tree-approximation classes contain signals whose wavelet coefficients have a loose decay from coarse to fine scales

30 Example: Wavelet Tree Model - Stable Tree-Based Recovery from Compressive Measurements

31 Example: Wavelet Tree Model - Experiments

32 Example: Wavelet Tree Model - Experiments
■ Monte Carlo simulation study of the impact of the number of measurements M on model-based and conventional recovery for a class of tree-sparse piecewise-polynomial signals
■ Each data point is the normalized recovery error averaged over 500 sample trials
■ For each trial:
  □ Generate a new piecewise-polynomial signal with five cubic polynomial pieces and randomly placed discontinuities
  □ Compute its K-term tree approximation using CSSA
  □ Measure the resulting signal using a matrix with i.i.d. Gaussian entries
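A sketch of the signal-generation and measurement part of one such trial (assumed details: random cubic coefficients and breakpoints, Φ scaled by 1/√M; the wavelet transform, CSSA approximation, and the two recovery algorithms are only described in the trailing comment, not implemented here):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, pieces = 1024, 300, 5

def piecewise_cubic(N, pieces, rng):
    """Random piecewise-polynomial test signal: cubic pieces, random breakpoints."""
    breaks = np.sort(rng.choice(np.arange(1, N), size=pieces - 1, replace=False))
    edges = np.concatenate(([0], breaks, [N]))
    x = np.empty(N)
    for a, b in zip(edges[:-1], edges[1:]):
        t = np.linspace(-1, 1, b - a)
        x[a:b] = np.polyval(rng.standard_normal(4), t)   # random cubic on this piece
    return x

x = piecewise_cubic(N, pieces, rng)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)           # i.i.d. Gaussian measurements
y = Phi @ x

# In the experiment, x would first be replaced by its K-term tree approximation
# (via CSSA on its wavelet coefficients), recovered from y by CoSaMP and by
# model-based CoSaMP, and the normalized error ||x - x_hat|| / ||x|| averaged
# over 500 such trials for each value of M.
```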

33 Example: Wavelet Tree Model - Experiments

34 Example: Wavelet Tree Model - Experiments
■ Generated sample piecewise-polynomial signals as before
■ Computed the K-term tree approximation of each
■ Computed M measurements of each approximation
■ Added Gaussian noise of a fixed expected norm
■ Recovered the signal using both CoSaMP and model-based recovery
■ Measured the recovery error for each case

35 Example: Wavelet Tree Model - Experiments

36 Example: Wavelet Tree Model - Experiments

37 Example: Block-Sparse Signals and Signal Ensembles
■ In block-sparse signals, the locations of the significant coefficients cluster in blocks under a specific sorting order
■ This structure has been investigated in CS applications:
  □ DNA microarrays
  □ Magnetoencephalography
■ A similar problem arises in CS for signal ensembles, such as sensor networks and MIMO communication
  □ Several signals share a common coefficient support set
  □ The signals can be reshaped into a single vector by concatenation, with the coefficients rearranged so that the resulting vector has block sparsity (see the sketch below)
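The rearrangement in the last bullet can be seen in a few lines of NumPy (assumed layout: J length-N signals with a common support, interleaved so that the n-th coefficient of every signal forms one block of length J):

```python
import numpy as np

rng = np.random.default_rng(4)
J, N, K = 4, 64, 3

# J signals sharing the same K-sparse support (rows of X).
support = rng.choice(N, size=K, replace=False)
X = np.zeros((J, N))
X[:, support] = rng.standard_normal((J, K))

# Plain concatenation is JK-sparse with a scattered support; interleaving
# (column-major flatten) groups the shared support into K blocks of size J.
concatenated = X.reshape(-1)        # [x1; x2; ...; xJ]
block_sparse = X.T.reshape(-1)      # coefficient n of every signal is contiguous

nonzero_blocks = {i // J for i in np.flatnonzero(block_sparse)}
print(sorted(nonzero_blocks), sorted(support))   # the same K block indices
```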

38 Example: Block-Sparse Signals and Signal Ensembles
■ Block-Sparse Signals
■ Block-Based Approximation
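The block-based approximation itself is simple in spirit: score each length-J block by its ℓ2 energy and keep the K highest-scoring blocks. A minimal sketch using the block layout from the previous example:

```python
import numpy as np

def block_approx(v, J, K):
    """Keep the K blocks (of length J) with the largest l2 energy, zero the rest."""
    blocks = v.reshape(-1, J)             # one row per block
    energy = np.sum(blocks ** 2, axis=1)
    keep = np.argsort(energy)[-K:]        # the K most energetic blocks
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.reshape(-1)

v = np.array([0.0, 0.0, 3.0, -1.0, 0.2, 0.0, 5.0, 4.0])
print(block_approx(v, J=2, K=2))   # keeps blocks (3, -1) and (5, 4), zeroes the rest
```

This is exactly the kind of routine that can play the role of `model_approx` in the model-based recovery sketch earlier.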

39 Example: Block-Sparse Signals and Signal Ensembles
■ Block-Compressible Signals

40 Example: Block-Sparse Signals and Signal Ensembles
■ Stable Block-Based Recovery from Compressive Measurements
  □ The same number of measurements is required for block-sparse and block-compressible signals
  □ The bound on the number of measurements required is M = O(K log(N/K) + JK)
  □ The first term of this bound matches the order of the bound for conventional CS
  □ The second term represents a linear dependence on the block size J
    ▪ The number of measurements is thus M = O(JK + K log(N/K))
      ▫ An improvement over conventional CS
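A rough feel for the improvement, treating the O(·) expressions as if their constants were 1, using natural logarithms, and taking the conventional-CS requirement for the concatenated JK-sparse, length-JN vector to be on the order of JK log(N/K) (purely illustrative arithmetic, not from the slides):

```python
import numpy as np

N, K = 1024, 10
for J in (2, 4, 8, 16):
    conventional = J * K * np.log(N / K)        # O(JK log(N/K)), unit constants
    block_model = J * K + K * np.log(N / K)     # O(JK + K log(N/K)), unit constants
    print(J, round(conventional), round(block_model))
```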

41 Example: Block-Sparse Signals and Signal Ensembles
■ Stable Block-Based Recovery from Compressive Measurements
  □ In a distributed setting, the dense M x JN measurement matrix can be broken into J pieces of size M x N; each sensor computes the compressive measurements of its own signal locally, and the results are summed to obtain the measurements of the complete concatenated vector
  □ According to the bound above, for large values of J the number of measurements required is lower than that required to recover each signal independently
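The distributed computation rests on a simple identity: if Φ = [Φ_1 ... Φ_J] is the dense M x JN matrix and x = [x_1; ...; x_J] is the concatenated ensemble, then Φx = Φ_1 x_1 + ... + Φ_J x_J, so each sensor can apply its own M x N block locally and the fusion center just sums the J partial measurement vectors. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, J = 20, 50, 3

Phi_blocks = [rng.standard_normal((M, N)) for _ in range(J)]   # one M x N block per sensor
signals = [rng.standard_normal(N) for _ in range(J)]

# Centralized: dense M x JN matrix applied to the concatenated signal.
y_central = np.hstack(Phi_blocks) @ np.concatenate(signals)

# Distributed: each sensor measures locally, the fusion center sums.
y_distributed = sum(Phi @ x for Phi, x in zip(Phi_blocks, signals))

print(np.allclose(y_central, y_distributed))   # True
```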

42 Example: Block-Sparse Signals and Signal Ensembles
■ Experiments
  □ Comparison of model-based recovery to CoSaMP for block-sparse signals
  □ The model-based procedures are several times faster than convex-optimization-based procedures

43 Example: Block-Sparse Signals and Signal Ensembles

44 Example: Block-Sparse Signals and Signal Ensembles

45 Conclusions
■ Signal models can produce significant performance gains over conventional CS
■ The wavelet-tree procedure offers a considerable speed-up
■ The block-sparse procedure can recover signals with fewer measurements than having each sensor recover its signal independently
■ Future work:
  □ The authors have only considered models that are geometrically described as unions of subspaces; there may be potential to extend these models to more complex geometries
  □ It may be possible to integrate these models into other iterative algorithms

46 Thank you!

