
1 Sensing via Dimensionality Reduction Structured Sparsity Models Volkan Cevher volkan@rice.edu

2 Sensors [examples: 160 MP imaging; 200,000 fps video; 192,000 Hz audio; 1977 – 5 hours vs. 2009 – real time]

3 Digital Data Acquisition. Foundation: the Shannon/Nyquist sampling theorem (in time and in space): “if you sample densely enough (at the Nyquist rate), you can perfectly reconstruct the original analog data”
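
As a quick reference (the standard statement, not quoted from the slide): a signal bandlimited to B Hz is perfectly reconstructed from uniform samples x(nT) whenever the sampling rate f_s = 1/T satisfies f_s >= 2B, via the interpolation formula

x(t) = \sum_{n=-\infty}^{\infty} x(nT)\, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right).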

4 Major Trends in Sensing –higher resolution / denser sampling –large numbers of sensors –increasing # of modalities / mobility

5 Major Trends in Sensing. Motivation: –solve bigger / more important problems –decrease acquisition times / costs –entertainment…

6 Problems of the Current Paradigm Sampling at Nyquist rate –expensive / difficult Data deluge –communications / storage Sample then compress –not future proof

7 Approaches Do nothing / Ignore be content with where we are… –generalizes well –robust

8 Approaches Finite Rate of Innovation Sketching / Streaming Compressive Sensing [Vetterli, Marziliano, Blu; Blu, Dragotti, Vetterli, Marziliano, Coulot; Gilbert, Indyk, Strauss, Cormode, Muthukrishnan; Donoho; Candes, Romberg, Tao; Candes, Tao]

9 Approaches Finite Rate of Innovation Sketching / Streaming Compressive Sensing [Vetterli, Marziliano, Blu; Blu, Dragotti, Vetterli, Marziliano, Coulot; Gilbert, Indyk, Strauss, Cormode, Muthukrishnan; Donoho; Candes, Romberg, Tao; Candes, Tao] SPARSITY

10 Agenda –A short review of compressive sensing –Beyond sparsity: structure incorporated… –Source localization application –Conclusions

11 Compressive Sensing 101 Goal: Recover a sparse or compressible signal from measurements Problem: Random projection not full rank Solution: Exploit the sparsity/compressibility geometry of acquired signal

12 Compressive Sensing 101. Goal: Recover a sparse or compressible signal from measurements. Problem: Random projection is not full rank, but it satisfies the Restricted Isometry Property (RIP) when its entries are, e.g., –iid Gaussian –iid Bernoulli –… Solution: Exploit the sparsity/compressibility geometry of the acquired signal

13 Compressive Sensing 101. Goal: Recover a sparse or compressible signal from measurements. Problem: Random projection is not full rank. Solution: Exploit the model geometry of the acquired signal (a small numerical sketch of the measurement model follows below)
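
As a concrete illustration of the measurement model on these slides, here is a minimal numerical sketch (my own example, assuming NumPy; the dimensions and the Gaussian matrix are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 1024, 128, 10          # ambient dimension, measurements, sparsity (arbitrary)

# K-sparse signal: only K of the N coordinates are nonzero
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

# Random iid Gaussian projection: not full rank (M << N), but well-behaved on sparse vectors
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Compressive measurements: far fewer numbers than Nyquist-rate samples
y = Phi @ x
print(y.shape)                   # (128,)
```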

14 Concise Signal Structure. Sparse signal: only K out of N coordinates nonzero –model: union of K-dimensional subspaces aligned w/ coordinate axes [plot: coefficient magnitude vs. sorted index]

15 Concise Signal Structure. Sparse signal: only K out of N coordinates nonzero –model: union of K-dimensional subspaces. Compressible signal: sorted coordinates decay rapidly to zero –model: weak ℓp ball [plot: power-law decay vs. sorted index]
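
For reference, the textbook way to make "weak ℓp ball" and "power-law decay" precise (a standard definition, not quoted from the slide): the sorted coefficient magnitudes satisfy

|x|_{(i)} \le C\, i^{-1/p}, \qquad 0 < p < 2,

which implies that the best K-term approximation error decays like \|x - x_K\|_2 \lesssim C\, K^{1/2 - 1/p}.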

16 Concise Signal Structure. Sparse signal: only K out of N coordinates nonzero –model: union of K-dimensional subspaces. Compressible signal: sorted coordinates decay rapidly to zero –well-approximated by a K-sparse signal (simply by thresholding; see the sketch below) [plot: sorted index]
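
A small sketch of "simply by thresholding" (illustrative Python, assuming NumPy; the power-law test signal is a made-up example):

```python
import numpy as np

def best_k_term(x, K):
    """Best K-term approximation: keep the K largest-magnitude entries, zero the rest."""
    x_hat = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-K:]
    x_hat[idx] = x[idx]
    return x_hat

# Compressible example: power-law-decaying magnitudes in a random order
rng = np.random.default_rng(1)
N, K = 1024, 32
x = rng.permutation(np.arange(1, N + 1) ** -1.5) * rng.choice([-1.0, 1.0], N)
print(np.linalg.norm(x - best_k_term(x, K)) / np.linalg.norm(x))   # small relative error
```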

17 Restricted Isometry Property (RIP). Preserves the structure of sparse/compressible signals. RIP of order 2K implies approximate distance preservation for all K-sparse x1 and x2 (the inequality is spelled out below). [figure: K-planes]
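
The inequality referred to here is the standard RIP statement: for all K-sparse x_1 and x_2,

(1 - \delta_{2K})\, \|x_1 - x_2\|_2^2 \;\le\; \|\Phi (x_1 - x_2)\|_2^2 \;\le\; (1 + \delta_{2K})\, \|x_1 - x_2\|_2^2,

i.e., \Phi acts as a near-isometry on all 2K-sparse differences.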

18 Restricted Isometry Property (RIP). Preserves the structure of sparse/compressible signals. A random subGaussian (iid Gaussian, Bernoulli) matrix has the RIP with high probability if M is large enough (bound below). [figure: K-planes]
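
The sufficient condition on the number of measurements is, up to constants, the familiar bound

M = O\!\left(K \log \frac{N}{K}\right).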

19 Recovery Algorithms. Goal: given the measurements y, recover the sparse/compressible signal x. Convex optimization formulations –basis pursuit, Dantzig selector, Lasso, … Greedy algorithms –orthogonal matching pursuit (OMP), iterative thresholding (IT), compressive sampling matching pursuit (CoSaMP) –at their core: iterative sparse approximation
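
For concreteness, the basis pursuit formulation named above (equality-constrained and noise-aware variants):

\min_{x} \|x\|_1 \ \text{s.t.}\ \Phi x = y \qquad \text{or} \qquad \min_{x} \|x\|_1 \ \text{s.t.}\ \|\Phi x - y\|_2 \le \epsilon.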

20 Performance of Recovery. Using convex (ℓ1) methods, IT, CoSaMP: Sparse signals –noise-free measurements: exact recovery –noisy measurements: stable recovery. Compressible signals –recovery as good as the K-sparse approximation: CS recovery error ≲ signal K-term approx error + noise
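
Written out, the schematic bound on the last line takes the standard form (up to constants)

\|x - \hat{x}\|_2 \;\le\; C_1\, \frac{\|x - x_K\|_1}{\sqrt{K}} \;+\; C_2\, \|n\|_2,

where x_K is the best K-term approximation of x and n is the measurement noise.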

21 From Sparsity to Structured Sparsity

22 Sparse Models –wavelets: natural images –Gabor atoms: chirps/tones –pixels: background-subtracted images

23 Sparse Models. Sparse/compressible signal model captures simplistic primary structure. [figure: sparse image]

24 Beyond Sparse Models. Sparse/compressible signal model captures simplistic primary structure. Modern compression/processing algorithms capture richer secondary coefficient structure –wavelets: natural images –Gabor atoms: chirps/tones –pixels: background-subtracted images

25 Sparse Signals Defn: K-sparse signals comprise a particular set of K-dim canonical subspaces

26 Model-Sparse Signals Defn: A K-sparse signal model comprises a particular (reduced) set of K-dim canonical subspaces

27 Model-Sparse Signals Defn: A K-sparse signal model comprises a particular (reduced) set of K-dim canonical subspaces. Structured subspaces ⇒ fewer subspaces ⇒ relaxed RIP ⇒ fewer measurements

28 Model-Sparse Signals Defn: A K-sparse signal model comprises a particular (reduced) set of K-dim canonical subspaces. Structured subspaces ⇒ increased signal discrimination ⇒ improved recovery perf. ⇒ faster recovery

29 Model-based CS Running Example: Tree-Sparse Signals [Baraniuk, VC, Duarte, Hegde]

30 Wavelet Sparse. Typical of wavelet transforms of natural signals and images (piecewise smooth). [figure: 1-D signal (amplitude vs. time) and its 1-D wavelet transform (coefficients vs. scale)]

31 Tree-Sparse Model: K-sparse coefficients + significant coefficients lie on a rooted subtree. Typical of wavelet transforms of natural signals and images (piecewise smooth)

32 Tree-Sparse Model: K-sparse coefficients + significant coefficients lie on a rooted subtree. Sparse approx: find best set of coefficients –sorting –hard thresholding. Tree-sparse approx: find best rooted subtree of coefficients –CSSA [Baraniuk] –dynamic programming [Donoho] (a simplified greedy sketch follows below)
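
Since the slide only names CSSA and dynamic programming, here is a much simpler greedy heuristic that illustrates the rooted-subtree constraint (a sketch under an assumed binary-tree indexing, not the cited algorithms):

```python
import numpy as np

def greedy_tree_approx(w, K):
    """Greedy K-node rooted-subtree approximation of coefficients w.

    Assumes a simple binary-tree indexing where node i has children 2*i+1 and
    2*i+2 (a hypothetical layout; real wavelet trees differ). This is NOT the
    CSSA or dynamic-programming algorithms cited on the slide, just a greedy
    heuristic that grows a connected subtree from the root.
    """
    N = len(w)
    selected = {0}                                   # always keep the root
    frontier = {c for c in (1, 2) if c < N}          # children of already-selected nodes
    while len(selected) < K and frontier:
        best = max(frontier, key=lambda i: abs(w[i]))
        selected.add(best)
        frontier.remove(best)
        frontier |= {c for c in (2 * best + 1, 2 * best + 2) if c < N}
    w_hat = np.zeros_like(w)
    idx = sorted(selected)
    w_hat[idx] = w[idx]
    return w_hat

w = np.random.default_rng(2).standard_normal(63)
print(np.flatnonzero(greedy_tree_approx(w, K=8)))    # 8 nodes forming a rooted subtree
```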

33 Sparse. Model: K-sparse coefficients. RIP: stable embedding. [figure: K-planes]

34 Tree-Sparse Model: K-sparse coefficients + significant coefficients lie on a rooted subtree. Tree-RIP: stable embedding. [figure: K-planes]

35 Tree-Sparse Model: K-sparse coefficients + significant coefficients lie on a rooted subtree. Tree-RIP: stable embedding. Recovery: new model-based algorithms [VC, Duarte, Hegde, Baraniuk; Baraniuk, VC, Duarte, Hegde]

36 Iterative Thresholding. Standard CS Recovery [Nowak, Figueiredo; Kingsbury, Reeves; Daubechies, Defrise, De Mol; Blumensath, Davies; …]: –update signal estimate –prune signal estimate (best K-term approx) –update residual (sketched in code below)
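
A minimal sketch of that loop as iterative hard thresholding (my own illustrative code, assuming NumPy; the step size and iteration count are naive choices, and practical implementations use adaptive steps):

```python
import numpy as np

def hard_threshold(x, K):
    """Keep the K largest-magnitude entries of x, zero out the rest."""
    x_hat = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-K:]
    x_hat[idx] = x[idx]
    return x_hat

def iht(y, Phi, K, n_iters=100, step=1.0):
    """Iterative hard thresholding: update estimate, prune to K terms, repeat."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        r = y - Phi @ x                                  # update the residual
        x = hard_threshold(x + step * (Phi.T @ r), K)    # update + prune (best K-term approx)
    return x

# Tiny noise-free demo with an iid Gaussian matrix
rng = np.random.default_rng(3)
N, M, K = 256, 128, 5
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_rec = iht(Phi @ x_true, Phi, K)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))   # small in this easy regime
```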

37 Iterative Model Thresholding. Model-based CS Recovery [VC, Duarte, Hegde, Baraniuk; Baraniuk, VC, Duarte, Hegde]: –update signal estimate –prune signal estimate (best K-term model approx) –update residual
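
The model-based variant changes only the pruning step; schematically (a sketch with a hypothetical model_approx interface, reusing the IHT loop above):

```python
import numpy as np

def model_iht(y, Phi, K, model_approx, n_iters=100, step=1.0):
    """Model-based IHT: same loop as iht() above, but the pruning step projects onto
    the structured-sparsity model (e.g. rooted subtrees) instead of plain thresholding."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        x = x + step * (Phi.T @ (y - Phi @ x))   # update signal estimate from the residual
        x = model_approx(x, K)                   # prune: best K-term *model* approximation
    return x

# e.g. model_iht(y, Phi, K, model_approx=greedy_tree_approx)   # tree-sparse recovery
```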

38 Tree-Sparse Signal Recovery [figure, N=1024, M=80: target signal; CoSaMP (MSE=1.12); L1-minimization (MSE=0.751); Tree-sparse CoSaMP (MSE=0.037)]

39 Compressible Signals. Real-world signals are compressible, not sparse. Recall: compressible ⟺ well approximated by sparse –compressible signals lie close to a union of subspaces –i.e., the approximation error decays rapidly with K. If Φ has the RIP, then both sparse and compressible signals are stably recoverable. [plot: sorted index]

40 Model-Compressible Signals. Model-compressible ⟺ well approximated by model-sparse –model-compressible signals lie close to a reduced union of subspaces –i.e., the model-approximation error decays rapidly with K

41 Model-Compressible Signals. Model-compressible ⟺ well approximated by model-sparse –model-compressible signals lie close to a reduced union of subspaces –i.e., the model-approximation error decays rapidly with K. While the model-RIP enables stable model-sparse recovery, the model-RIP is not sufficient for stable model-compressible recovery at the reduced measurement rate!

42 Stable Recovery. Stable model-compressible signal recovery at the reduced measurement rate requires that Φ have both: –RIP + Restricted Amplification Property (RAmP). RAmP: controls the nonisometry of Φ on the approximation's residual subspaces. [figure: optimal K-term model recovery (error controlled by RIP); optimal 2K-term model recovery (error controlled by RIP); residual subspace (error not controlled by RIP)]

43 Tree-RIP, Tree-RAmP. Theorem: An MxN iid subGaussian random matrix has the Tree(K)-RIP if M is sufficiently large. Theorem: An MxN iid subGaussian random matrix has the Tree(K)-RAmP if M is sufficiently large.
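
A commonly cited form of these bounds, stated here only up to constants as a reference point: the tree model removes the log factor, so

M = O(K) \quad \text{for the Tree}(K)\text{-RIP/RAmP}, \qquad \text{versus} \qquad M = O\!\left(K \log \frac{N}{K}\right) \text{ for the standard RIP}.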

44 Simulation. Number of samples for correct recovery. Piecewise cubic signals + wavelets. Models/algorithms: –compressible (CoSaMP) –tree-compressible (tree-CoSaMP)

45 Performance of Recovery. Using model-based IT, CoSaMP with RIP and RAmP: Model-sparse signals –noise-free measurements: exact recovery –noisy measurements: stable recovery. Model-compressible signals –recovery as good as the K-model-sparse approximation: CS recovery error ≲ signal K-term model approx error + noise [Baraniuk, VC, Duarte, Hegde]

46 Other Useful Models. When the model-based framework makes sense: –model with a fast approximation algorithm –sensing matrix with model-RIP and model-RAmP. Ex: block sparsity / signal ensembles [Tropp, Gilbert, Strauss], [Stojnic, Parvaresh, Hassibi], [Eldar, Mishali], [Baron, Duarte et al], [Baraniuk, VC, Duarte, Hegde] Ex: clustered signals [VC, Duarte, Hegde, Baraniuk], [VC, Indyk, Hegde, Baraniuk] Ex: neuronal spike trains [Hegde, Duarte, VC] – Best paper award

47 Block-Sparse Signal [figure: target; CoSaMP (MSE=0.723); block-sparse model recovery (MSE=0.015)] Blocks are pre-specified.

48 Block-Compressible Signal [figure: target; CoSaMP (MSE=0.711); block-sparse recovery (MSE=0.195); best 5-block approximation (MSE=0.116)]

49 Clustered Sparsity. (K,C) sparse signals (1-D) –K-sparse within at most C clusters. For stable recovery: model-RIP + RAmP. Model approximation using dynamic programming. Includes block sparsity as a special case [VC, Indyk, Hegde, Baraniuk]

50 Clustered Sparsity [figure: target; Ising-model recovery; CoSaMP recovery; LP (FPC) recovery] Model clustering of significant pixels in the space domain using a graphical model (MRF). Ising-model approximation via graph cuts [VC, Duarte, Hegde, Baraniuk]

51 Neuronal Spike Trains. Model the firing process of a single neuron via a 1D Poisson process with spike trains –Exploit the refractory period of neurons. Model approximation problem: –Find a K-sparse signal whose nonzero coefficients are separated by at least a minimum spacing (the refractory period)

52 Neuronal Spike Trains. Model the firing process of a single neuron via a 1D Poisson process with spike trains –Stable recovery. Model approximation solution: –Integer program –Efficient & provable solution due to total unimodularity of the linear constraint (a simplified dynamic-programming sketch follows below) [Hegde, Duarte, VC; Best Student Paper Award at SPARS'09]
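
The slide's solution is an integer program made tractable by total unimodularity; purely as an illustration of the same approximation problem (not the paper's algorithm), here is a dynamic-programming sketch that keeps the K largest-energy samples subject to a minimum spacing delta (names and parameters hypothetical):

```python
import numpy as np

def spike_approx(x, K, delta):
    """Best K-term approximation with nonzeros separated by at least `delta` samples.

    A simple O(N*K) dynamic program over (prefix length, spikes kept). Illustrative
    only -- not the integer-programming formulation referenced on the slide.
    """
    N = len(x)
    w = x ** 2                                       # energy kept if a spike sits at i
    dp = np.full((N + 1, K + 1), -np.inf)
    dp[:, 0] = 0.0
    take = np.zeros((N + 1, K + 1), dtype=bool)
    for i in range(1, N + 1):
        for k in range(1, K + 1):
            skip_val = dp[i - 1, k]                  # no spike at sample i-1
            prev = max(i - delta, 0)                 # latest prefix compatible with a spike at i-1
            take_val = dp[prev, k - 1] + w[i - 1]
            dp[i, k], take[i, k] = max((skip_val, False), (take_val, True))
    x_hat, i, k = np.zeros_like(x), N, K
    while i > 0 and k > 0:                           # backtrack the selected support
        if take[i, k]:
            x_hat[i - 1] = x[i - 1]
            i, k = i - delta, k - 1
        else:
            i -= 1
    return x_hat

x = np.random.default_rng(4).standard_normal(200)
print(np.flatnonzero(spike_approx(x, K=5, delta=20)))   # 5 indices, pairwise >= 20 apart
```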

53

54 Signal recovery is not always required. ELVIS: Enhanced Localization via Incoherence and Sparsity

55 Localization Problem Goal: Localize targets by fusing measurements from a network of sensors [VC, Duarte, Baraniuk; Model and Zibulevsky; VC, Gurbuz, McClellan, Chellappa; Malioutov, Cetin, and Willsky; Chen et al.]

56 Localization Problem Goal: Localize targets by fusing measurements from a network of sensors –collect time signal data –communicate signals across the network –solve an optimization problem

57 Bottlenecks. Goal: Localize targets by fusing measurements from a network of sensors –collect time signal data → requires potentially high-rate (Nyquist) sampling –communicate signals across the network → potentially large communication burden –solve an optimization problem ⇒ Need compression

58 An Important Detail Solve two entangled problems for localization –Estimate source locations –Estimate source signals

59 ELVIS Instead, solve one localization problem –Estimate source locations by exploiting random projections of observed signals –Estimate source signals

60 ELVIS Instead, solve one localization problem –Estimate source locations by exploiting random projections of observed signals –Estimate source signals Bayesian model order selection & MAP estimation results in a decentralized sparse approximation framework that leverages –Source sparsity –Incoherence of sources –Spatial sparsity of sources [VC, Boufounos, Baraniuk, Gilbert, Strauss]

61 ELVIS. Use random projections of observed signals two ways: –Create local sensor dictionaries that sparsify source locations –Create intersensor communication messages [figure: K targets on an N-dim grid; dictionary populated using recovered signals; random iid projections]

62 ELVIS. Use random projections of observed signals two ways: –Create local sensor dictionaries that sparsify source locations → sample at the source sparsity –Create intersensor communication messages → communicate at the spatial sparsity; robust to (i) quantization (ii) packet drops. [callouts: No Signal Reconstruction; ELVIS Dictionary]

63 ELVIS. Use random projections of observed signals two ways: –Create local sensor dictionaries that sparsify source locations → sample at the source sparsity –Create intersensor communication messages → communicate at the spatial sparsity; robust to (i) quantization (ii) packet drops. Provable greedy estimation for ELVIS dictionaries: bearing pursuit. [callout: No Signal Reconstruction]

64 Field Data Results: 5-vehicle convoy, >100× sub-Nyquist

65 Conclusions –Why CS works: stable embedding for signals with concise geometric structure –Sparse signals >> model-sparse signals –Compressible signals >> model-compressible signals –Greedy model-based signal recovery algorithms; upshot: provably fewer measurements, more stable recovery; new concept: RIP >> RAmP –ELVIS: localization via dimensionality reduction

66 Volkan Cevher / volkan@rice.edu

67 Other Interests Computer Vision –Variational audio-video fusion –Audio-video vehicle classification –Camera networks / 3D Recon [VC, Sankaranarayanan, Chellappa, McClellan] [VC, Chellappa, McClellan] [VC, Duarte, Sankaranarayanan, Reddy, Chellappa, Baraniuk]

68 Other Interests. Active Inference –Optimal sensor maneuvers. Sensor Calibration –Simultaneous tracking and calibration. Array Processing –Direction-of-arrival tracking [Alam, VC, McClellan] [VC, McClellan] [VC, Velmurugan, McClellan]

69 Questions? [photos: Richb, JMc, Rama, Piotr, Anna, Martin, VC, Lance]

70 Calculating the Sampling Bound Decompose the signal into approximation bands

71 Calculating the Sampling Bound Decompose the signal into approximation bands Count the subspaces in each approximation band Relax isometry restrictions on measurement matrix

72 Calculating the Sampling Bound Counting residual subspaces: –# of K-dim subspaces

73 Calculating the Sampling Bound Counting residual subspaces: –# of K-dim subspaces
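
As a reference point for the counting step (a standard fact for the unstructured sparse model; the residual-subspace counts on the slide refine this):

\#\{K\text{-dimensional canonical subspaces}\} = \binom{N}{K} \le \left(\frac{eN}{K}\right)^{K}, \qquad \log \binom{N}{K} = O\!\left(K \log \frac{N}{K}\right).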

74 Calculating the Sampling Bound. Maximum eigenvalue of the measurement matrix –Concentration of measure [Ledoux]. RAmP: subGaussian

75 Calculating the Sampling Bound Maximum eigenvalue of the measurement matrix –Union bound over all residual subspaces RAmP: subGaussian

