Sudocodes: Fast measurement and reconstruction of sparse signals

Presentation transcript:

Sudocodes: Fast measurement and reconstruction of sparse signals. Shriram Sarvotham, Dror Baron, Richard Baraniuk. ECE Department, Rice University. dsp.rice.edu/cs. [Speaker note: came out of my personal experience with 301 – Fourier analysis and linear systems.]

Sparse signal acquisition. Consider a signal x of length N that contains only K non-zero coefficients. Are there efficient ways to measure and recover x?
- Traditional DSP approach: acquisition obtains N measurements; sparsity is exploited only in the processing stage.
- New Compressed Sensing (CS) approach: acquisition obtains just M << N measurements; sparsity is exploited during signal acquisition [Candès et al.; Donoho].

CS revelation. Measure the signal with a few random linear projections (inner products): y = Φx, where y holds the M measurements, x is the sparse signal, and K is the information rate. Revelation: a small number of measurements M << N is sufficient to encode x.
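As a rough illustration (not from the talk), here is a minimal numpy sketch of the acquisition step. A Gaussian Φ is assumed; the slide only requires some random-projection ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 1000, 10, 120            # signal length, sparsity, measurements

x = np.zeros(N)                    # build a K-sparse signal
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random linear projections
y = Phi @ x                        # M << N measurements encode x
```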

CS reconstruction. Reconstruct x given y and Φ. Φ has fewer rows than columns (M < N): an ill-posed inverse problem. Reconstruction approach: search over the subspace of explanations consistent with the measurements and find the most likely one. Sparsity serves as a strong prior.

CS performance metrics.
- Efficiency in encoding: how small can we push M?
- Reconstruction complexity: critical for a practical decoder.

Reconstruction: traditional L2 approach. Goal: given measurements y = Φx, find x. Fewer rows than columns in the measurement matrix, so the problem is ill-posed: infinitely many solutions. Classical solution: least squares, x_hat = argmin ||x||_2 s.t. y = Φx, i.e. x_hat = Φ'(ΦΦ')⁻¹y. Problem: a small L2 norm doesn't imply sparsity.
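A small sketch of the point being made, assuming a Gaussian Φ: the minimum-L2 solution via the pseudoinverse explains y exactly but is dense, not K-sparse.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, M = 200, 5, 40
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = 1.0
Phi = rng.standard_normal((M, N))
y = Phi @ x

x_l2 = np.linalg.pinv(Phi) @ y     # minimum-energy (least-squares) solution
print(np.allclose(Phi @ x_l2, y))  # True: consistent with the measurements
print(np.sum(np.abs(x_l2) > 1e-6)) # ~N nonzero entries, far more than K
```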

Reconstruction: L0 approach. Modern solution: exploit the sparsity of x. Of the infinitely many solutions, seek the sparsest one: x_hat = argmin ||x||_0 s.t. y = Φx, where ||x||_0 counts the number of nonzero entries. If M >= K+1, perfect reconstruction w/ high probability [Bresler et al.; Wakin et al.]. Performance: the most efficient encoding, but combinatorial computational complexity.
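For concreteness, an exhaustive-search sketch of L0 reconstruction (illustrative only; the cost is combinatorial in N and k, exactly as the slide warns):

```python
import numpy as np
from itertools import combinations

def l0_reconstruct(Phi, y, tol=1e-9):
    """Return the sparsest x with Phi @ x = y, by brute-force search."""
    M, N = Phi.shape
    for k in range(1, M + 1):            # try supports of growing size
        for S in combinations(range(N), k):   # C(N, k) candidates: combinatorial
            cols = list(S)
            xs, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
            if np.linalg.norm(Phi[:, cols] @ xs - y) < tol:
                x = np.zeros(N)
                x[cols] = xs
                return x
    return None                          # no exact sparse explanation found
```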

The CS miracle – L1. Modern solution: exploit the sparsity of x. Of the infinitely many solutions, seek the one with smallest L1 norm: x_hat = argmin ||x||_1 s.t. y = Φx. If M = O(K log(N/K)), perfect reconstruction w/ high probability [Candès et al.; Donoho]. Performance: efficient encoding, and polynomial O(N³) computational complexity with linear programming.
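The L1 problem becomes a linear program via the standard split x = u - v with u, v >= 0; here is a minimal SciPy sketch (a textbook reformulation, not code from the talk):

```python
import numpy as np
from scipy.optimize import linprog

def l1_reconstruct(Phi, y):
    """min ||x||_1 s.t. Phi @ x = y, solved as an LP over (u, v) >= 0."""
    M, N = Phi.shape
    c = np.ones(2 * N)                   # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])        # equality constraint: Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    if not res.success:
        raise RuntimeError(res.message)
    u, v = res.x[:N], res.x[N:]
    return u - v
```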

But… L1 is still inadequate! L1 minimization remains impractical for many applications. Reconstruction times:
- N = 1,000: t = 10 seconds
- N = 10,000: t = 3 hours
- N = 100,000: t = 140 days
Examples where N is this large are not uncommon, so L1 is impractical. We need new measurement and reconstruction strategies. This is where Sudocodes come in!

Sudocodes: overview. Efficiency: M = O(K log N) measurements. Reconstruction complexity: O(K log K · log N). The numerical results are phenomenal; example: N = 100,000, K = 1,000 gives t = 5.47 seconds with M = 5,132. Drawback: works for a specific signal class.

Signal model. The signal x contains exactly K non-zero coefficients. Condition on the non-zero coefficients of x: let S = the set of non-zero coefficients of x; the sum of any subset of S is unique up to the working precision. This holds with high probability when the non-zero coefficients are drawn from a continuous distribution; otherwise, pre-process the signal by dithering.
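A quick numerical sanity check of this condition (illustrative only, with K small enough that all 2^K subsets can be enumerated):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
vals = rng.standard_normal(8)        # 8 "non-zero coefficients" from N(0, 1)
# enumerate every subset sum, rounded to a fixed working precision
sums = [round(sum(c), 12) for k in range(len(vals) + 1)
        for c in combinations(vals, k)]
print(len(sums) == len(set(sums)))   # True: all 2^K subset sums are distinct
```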

Sudocode strategy. The measurement matrix Φ is sparse 0/1: each row of Φ contains L randomly placed 1's, with the value of L chosen based on N and K. The special structure of Φ enables fast measurement and reconstruction, as sketched below. [Figure: y = Φx, with measurements y, sparse 0/1 matrix Φ, and sparse signal x with K nonzero entries.]
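A minimal sketch of the measurement step under these assumptions; `sudocode_matrix` is a hypothetical helper name, and the particular M and L values are illustrative only.

```python
import numpy as np

def sudocode_matrix(M, N, L, rng):
    """Sparse 0/1 matrix: each row has exactly L randomly placed 1's."""
    Phi = np.zeros((M, N), dtype=np.uint8)
    for i in range(M):
        Phi[i, rng.choice(N, L, replace=False)] = 1
    return Phi

rng = np.random.default_rng(2)
N, K = 10_000, 100
L = N // K                     # L on the order of N/K (see the later slides)
M = 800                        # illustrative; the talk reports M = O(K log N)
Phi = sudocode_matrix(M, N, L, rng)

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x                    # each y(i) sums L randomly chosen entries of x
```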

Sudocode reconstruction. Process each measurement y(i) in succession: can the value of y(i) resolve any coefficient(s) of x?

Case 1: zero measurement. Inference: all coefficients involved in the measurement are zero, so up to L coefficients are resolved with 1 measurement. Recovered coefficients and the corresponding columns of Φ can be ignored in the remaining processing.

Case 2: #(support set) = 1. A row of Φ (row 2 in the slide's example) contains only one non-zero entry among the unresolved coefficients; the measurement trivially gives the value of the corresponding coefficient, which is thereby resolved.

Case 3: matching measurements. Inference: matching measurements come from summing the same set of non-zero coefficients. Identify the disjoint support and the common support: coefficients in the disjoint support must be zero, which resolves them. (An illustrative decoding sketch covering all three cases follows.)
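The three cases suggest an iterative "peeling" decoder. Below is an illustrative Python sketch (my reconstruction, not the authors' code) that applies all three rules until no measurement yields new information. It relies on the signal model above (distinct subset sums), so equal nonzero residuals really do imply identical nonzero sets; entries still NaN at the end are left for Phase 2, described next.

```python
import numpy as np

def sudocode_decode(Phi, y, tol=1e-9):
    """Peeling decoder sketch for the three cases on the slides."""
    M, N = Phi.shape
    x_hat = np.full(N, np.nan)                      # NaN = unresolved
    supports = [set(np.flatnonzero(Phi[i])) for i in range(M)]
    resid = y.astype(float).copy()                  # measurements minus resolved part

    def resolve(n, v):                              # fix x[n] = v, update all rows
        x_hat[n] = v
        for i in range(M):
            if n in supports[i]:
                resid[i] -= v
                supports[i].discard(n)

    changed = True
    while changed:                                  # "avalanche": loop until stable
        changed = False
        for i in range(M):
            if not supports[i]:
                continue
            if abs(resid[i]) < tol:                 # Case 1: zero measurement
                for n in list(supports[i]):
                    resolve(n, 0.0)
                changed = True
            elif len(supports[i]) == 1:             # Case 2: single unresolved entry
                resolve(next(iter(supports[i])), resid[i])
                changed = True
        # Case 3: matching residuals sum the same nonzero set, so every
        # coefficient outside the common support must be zero.
        for i in range(M):
            for j in range(i + 1, M):
                if supports[i] and supports[j] and abs(resid[i]) > tol \
                        and abs(resid[i] - resid[j]) < tol \
                        and supports[i] != supports[j]:
                    for n in list(supports[i] ^ supports[j]):
                        resolve(n, 0.0)
                    changed = True
    return x_hat                                    # NaN entries go to Phase 2
```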

Sudoku puzzles. The name "Sudocodes" is inspired by sudoku puzzles; thanks to Ingrid Daubechies for pointing out the connection.

Two-phase decoding. Phase 1: decode most coefficients with the sparse sudocode measurements. Phase 2: decode the remaining coefficients with a second set of measurements. Why? Once most coefficients are decoded, Phase 2 saves a multiplicative factor in the number of measurements.

Phase 2 measurements and decoding. The Phase 2 matrix Φ2 is non-sparse, with far fewer rows than Φ. Resolve the remaining coefficients by inverting the sub-matrix of Φ2 corresponding to the unresolved positions. Phase 2 complexity is that of the matrix inversion, cubic in the number of remaining coefficients; the key is to choose the Phase 1 / Phase 2 split so that few coefficients remain and the Phase 2 complexity stays negligible, as in the sketch below.
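A minimal sketch of the Phase 2 solve, assuming a dense Φ2 and that Phase 1 marks unresolved entries as NaN (as in the decoder sketch above); `phase2_decode` is a hypothetical helper name.

```python
import numpy as np

def phase2_decode(Phi2, y2, x_hat):
    """x_hat: Phase-1 output with np.nan at unresolved positions."""
    unresolved = np.flatnonzero(np.isnan(x_hat))
    resolved = np.nan_to_num(x_hat)        # NaN -> 0 for the known part
    b = y2 - Phi2 @ resolved               # strip the already-resolved contribution
    sub = Phi2[:, unresolved]              # sub-matrix on unresolved columns
    vals, *_ = np.linalg.lstsq(sub, b, rcond=None)  # invert (solve) it
    x = resolved.copy()
    x[unresolved] = vals
    return x
```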

Accelerated decoding I: fast matching. Use a binary search tree to store the measurements, so that searching for matching measurements takes O(log M) per lookup, as in the sketch below.
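A sketch of the matching structure; Python's `bisect` over a sorted list stands in for the slide's binary search tree, and `MatchIndex` is a hypothetical name. Values are rounded to a fixed precision so floating-point measurements can be compared exactly.

```python
import bisect

class MatchIndex:
    """Sorted index of measurement values for O(log M) match lookups."""

    def __init__(self, precision=12):
        self.vals, self.ids, self.p = [], [], precision

    def insert(self, value, meas_id):
        v = round(value, self.p)
        k = bisect.bisect_left(self.vals, v)
        # list.insert is O(M); a real BST/balanced tree makes this O(log M)
        self.vals.insert(k, v)
        self.ids.insert(k, meas_id)

    def find_match(self, value):           # O(log M) binary search
        v = round(value, self.p)
        k = bisect.bisect_left(self.vals, v)
        if k < len(self.vals) and self.vals[k] == v:
            return self.ids[k]
        return None
```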

Accelerated decoding II: avalanche. If a coefficient is resolved, search past measurements for potential coefficient revelations; one resolution can trigger a cascade of further resolutions.

Design of the sudo measurement matrix: choice of L. Set L based on N and K; for large N, L scales on the order of N/K.

Number of measurements. Theorem: with L = O(N/K), Phase 1 requires M = O(K log N) measurements to exactly reconstruct the K non-zero coefficients with high probability. Proof sketch: [omitted in transcript].

Choice of L. [Plot: performance as a function of L, shown for K = 0.02N, for a given choice of N and K.] Numerical evidence also suggests L = O(N/K).

Related work.
- [Cormode, Muthukrishnan]: CS scheme based on group testing; M = O(K log² N), complexity O(K log² N).
- [Gilbert et al.] Chaining Pursuit: CS scheme based on group testing and iterating the solution; complexity O(K log² N · log² K). Works best for super-sparse signals.

Performance comparison (M = number of measurements, t = reconstruction time):

N        K      Chaining Pursuit         Sudocodes
10,000   10     M=5,915    t=0.16 sec    M=461    t=0.14 sec
10,000   100    M=90,013   t=2.43 sec    M=803    t=0.37 sec
100,000  100    M=17,398   t=1.13 sec    M=931    t=1.09 sec
100,000  1000   M>10^6     t>30 sec      M=5,132  t=5.47 sec

Chaining Pursuit works admirably for small K, but its oversampling factor is huge, so its efficiency is low; it works for compressible signals as well. Sudocodes: very efficient yet fast reconstruction, but works only on a restricted class of signals.

Sudocode applications.
- Erasure codes in P2P and distributed file storage.
- Streaming compressed digital content.
- Thresholded DCT/wavelet coefficients for sudocoding.
- Partial reconstruction of signals (e.g., detection).

Ongoing work.
- Exploit statistical dependencies between non-zero coefficients.
- Adaptive linear projections.
- Algorithms to handle noisy measurements.

Conclusions. Sudocodes are a highly efficient CS technique with low complexity; the key idea is to use a sparse Φ. Numerical results are very encouraging, with applications to erasure codes and P2P networks. However, they work only for a very specific sparse signal class.