compressive nonsensing
Richard Baraniuk, Rice University

Chapter 1 The Problem

challenge 1 data too expensive

Case in Point: MR Imaging
Measurements are very expensive: $1-3 million per machine, 30 minutes per scan.

Case in Point: IR Imaging

challenge 2 too much data

Case in Point: DARPA ARGUS-IS
1.8 Gpixel image sensor; video-rate output: 444 Gbits/s; comm data rate: 274 Mbits/s, a factor of 1600x, way out of reach of existing compression technology.
Reconnaissance without conscience: too much data to transmit to a ground station, too much data to make effective real-time decisions.

Chapter 2 The Promise

COMPRESSIVE SENSING

innovation 1 sparse signal models

Sparsity
Images: many pixels, but few large wavelet coefficients (blue = 0). Wideband signals: many samples, but few large Gabor (time-frequency) coefficients.

Sparsity
A sparse signal has few nonzero entries; the set of sparse signals forms a nonlinear signal model.

innovation 2 dimensionality reduction for sparse signals

Dimensionality Reduction
When data is sparse/compressible, we can directly acquire a compressed representation with no/little information loss through linear dimensionality reduction: a sparse signal with few nonzero entries maps to a short vector of measurements.
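
As a concrete numerical sketch of this slide (sizes and variable names are illustrative, not from the talk): acquire M random measurements of a K-sparse length-N signal.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 1000, 10, 100          # ambient dimension, sparsity, number of measurements

# K-sparse signal: only K of the N entries are nonzero
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# Linear dimensionality reduction via a random Gaussian measurement matrix
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x                      # compressed representation: 100 numbers, not 1000

# Little information loss: measurement energy tracks signal energy
ratio = np.linalg.norm(y) / np.linalg.norm(x)
```

With M = 100 measurements of an N = 1000 signal, `ratio` concentrates near 1, which is the sense in which the geometry survives the 10x reduction.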

Stable Embedding
An information-preserving projection preserves the geometry of the set of sparse signals (a union of K-dimensional subspaces). The stable embedding (SE) property ensures that distances between sparse signals are approximately preserved.
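
The inequality elided from this slide is presumably the restricted isometry property (RIP): for all K-sparse signals x_1, x_2,

```latex
(1-\delta)\,\|x_1-x_2\|_2^2 \;\le\; \|\Phi x_1 - \Phi x_2\|_2^2 \;\le\; (1+\delta)\,\|x_1-x_2\|_2^2
```

i.e., the measurement matrix acts as a near-isometry on the union of K-dimensional subspaces.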

Random Embedding is Stable
Measurements are random linear combinations of the entries of the signal; there is no information loss for sparse vectors with high probability (whp).

innovation 3 sparsity-based signal recovery

Signal Recovery
Goal: recover the signal from the measurements. Problem: the random projection is not full rank (an ill-posed inverse problem). Solution: exploit the sparse/compressible geometry of the acquired signal. Recovery via a (convex) sparsity penalty or greedy algorithms [Donoho; Candes, Romberg, Tao, 2004].

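
The greedy route mentioned on the slide can be sketched with orthogonal matching pursuit (OMP); this minimal numpy implementation (sizes and variable names are my own, not from the talk) recovers a K-sparse signal exactly with high probability:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 128, 48, 5

# Ground-truth K-sparse signal and its random Gaussian measurements
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Orthogonal matching pursuit: greedily grow the support, then least-squares fit
residual, idx = y.copy(), []
for _ in range(K):
    idx.append(int(np.argmax(np.abs(Phi.T @ residual))))   # most correlated column
    coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
    residual = y - Phi[:, idx] @ coef

x_hat = np.zeros(N)
x_hat[idx] = coef
err = np.linalg.norm(x - x_hat)   # near zero when the support is found
```

A convex alternative is basis pursuit (minimize the L1 norm of x subject to Phi x = y), which needs a linear-programming solver in place of the greedy loop.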

“Single-Pixel” CS Camera (w/ Kevin Kelly)
The scene is focused by a lens onto a DMD (digital micromirror device, the chip used in projectors) displaying a random pattern, and the reflected light is focused onto a single photodiode. Each reading multiplies the value of the random pattern at each mirror with the value of the signal (light intensity) at that pixel and sums the result. Flip the mirror array M times to acquire M measurements, then apply sparsity-based recovery for image reconstruction or processing.
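
A toy simulation of the acquisition step (the scene, sizes, and patterns below are invented for illustration): each measurement is the photodiode's total over the scene as masked by one random mirror pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                  # toy 16x16 scene
scene = np.zeros((n, n))
scene[4:7, 5:9] = 1.0                   # one bright patch: a very sparse image

M = 100
y = np.empty(M)
for m in range(M):
    pattern = rng.integers(0, 2, size=(n, n))   # random 0/1 mirror pattern on the DMD
    y[m] = np.sum(pattern * scene)              # photodiode reads total reflected light
```

Recovery from `y` would then proceed exactly as in the sparse-recovery sketch above, with each flattened pattern forming one row of the measurement matrix.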

Random Demodulator
Problem: in contrast to Moore's Law, ADC performance doubles only every 6-8 years. CS enables sampling near the signal's (low) “information rate” rather than its (high) Nyquist rate: the analog-to-information (A2I) sampling rate scales with the number of tones per window rather than the Nyquist bandwidth.

Example: Frequency Hopper
Sparse in time-frequency; 20x sub-Nyquist sampling. (Figure: spectrogram from Nyquist-rate sampling vs. sparsogram from compressive sampling.)

challenge 1 (data too expensive): fewer expensive measurements needed for the same resolution scan

challenge 2 (too much data): we compress on the fly as we acquire data

EXCITING!!!

2004-2014: 9797 citations, 6640 citations; dsp.rice.edu/cs archive: >1500 papers; nuit-blanche.blogspot.com: >1 posting/sec

Chapter 3 The Hype

CS is Growing Up

Gerhard Richter 4096 Colours

muralsoflajolla.com/roy-mcmakin-mural

“L1 is the new L2” - Stan Osher

Exponential Growth

?

Chapter 4 The Fallout

“L1 is the new L2” - Stan Osher

CS for “Face Recognition”

From: M. V.
Subject: Interesting application for compressed sensing
Date: June 10, 2011 at 11:37:31 PM EDT
To: candes@stanford.edu, jrom@ece.gatech.edu

Drs. Candes and Romberg,

You may have already been approached about this, but I feel I should say something in case you haven't. I'm writing to you because I recently read an article in Wired Magazine about compressed sensing. I'm excited about the applications CS could have in many fields, but today I was reminded of a specific application where CS could conceivably settle an area of dispute between mainstream historians and Roswell UFO theorists. As outlined in the linked video below, Dr. Rudiak has analyzed photos from 1947 in which a General Ramey appears holding a typewritten letter, from which Rudiak believes he has been able to discern a number of words which he believes substantiate the extraterrestrial hypothesis for the Roswell Incident. For your perusal, I've located a "hi-res" copy of the cropped image of the letter in Ramey's hand.

I hope to hear back from you. Is this an application where compressed sensing could be useful? Any chance you would consider trying it?

Thank you for your time,
M. V.

P.S. - Out of personal curiosity, are there currently any commercial entities involved in developing CS-based software for use by the general public?

x

Chapter 5 Back to Reality

Back to Reality
“There's no such thing as a free lunch”: there are no “something for nothing” theorems, and dimensionality reduction is no exception. Result: compressive nonsensing.

Nonsense 1 Robustness

Measurement Noise
Stable recovery with additive measurement noise: the noise is added to the measurements. Stability: the noise is only mildly amplified in the recovered signal.

Signal Noise
Often we seek recovery with additive signal noise: the noise is added to the signal itself. Noise folding: signal noise is amplified in the recovered signal by 3 dB for every doubling of the subsampling factor. The same effect is seen in classical “bandpass subsampling” [Davenport, Laska, Treichler, B 2011].
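
A back-of-the-envelope version of the noise-folding claim: if white signal noise of variance sigma^2 occupies all N coordinates but only M measurements are kept, the per-coefficient noise in the recovered signal grows by roughly N/M, so

```latex
\mathrm{SNR\ loss} \;\approx\; 10\log_{10}\frac{N}{M}\ \text{dB} \;\approx\; 3\ \text{dB per doubling of } N/M .
```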

Noise Folding in CS
(Figure: CS recovered-signal SNR vs. subsampling; slope = -3.)

“Tail Folding”
Compressible (approximately sparse) signals can be modeled, in their sorted-coefficient profile, as “signal” + “tail”; the tail “folds” into the signal as the amount of subsampling increases [Davies, Guo, 2011; Davenport, Laska, Treichler, B 2011].

All Is Not Lost – Dynamic Range
In wideband ADC applications, as the amount of subsampling grows, we can employ an ADC with a lower sampling rate and hence a higher-resolution quantizer.

Dynamic Range
CS can significantly boost the ENOB of an ADC system for sparse signals. (Figure: stated number of bits vs. log sampling frequency, conventional ADC vs. CS ADC with sparsity.)

Dynamic Range
As the amount of subsampling grows, we can employ an ADC with a lower sampling rate and hence a higher-resolution quantizer. Thus the dynamic range of a CS ADC can significantly exceed that of a Nyquist ADC. With current ADC trends, the dynamic range gain is theoretically 7.9 dB for each doubling of the subsampling factor.

Dynamic Range
(Figure: measured dynamic range vs. subsampling; slope = +5 dB per octave, approaching the theoretical +7.9.)

Tradeoff
SNR: 3 dB loss for each doubling of the subsampling factor. Dynamic range: up to 7.9 dB gain for each doubling of the subsampling factor.

Adaptivity
Say we know the locations of the nonzero entries in the signal. Then, by sensing only the corresponding columns, we boost the SNR. This motivates adaptive sensing strategies that bypass the noise-folding tradeoff [Haupt, Castro, Nowak, B 2009; Candes, Davenport 2011].

Nonsense 2 Quantization

CS and Quantization
The vast majority of work in CS assumes the measurements are real-valued. In practice, measurements must be quantized (a nonlinear operation). We should measure CS performance in terms of the number of measurement bits rather than the number of (real-valued) measurements. Limited progress to date, mostly at the extremes: a large number of bits per measurement, or 1 bit per measurement.
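
To make the nonlinearity concrete, here is a minimal uniform (midrise) quantizer applied to stand-in real-valued measurements; the parameters are illustrative, not from the talk.

```python
import numpy as np

def quantize(y, bits, y_range=4.0):
    """Uniform midrise quantizer: snap y to one of 2**bits levels on [-y_range, y_range]."""
    step = 2 * y_range / 2 ** bits
    q = (np.floor(y / step) + 0.5) * step
    return np.clip(q, -y_range + step / 2, y_range - step / 2)

rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)   # stand-in for real-valued CS measurements

# Relative quantization error at different bit depths
errs = {b: np.linalg.norm(y - quantize(y, b)) / np.linalg.norm(y) for b in (12, 6, 1)}
```

At 1 bit per measurement only the sign survives, which is why the 1-bit regime on the slide requires entirely different recovery guarantees.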

CS and Quantization
(Figure: recovery performance for N=2000, K=20 under a fixed bit budget, M = (total bits)/(bits per measurement), at 12, 10, 8, 6, 4, 2, and 1 bits per measurement.)

Nonsense 3 Weak Models

Weak Models
Sparsity models in CS emphasize discrete bases and frames (DFT, wavelets, ...). But in real data-acquisition problems, the world is continuous, not discrete.

The Grid Problem
Consider a “frequency sparse” signal, which suggests the DFT as the sparsity basis. Easy CS problem: K=1 frequency on the DFT grid. Hard CS problem: K=1 frequency off the grid, giving slow coefficient decay due to sinc interpolation of off-grid sinusoids (asymptotically, the signal is not even in L1).
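
The on-grid/off-grid contrast is easy to reproduce: a sinusoid on a DFT bin has one significant coefficient, while a sinusoid half a bin off the grid leaks, via the slowly decaying Dirichlet/sinc kernel, into many.

```python
import numpy as np

N = 256
t = np.arange(N)
counts = {}
for name, f in (("on-grid", 10.0), ("off-grid", 10.5)):
    x = np.cos(2 * np.pi * f * t / N)
    X = np.abs(np.fft.rfft(x))
    # number of DFT coefficients above 1% of the peak
    counts[name] = int(np.sum(X > 0.01 * X.max()))
```

The off-grid signal is not K=1 sparse in the DFT basis at all, which is exactly why the discrete model breaks down here.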

Going Off the Grid
Spectral CS [Duarte, B, 2010]: discrete formulation. CS Off the Grid [Tang, Bhaskar, Shah, Recht, 2012]: continuous formulation. (Figure: best-, average-, and worst-case performance vs. Spectral CS; ~20 dB gap.)

Nonsense 4 Focus on Recovery

Misguided Focus on Recovery
Recall the data-deluge problem in sensing (large-scale imaging, HSI, video, ultrawideband ADC, ...): the ambient dimension N is too large. When N ~ billions, signal recovery becomes problematic, if not impossible. Solution: perform signal exploitation directly on the compressive measurements.

Compressive Signal Processing
Many applications involve signal inference, not reconstruction: detection < classification < estimation < reconstruction in difficulty. Good news: CS supports efficient learning, inference, and processing directly on compressive measurements, because random projections act as sufficient statistics for signals with concise geometrical structure.
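
A sketch of inference without reconstruction (all names and sizes below are mine, not from the talk): detect a known template directly from M << N random measurements by correlating with the projected template, the compressive analogue of a matched filter.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1024, 64

s = rng.standard_normal(N)
s /= np.linalg.norm(s)                          # known unit-norm template
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random measurement matrix
u = Phi @ s
u /= np.linalg.norm(u)                          # template as seen through Phi

def template_present(y, thresh=0.5):
    """Decide from the measurements y alone, never reconstructing the signal."""
    return float(y @ u) > thresh

hit  = template_present(Phi @ (s + 0.02 * rng.standard_normal(N)))  # signal + noise
miss = template_present(Phi @ (0.02 * rng.standard_normal(N)))      # noise only
```

The test statistic lives in M = 64 dimensions instead of N = 1024, which is the whole point of the slide: the decision never touches the ambient space.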

Classification
A simple object classification problem under AWGN: the natural approach is a nearest-neighbor classifier. Common issue: L unknown articulation parameters. Common solution: a matched filter, which finds the nearest neighbor under all articulations.

CS-based Classification
Target images form a low-dimensional manifold as the target articulates, and random projections preserve the information in these manifolds when enough measurements are taken. CS-based classifier: the smashed filter, which finds the nearest neighbor under all articulations under random projection [Davenport, B, et al 2006].

Smashed Filter
Random shift and rotation (an L=3-dimensional manifold); white Gaussian noise added to the measurements. Goals: identify the most likely shift/rotation parameters and the most likely class. (Figures: average shift-estimate error and classification rate (%) vs. number of measurements M, for increasing noise.)

Frequency Tracking
Compressive phase-locked loop (PLL). Key idea: the phase detector in a PLL computes an inner product between the signal and the oscillator output; the RIP ensures we can compute this inner product between the corresponding low-rate CS measurements. (Figure: CS-PLL with 20x undersampling.)

Nonsense 5 Weak Guarantees

Performance Guarantees
CS performance guarantees (RIP, incoherence, phase transitions): to date, rigorous results exist only for random matrices, which are not practically useful and are often pessimistic. We need rigorous guarantees for non-random, structured sampling matrices with fast algorithms, analogous to the progress in coding theory from Shannon's original random codes to modern codes.

Chapter 6 All Is Not Lost !

Sparsity Convex optimization Dimensionality reduction

12-Step Program To End Compressive Nonsensing
Don't give in to the hype surrounding CS
Resist the urge to blindly apply L1 minimization
Face up to robustness issues
Deal with measurement quantization
Develop more realistic signal models
Develop practical sensing matrices beyond random
Develop more efficient recovery algorithms
Develop rigorous performance guarantees for practical CS systems
Exploit signals directly in the compressive domain