Constrained adaptive sensing

Mark A. Davenport
Georgia Institute of Technology, School of Electrical and Computer Engineering

Joint work with Andrew Massimino, Deanna Needell, and Tina Woolf

Sensing sparse signals

When (and how well) can we estimate a $k$-sparse vector $x \in \mathbb{R}^n$ from the measurements $y_i = \langle a_i, x \rangle + z_i$, $i = 1, \ldots, m$?

Nonadaptive sensing

Prototypical sensing model: $y = Ax + z$, where $A$ is an $m \times n$ matrix and $z$ is noise with per-entry variance $\sigma^2$.

There exist matrices $A$ and recovery algorithms that produce an estimate $\widehat{x}$ such that for any $x$ with $\|x\|_0 \le k$ we have $\|\widehat{x} - x\|_2^2 \lesssim \sigma^2 \frac{kn}{m} \log n$.

For any matrix $A$ and any recovery algorithm, there exist $x$ with $\|x\|_0 \le k$ such that $\mathbb{E}\|\widehat{x} - x\|_2^2 \gtrsim \sigma^2 \frac{kn}{\|A\|_F^2}$.

Adaptive sensing

Now suppose each measurement vector $a_i$ may be chosen based on the previous observations $y_1, \ldots, y_{i-1}$. Think of sensing as a game of 20 questions.

Simple strategy: use half of our sensing energy to find the support, and the remainder to estimate the values.

Thought experiment

Suppose that after the first stage we have perfectly estimated the support $S$. We can then devote all of our remaining sensing energy to the $k$ nonzero entries, so the estimation error is governed by $k$ rather than by the ambient dimension $n$.

Benefits of adaptivity

Adaptivity offers the potential for tremendous benefits. Suppose we wish to estimate a $1$-sparse vector $x \in \mathbb{R}^n$ whose nonzero has amplitude $\mu$:

- No nonadaptive method can reliably find the nonzero when $\mu \lesssim \sigma \sqrt{\log n}$.
- A simple binary search procedure will succeed in finding the location of the nonzero with high probability at a substantially smaller amplitude (on the order of $\sigma \sqrt{\log \log n}$).
- Not hard to extend to $k$-sparse vectors.

See Arias-Castro, Candès, Davenport; Castro; Malloy, Nowak.

Provided that the SNR is sufficiently large, adaptivity can reduce our error by a factor as large as $n/k$!
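The binary search idea above is easy to sketch in code. The following toy bisection (function name and thresholding rule are illustrative, not the exact procedure analyzed in the cited references) measures each half of the current interval with a unit-norm indicator vector and recurses into the half with the larger response, using about $2\log_2 n$ measurements:

```python
import numpy as np

def locate_nonzero(x, sigma=0.0, rng=None):
    """Toy adaptive bisection for a 1-sparse x: at each step, measure each
    half of the current search interval with a unit-norm indicator vector
    and recurse into the half with the larger |response|."""
    rng = rng or np.random.default_rng(0)
    lo, hi = 0, len(x)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        responses = []
        for (a, b) in [(lo, mid), (mid, hi)]:
            v = np.zeros(len(x))
            v[a:b] = 1.0 / np.sqrt(b - a)      # unit-norm measurement vector
            responses.append(abs(v @ x + sigma * rng.standard_normal()))
        lo, hi = (lo, mid) if responses[0] > responses[1] else (mid, hi)
    return lo

x = np.zeros(1024)
x[337] = 5.0
print(locate_nonzero(x))   # -> 337, using ~20 measurements instead of ~1024
```

With noise ($\sigma > 0$), each comparison succeeds when the signal-to-noise ratio within the current interval is large enough, which is the regime the slide refers to.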

Sensing with constraints

Existing approaches to adaptivity require the ability to acquire arbitrary linear measurements, but in many (most?) real-world systems, our measurements are highly constrained. Suppose we are limited to using measurement vectors $a_i$ chosen from some fixed (finite) ensemble $\mathcal{A} = \{v_1, \ldots, v_N\}$.

- How much room for improvement do we have in this case?
- How should we actually go about adaptively selecting our measurements?

Room for improvement?

It depends! If $x$ is $k$-sparse and the $a_i$ are chosen (potentially adaptively) by selecting up to $m$ rows from the DFT matrix, then any adaptive scheme is subject to essentially the same error lower bound as in the nonadaptive setting: adaptivity buys at most a constant-factor improvement.

On the other hand, if $\mathcal{A}$ contains vectors which are better aligned with our class of signals (or if $x$ is sparse in an alternative basis/dictionary), then dramatic improvements may still be possible.

How to adapt?

Suppose we knew the locations of the nonzeros, i.e., the support $S$. One can show that the error in this case is given by $\mathbb{E}\|\widehat{x} - x\|_2^2 = \sigma^2 \operatorname{tr}\big[(A_S^* A_S)^{-1}\big]$. Ideally, we would like to choose a sequence $a_1, \ldots, a_m$ according to

$$\min_{a_1, \ldots, a_m \in \mathcal{A}} \operatorname{tr}\big[(A_S^* A_S)^{-1}\big],$$

where $A$ here denotes the matrix with rows given by the sequence.
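The oracle error formula is easy to sanity-check numerically: for least-squares estimation restricted to a known support $S$, the Monte Carlo MSE matches $\sigma^2 \operatorname{tr}[(A_S^T A_S)^{-1}]$. A minimal sketch with a generic random $A$ (the sizes and support are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 64, 40, 0.5
S = [3, 17, 42]                                  # known (oracle) support
x = np.zeros(n)
x[S] = [2.0, -1.5, 3.0]

A = rng.standard_normal((m, n)) / np.sqrt(n)     # generic sensing matrix
A_S = A[:, S]
oracle = sigma**2 * np.trace(np.linalg.inv(A_S.T @ A_S))

# Monte Carlo: least squares restricted to the known support
errs = []
for _ in range(2000):
    y = A @ x + sigma * rng.standard_normal(m)
    xhat_S, *_ = np.linalg.lstsq(A_S, y, rcond=None)
    errs.append(np.sum((xhat_S - x[S]) ** 2))

print(oracle, np.mean(errs))   # the two values should agree closely
```

This also makes the design question concrete: different choices of rows change $A_S$, and hence the trace-of-inverse that controls the error.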

Convex relaxation

We would like to solve
$$\min_{a_1, \ldots, a_m \in \mathcal{A}} \operatorname{tr}\big[(A_S^* A_S)^{-1}\big],$$
but this is combinatorial. Instead we consider the relaxation
$$\min_{w \ge 0} \operatorname{tr}\Big[\Big(\textstyle\sum_j w_j\, (v_j)_S (v_j)_S^*\Big)^{-1}\Big] \quad \text{subject to} \quad \textstyle\sum_j w_j \le m.$$
The diagonal entries of $W = \operatorname{diag}(w)$ tell us "how much" of each sensing vector we should use, and the constraint ensures that $\|A\|_F^2 \le m$ (assuming $\mathcal{A}$ has unit-norm vectors). This is equivalent to the notion of the "A-optimality" criterion in optimal experimental design.
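As an illustration of the relaxed design problem (not the solver used in the talk), here is a small multiplicative-weights heuristic that decreases the A-optimality objective over a candidate ensemble; the function name, step size, and iteration count are all ad hoc choices of mine:

```python
import numpy as np

def a_optimal_weights(V, S, m, iters=300, eta=0.5):
    """Heuristic for the relaxed A-optimal design problem
        min_w  tr[(sum_j w_j u_j u_j^T)^{-1}],  u_j = row j of V restricted to S,
        s.t.   w >= 0,  sum_j w_j = m,
    via multiplicative weight updates (illustrative sketch only)."""
    U = V[:, S]
    N = U.shape[0]
    w = np.full(N, m / N)                        # start from the uniform design
    best_w, best_obj = w.copy(), np.inf
    for _ in range(iters):
        M = U.T @ (w[:, None] * U)               # sum_j w_j u_j u_j^T
        obj = np.trace(np.linalg.inv(M))
        if obj < best_obj:
            best_obj, best_w = obj, w.copy()
        Minv2 = np.linalg.inv(M) @ np.linalg.inv(M)
        grad = -np.einsum('ij,jk,ik->i', U, Minv2, U)   # d obj / d w_j
        w = w * np.exp(-eta * grad / np.abs(grad).max())
        w = m * w / w.sum()                      # back onto the scaled simplex
    return best_w, best_obj

rng = np.random.default_rng(1)
V = rng.standard_normal((50, 20))
V /= np.linalg.norm(V, axis=1, keepdims=True)    # unit-norm candidate vectors
w, obj = a_optimal_weights(V, S=[0, 5, 9], m=10)
U = V[:, [0, 5, 9]]
uniform = np.trace(np.linalg.inv(U.T @ ((10 / 50) * U)))
print(obj <= uniform)   # -> True: the optimized design does no worse than uniform
```

Because the objective is convex in $w$, any off-the-shelf convex solver (e.g. one supporting trace-of-inverse objectives) could be used instead of this hand-rolled loop.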

Generating the sensing matrix

In practice, $w$ tends to be somewhat sparse, placing high weight on a small number of measurements and low weights on many others.

Where "sensing energy" is the operative constraint (as opposed to the number of measurements), we can use $w$ directly to sense, weighting each vector $v_j$ by $\sqrt{w_j}$.

If we wish to take exactly $m$ measurements, one option is to draw measurement vectors by sampling with replacement according to the probability mass function $w / \sum_j w_j$.
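The sampling-with-replacement step is a one-liner in practice; a minimal sketch (the function name is mine):

```python
import numpy as np

def sample_measurements(V, w, m, rng=None):
    """Draw m rows of the ensemble V with replacement, with probabilities
    proportional to the design weights w."""
    rng = rng or np.random.default_rng(0)
    p = np.asarray(w, dtype=float)
    p = p / p.sum()                                   # weights -> pmf
    idx = rng.choice(len(p), size=m, replace=True, p=p)
    return V[idx], idx

V = np.eye(4)                        # toy ensemble: 4 candidate vectors
w = np.array([8.0, 1.0, 1.0, 0.0])   # weight concentrated on the first vector
A, idx = sample_measurements(V, w, m=6)
print(A.shape)                       # -> (6, 4); vector 3 (weight 0) is never drawn
```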

Example

DFT measurements of a signal with a sparse Haar wavelet transform (supported on a connected tree). Recovery performed using CoSaMP.

Constrained sensing in practice

The "oracle adaptive" approach can be used as a building block for a practical algorithm. Simple approach:

- Divide the sensing energy / measurements in half.
- Use the first half by randomly selecting measurement vectors and applying a conventional sparse recovery algorithm to estimate the support.
- Use this support estimate to choose the second half of the measurements.
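A compact sketch of this two-stage recipe (OMP stands in for the "conventional sparse recovery algorithm", the second half is chosen greedily against the oracle trace-of-inverse criterion, and measurements are noiseless for simplicity; all names are mine, and this is an illustration rather than the paper's exact method):

```python
import numpy as np

def omp(A, y, k):
    """Basic orthogonal matching pursuit: greedy k-term support estimate."""
    S, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j in S or np.linalg.norm(r) < 1e-12:
            break
        S.append(j)
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ coef
    return sorted(S)

def two_stage_sense(x, V, m, k, rng=None):
    """Two-stage constrained adaptive sensing sketch:
    (1) m//2 randomly selected rows of V + OMP -> support estimate S;
    (2) greedily add rows minimizing tr[(A_S^T A_S)^{-1}] on S;
    then least squares on S using all m measurements."""
    rng = rng or np.random.default_rng(0)
    idx = list(rng.choice(V.shape[0], size=m // 2, replace=False))
    y = V[idx] @ x
    S = omp(V[idx], y, k)
    for _ in range(m - m // 2):                  # greedy A-optimal selection
        G = V[idx][:, S].T @ V[idx][:, S]
        best = min(range(V.shape[0]),
                   key=lambda j: np.trace(np.linalg.inv(
                       G + np.outer(V[j, S], V[j, S]))))
        idx.append(best)
        y = np.append(y, V[best] @ x)
    x_hat = np.zeros(V.shape[1])
    x_hat[S], *_ = np.linalg.lstsq(V[idx][:, S], y, rcond=None)
    return x_hat, S

rng = np.random.default_rng(3)
V = rng.standard_normal((64, 32))
V /= np.linalg.norm(V, axis=1, keepdims=True)
x = np.zeros(32)
x[[4, 20]] = [3.0, -2.0]
x_hat, S = two_stage_sense(x, V, m=20, k=2)
print(S, np.round(np.linalg.norm(x_hat - x), 3))
```

With 10 random first-stage measurements, OMP typically identifies a 2-sparse support in this regime, after which the noiseless least-squares estimate is exact.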

Simulation results

Summary

- Adaptivity (sometimes) allows tremendous improvements.
- It is not always easy to realize these improvements in the constrained setting:
  - existing algorithms are not applicable
  - the room for improvement may not be quite as large
- Simple strategies for adaptively selecting the measurements based on convex optimization can be surprisingly effective.

Thank You! arXiv: