Object Specific Compressed Sensing by Minimizing a Weighted L2-Norm
A. Mahalanobis

Background

- Lockheed Martin has been working on the DARPA ISP program; the team includes Duke, JHU, Yale and NAVAIR.
- An adaptive sensing scheme has been developed that allocates sensor resources (spectral and spatial) based on relevant information content.
- The algorithms are currently running in coded-aperture hyperspectral imager hardware.
- Compressed Sensing is a natural extension of this ISP concept.

Motivation

- Can we create an efficient sensing process in which objects of interest are well resolved but other parts of the scene are heavily compressed? This would economize on the number of measurements required and on the computation needed to reconstruct the image.
- Compressed Sensing research currently focuses on the general reconstruction problem, but we are not interested in perfect reconstruction of the whole scene.
- Our approach embeds pattern-recognition objectives (detection, discrimination) and compression in the sensing process while producing visually meaningful images.

Approach

- It has been shown that, under certain conditions, minimizing the L1 norm yields the optimal solution for perfect reconstruction, but the optimization requires iterative (potentially cumbersome) techniques.
- L2-norm techniques are well known, with analytical closed-form solutions that are easy to implement, making them computationally attractive for the formation of large images. However, the minimum L2-norm solution does not yield good reconstructions.
- Can a weighted L2 norm come "close" to the optimum solution when we are interested in specific objects? How can we incorporate prior knowledge about the objects?

The general solution

- Assume that the image vector y can be represented as a linear combination of basis vectors (the columns of the matrix A), i.e. y = Ah, where h is the coefficient vector we seek to estimate from a small number of measurements and from which we then reconstruct y.
- In compressed sensing we measure a smaller vector u = Wy = WAh, i.e. the projection of the image y through a "random" mask W.
- The most general family of solutions for the estimate of h that satisfies these linear constraints is

      h = (WA)^+ u + [I - (WA)^+ (WA)] v

  where (WA)^+ is the pseudo-inverse and v is a random vector. The first term is the particular solution and the second is the homogeneous solution.
- All solutions (including those which minimize the L0, L1 or L2 norm) belong to this family.
- The particular solution is the "minimum L2 norm" solution; the homogeneous solution can be viewed as a correction to it that yields other solutions with different properties (see the sketch below).
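
To make the solution family concrete, here is a minimal numpy sketch under assumed toy dimensions (N = 64 pixels, K = 16 measurements); the symbols A, W, u, h and v follow the slide, while the sizes and the random data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 16                                       # assumed toy sizes
A = np.linalg.qr(rng.standard_normal((N, N)))[0]    # orthonormal basis (stand-in)
W = rng.standard_normal((K, N))                     # random projection mask
u = W @ rng.standard_normal(N)                      # compressed measurements of some image

M = W @ A                                           # combined measurement operator
M_pinv = np.linalg.pinv(M)                          # Moore-Penrose pseudo-inverse

h_particular = M_pinv @ u                           # particular (minimum L2-norm) solution
P_null = np.eye(N) - M_pinv @ M                     # projector onto the null space of WA
v = rng.standard_normal(N)                          # arbitrary "random vector"
h_general = h_particular + P_null @ v               # homogeneous correction added

# Every member of the family reproduces the measurements exactly:
assert np.allclose(M @ h_particular, u)
assert np.allclose(M @ h_general, u)
```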

Weighted L-2 norm

- Minimizing the plain L2 norm does not relate to a well-defined "information" metric for reconstruction; it merely minimizes the variation in the estimate when white noise is present in the measurement.
- Rather, we seek a weighting that minimizes the L2 norm of the coefficient vector while maximizing information about the objects of interest. This attenuates those coefficients which do not bear useful information for reconstruction.
- This implies that the best choice is to weight each coefficient by the inverse of its expected energy, i.e. to minimize h^T D^{-1} h, where the diagonal matrix D holds the expected coefficient energies.
- We envision that D can be calculated "a priori" from a set of representative images of the class of objects of interest, or that a suitable statistical model may be used.

Solution using the method of Lagrange multipliers

- The problem is stated as: minimize the quadratic h^T D^{-1} h subject to the linear constraints u = WAh, where D is a diagonal matrix whose diagonal elements are calculated a priori from a set of representative images or a statistical model.
- The well-known solution for the estimate of the coefficient vector is now h_hat = D(WA)^T [WA D (WA)^T]^{-1} u (derived below), where:
  - h_hat is the estimate of the coefficients based on the measurements u,
  - A is a matrix that can be used as a basis to represent the image,
  - W is a random matrix onto which the image is projected to obtain u,
  - D is a weight matrix that maximizes information about the objects of interest.
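
Written out, the Lagrange-multiplier derivation behind this closed form looks as follows; this is a sketch in the notation above, with the quadratic taken as h^T D^{-1} h so that the weight on each coefficient is the inverse of its a priori energy (consistent with the KL weights used later).

```latex
\begin{aligned}
&\text{minimize } \mathbf{h}^{T} D^{-1} \mathbf{h}
  \quad \text{subject to} \quad \mathbf{u} = WA\,\mathbf{h},\\
&\mathcal{L}(\mathbf{h},\boldsymbol{\lambda})
  = \mathbf{h}^{T} D^{-1} \mathbf{h}
  + \boldsymbol{\lambda}^{T}\left(\mathbf{u} - WA\,\mathbf{h}\right),\\
&\frac{\partial \mathcal{L}}{\partial \mathbf{h}} = 0
  \;\Rightarrow\; \mathbf{h} = \tfrac{1}{2}\, D\,(WA)^{T} \boldsymbol{\lambda}
  \;\Rightarrow\; \hat{\mathbf{h}} = D\,(WA)^{T}\!\left[ WA\, D\,(WA)^{T} \right]^{-1} \mathbf{u}.
\end{aligned}
```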

Reconstruction Equation

- The reconstructed image is given by y_hat = A h_hat = R W^T (W R W^T)^{-1} u, where R = A D A^T depends on the basis functions and the weights.
- Without weights (D = I, with A orthonormal), R = I and the solution does not depend on the underlying basis set; the minimum L2-norm solution is then simply y_hat = W^T (W W^T)^{-1} u (see the sketch below).
- We will use (i) DCT and (ii) KL basis sets to demonstrate performance. For the KL basis set, the diagonal of D is given by the eigenvalues.
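
As a sketch, both reconstruction equations fit in a few lines of numpy; the function name and toy dimensions are mine, and the final check verifies the stated reduction to the conventional solution when D = I and A is orthonormal.

```python
import numpy as np

def weighted_l2_reconstruct(u, W, A, d):
    """Weighted minimum L2-norm reconstruction: y_hat = A h_hat with
    h_hat = D (WA)^T [WA D (WA)^T]^{-1} u; d holds the diagonal of D."""
    M = W @ A                                        # combined operator WA
    MD = M * d                                       # = M @ diag(d), computed cheaply
    h_hat = MD.T @ np.linalg.solve(MD @ M.T, u)      # D(WA)^T [WA D (WA)^T]^{-1} u
    return A @ h_hat                                 # reconstructed image

rng = np.random.default_rng(1)
N, K = 64, 32
A = np.linalg.qr(rng.standard_normal((N, N)))[0]     # orthonormal basis (stand-in)
W = rng.standard_normal((K, N))
u = W @ rng.standard_normal(N)

# With d = 1 and orthonormal A, R = A A^T = I, so the weighted solution
# reduces to the conventional one, y_hat = W^T (W W^T)^{-1} u:
y_plain = W.T @ np.linalg.solve(W @ W.T, u)
assert np.allclose(weighted_l2_reconstruct(u, W, A, np.ones(N)), y_plain)
```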

Example using ideal weights

- The original image is 32 x 32 (1024 elements), and the DCT is used as the basis set; any other basis set that allows a compact representation could be used.
- The ideal coefficients are used as a "place-holder" for the weights. In practice, these would be estimated from representative images of the class of objects of interest, or statistically modeled.
- The weighted L2 norm produces recognizable results using one quarter of the data (256 measurements); the conventional L2 norm does not perform well.

[Figure: original 32 x 32 image; weighted L2-norm reconstructions at K = 256 (mse = 0.19), K = 192 (mse = 0.25) and K = 64 (mse = 0.5); conventional L2-norm reconstruction at K = 256 (mse = 0.86).]
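
For illustration, the experiment can be re-created along these lines, with scipy's orthonormal DCT supplying the basis, the image's own squared coefficients serving as the "ideal" place-holder weights, and a random vector standing in for the 32 x 32 image (so the mse values will not match the slide's).

```python
import numpy as np
from scipy.fft import dct

n = 32
T = dct(np.eye(n), axis=0, norm="ortho")   # orthonormal 1-D DCT-II matrix
A = np.kron(T.T, T.T)                      # 2-D DCT synthesis basis for row-major vectorized images

rng = np.random.default_rng(2)
x = rng.standard_normal(n * n)             # stand-in for the vectorized 32 x 32 image
d = (A.T @ x) ** 2 + 1e-9                  # "ideal" weights: the image's own coefficient energies

for K in (256, 192, 64):                   # measurement counts from the slide
    W = rng.standard_normal((K, n * n))    # random projection mask
    u = W @ x                              # K compressed measurements
    M = W @ A
    h_hat = (M * d).T @ np.linalg.solve((M * d) @ M.T, u)   # weighted min-L2 estimate
    x_hat = A @ h_hat
    print(K, np.mean((x - x_hat) ** 2) / np.mean(x ** 2))   # normalized mse
```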

DCT Basis Set and Weights

- The DCT of the image shows good compaction properties, indicating that it should be possible to achieve nearly zero mse with only 50% of the coefficients.
- Other basis sets should yield much greater compactness.

[Figure: the basis matrix A displayed as a 2D image.]

Example 2: weights estimated for a "class"

- The goal is to sense all objects that belong to a "class". The exact weights for any one image are not known, so an average estimate for the class is used.
- The average DCT is estimated using 1600 representative views, and the inverse of the DCT coefficients is used as the weights in the reconstruction process (see the sketch below).

[Figure: an object, the DCT of the object, and the average DCT of the class.]
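
A hedged sketch of the weight estimation, assuming `views` is a stack of vectorized training images (e.g. the 1600 representative views) and A is an orthonormal synthesis basis such as the 2-D DCT from the earlier sketch:

```python
import numpy as np

def estimate_class_weights(views, A, eps=1e-9):
    """Average per-coefficient energy over the training views; with the
    h^T D^{-1} h objective, using these energies as the diagonal of D applies
    the slide's inverse-coefficient weighting (up to how the average is taken)."""
    H = views @ A                  # row i: coefficients of training view i (h^T = x^T A)
    d = np.mean(H ** 2, axis=0)    # average energy of each coefficient over the class
    return d + eps                 # regularize so no weight is exactly zero
```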

Weighted vs. Conventional approach using the DCT basis

[Plot: reconstruction error of the conventional vs. the weighted minimum L2-norm reconstruction using the DCT basis functions.]

Weighting the reconstruction process makes a significant difference in the reconstruction error.

Reconstruction based on DCT with and without weights

Reconstructions using 512 projections and the DCT basis set: with weights estimated over the class, performance is better than without weighting, i.e. than the conventional minimum L2-norm solution.

[Figure: (a) weighted and (b) unweighted reconstructions.]

Using the K-L Basis set

- The weights are the reciprocal of the square root of the eigenvalues of the autocorrelation matrix, estimated using representative images of the class of vehicles of interest (see the sketch below).
- Only M = 450 basis functions are necessary to represent the images accurately, which reduces the size of the matrix R and hence the overall computation.
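
A minimal sketch of constructing the truncated KL basis and its weights from training data, under the same conventions as the earlier sketches (the `views` stack is an assumption):

```python
import numpy as np

def kl_basis_and_weights(views, m=450):
    """Eigendecompose the autocorrelation matrix of the training views; the
    eigenvectors form the KL basis A and the eigenvalues give the diagonal of D
    (so the per-coefficient weights are 1/sqrt(eigenvalue), as on the slide)."""
    R_xx = views.T @ views / views.shape[0]     # autocorrelation matrix estimate
    eigvals, eigvecs = np.linalg.eigh(R_xx)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:m]       # keep the m largest components
    A = eigvecs[:, order]                       # N x m truncated KL basis
    d = np.maximum(eigvals[order], 1e-12)       # diagonal of D (coefficient energies)
    return A, d
```

With the truncated N x m basis, the reconstruction formula from before applies unchanged, just with smaller matrices.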

Weighted vs. Conventional approach using the KL basis

- Reconstruction using the KL basis far outperforms the DCT when weights are used.
- The performance of the unweighted scheme is comparable to that of the unweighted DCT, which is not surprising, since the unweighted solution does not depend on the basis set.

Other Computational advantages of the KL set

- The KL transform offers computational advantages in two ways: fewer measurements are necessary (reducing the number of rows of R), and fewer basis functions are required to represent the image (reducing the number of columns of R).
- The image in (a) below was reconstructed using the first 450 eigenvectors of the KL decomposition, whereas all 1024 were used in (b). The two images are almost identical, although the image in (a) requires considerably less computation.

[Figure: (a) estimated using 256 measurements and 450 eigenvectors; (b) estimated using 256 measurements and all 1024 eigenvectors.]

Example of full scene reconstruction (back to DCT)

- The L2-norm approach easily reconstructs a large scene and is computationally straightforward.
- The weighted optimization clearly demonstrates the ability to heavily compress uninteresting regions of the scene while achieving reasonable reconstruction where true objects are present.

[Figure: original image and full-scene reconstruction.]

Summary

- Minimizing a weighted L2 norm is a viable way of reconstructing objects of interest in a compressed sensing scheme; it requires prior knowledge of weights that are representative of the class of objects.
- The approach embeds attributes of pattern recognition in the sensing process to preserve visual detail for the human user, while effectively achieving detection, discrimination and compression.
- The selection of the basis set is important: good basis sets require fewer measurements and fewer terms in the representation, which speeds up the computation.
- The selection of basis sets and the criterion for choosing weights both require further research.