1 Applications on Signal Recovering Miguel Argáez Carlos A. Quintero Computational Science Program El Paso, Texas, USA April 16, 2009

2 Abstract Recent theoretical developments have generated a great deal of interest in sparse signal representation. A full-rank matrix generates an underdetermined system of linear equations, and our purpose is to find its sparsest solution, i.e., the one with the fewest nonzero entries. Finding sparse representations therefore ultimately requires solving for the sparsest solution of an underdetermined system of linear equations. Recent work has shown that, under certain conditions, the minimum ℓ1-norm solution of an underdetermined linear system is also its sparsest solution.
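
In symbols, with a full-rank m-by-n matrix A, m < n, and data b (generic notation assumed here, since the slide fixes no symbols), the two problems referred to above are

    \min_x \|x\|_0 \ \ \text{subject to} \ \ Ax = b
    \qquad \text{and its convex relaxation} \qquad
    \min_x \|x\|_1 \ \ \text{subject to} \ \ Ax = b,

and the cited results state that, under suitable conditions on A, the two problems share the same solution.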

3 Objective We develop an algorithm that uses a fixed point method to solve the ℓ1 minimization problem, together with a conjugate gradient method for the linear system associated with the problem. The principal purpose of this work is to show that our algorithm can efficiently recover the reflection coefficients from seismic data (the seismic reflection problem) and separate two speakers in a single-channel recording (the audio separation problem).
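
The slides do not spell out the iteration, so the following is only a minimal sketch of one fixed-point scheme of this general kind: an iteratively reweighted least-squares step whose inner linear system is solved by conjugate gradients. The function name, the parameters lam and eps, and the specific reweighting are illustrative assumptions, not the authors' exact algorithm.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def l1_fixed_point(A, b, lam=1e-2, eps=1e-6, outer_iters=50):
    """Sketch of a fixed-point iteration for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Each outer step freezes the weights w = 1/(|x| + eps) and solves the linear
    system (A^T A + lam*diag(w)) x = A^T b with conjugate gradients (CG)."""
    n = A.shape[1]
    x = np.zeros(n)
    Atb = A.T @ b
    for _ in range(outer_iters):
        w = 1.0 / (np.abs(x) + eps)                      # current reweighting
        H = LinearOperator((n, n),
                           matvec=lambda v, w=w: A.T @ (A @ v) + lam * (w * v))
        x, _ = cg(H, Atb, x0=x, maxiter=200)             # inner CG solve
    return x

For example, x_hat = l1_fixed_point(A, y) returns an approximately sparse solution of y ≈ A x.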

4 Seismic Reflection Seismic reflection is an exploration-geophysics method for estimating the properties of the Earth's subsurface from reflected seismic waves. The method requires a controlled seismic source of energy (such as dynamite or vibrators). Using the time it takes for a reflection to arrive at a receiver, it is possible to estimate the depth of the feature that generated the reflection. The reflected signal is detected at the surface using an array of high-frequency geophones.

5 Seismic Reflection

6 How can we obtain the reflectivity function from the recorded signal? Two difficulties arise: the seismic trace is the result of a convolution of the input pulse with the reflectivity function, and the recorded signal contains noise.

7 Sparse-spike deconvolution We can express the recorded signal as y = w ∗ x + ε (1), where ∗ denotes convolution, x represents the reflectivity function we want to recover, the convolution kernel w is a "wavelet" that depends on the pressure wave sent into the subsurface, and ε is noise that has entered the recorded signal.
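
As a purely synthetic illustration of this forward model (the 30 Hz Ricker wavelet, the 2 ms sampling, the spike positions, and the noise level are all assumptions, not values taken from the slides):

import numpy as np

def ricker(f, dt, length=0.128):
    """Ricker wavelet of peak frequency f (Hz), sampled every dt seconds."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.002                                    # 2 ms sampling interval (assumed)
x = np.zeros(500)                             # sparse reflectivity: a few spikes
x[[60, 140, 300, 420]] = [0.8, -0.5, 0.6, -0.3]
w = ricker(30.0, dt)                          # assumed source wavelet
y = np.convolve(x, w, mode="same") + 0.01 * np.random.randn(x.size)   # y = w * x + noise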

8 Claerbout and Muir [3] proposed in 1973 to use ℓ1 minimization to recover x from the recorded signal y. Santosa and Symes (Rice University) [4] implemented this idea in 1986 with a relaxed ℓ1 minimization. The resulting sparse-spike deconvolution algorithm defines the solution as follows.
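
With λ > 0 a weight that trades data fit against sparsity (notation assumed here), the relaxed problem can be written as

    \hat{x} = \arg\min_x \ \tfrac{1}{2}\,\|y - w * x\|_2^2 + \lambda \|x\|_1 .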

9 Sparse-spike deconvolution To invert the convolution equation (1), we can model the reflectivity x as a sum of Diracs, as in equation (3) below, each Dirac located at some depth z_i. Using (3), we can then express y as a function of the depth z.
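
In standard notation, with amplitudes a_i at depths z_i (symbols assumed here), the representation is

    x(z) = \sum_i a_i\, \delta(z - z_i), \qquad (3)

so that, substituting into (1),

    y(z) = \sum_i a_i\, w(z - z_i) + \varepsilon(z).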

10 Sparse-spike deconvolution We introduce a dictionary constructed by translating the wavelet to all locations, i.e., a matrix whose column vectors are the translated copies of the wavelet, and we solve the recovery problem through an equivalent formulation, given below.
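
Denoting the dictionary by Φ (name assumed here), with columns φ_i(z) = w(z − z_i), a standard equivalent formulation of the recovery problem is

    \min_x \ \|x\|_1 \quad \text{subject to} \quad \|\Phi x - y\|_2 \le \sigma,

where σ bounds the noise level; taking σ = 0 gives the equality-constrained ℓ1 problem of the noiseless case.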

11 Numerical Experiments Seismic reflection. Sparco problem (903): m = n =

12 The Speech Separation Problem Separate a single-channel mixture of speech from known speakers.

13 Non-negative Sparse Coding We assume an additive mixing model, so the signal can be represented as the product of a dictionary A and a code x, where A and x are non-negative and x is sparse. Dictionary A: a source-dependent (overcomplete) basis, learned from data. Sparse code x: the time and amplitude of each dictionary element. Sparseness: only a few dictionary elements are active simultaneously.
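
One common way to write this model and a penalized sparse-coding objective in the spirit of [5] (the symbol y for the mixture signal and λ for the sparseness weight are assumed here) is

    y \approx A x, \qquad \min_{x \ge 0} \ \tfrac{1}{2}\,\|y - A x\|_2^2 + \lambda \sum_i x_i .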

14 Non-negative Sparse Coding 1. Learn a dictionary for each source. 2. Compute the sparse code x of the mixture. 3. Reconstruct each source separately (see the sketch below).
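
A minimal sketch of these three steps at separation time, assuming the per-source dictionaries A1 and A2 have already been learned and that nn_sparse_code is any non-negative sparse solver (both names are placeholders, not the authors' code):

import numpy as np

def separate(y, A1, A2, nn_sparse_code):
    """Single-channel separation sketch: code the mixture against the stacked
    dictionaries, then reconstruct each source from its own dictionary block."""
    A = np.hstack([A1, A2])              # step 1: learned dictionaries, stacked
    x = nn_sparse_code(A, y)             # step 2: non-negative sparse code of the mixture
    n1 = A1.shape[1]
    x1, x2 = x[:n1], x[n1:]
    return A1 @ x1, A2 @ x2              # step 3: per-source reconstructions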

15 Numerical Experiments Speech separation. Sparco problem (401): m = 29166, n =

16 References and Acknowledgements 1) D. R. Velis, Stochastic sparse-spike deconvolution. 2) V. Kumar and F. J. Herrmann (EOS-UBC), Deconvolution with curvelet-domain sparsity. 3) J. F. Claerbout and F. Muir, Robust modeling with erratic data, Geophysics, 38. 4) F. Santosa and W. W. Symes, Linear inversion of band-limited reflection seismograms, SIAM J. Sci. Statist. Comput. 5) J. Eggert and E. Körner, Sparse coding and NMF, Proceedings of Neural Networks. The authors thank the following for financial support: ARL Grant No. W911NF; the Computational Science Program; NSF CyberShARE Grant No. NSF HRD (partial support). We also acknowledge the office space provided by the Department of Mathematical Sciences.