Learning With Dynamic Group Sparsity

Presentation transcript:

Learning With Dynamic Group Sparsity
Junzhou Huang (Rutgers University), Xiaolei Huang (Lehigh University), Dimitris Metaxas (Rutgers University)

Outline
- Problem: applications where the useful information is very small compared with the given data (sparse recovery). How do we recover the sparse data from its linear projections using as little information as possible? The nonzero coefficients are often not random but tend to be clustered.
- Previous work and related issues
- Proposed method: Dynamic Group Sparsity (DGS)
  - DGS definition and one theoretical result
  - One greedy algorithm for DGS
  - Extension to Adaptive DGS (AdaDGS)
- Applications: compressive sensing, video background subtraction

Previous Work: Standard Sparsity
Problem: given the linear measurement y = Φx of sparse data x ∈ R^n, where Φ ∈ R^{m×n} and m << n, how do we recover the sparse data x from its measurement y?
- No priors on the nonzero entries
- Complexity O(k log(n/k)), too high for large n
Existing work:
- L1-norm minimization (Lasso, GPSR, SPGL1, et al.)
- Greedy algorithms (OMP, ROMP, SP, CoSaMP, et al.)
The standard sparsity problem has been widely studied in the past years; its formulation is as above. Supp(x): the support set of sparse data x, defined as the set of indices corresponding to the nonzero entries of x.
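To make the setting concrete, here is a minimal NumPy sketch (illustrative only, not the authors' code) of the measurement model y = Φx and a basic OMP-style greedy recovery; the sizes follow the 1D experiment shown later in the talk, and all function and variable names are our own.

```python
import numpy as np

# Standard sparsity setting: y = Phi @ x with m << n.
rng = np.random.default_rng(0)
n, m, k = 512, 192, 64

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
y = Phi @ x

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily grow the support, then least squares."""
    residual, S = y.copy(), []
    for _ in range(k):
        corr = Phi.T @ residual                   # correlate residual with columns
        S.append(int(np.argmax(np.abs(corr))))
        x_S, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        residual = y - Phi[:, S] @ x_S
    x_hat = np.zeros(Phi.shape[1])
    x_hat[S] = x_S
    return x_hat

x_hat = omp(Phi, y, k)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```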

Previous Work: Group Sparsity
The indices {1, ..., n} are divided into m disjoint groups G1, G2, ..., Gm. Suppose only g groups cover the k nonzero entries.
- Prior on the nonzero entries: group clustering
- Group complexity: O(k + g log(m)), from choosing g out of m groups (g log(m))
- Too restrictive for practical applications: the group setting must be known in advance, so it cannot handle dynamic groups
Existing work: Yuan & Lin '06, Wipf & Rao '07, Bach '08, Ji et al. '08

Proposed Work: Motivation
More knowledge about the nonzero entries leads to lower complexity:
- No information about the nonzero positions: O(k log(n/k))
- Group priors on the nonzero positions: O(g log(m))
- Knowing the nonzero positions exactly: O(k)
Advantages:
- Reduced complexity, as in group sparsity
- Flexible enough, as in standard sparsity
DGS requires only that the nonzero coefficients in the sparse data have a group clustering trend; it does not require any information about group sizes or locations.

Dynamic Group Sparse Data
Nonzero entries tend to be clustered in groups, but we do not know the group sizes or locations:
- Group sparsity: cannot be used directly
- Standard sparsity: complexity is too high
A nonzero pixel implies that adjacent pixels are more likely to be nonzero. Data x is defined as DGS data if it can be well approximated using k nonzero coefficients under some linear transform, and these k nonzero coefficients are clustered into q groups.
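To make the definition concrete, a small illustrative sketch (not from the paper; the helper name is our own) that generates a 1D dynamic group sparse signal with k nonzeros clustered into q groups at random locations:

```python
import numpy as np

def make_dgs_signal(n=512, k=64, q=4, seed=0):
    """Generate a 1D signal with k nonzero entries clustered into q groups
    placed at random, non-overlapping locations (one group per block)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    block = n // q
    for g in range(q):
        size = k // q + (1 if g < k % q else 0)        # group sizes sum to k
        start = g * block + rng.integers(0, block - size + 1)
        x[start:start + size] = rng.standard_normal(size)
    return x

x = make_dgs_signal()
print("nonzeros:", np.count_nonzero(x))
```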

Example of DGS data

Theoretical Result for DGS
Lemma: Suppose we have dynamic group sparse data x ∈ R^n whose number of nonzero entries is k, and the nonzero entries are clustered into q disjoint groups, where q << k. Then the DGS complexity is O(k + q log(n/q)).
- Better than the standard sparsity complexity O(k + k log(n/k))
- More useful than group sparsity in practice

DGS Recovery
Five main steps (see the sketch after this list):
1. Prune the residue estimation using DGS approximation
2. Merge the support sets
3. Estimate the signal using least squares
4. Prune the signal estimation using DGS approximation
5. Update the signal/residue estimation and the support set
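A schematic of the five steps, in the spirit of CoSaMP/Subspace Pursuit with the usual top-k pruning replaced by DGS approximation pruning in steps 1 and 4. This is a sketch under our own reading, assuming a generic `dgs_prune(v, k)` routine (sketched on the next slide) that returns a support set; it is not the authors' implementation.

```python
import numpy as np

def dgs_recover(Phi, y, k, dgs_prune, n_iter=30, tol=1e-6):
    """Greedy DGS recovery, sketched after the five steps on the slide."""
    m, n = Phi.shape
    x_hat = np.zeros(n)
    residual = y.copy()
    for _ in range(n_iter):
        # 1. Prune the residue estimation using DGS approximation.
        proxy = Phi.T @ residual
        omega = dgs_prune(proxy, 2 * k)
        # 2. Merge the support sets.
        support = np.union1d(omega, np.flatnonzero(x_hat))
        # 3. Estimate the signal on the merged support by least squares.
        b = np.zeros(n)
        b[support], *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        # 4. Prune the signal estimation using DGS approximation.
        keep = dgs_prune(b, k)
        x_new = np.zeros(n)
        x_new[keep] = b[keep]
        # 5. Update the signal/residue estimation and support set,
        #    stopping once the residue norm no longer decreases.
        new_residual = y - Phi @ x_new
        if np.linalg.norm(new_residual) >= np.linalg.norm(residual) - tol:
            break
        x_hat, residual = x_new, new_residual
    return x_hat
```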

Main steps

Steps 1, 4: DGS Approximation Pruning
A nonzero pixel implies that adjacent pixels are more likely to be nonzero.
Key point: prune the data according to both the value of the current pixel and the values of its adjacent pixels.
- Weights can be added to adjust the balance; if the weights corresponding to the adjacent pixels are zero, it reduces to standard sparsity approximation pruning.
- The number of nonzero entries k must be known.
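One plausible reading of this pruning step (an illustrative 1D sketch, not necessarily the authors' exact weighting): score each entry by its own energy plus a weighted sum of its neighbors' energies, then keep the k best-scoring entries.

```python
import numpy as np

def dgs_prune(v, k, weight=0.5):
    """Neighbor-weighted pruning for 1D data: score each entry by its own
    energy plus `weight` times the energy of its two neighbors, then return
    the indices of the k largest scores.  With weight = 0 this reduces to
    standard sparse (top-k) pruning."""
    energy = np.asarray(v, dtype=float) ** 2
    score = energy.copy()
    score[1:] += weight * energy[:-1]      # contribution of the left neighbor
    score[:-1] += weight * energy[1:]      # contribution of the right neighbor
    return np.sort(np.argpartition(score, -k)[-k:])
```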

AdaDGS Recovery
Suppose the sparsity range [kmin, kmax] is known:
- Set a sparsity step size
- Iteratively run the DGS recovery algorithm with an incremented sparsity number until a halting criterion is met
In practice, choosing the halting condition is very important, and there is no optimal way to do it.
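Under that description, the adaptive wrapper might look as follows (our sketch; `dgs_recover` and `dgs_prune` are the routines sketched earlier, and the step size and threshold are illustrative). The relative-change test corresponds to the outer-loop halting condition on the next slide.

```python
import numpy as np

def adadgs_recover(Phi, y, k_min, k_max, step, dgs_prune, tol=1e-3):
    """Adaptive DGS: rerun DGS recovery with an increasing sparsity number
    until the recovered signal stops changing appreciably."""
    x_prev = x_curr = np.zeros(Phi.shape[1])
    for k in range(k_min, k_max + 1, step):
        x_curr = dgs_recover(Phi, y, k, dgs_prune)
        change = np.linalg.norm(x_curr - x_prev) / max(np.linalg.norm(x_curr), 1e-12)
        if change < tol:
            break                          # improvement too small to continue
        x_prev = x_curr
    return x_curr
```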

Two Useful Halting Conditions
1. The residue norm in the current iteration is not smaller than that in the previous iteration: practically fast; used in the inner loop of AdaDGS.
2. The relative change of the recovered data between two consecutive iterations is smaller than a certain threshold: it is not worth taking more iterations if the improvement is small; used in the outer loop of AdaDGS.
In practical applications, choosing the halting condition is very important; there is no optimal way to choose it.

Application to Compressive Sensing
Experiment setup:
- Quantitative evaluation: relative difference between the estimated sparse data and the ground truth
- Running on a 3.2 GHz PC in Matlab
- Goal: demonstrate the advantage of DGS over standard sparsity on the compressive sensing of DGS data
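As we read it, the evaluation metric is the usual relative difference; a one-line sketch (names are ours):

```python
import numpy as np

def relative_error(x_hat, x_true):
    """Relative difference between the estimate and the ground truth."""
    return np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```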

Example: 1D Simulated Signals
n = 512, k = 64, q = 4; m = 3k = 192
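Putting the earlier sketches together for this configuration (illustrative only, not the authors' experiment code; it assumes `make_dgs_signal`, `dgs_recover`, `dgs_prune`, and `relative_error` from the sketches above are defined in the same session):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, q = 512, 64, 4
m = 3 * k                                   # 192 measurements

x = make_dgs_signal(n=n, k=k, q=q, seed=1)  # clustered sparse signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

x_hat = dgs_recover(Phi, y, k, dgs_prune)
print("relative error:", relative_error(x_hat, x))
```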

Statistics: 1D Simulated Signals

Example: 2D Images
48×48 image, k = 152, q = 4; m = 440.
Figure: (a) original image; (b) recovered image with MCS [Ji et al. '08] (error 0.8399, time 29.2656 seconds); (c) recovered image with SP [Dai '08] (error 0.7605, time 1.6579 seconds); (d) recovered image with DGS (error 0.1176, time 1.0659 seconds).

Statistics: 2D Images
It is not surprising that the running times with DGS are always far less than those with MCS, and slightly less than those with SP, for all measurement numbers.

Video Background Subtraction
The foreground is typical DGS data:
- The nonzero coefficients are clustered into unknown groups, which correspond to the foreground objects
- Unknown group sizes/locations and group number
- Temporal and spatial sparsity
Figure: (a) one frame; (b) the foreground; (c) the foreground mask; (d) our result.

AdaDGS Background Subtraction
Previous video frames: let ft be the foreground image and bt the background image, and suppose background subtraction has already been done for frames 1 to t.
For the new frame:
- Temporal sparsity: the inter-frame change x is sparse (a Sparsity Constancy assumption in place of the Brightness Constancy assumption)
- Spatial sparsity: ft+1 is dynamic group sparse
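A heavily simplified sketch of the idea (ours, not the paper's formulation, which uses the full AdaDGS recovery with a measurement model not reproduced in this transcript): treat the frame-minus-background image as dynamic group sparse and pick its support with the DGS pruning sketched earlier.

```python
import numpy as np

def foreground_mask(new_frame, background, k, weight=0.5):
    """Illustrative only: vectorize the frame-minus-background image, keep the
    k entries selected by the 1D DGS pruning (so only horizontal neighbors are
    considered here), and reshape the result into a foreground mask."""
    z = (new_frame.astype(float) - background.astype(float)).ravel()
    support = dgs_prune(z, k)              # dgs_prune sketched earlier
    mask = np.zeros(z.size, dtype=bool)
    mask[support] = True
    return mask.reshape(new_frame.shape)
```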

Formulation
The unknown z in the resulting problem is dynamic group sparse data, so the problem is efficiently solved by the proposed AdaDGS algorithm.

Video Results
(a) Original video; (b) our result; (c) by [C. Stauffer and W. Grimson, 1999].

Video Results
(a) Original video; (b) our result; (c) by [C. Stauffer and W. Grimson, 1999]; (d) by [Monnet et al., 2003].

Video Results
(a) Original; (b) proposed; (c) by [J. Zhong and S. Sclaroff, 2003]; (d) by [C. Stauffer and W. Grimson, 1999].
(a) Original; (b) our result; (c) by [Elgammal et al., 2002]; (d) by [C. Stauffer and W. Grimson, 1999].

Summary
Proposed work:
- Definition and theoretical result for DGS
- DGS and AdaDGS recovery algorithms
- Two applications
Future work:
- Real-time implementation of AdaDGS background subtraction (about 3 seconds per frame in the current Matlab implementation)
Thanks!