Matrix sparsification and the sparse null space problem
Lee-Ad Gottlieb (Weizmann Institute), Tyler Neylon (Bynomial Inc.)

Slide 1: Matrix sparsification and the sparse null space problem
Lee-Ad Gottlieb (Weizmann Institute), Tyler Neylon (Bynomial Inc.)

Slide 2: Matrix sparsification
Problem definition: Given a matrix, make it as sparse as possible (minimize the number of non-zero entries) using elementary row operations; we want lots of 0's.
We could try Gaussian elimination... but can we do better?
Applications:
- Computational speed-ups for many fundamental matrix operations
- Machine learning [SS-00]
- Discovery of cycle bases of graphs [KMMP-04]
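As a toy illustration (a hypothetical 2x3 example, not from the paper), a single elementary row operation can already create zeros while preserving the row space:

```python
import numpy as np

# Subtracting row 0 from row 1 removes two nonzeros without changing
# the row space -- the kind of operation matrix sparsification uses.
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 4.0]])
before = np.count_nonzero(A)   # 6 nonzero entries
A[1] -= A[0]                   # elementary row operation
after = np.count_nonzero(A)    # 4 nonzero entries
```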

Slide 3: Matrix sparsification
What's known about matrix sparsification? Precious little... mostly work by McCormick and coauthors.
- It's NP-hard [M-83]
- Heuristic [CM-02]
- Algorithm under a limiting condition: [HM-84] gave an approximation algorithm for matrices that satisfy the Haar condition

Slide 4: Sparse null space problem
First recall the definition of null space:
- The null space of A is the set of all vectors b for which Ab = 0.
- A null matrix for A is a matrix whose columns span the null space of A. Finding such a matrix is a basic task in linear algebra.
Problem definition: Given a matrix A, find an optimally sparse matrix N that is a full null matrix for A:
- N has full rank
- The columns of N span the null space of A
- N is sparse
Applications:
- Helps solve linear equality problems [CP-86] (optimization via gradient descent, the dual variable method for Navier-Stokes, quadratic programming)
- Efficient solution to the force method for structural analysis [GH-87]
- Finding correlations between time series, such as financial stocks [ZZNS-05]
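A (generally dense) full null matrix can be computed with standard linear algebra; a minimal sketch using the right singular vectors of A (the hard part, which the talk addresses, is making the result sparse):

```python
import numpy as np

def full_null_matrix(A, tol=1e-10):
    """Return a matrix N whose columns form a basis for the null space
    of A, read off from the right singular vectors of A. This N is
    typically dense; sparsifying it is the sparse null space problem."""
    A = np.asarray(A, dtype=float)
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T  # columns span {b : A b = 0}

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, so the null space has dimension 2
N = full_null_matrix(A)
```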

Slide 5: Sparse null space problem
What's known about sparse null space? Precious little...
First considered in [P-84]; known to be NP-hard [CP-86].
Known results: heuristics [BHKLPW-85, CP-86, GH-87]

Slide 6: Two matrix problems
It is not difficult to see that matrix sparsification and sparse null space are equivalent.
- Let B be a full null matrix for A. Then the following statements are equivalent:
  - N = BX for some invertible matrix X
  - N is a full null matrix for A
Surprisingly, then, these two lines of work have proceeded independently.
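A tiny numeric check of the equivalence (a hypothetical example; A, B, and X are my choices): multiplying a full null matrix on the right by an invertible X yields another full null matrix, so sparsifying the null space basis amounts to sparsifying B by column operations.

```python
import numpy as np

# B is a full null matrix for A (here the nullity of A is 1), and
# N = B X for invertible X remains a full null matrix for A.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
B = np.array([[1.0], [-1.0], [1.0]])   # A @ B = 0
X = np.array([[2.0]])                  # any invertible matrix
N = B @ X                              # still spans the null space of A
```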

Slide 7: Our contribution
Two matrix problems that:
- have been around since the 80's
- have many applications
- are equivalent; from now on, we refer only to matrix sparsification
Can we prove something concrete about matrix sparsification? Hardness of approximation? Approximation algorithms? We can do both:
- Hardness of approximation: as hard as label cover (quite hard to approximate); hard to approximate within a factor 2^(log^(0.5-o(1)) n) of optimal (with some caveats)
- Approximation algorithms: for the hard problem, and under limiting assumptions

Slide 8: Min unsatisfy
As a first step towards proving hardness of approximation, we show that matrix sparsification is closely related to the min unsatisfy problem, introduced in [ABSZ-97].
Problem definition: Given a linear system Ax = b, find a vector x that minimizes the number of unsatisfied equations.
What's known about min unsatisfy:
- As hard to approximate as label cover [ABSZ-97]
- Over Q, hard to approximate within a factor 2^(log^(0.5-o(1)) n) of optimal, under the assumption that NP does not admit quasi-polynomial time algorithms
- A randomized Θ(m/log m) approximation algorithm, where m is the number of rows [BK-01]
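To make the problem statement concrete, here is an exact exponential-time solver for tiny instances (my own illustration, not the paper's algorithm). It relies on the fact that some optimal x solves a consistent subsystem of at most n equations, so trying the least-squares solution of every small row subset finds the optimum.

```python
import numpy as np
from itertools import combinations

def min_unsatisfy_bruteforce(A, b, tol=1e-9):
    """Exact solver for tiny min-unsatisfy instances: try the solution
    of every subset of at most n equations and keep the x that violates
    the fewest equations. Exponential time; illustration only."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    m, n = A.shape
    best_x, best_bad = np.zeros(n), int(np.sum(np.abs(b) > tol))
    for k in range(1, n + 1):
        for rows in combinations(range(m), k):
            x, *_ = np.linalg.lstsq(A[list(rows)], b[list(rows)], rcond=None)
            bad = int(np.sum(np.abs(A @ x - b) > tol))
            if bad < best_bad:
                best_x, best_bad = x, bad
    return best_x, best_bad

# Three equations in one unknown; two agree (x = 1), one conflicts.
A = np.array([[1.0], [1.0], [1.0]])
b = np.array([1.0, 1.0, 5.0])
x, bad = min_unsatisfy_bruteforce(A, b)
```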

Slide 9: Hardness of matrix sparsification
We give a reduction from min unsatisfy to matrix sparsification, which proves hardness of approximation for matrix sparsification.
Preliminary note: There exist matrices that are unsparsifiable, and these can be constructed in polynomial time.
- M = (I X), where I is the identity matrix and X contains no 0 entries.
- The identity portion can always be achieved via Gaussian elimination.

Slide 10: Hardness of matrix sparsification
Proof outline:
- Let (A, y) be an instance of min unsatisfy.
- We create a matrix M with a few copies of A, but many copies of y.
- Minimizing the number of non-zero entries in M reduces to finding a sparse linear combination of y with some vectors of A, that is, to solving the instance of min unsatisfy.
Construction: Let (I_q X) be an unsparsifiable matrix, let ⊗ denote the Kronecker product, and choose q = n^2. Then

M = [ I_q ⊗ y    X ⊗ y  ]
    [    0      I_q ⊗ A ]
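One consistent way to assemble blocks of this shape with NumPy (a sketch only: the dimensions here are my assumptions chosen so the blocks align, with A of size m x n, y of size m x 1, and X of size q x qn with no zero entries; the slide takes q = n^2):

```python
import numpy as np

# Assemble M = [[I_q (x) y, X (x) y], [0, I_q (x) A]] where (x) is the
# Kronecker product. Block dimensions: I_q (x) y is qm x q, X (x) y is
# qm x qn, and I_q (x) A is qm x qn, so the block columns line up.
m, n, q = 3, 2, 2
A = np.arange(1.0, m * n + 1).reshape(m, n)
y = np.array([[1.0], [1.0], [2.0]])
X = np.arange(1.0, q * q * n + 1).reshape(q, q * n)  # all entries nonzero
M = np.block([
    [np.kron(np.eye(q), y), np.kron(X, y)],
    [np.zeros((q * m, q)),  np.kron(np.eye(q), A)],
])
```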

Slide 11: Approximation algorithm
Our first result: matrix sparsification is as hard as min unsatisfy, which itself is as hard as label cover.
- Matrix sparsification is hard to approximate within a factor 2^(log^(0.5-o(1)) n) of optimal.
So what can be done for matrix sparsification? We further show that a solution to min unsatisfy implies a similar solution for matrix sparsification.
- Hence, the randomized Θ(m/log m) approximation algorithm for min unsatisfy [BK-01] carries over to matrix sparsification.
- More importantly, we also show how to extend a large number of heuristics, and of algorithms under limiting assumptions, to min unsatisfy, and therefore to matrix sparsification. In particular, we show that the well-known l1-minimization heuristic applies to matrix sparsification.

Slide 12: Another look at min unsatisfy
Consider the exact dictionary representation (EDR) problem, a central problem in sparse approximation theory.
Problem definition: Given a matrix D of dictionary vectors and a target vector s, find the smallest subset D' of the dictionary such that some linear combination of the vectors in D' equals s.
What's known about this problem:
- A variant appeared in a paper of Schmidt in 1907 [T-03]
- NP-hard [N-95]
- Applications in signal representation [CW-92, NP-09], amplitude optimization [S-90], function approximation [N-95], and data mining [CRT-06, ZGSD-06, GGIMS-02, GMS-05]
- A large number of heuristics have been studied for this problem, along with approximation algorithms under limiting assumptions
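For intuition, EDR on a tiny instance can be solved by exhaustive search over column subsets (my illustration; exponential time, so only viable for toy sizes):

```python
import numpy as np
from itertools import combinations

def edr_bruteforce(D, s, tol=1e-9):
    """Exact dictionary representation by brute force: find a smallest
    set of columns of D whose span contains s."""
    m, n = D.shape
    if np.linalg.norm(s) <= tol:
        return []
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            sub = D[:, list(cols)]
            x, *_ = np.linalg.lstsq(sub, s, rcond=None)
            if np.linalg.norm(sub @ x - s) <= tol:
                return list(cols)
    return None  # s is not in the column span of D

D = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
s = np.array([2.0, 2.0, 2.0])   # s = 2 * (column 2), so one column suffices
cols = edr_bruteforce(D, s)
```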

Slide 13: Another look at min unsatisfy
EDR is in fact equivalent to min unsatisfy ([AK-95] proved one direction), although this seems to have escaped the notice of the sparse approximation theory community.
We show how to extend the heuristics and algorithms for EDR (and therefore for min unsatisfy) to matrix sparsification.

Slide 14: Matrix sparsification
The following greedy algorithm solves matrix sparsification.
- We assume the existence of a subroutine SIV(A, B) that returns the sparsest vector in the span of matrix A that is not in the span of matrix B.
- Notes: this subroutine can easily be implemented using a heuristic or approximation algorithm for min unsatisfy (see paper). The algorithm below is a slight simplification (again, see paper).
The algorithm builds the matrix B one column at a time:
- B ← empty
- For i = n ... 1:
  - a ← SIV(A, B)
  - append column a to B
[CP-86] proved that this greedy approach works for matroids.
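The loop above can be sketched in code. This is my own toy stand-in: SIV is implemented by brute force over supports (exponential time), not via the min-unsatisfy reduction the paper actually uses, and all helper names are hypothetical.

```python
import numpy as np
from itertools import combinations

def _in_span(B, v, tol=1e-9):
    """Is v in the column span of B?"""
    if B.shape[1] == 0:
        return np.linalg.norm(v) <= tol
    coef, *_ = np.linalg.lstsq(B, v, rcond=None)
    return np.linalg.norm(B @ coef - v) <= tol

def _null_basis(M, tol=1e-9):
    """Columns spanning the null space of M, via SVD."""
    if M.shape[0] == 0:
        return np.eye(M.shape[1])
    _, s, Vt = np.linalg.svd(M, full_matrices=True)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

def siv(A, B, tol=1e-9):
    """Brute-force SIV: the sparsest vector in colspan(A) not in
    colspan(B), found by trying supports of increasing size."""
    m = A.shape[0]
    for k in range(1, m + 1):
        for support in combinations(range(m), k):
            comp = [i for i in range(m) if i not in support]
            K = _null_basis(A[comp])  # x with (A x) zero off the support
            for z in np.eye(K.shape[1]):
                v = A @ (K @ z)
                if np.linalg.norm(v) > tol and not _in_span(B, v, tol):
                    return v
    raise ValueError("colspan(A) is contained in colspan(B)")

def greedy_sparsify(A):
    """Greedy matrix sparsification: build B one SIV column at a time."""
    r = np.linalg.matrix_rank(A)
    B = np.zeros((A.shape[0], 0))
    for _ in range(r):
        B = np.column_stack([B, siv(A, B)])
    return B

A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [0.0, 1.0]])
B = greedy_sparsify(A)  # same column span as A, with fewer nonzeros
```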

Slide 15: Algorithms for matrix sparsification
We conclude that all algorithms for min unsatisfy (and EDR) apply to matrix sparsification as well:
- There exists a randomized Θ(m/log m) approximation algorithm for matrix sparsification.
- A large number of heuristics for EDR carry over to matrix sparsification.
Practical contribution: the popular l1-minimization heuristic for EDR carries over to matrix sparsification.
- This heuristic finds a vector v that satisfies Dv = s while minimizing ||v||_1 instead of the number of non-zeros in v.
- The heuristic is also an approximation algorithm under certain limiting assumptions [F-04].
- This heuristic for matrix sparsification has already been implemented since the public posting of our paper!
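The l1 heuristic reduces to a linear program via the standard split v = p - q with p, q >= 0; a minimal sketch using SciPy (my own illustration; the function name and test instance are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

def l1_sparse_solution(D, s):
    """Among all v with D v = s, return one of minimum l1 norm by
    solving the LP: minimize sum(p) + sum(q) subject to
    D(p - q) = s, p >= 0, q >= 0, and setting v = p - q."""
    m, n = D.shape
    c = np.ones(2 * n)                       # objective: sum(p) + sum(q)
    A_eq = np.hstack([D, -D])                # D p - D q = s
    res = linprog(c, A_eq=A_eq, b_eq=s, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# Toy instance: s is exactly one dictionary column, so v with a single
# nonzero entry (value 1) is feasible, and the LP optimum is at most 1.
rng = np.random.default_rng(0)
D = rng.standard_normal((5, 8))
s = D[:, 3].copy()
v = l1_sparse_solution(D, s)
```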

Slide 16: Conclusion
We considered the matrix sparsification and sparse null space problems, showed that they are very hard to approximate, and showed how to extend a large number of previously studied heuristics and algorithms to these problems.