Matrix sparsification (for rank and determinant computations) Raphael Yuster University of Haifa.


2 Elimination, rank and determinants
Computing the rank and the determinant of a matrix are fundamental algebraic problems with numerous applications. Both can be obtained as by-products of Gaussian elimination (G.E.).
[Bunch and Hopcroft – 1974]: G.E. of a matrix requires asymptotically the same number of operations as matrix multiplication.
Hence the algebraic complexity of rank and determinant computation is O(n^ω), where ω < 2.38 [Coppersmith and Winograd – 1990].
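For concreteness, here is a minimal elimination routine that returns both the rank and the determinant of a real square matrix. It only illustrates the G.E. by-product mentioned above; it is not one of the fast algorithms discussed on these slides.

```python
def rank_and_det(M, eps=1e-9):
    """Rank and determinant of a real square matrix via plain Gaussian elimination
    with partial pivoting (an illustration only, not a fast algorithm)."""
    A = [row[:] for row in M]
    n, det, rank = len(A), 1.0, 0
    for col in range(n):
        piv = max(range(rank, n), key=lambda r: abs(A[r][col]))
        if abs(A[piv][col]) < eps:       # no usable pivot: the matrix is singular
            det = 0.0
            continue
        if piv != rank:
            A[rank], A[piv] = A[piv], A[rank]
            det = -det                   # a row swap flips the sign of the determinant
        det *= A[rank][col]              # the determinant is the product of the pivots
        for r in range(rank + 1, n):
            f = A[r][col] / A[rank][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank, det

print(rank_and_det([[2., 1., 1.],
                    [4., 3., 3.],
                    [8., 7., 9.]]))      # (3, 4.0) up to rounding
```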

3 Elimination, rank and determinants
Can we do better if the matrix is sparse, having m ≪ n^2 non-zero entries?
[Yannakakis – 1981]: G.E. is not likely to help (minimizing the fill-in is NP-hard).
If we allow randomness, there are faster methods for computing the rank of sparse matrices.
[Wiedemann – 1986]: an O(n^2 + nm) Monte Carlo algorithm for a matrix over an arbitrary field.
[Eberly et al.]: an O(n^{3-1/(ω-1)}) < O(n^{2.28}) Las Vegas algorithm when m = O(n).

4 Structured matrices
In some important cases arising in applications, the matrix possesses structural properties in addition to being sparse.
Let A be an n × n matrix. Its representing graph, denoted G_A, has vertex set {1,…,n}, where for i ≠ j there is an edge ij iff a_{i,j} ≠ 0 or a_{j,i} ≠ 0. G_A is always an undirected simple graph.

5 Nested dissection
[Lipton, Rose, and Tarjan – 1979] Their seminal nested dissection method asserts that if A is
- real symmetric positive definite, and
- G_A is represented by an n^β-separator tree,
then G.E. on A can be performed in O(n^{ωβ}) time. For β < 1 this is better than general G.E.
Planar graphs and bounded-genus graphs: β = ½ [the separator tree is constructed in O(n log n) time].
Graphs with an excluded fixed minor: β = ½ [but the separator tree can only be constructed in O(n^{1.5}) time].

6 Nested dissection - limitations
The matrix needs to be:
- symmetric
- real positive (semi)definite
The method does not apply to matrices over finite fields (not even GF(2)), nor to real non-symmetric matrices, nor to symmetric matrices that are not positive semidefinite. In other words, it is not a general method.
Our main result: all of these limitations can be overcome if we only wish to compute ranks or absolute values of determinants, thus making nested dissection a general method for these tasks.

7 Matrix sparsification
An important tool used in the main result:
Sparsification lemma: Let A be a square matrix of order n with m non-zero entries. A square matrix B of order n+2t, with t = O(m), can be constructed in O(m) time so that:
- det(B) = det(A),
- rank(B) = rank(A) + 2t,
- each row and column of B has at most three non-zero entries.

8 Why is sparsification useful?
The usefulness of sparsification stems from the fact that constant powers of B are also sparse; in particular BDB^T (where D is a diagonal matrix) is sparse. This is not true for the original matrix A.
Over the reals we know that rank(BB^T) = rank(B) = rank(A) + 2t, and also that det(BB^T) = det(A)^2.
Since BB^T is symmetric and positive semidefinite (over the reals), the nested dissection method may apply, provided we can also guarantee that G_{BB^T} has a good separator tree (guaranteeing this, in general, is not an easy task).
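A quick numerical sanity check of these real-field identities, using NumPy on an arbitrary small matrix (not one produced by the sparsification lemma):

```python
import numpy as np

# An arbitrary small real matrix standing in for the sparsified matrix B.
B = np.array([[1., 2., 0.],
              [0., 1., 3.],
              [4., 0., 1.]])
C = B @ B.T                                                    # symmetric PSD over the reals

print(np.linalg.matrix_rank(C) == np.linalg.matrix_rank(B))    # rank(BB^T) = rank(B)
print(np.isclose(np.linalg.det(C), np.linalg.det(B) ** 2))     # det(BB^T) = det(B)^2
print(np.allclose(C, C.T))                                     # and C is symmetric
```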

9 Main result – for ranks
Let A ∈ F^{n×n}.
If G_A has bounded genus, then rank(A) can be computed in O(n^{ω/2}) < O(n^{1.19}) time.
If G_A excludes a fixed minor, then rank(A) can be computed in O(n^{3ω/(3+ω)}) < O(n^{1.33}) time.
The algorithm is deterministic if F = ℝ and randomized if F is a finite field.
A similar result is obtained for absolute values of determinants of real matrices.

10 Sparsification algorithm
Assume that A is represented in sparse form: row lists R_i contain elements of the form (j, a_{i,j}). By using the symbol 0* we may assume a_{i,j} ≠ 0 iff a_{j,i} ≠ 0.
At step t of the algorithm the current matrix is denoted B_t, and its order is n+2t. Initially B_0 = A. A single step constructs B_{t+1} from B_t by increasing the number of rows and columns of B_t by 2 and by modifying a constant number of entries of B_t. The algorithm halts when each row list of B_t has at most three entries.

11 Sparsification algorithm – cont.
Thus, in the final matrix B_t, each row and column has at most 3 non-zero entries.
We make sure that det(B_{t+1}) = det(B_t) and rank(B_{t+1}) = rank(B_t) + 2. Hence, at the end we also have det(B_t) = det(A) and rank(B_t) = rank(A) + 2t.
How to do it: as long as there is a row with at least 4 non-zero entries, pick such a row i and suppose b_{i,v} ≠ 0, b_{i,u} ≠ 0.

12 Sparsification algorithm – cont.
Consider the principal block defined by {i, u, v} [the slide shows how this block, together with two new rows and columns, is modified so that the determinant is preserved and the rank grows by 2].
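The transcript does not reproduce the block shown on the slide. To convey the flavor of one splitting step, here is a generic gadget (my own illustration, not necessarily the exact block used in the paper): the two entries b_{i,u}, b_{i,v} are moved into a new row, row i gets a single new entry pointing to a new column, and a 2×2 block D = [[0,1],[-1,0]] is placed in the new corner. The Schur-complement identities det([[A,U],[V,D]]) = det(D)·det(A - UD^{-1}V) and rank([[A,U],[V,D]]) = rank(D) + rank(A - UD^{-1}V) then give exactly the determinant and rank relations above, while row i loses one non-zero and each new row and column has at most three non-zeros (in this sketch, heavy columns would be handled by the analogous column operation).

```python
import numpy as np

def split_row(B, i, u, v):
    """One splitting step (illustrative gadget, not necessarily the paper's block):
    zero out B[i,u] and B[i,v], re-create them in a new row, give row i a single
    new entry in a new column, and put D = [[0,1],[-1,0]] in the new 2x2 corner.
    Schur-complement identities give det(B') = det(B) and rank(B') = rank(B) + 2."""
    m = B.shape[0]
    Bp = np.zeros((m + 2, m + 2))
    Bp[:m, :m] = B
    Bp[i, u] = Bp[i, v] = 0           # row i loses its entries in columns u and v ...
    Bp[m + 1, u] = B[i, u]            # ... which reappear in new row m+1
    Bp[m + 1, v] = B[i, v]
    Bp[i, m] = 1                      # row i instead points to new column m
    Bp[m, m + 1] = 1                  # the gadget D in the new corner
    Bp[m + 1, m] = -1
    return Bp

B = np.array([[2., 1., 3., 5.],       # row 0 has four non-zero entries
              [0., 1., 0., 2.],
              [1., 0., 4., 0.],
              [0., 3., 0., 1.]])
Bp = split_row(B, i=0, u=2, v=3)

print(np.isclose(np.linalg.det(Bp), np.linalg.det(B)))            # determinant preserved
print(np.linalg.matrix_rank(Bp) == np.linalg.matrix_rank(B) + 2)  # rank grows by exactly 2
print(np.count_nonzero(Bp[0]) < np.count_nonzero(B[0]))           # row 0 became sparser
```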

13 What happens in the representing graph?
Recall the vertex splitting trick: [figure: the row lists before and after the split, with labeled entries such as (8, -6) and (9, 0*) moved from the list of the split vertex to the new vertex].
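On the graph side, the same operation is a vertex split on adjacency lists. The sketch below is a generic, unlabeled version (the slide additionally carries the (j, a_{i,j}) labels of the row lists through the split):

```python
def split_vertex(adj, i):
    """Split a high-degree vertex i (call on vertices of degree >= 4): keep two of
    its neighbours on i, move the rest to a fresh vertex i2, and join i and i2 by
    an edge.  Repeating this drives every degree down to at most 3.  (A generic,
    unlabeled version for intuition; the slides split the labeled row lists.)"""
    i2 = max(adj) + 1
    keep, move = adj[i][:2], adj[i][2:]
    adj[i] = keep + [i2]
    adj[i2] = move + [i]
    for w in move:                    # redirect the moved neighbours to i2
        adj[w] = [i2 if x == i else x for x in adj[w]]
    return adj

# A star with centre 0 and five leaves: deg(0) = 5.
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
adj = split_vertex(adj, 0)            # now deg(0) = 3 and deg(6) = 4
adj = split_vertex(adj, 6)            # after this split every degree is at most 3
print(adj)
```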

14 Separators
At the top level: a partition A, B, C of the vertices of G so that |C| = O(n^β), |A|, |B| < αn, and no edges connect A and B.
Strong separator tree: recurse on A ∪ C and on B ∪ C.
Weak separator tree: recurse on A and on B.
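Given any routine producing such a partition, building either kind of separator tree is a short recursion. The sketch below assumes a hypothetical find_separator(graph, vertices) oracle (e.g. Lipton–Tarjan for planar graphs); the toy oracle at the bottom handles a path graph, just to make the snippet runnable:

```python
def separator_tree(vertices, graph, find_separator, strong=True, leaf_size=8):
    """Build a separator tree.  find_separator(graph, vertices) is an assumed
    oracle returning a partition (A, B, C) of `vertices` with |C| small,
    |A|, |B| <= alpha*|vertices| and no A-B edges.  Strong trees recurse on
    A ∪ C and B ∪ C; weak trees recurse on A and B only."""
    if len(vertices) <= leaf_size:
        return {'leaf': vertices}
    A, B, C = find_separator(graph, vertices)
    left = (A | C) if strong else A
    right = (B | C) if strong else B
    return {'separator': C,
            'left': separator_tree(left, graph, find_separator, strong, leaf_size),
            'right': separator_tree(right, graph, find_separator, strong, leaf_size)}

# Toy oracle for a path graph on integer vertices: the middle vertex is a separator.
def path_separator(graph, vertices):
    vs = sorted(vertices)
    mid = len(vs) // 2
    return set(vs[:mid]), set(vs[mid + 1:]), {vs[mid]}

tree = separator_tree(set(range(32)), None, path_separator, strong=False)
print(tree['separator'])              # {16}, the top-level separator
```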

15 Finding separators
Lipton–Tarjan (1979): planar graphs have (O(n^{1/2}), 2/3)-separators, which can be found in linear time.
Alon–Seymour–Thomas (1990): H-minor-free graphs have (O(n^{1/2}), 2/3)-separators, which can be found in O(n^{1.5}) time.
Reed and Wood (2005): for any ν > 0, there is an O(n^{1+ν})-time algorithm that finds (O(n^{(2-ν)/3}), 2/3)-separators of H-minor-free graphs.

16 Obstacle 1: preserving separators
Can we perform the (labeled) vertex splitting and guarantee that the modified representing graph still has an n^β-separator tree?
This is easy for planar graphs and bounded-genus graphs: just take the two vertices split off from vertex i to be on the same face. This preserves the genus.
It is not so easy (actually, not true!) that splitting an H-minor-free graph keeps it H-minor-free.
[Y. and Zwick]: vertex splitting can be performed while keeping the separation parameter β (one needs to use weak separators). No "additional cost".

17 Splitting introduces a K_4-minor

18 Main technical lemma
Suppose that (O(n^β), 2/3)-separators of H-minor-free graphs can be found in O(n^γ) time. If G is an H-minor-free graph, then a vertex-split version G' of G of bounded degree, and an (O(n^β), 2/3)-separator tree of G', can be found in O(n^γ) time.

19 Running time

20 Obstacle 2: separators of BDB^T
We started with A, for which G_A has an n^β-separator tree. We used sparsification to obtain a matrix B with rank(B) = rank(A) + 2t, for which G_B has bounded degree and also has a (weak) n^β-separator tree.
We can compute, in linear time, BDB^T where D is a chosen diagonal matrix. We do so because BDB^T is always pivoting-free (the analogue of positive definite).
But what about the graph G_C of C = BDB^T? No problem: G_C = (G_B)^2 (the square of a bounded-degree graph), and a k-separator of G_B, together with its neighbours, is an O(k)-separator of (G_B)^2, so k-separator => O(k)-separator.

21 Obstacle 3: rank preservation of BDB^T
Over the reals, take D = I and use rank(BB^T) = rank(B), and we are done.
Over other fields (e.g. finite fields) this is not so. If D = diag(x_1,…,x_n) with indeterminates x_i, we are fine over the generated ring: rank(BDB^T) = rank(B) over F[x_1,…,x_n]. But we cannot just substitute random field elements for the x_i's and hope that w.h.p. the rank is preserved!
[Example from the slide: a matrix B with rank(B) = 2 over GF(3) but rank(BB^T) = 1 over GF(3).]
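A concrete small instance of the drop claimed on the slide (my own example matrix; the rank is computed with a hand-rolled Gaussian elimination modulo p, since standard numeric libraries do not work over finite fields):

```python
def rank_mod_p(M, p):
    """Rank of an integer matrix over GF(p), p prime, by Gaussian elimination mod p."""
    M = [[x % p for x in row] for row in M]
    rows = len(M)
    cols = len(M[0]) if rows else 0
    rank = col = 0
    while rank < rows and col < cols:
        pivot = next((r for r in range(rank, rows) if M[r][col]), None)
        if pivot is None:
            col += 1
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][col], p - 2, p)                 # modular inverse (p prime)
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
        col += 1
    return rank

# rank(B) = 2 over GF(3), yet B B^T reduces to [[0,0],[0,2]] mod 3, which has rank 1.
B = [[1, 1, 1],
     [0, 1, 2]]
BBt = [[sum(a * b for a, b in zip(r1, r2)) for r2 in B] for r1 in B]
print(rank_mod_p(B, 3), rank_mod_p(BBt, 3))               # prints: 2 1
```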

22 Obstacle 3: cont.
Solution: randomly replace the x_i's with elements of a sufficiently large extension field. If |F| = q, it suffices to take an extension field F' with q^r elements, where q^r > 2n^2. Thus r = O(log n). Constructing F' (generating an irreducible polynomial of degree r) takes O(r^2 + r log q) time [Shoup – 1994].
[Example from the slide: a matrix B with rank(B) = n/2 over GF(p) for which the probability that rank(BDB^T) = n/2 is exponentially small when D is filled with random base-field elements.]

23 Applications
Maximum matching in bounded-genus graphs can be found in O(n^{ω/2}) < O(n^{1.19}) time (randomized).
Maximum matching in H-minor-free graphs can be found in O(n^{3ω/(3+ω)}) < O(n^{1.33}) time (randomized).
The number of maximum matchings in bounded-genus graphs can be computed deterministically in O(n^{ω/2+1}) < O(n^{2.19}) time.

24 Tutte's matrix (skew-symmetric symbolic adjacency matrix)
For a graph G = (V,E) on {1,…,n}, the Tutte matrix A has A_{i,j} = x_{i,j} if ij ∈ E and i < j, A_{i,j} = -x_{j,i} if ij ∈ E and i > j, and A_{i,j} = 0 otherwise, where the x_{i,j} are distinct indeterminates.

25 Tutte's theorem: Let G = (V,E) be a graph and let A be its Tutte matrix. Then G has a perfect matching iff det A ≢ 0 (as a polynomial).

26 Tutte's theorem: Let G = (V,E) be a graph and let A be its Tutte matrix. Then G has a perfect matching iff det A ≢ 0.
Lovász's theorem: Let G = (V,E) be a graph and let A be its Tutte matrix. Then the rank of A is twice the size of a maximum matching in G.

27 Why randomization?
It remains to show how to compute rank(A) (w.h.p.) in the claimed running time. By the Zippel/Schwartz polynomial identity testing method, we can replace the variables x_{i,j} in A with random elements from {1,…,R} (where R ≈ n^2 suffices here), and w.h.p. the rank does not decrease.
By paying the price of randomness, we are left with the problem of computing the rank of a matrix with small integer coefficients.
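A toy end-to-end illustration of this randomized reduction (my own sketch: random substitution into the Tutte matrix, with NumPy's generic rank routine standing in for the nested-dissection-based rank computation of these slides):

```python
import numpy as np

def tutte_matrix(n, edges, rng, R):
    """Tutte matrix of a graph on vertices 0..n-1, with each variable x_ij
    already replaced by a random value from {1,...,R} (the Zippel/Schwartz step)."""
    A = np.zeros((n, n))
    for i, j in edges:
        x = rng.integers(1, R + 1)
        A[i, j], A[j, i] = x, -x          # skew-symmetric
    return A

rng = np.random.default_rng(0)
# A path on 5 vertices: maximum matching size 2, and no perfect matching.
A = tutte_matrix(5, [(0, 1), (1, 2), (2, 3), (3, 4)], rng, R=5 ** 2)

rank = np.linalg.matrix_rank(A)
print(rank // 2)      # 2 = maximum matching size (Lovasz), w.h.p. over the substitution
print(rank < 5)       # True: A is singular, so the path has no perfect matching (Tutte)
```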

28 Thanks