1 A fast algorithm for Maximum Subset Matching Noga Alon & Raphael Yuster.



2 Maximum subset matching – definition
Input: a graph G=(V,E) and a subset S ⊆ V.
Goal: determine the maximum number of vertices of S that can be matched in a matching of G. We denote this quantity by m_G(S).
This generalizes the maximum matching problem, which corresponds to the case S=V, where m_G(V) = 2m(G). It also has practical applications.
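For tiny graphs the definition can be checked directly by enumerating all matchings. This is an illustrative brute-force sketch, not the algorithm of the paper; the helper name m_G and the example graph are ours:

```python
from itertools import combinations

def m_G(edges, S):
    """Brute-force maximum subset matching for tiny graphs:
    the largest number of vertices of S saturated by some matching."""
    S = set(S)
    best = 0
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            used = [v for e in sub for v in e]
            if len(used) == len(set(used)):      # vertex-disjoint => matching
                best = max(best, sum(1 for v in used if v in S))
    return best

# path 0-1-2-3 with S = {1, 2}: the single edge (1,2) saturates both
print(m_G([(0,1), (1,2), (2,3)], {1, 2}))       # -> 2
# S = V recovers twice the maximum matching: 2m(G) = 4
print(m_G([(0,1), (1,2), (2,3)], {0, 1, 2, 3})) # -> 4
```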

3 [Figure: a graph with a marked subset S, m_G(S) = 4]
Not every maximum matching saturates the maximum possible number of vertices of S. Not every matching that saturates m_G(S) vertices of S can be extended to a maximum matching.

4 Main result
An efficient randomized algorithm that computes m_G(S). The running time is expressed in terms of:
s = |S|
m = |E(S, V−S)|
Note: there could be as many as Θ(s²) edges inside S, and hence G may have as many as Θ(m + s²) edges.
[Figure: example with m = 3, s = 4]

5 Let G=(V,E) be a graph and let S ⊆ V, where |S| = s and there are m edges from S to V−S. Then there is a randomized algorithm that computes m_G(S) w.h.p. in time Õ(s^ω) if m ≤ s^((ω+1)/2), and Õ(m·s^((ω−1)/2)) otherwise. With ω < 2.38 [Coppersmith and Winograd 90] we get Õ(s^2.38) in the first case and Õ(m·s^0.69) in the second.

6 Comparison with previous algorithms
Maximum subset matching can be solved via any maximum weighted matching algorithm: assign weight 2 to the edges inside S and weight 1 to the m edges from S to V−S. Clearly a maximum weighted matching has weight m_G(S). The fastest weighted matching algorithm [Gabow and Tarjan 91] would run, in our case, in time O(n^(1/2)·(m + s²)). For example: if m < s^1.69 then this algorithm runs in O(s^2.5) while ours runs in O(s^2.38).
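The weight assignment can be sanity-checked by brute force on a small instance. This sketch enumerates matchings directly (the helper max_weight_matching_value and the example graph are ours, not the Gabow–Tarjan algorithm):

```python
from itertools import combinations

def max_weight_matching_value(edges, weight):
    """Brute-force maximum weight of a matching (tiny graphs only)."""
    best = 0
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            used = [v for e in sub for v in e]
            if len(used) == len(set(used)):      # a matching
                best = max(best, sum(weight[e] for e in sub))
    return best

S = {1, 2}
edges = [(0, 1), (1, 2), (2, 3)]
# weight 2 for edges inside S, weight 1 for edges leaving S
w = {e: 2 if (e[0] in S and e[1] in S) else 1 for e in edges}
print(max_weight_matching_value(edges, w))      # -> 2, which equals m_G(S)
```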

7 The algorithm
Tutte's matrix (the skew-symmetric symbolic adjacency matrix): A_ij = x_ij if (i,j) ∈ E and i < j; A_ij = −x_ji if (i,j) ∈ E and i > j; A_ij = 0 otherwise, where the x_ij are distinct indeterminates.

8 The algorithm (cont.)
A theorem of Lovász: let G=(V,E) be a graph and let A be its Tutte matrix. Then 2m(G) = rank(A).

9 The algorithm (cont.)
We prove the following generalization: let G=(V,E) be a graph, let S ⊆ V, and let A_S be the s × n sub-matrix of A corresponding to the rows of S. Then m_G(S) = rank(A_S).
Easy direction: m_G(S) ≤ rank(A_S). Add edges to G so that V−S becomes a clique and denote the new graph by G'. Clearly rank(A') = 2m(G') = n − s + m_G(S). Thus rank(A_S) = rank(A'_S) ≥ rank(A') − (n − s) = m_G(S).
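The identity m_G(S) = rank(A_S) can be observed numerically on a small example. This is a sketch: random integers stand in for the symbolic x_ij, so the rank equality holds only with high probability, and the rank routine is a plain fraction-based Gaussian elimination written for illustration:

```python
import random
from fractions import Fraction

def rank(M):
    """Rank of a rational matrix by Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def tutte_matrix(n, edges):
    """Tutte matrix with random integers substituted for the x_ij."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        x = random.randint(1, 10**6)
        A[i][j], A[j][i] = x, -x                # skew-symmetric
    return A

random.seed(7)
# path 0-1-2-3 with S = {1, 2}: m_G(S) = 2
A = tutte_matrix(4, [(0, 1), (1, 2), (2, 3)])
A_S = [A[1], A[2]]                              # rows of S
print(rank(A_S))                                # -> 2 (w.h.p.)
```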

10 The algorithm (cont.)
Other direction: m_G(S) ≥ rank(A_S). Suppose that rank(A_S) = r. We have to show that r vertices of S can be matched. There is a non-singular r × r sub-matrix B of A_S. Let the rows of B correspond to S' ⊆ S and its columns to a vertex set U. It suffices to show that there is a matching covering the vertices in S'.
[Figure: example with r = 3, S' = {1,3,4}, U = {1,4,9}]

11 The algorithm (cont.)
In det(B) we get r! products (with signs). Each product corresponds to an oriented subgraph of G obtained by orienting, for each x_ij in the product, the edge from i (the row) to j (the column). This gives a subgraph in which the out-degree of every vertex of S' is 1 and the in-degree of every vertex of U is 1.
[Figure: example with r = 3, S' = {1,3,4}, U = {1,4,9}]

12 The algorithm (cont.)
Any component is either a directed path, all of whose vertices are in S' besides the last one, which is in U − S', or a cycle, all of whose vertices are in S'. The crucial point now is that if there is an odd cycle in this subgraph, then the contribution of this term to det(B) is zero, as we can orient the cycle backwards and get the same term with the opposite sign. As det(B) ≠ 0, there is at least one subgraph in which all components are either paths or even cycles, and hence there is a matching saturating all vertices in S'.
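The cancellation for odd cycles can be seen concretely: the skew-symmetric matrix of a triangle is always singular, since each term meets its reversed-orientation twin with the opposite sign (the values a, b, c below are arbitrary and chosen only for illustration):

```python
# 3x3 skew-symmetric matrix of a triangle with edge values a, b, c
a, b, c = 3, 5, 7
M = [[ 0,  a, -c],
     [-a,  0,  b],
     [ c, -b,  0]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(M))   # -> 0: the two cyclic orientations contribute abc and -abc
```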

13 The algorithm (cont.)
It remains to show how to compute rank(A_S) (w.h.p.) in the claimed running time. By the Schwartz–Zippel polynomial identity testing method, we can replace the variables x_ij in A_S with random elements from {1,…,R} (where R ~ n² suffices here) and w.h.p. the rank does not decrease. At the price of randomness, we are left with the problem of computing the rank of a rectangular matrix with small integer coefficients.

14 The algorithm (cont.)
Some simple linear algebra: if A is a real matrix then AᵀA and A have the same kernel. In particular, rank(AᵀA) = rank(A).
Proof: suppose AᵀAx = 0. Then, using the scalar product, ⟨Ax, Ax⟩ = ⟨AᵀAx, x⟩ = 0, implying Ax = 0.
Notice that the assertion may fail over other fields, as can be seen, for instance, from the p × p matrix over F_p all of whose entries are equal to 1.
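A quick numeric check of this fact over the rationals (a sketch; the rank routine is a plain fraction-based Gaussian elimination and the example matrix is arbitrary):

```python
from fractions import Fraction

def rank(M):
    """Rank of a rational matrix by Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matT(A):
    """Transpose of a list-of-lists matrix."""
    return [list(col) for col in zip(*A)]

def mul(X, Y):
    """Matrix product X @ Y."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

A = [[1, 2, 0, 1],
     [0, 1, 1, 0]]           # a 2x4 real matrix of rank 2
AtA = mul(matT(A), A)        # the 4x4 Gram matrix A^T A
print(rank(A), rank(AtA))    # -> 2 2
```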

15 The algorithm (cont.)
We need the following result of [Bunch and Hopcroft 74]: Gaussian elimination requires asymptotically the same number of algebraic operations as matrix multiplication. Notice that a by-product of Gaussian elimination is the matrix rank. Let A be an n × n matrix over a field. Then rank(A) can be computed using O(n^ω) algebraic operations. In particular, if a field operation requires Θ(K) time then rank(A) can be computed in O(K·n^ω) time.

16 The algorithm (cont.)
Computing B = A_S·A_Sᵀ: write A_S = [A_1 | A_2], where A_1 is the s × s block of columns indexed by S and A_2 is the s × (n−s) block of columns indexed by V−S (containing the m cross edges). Then B = A_1·A_1ᵀ + A_2·A_2ᵀ.
The product A_1·A_1ᵀ takes Õ(s^ω). Using fast sparse rectangular matrix multiplication [Y. and Zwick 05], [Kaplan, Sharir, Verbin 06], the product A_2·A_2ᵀ takes Õ(s^ω) if m < s^((ω+1)/2), and Õ(m·s^((ω−1)/2)) if m ≥ s^((ω+1)/2).
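The block identity B = A_1·A_1ᵀ + A_2·A_2ᵀ is just the column split of A_S, which a few lines verify (the example matrix is arbitrary and chosen only for illustration):

```python
def mmT(A):
    """A @ A^T for a list-of-lists matrix."""
    return [[sum(x * y for x, y in zip(r1, r2)) for r2 in A] for r1 in A]

def add(X, Y):
    """Entrywise matrix sum."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

# a 2x5 matrix A_S split into its S-columns A1 (2x2)
# and its (V-S)-columns A2 (2x3)
A_S = [[1, 2, 0, 0, 3],
       [4, 0, 5, 0, 0]]
A1 = [row[:2] for row in A_S]
A2 = [row[2:] for row in A_S]

print(mmT(A_S) == add(mmT(A1), mmT(A2)))   # -> True
```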

17 The algorithm (cont.)
It remains to compute rank(B). Since B is an s × s matrix, this only requires O(s^ω) algebraic operations by the Bunch–Hopcroft result.
Careful! We cannot do the arithmetic over the integers. We do it over a large finite field F_p. Since each element of B is at most Θ(n⁵), it suffices to take p to be a random prime no greater than Θ(n⁶); w.h.p. rank_{F_p}(B) = rank_ℝ(B). The cost of an algebraic operation is then only logarithmic.
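Rank over F_p can be computed by ordinary Gaussian elimination with modular inverses. A sketch (the prime and the example matrix are ours; a real run would pick the prime at random as the slide describes):

```python
def rank_mod_p(M, p):
    """Rank of an integer matrix over the finite field F_p."""
    M = [[x % p for x in row] for row in M]
    r, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)        # inverse via Fermat's little theorem
        M[r] = [x * inv % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        r += 1
    return r

# row 3 = row 1 + row 2, so the real rank is 2;
# a large prime preserves it w.h.p. (here deterministically)
B = [[1, 2, 3], [4, 5, 6], [5, 7, 9]]
print(rank_mod_p(B, 1_000_003))             # -> 2
```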

18 Note
Computing rank(A_S) directly, without computing A_S·A_Sᵀ, is costly. The fastest algorithms for computing the rank of an s × n matrix directly rely on Gaussian elimination and require O(n·s^(ω−1)) algebraic operations. Moreover, Gaussian elimination cannot exploit the sparsity of the matrix (namely, in our terms, it cannot use m as a parameter in its running time).

19 Thanks