Reduction between Transitive Closure & Boolean Matrix Multiplication. Presented by Rotem Mairon.


1 Reduction between Transitive Closure & Boolean Matrix Multiplication. Presented by Rotem Mairon.

2 Overview
- A divide & conquer approach for matrix multiplication: Strassen's method
- Reduction between transitive closure (TC) and boolean matrix multiplication (BMM)
- The speed-up of 4-Russians for matrix multiplication

3 The speed-up of 4-Russians for matrix multiplication: a basic observation
The boolean multiplication of row A_i by column B_j is defined by C(i,j) = (A(i,1) AND B(1,j)) OR ... OR (A(i,n) AND B(n,j)).
Consider two boolean matrices A and B of small dimension, e.g. n = 4, with rows
A: 0110, 0010, 1000, 1111    B: 1001, 1101, 1110, 0010
The naïve boolean multiplication is done bit after bit; this requires O(n) steps per entry. How can this be improved with pre-processing?

4 The speed-up of 4-Russians for matrix multiplication: a basic observation
Each row A_i and column B_j form a pair of 4-bit binary numbers. These binary numbers can be regarded as indices to a table of size 2^4 x 2^4, in which we pre-store, for each pair of indices, the value of their boolean product. With such a table, the multiplication of A_i by B_j can be computed in O(1) time.
Problem: a table of size 2^n x 2^n is not practical for large matrix multiplication.
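A minimal sketch of the table idea for n = 4; the names `table` and `row_times_col` are mine, not from the slides:

```python
# Pre-compute the boolean product of every pair of 4-bit vectors:
# table[a][b] = 1 iff the vectors share at least one set bit.
N = 4
table = [[1 if (a & b) else 0 for b in range(1 << N)] for a in range(1 << N)]

def row_times_col(row_bits, col_bits):
    """O(1) boolean product of a pre-packed row and column."""
    return table[row_bits][col_bits]

# Row A_i = 0110 and column B_j = 0010 share bit 1, so their product is 1.
print(row_times_col(0b0110, 0b0010))  # 1
print(row_times_col(0b1000, 0b0111))  # 0
```

The table has 2^4 x 2^4 = 256 entries here, which is fine for n = 4 but grows as 2^n x 2^n in general, exactly the problem the slide points out.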

5 The speed-up of 4-Russians for matrix multiplication: the speed-up
Instead of regarding a complete row/column as an index to the table, consider only part of it. Now, we pre-compute multiplication values for pairs of binary vectors of size k in a table of size 2^k x 2^k.


7 The speed-up of 4-Russians for matrix multiplication: the speed-up
Let k = log2(n); then all pairs of k-bit binary vectors can be represented in a table of size 2^k x 2^k = n x n.
Time required for multiplying A_i by B_j: n/k = O(n/log n) table lookups. Total time required: O(n^3/log n) instead of O(n^3).
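The chunked lookup can be sketched as follows; the function and helper names are mine, and the inner loops are kept naïve for clarity:

```python
import math

def four_russians_multiply(A, B):
    """Boolean n x n product using k-bit chunk lookups, k ~ log2(n).
    Rows of A and columns of B are split into k-bit pieces, and every
    pair of k-bit pieces is pre-multiplied into a 2^k x 2^k table."""
    n = len(A)
    k = max(1, int(math.log2(n)))
    # Pre-computed table: do two k-bit vectors share a set bit?
    hit = [[1 if (a & b) else 0 for b in range(1 << k)] for a in range(1 << k)]

    def pack(bits):
        # Pack a list of 0/1 values into an integer index.
        v = 0
        for bit in bits:
            v = (v << 1) | bit
        return v

    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Walk the row and column in k-bit chunks: n/k lookups per entry.
            for s in range(0, n, k):
                a = pack(A[i][s:s + k])
                b = pack([B[t][j] for t in range(s, min(s + k, n))])
                if hit[a][b]:
                    C[i][j] = 1
                    break
    return C
```

Each C(i,j) now costs O(n/log n) lookups instead of O(n) bit operations, giving the O(n^3/log n) total.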

8 Overview
- A divide & conquer approach for matrix multiplication: Strassen's method
- The method of 4-Russians for matrix multiplication
- Reduction between TC and BMM

9 Strassen's method for matrix multiplication: a divide and conquer approach
Can we do better with a straightforward divide and conquer approach? Divide each n x n matrix into four matrices of size (n/2) x (n/2):
C11 = A11 B11 + A12 B21    C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21    C22 = A21 B12 + A22 B22
Computing all of C11, C12, C21, C22 requires 8 multiplications and 4 additions. Therefore, the total running time is T(n) = 8T(n/2) + O(n^2). Using the Master Theorem, this solves to O(n^3). Still cubic!

10 Strassen's method for matrix multiplication: Strassen's algorithm
Define seven matrices of size (n/2) x (n/2):
M1 = (A11 + A22)(B11 + B22)
M2 = (A21 + A22) B11
M3 = A11 (B12 - B22)
M4 = A22 (B21 - B11)
M5 = (A11 + A12) B22
M6 = (A21 - A11)(B11 + B12)
M7 = (A12 - A22)(B21 + B22)
The four (n/2) x (n/2) blocks of C can be defined in terms of M1, ..., M7:
C11 = M1 + M4 - M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 - M2 + M3 + M6
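The seven products translate directly into a recursive routine; this sketch assumes n is a power of 2 and the helper names (`add`, `sub`, `strassen`) are mine:

```python
# Strassen multiplication for n x n matrices, n a power of 2,
# following the seven products M1..M7 from the slide.
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    split = lambda M: ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                       [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(add(sub(M1, M2), M3), M6)
    # Reassemble the four blocks into the full product.
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

Only 7 recursive multiplications occur per level, which is what drives the recurrence down from 8T(n/2) to 7T(n/2).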

11 Strassen's method for matrix multiplication: Strassen's algorithm
Running time? Each matrix Mi requires O(n^2) additions and subtractions but only one multiplication: T(n) = 7T(n/2) + O(n^2), which solves to O(n^2.807).
Volker Strassen, 1969: Strassen's method, the first sub-cubic time algorithm.

12 Strassen's method for matrix multiplication: improvements
1978  O(n^2.796)   V. Y. Pan. Strassen's algorithm is not optimal.
1979  O(n^2.7799)  D. Bini et al. O(n^2.7799) complexity for n x n approximate matrix multiplication.
1981  O(n^2.522)   A. Schönhage. Partial and total matrix multiplication.
1981  O(n^2.496)   Coppersmith and Winograd. On the asymptotic complexity of matrix multiplication. First to break the 2.5 barrier.
1986  O(n^2.479)   Volker Strassen.
1989  O(n^2.376)   Coppersmith and Winograd. Matrix multiplication via arithmetic progressions.
2011  O(n^2.3727)  Virginia Vassilevska Williams. Breaking the Coppersmith-Winograd barrier.

13 Best choices for matrix multiplication
Using the exact formulas for time complexity, for square matrices, crossover points have been found:
For n < 7, the naïve algorithm for matrix multiplication is preferred. As an example, a 6x6 matrix requires 482 steps for the method of 4-Russians, but only 468 steps for the naïve multiplication.
For 6 < n < 513, the method of 4-Russians is most efficient.
For 512 < n, Strassen's approach costs the least number of steps.

14 Overview
- A divide & conquer approach for matrix multiplication: Strassen's method
- The method of 4-Russians for matrix multiplication
- Reduction between TC and BMM

15 Realization of matrix multiplication in graphs
Let A, B be adjacency matrices of two graphs over the same set of vertices {1, 2, ..., n}. An (A,B)-path is a path of length two whose first edge belongs to A and whose second edge belongs to B.
C(i,j) = 1 if and only if there is an (A,B)-path from vertex i to vertex j. Therefore, C = AB is the adjacency matrix with respect to (A,B)-paths.
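The correspondence is visible directly in the boolean product formula; a small sketch (function name mine):

```python
# Boolean product: C[i][j] = 1 iff some k has an A-edge i->k and a
# B-edge k->j, i.e. an (A,B)-path of length two from i to j.
def bool_mult(A, B):
    n = len(A)
    return [[1 if any(A[i][k] and B[k][j] for k in range(n)) else 0
             for j in range(n)] for i in range(n)]

# A has the edge 0->1, B has the edge 1->2, so C has an (A,B)-path 0->2.
A = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
B = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
print(bool_mult(A, B))  # [[0, 0, 1], [0, 0, 0], [0, 0, 0]]
```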

16 Transitive closure by matrix multiplication: definition and a cubic solution
Given a directed graph G = (V,E), the transitive closure of G is defined as the graph G* = (V,E*) where E* = {(i,j) : there is a path from vertex i to vertex j}.
A dynamic programming algorithm has been devised for this problem: the Floyd-Warshall algorithm, which requires O(n^3) time. Could it be beaten?
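The O(n^3) dynamic program can be sketched as follows (the function name is mine):

```python
# Warshall's dynamic program: after round k, reach[i][j] is 1 iff j is
# reachable from i using only intermediate vertices from {0, ..., k}.
def warshall(adj):
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = 1
    return reach
```

Three nested loops over n vertices give the O(n^3) bound the slide quotes.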

17 Transitive closure by matrix multiplication: beating the cubic solution
By squaring the adjacency matrix, we get a matrix in which (i,j) = 1 iff we can get from i to j in exactly two steps.
How could we make (i,j) equal 1 iff there is a path from i to j in at most 2 steps? By storing 1's in all diagonal entries.
What about (i,j) = 1 iff there is a path from i to j in at most 4 steps? Keep multiplying: square the result again.

18 Transitive closure by matrix multiplication: beating the cubic solution
In total, the longest path had 4 vertices, and 2 multiplications were required. How many multiplications are required for the general case? log2(n).
The transitive closure can thus be obtained in O(n^2.37 log(n)) time: multiply the matrix log2(n) times, where each multiplication requires O(n^2.37) steps using fast (Coppersmith-Winograd style) matrix multiplication.
Better still: we can get rid of the log(n) factor.
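The repeated-squaring scheme can be sketched as follows; the naïve boolean product stands in for a fast multiplication routine, and the function name is mine:

```python
# Transitive closure by log2(n) squarings: add self-loops (the identity),
# then square the boolean matrix until all paths of length <= n are covered.
def closure_by_squaring(adj):
    n = len(adj)
    # (I + A): paths of length at most 1.
    M = [[1 if i == j else adj[i][j] for j in range(n)] for i in range(n)]
    covered = 1
    while covered < n:
        # Squaring doubles the path length covered.
        M = [[1 if any(M[i][k] and M[k][j] for k in range(n)) else 0
              for j in range(n)] for i in range(n)]
        covered *= 2
    return M
```

Swapping the inner product for an O(n^2.37) multiplication gives the O(n^2.37 log n) bound from the slide.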

19 Transitive closure by matrix multiplication: getting rid of the log(n) factor
The log(n) factor can be dropped by applying the following steps:
Determine the strongly connected components of the graph: O(n^2).
Collapse each component to a single vertex. The problem is now reduced to the transitive closure problem for the new, acyclic graph.

20 Transitive closure by matrix multiplication: getting rid of the log(n) factor
Generate a topological sort for the new graph, and divide the vertices into two sections: A (first half) and B (second half). The adjacency matrix of the sorted graph is upper triangular:
M = [ A  C ]
    [ 0  B ]
where C holds the edges that cross from A to B.

21 Transitive closure by matrix multiplication: getting rid of the log(n) factor
Connections within A are independent of B; similarly, connections within B are independent of A. Connections from A to B are found step by step. First, a path within A from i to u gives A*(i,u) = 1.

22 Transitive closure by matrix multiplication: getting rid of the log(n) factor
Next, a path within A from i to u followed by an edge of C from u to v gives A*C(i,v) = 1.

23 Transitive closure by matrix multiplication: getting rid of the log(n) factor
Finally, a path within A from i to u, an edge of C from u to v, and a path within B from v to j together give A*CB*(i,j) = 1.

24 Transitive closure by matrix multiplication: getting rid of the log(n) factor
Connections within A are independent of B; similarly, connections within B are independent of A. To find the transitive closure of G, notice that:
G* = [ A*  A*CB* ]
     [ 0   B*    ]
Hence, G* can be found by determining A* and B*, and computing A*CB*. This requires finding the transitive closure of two (n/2) x (n/2) matrices and performing two matrix multiplications: O(n^2.37).
Running time? T(n) = 2T(n/2) + O(n^2.37), which solves to O(n^2.37).
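A sketch of this recursion for a graph whose vertices are already in topological order (so the adjacency matrix is upper triangular); the helper names are mine, and the naïve boolean product stands in for a fast multiplication:

```python
# Divide-and-conquer closure of a topologically sorted DAG:
# split M into blocks [[A, C], [0, B]] and use G* = [[A*, A*CB*], [0, B*]].
def bmm(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[1 if any(X[i][k] and Y[k][j] for k in range(m)) else 0
             for j in range(p)] for i in range(n)]

def dag_closure(M):
    n = len(M)
    if n == 1:
        return [[1]]  # reflexive closure of a single vertex
    h = n // 2
    A = [row[:h] for row in M[:h]]   # edges within the first half
    C = [row[h:] for row in M[:h]]   # edges crossing from A to B
    B = [row[h:] for row in M[h:]]   # edges within the second half
    As, Bs = dag_closure(A), dag_closure(B)
    ACB = bmm(bmm(As, C), Bs)        # paths A -> (cross edge) -> B
    top = [ra + rc for ra, rc in zip(As, ACB)]
    bot = [[0] * h + rb for rb in Bs]
    return top + bot
```

With an O(n^2.37) multiplication in place of `bmm`, the recurrence T(n) = 2T(n/2) + O(n^2.37) solves to O(n^2.37).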

25 Matrix multiplication by transitive closure
Let A, B be two boolean matrices. To compute C = AB, form the following 3n x 3n matrix:
D = [ 0  A  0 ]
    [ 0  0  B ]
    [ 0  0  0 ]
The transitive closure of such a graph is formed by adding exactly the edges from the 1st part to the 3rd part, and these edges are described by the product of the matrices A and B. Therefore,
D* = [ I  A  AB ]
     [ 0  I  B  ]
     [ 0  0  I  ]
so any algorithm for transitive closure also yields the product AB.
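A small sketch of this reduction; Warshall's O(n^3) closure stands in for whatever closure algorithm is available, and the function names are mine:

```python
# Reduction from BMM to TC: build a 3n-vertex layered graph
# D = [[0, A, 0], [0, 0, B], [0, 0, 0]] and read AB off its closure.
def closure(adj):
    n = len(adj)
    r = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if r[i][k] and r[k][j]:
                    r[i][j] = 1
    return r

def multiply_via_closure(A, B):
    n = len(A)
    D = [[0] * (3 * n) for _ in range(3 * n)]
    for i in range(n):
        for j in range(n):
            D[i][n + j] = A[i][j]          # edges: part 1 -> part 2
            D[n + i][2 * n + j] = B[i][j]  # edges: part 2 -> part 3
    Dstar = closure(D)
    # The product AB is exactly the set of closure edges from part 1 to part 3.
    return [[Dstar[i][2 * n + j] for j in range(n)] for i in range(n)]
```

Since the layered graph has no other paths of length two or more, the only edges the closure adds run from the 1st part to the 3rd, and they encode AB.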

26 Thanks
