1 Reduction between Transitive Closure & Boolean Matrix Multiplication Presented by Rotem Mairon.



2 Overview
A divide & conquer approach for matrix multiplication: Strassen's method
Reduction between TC and BMM
The speed-up of 4-Russians for matrix multiplication

3 The speedup of 4-Russians for matrix multiplication
A basic observation
Consider two boolean matrices, A and B, of small dimension (say n = 4). The boolean multiplication of row Ai by column Bj is defined by:
C(i,j) = (A(i,1) AND B(1,j)) OR (A(i,2) AND B(2,j)) OR ... OR (A(i,n) AND B(n,j))
The naive boolean multiplication is done bit after bit. This requires O(n) steps per entry. How can this be improved with pre-processing?
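The bit-after-bit computation of a single entry can be sketched in Python (this snippet is not part of the original slides; the function name is illustrative):

```python
def naive_bool_entry(A_row, B_col):
    """One output bit of the boolean product: OR over ANDs of aligned bits, O(n) steps."""
    return int(any(a and b for a, b in zip(A_row, B_col)))

A_row = [1, 0, 1, 0]   # one row of A (n = 4)
B_col = [0, 0, 1, 1]   # one column of B
entry = naive_bool_entry(A_row, B_col)   # 1, since bit 3 matches in both
```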

4 The speedup of 4-Russians for matrix multiplication
A basic observation
Each row Ai and column Bj form a pair of 4-bit binary numbers. These binary numbers can be regarded as indices into a table of size 2^4 x 2^4 = 16 x 16 = 256. For each entry in the table, we pre-store the value for multiplying the pair of indices. The multiplication of Ai by Bj can then be computed in O(1) time by a single lookup.
Problem: a table of size 2^n x 2^n is not practical for large matrix multiplication.

5 The speedup of 4-Russians for matrix multiplication
The speedup
Instead of regarding a complete row/column as an index into the table, consider only part of it: split each row and column into blocks of k bits. Now we pre-compute multiplication values for pairs of binary vectors of length k in a table of size 2^k x 2^k.


7 The speedup of 4-Russians for matrix multiplication
The speedup
Let k = log2(n). Then all pairs of k-bit binary vectors can be represented in a table of size 2^k x 2^k = n x n.
Time required for multiplying Ai by Bj: O(n/log n) table lookups. Total time required: O(n^3/log n) instead of O(n^3).
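A concrete sketch of the table-lookup idea (a hypothetical Python implementation, not from the slides): for each group of k rows of B, it pre-computes the OR of every subset of those rows, indexed by k-bit patterns taken from the rows of A.

```python
import math

def four_russians_bmm(A, B):
    """Boolean product of n x n 0/1 matrices in O(n^3/log n) word operations."""
    n = len(A)
    k = max(1, int(math.log2(n)))          # block width
    result = [[0] * n for _ in range(n)]
    for g in range(0, n, k):               # process B's rows in groups of k
        rows = B[g:g + k]
        m = len(rows)                      # last group may be smaller
        # table[pat] = OR of the rows of this group selected by bit-pattern pat
        table = [[0] * n for _ in range(1 << m)]
        for pat in range(1, 1 << m):
            low = pat & (-pat)             # lowest set bit
            prev = table[pat ^ low]        # already computed (smaller pattern)
            row = rows[low.bit_length() - 1]
            table[pat] = [p | r for p, r in zip(prev, row)]
        # each row of A contributes its k-bit pattern for this group: one lookup
        for i in range(n):
            pat = 0
            for b in range(m):
                if A[i][g + b]:
                    pat |= 1 << b
            result[i] = [c | t for c, t in zip(result[i], table[pat])]
    return result
```

The per-row work is n/k lookups, each followed by an n-bit OR, which is where the n^3/log n total comes from.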

8 Overview
A divide & conquer approach for matrix multiplication: Strassen's method
The method of 4-Russians for matrix multiplication
Reduction between TC and BMM

9 Strassen's method for matrix multiplication
A divide and conquer approach
Can we do better than cubic with a straightforward divide and conquer approach? Divide each n x n matrix into four submatrices of size (n/2) x (n/2):
C11 = A11*B11 + A12*B21, C12 = A11*B12 + A12*B22, C21 = A21*B11 + A22*B21, C22 = A21*B12 + A22*B22
Computing all of C11, C12, C21, C22 requires 8 multiplications and 4 additions. Therefore, the total running time is T(n) = 8T(n/2) + O(n^2). Using the Master Theorem, this solves to O(n^3). Still cubic!

10 Strassen's method for matrix multiplication
Strassen's algorithm
Define seven matrices of size (n/2) x (n/2):
M1 = (A11 + A22)(B11 + B22)
M2 = (A21 + A22)B11
M3 = A11(B12 - B22)
M4 = A22(B21 - B11)
M5 = (A11 + A12)B22
M6 = (A21 - A11)(B11 + B12)
M7 = (A12 - A22)(B21 + B22)
The four (n/2) x (n/2) submatrices of the product can be defined in terms of M1,...,M7:
C11 = M1 + M4 - M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 - M2 + M3 + M6

11 Strassen's method for matrix multiplication
Strassen's algorithm
Running time? Each matrix Mi requires a constant number of additions and subtractions but only one multiplication: T(n) = 7T(n/2) + O(n^2), which solves to O(n^log2(7)) = O(n^2.81).
Volker Strassen, 1969: Strassen's method, the first sub-cubic time algorithm.
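The seven-product recursion can be sketched in Python for matrices whose dimension is a power of two (a minimal illustration, not from the slides):

```python
def strassen(A, B):
    """Strassen's multiplication for n x n matrices given as lists of lists, n a power of 2."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split into four (n/2) x (n/2) quadrants
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    # seven recursive multiplications instead of eight
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```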

12 Strassen's method for matrix multiplication
Improvements
O(n^2.796) V. Y. Pan, 1978: Strassen's algorithm is not optimal.
O(n^2.78) D. Bini et al., 1979: O(n^2.78) complexity for n x n approximate matrix multiplication.
O(n^2.522) A. Schönhage, 1981: Partial and total matrix multiplication.
O(n^2.496) Coppersmith and Winograd, 1982: On the asymptotic complexity of matrix multiplication. First to break the 2.5 barrier.
O(n^2.479) Volker Strassen, 1986.
O(n^2.376) Coppersmith and Winograd, 1987: Matrix multiplication via arithmetic progressions.
O(n^2.373) Virginia Vassilevska Williams, 2011: Breaking the Coppersmith-Winograd barrier.

13 Best choices for matrix multiplication
Using the exact formulas for time complexity, crossover points have been found for square matrices:
For n < 7, the naive algorithm for matrix multiplication is preferred. As an example, a 6x6 matrix requires 482 steps for the method of 4-Russians, but only 468 steps for the naive multiplication.
For 6 < n < 513, the method of 4-Russians is the most efficient.
For 512 < n, Strassen's approach costs the least number of steps.

14 Overview
A divide & conquer approach for matrix multiplication: Strassen's method
The method of 4-Russians for matrix multiplication
Reduction between TC and BMM

15 Realization of matrix multiplication in graphs
Let A, B be adjacency matrices of two graphs over the same set of vertices {1,2,...,n}. An (A,B)-path is a path of length two whose first edge belongs to A and whose second edge belongs to B. In the boolean product C = AB, C(i,j) = 1 if and only if there is an (A,B)-path from vertex i to vertex j. Therefore, C is the adjacency matrix with respect to (A,B)-paths.
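A tiny check of this correspondence (my own illustration, not from the slides), on three vertices where A contributes the edge 0 -> 1 and B contributes the edge 1 -> 2:

```python
def bool_mult(A, B):
    """Boolean matrix product of n x n 0/1 matrices."""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

A = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]   # edge 0 -> 1
B = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]   # edge 1 -> 2
C = bool_mult(A, B)
# The only (A,B)-path is 0 -> 1 -> 2, so C marks exactly entry (0, 2):
# C == [[0, 0, 1], [0, 0, 0], [0, 0, 0]]
```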

16 Transitive Closure by Matrix Multiplication
Definition and a cubic solution
Given a directed graph G=(V,E), the transitive closure of G is defined as the graph G*=(V,E*) where E* = {(i,j) : there is a path from vertex i to vertex j}. A dynamic programming algorithm has been devised for this problem: the Floyd-Warshall algorithm, which requires O(n^3) time. Could it be beaten?
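The cubic dynamic-programming solution (Warshall's variant of Floyd-Warshall) can be sketched as follows; this code is an illustration, not taken from the slides:

```python
def transitive_closure_fw(adj):
    """Warshall's O(n^3) transitive closure of a 0/1 adjacency matrix.
    reach[i][j] = 1 iff there is a path of length >= 1 from i to j."""
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):                 # allow k as an intermediate vertex
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = 1
    return reach
```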

17 Transitive Closure by Matrix Multiplication
Beating the cubic solution
By squaring the adjacency matrix, we get (A^2)(i,j) = 1 iff we can get from i to j in exactly two steps. How could we make (i,j) equal 1 iff there is a path from i to j in at most 2 steps? By storing 1's in all diagonal entries before squaring. What about (i,j) = 1 iff there is a path from i to j in at most 4 steps? Keep multiplying.

18 Transitive Closure by Matrix Multiplication
Beating the cubic solution
In total, the longest path had 4 vertices and 2 multiplications were required. How many multiplications are required for the general case? log2(n). The transitive closure can thus be obtained in O(n^2.37 log2(n)) time: multiply the matrix log2(n) times, where each multiplication requires O(n^2.37) steps using fast matrix multiplication. Better still: we can get rid of the log2(n) factor.
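The repeated-squaring idea above can be sketched in Python (an illustration, not from the slides; `bool_mult` here is a naive stand-in for a fast boolean multiplication routine):

```python
import math

def bool_mult(X, Y):
    n = len(X)
    return [[int(any(X[i][k] and Y[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def tc_by_squaring(adj):
    """Reflexive-transitive closure via ceil(log2(n)) boolean squarings."""
    n = len(adj)
    # store 1's on the diagonal so M(i,j) = 1 iff a path of length <= 1 exists
    M = [[1 if i == j else adj[i][j] for j in range(n)] for i in range(n)]
    # after t squarings, M(i,j) = 1 iff a path of length <= 2^t exists
    for _ in range(max(1, math.ceil(math.log2(n)))):
        M = bool_mult(M, M)
    return M
```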

19 Transitive Closure by Matrix Multiplication
Better still: getting rid of the log(n) factor
The log(n) factor can be dropped by applying the following steps:
Determine the strongly connected components of the graph: O(n^2).
Collapse each component to a single vertex. The problem is now reduced to the same problem on the new, acyclic graph.

20 Transitive Closure by Matrix Multiplication
Better still: getting rid of the log(n) factor
Generate a topological sort for the new graph. Divide the graph into two sections: A (the first half of the vertices) and B (the second half). The adjacency matrix of the sorted graph is upper triangular:
[ A  C ]
[ 0  B ]
where C holds the edges from A to B.

21 Transitive Closure by Matrix Multiplication
Better still: getting rid of the log(n) factor
Connections within A are independent of B. Similarly, connections within B are independent of A. To find the transitive closure of G, notice how connections from A to B are found: a path from i to u within A gives A*(i,u) = 1.

22 Transitive Closure by Matrix Multiplication
Better still: getting rid of the log(n) factor
Connections within A are independent of B. Similarly, connections within B are independent of A. To find the transitive closure of G, notice how connections from A to B are found: a path from i to u within A, followed by an edge (u,v) in C, gives A*C(i,v) = 1.

23 Transitive Closure by Matrix Multiplication
Better still: getting rid of the log(n) factor
Connections within A are independent of B. Similarly, connections within B are independent of A. To find the transitive closure of G, notice how connections from A to B are found: a path from i to u within A, followed by an edge (u,v) in C, followed by a path from v to j within B, gives A*CB*(i,j) = 1.

24 Transitive Closure by Matrix Multiplication
Better still: getting rid of the log(n) factor
Connections within A are independent of B. Similarly, connections within B are independent of A. To find the transitive closure of G, notice that:
G* = [ A*  A*CB* ]
     [ 0   B*    ]
Hence, G* can be found by determining A*, B*, and computing A*CB*. This requires finding the transitive closure of two (n/2) x (n/2) matrices and performing two matrix multiplications: O(n^2.37). Running time? T(n) = 2T(n/2) + O(n^2.37), which solves to O(n^2.37).
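This divide-and-conquer recursion can be sketched as follows, assuming the adjacency matrix is already in topologically sorted (strictly upper triangular) form; the code and names are my own illustration, with a naive boolean product standing in for a fast one:

```python
def bool_mult(X, Y):
    """Boolean product of a len(X) x len(Y) matrix with a len(Y) x len(Y[0]) matrix."""
    inner, cols = len(Y), len(Y[0])
    return [[int(any(X[i][k] and Y[k][j] for k in range(inner)))
             for j in range(cols)] for i in range(len(X))]

def tc_dag(M):
    """Reflexive-transitive closure of a strictly upper-triangular adjacency matrix."""
    n = len(M)
    if n == 1:
        return [[1]]
    h = n // 2
    A = [row[:h] for row in M[:h]]   # edges within the first half
    C = [row[h:] for row in M[:h]]   # edges from the first half to the second
    B = [row[h:] for row in M[h:]]   # edges within the second half
    As = tc_dag(A)                   # two recursive closures...
    Bs = tc_dag(B)
    ACB = bool_mult(bool_mult(As, C), Bs)   # ...and two multiplications: A*CB*
    top = [ra + rc for ra, rc in zip(As, ACB)]
    bot = [[0] * h + rb for rb in Bs]
    return top + bot
```

Since every path from the first half to the second crosses a C edge exactly once, the A*CB* block accounts for all cross connections.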

25 Matrix Multiplication by Transitive Closure
Let A, B be two n x n boolean matrices. To compute C = AB, form the adjacency matrix of a graph over 3n vertices, with edges from the 1st part to the 2nd given by A and edges from the 2nd part to the 3rd given by B:
D = [ 0  A  0 ]
    [ 0  0  B ]
    [ 0  0  0 ]
The transitive closure of this graph is formed by adding exactly the edges from the 1st part to the 3rd, and these edges are described by the product of the matrices A and B:
D* = [ 0  A  AB ]
     [ 0  0  B  ]
     [ 0  0  0  ]
Therefore, boolean matrix multiplication is no harder than transitive closure.
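The reduction can be demonstrated directly (my own sketch, not from the slides; Warshall's algorithm stands in for any transitive-closure routine):

```python
def bmm_via_tc(A, B):
    """Boolean product AB read off the transitive closure of a 3-layer graph."""
    n = len(A)
    N = 3 * n
    # build the layered graph: layer 1 -> layer 2 via A, layer 2 -> layer 3 via B
    D = [[0] * N for _ in range(N)]
    for i in range(n):
        for j in range(n):
            D[i][n + j] = A[i][j]
            D[n + i][2 * n + j] = B[i][j]
    # any transitive-closure routine works here; Warshall's is the simplest stand-in
    for k in range(N):
        for i in range(N):
            if D[i][k]:
                for j in range(N):
                    if D[k][j]:
                        D[i][j] = 1
    # the product AB appears as the layer-1 -> layer-3 block of the closure
    return [[D[i][2 * n + j] for j in range(n)] for i in range(n)]
```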

26 Thanks