All Pairs Shortest Path Algorithms
Aditya Sehgal, Amlan Bhattacharya

Floyd-Warshall Algorithm
- Computes all-pairs shortest paths on a graph
- Negative edge weights may be present
- Negative-weight cycles are not allowed
- Uses dynamic programming, so the solution is built bottom-up

Problem description
- Let G = (V, E, w) be a weighted graph with vertices v_1, v_2, ..., v_n.
- Find the shortest path between each pair of vertices v_i, v_j, where 1 <= i, j <= n, and compute the all-pairs shortest-distance matrix D.

Principle on which it is based
- For two vertices v_i, v_j in V, out of all paths between them, consider the ones whose intermediate vertices all lie in {v_1, v_2, ..., v_k}.
- Let p^(k)(i,j) be the minimum-cost path among them, its cost being d^(k)(i,j).
- If v_k is not on that shortest path, then p^(k)(i,j) = p^(k-1)(i,j).

Principles (cont.)
- If v_k is on p^(k)(i,j), then we can break p^(k)(i,j) into two paths: one from v_i to v_k and one from v_k to v_j.
- Each of them uses only intermediate vertices from the set {v_1, v_2, ..., v_(k-1)}.

Recurrence equation

    d^(k)(i,j) = w(v_i, v_j)                                         if k = 0
    d^(k)(i,j) = min{ d^(k-1)(i,j), d^(k-1)(i,k) + d^(k-1)(k,j) }    if k > 0

The output of the algorithm is the matrix D^(n) = (d^(n)(i,j)).
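For instance, on the example graph introduced below, allowing v_1 as an intermediate vertex gives d^(1)(4,5) = min{ d^(0)(4,5), d^(0)(4,1) + d^(0)(1,5) } = min{ ∞, 2 + (-4) } = -2.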

Algorithm

    procedure FLOYD_ALL_PAIRS(A)
    begin
        D^(0) := A;
        for k := 1 to n do
            for i := 1 to n do
                for j := 1 to n do
                    d^(k)(i,j) := min{ d^(k-1)(i,j), d^(k-1)(i,k) + d^(k-1)(k,j) };
    end FLOYD_ALL_PAIRS
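For reference, here is a minimal runnable rendering of the pseudocode in Python (the function name and the use of float('inf') for "no edge" are our choices, not the slides'). It updates a single matrix in place, which is safe because the kth row and kth column do not change during iteration k:

    def floyd_all_pairs(A):
        """A: n x n weight matrix with A[i][i] == 0 and float('inf') where
        there is no edge. Returns the all-pairs shortest-distance matrix."""
        n = len(A)
        D = [row[:] for row in A]          # D^(0) = A
        for k in range(n):                 # allow v_(k+1) as an intermediate vertex
            for i in range(n):
                for j in range(n):
                    # d^(k)(i,j) = min{ d^(k-1)(i,j), d^(k-1)(i,k) + d^(k-1)(k,j) }
                    if D[i][k] + D[k][j] < D[i][j]:
                        D[i][j] = D[i][k] + D[k][j]
        return D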

Complexity of the sequential algorithm
- Running time: O(n^3)
- Space complexity: O(n^2)

An example graph
[Figure: a weighted directed graph on five vertices v_1, ..., v_5; its weight matrix D^(0) is given on the next slide, with ∞ marking vertex pairs with no edge.]

How it works

    D^(0) =   0   3   8   ∞  -4
              ∞   0   ∞   1   7
              ∞   4   0   ∞   ∞
              2   ∞  -5   0   ∞
              ∞   ∞   ∞   6   0

How it works

    D^(1) =   0   3   8   ∞  -4
              ∞   0   ∞   1   7
              ∞   4   0   ∞   ∞
              2   5  -5   0  -2
              ∞   ∞   ∞   6   0

How it works

    D^(2) =   0   3   8   4  -4
              ∞   0   ∞   1   7
              ∞   4   0   5  11
              2   5  -5   0  -2
              ∞   ∞   ∞   6   0

How it works

    D^(3) =   0   3   8   4  -4
              ∞   0   ∞   1   7
              ∞   4   0   5  11
              2  -1  -5   0  -2
              ∞   ∞   ∞   6   0

How it works

    D^(4) =   0   3  -1   4  -4
              3   0  -4   1  -1
              7   4   0   5   3
              2  -1  -5   0  -2
              8   5   1   6   0

How it works

    D^(5) =   0   1  -3   2  -4
              3   0  -4   1  -1
              7   4   0   5   3
              2  -1  -5   0  -2
              8   5   1   6   0
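Running the sequential sketch from earlier on this example reproduces D^(5) (the matrix literal below simply restates D^(0) from the slides, with float('inf') standing in for ∞):

    INF = float('inf')                 # stands in for the ∞ entries above
    D0 = [[0,   3,   8,   INF, -4],
          [INF, 0,   INF, 1,    7],
          [INF, 4,   0,   INF, INF],
          [2,   INF, -5,  0,   INF],
          [INF, INF, INF, 6,    0]]
    for row in floyd_all_pairs(D0):
        print(row)                     # prints the rows of D^(5) above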

Parallel formulation
- If p is the number of processes available, the matrix D^(k) is partitioned into p parts, one per process.
- For its computation, each process needs some elements of the kth row and kth column of the matrix D^(k-1).
- Use a 2-D or a 1-D block mapping.

2-D block mapping
[Figure: the n x n matrix partitioned into blocks on a √p x √p grid of processes.]

2-D block mapping (cont.)
- Each block is of size (n/√p) x (n/√p).
- In the kth iteration, process P(i,j) needs some part of the kth row and the kth column of the D^(k-1) matrix.
- So at this point the √p processes containing a part of the kth row each broadcast it to the processes in their own column.
- Similarly, the √p processes containing a part of the kth column each broadcast it to the processes in their own row.
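The index arithmetic behind this mapping is easy to make concrete (a small sketch; the helper name is ours, and it assumes p is a perfect square with √p dividing n):

    import math

    def owning_process(i, j, n, p):
        """Grid coordinates (r, c) of the process that holds element (i, j)
        under a 2-D block mapping of an n x n matrix onto p processes."""
        q = math.isqrt(p)              # the process grid is q x q, q = √p
        b = n // q                     # block side length n/√p
        return (i // b, j // b)

    # For n = 8 and p = 16 (q = 4, b = 2): row k = 5 lives on grid row
    # 5 // 2 = 2, i.e., on P(2,0) .. P(2,3); each of these four processes
    # broadcasts its 2 elements of that row down its own process column.
    print([owning_process(5, j, 8, 16) for j in range(8)])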

Communication patterns
[Figure: in iteration k, the kth row travels down the process columns and the kth column travels across the process rows.]

Algorithm
[Figure: pseudocode for Floyd's all-pairs algorithm under the 2-D block mapping; a code sketch follows below.]
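Since the slide's pseudocode figure did not survive the transcript, here is a hedged sketch of how the iteration structure could be realized with mpi4py and NumPy (both assumed available; all names are illustrative, blocks are assumed float64, and p must be a perfect square with √p dividing n):

    import math
    import numpy as np
    from mpi4py import MPI

    def parallel_floyd_2d(local_block, n, comm):
        """local_block: this rank's (n/√p) x (n/√p) block of D^(0);
        on return it holds the corresponding block of D^(n)."""
        p = comm.Get_size()
        q = math.isqrt(p)                      # q x q process grid, q = √p
        b = n // q                             # block side length
        row, col = divmod(comm.Get_rank(), q)  # this rank's grid coordinates

        # Sub-communicators spanning this rank's process row and process column.
        row_comm = comm.Split(color=row, key=col)
        col_comm = comm.Split(color=col, key=row)

        for k in range(n):
            owner = k // b                     # grid row/column holding row/column k

            # The √p processes in grid row `owner` broadcast their segment
            # of row k down their process columns.
            k_row = local_block[k % b, :].copy() if row == owner else np.empty(b)
            col_comm.Bcast(k_row, root=owner)

            # The √p processes in grid column `owner` broadcast their segment
            # of column k across their process rows.
            k_col = local_block[:, k % b].copy() if col == owner else np.empty(b)
            row_comm.Bcast(k_col, root=owner)

            # d^(k)(i,j) = min{ d^(k-1)(i,j), d^(k-1)(i,k) + d^(k-1)(k,j) }
            np.minimum(local_block, k_col[:, None] + k_row[None, :], out=local_block)

        row_comm.Free()
        col_comm.Free()

Launched with, say, mpiexec -n 16 after D^(0) has been scattered in blocks, the two Bcast calls per iteration are exactly the row and column broadcasts described on the previous slides.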

Analysis (computation)
- Each process is assigned (n/√p) x (n/√p), i.e., n^2/p, entries.
- In each iteration, the computation time is O(n^2/p).
- Over n iterations, the total computation time is O(n^3/p).

Analysis (communication)
- In each iteration, each participating process broadcasts its n/√p elements of the kth row or column, after which the processes synchronize.
- A broadcast of n/√p elements takes time O((n log p)/√p).
- Over n iterations, the total communication time is O((n^2 log p)/√p).

Total parallel run time
- T_P = O(n^3/p) + O((n^2 log p)/√p)
- S = O(n^3) / T_P
- E = 1 / (1 + O((√p log p)/n))
- The number of processes that can be used efficiently is O(n^2 / log^2 n).
- Isoefficiency function: O(p^1.5 log^3 p)

How to improve on this
- As written, every process waits for the kth row and column elements until all processes have completed the (k-1)th iteration.
- Instead, a process can start iteration k as soon as it has finished iteration k-1 and received the relevant parts of the D^(k-1) matrix.
- This is known as the pipelined 2-D block mapping.

How it is carried out
- A process P(i,j) that holds elements of the kth row after the (k-1)th iteration sends its part of the D^(k-1) matrix to processes P(i-1,j) and P(i+1,j).
- Similarly, a process P(i,j) that holds elements of the kth column sends its part to processes P(i,j-1) and P(i,j+1).
- Each receiving process stores these values and forwards them onward in the same direction, so they reach all the processes in the same row or column; the forwarding stops when a mesh boundary is reached.

Pipelined 2-D block mapping
[Figure: the segments of row k and column k propagating step by step to neighboring processes in the mesh.]

Total parallel run time (pipelined)
- T_P = O(n^3/p) + O(n)
- S = O(n^3) / (O(n^3/p) + O(n))
- E = 1 / (1 + O(p/n^2))
- The number of processes that can be used efficiently is O(n^2).
- Isoefficiency function: O(p^1.5)

Comparison

    Algorithm                     Max processes for E = O(1)   Parallel run time   Isoefficiency function
    Dijkstra, source-partitioned  O(n)                         O(n^2)              O(p^3)
    Dijkstra, source-parallel     O(n^2 / log n)               O(n log n)          O(p^1.5 log^1.5 p)
    Floyd, 1-D block              O(n / log n)                 O(n^2 log n)        O(p^3 log^3 p)
    Floyd, 2-D block              O(n^2 / log^2 n)             O(n log^2 n)        O(p^1.5 log^3 p)
    Floyd, pipelined 2-D block    O(n^2)                       O(n)                O(p^1.5)