
Notes
This set of slides (Handout #7) is missing the material on Reductions and Greed that was presented in class. Those slides are still under construction and will be posted as Handout #6. The quiz on Thursday of 8th week will cover Greed and Dynamic Programming, and material through HW #4.

Topological Sorting
Prelude to shortest paths: the generic scheduling problem
Input:
–Set of tasks {T1, T2, T3, …, Tn}
  Example: getting dressed in the morning: put on shoes, socks, shirt, pants, belt, …
–Set of dependencies {T1 → T2, T3 → T4, T5 → T1, …}
  Example: must put on socks before shoes, pants before belt, …
Want:
–An ordering of the tasks that is consistent with the dependencies
Problem representation: a Directed Acyclic Graph (DAG)
–Vertices = tasks; directed edges = dependencies
–Acyclic: if ∃ a cycle of dependencies, no solution is possible

Topological Sorting
TOP_SORT problem: Given a DAG G = (V,E) with |V| = n, assign labels 1, …, n to the vertices v_i ∈ V s.t. if v has label k, all vertices reachable from v have labels > k
"Induction idea":
–Assume we know how to label DAGs with < n vertices
Claim: A DAG G always has some vertex with indegree = 0
–Take an arbitrary vertex v. If v does not have indegree = 0, traverse any incoming edge to reach a predecessor of v. If that vertex does not have indegree = 0, traverse any incoming edge to reach its predecessor, etc.
–Eventually this process either identifies a vertex with indegree = 0, or reaches a vertex that has been reached previously (a contradiction, given that G is acyclic)
"Inductive approach": Find v with indegree(v) = 0, give it the lowest available label, then delete v (and its incident edges), update the degrees of the remaining vertices, and repeat
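The inductive approach above (repeatedly remove an indegree-0 vertex) can be sketched in Python. The function name and the edge-list encoding are illustrative choices, not from the slides:

```python
from collections import deque

def topological_sort(n, edges):
    """Order the vertices 0..n-1 of a DAG so that every edge goes
    from an earlier vertex to a later one, by repeatedly removing
    an indegree-0 vertex (Kahn's algorithm)."""
    succs = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:              # edge u -> v: u must precede v
        succs[u].append(v)
        indeg[v] += 1
    ready = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in succs[u]:          # "delete" u: decrement its successors
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    if len(order) < n:              # a cycle blocks the remaining vertices
        raise ValueError("graph has a cycle; no topological order exists")
    return order

# socks(0) -> shoes(1), pants(2) -> belt(3)
print(topological_sort(4, [(0, 1), (2, 3)]))
```

If the queue empties before all vertices are labeled, some vertices still have positive indegree, which is exactly the cycle case the slide rules out.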

Dynamic Programming Key Phrases
As we discuss shortest paths, and then the dynamic programming approach, keep in mind the following "key phrases":
–"Principle of Optimality": any subsolution of an optimal solution is itself an optimal solution
–"Overlapping Subproblems": we can exploit a polynomially bounded number of possible subproblems
–"Bottom-Up D/Q" / "Table (cache) of Subproblem Solutions": avoid recomputation by tabulating subproblem solutions
–"Relaxation / Successive Approximation": often, the DP approach makes multiple passes, each time solving a less restricted version of the original problem instance; eventually it solves the completely unrestricted = original problem instance

Single-Source Shortest Paths
Given a directed graph G = (V,E), a length function L : E → ℝ+, and a distinguished source s ∈ V, find the shortest path in G from s to each v_i ∈ V, v_i ≠ s.
Let L*(i,j) = length of the shortest path from v_i to v_j
Lemma: Suppose S ⊆ V contains s = v_1, and L*(1,w) is known ∀ w ∈ S. For v_i ∉ S:
–Let D(v_i) = min_{w ∈ S} [ L*(1,w) + L(w,i) ]   (*)
  L*(1,w) = path length, with the path restricted to vertices of S
  L(w,i) = edge length
–Let v minimize D(v_i) over all vertices v_i ∉ S   (**)
Then L*(1,v) = D(v).
Notation: D(v) = length of the 1-to-v shortest path that uses only vertices of S (except for v). (D(v) is not necessarily the same as L*(1,v), because the path is restricted!)

Single-Source Shortest Paths
Proof of Lemma: To prove equality, prove ≤ and ≥.
L*(1,v) ≤ D(v) is obvious, because D(v) is the minimum length of a restricted path, while L*(1,v) is unrestricted.
Let s = v_1, v_2, …, v_r = v be a shortest path from the source v_1 to v, and let v_j be the first vertex on this path that is not in S.
Then: L*(1,v) = L*(1, v_{j-1}) + L(v_{j-1}, v_j) + L*(v_j, v)   // else, not a shortest path
  ≥ D(v_j) + L*(v_j, v)   // D(v_j) minimizes over all w in S, including v_{j-1}
  ≥ D(v) + L*(v_j, v)   // because both v_j and v are ∉ S, but v was chosen to minimize D
  ≥ D(v)   // since L*(v_j, v) ≥ 0
Lemma ⇒ Dijkstra's Algorithm

A Fact About Shortest Paths
Triangle Inequality: δ(u,v) ≤ δ(u,x) + δ(x,v)   (shortest-path distances induce a metric)

Shortest Path Formulations
Given a graph G = (V,E) and w : E → ℝ
–(1 to 2) "s-t": find a shortest path from s to t
–(1 to all) "single-source": find a shortest path from a source s to every other vertex v ∈ V
–(All to all) "all-pairs": find a shortest path from every vertex to every other vertex
Weight of a path = Σ w(v[i], v[i+1])
Sometimes: no negative edges
–Examples of "negative edges": travel inducements, exothermic reactions in chemistry, unprofitable transactions in arbitrage, …
Always: no negative cycles
–This makes the shortest-path problem well-defined

Shortest Paths
First case: all edges have positive length
–Length of the (v_i, v_j) edge = d_ij
Condition 1: d_ij > 0
Condition 2: d_ij + d_jk < d_ik for some i, j, k
–else the shortest-path problem would be trivial (every direct edge would already be a shortest path)
Observation 1: Length of a path > length of any of its subpaths
Observation 2: Any subpath of a shortest path is itself a shortest path ("Principle of Optimality")
Observation 3: Any shortest path contains ≤ n-1 edges
–pigeonhole principle; assumes no negative cycles; n nodes total

Shortest Paths
Scenario: All shortest paths from the source v_0 to the other nodes are ordered by increasing length:
–|P_1| ≤ |P_2| ≤ … ≤ |P_{n-1}|
–Index the nodes accordingly
Algorithm: Find P_1, then find P_2, etc.
Q: How many edges are there in P_1?
–Exactly 1 edge, else we could find a subpath that is shorter
Q: How many edges are there in P_k?
–At most k edges, else we could find k (shorter) subpaths, which would contradict the definition of P_k
Observation 4: P_k contains ≤ k edges
To find P_1: only look at one-edge paths (the minimum is P_1)
To find P_2: only look at one- and two-edge paths
–But we need only consider two-edge paths of the form d_01 + d_1i
–Else there would be ≥ 1 paths shorter than P_2, a contradiction

Another Presentation of Dijkstra's Algorithm
Terminology:
–Permanent label: true shortest-path distance from v_0 to v_i
–Temporary label: restricted shortest-path distance from v_0 to v_i (going through only existing permanently-labeled nodes)
–Permanently labeled nodes = the set S in the previous development
Dijkstra's Algorithm:
0. All vertices v_i, i = 1, …, n-1, receive temporary labels l_i with value d_0i
LOOP:
1. Among all temporary labels, pick l_k = min_i l_i and change l_k to l_k* (i.e., make v_k's label permanent) // stop if no temporary labels are left
2. Replace the temporary labels of v_k's neighbors, using l_i ← min(l_i, l_k* + d_ki)
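A minimal sketch of this label-setting formulation in Python, using a heap as the pool of temporary labels. The function name, graph encoding, and the small example graph are illustrative, not from the slides:

```python
import heapq

def dijkstra(adj, s):
    """Shortest-path distances from s, where adj[u] is a list of
    (v, w) pairs with w >= 0.  A vertex's label becomes permanent
    the moment it is extracted from the priority queue."""
    dist = {s: 0}
    done = set()                      # permanently labeled vertices
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)      # smallest temporary label
        if u in done:
            continue                  # stale queue entry
        done.add(u)                   # make u's label permanent
        for v, w in adj[u]:           # relax u's neighbors
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {"s": [("a", 8), ("b", 3), ("d", 2)],
       "a": [], "b": [("a", 4)], "d": [("c", 2)],
       "c": [("a", 1)]}
print(dijkstra(adj, "s"))  # {'s': 0, 'a': 5, 'b': 3, 'd': 2, 'c': 4}
```

Pushing a new entry instead of decreasing a key is the standard Python workaround, since `heapq` has no Decrease-Key; stale entries are skipped at extraction.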

Prim's Algorithm vs. Dijkstra's Algorithm
Prim: iteratively add edge e_ij to T, such that v_i ∈ T, v_j ∉ T, and d_ij is minimum
Dijkstra: iteratively add edge e_ij to T, such that v_i ∈ T, v_j ∉ T, and l_i + d_ij is minimum
Both are building trees, in very similar ways!
–Prim: Minimum Spanning Tree
–Dijkstra: Shortest Path Tree
What kind of tree does the following algorithm build?
–Prim-Dijkstra: iteratively add edge e_ij to T, such that v_i ∈ T, v_j ∉ T, and c · l_i + d_ij is minimum // 0 ≤ c ≤ 1

Bellman-Ford Algorithm
Idea: Successive Approximation / Relaxation
–Find shortest paths using ≤ 1 edges
–Find shortest paths using ≤ 2 edges
–…
–Find shortest paths using ≤ n-1 edges ⇒ we have the true shortest paths
Let l_j(k) denote the shortest v_0-to-v_j path length using ≤ k edges
Then l_j(1) = d_0j ∀ j = 1, …, n-1 // d_ij = ∞ if there is no i-j edge
In general, l_j(k+1) = min { l_j(k), min_i (l_i(k) + d_ij) }
–l_j(k): don't need k+1 arcs
–min_i (l_i(k) + d_ij): view as a length-k shortest path plus a single edge
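The successive-approximation recurrence translates directly to code. This sketch (illustrative names, edge-list input) also adds the standard extra pass that detects negative cycles:

```python
def bellman_ford(n, edges, s):
    """Distances from s in a graph given as (u, v, w) triples.
    After pass k, dist[j] is the shortest s-to-j path length
    using at most k edges; n-1 passes therefore suffice."""
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:    # relax edge (u, v)
                dist[v] = dist[u] + w
                changed = True
        if not changed:                  # early exit: labels are stable
            break
    for u, v, w in edges:                # one more pass finds neg. cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist

# 0=S, 1=A, 2=B, 3=C, 4=D
edges = [(0, 1, 8), (0, 2, 3), (0, 4, 2), (2, 1, 4), (3, 1, 1), (4, 3, 2)]
print(bellman_ford(5, edges, 0))  # [0, 5, 3, 4, 2]
```

The early exit does not change the worst case, but stops as soon as a full pass makes no update, which is exactly the "less restricted version no longer helps" condition.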

Bellman-Ford vs. Dijkstra
Example graph (from the figure): source S and vertices A, B, C, D, with edge lengths S–A = 8, S–B = 3, S–D = 2, A–B = 4, A–C = 1, C–D = 2 (∞ where no edge exists).
Bellman-Ford labels, pass by pass:
Label | Initial | Pass 2                    | Pass 3                    | Pass 4
A     | 8       | min(8, 3+4, ∞+1, 2+∞) = 7 | min(7, 3+4, 4+1, 2+∞) = 5 | min(5, 3+4, 4+1, 2+∞) = 5
B     | 3       | min(3, 8+4, ∞+∞, 2+∞) = 3 | min(3, 7+4, 4+∞, 2+∞) = 3 | min(3, 5+4, 4+∞, 2+∞) = 3
C     | ∞       | min(∞, 8+1, 3+∞, 2+2) = 4 | min(4, 7+1, 3+∞, 2+2) = 4 | min(4, 5+1, 3+∞, 2+2) = 4
D     | 2       | min(2, 8+∞, 3+∞, ∞+2) = 2 | min(2, 7+∞, 3+∞, 4+2) = 2 | min(2, 5+∞, 3+∞, 4+2) = 2

Bellman-Ford vs. Dijkstra
Dijkstra on the same graph (S–A = 8, S–B = 3, S–D = 2, A–B = 4, A–C = 1, C–D = 2); a * marks a label the moment it becomes permanent:
Label | Initial | Round 1            | Round 2           | Round 3
A     | 8       | min([8], 2+∞) = 8  | min([8], 3+4) = 7 | min([7], 4+1) = 5*
B     | 3       | min([3], 2+∞) = 3* |                   |
C     | ∞       | min([∞], 2+2) = 4  | min(4, 3+∞) = 4*  |
D     | 2*      |                    |                   |

Special Case: DAGs (24.2)
Longest-Path Problem: well-defined only when there are no cycles
DAG: topologically sort the vertices ⇒ labels v_1, …, v_n s.t. all edges are directed from v_i to v_j with i < j
Let l_j denote the longest v_0-to-v_j path length
–l_0 = 0
–l_1 = d_01 // d_ij = -∞ if there is no i-j edge
–l_2 = max(d_01 + d_12, d_02)
–In general, l_k = max_{j<k} (l_j + d_jk)
Shortest path length in a DAG: replace max by min, and use d_ij = +∞ if there is no i-j edge
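Given a topological order, one left-to-right sweep evaluates l_k = opt_{j<k}(l_j + d_jk). A sketch handling both the min and max variants (names and the adjacency encoding are illustrative):

```python
def dag_paths(order, adj, s, longest=False):
    """Single-source path lengths in a DAG whose vertices are
    listed in topological order; adj[u] = list of (v, w) pairs.
    longest=True computes longest paths (the PERT critical path)."""
    INF = float("inf")
    best = {v: (-INF if longest else INF) for v in order}
    best[s] = 0
    better = max if longest else min
    for u in order:                 # process in topological order
        if best[u] in (INF, -INF):
            continue                # u is not reachable from s
        for v, w in adj[u]:         # every edge into v is seen before v
            best[v] = better(best[v], best[u] + w)
    return best

adj = {0: [(1, 1), (2, 4)], 1: [(2, 2), (3, 6)], 2: [(3, 3)], 3: []}
print(dag_paths([0, 1, 2, 3], adj, 0)[3])                # shortest: 6
print(dag_paths([0, 1, 2, 3], adj, 0, longest=True)[3])  # longest: 7
```

Because every edge out of u is relaxed only after all edges into u have been, each edge is touched once: O(V+E) after the sort.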

DAG Shortest Paths Complexity (24.2)
Bellman-Ford = O(VE)
Topological sort: O(V+E) (DFS)
We never relax the edges out of a vertex v until we have relaxed all edges into v
–Runtime O(V+E)
Application: PERT (program evaluation and review technique) – the critical path is the longest path through the DAG

All-Pairs Shortest Paths
Directed graph G = (V,E), weight function w : E → ℝ
Goal: create an n × n matrix of shortest-path distances δ(u,v)
Running Bellman-Ford once from each vertex: O(V · VE) = O(V^2 E), which is O(V^4) on dense graphs
Adjacency-matrix representation of the graph:
–n × n matrix W = (w_ij) of edge weights
–assume w_ii = 0 ∀ i (a shortest path to self has no edges, as long as there are no negative cycles)

Simple APSP Dynamic Programming (25.1)
d_ij(m) = weight of the shortest path from i to j with ≤ m edges
d_ij(0) = 0 if i = j, and d_ij(0) = ∞ if i ≠ j
d_ij(m) = min_k { d_ik(m-1) + w_kj }
Runtime = O(n^4): n-1 passes, each computing n^2 d values in O(n) time each

Matrix Multiplication (25.1)
Similar to C = A · B for two n × n matrices: c_ij = Σ_k a_ik · b_kj, O(n^3) operations
Replacing "+" → "min" and "·" → "+":
–gives c_ij = min_k { a_ik + b_kj }
–D(m) = D(m-1) "·" W
–the identity matrix is D(0)
–cannot use Strassen's algorithm, because there is no subtraction
Time is still O(n · n^3) = O(n^4)
Repeated squaring: W^(2n) = W^n "·" W^n (addition chains)
Compute W, W^2, W^4, …, W^(2^k), k = ⌈log n⌉ ⇒ O(n^3 log n)
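A sketch of the (min, +) "product" and the repeated-squaring loop; the names are illustrative, and ∞ encodes a missing edge:

```python
def min_plus(A, B):
    """(min, +) matrix 'product': C[i][j] = min_k (A[i][k] + B[k][j])."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def apsp_squaring(W):
    """All-pairs shortest paths by repeated squaring of the weight
    matrix under (min, +): O(log n) products, O(n^3 log n) total."""
    n = len(W)
    D, m = W, 1
    while m < n - 1:        # D currently covers paths of <= m edges
        D = min_plus(D, D)  # squaring doubles the edge budget
        m *= 2
    return D

INF = float("inf")
W = [[0, 3, INF],
     [INF, 0, 4],
     [INF, INF, 0]]
print(apsp_squaring(W))  # [[0, 3, 7], [inf, 0, 4], [inf, inf, 0]]
```

Overshooting past n-1 edges is harmless: with w_ii = 0 and no negative cycles, longer "paths" can only repeat vertices and never improve the minimum.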

Floyd-Warshall Algorithm (26.2/25.2)
Also DP, but even faster (by another log n factor ⇒ O(n^3))
c_ij(m) = weight of the shortest path from i to j with intermediate vertices in the set {1, 2, …, m}
⇒ δ(i,j) = c_ij(n)
DP: compute c_ij(m) in terms of the smaller c_ij(m-1)
–c_ij(0) = w_ij
–c_ij(m) = min { c_ij(m-1), c_im(m-1) + c_mj(m-1) }

Floyd-Warshall Algorithm (26.2/25.2)
Difference from the previous DP: we do not check all possible intermediate vertices
for m = 1..n do
  for i = 1..n do
    for j = 1..n do
      c_ij(m) = min { c_ij(m-1), c_im(m-1) + c_mj(m-1) }
Runtime O(n^3)
Transitive Closure G* of a graph G:
–(i,j) ∈ G* iff ∃ a path from i to j in G
–Adjacency matrix with elements in {0,1}
–Floyd-Warshall with "min" → "OR" and "+" → "AND"
–Runtime O(n^3)
–Useful in many problems
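The triple loop and its transitive-closure variant can be sketched in Python (0-indexed, ∞ for a missing edge; names are illustrative):

```python
def floyd_warshall(W):
    """O(n^3) APSP.  After iteration m of the outer loop, d[i][j] is
    the shortest i-to-j path using intermediates from {0, ..., m}."""
    n = len(W)
    d = [row[:] for row in W]          # don't clobber the input
    for m in range(n):                 # allow vertex m as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][m] + d[m][j] < d[i][j]:
                    d[i][j] = d[i][m] + d[m][j]
    return d

def transitive_closure(reach):
    """Same triple loop with min -> OR and + -> AND."""
    n = len(reach)
    r = [row[:] for row in reach]
    for m in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][m] and r[m][j])
    return r
```

Only one n × n matrix is kept: overwriting d in place is safe because d[i][m] and d[m][j] are unchanged by iteration m itself (d[m][m] = 0).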

BEGIN DIGRESSION (alternative presentation, can be skipped)

CLRS Notation: Bellman-Ford (24.1)
SSSP in general graphs; essentially a BFS-based algorithm
Shortest paths (tree) are easy to reconstruct
for each v ∈ V do d[v] ← ∞; d[s] ← 0 // initialization
for i = 1, …, |V|-1 do
  for each edge (u,v) ∈ E do
    d[v] ← min{ d[v], d[u] + w(u,v) } // relaxation
for each edge (u,v) ∈ E do
  if d[v] > d[u] + w(u,v) then no solution // negative-cycle checking
For each v ∈ V, d[v] = δ(s,v) // we have the true shortest paths

Bellman-Ford Analysis (CLRS 24.1)
Runtime = O(VE)
Correctness:
–Lemma: d[v] ≥ δ(s,v)
  Initially true
  Consider the first violation, d[v] < δ(s,v), where d[v] was set to d[u] + w(u,v). By the triangle inequality, δ(s,v) ≤ δ(s,u) + w(u,v) ≤ d[u] + w(u,v) = d[v], a contradiction
–After |V|-1 passes, all d values equal the δ's if there are no negative cycles
  Consider some shortest path s → v[1] → v[2] → … → v
  After the i-th iteration, d[v[i]] is correct and final

Dijkstra Yet Again (CLRS 24.3)
Faster than Bellman-Ford, because it can exploit having only non-negative weights
Like BFS, but uses a priority queue
for each v ∈ V do d[v] ← ∞; d[s] ← 0
S ← ∅; Q ← V
while Q ≠ ∅ do
  u ← Extract-Min(Q)
  S ← S + u
  for each v adjacent to u do
    d[v] ← min{ d[v], d[u] + w(u,v) } (relaxation = Decrease-Key)

Dijkstra Runtime (24.3)
Extract-Min executed |V| times
Decrease-Key executed |E| times
Time = |V| · T(Extract-Min) // find + delete = O(log V)
     + |E| · T(Decrease-Key) // delete + add = O(log V)
Binary heap: O(E log V) (30 years ago)
Fibonacci heap: O(E + V log V) (10 years ago)
An optimal algorithm was found a year ago; it runs in time O(E) on undirected graphs with integer weights (Mikkel Thorup)

Dijkstra Correctness (24.3)
Same Lemma as for Bellman-Ford: d[v] ≥ δ(s,v)
Thm: Whenever u is added to S, d[u] = δ(s,u)
Proof:
–Assume u is the first vertex added with d[u] > δ(s,u)
–Let y be the first vertex ∈ V-S on an actual shortest s-u path ⇒ d[y] = δ(s,y)
–For y's predecessor x on that path, d[x] = δ(s,x)
–At the moment we put x into S, d[y] received the value δ(s,y)
–d[u] > δ(s,u) = δ(s,y) + δ(y,u) = d[y] + δ(y,u) ≥ d[y], contradicting that u was extracted from Q before y

END DIGRESSION (alternative presentation, can be skipped)

Dynamic Programming (CLRS 15)
Third major paradigm so far
–Divide-and-Conquer and Greedy were the previous paradigms
Dynamic Programming = a metatechnique (not a particular algorithm)
"Programming" refers to the use of a "tableau" in the method (cf. "mathematical programming"), not to the writing of code

Longest Common Subsequence (CLRS 15.4)
Problem: Given x[1..m] and y[1..n], find their LCS
x: A B C B D A B ⇒ LCS = B C B A
y: B D C A B A
Brute-force algorithm:
–for every subsequence of x, check whether it is a subsequence of y
–O(n · 2^m) time
  2^m subsequences of x (each element is either in or out of the subsequence)
  O(n) to scan y for one x-subsequence (m ≤ n)

Recurrent Formula for LCS (15.4)
Let c[i,j] = length of the LCS of X[i] = x[1..i] and Y[j] = y[1..j]
Then c[m,n] = length of the LCS of x and y
Theorem:
c[i,j] = c[i-1,j-1] + 1             if x[i] = y[j]
c[i,j] = max( c[i-1,j], c[i,j-1] )  otherwise
Proof sketch: x[i] = y[j] ⇒ LCS(X[i],Y[j]) = LCS(X[i-1],Y[j-1]) + x[i]

DP Properties (15.3)
Any part of an optimal answer is also optimal
–A subsequence of LCS(X,Y) is the LCS of some subsequences of X and Y
Subproblems overlap
–LCS(X[m],Y[n-1]) and LCS(X[m-1],Y[n]) have the common subproblem LCS(X[m-1],Y[n-1])
–There are polynomially few subproblems in total: mn for LCS
–Unlike divide and conquer

DP (CLRS 15.3)
After computing the solution of a subproblem, store it in a table
Time = O(mn)
When computing c[i,j] we need only O(1) time, provided we have:
–x[i], y[j]
–c[i,j-1]
–c[i-1,j]
–c[i-1,j-1]

DP Table for LCS
(Filled in step by step over several slides; the completed table for x = ABCBDAB, y = BDCABA:)
        y:  B  D  C  A  B  A
x:  A       0  0  0  1  1  1
    B       1  1  1  1  2  2
    C       1  1  2  2  2  2
    B       1  1  2  2  3  3
    D       1  2  2  2  3  3
    A       1  2  2  3  3  4
    B       1  2  2  3  4  4
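The table fill plus the standard backtrack to recover one LCS can be sketched as follows (the function name is illustrative):

```python
def lcs(x, y):
    """Fill the c[i][j] length table bottom-up, then backtrack
    from c[m][n] to recover one longest common subsequence."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    out = []                       # backtrack: retrace the max choices
    i, j = m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # a length-4 LCS, e.g. "BCBA"
```

The backtrack needs no extra bookkeeping: at each cell it simply re-asks which of the three neighbors produced c[i][j].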

Optimal Polygon Triangulation
A polygon has sides and vertices; it is simple if it is not self-intersecting. A polygon P is convex if any line segment with endpoints in P lies entirely in P.
A triangulation of P is a partition of P by chords into triangles.
Problem: Given a convex polygon and a weight function defined on triangles (e.g., the perimeter), find a triangulation of minimum weight (of minimum total length).
Number of triangles: there are always n-2 triangles, using n-3 chords.

Optimal Polygon Triangulation
Any sub-triangulation of an optimal triangulation is itself optimal
Recurrent formula: let t[i,j] = the weight of an optimal triangulation of the subpolygon v[i-1], v[i], …, v[j]
If i = j, then t[i,j] = 0; else
t[i,j] = min_{i ≤ k < j} { t[i,k] + t[k+1,j] + w( Δ v[i-1] v[k] v[j] ) }
Runtime is O(n^3) and space is O(n^2)
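A sketch of this interval DP; the names pts and tri_weight, and the choice of perimeter as the weight, are illustrative:

```python
import math

def perim(a, b, c):
    """Triangle weight used for the demo: its perimeter."""
    return math.dist(a, b) + math.dist(b, c) + math.dist(c, a)

def min_triangulation(pts, tri_weight):
    """Minimum-weight triangulation of a convex polygon whose
    vertices pts[0..n-1] are given in order around the boundary.
    t[i][j] = optimal weight for subpolygon v[i-1], v[i], ..., v[j];
    O(n^3) time, O(n^2) space."""
    n = len(pts)
    t = [[0.0] * n for _ in range(n)]        # t[i][i] = 0 base case
    for length in range(2, n):               # number of v[i..j] vertices
        for i in range(1, n - length + 1):
            j = i + length - 1
            t[i][j] = min(                   # try every apex v[k]
                t[i][k] + t[k + 1][j]
                + tri_weight(pts[i - 1], pts[k], pts[j])
                for k in range(i, j)
            )
    return t[1][n - 1]

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(min_triangulation(square, perim))  # 4 + 2*sqrt(2), about 6.83
```

For the unit square either diagonal gives two triangles of perimeter 2 + √2 each, so both triangulations tie at 4 + 2√2, which is what the DP returns.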