1
CS583 Lecture 10, Jana Kosecka. Graph Algorithms: Shortest Path Algorithms; Minimum Spanning Tree (Kruskal). Many slides here are based on D. Luebke's slides.
2
Previously
Depth-first search: types of edges, starting/finishing times, how to detect cycles, running times. DAGs: topological sort.
3
DFS Example
(Figure: DFS from a source vertex, with discovery/finishing times d|f shown for each vertex and the edge types marked: tree edges, back edges, forward edges, cross edges.)
4
Review: Topological Sort
Topological sort of a DAG: a linear ordering of all vertices in graph G such that vertex u comes before vertex v if edge (u, v) ∈ G. Real-world example: getting dressed.
5
Getting Dressed
(Figure: a precedence DAG over the garments Underwear, Socks, Shoes, Pants, Belt, Shirt, Watch, Tie, and Jacket, with edges such as underwear before pants and socks before shoes.)
6
Getting Dressed
(Figure: the same DAG redrawn with its vertices laid out left to right in topologically sorted order; one valid order is Socks, Underwear, Pants, Shoes, Watch, Shirt, Belt, Tie, Jacket.)
7
Review: Topological Sort
Topological-Sort(G):
 run DFS(G);
 as each vertex is finished, insert it at the front of a linked list;
 the resulting list gives the vertices in topological order (reverse order of finishing).
Time: O(V+E). Correctness: want to prove that (u,v) ∈ G ⇒ f[u] > f[v].
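To make the procedure concrete, here is a minimal Python sketch (not from the slides); the vertex/edge list format and the dressing-themed example names are assumptions made for illustration.

def topological_sort(vertices, edges):
    """DFS-based topological sort as sketched above: record each vertex when
    it finishes, then reverse the finish order (equivalent to inserting at
    the front of a linked list)."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)

    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in vertices}
    order = []

    def visit(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY:
                raise ValueError("cycle detected: not a DAG")
            if color[v] == WHITE:
                visit(v)
        color[u] = BLACK
        order.append(u)            # finished: would go to the front of a linked list

    for v in vertices:
        if color[v] == WHITE:
            visit(v)
    return order[::-1]             # reverse finish order = topological order

# Hypothetical dressing example (a subset of the DAG above):
print(topological_sort(
    ["socks", "underwear", "pants", "shoes"],
    [("socks", "shoes"), ("underwear", "pants"), ("pants", "shoes")]))
# -> ['underwear', 'pants', 'socks', 'shoes']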
8
Review: Strongly Connected Components
Call DFS to compute finishing times f[u] of each vertex. Create the transpose graph (directions of all edges reversed). Call DFS on the transpose, but in the main loop of DFS consider vertices in decreasing order of f[u]. Output the vertices of each tree in the depth-first forest formed in the second DFS as a separate strongly connected component. What about undirected graphs?
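A minimal Python sketch of this two-pass (Kosaraju-style) procedure, not from the slides; the vertex/edge input format and the example graph are assumptions.

from collections import defaultdict

def strongly_connected_components(vertices, edges):
    """Two-pass SCC as described above:
    1) DFS on G, recording vertices by increasing finish time;
    2) DFS on the transpose, scanning vertices in decreasing finish time;
       each tree of the second DFS is one SCC."""
    graph = defaultdict(list)
    transpose = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        transpose[v].append(u)

    order, visited = [], set()

    def dfs_finish(u):                    # first pass: record u after its descendants
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                dfs_finish(v)
        order.append(u)

    for u in vertices:
        if u not in visited:
            dfs_finish(u)

    components, visited = [], set()
    for u in reversed(order):             # decreasing finish time
        if u not in visited:
            component, stack = [], [u]
            visited.add(u)
            while stack:                  # iterative DFS on the transpose
                x = stack.pop()
                component.append(x)
                for y in transpose[x]:
                    if y not in visited:
                        visited.add(y)
                        stack.append(y)
            components.append(component)
    return components

# Hypothetical example: two cycles joined by a one-way edge -> two SCCs.
print(strongly_connected_components(
    ['a', 'b', 'c', 'd'],
    [('a', 'b'), ('b', 'a'), ('b', 'c'), ('c', 'd'), ('d', 'c')]))
# -> [['a', 'b'], ['c', 'd']]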
9
Review: Strongly Connected components
The SCC algorithm induces a component graph, which is a DAG. (Figure: example graph with discovery/finishing times d|f marked on each vertex.)
10
Review: Prim’s Algorithm
(Figure: example graph with edge weights 2, 3, 4, 5, 6, 8, 9, 10, 14, 15; red vertices have been removed from Q.)

MST-Prim(G, w, r)
 Q = V[G];
 for each u ∈ Q
  key[u] = ∞;
 key[r] = 0;
 p[r] = NULL;
 while (Q not empty)
  u = ExtractMin(Q);
  for each v ∈ Adj[u]
   if (v ∈ Q and w(u,v) < key[v])
    p[v] = u;
    key[v] = w(u,v);
Slides 11–25 repeat the same MST-Prim pseudocode while the figure animates the run on the example graph (edge weights 2, 3, 4, 5, 6, 8, 9, 10, 14, 15): at each step the minimum-key vertex u is extracted (red vertices have been removed from Q), red arrows indicate the parent pointers p[v], and the keys of u's neighbours still in Q are lowered whenever a lighter connecting edge is found.
26
Review: Prim’s Algorithm
MST-Prim(G, w, r)
 Q = V[G];
 for each u ∈ Q
  key[u] = ∞;
 key[r] = 0;
 p[r] = NULL;
 while (Q not empty)
  u = ExtractMin(Q);
  for each v ∈ Adj[u]
   if (v ∈ Q and w(u,v) < key[v])
    p[v] = u;
    DecreaseKey(v, w(u,v));
27
Review: Prim’s Algorithm
(Same MST-Prim pseudocode as on the previous slide, with DecreaseKey(v, w(u,v)).) How often is ExtractMin() called? How often is DecreaseKey() called?
28
Review: Prim’s Algorithm
(Same MST-Prim pseudocode as above.) What will be the running time? A: Depends on the queue. Binary heap: O(E lg V); Fibonacci heap: O(V lg V + E). With a binary heap, ExtractMin is called |V| times for a total cost of O(V lg V), and DecreaseKey is called O(E) times for a total cost of O(E lg V). Total: O(V lg V + E lg V) = O(E lg V). Think about why we can combine the terms in the expression above.
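For concreteness, here is a minimal Python sketch of MST-Prim with a binary heap (not from the slides); the adjacency-dict input format is an assumption, and instead of DecreaseKey it pushes a fresh heap entry and skips stale ones (lazy deletion), which keeps the same O(E lg V) bound.

import heapq

def mst_prim(adj, r):
    """adj: dict mapping each vertex to a list of (neighbor, weight) pairs
    (undirected graph: list each edge in both directions)."""
    key = {v: float('inf') for v in adj}
    parent = {v: None for v in adj}
    key[r] = 0
    in_mst = set()
    heap = [(0, r)]                       # entries are (key[v], v)
    while heap:
        k, u = heapq.heappop(heap)        # ExtractMin
        if u in in_mst:
            continue                      # stale entry, skip
        in_mst.add(u)
        for v, w in adj[u]:
            if v not in in_mst and w < key[v]:
                key[v] = w                # "DecreaseKey": push a better entry
                parent[v] = u
                heapq.heappush(heap, (w, v))
    return parent                         # MST edges: (parent[v], v) for v != r

# Hypothetical 4-vertex example:
graph = {'a': [('b', 2), ('c', 3)], 'b': [('a', 2), ('c', 1), ('d', 4)],
         'c': [('a', 3), ('b', 1), ('d', 5)], 'd': [('b', 4), ('c', 5)]}
print(mst_prim(graph, 'a'))   # -> {'a': None, 'b': 'a', 'c': 'b', 'd': 'b'}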
29
Minimum Weight Spanning Tree Kruskal’s Algorithm
Kruskal()
{
 T = ∅;
 for each v ∈ V
  MakeSet(v);
 sort E by increasing edge weight w
 for each (u,v) ∈ E (in sorted order)
  if FindSet(u) ≠ FindSet(v)
   T = T ∪ {{u,v}};
   Union(FindSet(u), FindSet(v));
}
30
Disjoint-Set Union Problem
Want a data structure to support disjoint sets: a collection of disjoint sets S = {Si}, with Si ∩ Sj = ∅ for i ≠ j. Need to support the following operations: MakeSet(x): S = S ∪ {{x}}; Union(Si, Sj): S = S - {Si, Sj} ∪ {Si ∪ Sj}; FindSet(x): return the Si ∈ S such that x ∈ Si. Before discussing implementation details, we look at an example application: MSTs.
31
Kruskal’s Algorithm Run the algorithm: Kruskal() { T = ;
2 Kruskal() { T = ; for each v V MakeSet(v); sort E by increasing edge weight w for each (u,v) E (in sorted order) if FindSet(u) FindSet(v) T = T U {{u,v}}; Union(FindSet(u), FindSet(v)); } 19 9 14 17 8 25 5 21 13 1
Slides 32–53 repeat the same Kruskal pseudocode while the figure animates the run: edges are considered in increasing order of weight (1, 2, 5, 8, 9, 13, 14, 17, 19, 21, 25); each edge whose endpoints lie in different components (FindSet(u) ≠ FindSet(v)) is added to T and the two components are Unioned, while edges whose endpoints are already in the same component are skipped.
54
Correctness Of Kruskal’s Algorithm
Sketch of a proof that this algorithm produces an MST T: Assume the algorithm is wrong: the result is not an MST. Then the algorithm adds a wrong edge at some point. If it adds a wrong edge, there must be a lower-weight edge that could have been used instead (cut-and-paste argument). But the algorithm chooses the lowest-weight edge at each step -> contradiction. Again, it is important to be comfortable with cut-and-paste arguments.
55
Kruskal’s Algorithm What will affect the running time? Kruskal() {
for each v V MakeSet(v); sort E by increasing edge weight w for each (u,v) E (in sorted order) if FindSet(u) FindSet(v) T = T U {{u,v}}; Union(FindSet(u), FindSet(v)); }
56
Kruskal’s Algorithm What will affect the running time? Kruskal()
1 Sort O(V) MakeSet() calls O(E) FindSet() calls O(V) Union() calls (Exactly how many Union()s?) Kruskal() { T = ; for each v V MakeSet(v); sort E by increasing edge weight w for each (u,v) E (in sorted order) if FindSet(u) FindSet(v) T = T U {{u,v}}; Union(FindSet(u), FindSet(v)); }
57
Kruskal’s Algorithm: Running Time
To summarize: sort the edges: O(E lg E); O(V) MakeSet()'s; O(E) FindSet()'s and Union()'s. Upshot: the best disjoint-set union data structure makes the above three kinds of operations take O((V+E) α(V)) total, where α is a very slowly growing function of V (effectively constant). Since E ≥ V-1, this is O(E α(V)). Also, since α(V) = O(lg V) = O(lg E), the overall running time is O(E lg E); apart from the sorting it is almost linear.
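For concreteness, here is a minimal Python sketch of this algorithm (not from the slides); the (weight, u, v) edge-tuple format is an assumption, and the disjoint sets use a forest with union by rank and path compression, a different implementation from the linked-list one discussed next, but with the same interface.

def kruskal(vertices, edges):
    """Kruskal's MST: sort the edges, then use FindSet/Union to accept only
    edges joining two different components.  The sort dominates: O(E lg E)."""
    parent = {v: v for v in vertices}     # MakeSet for every vertex
    rank = {v: 0 for v in vertices}

    def find_set(x):                      # FindSet with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                      # union by rank of two representatives
        if rank[a] < rank[b]:
            a, b = b, a
        parent[b] = a
        if rank[a] == rank[b]:
            rank[a] += 1

    T = []
    for w, u, v in sorted(edges):         # sort E by increasing weight
        ru, rv = find_set(u), find_set(v)
        if ru != rv:                      # endpoints in different components
            T.append((u, v, w))
            union(ru, rv)
    return T

# Hypothetical example:
print(kruskal(['a', 'b', 'c', 'd'],
              [(1, 'a', 'b'), (5, 'a', 'c'), (2, 'b', 'c'), (4, 'c', 'd'), (3, 'b', 'd')]))
# -> [('a', 'b', 1), ('b', 'c', 2), ('b', 'd', 3)]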
58
Disjoint Sets (Ch. 21): in Kruskal's algorithm, Connected Components
Need to do set membership and set union efficiently. Typical operations on disjoint sets: member(a,s), insert(a,s), delete(a,s), union(s1, s2, s3), find(a), make-set(x). Analysis is in terms of n, the number of make-set operations, and m, the total number of make-set, find, and union operations.
59
Disjoint Set Union So how do we implement disjoint-set union?
Naïve implementation: use a linked list to represent each set, with a designated set representative at its head: MakeSet(): ??? time; FindSet(): ??? time; Union(A,B): "copy" elements of A into B: ??? time.
60
Disjoint Set Union So how do we implement disjoint-set union?
Naïve implementation: use a linked list to represent each set: MakeSet(): O(1) time; FindSet(): O(1) time; Union(A,B): "copy" elements of A into B: O(|A|) time. Why? How long can a single Union() take? How long will n Union()'s take?
61
Disjoint Set Union: Analysis
Worst-case analysis: O(n²) time for n Unions: Union(S1, S2) "copies" 1 element, Union(S2, S3) "copies" 2 elements, …, Union(S(n-1), Sn) "copies" n-1 elements: O(n²) in total. Appending the larger set to the smaller one is the worst case. Over the (2n-1) operations (n MakeSets and n-1 Unions) the total cost is O(n²), i.e. O(n) on average per operation. Improvement: always copy the smaller set into the larger one. Why will this make things better? What is the worst-case time of a single Union()? But now n Unions take only O(n lg n) time!
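As an illustration of this representation and the copy-smaller-into-larger rule, here is a minimal Python sketch (not from the slides); the class and method names are hypothetical.

class LinkedListDSU:
    """Naive disjoint-set representation discussed above: each element stores
    a pointer to its set's representative, and each set keeps a list of its
    members.  Union relabels ("copies") the members of the smaller set, giving
    O(1) MakeSet/FindSet and O(n lg n) total copying for n Unions."""
    def __init__(self):
        self.rep = {}        # element -> representative of its set
        self.members = {}    # representative -> list of elements in the set

    def make_set(self, x):
        self.rep[x] = x
        self.members[x] = [x]

    def find_set(self, x):
        return self.rep[x]

    def union(self, x, y):
        rx, ry = self.rep[x], self.rep[y]
        if rx == ry:
            return
        if len(self.members[rx]) < len(self.members[ry]):
            rx, ry = ry, rx                    # rx is now the larger set
        for z in self.members[ry]:             # copy smaller into larger
            self.rep[z] = rx
        self.members[rx].extend(self.members.pop(ry))

# Hypothetical usage:
s = LinkedListDSU()
for x in "abcd":
    s.make_set(x)
s.union('a', 'b'); s.union('c', 'd'); s.union('a', 'c')
print(s.find_set('d') == s.find_set('b'))   # True: all four elements in one set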
63
Amortized Analysis. Amortized analysis computes average times without using probabilities. Each time an element is copied, it belongs to the smaller set and must have its representative pointer updated: after the 1st copy the resulting set has at least 2 members, after the 2nd at least 4, …, after the (lg n)-th at least n. With our new Union(), any individual element is therefore copied at most lg n times while forming the complete set from 1-element sets; after lg n copies its set already contains all n elements. So for n Union operations, the time spent updating pointers is O(n lg n).
64
Amortized Analysis of Disjoint Sets
Since we have n elements, each copied at most lg n times, n Union()'s take O(n lg n) time. We say that each Union() takes O(lg n) amortized time. Financial analogy: imagine paying $lg n per Union() operation. At first we are overpaying, since the initial Unions cost only O(1); but we accumulate enough money in the bank to pay for the later, expensive O(n) operations. Important: the amount in the bank never goes negative.
65
Amortized Analysis of Disjoint Sets
Since we have n elements, each copied at most lg n times, n Union()'s take O(n lg n) time. Therefore we say the amortized cost of a Union() operation is O(lg n). This is the aggregate method of amortized analysis: if n operations take total time T(n), the amortized (average) cost of an operation is T(n)/n. In this style of analysis the amortized cost is assigned to each operation, although different operations may have different actual costs.
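A quick worked example of the aggregate bound (an illustration, not from the slides): take n = 8 singleton sets and merge them into one with 7 Unions using copy-smaller-into-larger. Any single element is copied at most lg 8 = 3 times, because each copy at least doubles the size of the set containing it (1 → 2 → 4 → 8). So the total copying work is at most 8 · 3 = 24 pointer updates, and the amortized cost per Union is at most 24 / 7 ≈ 3.4 = O(lg n), even though one individual Union may copy up to 4 elements.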
66
Shortest Path Algorithms
67
Single-Source Shortest Path
Problem: given a weighted directed graph G, find the minimum-weight path from a given source vertex s to another vertex v. "Shortest path" = minimum weight; the weight of a path is the sum of its edge weights. E.g., a road map: what is the shortest path from Fairfax to Washington DC?
68
Shortest Path Properties
Again, we have optimal substructure: a shortest path consists of shortest subpaths. Proof: suppose some subpath is not a shortest path. Then there exists a shorter subpath; substituting that shorter subpath into the original path yields a shorter overall path, so the original path was not a shortest path. Contradiction. The optimal substructure property is a hallmark of dynamic programming.
69
Shortest Path Properties
In graphs with negative-weight cycles, some shortest paths will not exist (Why?). (Figure: a path from s to v passing through a cycle whose total weight is < 0.)
70
Relaxation
A key technique in shortest-path algorithms is relaxation. Idea: for all v, maintain an upper bound d[v] on δ(s,v).

Relax(u,v,w) {
 if (d[v] > d[u]+w) then
  d[v] = d[u]+w;
}

(Figure: two examples of relaxing an edge of weight 2 out of a vertex with d[u] = 5: a vertex with d[v] = 9 is lowered to 7, while a vertex with d[v] = 6 is left unchanged.)
71
Shortest Path Properties
Define δ(u,v) to be the weight of the shortest path from u to v. Shortest paths satisfy the triangle inequality: δ(u,v) ≤ δ(u,x) + δ(x,v). "Proof" (figure): the path u → x → v is one particular path from u to v, so the shortest u-v path is no longer than it.
72
Bellman-Ford Algorithm
Single-source shortest-path algorithm. Weights can be negative. The algorithm reports "no solution" if there is a negative-weight cycle reachable from the source; otherwise it produces shortest paths from the source to all other vertices.
73
Bellman-Ford Algorithm
Initialize d[], which will converge to the shortest-path values.

BellmanFord()
 for each v ∈ V
  d[v] = ∞;
 d[s] = 0;
 for i = 1 to |V|-1
  for each edge (u,v) ∈ E
   Relax(u,v, w(u,v));
 for each edge (u,v) ∈ E
  if (d[v] > d[u] + w(u,v))
   return "no solution";

Relax(u,v,w): if (d[v] > d[u]+w) then d[v] = d[u]+w

Relaxation: make |V|-1 passes, relaxing each edge. Then test for a solution: under what condition do we get a solution?
74
Bellman-Ford Algorithm
(Same BellmanFord pseudocode and Relax as above.) What will be the running time?
75
Bellman-Ford Algorithm
(Same BellmanFord pseudocode and Relax as above.) What will be the running time? A: O(VE): |V|-1 passes, each relaxing all |E| edges.
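To make the pseudocode concrete, here is a minimal Python sketch (not from the slides); the (u, v, w) edge-tuple format and the return value None for "no solution" are assumptions made for the example.

def bellman_ford(vertices, edges, s):
    """Bellman-Ford as above: |V|-1 passes of relaxing every edge, then one
    more pass to detect a reachable negative-weight cycle."""
    INF = float('inf')
    d = {v: INF for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):          # |V|-1 passes
        for u, v, w in edges:
            if d[u] + w < d[v]:                 # Relax(u, v, w)
                d[v] = d[u] + w
    for u, v, w in edges:                       # test for a solution
        if d[u] + w < d[v]:
            return None                         # negative-weight cycle reachable
    return d

# Hypothetical example with a negative edge but no negative cycle:
print(bellman_ford(['s', 'a', 'b'],
                   [('s', 'a', 4), ('s', 'b', 5), ('b', 'a', -3)], 's'))
# -> {'s': 0, 'a': 2, 'b': 5}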
76
Bellman-Ford Algorithm
(Same BellmanFord pseudocode and Relax as above.) Ex: work on board. (Figure: an example graph with source s and vertices A, B, C, D, E; edge weights 2, -1, 2, 3, 1, -3, 4, 5, including negative weights.)
77
Bellman-Ford. Prove: after |V|-1 passes, all d values are correct (for all vertices). Consider a shortest path from s to v: s → v1 → v2 → v3 → v4 → v. Initially d[s] = 0 is correct, and doesn't change (Why?). After 1 pass through the edges, d[v1] is correct (Why?) and doesn't change. After 2 passes, d[v2] is correct and doesn't change. … The algorithm terminates in |V|-1 passes: (Why?) What if it doesn't?
78
Bellman-Ford
Note that the order in which edges are processed affects how quickly it converges. Correctness: show that at termination d[v] = δ(s,v) for all vertices, or the algorithm returns FALSE (there is a negative-weight cycle). By the previous lemma, at termination d[v] = δ(s,v) ≤ δ(s,u) + w(u,v) (triangle inequality) = d[u] + w(u,v), so none of the tests in the last loop, if (d[v] > d[u] + w(u,v)) return "no solution";, will fire, i.e. the algorithm does not return FALSE. If there is a negative-weight cycle, prove by contradiction that the Bellman-Ford algorithm will return FALSE (see book).
79
DAG Shortest Paths
Problem: finding shortest paths in a DAG. Bellman-Ford takes O(VE) time; how can we do better? Idea: use topological sort. If we were lucky and processed the vertices on each shortest path from left to right, we would be done in one pass. Every path in a DAG is a subsequence of the topologically sorted vertex order, so if we process vertices in that order we relax the edges of every path in forward order (we never relax an edge out of a vertex before relaxing all edges into it). Thus a single pass suffices. What will be the running time?
80
Dijkstra's Algorithm. If there are no negative edge weights, we can beat Bellman-Ford. Similar to breadth-first search: grow a tree gradually, advancing from vertices taken from a queue. Also similar to Prim's algorithm for MST: use a priority queue keyed on d[v].
81
Dijkstra's Algorithm

Dijkstra(G)
 for each v ∈ V
  d[v] = ∞;
 d[s] = 0;
 S = ∅;
 Q = V;
 while (Q ≠ ∅)
  u = ExtractMin(Q);
  S = S ∪ {u};
  for each v ∈ u->Adj[]
   if (d[v] > d[u]+w(u,v))
    d[v] = d[u]+w(u,v);

Relaxation step. Note: this is really a call to Q->DecreaseKey(). Ex: run the algorithm. (Figure: example graph with vertices A, B, C, D and edge weights 10, 4, 3, 2, 1, 5.)
82
Dijkstra's Algorithm
(Same Dijkstra pseudocode as above.) How many times is ExtractMin() called? How many times is DecreaseKey() called? What will be the total running time?
83
Dijkstra's Algorithm
(Same Dijkstra pseudocode as above.) How many times is ExtractMin() called? How many times is DecreaseKey() called? A: O(E lg V) using a binary heap for Q; can achieve O(V lg V + E) with Fibonacci heaps.
84
Dijkstra's Algorithm
(Same Dijkstra pseudocode as above.) Correctness: we must show that when u is removed from Q, it has already converged, i.e. d[u] = δ(s,u).
85
Correctness Of Dijkstra's Algorithm
(Figure for the correctness argument: a shortest path from s to u passing through vertices x and y, with the remaining path segment labeled p2.)