# Introduction to Algorithms 6.046J/18.401J, Lecture 16: Greedy Algorithms (and Graphs)


## Lecture 16: Greedy Algorithms (and Graphs)

Topics: graph representation, minimum spanning trees, optimal substructure, greedy choice, Prim's greedy MST algorithm.

Prof. Charles E. Leiserson. November 9, 2005. Copyright © 2001-5 by Erik D. Demaine and Charles E. Leiserson.

## Graphs (review)

Definition. A directed graph (digraph) G = (V, E) is an ordered pair consisting of a set V of vertices (singular: vertex) and a set E ⊆ V × V of edges. In an undirected graph G = (V, E), the edge set E consists of unordered pairs of vertices.

In either case, we have |E| = O(V²). Moreover, if G is connected, then |E| ≥ |V| − 1, which implies that lg |E| = Θ(lg V). (Review CLRS, Appendix B.)

## Adjacency-matrix representation

The adjacency matrix of a graph G = (V, E), where V = {1, 2, …, n}, is the matrix A[1..n, 1..n] given by

A[i, j] = 1 if (i, j) ∈ E, and 0 if (i, j) ∉ E.

Example (V = {1, 2, 3, 4}, with edges 1→2, 1→3, 2→3, 4→3):

        1 2 3 4
    1 [ 0 1 1 0 ]
    2 [ 0 0 1 0 ]
    3 [ 0 0 0 0 ]
    4 [ 0 0 1 0 ]

Θ(V²) storage: a dense representation.
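A minimal Python sketch of this representation. The helper name and the 0-based edge list are assumptions for illustration; the edges mirror the 4-vertex example above.

```python
def adjacency_matrix(n, edges):
    """Return A with A[i][j] = 1 if (i, j) is an edge, else 0 (vertices 0..n-1)."""
    A = [[0] * n for _ in range(n)]   # Theta(V^2) storage regardless of |E|
    for i, j in edges:
        A[i][j] = 1
    return A

# Edges 1->2, 1->3, 2->3, 4->3 from the slide, written 0-based:
A = adjacency_matrix(4, [(0, 1), (0, 2), (1, 2), (3, 2)])
for row in A:
    print(row)
```

Note the space cost is Θ(V²) even when the graph is sparse, which is why adjacency lists are preferred for sparse graphs.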

## Minimum spanning trees

Input: a connected, undirected graph G = (V, E) with weight function w : E → ℝ. For simplicity, assume that all edge weights are distinct. (CLRS covers the general case.)

Output: a spanning tree T (a tree that connects all vertices) of minimum weight:

w(T) = Σ_{(u,v) ∈ T} w(u, v).

## Example of MST

[Figure: example graph with edge weights 3, 5, 6, 7, 8, 9, 10, 12, 14, 15; the MST edges are highlighted.]

## Optimal substructure

Let T be an MST of G. (Other edges of G are not shown.) Remove any edge (u, v) ∈ T. Then T is partitioned into two subtrees T₁ and T₂.

Theorem. The subtree T₁ is an MST of G₁ = (V₁, E₁), the subgraph of G induced by the vertices of T₁:

- V₁ = vertices of T₁,
- E₁ = {(x, y) ∈ E : x, y ∈ V₁}.

Similarly for T₂.

## Proof of optimal substructure

Proof. Cut and paste: w(T) = w(u, v) + w(T₁) + w(T₂). If T₁′ were a lower-weight spanning tree than T₁ for G₁, then T′ = {(u, v)} ∪ T₁′ ∪ T₂ would be a lower-weight spanning tree than T for G, contradicting the minimality of T.

Do we also have overlapping subproblems? Yes.

Great, then dynamic programming may work! Yes, but MST exhibits another powerful property which leads to an even more efficient algorithm.

## Hallmark for "greedy" algorithms

Greedy-choice property: a locally optimal choice is globally optimal.

Theorem. Let T be the MST of G = (V, E), and let A ⊆ V. Suppose that (u, v) ∈ E is the least-weight edge connecting A to V − A. Then (u, v) ∈ T.

∈  A ∈  V – A November 9, 2005Copyright © 2001-5 by Erik D. Demaine and Charles E. LeisersonL16.22 Proof of theorem Proof. Suppose (u, v) ∉  T. Cut and paste. T:T: v (u, v) = least-weight edge connecting A to V – A u

 A  V – A November 9, 2005Copyright © 2001-5 by Erik D. Demaine and Charles E. LeisersonL16.23 Proof of theorem Proof. Suppose (u, v) ∉  T. Cut and paste. T:T: v (u, v) = least-weight edge connecting A to V – A u Consider the unique simple path from u to v in T.

 A  V – A November 9, 2005Copyright © 2001-5 by Erik D. Demaine and Charles E. LeisersonL16.24 Proof of theorem Proof. Suppose (u, v) ∉  T. Cut and paste. T:T: v (u, v) = least-weight edge connecting A to V – A u Consider the unique simple path from u to v in T. Swap (u, v) with the first edge on this path that connects a vertex in A to a vertex in V – A.

November 9, 2005Copyright © 2001-5 by Erik D. Demaine and Charles E. LeisersonL16.25 Proof of theorem Proof. Suppose (u, v) ∉  T. Cut and paste. T :  A   V – A v (u, v) = least-weight edge connecting A to V – A u Consider the unique simple path from u to v in T. Swap (u, v) with the first edge on this path that connects a vertex in A to a vertex in V – A. A lighter-weight spanning tree than T results.
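The cut-and-paste argument can be sanity-checked by brute force on a tiny graph: enumerate all spanning trees, take the minimum-weight one, and confirm that for a chosen cut A the least-weight crossing edge lies in the MST. The graph and cut below are assumptions chosen for illustration (with distinct weights, as the lecture requires).

```python
from itertools import combinations

edges = {('a','b'): 4, ('a','c'): 8, ('b','c'): 2, ('b','d'): 7, ('c','d'): 3}
vertices = {'a', 'b', 'c', 'd'}

def spans(tree):
    """Does this edge set connect all vertices?"""
    reached, frontier = set(), {'a'}
    while frontier:
        u = frontier.pop()
        reached.add(u)
        for (x, y) in tree:
            if u == x and y not in reached: frontier.add(y)
            if u == y and x not in reached: frontier.add(x)
    return reached == vertices

# Every spanning tree has |V| - 1 = 3 edges; pick the minimum-weight one.
trees = [t for t in combinations(edges, 3) if spans(t)]
mst = min(trees, key=lambda t: sum(edges[e] for e in t))

# Cut A = {'a', 'b'}: the least-weight edge crossing to V - A must be in the MST.
A = {'a', 'b'}
crossing = [e for e in edges if (e[0] in A) != (e[1] in A)]
light = min(crossing, key=lambda e: edges[e])
print(light, light in mst)   # ('b', 'c') True
```

The theorem guarantees `light in mst` for every cut, not just this one; iterating over all subsets A would check them all.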

## Prim's algorithm

IDEA: Maintain V − A as a priority queue Q. Key each vertex in Q with the weight of the least-weight edge connecting it to a vertex in A.

```
Q ← V
key[v] ← ∞ for all v ∈ V
key[s] ← 0 for some arbitrary s ∈ V
while Q ≠ ∅
    do u ← EXTRACT-MIN(Q)
       for each v ∈ Adj[u]
           do if v ∈ Q and w(u, v) < key[v]
                  then key[v] ← w(u, v)    ▷ DECREASE-KEY
                       π[v] ← u
```

At the end, {(v, π[v])} forms the MST.
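A runnable Python sketch of the pseudocode above. Python's `heapq` has no DECREASE-KEY, so this version pushes duplicate entries and skips stale ones on extraction, a standard workaround that keeps the binary-heap O(E lg V) bound. The example graph is an assumption for illustration.

```python
import heapq

def prim_mst(adj, s):
    """adj: {u: [(v, w), ...]}, undirected; returns (total weight, parent map)."""
    key = {v: float('inf') for v in adj}
    parent = {v: None for v in adj}    # plays the role of pi[v]
    key[s] = 0
    pq = [(0, s)]
    in_tree = set()                    # the set A of the lecture
    total = 0
    while pq:
        k, u = heapq.heappop(pq)       # EXTRACT-MIN
        if u in in_tree:
            continue                   # stale duplicate entry; skip
        in_tree.add(u)
        total += k
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                key[v] = w             # implicit DECREASE-KEY
                parent[v] = u
                heapq.heappush(pq, (w, v))
    return total, parent

adj = {
    'a': [('b', 6), ('c', 12)],
    'b': [('a', 6), ('c', 5), ('d', 14)],
    'c': [('a', 12), ('b', 5), ('d', 3)],
    'd': [('b', 14), ('c', 3)],
}
total, parent = prim_mst(adj, 'a')
print(total)   # 14 = 6 + 5 + 3
```

As in the pseudocode, the pairs (v, parent[v]) for v ≠ s are exactly the MST edges.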

## Example of Prim's algorithm

[Figure: step-by-step run of Prim's algorithm on the example graph (edge weights 3, 5, 6, 7, 8, 9, 10, 12, 14, 15). All keys start at ∞ except the root's key of 0; each step extracts the minimum-key vertex into A and decreases the keys of its neighbors still in V − A, until A contains every vertex.]

## Analysis of Prim

Charging the pseudocode above line by line:

- Initialization (building Q and setting every key): Θ(V) total.
- The while loop executes |V| times, for |V| EXTRACT-MIN operations.
- The inner for loop executes degree(u) times for each extracted vertex u; by the Handshaking Lemma, that is Θ(E) implicit DECREASE-KEYs in total.

Time = Θ(V) · T_EXTRACT-MIN + Θ(E) · T_DECREASE-KEY.

## Analysis of Prim (continued)

Time = Θ(V) · T_EXTRACT-MIN + Θ(E) · T_DECREASE-KEY.

| Q | T_EXTRACT-MIN | T_DECREASE-KEY | Total |
|---|---|---|---|
| array | O(V) | O(1) | O(V²) |
| binary heap | O(lg V) | O(lg V) | O(E lg V) |
| Fibonacci heap | O(lg V) amortized | O(1) amortized | O(E + V lg V) worst case |

## MST algorithms

Kruskal's algorithm (see CLRS): uses the disjoint-set data structure (Lecture 10). Running time = O(E lg V).

Best to date: Karger, Klein, and Tarjan [1993], a randomized algorithm running in O(V + E) expected time.
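For comparison with Prim, here is a sketch of Kruskal's algorithm with a small union-find (path halving, no union by rank, which is enough for illustration). The example graph is an assumption, the same one used for the Prim sketch above.

```python
def kruskal(vertices, edges):
    """edges: list of (w, u, v), undirected; returns (total weight, MST edges)."""
    parent = {v: v for v in vertices}

    def find(x):                      # FIND-SET with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):     # sorting dominates: O(E lg E) = O(E lg V)
        ru, rv = find(u), find(v)
        if ru != rv:                  # (u, v) joins two components: keep it
            parent[ru] = rv           # UNION
            mst.append((u, v))
            total += w
    return total, mst

total, mst = kruskal({'a', 'b', 'c', 'd'},
                     [(6, 'a', 'b'), (12, 'a', 'c'), (5, 'b', 'c'),
                      (14, 'b', 'd'), (3, 'c', 'd')])
print(total)   # 14
```

Both algorithms exploit the same greedy-choice property; they differ only in which cut they apply it to at each step.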

