Introduction to Algorithms 6.046J/18.401J, Lecture 16: Greedy Algorithms (and Graphs). Topics: graph representation, minimum spanning trees, optimal substructure, greedy choice, Prim's greedy MST algorithm.

Presentation transcript:

Introduction to Algorithms 6.046J/18.401J

LECTURE 16: Greedy Algorithms (and Graphs)
Graph representation
Minimum spanning trees
Optimal substructure
Greedy choice
Prim's greedy MST algorithm

Prof. Charles E. Leiserson
November 9, 2005. Copyright © by Erik D. Demaine and Charles E. Leiserson. (L16.1)

Graphs (review)

Definition. A directed graph (digraph) G = (V, E) is an ordered pair consisting of a set V of vertices (singular: vertex) and a set E ⊆ V × V of edges. In an undirected graph G = (V, E), the edge set E consists of unordered pairs of vertices. In either case, we have |E| = O(V²). Moreover, if G is connected, then |E| ≥ |V| – 1, which implies that lg |E| = Θ(lg V). (Review CLRS, Appendix B.)

Adjacency-matrix representation

The adjacency matrix of a graph G = (V, E), where V = {1, 2, …, n}, is the matrix A[1 .. n, 1 .. n] given by

A[i, j] = 1 if (i, j) ∈ E,
          0 if (i, j) ∉ E.

This takes Θ(V²) storage: a dense representation.
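As a concrete sketch of the definition above (the 0-based vertex numbering and the example edge set are choices made here, not taken from the lecture):

```python
# Sketch: adjacency-matrix representation of a digraph on vertices 0..n-1.

def adjacency_matrix(n, edges):
    """Build the n-by-n 0/1 matrix A with A[i][j] = 1 iff (i, j) is an edge."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
    return A

# Hypothetical example: a 4-vertex digraph with edges 0->1, 0->2, 1->2, 3->2.
A = adjacency_matrix(4, [(0, 1), (0, 2), (1, 2), (3, 2)])
```

Note that the matrix occupies n² entries regardless of how few edges the graph has, which is exactly the Θ(V²) dense-representation cost the slide points out.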

Adjacency-list representation

An adjacency list of a vertex v ∈ V is the list Adj[v] of vertices adjacent to v. Example:
Adj[1] = {2, 3}
Adj[2] = {3}
Adj[3] = {}
Adj[4] = {3}

For undirected graphs, |Adj[v]| = degree(v). For digraphs, |Adj[v]| = out-degree(v).

Handshaking Lemma: ∑_{v ∈ V} degree(v) = 2|E| for undirected graphs ⇒ adjacency lists use Θ(V + E) storage, a sparse representation (for either type of graph).
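The slide's example can be sketched in Python; using collections.defaultdict in place of the Adj array is an implementation choice, not something from the lecture:

```python
# Sketch: adjacency-list representation of the slide's 4-vertex digraph.
from collections import defaultdict

def adjacency_lists(edges):
    """Map each vertex to the list of vertices it points to."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    return adj

# The slide's example digraph: 1->2, 1->3, 2->3, 4->3.
adj = adjacency_lists([(1, 2), (1, 3), (2, 3), (4, 3)])
```

Total storage here is one list cell per edge plus one slot per vertex, the Θ(V + E) sparse-representation bound.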

Minimum spanning trees

Input: a connected, undirected graph G = (V, E) with weight function w : E → R. For simplicity, assume that all edge weights are distinct. (CLRS covers the general case.)

Output: a spanning tree T (a tree that connects all vertices) of minimum weight:
w(T) = ∑_{(u,v) ∈ T} w(u, v).

Example of MST

(Two figure-only slides: a weighted graph with its minimum spanning tree highlighted. The images are not preserved in this transcript.)

Optimal substructure

MST T (other edges of G are not shown). Remove any edge (u, v) ∈ T. Then T is partitioned into two subtrees T₁ and T₂.

Theorem. The subtree T₁ is an MST of G₁ = (V₁, E₁), the subgraph of G induced by the vertices of T₁: V₁ = vertices of T₁, E₁ = {(x, y) ∈ E : x, y ∈ V₁}. Similarly for T₂.

Proof of optimal substructure

Proof. Cut and paste: w(T) = w(u, v) + w(T₁) + w(T₂). If T₁′ were a lower-weight spanning tree than T₁ for G₁, then T′ = {(u, v)} ∪ T₁′ ∪ T₂ would be a lower-weight spanning tree than T for G, contradicting the minimality of T.

Do we also have overlapping subproblems? Yes.

Great, then dynamic programming may work! Yes, but MST exhibits another powerful property which leads to an even more efficient algorithm.

Hallmark for "greedy" algorithms

Greedy-choice property: a locally optimal choice is globally optimal.

Theorem. Let T be the MST of G = (V, E), and let A ⊆ V. Suppose that (u, v) ∈ E is the least-weight edge connecting A to V – A. Then, (u, v) ∈ T.
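The theorem can be sanity-checked by brute force on a tiny graph (this check is illustrative only, not part of the lecture; the graph and cut below are invented for the example): enumerate all spanning trees, take the minimum-weight one, and confirm that the least-weight edge crossing a cut A / V – A belongs to it.

```python
from itertools import combinations

# Hypothetical 4-vertex graph with distinct edge weights.
edges = {(0, 1): 1, (1, 2): 2, (0, 2): 3, (0, 3): 4, (2, 3): 5}

def connects_all(tree, n=4):
    """Check that the edge set `tree` spans all n vertices (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in tree:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

# A spanning tree on 4 vertices has exactly 3 edges; try all 3-subsets.
spanning = [t for t in combinations(edges, 3) if connects_all(t)]
mst = min(spanning, key=lambda t: sum(edges[e] for e in t))

# Cut A = {0, 1} vs V - A = {2, 3}: find the lightest crossing edge.
crossing = [e for e in edges if (e[0] in {0, 1}) != (e[1] in {0, 1})]
lightest = min(crossing, key=edges.get)
assert lightest in mst  # the greedy-choice (cut) property holds
```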

Proof of theorem

Proof. Suppose (u, v) ∉ T. Cut and paste. Here (u, v) is the least-weight edge connecting A to V – A, with u ∈ A and v ∈ V – A. Consider the unique simple path from u to v in T. Swap (u, v) with the first edge on this path that connects a vertex in A to a vertex in V – A. A lighter-weight spanning tree than T results, contradicting that T is an MST.

Prim's algorithm

IDEA: Maintain V – A as a priority queue Q. Key each vertex in Q with the weight of the least-weight edge connecting it to a vertex in A.

Q ← V
key[v] ← ∞ for all v ∈ V
key[s] ← 0 for some arbitrary s ∈ V
while Q ≠ ∅
    do u ← EXTRACT-MIN(Q)
       for each v ∈ Adj[u]
           do if v ∈ Q and w(u, v) < key[v]
                  then key[v] ← w(u, v)    ⊳ DECREASE-KEY
                       π[v] ← u

At the end, {(v, π[v])} forms the MST.
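A runnable sketch of this idea, using Python's heapq with lazy deletion of stale entries in place of a true DECREASE-KEY (an implementation substitution made here, not part of the lecture; the example graph is also invented):

```python
import heapq

def prim_mst(adj, s):
    """Prim's algorithm on an undirected graph.

    adj: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns the MST edge set as frozensets {v, parent[v]}.
    """
    key = {v: float("inf") for v in adj}
    parent = {v: None for v in adj}
    key[s] = 0
    heap = [(0, s)]          # stands in for the priority queue Q
    in_tree = set()          # the set A of the slides
    while heap:
        _, u = heapq.heappop(heap)   # EXTRACT-MIN
        if u in in_tree:
            continue                 # stale entry: skip (lazy deletion)
        in_tree.add(u)
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                key[v] = w
                parent[v] = u
                heapq.heappush(heap, (w, v))  # implicit DECREASE-KEY
    return {frozenset((v, parent[v])) for v in adj if parent[v] is not None}

# Hypothetical example graph: edges a-b:1, b-c:2, a-c:3, a-d:4, c-d:5.
adj = {
    "a": [("b", 1), ("c", 3), ("d", 4)],
    "b": [("a", 1), ("c", 2)],
    "c": [("b", 2), ("a", 3), ("d", 5)],
    "d": [("a", 4), ("c", 5)],
}
mst = prim_mst(adj, "a")   # picks a-b (1), b-c (2), a-d (4): total weight 7
```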

Example of Prim's algorithm

(A sequence of figure slides animating Prim's algorithm on a weighted graph: all keys start at ∞ except key[s] = 0, and at each step EXTRACT-MIN moves a vertex from V – A into A while its neighbors' keys are decreased. The edge weights and vertex labels in the figures are not preserved in this transcript.)

Analysis of Prim

Q ← V
key[v] ← ∞ for all v ∈ V
key[s] ← 0 for some arbitrary s ∈ V            ⊳ Θ(V) total
while Q ≠ ∅                                    ⊳ |V| iterations
    do u ← EXTRACT-MIN(Q)
       for each v ∈ Adj[u]                     ⊳ degree(u) times per u
           do if v ∈ Q and w(u, v) < key[v]
                  then key[v] ← w(u, v)        ⊳ implicit DECREASE-KEY
                       π[v] ← u

Handshaking Lemma ⇒ Θ(E) implicit DECREASE-KEYs in total.
Time = Θ(V) · T_EXTRACT-MIN + Θ(E) · T_DECREASE-KEY

Analysis of Prim (continued)

Time = Θ(V) · T_EXTRACT-MIN + Θ(E) · T_DECREASE-KEY

Q                 T_EXTRACT-MIN         T_DECREASE-KEY       Total
array             O(V)                  O(1)                 O(V²)
binary heap       O(lg V)               O(lg V)              O(E lg V)
Fibonacci heap    O(lg V) amortized     O(1) amortized       O(E + V lg V) worst case

MST algorithms

Kruskal's algorithm (see CLRS): uses the disjoint-set data structure (Lecture 10). Running time = O(E lg V).

Best to date: Karger, Klein, and Tarjan [1993]. Randomized algorithm. O(V + E) expected time.
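A minimal sketch of Kruskal's algorithm with a plain disjoint-set forest (path compression only; the union-by-rank refinement from Lecture 10 is omitted here for brevity, and the example graph is invented):

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm on vertices 0..n-1.

    edges: list of (weight, u, v) triples.
    Returns the MST as a list of (weight, u, v) triples.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression (halving)
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):  # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:               # (u, v) joins two components: keep it
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Same shape of example as before, 0-indexed:
# edges 0-1:1, 1-2:2, 0-2:3, 0-3:4, 2-3:5.
mst = kruskal_mst(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 0, 3), (5, 2, 3)])
```

The sort dominates at O(E lg E) = O(E lg V); the union-find operations are nearly linear, matching the slide's O(E lg V) bound.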