Chapter 23 Minimum Spanning Tree

Presentation transcript:

CS 253: Algorithms, Chapter 23: Minimum Spanning Trees (Credit: Dr. George Bebis)

Minimum Spanning Trees
- Spanning tree: a tree (i.e., a connected, acyclic graph) that contains all the vertices of the graph
- Minimum spanning tree (MST): a spanning tree with the minimum sum of edge weights
- Spanning forest: if a graph is not connected, there is a spanning tree for each connected component of the graph
[Figure: the running example, an undirected weighted graph on vertices a through i with edge weights between 1 and 14]

Sample applications of MST
- Find the least expensive way to connect a set of cities, terminals, computers, etc.
- A town has a set of houses; each road connects exactly two houses, and repairing road (u, v) costs w(u, v). Problem: repair enough roads so that everyone stays connected and the total repair cost is minimum.

Properties of Minimum Spanning Trees
- The minimum spanning tree is not necessarily unique
- An MST has no cycles (by definition)
- Number of edges in an MST: |V| - 1

Prim’s Algorithm
- Start from an arbitrary “root”: VA = {a}
- At each step, find a light edge crossing the cut (VA, V - VA), i.e., a minimum-weight edge with exactly one endpoint in VA, and add this edge to the set A (the edges in A always form a single tree; a small sketch of the light-edge step follows below)
- Repeat until the tree spans all vertices
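As an illustration of the light-edge step, the sketch below scans all edges and returns a minimum-weight edge with exactly one endpoint inside the current tree. This is only a naive O(E) version of the step (the following slides show how a priority queue does it faster); the function name light_edge and the (weight, u, v) edge-list format are my own assumptions, and the edge list itself is reconstructed from the sorted edge list in the Kruskal example later in this chapter.

def light_edge(edges, in_tree):
    """Return a minimum-weight edge (w, u, v) crossing the cut
    (in_tree, V - in_tree), or None if no edge crosses it."""
    crossing = [(w, u, v) for (w, u, v) in edges
                if (u in in_tree) != (v in in_tree)]   # exactly one endpoint inside
    return min(crossing, default=None)

# Running example graph (reconstructed from the Kruskal slide's sorted edge list)
EDGES = [(1, 'h', 'g'), (2, 'c', 'i'), (2, 'g', 'f'), (4, 'a', 'b'),
         (4, 'c', 'f'), (6, 'i', 'g'), (7, 'c', 'd'), (7, 'i', 'h'),
         (8, 'a', 'h'), (8, 'b', 'c'), (9, 'd', 'e'), (10, 'e', 'f'),
         (11, 'b', 'h'), (14, 'd', 'f')]

# Starting from VA = {a}, the light edge is (a, b) with weight 4.
print(light_edge(EDGES, {'a'}))   # (4, 'a', 'b')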

How to Find Light Edges Quickly?
- Use a priority queue Q that contains the vertices not yet included in the tree, i.e., V - VA (e.g., VA = {a}, Q = {b, c, d, e, f, g, h, i})
- Associate a key with each vertex v in Q: key[v] = minimum weight of any edge (u, v) connecting v to VA
- After adding a new node to VA, update the keys of all nodes adjacent to it; e.g., after adding a to the tree, key[b] = 4 and key[h] = 8

Example (Prim’s algorithm, starting from a)
- Initially: key[a] = 0, all other keys = ∞; Q = {a, b, c, d, e, f, g, h, i}; VA = ∅
- Extract-MIN(Q) → a; update key[b] = 4, π[b] = a and key[h] = 8, π[h] = a
- Q = {b, c, d, e, f, g, h, i}, VA = {a}; Extract-MIN(Q) → b

Example (continued)
- Q = {c, d, e, f, g, h, i}, VA = {a, b}: key[c] = 8, π[c] = b; key[h] = 8, π[h] = a (unchanged); Extract-MIN(Q) → c
- Q = {d, e, f, g, h, i}, VA = {a, b, c}: key[d] = 7, π[d] = c; key[f] = 4, π[f] = c; key[i] = 2, π[i] = c; Extract-MIN(Q) → i

Example (continued)
- Q = {d, e, f, g, h}, VA = {a, b, c, i}: key[h] = 7, π[h] = i; key[g] = 6, π[g] = i; Extract-MIN(Q) → f
- Q = {d, e, g, h}, VA = {a, b, c, i, f}: key[g] = 2, π[g] = f; key[d] = 7, π[d] = c (unchanged); key[e] = 10, π[e] = f; Extract-MIN(Q) → g

Example (continued)
- Q = {d, e, h}, VA = {a, b, c, i, f, g}: key[h] = 1, π[h] = g; Extract-MIN(Q) → h
- Q = {d, e}, VA = {a, b, c, i, f, g, h}: Extract-MIN(Q) → d

Example (continued)
- Q = {e}, VA = {a, b, c, i, f, g, h, d}: key[e] = 9, π[e] = d; Extract-MIN(Q) → e
- Q = ∅, VA = {a, b, c, i, f, g, h, d, e}: the algorithm terminates; the edges (v, π[v]) chosen along the way form the MST

PRIM(V, E, w, r)                                  % r: starting vertex
    Q ← ∅
    for each u ∈ V do
        key[u] ← ∞
        π[u] ← NIL
        INSERT(Q, u)                              % O(V) total if Q is implemented as a min-heap
    DECREASE-KEY(Q, r, 0)                         % key[r] ← 0; O(lgV)
    while Q ≠ ∅ do                                % executed |V| times; min-heap operations: O(VlgV)
        u ← EXTRACT-MIN(Q)                        % takes O(lgV)
        for each v ∈ Adj[u] do                    % executed O(E) times in total
            if v ∈ Q and w(u, v) < key[v]         % constant
                then π[v] ← u
                     DECREASE-KEY(Q, v, w(u, v))  % takes O(lgV); O(ElgV) in total
Total time: O(VlgV + ElgV) = O(ElgV)
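Below is a minimal Python sketch of this pseudocode, using the standard-library heapq module as the min-priority queue. Since heapq has no DECREASE-KEY operation, the usual workaround of pushing a new entry and skipping stale ones is used instead; the function name prim and the (weight, u, v) edge-list format are my own assumptions, and the example graph is the one reconstructed from the figure.

import heapq
from collections import defaultdict

def prim(edges, root):
    """Return the MST edges chosen by Prim's algorithm as (pi[v], v, w) triples."""
    adj = defaultdict(list)                    # adjacency lists: u -> [(w, v), ...]
    for w, u, v in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    key = {u: float('inf') for u in adj}       # key[v] = lightest known edge weight to the tree
    pi = {u: None for u in adj}                # pi[v]  = parent of v in the tree
    key[root] = 0
    heap = [(0, root)]                         # stand-in for the min-priority queue Q
    in_tree = set()
    mst = []
    while heap:
        k, u = heapq.heappop(heap)             # EXTRACT-MIN
        if u in in_tree:                       # skip stale (already extracted) entries
            continue
        in_tree.add(u)
        if pi[u] is not None:
            mst.append((pi[u], u, k))
        for w, v in adj[u]:
            if v not in in_tree and w < key[v]:    # "v in Q and w(u, v) < key[v]"
                key[v], pi[v] = w, u
                heapq.heappush(heap, (w, v))   # replaces DECREASE-KEY
    return mst

EDGES = [(1, 'h', 'g'), (2, 'c', 'i'), (2, 'g', 'f'), (4, 'a', 'b'),
         (4, 'c', 'f'), (6, 'i', 'g'), (7, 'c', 'd'), (7, 'i', 'h'),
         (8, 'a', 'h'), (8, 'b', 'c'), (9, 'd', 'e'), (10, 'e', 'f'),
         (11, 'b', 'h'), (14, 'd', 'f')]
tree = prim(EDGES, 'a')
print(sum(w for _, _, w in tree))              # total MST weight

For this example graph the printed total should be 37, matching the tree traced in the example slides above.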

Prim’s Algorithm: total time O(ElgV)
- Prim’s algorithm is a “greedy” algorithm: greedy algorithms find solutions through a sequence of choices that are “locally” optimal at each step
- Nevertheless, Prim’s greedy strategy produces a globally optimal solution!

Kruskal’s Algorithm
- Start with each vertex being its own component
- Repeatedly merge two components into one by choosing the lightest edge that connects them
- Which components should be merged at each iteration? Scan the set of edges in monotonically increasing order of weight and choose the smallest edge whose endpoints lie in different components
- Example: in the state shown in the original figure, with components {g, h, f}, {c, i}, and {a, b} already formed, we would add edge (c, f)

Example (Kruskal’s algorithm)
Edges sorted by weight: 1: (h, g); 2: (c, i), (g, f); 4: (a, b), (c, f); 6: (i, g); 7: (c, d), (i, h); 8: (a, h), (b, c); 9: (d, e); 10: (e, f); 11: (b, h); 14: (d, f)

Initial components: {a}, {b}, {c}, {d}, {e}, {f}, {g}, {h}, {i}
Add (h, g)    → {g, h}, {a}, {b}, {c}, {d}, {e}, {f}, {i}
Add (c, i)    → {g, h}, {c, i}, {a}, {b}, {d}, {e}, {f}
Add (g, f)    → {g, h, f}, {c, i}, {a}, {b}, {d}, {e}
Add (a, b)    → {g, h, f}, {c, i}, {a, b}, {d}, {e}
Add (c, f)    → {g, h, f, c, i}, {a, b}, {d}, {e}
Ignore (i, g) (both endpoints already in the same component)
Add (c, d)    → {g, h, f, c, i, d}, {a, b}, {e}
Ignore (i, h)
Add (a, h)    → {g, h, f, c, i, d, a, b}, {e}
Ignore (b, c)
Add (d, e)    → {g, h, f, c, i, d, a, b, e}
Ignore (e, f), (b, h), (d, f)

Operations on Disjoint Sets
- Kruskal’s algorithm uses disjoint-set data structures (UNION-FIND, Chapter 21) to determine whether an edge connects vertices in different components
- MAKE-SET(u): creates a new set whose only member is u
- FIND-SET(u): returns a representative element of the set that contains u; it returns the same value for every element of that set
- UNION(u, v): unites the sets that contain u and v, say Su and Sv; e.g., if Su = {r, s, t, u} and Sv = {v, x, y}, then after UNION(u, v) the combined set is {r, s, t, u, v, x, y}
- As seen in Chapter 21, FIND-SET can be done in O(lgn) or O(1) time and UNION in O(1)
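As a concrete illustration of these three operations, here is a minimal disjoint-set sketch in Python with union by rank and path compression (the usual Chapter 21 heuristics); the class and method names are my own, not from the slides.

class DisjointSet:
    """Union-find with union by rank and path compression."""
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, u):                 # MAKE-SET(u)
        self.parent[u] = u
        self.rank[u] = 0

    def find_set(self, u):                 # FIND-SET(u), with path compression
        if self.parent[u] != u:
            self.parent[u] = self.find_set(self.parent[u])
        return self.parent[u]

    def union(self, u, v):                 # UNION(u, v), by rank
        ru, rv = self.find_set(u), self.find_set(v)
        if ru == rv:
            return
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1

# Small usage example:
ds = DisjointSet()
for x in 'ruv':
    ds.make_set(x)
ds.union('u', 'v')
print(ds.find_set('r') == ds.find_set('u'))   # False: r is in a different set
print(ds.find_set('u') == ds.find_set('v'))   # True:  u and v were united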

KRUSKAL(V, E, w)
    A ← ∅
    for each vertex v ∈ V do                         % O(V)
        MAKE-SET(v)
    sort E into non-decreasing order by w            % O(ElgE)
    for each (u, v) taken from the sorted list do    % O(E) iterations
        if FIND-SET(u) ≠ FIND-SET(v)                 % O(lgV), using the disjoint-set (UNION-FIND) data structure
            then A ← A ∪ {(u, v)}
                 UNION(u, v)
    return A
Running time: O(V + ElgE + ElgV) = O(ElgE)
Kruskal’s algorithm is “greedy”, yet it produces a globally optimal solution.
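A minimal Python sketch of this pseudocode follows. To keep the block self-contained, a simplified union-find (path compression only, no union by rank) is written inline; the DisjointSet class sketched above could be used instead. The kruskal function name and the (weight, u, v) edge format are assumptions, and the example graph is the one from the figure.

def kruskal(vertices, edges):
    """Return the MST edges chosen by Kruskal's algorithm as (w, u, v) triples."""
    parent = {v: v for v in vertices}          # MAKE-SET for every vertex

    def find_set(u):                           # FIND-SET with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    mst = []
    for w, u, v in sorted(edges):              # scan edges in non-decreasing order of weight
        ru, rv = find_set(u), find_set(v)
        if ru != rv:                           # endpoints lie in different components
            mst.append((w, u, v))
            parent[rv] = ru                    # UNION (no rank heuristic, for brevity)
    return mst

EDGES = [(1, 'h', 'g'), (2, 'c', 'i'), (2, 'g', 'f'), (4, 'a', 'b'),
         (4, 'c', 'f'), (6, 'i', 'g'), (7, 'c', 'd'), (7, 'i', 'h'),
         (8, 'a', 'h'), (8, 'b', 'c'), (9, 'd', 'e'), (10, 'e', 'f'),
         (11, 'b', 'h'), (14, 'd', 'f')]
tree = kruskal('abcdefghi', EDGES)
print(tree)                                    # 8 edges, total weight 37

The edges it selects should match the Add/Ignore trace in the example above (total weight 37).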

Problem 1: Compare Prim’s algorithm with Kruskal’s algorithm assuming:
(a) Sparse graphs: E = O(V)
    Kruskal (UNION-FIND): O(ElgE) = O(VlgV)
    Prim (binary heap):   O(ElgV) = O(VlgV)
(b) Dense graphs: E = O(V²)
    Kruskal: O(ElgE) = O(V² lg V²) = O(2 V² lgV) = O(V² lgV)
    Prim (binary heap): O(ElgV) = O(V² lgV)

Problem 2: Analyze the running time of Kruskal’s algorithm when the edge weights are integers in the range [1 … |V|].
ANSWER: Sorting can then be done in O(E) time (e.g., using counting sort). However, the overall running time does not change asymptotically: the FIND-SET/UNION operations still cost O(ElgV) in total, so the running time remains O(ElgV).
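As a sketch of why the sort drops to O(E) when the weights are small integers: the edges can be placed into one bucket per possible weight and read back in order, a counting/bucket-sort variant. The function name and the small example below are my own assumptions, not from the slides.

def sort_edges_by_small_weight(edges, max_weight):
    """Counting/bucket sort of (w, u, v) edges when 1 <= w <= max_weight.
    Runs in O(E + max_weight) time instead of O(E lg E)."""
    buckets = [[] for _ in range(max_weight + 1)]   # one bucket per possible weight
    for w, u, v in edges:
        buckets[w].append((w, u, v))
    return [e for bucket in buckets for e in bucket]

# With weights in [1 .. |V|], max_weight = |V|, so sorting costs O(E + V).
edges = [(3, 'a', 'b'), (1, 'b', 'c'), (3, 'a', 'c'), (2, 'c', 'd')]
print(sort_edges_by_small_weight(edges, max_weight=4))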

Problem 3: Suppose that some of the edge weights in a connected graph G are negative. Will Prim’s algorithm still work? What about Kruskal’s algorithm? Justify your answers.
ANSWER: Yes, both algorithms still work with negative weights; neither algorithm (nor its correctness argument) assumes that the weights are positive.

Problem 4: Analyze Prim’s algorithm assuming:
(a) an adjacency-list representation of G: O(ElgV)
(b) an adjacency-matrix representation of G: O(ElgV + V²)
(see the next two slides for the annotated pseudocode)

PRIM(V, E, w, r)                                  % adjacency-list representation, as before
    Q ← ∅
    for each u ∈ V do
        key[u] ← ∞
        π[u] ← NIL
        INSERT(Q, u)                              % O(V) total if Q is implemented as a min-heap
    DECREASE-KEY(Q, r, 0)                         % key[r] ← 0; O(lgV)
    while Q ≠ ∅ do                                % executed |V| times; min-heap operations: O(VlgV)
        u ← EXTRACT-MIN(Q)                        % takes O(lgV)
        for each v ∈ Adj[u] do                    % executed O(E) times in total
            if v ∈ Q and w(u, v) < key[v]         % constant
                then π[v] ← u
                     DECREASE-KEY(Q, v, w(u, v))  % takes O(lgV); O(ElgV) in total
Total time: O(VlgV + ElgV) = O(ElgV)

PRIM(V, E, w, r)                                  % adjacency-matrix representation
    Q ← ∅
    for each u ∈ V do
        key[u] ← ∞
        π[u] ← NIL
        INSERT(Q, u)                              % O(V) total if Q is implemented as a min-heap
    DECREASE-KEY(Q, r, 0)                         % key[r] ← 0; O(lgV)
    while Q ≠ ∅ do                                % executed |V| times; min-heap operations: O(VlgV)
        u ← EXTRACT-MIN(Q)                        % takes O(lgV)
        for (j = 0; j < |V|; j++) do              % executed O(V²) times in total
            if A[u][j] = 1 and j ∈ Q and w(u, j) < key[j]
                then π[j] ← u
                     DECREASE-KEY(Q, j, w(u, j))  % takes O(lgV); O(ElgV) in total
Total time: O(ElgV + V²)
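The sketch below mirrors this adjacency-matrix variant in Python: the same heapq-based queue as in the earlier Prim sketch, but neighbors are found by scanning a full row of the matrix, which is what contributes the extra O(V²) term. The function name, the use of None to mark absent edges (instead of a separate 0/1 matrix plus a weight function), and the small 4-vertex example are my own assumptions.

import heapq

def prim_matrix(W, root=0):
    """Prim's algorithm on a weight matrix W (W[u][v] is None if there is no edge).
    The row scans alone cost O(V^2) in total."""
    n = len(W)
    key = [float('inf')] * n
    pi = [None] * n
    key[root] = 0
    heap = [(0, root)]
    in_tree = [False] * n
    mst = []
    while heap:
        k, u = heapq.heappop(heap)             # EXTRACT-MIN (skip stale entries)
        if in_tree[u]:
            continue
        in_tree[u] = True
        if pi[u] is not None:
            mst.append((pi[u], u, k))
        for j in range(n):                     # scan row u of the matrix: O(V) per vertex
            wuj = W[u][j]
            if wuj is not None and not in_tree[j] and wuj < key[j]:
                key[j], pi[j] = wuj, u
                heapq.heappush(heap, (wuj, j))
    return mst

# Tiny 4-vertex example (hypothetical weights), symmetric matrix:
N = None
W = [[N, 1, 4, N],
     [1, N, 2, 7],
     [4, 2, N, 3],
     [N, 7, 3, N]]
print(prim_matrix(W))    # [(0, 1, 1), (1, 2, 2), (2, 3, 3)]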

Problem 5: Find an algorithm for the “maximum” spanning tree. That is, given an undirected weighted graph G, find a spanning tree of G of maximum cost. Prove the correctness of your algorithm.
ANSWER: Consider choosing the “heaviest” edge (i.e., the edge with the largest weight) crossing a cut; the generic proof can be modified easily to show that this approach works. Alternatively, multiply all weights by -1 and apply either Prim’s or Kruskal’s algorithm without any modification at all! (a small sketch of this second approach follows below)
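A minimal sketch of the negate-the-weights approach. It is written as a wrapper around any MST routine with the (vertices, edges) interface used in the earlier Kruskal sketch; the function name maximum_spanning_tree is my own, and the commented-out usage assumes the kruskal function and EDGES list defined earlier are in scope.

def maximum_spanning_tree(vertices, edges, mst_algorithm):
    """Maximum spanning tree by negating all weights, running an MST algorithm,
    and flipping the signs back on the edges it returns."""
    negated = [(-w, u, v) for (w, u, v) in edges]
    return [(-w, u, v) for (w, u, v) in mst_algorithm(vertices, negated)]

# Usage (assuming the kruskal function and EDGES list from the earlier sketch):
# print(maximum_spanning_tree('abcdefghi', EDGES, kruskal))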

Problem 6: Let T be an MST of a graph G, and let L be the sorted list of the edge weights of T. Show that for any other MST T’ of G, the list L is also the sorted list of the edge weights of T’.
Proof sketch: Kruskal’s algorithm can find T by considering its edges in the order given by L (breaking ties suitably). Similarly, if T’ is also an MST, Kruskal’s algorithm can find it in its sorted order L’. If L’ ≠ L, there is a contradiction: at the first position where L and L’ differ, Kruskal’s algorithm would have picked the smaller of the two weights, so the list containing the larger weight at that position could not have been produced. (For example, two different MSTs T and T’ of the same graph can still have the same sorted weight list, such as L = L’ = {1, 2}.)