
1 Fibonacci heaps, and applications

2 Yet a better MST algorithm (Fredman and Tarjan)

Iteration i: we grow a forest, tree by tree, as follows. Start with a singleton vertex and continue as in Prim's algorithm until either
1) the size of the heap is larger than k_i,
2) the next edge picked is connected to an already grown tree, or
3) the heap is empty (if the graph is connected, this will happen only at the very end).
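A rough Python sketch of one tree-growing step, under illustrative assumptions: `graph[v]` is a list of `(weight, u)` pairs, and a binary heap of edges with lazy deletion stands in for the vertex-keyed Fibonacci heap, so the size test is only an approximation of the bound k_i on the heap size.

```python
import heapq

def grow_tree(graph, start, k, tree_of, tid):
    """Grow one tree from `start` as in Prim's algorithm, stopping when the
    heap grows past k, the next edge hits an already-grown tree, or the heap
    empties.  `tree_of` maps each vertex to its tree id (absent = unreached)."""
    tree_of[start] = tid
    heap = [(w, start, u) for (w, u) in graph[start]]
    heapq.heapify(heap)
    edges = []
    while heap:
        if len(heap) > k:                       # stop rule 1: heap too big
            return edges, "heap_too_big"
        w, v, u = heapq.heappop(heap)
        if tree_of.get(u) == tid:
            continue                            # stale entry, skip
        edges.append((v, u, w))
        if tree_of.get(u) is not None:          # stop rule 2: hit another tree
            return edges, "hit_tree"
        tree_of[u] = tid
        for (w2, x) in graph[u]:
            if tree_of.get(x) != tid:
                heapq.heappush(heap, (w2, u, x))
    return edges, "heap_empty"                  # stop rule 3
```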

3 Contract each tree into a single vertex and start iteration i+1.

How do we contract? Do a DFS on each tree, marking every vertex with the number of the tree that contains it. Each edge e then gets the two numbers l(e), h(e) of the trees at its endpoints. If h(e) = l(e), remove e (it is a self-loop). Bucket sort (stable) by h(e) and then by l(e); parallel edges then become consecutive, so we can easily remove them. O(m) time overall.
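A sketch of the contraction step under the same illustrative assumptions (`tree_of` maps each vertex to its tree number, edges are `(u, v, w)` triples); two stable bucket sorts make parallel edges consecutive, as described above, and a linear scan keeps the lightest copy of each.

```python
def contract(edges, tree_of, num_trees):
    """Relabel each endpoint by its tree number, drop self-loops, and keep
    only the lightest copy of each parallel edge, in O(m) time."""
    relabeled = []
    for (u, v, w) in edges:
        lo, hi = sorted((tree_of[u], tree_of[v]))
        if lo != hi:                       # h(e) = l(e): self-loop, drop it
            relabeled.append((lo, hi, w))
    # stable bucket sort by hi, then by lo, so equal (lo, hi) pairs group up
    for key in (1, 0):
        buckets = [[] for _ in range(num_trees)]
        for e in relabeled:
            buckets[e[key]].append(e)
        relabeled = [e for b in buckets for e in b]
    # parallel edges are now consecutive: keep the lightest of each run
    out = []
    for e in relabeled:
        if out and out[-1][:2] == e[:2]:
            if e[2] < out[-1][2]:
                out[-1] = e
            continue
        out.append(e)
    return out
```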

4 Analysis: each iteration takes linear time.

Let n_i be the number of vertices in the i-th iteration. We do O(m) inserts, O(m) decrease-keys, and O(n_i) delete-mins; since the heap never holds more than k_i vertices, the total is O(n_i log(k_i) + m). Set k_i = 2^(2m/n_i), so the work per iteration is O(m).

5 How many iterations do we have?

Every tree in iteration i is incident to at least k_i edges. So

n_{i+1} * k_i <= 2m_i <= 2m
==> n_{i+1} <= 2m_i / k_i <= 2m / k_i
==> k_{i+1} = 2^(2m/n_{i+1}) >= 2^(k_i)

6 Once k_i >= n we stop. So the number of iterations is bounded by the minimum i such that log^(i)(n) <= 2m/n:

j = min{ i | log^(i)(n) <= 2m/n } = β(m,n)

This runs in O(m β(m,n)).

7 Summary

The overall complexity of the algorithm is O(m β(m,n)), where β(m,n) = min{ i | log^(i)(n) <= 2m/n }. For every m >= n, β(m,n) <= log*(n). For m > n log(n) the algorithm degenerates to Prim's. One can prove that O(m β(m,n)) = O(n log n + m).
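The function β(m,n) from the summary is easy to compute directly; a minimal sketch:

```python
import math

def beta(m, n):
    """beta(m, n) = min{ i : log^(i)(n) <= 2m/n }, where log^(i) is the
    i-fold iterated logarithm; it bounds the number of iterations."""
    i, x = 0, float(n)
    while x > 2 * m / n:
        x = math.log2(x)
        i += 1
    return i
```

Because β(m,n) <= log*(n), it is at most 5 for any n that fits in a machine word.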

8 So our record is O(m β(m,n)); can we do better? Where is the bottleneck now? We may scan an edge β(m,n) times: when we abandon a heap, we rescan one edge per vertex in the heap. Idea: delay the scanning of heavy edges.

9 Packets (Gabow, Galil, Spencer, Tarjan)

Group the edges incident to each vertex into packets of size p each. Sort each packet. Treat each packet as a single edge: its first (lightest) edge.
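A minimal sketch of the packet construction, assuming each adjacency list is a plain list of `(weight, neighbor)` pairs; every vertex gets packets of size p plus at most one undersized one, and sorting all packets costs O(m log(p)) total.

```python
def make_packets(adj, p):
    """Split the edges incident to each vertex into weight-sorted packets
    of size p (plus at most one undersized packet per vertex)."""
    packets = {}
    for v, edges in adj.items():
        packets[v] = [sorted(edges[i:i + p]) for i in range(0, len(edges), p)]
    return packets
```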

10 Working with packets

When you extract the min from the heap, it is associated with a packet whose top edge is (u,v). Add (u,v) to the tree, delete (u,v) from its packet, and relax this packet of u. Then traverse the packets of v and relax each of them. How do you relax a packet?

11 Relaxing packet p of vertex v

Check the smallest edge (v,u) in p:
- If u is already in the tree, discard (v,u) and recur.
- If u is not in the heap, insert it into the heap with key w(v,u).
- If u is in the heap: if the key of u is larger than w(v,u), decrease its key. Then let p' be the packet with the larger weight among the current packet and the packet previously associated with u; discard the first edge of p' and recur on p'.
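A sketch of the relaxation rule above, with a plain dict (`key`) standing in for the Fibonacci heap, so insert and decrease-key are simple assignments; `pkt_of` remembers which packet currently represents each heap vertex, and packets are weight-sorted lists of `(w, u)` pairs. All names here are illustrative, not the paper's.

```python
def relax_packet(packet, in_tree, key, pkt_of):
    """Relax one packet (lightest edge first).  `in_tree` is the set of
    tree vertices; `key` maps heap vertices to their current key."""
    while packet:
        w, u = packet[0]
        if u in in_tree:                 # edge leads into the tree: discard, recur
            packet.pop(0)
            continue
        if u not in key:                 # u not in heap: insert it
            key[u] = w
            pkt_of[u] = packet
            return
        # u already in heap: keep the lighter packet as its representative,
        # discard the head of the heavier one and recur on it
        if w < key[u]:
            loser = pkt_of[u]
            key[u], pkt_of[u] = w, packet   # decrease-key
        else:
            loser = packet
        loser.pop(0)
        packet = loser
```

The key point is that every pass around the loop either returns or discards an edge, which is what lets the extra work be charged to discarded edges.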

12 Analysis

Initialization: O(m) to partition the edges into packets, O(m log(p)) to sort the packets. Each iteration: O(n_i log(k_i)) for the extract-mins, and O(m/p) for the packet work; any additional work per packet we can charge to an edge that we discard. Summing up:

O( m + m log(p) + Σ over iterations (n_i log(k_i) + m/p) )

13 Analysis (cont.)

O( m log(p) + Σ over iterations (n_i log(k_i) + m/p) )

Set k_i = 2^(2m/(p*n_i)), so the work per iteration is O(m/p). Every tree in iteration i is incident to at least k_i packets, so

n_{i+1} * k_i <= 2m/p
==> n_{i+1} <= 2m / (p*k_i)
==> k_{i+1} = 2^(2m/(p*n_{i+1})) >= 2^(k_i)

Set k_1 = 2^(2m/n); the work in the first phase is O(m). So we have at most β(m,n) iterations.

14 Analysis (cont.)

The running time is

O( m log(p) + β(m,n) * m/p )

Choose p = β(m,n), and we get O( m log(β(m,n)) ).

15 But we cheated...

If we want the running time per iteration to be O(m/p), how do we do contractions? Use union-find (contract by uniting vertices and concatenating their adjacency lists). You then get an α(m,n) overhead when you relax a packet, and overall

O( m α(m,n) + m log(β(m,n)) + Σ n_i ) = O( m log(β(m,n)) )
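The union-find structure the slide refers to can be sketched as follows; path compression plus union by size is what gives the near-constant (inverse-Ackermann) amortized cost per operation. Concatenating the adjacency lists is omitted here.

```python
class UnionFind:
    """Disjoint sets with path compression and union by size."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False                 # already contracted together
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx             # hang the smaller root on the larger
        self.size[rx] += self.size[ry]
        return True
```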

16 Furthermore, it wasn't our only cheat...

We cannot partition the edges incident to a vertex into packets of size exactly p, so there can be at most one undersized packet per vertex. When you contract, merge the undersized packets. But how do you merge undersized packets?

17 Use a little F-heap to represent each packet. This gives a log(p) overhead to relax a packet, but the overall time is still O( m log(β(m,n)) ).