Lecture 1: The Greedy Method (Lecturer: 虞台文)

Content
What is it?
Activity Selection Problem
Fractional Knapsack Problem
Minimum Spanning Tree – Kruskal's Algorithm – Prim's Algorithm
Shortest Path Problem – Dijkstra's Algorithm
Huffman Codes

Lecture 1: The Greedy Method What is it?

The Greedy Method. A greedy algorithm always makes the choice that looks best at the moment. For some problems, this always gives a globally optimal solution; for others, it may give only a locally optimal one.

Main Components Configurations – different choices, collections, or values to find Objective function – a score assigned to configurations, which we want to either maximize or minimize

Example: Making Change. Problem – a dollar amount to reach and a collection of coin denominations to use to get there. Configuration – the dollar amount still to be returned to the customer, plus the coins already returned. Objective function – minimize the number of coins returned. Greedy solution – always return the largest coin you can. Is the solution always optimal?
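A minimal Python sketch of the greedy change-maker (the coin denominations are assumptions for illustration; they are exactly what decides whether the greedy choice is optimal):

def greedy_change(amount, coins=(25, 10, 5, 1)):
    # Return change for `amount` (in cents), always using the largest coin first.
    returned = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:        # greedy choice: largest coin that still fits
            amount -= coin
            returned.append(coin)
    return returned

print(greedy_change(63))             # [25, 25, 10, 1, 1, 1] -- optimal for US coins
print(greedy_change(6, (4, 3, 1)))   # [4, 1, 1] -- suboptimal: [3, 3] uses fewer coins

So the answer is no: greedy change-making is optimal for some coin systems (such as US coins) but not for all.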

Example: Largest k-out-of-n Sum. Problem – pick k numbers out of n numbers such that the sum of these k numbers is the largest. Exhaustive solution – there are C(n, k) choices; choose the one whose subset sum is the largest. Greedy solution – FOR i = 1 to k: pick out the largest remaining number and delete it from the input. ENDFOR. Is the greedy solution always optimal?
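The greedy loop above can be written directly in Python; a sketch (not from the slides) using a heap so the "pick the largest" step does not rescan the input k times:

import heapq

def largest_k_sum(nums, k):
    # Greedy choice: the k individually largest numbers give the largest sum.
    return sum(heapq.nlargest(k, nums))   # O(n log k)

print(largest_k_sum([3, 9, 1, 7, 5], 2))  # 16 (= 9 + 7)

Here the greedy solution is always optimal: swapping any chosen number for an unchosen one can only decrease the sum.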

Example: Shortest Paths on a Special Graph
Problem – Find a shortest path from v_0 to v_3. [Figure: a layered graph; the greedy solution picks the cheapest outgoing edge at each step.] Is the solution optimal?

Example: Shortest Paths on a Multi-stage Graph
Problem – Find a shortest path from v_0 to v_3. Is the greedy solution optimal? [Figure: the greedy path and the optimal path differ, so greedy is not optimal here.] What algorithm can be used to find the optimum?

Advantage and Disadvantage of the Greedy Method. Advantage – simple; works fast when it works. Disadvantage – does not always work (short-term choices can be disastrous in the long term); hard to prove correct.

Lecture 1: The Greedy Method Activity Selection Problem

Activity Selection Problem (Conference Scheduling Problem). Input: a set of activities S = {a_1, …, a_n}. Each activity a_i has a start time s_i and a finish time f_i, occupying the half-open interval a_i = [s_i, f_i). Two activities are compatible if and only if their intervals do not overlap. Output: a maximum-size subset of mutually compatible activities.

Example: Activity Selection Problem. Assume that the f_i's are sorted. [Figure: activities drawn as intervals on a timeline; the greedy method repeatedly selects the earliest-finishing activity compatible with those already chosen.] Is the solution optimal?

Activity Selection Algorithm

Greedy-Activity-Selector(s, f)
  // Assume that f_1 ≤ f_2 ≤ … ≤ f_n
  n ← length[s]
  A ← {1}
  j ← 1
  for i ← 2 to n
    if s_i ≥ f_j then
      A ← A ∪ {i}
      j ← i
  return A

Is the algorithm optimal?
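A runnable Python version of Greedy-Activity-Selector; a sketch assuming the activities are given as (start, finish) pairs already sorted by finish time, as the pseudocode requires:

def greedy_activity_selector(activities):
    # activities: list of (start, finish) pairs sorted by finish time;
    # activity i occupies the half-open interval [start, finish).
    selected = [0]                    # always take the earliest-finishing activity
    last_finish = activities[0][1]
    for i in range(1, len(activities)):
        start, finish = activities[i]
        if start >= last_finish:      # compatible with the last selected activity
            selected.append(i)
            last_finish = finish
    return selected

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(greedy_activity_selector(acts))  # [0, 3, 7]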

Proof of Optimality. Suppose A ⊆ S is an optimal solution whose first activity is k ≠ 1. Then B = (A – {k}) ∪ {1} is also optimal (why? because f_1 ≤ f_k, so activity 1 is compatible with everything in A – {k}). This shows the greedy choice can be applied to the first selection. The problem is then reduced to activity selection on S′, the activities of S that are compatible with activity 1, and by the same argument the greedy choice retains optimality for each subsequent selection.

Lecture 1: The Greedy Method Fractional Knapsack Problem

The Fractional Knapsack Problem. Given: a set S of n items, each item i having a positive benefit b_i and a positive weight w_i. Goal: choose items, allowing fractional amounts, to maximize the total benefit while keeping the total weight at most W.

The Fractional Knapsack Problem wi :wi : bi :bi : ml8 ml2 ml6 ml1 ml $12$32$40$30$50 Items: 3 Value: ($ per ml ) ml Solution: 1 ml of 5 2 ml of 3 6 ml of 4 1 ml of 2 “knapsack”

The Fractional Knapsack Algorithm

Greedy choice: keep taking the item with the highest value (benefit per unit weight).

Algorithm fractionalKnapsack(S, W)
  Input: set S of items, each with benefit b_i and weight w_i; maximum total weight W
  Output: amount x_i of each item i, maximizing total benefit with total weight at most W
  for each item i in S
    x_i ← 0
    v_i ← b_i / w_i      {value}
  w ← 0                  {total weight}
  while w < W
    remove item i with highest v_i
    x_i ← min{w_i, W − w}
    w ← w + min{w_i, W − w}

Does the algorithm always give an optimum?
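A Python sketch of fractionalKnapsack; the (benefit, weight) input format is an assumption, and a single sort replaces the repeated "remove item with highest v_i" step:

def fractional_knapsack(items, W):
    # items: list of (benefit, weight) pairs; returns (total benefit, amounts),
    # where amounts[i] is how much of item i to take (0 <= amounts[i] <= w_i).
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    amounts = [0.0] * len(items)
    total, w = 0.0, 0.0
    for i in order:                   # take items by decreasing value per unit
        if w >= W:
            break
        b_i, w_i = items[i]
        take = min(w_i, W - w)        # the whole item if it fits, else a fraction
        amounts[i] = take
        total += b_i * take / w_i
        w += take
    return total, amounts

# Benefits $12, $32, $40, $30, $50 as in the figure; the weights are assumed:
print(fractional_knapsack([(12, 4), (32, 2), (40, 6), (30, 8), (50, 1)], 10))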

Proof of Optimality. Suppose there were a better solution. Then there is an item i with x_i < w_i and a chosen item j with x_j > 0 such that v_i > v_j. Replacing an amount min{w_i − x_i, x_j} of item j with item i yields a strictly better solution, a contradiction. Thus there is no better solution than the greedy one.

Recall: 0-1 Knapsack Problem. Which boxes should be chosen to maximize the amount of money while still keeping the overall weight under 15 kg? Is the fractional knapsack algorithm applicable?

Exercise 1. Construct an example showing that the fractional knapsack algorithm does not give an optimal solution when applied to the 0-1 knapsack problem.

Lecture 1: The Greedy Method Minimum Spanning Tree

What is a Spanning Tree? A tree is a connected undirected graph that contains no cycles A spanning tree of a graph G is a subgraph of G that is a tree and contains all the vertices of G

Properties of a Spanning Tree. A spanning tree of an n-vertex undirected graph has exactly n − 1 edges. It connects all the vertices in the graph. A spanning tree has no cycles. [Figure: an undirected graph on vertices A, B, C, D, E and several of its spanning trees.]

What is a Minimum Spanning Tree? A spanning tree of a graph G is a subgraph of G that is a tree and contains all the vertices of G. A minimum spanning tree is the spanning tree with the lowest total edge cost among all spanning trees of G.

Applications of MSTs Computer Networks – To find how to connect a set of computers using the minimum amount of wire Shipping/Airplane Lines – To find the fastest way between locations

Two Greedy Algorithms for MST. Kruskal's Algorithm – merges forests into a single tree by repeatedly adding small-cost edges. Prim's Algorithm – attaches vertices to a partially built tree by repeatedly adding small-cost edges.

Kruskal's Algorithm

[Figure: step-by-step run of Kruskal's algorithm on an example graph with vertices a, b, c, d, e, f, g, h, i, adding edges in order of increasing weight.]

Kruskal's Algorithm

MST-Kruskal(G, w)
  // G = (V, E) – graph; w: E → R+ – weight; T – tree
  T ← Ø
  for each vertex v ∈ V[G]
    Make-Set(v)                  // make a separate set for each vertex
  sort the edges by increasing weight w
  for each edge (u, v) ∈ E, in sorted order
    if Find-Set(u) ≠ Find-Set(v) // adding (u, v) forms no cycle
      T ← T ∪ {(u, v)}           // add the edge to the tree
      Union(u, v)                // combine the two sets
  return T

Time Complexity. In MST-Kruskal, the initialization T ← Ø takes O(1); the Make-Set loop takes O(|V|); sorting the edges takes O(|E| log |E|); and the main loop performs O(|E|) Find-Set tests and O(|V|) Union operations. The sort dominates, so the total running time is O(|E| log |E|).
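A compact Python sketch of MST-Kruskal; the union-find uses path compression, a standard refinement not spelled out in the pseudocode:

def mst_kruskal(n, edges):
    # edges: list of (weight, u, v) with vertices 0..n-1; returns the MST edges.
    parent = list(range(n))

    def find(x):                      # Find-Set with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):     # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding (u, v) forms no cycle
            tree.append((u, v, w))
            parent[ru] = rv           # Union
    return tree

edges = [(4, 0, 1), (8, 0, 2), (2, 1, 2), (6, 2, 3), (9, 1, 3)]
print(mst_kruskal(4, edges))          # [(1, 2, 2), (0, 1, 4), (2, 3, 6)]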

Prim's Algorithm

[Figure: step-by-step run of Prim's algorithm on the same example graph with vertices a, b, c, d, e, f, g, h, i, growing the tree one vertex at a time.]

Prim's Algorithm

MST-Prim(G, w, r)
  // G = (V, E) – graph; w: E → R+ – weight; r – starting vertex
  // Q – priority queue; Key[v] – key of vertex v; π[v] – parent of vertex v;
  // Adj[v] – adjacency list of v
  Q ← V[G]                        // initially Q holds all vertices
  for each u ∈ Q
    Key[u] ← ∞                    // initialize all keys to ∞
  Key[r] ← 0                      // r is the first tree node
  π[r] ← Nil
  while Q ≠ Ø
    u ← Extract-Min(Q)            // get the node with minimum key
    for each v ∈ Adj[u]
      if v ∈ Q and w(u, v) < Key[v]  // a lighter edge into v was found
        π[v] ← u
        Key[v] ← w(u, v)

Time Complexity. With a binary-heap priority queue, MST-Prim performs |V| Extract-Min operations and at most |E| key updates, each costing O(log |V|), for a total running time of O(|E| log |V|).
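A Python sketch of MST-Prim using heapq. Python's heap has no Decrease-Key, so instead of updating Key[v] in place this version pushes a new entry and lazily skips stale ones, a standard substitute for the pseudocode's priority-queue updates:

import heapq

def mst_prim(adj, r=0):
    # adj[u]: list of (weight, v) neighbours; returns {v: parent of v} for the
    # MST grown from the starting vertex r.
    in_tree = [False] * len(adj)
    parent = {}
    heap = [(0, r, None)]             # (Key, vertex, candidate parent)
    while heap:
        key, u, pi = heapq.heappop(heap)   # Extract-Min
        if in_tree[u]:
            continue                  # stale entry: u was attached earlier
        in_tree[u] = True
        parent[u] = pi
        for w, v in adj[u]:
            if not in_tree[v]:        # a candidate light edge into v
                heapq.heappush(heap, (w, v, u))
    return parent

adj = {0: [(4, 1), (8, 2)], 1: [(4, 0), (2, 2), (9, 3)],
       2: [(8, 0), (2, 1), (6, 3)], 3: [(9, 1), (6, 2)]}
print(mst_prim(adj))                  # {0: None, 1: 0, 2: 1, 3: 2}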

Optimality. Kruskal's Algorithm merges forests into a tree by repeatedly adding small-cost edges; Prim's Algorithm attaches vertices to a partially built tree by repeatedly adding small-cost edges. Are the algorithms optimal? Yes.

Lecture 1: The Greedy Method Shortest Path Problem

Shortest Path Problem (SPP). Single-Source SPP – given a graph G = (V, E) and a weight function w: E → R+, find a shortest path from a source node s ∈ V to every other node v ∈ V. All-Pairs SPP – given a graph G = (V, E) and a weight function w: E → R+, find a shortest path between each pair of nodes in G.

Dijkstra's Algorithm Dijkstra's algorithm, named after its discoverer, Dutch computer scientist Edsger Dijkstra, is an algorithm that solves the single-source shortest path problem for a directed graph with nonnegative edge weights.

Dijkstra's Algorithm. Start from the source vertex s. Take its adjacent nodes and update their current shortest distances. Select the vertex with the smallest tentative distance among the remaining vertices. Update the current shortest distances of its adjacent vertices where necessary, i.e. when the new distance is less than the existing value. Stop when all the vertices have been checked.

Dijkstra's Algorithm

[Figure: step-by-step run of Dijkstra's algorithm on an example graph with vertices s, u, v, x, y, starting from source s with d[s] = 0 and all other distances ∞; at each step the unvisited vertex with the smallest tentative distance is finalized and the distances of its neighbours are relaxed.]

Dijkstra's Algorithm

Dijkstra(G, w, s)
  // G = (V, E) – graph; w: E → R+ – weight; s – source
  // d[v] – current shortest distance from s to v
  // S – set of nodes whose shortest distance is known
  // Q – set of nodes whose shortest distance is not yet known
  for each vertex v ∈ V[G]
    d[v] ← ∞                      // initialize all distances to ∞
    π[v] ← Nil
  d[s] ← 0                        // the distance of the source is 0
  S ← Ø
  Q ← V[G]
  while Q ≠ Ø
    u ← Extract-Min(Q)            // get the minimum in Q
    S ← S ∪ {u}                   // add it to the known set
    for each vertex v ∈ Adj[u]
      if d[v] > d[u] + w(u, v)    // the new distance is shorter
        d[v] ← d[u] + w(u, v)
        π[v] ← u
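A Python sketch of Dijkstra's algorithm, again with a lazy-deletion heap standing in for Extract-Min and the distance updates; the example graph uses the walkthrough's vertices s, u, v, x, y, with assumed edge weights:

import heapq

def dijkstra(adj, s):
    # adj[u]: list of (v, weight) pairs, weights nonnegative; returns {v: d[v]}.
    dist = {s: 0}
    done = set()                      # S: nodes whose shortest distance is known
    heap = [(0, s)]
    while heap:
        d_u, u = heapq.heappop(heap)
        if u in done:
            continue                  # stale entry
        done.add(u)
        for v, w in adj[u]:
            if dist.get(v, float('inf')) > d_u + w:   # relax edge (u, v)
                dist[v] = d_u + w
                heapq.heappush(heap, (dist[v], v))
    return dist

adj = {'s': [('u', 10), ('x', 5)], 'u': [('v', 1), ('x', 2)],
       'x': [('u', 3), ('y', 2)], 'y': [('v', 6)], 'v': []}
print(dijkstra(adj, 's'))             # {'s': 0, 'u': 8, 'x': 5, 'y': 7, 'v': 9}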

Lecture 1: The Greedy Method Huffman Codes

Huffman code is a technique for compressing data using a variable-length code. Huffman's greedy algorithm looks at the frequency of each character and encodes each character as a binary string in an optimal way.

Example. Suppose we have data consisting of 100,000 characters with the following frequencies:

  Character:  a       b       c       d       e      f
  Frequency:  45,000  13,000  12,000  16,000  9,000  5,000

Fixed vs. Variable Length Codes. For the same 100,000-character data:

  Fixed-length code (3 bits per character):
    3 × 45,000 + 3 × 13,000 + 3 × 12,000 + 3 × 16,000 + 3 × 9,000 + 3 × 5,000 = 300,000 bits
  Variable-length code (1 bit for a; 3 bits for b, c, d; 4 bits for e, f):
    1 × 45,000 + 3 × 13,000 + 3 × 12,000 + 3 × 16,000 + 4 × 9,000 + 4 × 5,000 = 224,000 bits

Prefix Codes. A prefix code is a variable-length code in which no codeword is a prefix of any other codeword. [Figure: the code tree with leaves a:45, b:13, c:12, d:16, e:9, f:5 (frequencies in %); branches are labelled 0 and 1.] The prefix property makes encoding and decoding unambiguous: for example, the string "aceabfd" encodes to a single bit string that decodes back to exactly "aceabfd".
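Decoding a prefix code needs no lookahead; a Python sketch, where the codeword table is an assumption (the standard CLRS codes for this example, since the slide's tree figure did not survive):

# Assumed codeword table for a, b, c, d, e, f:
CODE = {'a': '0', 'b': '101', 'c': '100', 'd': '111', 'e': '1101', 'f': '1100'}

def encode(text):
    return ''.join(CODE[ch] for ch in text)

def decode(bits):
    # Prefix property: the first codeword that matches is the only one that can.
    inverse = {v: k for k, v in CODE.items()}
    out, buf = [], ''
    for bit in bits:
        buf += bit
        if buf in inverse:            # a complete codeword has been read
            out.append(inverse[buf])
            buf = ''
    return ''.join(out)

bits = encode('aceabfd')
print(bits, '->', decode(bits))       # 0100110101011100111 -> aceabfd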

Huffman-Code Algorithm

[Figure: step-by-step construction of the Huffman tree for a:45, b:13, c:12, d:16, e:9, f:5 (frequencies in %). The two least-frequent nodes are merged repeatedly: f:5 and e:9 into a node of weight 14; c:12 and b:13 into 25; the 14-node and d:16 into 30; the 25-node and the 30-node into 55; finally a:45 and the 55-node into the root of weight 100. Huffman tree built.]

Huffman-Code Algorithm

Huffman(C)
  n ← |C|
  Q ← C                            // min-priority queue keyed on frequency
  for i ← 1 to n − 1
    z ← Allocate-Node()
    x ← left[z] ← Extract-Min(Q)   // least frequent
    y ← right[z] ← Extract-Min(Q)  // next least frequent
    f[z] ← f[x] + f[y]             // update frequency
    Insert(Q, z)
  return Extract-Min(Q)            // the root of the Huffman tree
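A runnable Python sketch of Huffman(C), with heapq as the min-priority queue; the tie-breaking counter inside each tuple is an implementation detail, not part of the pseudocode:

import heapq
import itertools

def huffman(freq):
    # freq: {character: frequency}; returns {character: codeword}.
    counter = itertools.count()       # breaks frequency ties so tuples compare
    # A node is (f, tiebreak, char, left, right); leaves carry a character.
    heap = [(f, next(counter), ch, None, None) for ch, f in freq.items()]
    heapq.heapify(heap)
    for _ in range(len(freq) - 1):    # n - 1 merges build the whole tree
        x = heapq.heappop(heap)       # least frequent
        y = heapq.heappop(heap)       # next least frequent
        z = (x[0] + y[0], next(counter), None, x, y)   # f[z] = f[x] + f[y]
        heapq.heappush(heap, z)
    codes = {}

    def walk(node, prefix):           # left edge = '0', right edge = '1'
        _, _, ch, left, right = node
        if ch is not None:
            codes[ch] = prefix or '0'
        else:
            walk(left, prefix + '0')
            walk(right, prefix + '1')

    walk(heap[0], '')
    return codes

freq = {'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}
print(huffman(freq))   # a gets a 1-bit code; e and f get the longest codes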

Optimality Exercise