
1 Lecture 1: The Greedy Method Lecturer: 虞台文

2 Content What is it? Activity Selection Problem Fractional Knapsack Problem Minimum Spanning Tree – Kruskal’s Algorithm – Prim’s Algorithm Shortest Path Problem – Dijkstra’s Algorithm Huffman Codes

3 Lecture 1: The Greedy Method What is it?

4 The Greedy Method A greedy algorithm always makes the choice that looks best at the moment. For some problems, this always gives a globally optimal solution. For others, it may give only a locally optimal one.

5 Main Components Configurations – different choices, collections, or values to find Objective function – a score assigned to configurations, which we want to either maximize or minimize

6 Example: Making Change Problem – A dollar amount to reach and a collection of coin amounts to use to get there. Configuration – A dollar amount yet to return to a customer plus the coins already returned Objective function – Minimize number of coins returned. Greedy solution – Always return the largest coin you can Is the solution always optimal?
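The change-making slide above can be sketched in Python (a runnable illustration, not the slides' code). It also answers the slide's closing question: greedy is optimal for the US coin system, but not for every coin system — the set {4, 3, 1} is a hypothetical counterexample.

```python
def greedy_change(amount, coins):
    """Return change greedily: always take the largest coin that still fits."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

# US coins: greedy happens to be optimal
print(greedy_change(63, [25, 10, 5, 1]))  # [25, 25, 10, 1, 1, 1]

# Non-canonical system: greedy uses 3 coins, but 3 + 3 uses only 2
print(greedy_change(6, [4, 3, 1]))        # [4, 1, 1]
```

So the answer to "Is the solution always optimal?" is no: it depends on the coin system.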

7 Example: Largest k-out-of-n Sum Problem – Pick k numbers out of n numbers such that the sum of these k numbers is the largest. Exhaustive solution – There are C(n, k) = n! / (k!(n−k)!) choices. – Choose the one whose subset sum is the largest. Greedy Solution – FOR i = 1 to k: pick out the largest remaining number and delete it from the input. ENDFOR Is the greedy solution always optimal?
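The greedy loop above can be written in one line in Python; `heapq.nlargest` performs exactly the "pick the largest, remove it, repeat" step (a runnable sketch, not the slides' code):

```python
import heapq

def largest_k_sum(nums, k):
    """Greedily pick the k largest numbers; their sum is the maximum k-subset sum."""
    return sum(heapq.nlargest(k, nums))

print(largest_k_sum([3, 9, 1, 7, 4], 2))  # 16  (9 + 7)
```

Here the greedy solution is always optimal: any subset omitting one of the k largest numbers can be improved by swapping it in.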

8 Example: Shortest Paths on a Special Graph Problem – Find a shortest path from v0 to v3. Greedy Solution

9 Example: Shortest Paths on a Special Graph Problem – Find a shortest path from v0 to v3. Greedy Solution Is the solution optimal?

10 Example: Shortest Paths on a Multi-stage Graph Problem – Find a shortest path from v0 to v3. Is the greedy solution optimal?

11 Example: Shortest Paths on a Multi-stage Graph Problem – Find a shortest path from v0 to v3. Is the greedy solution optimal? The optimal path (shown in the figure) is different.

12 Example: Shortest Paths on a Multi-stage Graph Problem – Find a shortest path from v0 to v3. Is the greedy solution optimal? The optimal path is different. What algorithm can be used to find the optimum?

13 Advantages and Disadvantages of the Greedy Method Advantages – Simple – Fast when they work Disadvantages – Do not always work  short-term choices can be disastrous in the long term – Hard to prove correct

14 Lecture 1: The Greedy Method Activity Selection Problem

15 Activity Selection Problem (Conference Scheduling Problem) Input: a set of activities S = {a1, …, an}. Each activity has a start time and a finish time: ai = [si, fi). Two activities are compatible if and only if their intervals do not overlap. Output: a maximum-size subset of mutually compatible activities.

16 Example: Activity Selection Problem Assume that f i ’s are sorted.

17 Example: Activity Selection Problem (figure: activities 1–11 drawn as intervals on a timeline from 0 to 16)

18 Example: Activity Selection Problem (figure: a selection of activities highlighted on the timeline) Is the solution optimal?

19 Example: Activity Selection Problem (figure: another selection highlighted on the timeline) Is the solution optimal?

20 Activity Selection Algorithm

Greedy-Activity-Selector(s, f)
  // Assume that f1 ≤ f2 ≤ ... ≤ fn
  n ← length[s]
  A ← {1}
  j ← 1
  for i ← 2 to n
      if si ≥ fj then
          A ← A ∪ {i}
          j ← i
  return A

Is the algorithm optimal?
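The pseudocode above can be sketched as runnable Python. The interval data below is the classic textbook example with 11 activities over times 0–16, which the figure appears to use; treat it as an assumed illustration.

```python
def select_activities(intervals):
    """Greedy activity selection: sort by finish time, then take each
    activity whose start is no earlier than the last chosen finish."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda a: a[1]):
        if start >= last_finish:          # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# (start, finish) pairs for 11 activities on the 0..16 timeline
acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
        (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]
print(select_activities(acts))  # [(1, 4), (5, 7), (8, 11), (12, 16)]
```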

21 Proof of Optimality Suppose A ⊆ S is an optimal solution whose first activity is k. If k ≠ 1, one can easily show that B = A – {k} ∪ {1} is also optimal. (Why? Activity 1 finishes no later than activity k, so it is compatible with every other activity in A.) This reveals that the greedy choice can be applied to the first choice. Now the problem is reduced to activity selection on S' = {2, …, n}, restricted to the activities compatible with 1. By the same argument, the greedy choice also retains optimality for each subsequent choice.

22 Lecture 1: The Greedy Method Fractional Knapsack Problem

23 The Fractional Knapsack Problem Given: a set S of n items, with each item i having – bi, a positive benefit – wi, a positive weight Goal: choose items, allowing fractional amounts, to maximize total benefit with weight at most W.

24 The Fractional Knapsack Problem

Item:          1     2     3     4     5
wi:            4 ml  8 ml  2 ml  6 ml  1 ml
bi:            $12   $32   $40   $30   $50
Value ($/ml):  3     4     20    5     50

Knapsack capacity: 10 ml
Solution: 1 ml of item 5, 2 ml of item 3, 6 ml of item 4, 1 ml of item 2

25 The Fractional Knapsack Algorithm Greedy choice: keep taking the item with the highest value (benefit per unit weight).

Algorithm fractionalKnapsack(S, W)
  Input: set S of items with benefit bi and weight wi; maximum weight W
  Output: amount xi of each item i, maximizing benefit with weight at most W
  for each item i in S
      xi ← 0
      vi ← bi / wi      {value}
  w ← 0                 {total weight}
  while w < W
      remove item i with highest vi
      xi ← min{wi, W − w}
      w ← w + min{wi, W − w}

Does the algorithm always give an optimum?
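The algorithm above can be sketched in Python (a runnable illustration, not the slides' code), using the item data from the 10 ml example:

```python
def fractional_knapsack(items, capacity):
    """items: list of (benefit, weight) pairs. Take items in decreasing
    benefit/weight order, allowing a fraction of the last item taken.
    Returns (total benefit, amount taken of each item)."""
    amounts = [0.0] * len(items)
    total = 0.0
    # indices sorted by value = benefit per unit weight, highest first
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    for i in order:
        if capacity <= 0:
            break
        b, w = items[i]
        take = min(w, capacity)          # whole item, or whatever still fits
        amounts[i] = float(take)
        total += b * take / w
        capacity -= take
    return total, amounts

# Items 1..5 from the slide: (benefit $, weight ml), capacity 10 ml
items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]
total, amounts = fractional_knapsack(items, 10)
print(total, amounts)  # 124.0 [0.0, 1.0, 2.0, 6.0, 1.0]
```

This reproduces the slide's solution: 1 ml of item 5, 2 ml of item 3, 6 ml of item 4, and 1 ml of item 2, for a benefit of $124.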

26 Proof of Optimality Suppose there is a better solution than the greedy one. Then it must contain some item i that is not fully taken (xi < wi) and some chosen item j with xj > 0 and vi > vj. Substituting an amount min{wi − xi, xj} of item j with item i would give an even better solution. Repeating this exchange shows every solution can be improved until it takes items strictly in decreasing order of value, which is exactly the greedy solution. Thus there is no better solution than the greedy one.

27 Recall: 0-1 Knapsack Problem Which boxes should be chosen to maximize the amount of money while still keeping the overall weight under 15 kg? Is the fractional knapsack algorithm applicable?

28 Exercise 1. Construct an example showing that the fractional knapsack algorithm does not give the optimal solution when applied to the 0-1 knapsack problem.

29 Lecture 1: The Greedy Method Minimum Spanning Tree

30 What is a Spanning Tree? A tree is a connected undirected graph that contains no cycles A spanning tree of a graph G is a subgraph of G that is a tree and contains all the vertices of G

31 Properties of a Spanning Tree A spanning tree of an n-vertex undirected graph has exactly n – 1 edges It connects all the vertices in the graph A spanning tree has no cycles (figure: an undirected graph on vertices A–E and some of its spanning trees)

32 What is a Minimum Spanning Tree? A spanning tree of a graph G is a subgraph of G that is a tree and contains all the vertices of G A minimum spanning tree is the one among all the spanning trees with the lowest cost

33 Applications of MSTs Computer Networks – To find how to connect a set of computers using the minimum amount of wire Shipping/Airplane Lines – To find the fastest way between locations

34 Two Greedy Algorithms for MST Kruskal’s Algorithm – merges forests into tree by adding small-cost edges repeatedly Prim’s Algorithm – attaches vertices to a partially built tree by adding small-cost edges repeatedly

35 Kruskal’s Algorithm (figure: an example weighted graph on vertices a–i)

36 Kruskal’s Algorithm (figure: the edges accepted so far, highlighted on the example graph)

37 Kruskal’s Algorithm

MST-Kruskal(G)
  T ← Ø
  for each vertex v ∈ V[G]
      Make-Set(v)                   // make a separate set for each vertex
  sort the edges by increasing weight w
  for each edge (u, v) ∈ E, in sorted order
      if Find-Set(u) ≠ Find-Set(v)  // if no cycle is formed
          T ← T ∪ {(u, v)}          // add edge to tree
          Union(u, v)               // combine sets
  return T

G = (V, E) – graph; w: E → R+ – weight; T – tree
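The pseudocode above can be sketched in Python with a small union-find in place of Make-Set / Find-Set / Union (a runnable illustration, not the slides' code). The edge weights below follow the well-known textbook example graph on vertices a–i, which the figure appears to resemble; treat them as an assumption.

```python
def kruskal(vertices, edges):
    """edges: list of (weight, u, v). Returns the MST edge list via union-find."""
    parent = {v: v for v in vertices}

    def find(x):                      # Find-Set with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):     # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # no cycle is formed
            tree.append((w, u, v))
            parent[ru] = rv           # Union
    return tree

# Assumed example graph (classic textbook weights on vertices a-i)
edges = [(4, 'a', 'b'), (8, 'b', 'c'), (7, 'c', 'd'), (9, 'd', 'e'),
         (10, 'e', 'f'), (2, 'f', 'g'), (1, 'g', 'h'), (8, 'a', 'h'),
         (11, 'b', 'h'), (7, 'h', 'i'), (2, 'c', 'i'), (6, 'g', 'i'),
         (4, 'c', 'f'), (14, 'd', 'f')]
mst = kruskal('abcdefghi', edges)
print(sum(w for w, _, _ in mst))  # 37
```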

38 Time Complexity

MST-Kruskal(G, w)
  T ← Ø                                      // O(1)
  for each vertex v ∈ V[G]                   // O(|V|)
      Make-Set(v)
  sort the edges by increasing weight w      // O(|E| log |E|)
  for each edge (u, v) ∈ E, in sorted order  // O(|E|) iterations
      if Find-Set(u) ≠ Find-Set(v)
          T ← T ∪ {(u, v)}
          Union(u, v)
  return T                                   // O(1)

Total: O(|E| log |E|), dominated by sorting the edges.

G = (V, E) – graph; w: E → R+ – weight; T – tree

39 Prim’s Algorithm (figure: the same example weighted graph on vertices a–i)

40 Prim’s Algorithm (figure: the tree grown from the start vertex, attaching one vertex at a time)

41 Prim’s Algorithm

MST-Prim(G, w, r)
  Q ← V[G]                  // initially Q holds all vertices
  for each u ∈ Q
      Key[u] ← ∞            // initialize all keys to ∞
  Key[r] ← 0                // r is the first tree node
  π[r] ← Nil
  while Q ≠ Ø
      u ← Extract_Min(Q)    // get the minimum-key node
      for each v ∈ Adj[u]
          if v ∈ Q and w(u, v) < Key[v]   // if the weight is less than the key
              π[v] ← u
              Key[v] ← w(u, v)

G = (V, E) – graph; w: E → R+ – weight; r – starting vertex; Q – priority queue; Key[v] – key of vertex v; π[v] – parent of vertex v; Adj[v] – adjacency list of v
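A Python sketch of the idea above, using a "lazy" heap of candidate edges instead of decreasing keys in place — a simplification of the slides' MST-Prim, not a literal transcription. The graph is the same assumed textbook example used for Kruskal:

```python
import heapq

def prim(adj, root):
    """adj: {u: [(weight, v), ...]}. Returns the total MST weight,
    growing the tree from root and always attaching the cheapest edge."""
    total = 0
    visited = {root}
    heap = list(adj[root])            # candidate edges out of the tree
    heapq.heapify(heap)
    while heap:
        w, u = heapq.heappop(heap)    # Extract_Min: cheapest candidate edge
        if u in visited:
            continue                  # stale entry; u already in the tree
        visited.add(u)
        total += w
        for edge in adj[u]:           # new candidate edges out of u
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return total

# Assumed example graph (same classic weights as in the Kruskal sketch)
pairs = [('a', 'b', 4), ('b', 'c', 8), ('c', 'd', 7), ('d', 'e', 9),
         ('e', 'f', 10), ('f', 'g', 2), ('g', 'h', 1), ('a', 'h', 8),
         ('b', 'h', 11), ('h', 'i', 7), ('c', 'i', 2), ('g', 'i', 6),
         ('c', 'f', 4), ('d', 'f', 14)]
adj = {}
for u, v, w in pairs:
    adj.setdefault(u, []).append((w, v))
    adj.setdefault(v, []).append((w, u))
print(prim(adj, 'a'))  # 37
```

Both greedy algorithms find a tree of the same minimum total weight, as expected.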

42 Time Complexity MST-Prim runs in O(|E| log |V|) time with a binary-heap priority queue: there are |V| Extract_Min operations and O(|E|) key updates, each costing O(log |V|).

43 Optimality Kruskal’s Algorithm – merges forests into tree by adding small-cost edges repeatedly Prim’s Algorithm – attaches vertices to a partially built tree by adding small-cost edges repeatedly Are the algorithms optimal? Yes

44 Lecture 1: The Greedy Method Shortest Path Problem

45 Shortest Path Problem (SPP) Single-Source SPP – Given a graph G = (V, E) and weight w: E → R+, find the shortest path from a source node s ∈ V to every other node v ∈ V. All-Pairs SPP – Given a graph G = (V, E) and weight w: E → R+, find the shortest path between each pair of nodes in G.

46 Dijkstra's Algorithm Dijkstra's algorithm, named after its discoverer, Dutch computer scientist Edsger Dijkstra, is an algorithm that solves the single-source shortest path problem for a directed graph with nonnegative edge weights.

47 Dijkstra's Algorithm Start from the source vertex, s Take the adjacent nodes and update the current shortest distance Select the vertex with the shortest distance, from the remaining vertices Update the current shortest distance of the Adjacent Vertices where necessary, – i.e. when the new distance is less than the existing value Stop when all the vertices are checked

48 Dijkstra's Algorithm (worked example)

49–62 (figure sequence: a small graph with source s and vertices u, v, x, y. The slides initialize d[s] = 0 and every other tentative distance to ∞, then repeatedly select the unvisited vertex with the smallest tentative distance and relax the edges leaving it, updating a neighbour's distance whenever a shorter path is found, until all vertices are finalized.)

63 Dijkstra's Algorithm

Dijkstra(G, w, s)
  for each vertex v ∈ V[G]
      d[v] ← ∞              // initialize all distances to ∞
      π[v] ← Nil
  d[s] ← 0                  // set distance of source to 0
  S ← Ø
  Q ← V[G]
  while Q ≠ Ø
      u ← Extract_Min(Q)    // get the minimum in Q
      S ← S ∪ {u}           // add it to the already-known list
      for each vertex v ∈ Adj[u]
          if d[v] > d[u] + w(u, v)    // if the new distance is shorter
              d[v] ← d[u] + w(u, v)
              π[v] ← u

G = (V, E) – graph; w: E → R+ – weight; s – source; d[v] – current shortest distance from s to v; S – set of nodes whose shortest distance is known; Q – set of nodes whose shortest distance is unknown
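The pseudocode above can be sketched in Python with a binary heap as the priority queue (a runnable illustration, not the slides' code). The small directed graph below is an assumed example on the same vertex names s, u, v, x, y, not necessarily the exact weights in the figure:

```python
import heapq

def dijkstra(adj, source):
    """adj: {u: [(v, weight), ...]}. Returns shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]          # (tentative distance, vertex)
    done = set()                  # the set S of finalized vertices
    while heap:
        d, u = heapq.heappop(heap)        # Extract_Min
        if u in done:
            continue                      # stale heap entry
        done.add(u)
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):   # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

adj = {'s': [('u', 9), ('x', 5)],
       'x': [('u', 2), ('y', 2)],
       'u': [('v', 1)],
       'y': [('v', 6)]}
print(dijkstra(adj, 's'))  # {'s': 0, 'u': 7, 'x': 5, 'y': 7, 'v': 8}
```

Note how u's tentative distance drops from 9 to 7 once the cheaper route through x is relaxed — the update step the worked example animates.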

64 Lecture 1: The Greedy Method Huffman Codes

65 Huffman code is a technique for compressing data. – It is a variable-length code. Huffman's greedy algorithm looks at the frequency of each character and encodes it as a binary string in an optimal way.

66 Example Suppose we have data consisting of 100,000 characters with the following frequencies:

Character:  a       b       c       d       e      f
Frequency:  45,000  13,000  12,000  16,000  9,000  5,000

67 Fixed- vs. Variable-Length Codes Suppose we have data consisting of 100,000 characters with the following frequencies:

Character:             a       b       c       d       e      f
Frequency:             45,000  13,000  12,000  16,000  9,000  5,000
Fixed-length code:     000     001     010     011     100    101
Variable-length code:  0       101     100     111     1101   1100

Total bits:
Fixed-length code:     3 × (45,000 + 13,000 + 12,000 + 16,000 + 9,000 + 5,000) = 300,000
Variable-length code:  1 × 45,000 + 3 × 13,000 + 3 × 12,000 + 3 × 16,000 + 4 × 9,000 + 4 × 5,000 = 224,000

68 Prefix Codes A prefix code is one in which no codeword is a prefix of another codeword.

Character:             a    b    c    d    e     f
Frequency:             45%  13%  12%  16%  9%    5%
Variable-length code:  0    101  100  111  1101  1100

(figure: the code tree with leaves a:45, b:13, c:12, d:16, e:9, f:5)

Encode: aceabfd → 0100110101011100111
Decode: 0100110101011100111 → aceabfd

69 Huffman-Code Algorithm (figure: the complete Huffman tree for the example; internal nodes carry the merged frequencies 14 = 5 + 9, 25 = 12 + 13, 30 = 14 + 16, 55 = 25 + 30, and the root 100 = 45 + 55)

70–79 Huffman-Code Algorithm (figure sequence: the tree is built bottom-up by repeatedly merging the two least-frequent nodes — f:5 + e:9 → 14; c:12 + b:13 → 25; 14 + d:16 → 30; 25 + 30 → 55; finally a:45 + 55 → 100, the root. The Huffman tree is built.)

80 Huffman-Code Algorithm

Huffman(C)
  n ← |C|
  Q ← C
  for i ← 1 to n − 1
      z ← Allocate-Node()
      x ← left[z] ← Extract-Min(Q)    // least frequent
      y ← right[z] ← Extract-Min(Q)   // next least frequent
      f[z] ← f[x] + f[y]              // update frequency
      Insert(Q, z)
  return Extract-Min(Q)
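The pseudocode above can be sketched in Python with a heap as the priority queue (a runnable illustration, not the slides' code; the `count()` tie-breaker is an added implementation detail so the heap never has to compare subtrees):

```python
import heapq
from itertools import count

def huffman(freq):
    """freq: {symbol: frequency}. Returns {symbol: codeword}."""
    tick = count()                    # tie-breaker for equal frequencies
    heap = [(f, next(tick), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # least frequent
        f2, _, right = heapq.heappop(heap)    # next least frequent
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    _, _, root = heap[0]

    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):   # internal node: (left, right)
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"   # single-symbol edge case
    walk(root, "")
    return codes

freq = {'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}
codes = huffman(freq)
cost = sum(freq[s] * len(codes[s]) for s in freq)
print(cost)  # 224  (i.e. 224,000 bits for the 100,000-character example)
```

With this input the optimal cost matches the slides' 224,000-bit total; the exact codewords may differ from the slides when ties are broken differently, but the total cost is the same for any Huffman tree.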

81 Optimality Exercise

