Introduction to Algorithms


1 Introduction to Algorithms
Greedy Algorithms

2 Greedy Algorithms
A greedy algorithm always makes the choice that looks best at the moment. Everyday examples: playing cards, investing in stocks, choosing a university. The hope: a locally optimal choice will lead to a globally optimal solution.

3 Introduction
Greedy algorithms are similar to dynamic programming: they apply to optimization problems. When we have a choice to make, make the one that looks best right now; that is, make a locally optimal choice in the hope of getting a globally optimal solution. Greedy algorithms don't always yield an optimal solution, but sometimes they do, and for many problems a greedy algorithm provides an optimal solution much more quickly than a dynamic-programming approach.

4 An Activity Selection Problem
The problem of scheduling several competing activities that require exclusive use of a common resource, for example, scheduling the use of a classroom. Set of activities S = {a1, …, an}; activity ai needs the resource during the half-open interval [si, fi), where si is the start time and fi the finish time. Goal: select the largest possible set of non-overlapping (mutually compatible) activities. Note: we could have many other objectives, such as scheduling the room for the longest total time, or maximizing income from rental fees.

5 An Activity Selection Problem
Here is a set of start and finish times. What is the maximum number of activities that can be completed?

6 An Activity Selection Problem
What is the maximum number of activities that can be completed? {a3, a9, a11} can be completed, but so can {a1, a4, a8, a11}, which is a larger set. The answer is not unique: consider also {a2, a4, a9, a11}.

7 [Figure: activities a3, a9, a11 shown as mutually compatible intervals on a time axis from 1 to 15]

8 [Figure: activities a1, a4, a8, a11 on the same time axis]

9 [Figure: activities a2, a4, a9, a11 on the same time axis]

10 The Optimal Substructure of the Activity-Selection Problem
Sij = { ak ∈ S : fi ≤ sk < fk ≤ sj } = the activities that start after ai finishes and finish before aj starts. Activities in Sij are compatible with all activities that finish by fi and all activities that start no earlier than sj. To represent the entire problem, add fictitious activities a0 = [−∞, 0) and an+1 = [∞, “∞+1”). We don't care about −∞ in a0 or “∞+1” in an+1. Then S = S0,n+1, and the range for Sij is 0 ≤ i, j ≤ n+1.

11 The Optimal Substructure of the Activity-Selection Problem
Assume that activities are sorted by monotonically increasing finish time: f0 ≤ f1 ≤ f2 ≤ · · · ≤ fn < fn+1. Then i ≥ j implies Sij = ∅: if there existed ak ∈ Sij, we would have fi ≤ sk < fk ≤ sj < fj, hence fi < fj; but i ≥ j implies fi ≥ fj. Contradiction! So we only need to worry about Sij with 0 ≤ i < j ≤ n+1; all other Sij are ∅.

12 The Optimal Substructure of the Activity-Selection Problem
Suppose that a solution to Sij includes ak. We then have two subproblems: Sik (activities that start after ai finishes and finish before ak starts) and Skj (activities that start after ak finishes and finish before aj starts). Solution to Sij = (solution to Sik) ∪ {ak} ∪ (solution to Skj). Since ak is in neither subproblem, the subproblems are disjoint, and |solution to Sij| = |solution to Sik| + 1 + |solution to Skj|.

13 The Optimal Substructure of the Activity-Selection Problem
If an optimal solution to Sij includes ak, then the solutions to Sik and Skj used within this solution must be optimal as well (the usual cut-and-paste argument). Let Aij = an optimal solution to Sij. Then Aij = Aik ∪ {ak} ∪ Akj, assuming that Sij is nonempty and that we know ak.

14 The Optimal Substructure of the Activity-Selection Problem
Let c[i, j] be the size of a maximum-size subset of mutually compatible activities in Sij. If i ≥ j, then Sij = ∅, so c[i, j] = 0. If Sij ≠ ∅ and we know that ak is in the subset, then c[i, j] = c[i, k] + 1 + c[k, j]. But we don't know which k to use, and so:

    c[i, j] = 0                                           if Sij = ∅
    c[i, j] = max { c[i, k] + c[k, j] + 1 : ak ∈ Sij }    if Sij ≠ ∅

This is recurrence (16.2); a direct dynamic-programming sketch of it follows below.
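For contrast with the greedy approach that follows, here is a minimal runnable Python sketch that evaluates the recurrence directly by dynamic programming. The function name is illustrative, and the sample start/finish times are the standard CLRS data, which are consistent with the sets {a1, a4, a8, a11} mentioned above.

import math

def dp_activity_selection(s, f):
    # Evaluate c[i][j] = max over a_k in S_ij of c[i][k] + c[k][j] + 1.
    # s, f include fictitious activities a0 and a_{n+1} at indices 0 and n+1,
    # and activities are sorted by finish time.
    n = len(s)
    c = [[0] * n for _ in range(n)]
    for length in range(2, n):              # solve shorter spans j - i first
        for i in range(0, n - length):
            j = i + length
            for k in range(i + 1, j):
                if f[i] <= s[k] and f[k] <= s[j]:   # a_k is in S_ij
                    c[i][j] = max(c[i][j], c[i][k] + c[k][j] + 1)
    return c[0][n - 1]

s = [-math.inf, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12, math.inf]
f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16, math.inf]
print(dp_activity_selection(s, f))  # 4, e.g. {a1, a4, a8, a11}

This runs in O(n^3) time; the point of the following slides is that a greedy choice avoids the recurrence entirely.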

15 Early Finish Greedy
Select the activity with the earliest finish time, eliminate the activities that overlap it and so could not be scheduled, and repeat!

16 A Recursive Greedy Algorithm
Assumes activities are already sorted by monotonically increasing finish time (if not, sort them first in O(n lg n) time). Returns an optimal solution for Si,n+1. Initial call: REC-ACTIVITY-SELECTOR(s, f, 0, n). Time: Θ(n), since each activity is examined exactly once. A sketch follows below.
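The REC-ACTIVITY-SELECTOR pseudocode itself did not survive the transcript; the following Python sketch follows the standard CLRS structure (index 0 holds the fictitious activity a0 with f[0] = 0, and the sample times are again the CLRS data):

def rec_activity_selector(s, f, i, n):
    # Return a maximum-size set of mutually compatible activities that
    # start no earlier than f[i]; activities are sorted by finish time.
    k = i + 1
    while k <= n and s[k] < f[i]:   # skip activities that start before a_i finishes
        k += 1
    if k <= n:
        # a_k is the first compatible activity: greedily take it, then recurse
        return [k] + rec_activity_selector(s, f, k, n)
    return []

s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(rec_activity_selector(s, f, 0, 11))  # [1, 4, 8, 11]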

17 Example

18 Example

19 An Iterative Greedy Algorithm
The same greedy choice can also be made iteratively. Time: Θ(n). A sketch follows below.
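A minimal iterative Python sketch, mirroring the structure of GREEDY-ACTIVITY-SELECTOR (same input conventions and sample data as above):

def greedy_activity_selector(s, f):
    # Activities 1..n are sorted by finish time; index 0 is the fictitious a0.
    n = len(s) - 1
    selected = [1]          # a1 has the earliest finish time, so taking it is safe
    k = 1                   # index of the last activity added
    for m in range(2, n + 1):
        if s[m] >= f[k]:    # a_m starts after a_k finishes: compatible
            selected.append(m)
            k = m
    return selected

s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(greedy_activity_selector(s, f))  # [1, 4, 8, 11], as before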

20 Elements of the greedy strategy
1. Determine the optimal substructure of the problem.
2. Develop a recursive solution. (For the activity-selection problem, we formulated recurrence (16.2), but we bypassed developing a recursive algorithm based on this recurrence.)
3. Show that if we make the greedy choice, then only one subproblem remains.
4. Prove that it is always safe to make the greedy choice. (Steps 3 and 4 can occur in either order.)
5. Develop a recursive algorithm that implements the greedy strategy.
6. Convert the recursive algorithm to an iterative algorithm.

21 Greedy versus dynamic programming
The 0-1 knapsack problem is the following. A thief robbing a store finds n items. The ith item is worth vi dollars and weighs wi pounds, where vi and wi are integers. The thief wants to take as valuable a load as possible, but he can carry at most W pounds in his knapsack, for some integer W. Which items should he take? In the fractional knapsack problem, the setup is the same, but the thief can take fractions of items rather than having to make a binary (0-1) choice for each item. Greedy strategy for the fractional problem: calculate the value per pound vi/wi of each item and take items in decreasing order of value per pound; a sketch follows below.
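A minimal runnable Python sketch of the greedy fractional-knapsack strategy (the function name and sample data are illustrative):

def fractional_knapsack(values, weights, W):
    # Take items in decreasing value-per-pound order, splitting the
    # last item if it does not fit entirely.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for v, w in items:
        if W <= 0:
            break
        take = min(w, W)          # whole item if it fits, otherwise a fraction
        total += v * (take / w)
        W -= take
    return total

print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))  # 240.0

The same rule fails for the 0-1 problem: on this instance, greedily taking whole items by value per pound yields 60 + 100 = 160 dollars, while the optimal 0-1 load takes items 2 and 3 for 220.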

22 Greedy Strategy for 0-1?

23 Minimum Spanning Tree
Model as a graph: an undirected graph G = (V, E) with a weight w(u, v) on each edge (u, v) ∈ E. Find T ⊆ E such that T connects all vertices (T is a spanning tree) and w(T) = Σ_{(u,v) ∈ T} w(u, v) is minimized.

24 Minimum Spanning Tree
A spanning tree whose weight is minimum over all spanning trees is called a Minimum Spanning Tree, or MST. Example: in this example there is more than one MST; replacing edge (b, c) by (a, h) gives a different spanning tree with the same weight.

25 Minimum Spanning Tree
Which edges form the Minimum Spanning Tree (MST) of the graph below? [Figure: weighted graph on vertices A through H with edge weights 2, 3, 4, 5, 6, 8, 9, 10, 14, 15]

26 Minimum Spanning Tree
MSTs satisfy the optimal substructure property: an optimal tree is composed of optimal subtrees. Let T be an MST of G with an edge (u, v) in the middle. Removing (u, v) partitions T into two trees T1 and T2. Claim: T1 is an MST of G1 = (V1, E1) and T2 is an MST of G2 = (V2, E2). (Do V1 and V2 share vertices? Why?) Proof: w(T) = w(u, v) + w(T1) + w(T2), and there can't be a better tree than T1 or T2, or T would be suboptimal.

27 Some definitions
A cut (S, V − S) of an undirected graph G = (V, E) is a partition of V. We say that an edge (u, v) ∈ E crosses the cut (S, V − S) if one of its endpoints is in S and the other is in V − S.

28 Some definitions
We say that a cut respects a set A of edges if no edge in A crosses the cut. An edge is a light edge crossing a cut if its weight is the minimum of any edge crossing the cut; note that there can be more than one light edge crossing a cut in the case of ties. More generally, we say that an edge is a light edge satisfying a given property if its weight is the minimum of any edge satisfying the property.

Theorem 23.1 Let G = (V, E) be a connected, undirected graph with a real-valued weight function w defined on E. Let A be a subset of E that is included in some minimum spanning tree for G, let (S, V − S) be any cut of G that respects A, and let (u, v) be a light edge crossing (S, V − S). Then edge (u, v) is safe for A.

29 Proof of theorem Except for the dashed edge (u, v), all edges shown are in T. A is some subset of the edges of T, but A cannot contain any edges that cross the cut (S, V − S), since this cut respects A. Shaded edges are the path p.

30 Proof of theorem (1)
Since the cut respects A, edge (x, y) is not in A. To form T′ from T: remove (x, y), which breaks T into two components, then add (u, v), which reconnects them. So T′ = T − {(x, y)} ∪ {(u, v)}, and T′ is a spanning tree. w(T′) = w(T) − w(x, y) + w(u, v) ≤ w(T), since w(u, v) ≤ w(x, y). Since T′ is a spanning tree with w(T′) ≤ w(T) and T is an MST, T′ must be an MST as well. It remains to show that (u, v) is safe for A: A ⊆ T and (x, y) ∉ A imply A ⊆ T′, so A ∪ {(u, v)} ⊆ T′; since T′ is an MST, (u, v) is safe for A.

31 Generic-MST
So, in GENERIC-MST: A is a forest containing connected components; initially, each component is a single vertex. Any safe edge merges two of these components into one, and each component is a tree. Since an MST has exactly |V| − 1 edges, the for loop iterates |V| − 1 times; equivalently, after adding |V| − 1 safe edges, we're down to just one component.

Corollary If C = (VC, EC) is a connected component in the forest GA = (V, A) and (u, v) is a light edge connecting C to some other component in GA (i.e., (u, v) is a light edge crossing the cut (VC, V − VC)), then (u, v) is safe for A. Proof: set S = VC in the theorem.

32 Growing an MST
Some properties of an MST: it has |V| − 1 edges, it has no cycles, and it might not be unique. Building up the solution: we will build a set A of edges; initially, A has no edges. As we add edges to A, we maintain a loop invariant: A is a subset of some MST. We add only edges that maintain the invariant. If A is a subset of some MST, an edge (u, v) is safe for A if and only if A ∪ {(u, v)} is also a subset of some MST, so we will add only safe edges, as in the skeleton sketched below.
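The GENERIC-MST pseudocode these slides refer to is not shown in the transcript; this is the standard CLRS skeleton, written in the same pseudocode style as the slides' other code. Kruskal's and Prim's algorithms below are two instantiations of it, each with its own concrete rule for finding a safe edge.

GENERIC-MST(G, w) {
    A = ∅;
    while A does not form a spanning tree
        find an edge (u, v) that is safe for A;
        A = A ∪ {(u, v)};       // invariant is preserved
    return A;
}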

33 Growing An MST

34 Kruskal’s Algorithm
Run the algorithm on the example graph (edge weights 1, 2, 5, 8, 9, 13, 14, 17, 19, 21, 25):

Kruskal() {
    T = ∅;
    for each v ∈ V
        MakeSet(v);
    sort E by increasing edge weight w;
    for each (u,v) ∈ E (in sorted order)
        if FindSet(u) ≠ FindSet(v)
            T = T ∪ {{u,v}};
            Union(FindSet(u), FindSet(v));
}

35–55 Kruskal’s Algorithm (trace)
[Figures: the same code and example graph repeated once per step. The edges are considered in sorted order (1, 2, 5, 8, 9, 13, 14, 17, 19, 21, 25); at each step the edge under consideration is marked with a "?", added to T if FindSet(u) ≠ FindSet(v), i.e. if its endpoints lie in different components, and skipped otherwise.]

56–58 Kruskal’s Algorithm
[Figures: the completed run; the edges accumulated in T form the MST of the example graph.]

59 Kruskal’s Algorithm
What will affect the running time?
- 1 sort
- O(V) MakeSet() calls
- O(E) FindSet() calls
- O(V) Union() calls (exactly how many Unions?)

Kruskal() {
    T = ∅;
    for each v ∈ V
        MakeSet(v);
    sort E by increasing edge weight w;
    for each (u,v) ∈ E (in sorted order)
        if FindSet(u) ≠ FindSet(v)
            T = T ∪ {{u,v}};
            Union(FindSet(u), FindSet(v));
}

A runnable sketch follows below.
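A runnable Python sketch of the same procedure, with MakeSet/FindSet/Union realized as a disjoint-set (union-find) structure using path halving and union by rank; the sort dominates, giving O(E lg E) = O(E lg V) overall. The example graph at the bottom is illustrative, not the graph from the slides.

def kruskal(vertices, edges):
    # vertices: iterable of hashable vertex names.
    # edges: list of (weight, u, v) tuples.
    # Returns the MST edges, assuming the graph is connected.
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find(x):                      # FindSet with iterative path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                  # Union by rank
        ra, rb = find(a), find(b)
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        if rank[ra] == rank[rb]:
            rank[ra] += 1

    mst = []
    for w, u, v in sorted(edges):     # edges in increasing weight order
        if find(u) != find(v):        # different components: the edge is safe
            mst.append((u, v, w))
            union(u, v)
    return mst

edges = [(1, 'a', 'b'), (2, 'b', 'c'), (5, 'a', 'c'), (8, 'c', 'd')]
print(kruskal('abcd', edges))  # [('a', 'b', 1), ('b', 'c', 2), ('c', 'd', 8)]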

60 Prim’s Algorithm
Run on the example graph (edge weights 2, 3, 4, 5, 6, 8, 9, 10, 14, 15):

MST-Prim(G, w, r) {
    Q = V[G];
    for each u ∈ Q
        key[u] = ∞;
    key[r] = 0;
    П[r] = NULL;
    while (Q not empty)
        u = ExtractMin(Q);
        for each v ∈ Adj[u]
            if (v ∈ Q and w(u,v) < key[v])
                П[v] = u;
                key[v] = w(u,v);
}

A runnable sketch follows below.
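A runnable Python sketch of MST-Prim using a binary heap as the priority queue; rather than a true DecreaseKey, it pushes a fresh entry and skips stale ones on extraction, the usual heapq idiom. The adjacency list at the bottom is an illustrative example, not the graph from the slides. With a binary heap this runs in O(E lg V) time.

import heapq

def prim(adj, r):
    # adj: dict mapping each vertex to a list of (neighbor, weight) pairs.
    # r: root vertex. Returns parent[v] = П[v] for each vertex.
    key = {v: float('inf') for v in adj}
    parent = {v: None for v in adj}
    key[r] = 0
    in_tree = set()                   # vertices already extracted from Q
    pq = [(0, r)]
    while pq:
        k, u = heapq.heappop(pq)      # ExtractMin
        if u in in_tree:              # stale entry: u was already extracted
            continue
        in_tree.add(u)
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                key[v] = w            # "DecreaseKey": record and re-push
                parent[v] = u
                heapq.heappush(pq, (w, v))
    return parent

adj = {'a': [('b', 4), ('h', 8)],
       'b': [('a', 4), ('h', 11)],
       'h': [('a', 8), ('b', 11)]}
print(prim(adj, 'a'))  # {'a': None, 'b': 'a', 'h': 'a'}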

61–79 Prim’s Algorithm (trace)
[Figures: the same code and example graph repeated once per step. All keys start at ∞; a start vertex r is picked and key[r] set to 0. At each step, ExtractMin removes the vertex u with the smallest key (black vertices have been removed from Q), and each neighbor v of u still in Q has its key lowered to w(u, v) whenever that is an improvement, with black arrows indicating the parent pointers П. Over the run the keys take on values such as 3, 14, 8, 10, 2, 15, 9, 4, and 5 as the tree grows.]

80–81 Prim’s Algorithm
[Figures: the completed run; the parent pointers П define the MST of the example graph.]

82 Huffman codes
Suppose we have a 100,000-character data file that we wish to store compactly. We observe that the characters in the file occur with the frequencies given by the figure; that is, only 6 different characters appear, and the character a occurs 45,000 times. A variable-length code can save space by giving frequent characters short codewords and infrequent characters long ones. We consider here only codes in which no codeword is also a prefix of some other codeword. Such codes are called prefix codes.

83 Huffman codes
For example, with the variable-length prefix code of Figure 16.3 (a = 0, b = 101, c = 100), we code the 3-character file abc as 0·101·100 = 0101100, where "·" denotes concatenation.

84 Huffman codes
An optimal code for a file is always represented by a full binary tree. We can say that if C is the alphabet from which the characters are drawn and all character frequencies are positive, then the tree for an optimal prefix code has exactly |C| leaves, one for each letter of the alphabet, and exactly |C| − 1 internal nodes. The number of bits required to encode a file is thus

    B(T) = Σ_{c ∈ C} c.freq · d_T(c)

where d_T(c) is the depth of c's leaf in the tree, which is also the length of the codeword for c.

85 Constructing a Huffman code
Build the tree bottom-up: start with |C| leaves and repeatedly merge the two lowest-frequency nodes, performing |C| − 1 merges in all. A sketch follows below.
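The HUFFMAN pseudocode figure did not survive the transcript; here is a minimal runnable Python sketch of the construction with a min-priority queue (heapq). The tree is represented as nested tuples, and the sample frequencies are the CLRS Figure 16.3 values (in thousands).

import heapq

def huffman(freq):
    # Heap entries are (frequency, tiebreaker, tree); the integer tiebreaker
    # keeps heapq from ever comparing two trees directly.
    heap = [(f, i, c) for i, (c, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # the two lowest-frequency nodes
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    code = {}
    def walk(tree, prefix):               # read codewords off the tree
        if isinstance(tree, tuple):
            walk(tree[0], prefix + '0')
            walk(tree[1], prefix + '1')
        else:
            code[tree] = prefix or '0'    # single-character edge case
    walk(heap[0][2], '')
    return code

print(huffman({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}))
# codeword lengths: a:1, b:3, c:3, d:3, e:4, f:4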

86 Huffman Code Construction
Character count in text:

Char: E   T   A   O   I   N   S   R   H   L   D   C   U
Freq: 125 93  80  76  73  71  65  61  55  41  40  31  27

87–99 Huffman Code Construction (trace)
[Figures: the tree is built bottom-up by repeatedly merging the two lowest-frequency nodes. The merge sequence is: C+U = 58; D+L = 81; 58+H = 113; R+S = 126; N+I = 144; A+O = 156; 81+T = 174; 113+E = 238; 126+144 = 270; 156+174 = 330; 238+270 = 508; 330+508 = 838, the root, equal to the total character count.]

100 Huffman Code Construction
[Figure: the final tree with the resulting code table. A fixed-length code for the 13 characters needs 4 bits each (0000, 0001, …, 1100), for an average of 4.00 bits per character. The Huffman code gives the most frequent characters 3-bit codewords (e.g. E = 110, T = 011, A = 000, O = 001) and the rarest characters 5-bit codewords (C = 11100, U = 11101), for an average of 3.62 bits per character over the 838 characters.]

101 Correctness of Huffman’s algorithm
Proof idea:
Step 1: Show that the problem satisfies the greedy-choice property; that is, if a greedy choice is made by Huffman's algorithm, an optimal solution remains possible.
Step 2: Show that the problem has the optimal-substructure property; that is, an optimal solution contains optimal solutions to subproblems.
Step 3: Conclude the correctness of Huffman's algorithm from steps 1 and 2.

102 Greedy Choice Property
Lemma: Let C be an alphabet in which each character c has frequency f[c]. Let x and y be two characters in C having the lowest frequencies. Then there exists an optimal prefix code for C in which the codewords for x and y have the same length and differ only in the last bit.
Proof: The idea of the proof is to take the tree T representing an arbitrary optimal prefix code and modify it to make a tree representing another optimal prefix code such that the characters x and y appear as sibling leaves of maximum depth in the new tree. If we can construct such a tree, then the codewords for x and y will have the same length and differ only in the last bit.

103 Proof contd
Let a and b be two characters that are sibling leaves of maximum depth in T. Without loss of generality, assume a.freq ≤ b.freq and x.freq ≤ y.freq. Since x.freq and y.freq are the two lowest leaf frequencies, in order, and a.freq and b.freq are two arbitrary frequencies, in order, we have x.freq ≤ a.freq and y.freq ≤ b.freq. As the figure shows, we exchange the positions in T of a and x to produce a tree T′, and then we exchange the positions in T′ of b and y to produce a tree T″.

104 Proof contd
The cost of a tree is B(T) = Σ_{c ∈ C} c.freq · d_T(c). The difference in cost between T and T′ is

    B(T) − B(T′) = x.freq · d_T(x) + a.freq · d_T(a) − x.freq · d_T′(x) − a.freq · d_T′(a)
                 = x.freq · d_T(x) + a.freq · d_T(a) − x.freq · d_T(a) − a.freq · d_T(x)
                 = (a.freq − x.freq)(d_T(a) − d_T(x))
                 ≥ 0,

since a.freq ≥ x.freq and d_T(a) ≥ d_T(x). Exchanging y and b likewise does not increase the cost, so B(T″) ≤ B(T′) ≤ B(T); since T is optimal, T″ is optimal as well, and in T″ x and y are sibling leaves of maximum depth.

105 Optimal substructure

106 Proof

107 Proof contd

