1 Greedy Algorithms Technique Dr. M. Sakalli, modified from Levitin and CLRS

2 Greedy Technique
- The first key ingredient is the greedy-choice property: a globally optimal solution can be arrived at by making a locally optimal (greedy) choice.
- In a greedy algorithm, the choice is made on the fly at each step, as the algorithm progresses: it may seem best at the moment, and the subproblems are solved after the choice is made. The choice made by a greedy algorithm may depend on the choices made so far, but it cannot depend on any future choices or on the solutions to subproblems. Thus, unlike dynamic programming, which solves the subproblems bottom up, a greedy strategy usually progresses in a top-down fashion, making one greedy choice after another and iteratively reducing each problem instance to a smaller one.

3 Algorithms for optimization problems typically go through a sequence of steps; dynamic programming determines the best choices from subproblem solutions, bottom-up. But in many cases much simpler, more efficient algorithms are possible. A greedy algorithm always makes the locally optimal choice, the one that seems best at the current moment, in the hope of reaching a globally optimal solution. This does not always yield optimal solutions, but it does for many problems: the activity-selection problem, for which a greedy algorithm efficiently computes an optimal solution (sketched below), as well as minimum-spanning-tree algorithms, Dijkstra's algorithm for shortest paths from a single source, and Chvátal's greedy set-covering heuristic. Greedy algorithms construct a solution to an optimization problem piece by piece through a sequence of choices that are:
- feasible
- locally optimal
- irrevocable (binding and abiding)
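A minimal Python sketch of the activity-selection greedy mentioned above; the (start, finish) pair representation and the function name are illustrative assumptions, not from the slides.

def select_activities(activities):
    # activities: iterable of (start, finish) pairs
    chosen = []
    last_finish = float('-inf')
    # Greedy choice: among compatible activities, take the one finishing first.
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:            # feasible
            chosen.append((start, finish))  # locally optimal, irrevocable
            last_finish = finish
    return chosen

# Example: select_activities([(1, 4), (3, 5), (0, 6), (5, 7)]) -> [(1, 4), (5, 7)]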

4 Applications of the Greedy Strategy
- Optimal solutions:
  - change making for normal coin denominations
  - minimum spanning tree (MST)
  - single-source shortest paths
  - simple scheduling problems
  - Huffman codes
- Approximations:
  - traveling salesman problem (TSP)
  - knapsack problem
  - other combinatorial optimization problems

5 Change-Making Problem
Given unlimited amounts of coins of denominations d_1 > … > d_m, give change for amount n with the least number of coins.
Example: d_1 = 25c, d_2 = 10c, d_3 = 5c, d_4 = 1c and n = 48c
Greedy solution: d_1 + 2 d_2 + 3 d_4 (25 + 2×10 + 3×1 = 48c)
The greedy solution is
- optimal for any amount with a normal set of denominations
- possibly non-optimal for arbitrary coin denominations
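A sketch of this greedy rule in Python: repeatedly take the largest denomination that still fits. It is optimal for a normal system such as 25/10/5/1, but can fail for arbitrary denominations (e.g., with coins 4/3/1 and n = 6 it returns 4+1+1 instead of 3+3).

def greedy_change(n, denominations=(25, 10, 5, 1)):
    # denominations must be listed in decreasing order
    counts = []
    for d in denominations:
        counts.append(n // d)   # take as many coins of value d as fit
        n %= d
    return counts

# greedy_change(48) -> [1, 2, 0, 3]: one quarter, two dimes, three pennies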

6 Induced subgraph
- Let G = (V, E) be a graph. We say that H = (V', E') is a subgraph of G, and write H ⊆ G, if the vertices and edges of H are subsets of those of G: V' ⊆ V, E' ⊆ E.
- H need not contain all the edges of G.
- If H contains every edge of G whose endpoints both lie in V', then H is the subgraph induced by V'. A subgraph may be edge-induced, vertex-induced, or neither.
- Note: if at least one path exists between every pair of vertices, the graph is connected (undirected, so far); a directed graph is one whose every edge can only be followed from one vertex to the other.

7 Directed acyclic graph, DAG
- The definition of a cycle may seem ambiguous for one, two, or three vertices: if there is a path returning to the same initial vertex, that is a cyclic case too. But!!
- A cyclic graph is a graph that has at least one cycle (through at least one other vertex for bi-directional edges, and at least two for uni-directional edges); an acyclic graph is one that contains no cycles. The girth is the number of edges in the shortest cycle; a graph with girth g > 3 is triangle-free.
- A source is a vertex with no incoming edges, while a sink is a vertex with no outgoing edges. A directed acyclic graph (DAG) is a directed graph with no directed cycles; its directed paths run from sources toward sinks.
- A finite DAG must have at least one source and at least one sink.
- The depth of a vertex in a finite DAG is the length of the longest path from a source to that vertex, while its height is the length of the longest path from that vertex to a sink.
- The length of a finite DAG is the length (number of edges) of a longest directed path. It is equal to the maximum height of all sources and equal to the maximum depth of all sinks. (Wikipedia)
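A sketch (not from the slides) of computing every vertex's depth, i.e., the longest path from a source, by relaxing edges in Kahn-style topological order; the adjacency-dict representation is an assumption for illustration.

from collections import deque

def dag_depths(adj):
    # adj: {vertex: [successor, ...]}, every vertex present as a key
    indeg = {v: 0 for v in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1
    queue = deque(v for v in adj if indeg[v] == 0)  # the sources
    depth = {v: 0 for v in adj}
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            depth[v] = max(depth[v], depth[u] + 1)  # longest path so far
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return depth

# max(dag_depths(adj).values()) gives the length of the DAG.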

8 Minimum Weight Spanning Tree
O(E lg V) using binary heaps; O(E + V lg V) using a Fibonacci heap.
- A spanning subgraph is a subgraph that contains all the vertices of the original graph. A spanning tree is a spanning subgraph that is acyclic: a connected acyclic subgraph that includes all vertices of G.
- Minimum spanning tree of a weighted, connected graph G = (V, E) with weight function w: a spanning tree of G of minimum total weight. The weight of a spanning tree T connecting all of V is
  w(T) = Σ_{(u,v) ∈ T} w(u, v)
- If all weights are distinct (w is injective), the minimum spanning tree is unique.

9 Partitioning V of G into A and V − A
GENERIC-MST(G, w)
1. A ← ∅ // Initialize
2. while A does not form a spanning tree // Terminate
3.   do if edge (u, v) is a safe edge for A
4.     A ← A ∪ {(u, v)} // Add safe edges
5. return A

10 How to decide on the light edge
- Definitions: a cut (S, V − S) of an undirected graph G = (V, E) is a partition of V; a light edge crossing the cut is an edge of minimum possible weight with one endpoint in S and the other in V − S; and a cut respects a set A of edges if no edge of A crosses the cut.
… do if edge (u, v) is safe for A
- In line 3 of the pseudocode given for GENERIC-MST(G, w), there must be a spanning tree T such that A ⊆ T; if there is an edge (u, v) ∈ T such that (u, v) ∉ A, then (u, v) is said to be safe for A. The rule is the theorem on the next slide.

11 How to decide on the light edge
Theorem: Suppose G = (V, E) is a connected, undirected graph with a real-valued weight function w defined on E, let A be a subset of E that is included in some MST of G, let (S, V − S) be a cut that respects A, and let (u, v) be a light edge crossing (S, V − S). Then edge (u, v) is safe to be added to A.
Proof (cut and paste): Let T be an MST of G with A ⊆ T, and suppose the light edge (u, v) ∉ T. Adding (u, v) to T creates a cycle, and since u and v lie on opposite sides of the cut, the cycle must cross the cut on at least one other edge (x, y) ∈ T (we would be paying for two crossings, contradicting minimality). To minimize the cost we exclude the heavier edge and include the light one: T' = T − {(x, y)} ∪ {(u, v)}. Since (u, v) is light, w(u, v) ≤ w(x, y); therefore w(T') = w(T) − w(x, y) + w(u, v) ≤ w(T), so T' is also an MST containing A ∪ {(u, v)}, and (u, v) is safe for A.
Overlapping subproblems suggest DP, but the MST structure leads to a more efficient greedy algorithm.

12 Both MST algorithms, Kruskal's and Prim's, determine a safe edge in line 3 of GENERIC-MST.
In Kruskal's algorithm, the set A is a forest. The safe edge added to A is always a least-weight edge connecting two distinct components; many induced subtrees gradually merge into each other.
In Prim's algorithm, the set A forms a single tree that grows like a snowball, as one mass. The safe edge added to A is always a least-weight edge connecting the tree to a vertex not in the tree.
Corollary: Let G = (V, E) be a connected, undirected graph with weight function w on E, let A be a subset of E that is included in some MST of G, and let C be a connected component (tree) in the forest G_A = (V, A). If (u, v) is a light edge connecting C to some other component in G_A, then edge (u, v) is safe for A.

13 Kruskal's Algorithm
- It finds a safe edge to add to the growing forest: a safe edge (u, v)
  - connecting any two trees in the forest and
  - having the least weight.
  - Let C1 and C2 denote the two trees that are connected by (u, v). Since (u, v) must be a light edge connecting C1 to some other tree, by the corollary it must be a safe edge for C1.
  - The next step is then the union of the two disjoint trees C1 and C2.
- Kruskal's algorithm is a greedy algorithm, because at each step it adds to the forest an edge of the least possible weight.
- An implementation of Kruskal's algorithm employs a disjoint-set data structure to maintain several disjoint sets of elements. Each set contains the vertices in a tree of the current forest. The operation FIND-SET(u) returns a representative element from the set that contains u.
- Thus, if FIND-SET(u) == FIND-SET(v), vertices u and v belong to the same tree; otherwise the two trees are combined, if (u, v) is a light edge, with the UNION procedure.

14 MST-KRUSKAL(G, w)
1 A ← ∅
2 for each v ∈ V[G] // for each vertex
3   do MAKE-SET(v) // create |V| one-vertex trees
4 sort the edges of E into nondecreasing order by weight w
5 for each edge (u, v) ∈ E, taken in nondecreasing order by weight
6   do if FIND-SET(u) ≠ FIND-SET(v) // check if u, v are in different trees
7     then A ← A ∪ {(u, v)} // if not in the same set
8       UNION(u, v) // merge two components
9 return A
The running time for G = (V, E) depends on the implementation of the disjoint-set data structure. Assume the disjoint-set-forest implementation with the union-by-rank and path-compression heuristics, since it is the asymptotically fastest implementation known. Initialization takes O(V) time, and sorting the edges in line 4 takes O(E lg E). There are O(E) operations on the disjoint-set forest, which in total take O(E α(E, V)) time, where α is the functional inverse of Ackermann's function. Since α(E, V) = O(lg E), the total running time of Kruskal's algorithm is O(E lg E).
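A compact Python rendering of MST-KRUSKAL, using a disjoint-set forest with union by rank and path compression (by halving); the edge-list representation (w, u, v) is an illustrative assumption.

def kruskal(vertices, edges):
    parent = {v: v for v in vertices}   # MAKE-SET for every vertex
    rank = {v: 0 for v in vertices}

    def find(v):                        # FIND-SET with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(u, v):                    # UNION by rank
        ru, rv = find(u), find(v)
        if rank[ru] < rank[rv]:
            ru, rv = rv, ru
        parent[rv] = ru
        if rank[ru] == rank[rv]:
            rank[ru] += 1

    A = []
    for w, u, v in sorted(edges):       # nondecreasing order by weight
        if find(u) != find(v):          # different trees: (u, v) is safe
            A.append((u, v, w))
            union(u, v)
    return A

# kruskal(['a','b','c'], [(1,'a','b'), (2,'b','c'), (3,'a','c')])
# -> [('a', 'b', 1), ('b', 'c', 2)]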


17 Prim's MST algorithm
- Operates much like Dijkstra's algorithm for finding shortest paths in a graph. Prim's algorithm has the property that the edges in the set A always form a single tree.
- Starting from an arbitrary root vertex r (tree A), it expands one vertex at a time until the tree spans all the vertices in V. At each turn, a light edge connecting a vertex in A to a vertex in V − A is added to the tree.
- On each iteration, construct T_{i+1} from T_i by adding the vertex not in T_i that is closest to those already in T_i (this is a greedy step!).
- The same corollary applies: the rule allows merging of edges that are safe for A; the algorithm terminates when all vertices are included in A, at which point the edges in A form a minimum spanning tree.
- This strategy is "greedy" since the tree is augmented at each step with an edge that contributes the minimum amount possible to the tree's weight.
- Needs a priority queue for locating the closest fringe vertex (a sketch follows below).
- Next: analysis from the notes of CLRS; see also the MIT lectures.
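A sketch of Prim's algorithm with Python's heapq as the priority queue, growing one tree from an arbitrary root r; the adjacency-dict representation {u: [(w, v), ...]} is an illustrative assumption.

import heapq

def prim(adj, r):
    in_tree = {r}
    tree_edges = []
    fringe = [(w, r, v) for w, v in adj[r]]   # candidate light edges
    heapq.heapify(fringe)
    while fringe and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(fringe)       # lightest edge across the cut
        if v in in_tree:
            continue                          # stale entry, skip it
        in_tree.add(v)                        # the greedy step
        tree_edges.append((u, v, w))
        for w2, x in adj[v]:
            if x not in in_tree:
                heapq.heappush(fringe, (w2, v, x))
    return tree_edges

With this lazy-deletion binary heap the running time is O(E lg V); a Fibonacci-heap priority queue improves it to O(E + V lg V), as noted on slide 8.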

18 Prim's Algorithm Example

19 Notes about Kruskal's algorithm
- The algorithm looks easier than Prim's but is harder to implement (checking for cycles!)
- Cycle checking: a cycle is created iff the added edge connects vertices in the same connected component
- Union-find algorithms perform this check efficiently

20 Shortest paths – Dijkstra's algorithm
Single-Source Shortest Paths Problem: Given a weighted connected graph G, find the shortest paths from a source vertex s to each of the other vertices.
Dijkstra's algorithm: Similar to Prim's MST algorithm, with a different way of computing numerical labels: among vertices not already in the tree, it finds the vertex u with the smallest sum d_v + w(v, u), where
- v is a vertex for which the shortest path has already been found on preceding iterations (such vertices form a tree)
- d_v is the length of the shortest path from the source to v
- w(v, u) is the length (weight) of the edge from v to u

21 Example (graph figure omitted; edge weights can be read off the sums in the trace)
Tree vertices | Remaining vertices
a(−, 0)       | b(a, 3)  c(−, ∞)  d(a, 7)  e(−, ∞)
b(a, 3)       | c(b, 3+4)  d(b, 3+2)  e(−, ∞)
d(b, 5)       | c(b, 7)  e(d, 5+4)
c(b, 7)       | e(d, 9)
e(d, 9)       |

22 Notes on Dijkstra's algorithm
- Doesn't work for graphs with negative weights
- Applicable to both undirected and directed graphs
- Efficiency:
  - O(|V|^2) for graphs represented by a weight matrix and an array implementation of the priority queue
  - O((|E| + |V|) log |V|) for graphs represented by adjacency lists and a min-heap implementation of the priority queue
  - O(|E| + |V| log |V|), amortized, with a Fibonacci heap
- Don't mix up Dijkstra's algorithm with Prim's algorithm!
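A sketch of Dijkstra's algorithm matching the min-heap bound above; nonnegative edge weights and the adjacency dict {u: [(w, v), ...]} are assumptions for illustration.

import heapq

def dijkstra(adj, s):
    dist = {v: float('inf') for v in adj}
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, v = heapq.heappop(heap)    # tree vertex with the smallest label
        if d > dist[v]:
            continue                  # stale entry, skip it
        for w, u in adj[v]:
            if d + w < dist[u]:       # compare d_v + w(v, u) with d_u
                dist[u] = d + w
                heapq.heappush(heap, (dist[u], u))
    return dist

Run on the slide-21 example graph, it reproduces the final labels of the trace: a:0, b:3, d:5, c:7, e:9.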

23 Coding Problem
Coding: assignment of bit strings to alphabet characters.
Codewords: bit strings assigned to the characters of the alphabet.
Two types of codes:
- fixed-length encoding (e.g., ASCII)
- variable-length encoding (e.g., Morse code)
Prefix-free codes: no codeword is a prefix of another codeword.
Problem: If the frequencies of character occurrences are known, what is the best binary prefix-free code?

24 Huffman codes
- Any binary tree with edges labeled with 0s and 1s yields a prefix-free code for the characters assigned to its leaves.
- An optimal binary tree, minimizing the expected (weighted average) length of a codeword, can be constructed as follows.
Huffman's algorithm: Initialize n one-node trees with the alphabet characters, weighting each tree with its character's frequency. Repeat the following step n − 1 times: join the two binary trees with the smallest weights into one (as left and right subtrees) and make its weight equal to the sum of the weights of the two trees. Mark edges leading to left and right subtrees with 0s and 1s, respectively. A sketch follows below.
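A Python sketch of Huffman's algorithm using heapq: repeatedly join the two lightest trees, then read the codewords off the merged tree (0 = left edge, 1 = right edge). The tuple-based tree representation is an illustrative assumption.

import heapq
from itertools import count

def huffman(freq):
    order = count()   # tie-breaker so the heap never compares trees
    heap = [(f, next(order), ch) for ch, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # the two smallest-weight trees
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(order), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node
            walk(node[0], prefix + '0')      # left edge labeled 0
            walk(node[1], prefix + '1')      # right edge labeled 1
        else:                                # leaf: a character
            codes[node] = prefix or '0'
    walk(heap[0][2], '')
    return codes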

25 Example
character:  A     B     C     D     _
frequency:  0.35  0.1   0.2   0.2   0.15
codeword:   11    100   00    01    101
average bits per character: 2.25
for fixed-length encoding: 3
compression ratio: (3 − 2.25)/3 × 100% = 25%
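A quick check of the arithmetic, with the frequencies and codewords as reconstructed above from Levitin's textbook version of this example:

freq = {'A': 0.35, 'B': 0.1, 'C': 0.2, 'D': 0.2, '_': 0.15}
code = {'A': '11', 'B': '100', 'C': '00', 'D': '01', '_': '101'}

avg = sum(freq[ch] * len(code[ch]) for ch in freq)
print(avg)                  # 2.25 bits per character (up to float rounding)
print((3 - avg) / 3 * 100)  # 25.0 -> the 25% compression ratio vs. 3-bit fixed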
