

Algorithm Design and Analysis (알고리즘 설계 및 분석): Foundations of Algorithms. 유관우, Digital Media Lab.

Digital Media Lab. 2

Chap. 4: Greedy Approach

A greedy algorithm grabs data items in sequence, each time making the "best" choice, without thinking about the future. It is efficient, but cannot solve many problems that dynamic programming can.
- Greedy approach (G.A.): make locally optimal choices one by one, with no reconsideration.
- Dynamic programming (D.P.): solve smaller problems, then combine them into an optimal solution.

Example: the coin-change problem.
- Goal: give correct change with as few coins as possible.
- Approach: always choose the largest possible coin.
- Question: does this sequence of locally optimal choices really give an optimal solution? A proof is required!
- With the Korean coin system, the greedy approach works. In a strange country with coins of 8, 6, and 2, making change for 12 defeats greedy (8 + 2 + 2 uses three coins, while 6 + 6 uses two); such systems require dynamic programming.
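The two systems above can be compared directly. A minimal sketch in Python (the function names are mine; the denominations and the 8/6/2 counterexample are the slide's):

```python
def greedy_change(coins, amount):
    """Greedy coin change: repeatedly take the largest coin that fits.
    Returns the list of coins used, or None if the amount cannot be made."""
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used if amount == 0 else None

def dp_change(coins, amount):
    """Dynamic programming: minimum number of coins for every value 0..amount."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for v in range(1, amount + 1):
        for c in coins:
            if c <= v and best[v - c] + 1 < best[v]:
                best[v] = best[v - c] + 1
    return best[amount] if best[amount] < INF else None

# Korean-style system: greedy is optimal.
print(greedy_change([500, 100, 50, 10], 660))   # [500, 100, 50, 10]
# "Strange country" {8, 6, 2}, amount 12: greedy uses 3 coins, DP finds 2.
print(greedy_change([8, 6, 2], 12))             # [8, 2, 2]
print(dp_change([8, 6, 2], 12))                 # 2
```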

Digital Media Lab. 3

Minimum Spanning Trees (MST)
- Setting: a connected, weighted, undirected graph (as opposed to a disconnected graph).
- Tree: a connected acyclic graph. Equivalently: n vertices, n-1 edges, and connected; there is a unique path between any two vertices.
- Related notions: rooted tree, spanning tree, spanning forest.
- Running example: G = (V, E), a connected weighted undirected graph on vertices v1, ..., v5.

[Figure: the example graph, one of its spanning trees, an MST, and a spanning forest.]

Digital Media Lab. 4

- Weight of a spanning tree T = (V, F): the sum of its edge weights, Σ_{(u,v)∈F} W[u,v].
- MST: a spanning tree with minimum such weight. (Uniqueness? Not guaranteed.) Note |F| = |V| - 1.
- Input: the adjacency matrix W of G = (V, E), connected, weighted, undirected.
- Output: an MST T = (V, F), with F ⊆ E.

Greedy approach:

    F = Ø                                          {initialization}
    while not solved yet do {
        select an edge by a locally optimal choice  {selection}
        if it creates no cycle then add it to F     {feasibility check}
        if T = (V, F) is a spanning tree then exit  {solution check}
    }

Prim's algorithm and Kruskal's algorithm instantiate this scheme with different locally optimal choices.

Digital Media Lab. 5

Prim's Algorithm

    F = Ø; Y = {v1};                    // initialization
    while not solved do {
        select v ∈ V - Y nearest to Y;  // selection + feasibility
        add v to Y; add the corresponding edge to F;
        if Y = V then exit;
    }
    // no negative-weight edges (the implementation marks finished vertices with -1)

[Figure: a trace on the example graph. Starting from Y = {v1}, the algorithm adds (v1,v2), then (v2,v3), then (v3,v5), then (v3,v4).]

Digital Media Lab. 6

Data structures: the n x n adjacency matrix W;
nearest[i] = index of the vertex in Y nearest to v_i;
distance[i] = weight of the edge (v_i, nearest[i]).

    procedure Prim(n, W; var F : set_of_edges);
    var i, near : index; min : number; e : edge;
        nearest  : array [2..n] of index;
        distance : array [2..n] of number;
    {
      F = Ø;
      for i = 2 to n do { nearest[i] = 1; distance[i] = W[1,i]; }
      repeat n-1 times {
        min = ∞;
        for i = 2 to n do
          if 0 ≤ distance[i] < min then { min = distance[i]; near = i; }
        e = (near, nearest[near]); add e to F;
        distance[near] = -1;            // mark v_near as added to Y
        for i = 2 to n do
          if W[i,near] < distance[i] then { distance[i] = W[i,near]; nearest[i] = near; }
      }
    }
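A runnable Python port of the pseudocode, assuming the weight matrix recoverable from the sorted edge list on slide 9 (vertices v1..v5 become indices 0..4):

```python
import math

def prim(n, W):
    """Prim's MST on an n-vertex graph given as an adjacency matrix W
    (W[i][j] = edge weight, math.inf if no edge). The tree is grown
    from vertex 0. Returns the list of MST edges."""
    nearest = [0] * n                        # nearest[i]: vertex in Y closest to i
    distance = [W[0][i] for i in range(n)]   # distance[i]: weight of that edge
    distance[0] = -1                         # -1 marks "already in Y"
    F = []
    for _ in range(n - 1):
        # selection: vertex outside Y with the cheapest connecting edge
        near = min((i for i in range(n) if distance[i] >= 0),
                   key=lambda i: distance[i])
        F.append((nearest[near], near))
        distance[near] = -1
        for i in range(n):
            if 0 <= W[near][i] < distance[i]:
                distance[i] = W[near][i]
                nearest[i] = near
    return F

# The slides' five-vertex example (v1..v5 -> indices 0..4).
INF = math.inf
W = [[INF, 1, 3, INF, INF],
     [1, INF, 3, 6, INF],
     [3, 3, INF, 4, 2],
     [INF, 6, 4, INF, 5],
     [INF, INF, 2, 5, INF]]
F = prim(5, W)
print(F, sum(W[u][v] for u, v in F))   # four edges, total weight 10
```

On ties (here (v1,v3) versus (v2,v3), both weight 3) the port may pick a different edge than the slide's trace, but either choice yields an MST of the same total weight.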

Digital Media Lab. 7

(Every-case) time complexity analysis: T(n) = 2(n-1)(n-2) ∈ Θ(n²).

- Q1: Is the output a spanning tree? Yes (easy).
- Q2: Is it a minimum spanning tree? A formal proof is required.
- Q3: How should F be implemented? One way: a parameter var parent : array [2..n] of index, setting parent[near] = nearest[near] when e = (near, nearest[near]) is added. A better way: make the array nearest a parameter rather than a local variable; it already holds exactly this information!

Toward the proof of MST-ness (proof for Prim's algorithm):
- A set F ⊆ E is promising if F ⊆ T for some MST T.
- Example: {(v1,v2),(v1,v3)} is promising; {(v2,v4)} is not.

Digital Media Lab. 8

Lemma. Let F be a promising subset of E and let Y be the set of vertices connected by F. If e is a minimum-weight edge connecting a vertex in Y to a vertex in V - Y, then F ∪ {e} is promising.

Proof. F is promising, so there exists an MST (V, F´) with F ⊆ F´. If e ∈ F´, then F ∪ {e} ⊆ F´ and we are done. Otherwise (e ∉ F´), F´ ∪ {e} contains exactly one cycle. That cycle must contain some other edge e´ ∈ F´ connecting a vertex in Y to a vertex in V - Y, and weight(e´) ≥ weight(e), since e is a minimum-weight edge crossing the cut. Then F´ ∪ {e} - {e´} is also an MST, and F ∪ {e} ⊆ F´ ∪ {e} - {e´}, so F ∪ {e} is promising. ∎

Theorem. Prim's algorithm produces an MST.
Proof (induction on the iterations):
- Basis: Ø is promising.
- Induction hypothesis: assume F is promising.
- Induction step: let e be the edge selected in the next iteration; then F ∪ {e} is promising by the Lemma. ∎

[Figure: Y versus V - Y, with the minimum-weight crossing edge e and the replaced edge e´.]

Digital Media Lab. 9

Kruskal's Algorithm

    F = Ø;
    create n disjoint subsets {v1}, {v2}, ..., {vn};
    sort the edges in E by weight;
    while not solved do {
        select the next edge;
        if the edge connects two disjoint subsets {
            merge the subsets; add the edge to F;
        }
        if all the subsets are merged then exit;
    }

[Figure: a trace on the example graph. Sorted edges: (v1,v2) 1, (v3,v5) 2, (v1,v3) 3, (v2,v3) 3, (v3,v4) 4, (v4,v5) 5, (v2,v4) 6. The first three are accepted, (v2,v3) is rejected because it would create a cycle, and (v3,v4) completes the tree.]

Digital Media Lab. 10

    procedure kruskal(n, m : integer; E : set_of_edges; var F : set_of_edges);
    var i, j : index; p, q : set_pointer; e : edge;
    {
      sort the m edges of E by weight;
      F = Ø;
      initial(n);                       // initialize n disjoint sets
      repeat
        e = next edge; (i, j) = e;
        p = find(i); q = find(j);
        if p ≠ q then { merge(p, q); add e to F; }
      until |F| = n-1;
    }

Worst-case time complexity analysis: Θ(m log m).
- Sorting: Θ(m log m)
- Initialization: Θ(n)
- Total time for find/merge: Θ(m α(m, n))
- Remaining operations: O(m)
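A Python sketch of the procedure, with the disjoint-set forest (initial/find/merge) written out explicitly; the edge list is the sorted list from slide 9:

```python
def kruskal(n, edges):
    """Kruskal's MST. `edges` is a list of (weight, u, v) with vertices 0..n-1.
    Uses a disjoint-set forest with path compression and union by rank."""
    parent = list(range(n))
    rank = [0] * n

    def find(x):                      # find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def merge(p, q):                  # union by rank
        if rank[p] < rank[q]:
            p, q = q, p
        parent[q] = p
        if rank[p] == rank[q]:
            rank[p] += 1

    F = []
    for w, u, v in sorted(edges):
        p, q = find(u), find(v)
        if p != q:                    # edge joins two disjoint subsets
            merge(p, q)
            F.append((u, v, w))
        if len(F) == n - 1:
            break
    return F

# The slides' example, vertices v1..v5 -> 0..4, edges as sorted on slide 9.
edges = [(1, 0, 1), (2, 2, 4), (3, 0, 2), (3, 1, 2),
         (4, 2, 3), (5, 3, 4), (6, 1, 3)]
F = kruskal(5, edges)
print(F, sum(w for _, _, w in F))   # four edges, total weight 10
```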

Digital Media Lab. 11

Lemma. Let F ⊆ E be promising and let e be a minimum-weight edge in E - F such that F ∪ {e} has no cycle. Then F ∪ {e} is promising.

Proof. F is promising, so there exists an MST (V, F´) with F ⊆ F´. If e ∈ F´, then F ∪ {e} ⊆ F´ and we are done. Otherwise, F´ ∪ {e} contains exactly one cycle. Since F ∪ {e} has no cycle, that cycle contains some edge e´ ∈ F´ with e´ ∉ F, i.e. e´ ∈ E - F. Moreover F ∪ {e´} ⊆ F´, so F ∪ {e´} has no cycle; therefore weight(e) ≤ weight(e´), because e is a minimum-weight edge in E - F whose addition creates no cycle. Then F´ ∪ {e} - {e´} is also an MST, and F ∪ {e} ⊆ F´ ∪ {e} - {e´}, so F ∪ {e} is promising. ∎

Theorem. Kruskal's algorithm produces an MST.

Comparison:
- Prim's algorithm: Θ(n²); Kruskal's algorithm: Θ(m log m). (Note: n-1 ≤ m ≤ n(n-1)/2.)
- Prim's algorithm: Θ(m log n) with a binary heap, Θ(m + n log n) with a Fibonacci heap.

Digital Media Lab. 12

Dijkstra's Algorithm for the Single-Source Shortest Path Problem

- All-pairs shortest path problem: Θ(n³), Floyd's algorithm (dynamic programming; no negative-weight cycle allowed).
- Single-source shortest path problem: Dijkstra's algorithm, Θ(n²), a greedy approach similar to Prim's.
- Input: a weighted directed graph given as an adjacency matrix (no negative-weight edge).

Start with Y = {v1} (the source) and repeatedly choose the vertex nearest to v1:

    Y = {v1}; F = Ø;      // F: the shortest-paths tree
    while not solved do {
        select v from V - Y nearest to v1, using only vertices in Y as intermediates;
        add v to Y; add the edge touching v to F;
        if Y = V then exit;
    }

Digital Media Lab. 13

- touch[i] = index of the vertex v ∈ Y such that (v, v_i) is the last edge on the shortest path from v1 to v_i using only vertices in Y as intermediates.
- length[i] = length of that shortest path.

Every-case time complexity: T(n) = 2(n-1)² ∈ Θ(n²).
With a binary heap: Θ(m log n); with a Fibonacci heap: Θ(m + n log n).

[Figure: a trace on a five-vertex directed example; v5 is the first vertex selected.]

Digital Media Lab. 14

    procedure dijkstra(n : integer; W; var F : set_of_edges);
    var i, near : index; min : number; e : edge;
        touch  : array [2..n] of index;
        length : array [2..n] of number;
    {
      F = Ø;
      for i = 2 to n do { touch[i] = 1; length[i] = W[1,i]; }
      repeat n-1 times {
        min = ∞;
        for i = 2 to n do
          if 0 ≤ length[i] < min then { min = length[i]; near = i; }
        e = (touch[near], near); add e to F;
        for i = 2 to n do
          if length[near] + W[near,i] < length[i] then
            { length[i] = length[near] + W[near,i]; touch[i] = near; }
        length[near] = -1;              // mark v_near as added to Y
      }
    }

Note: touch[2..n] encodes the shortest-paths tree.
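A Python port of the procedure. The slides' example weights did not survive the transcript, so the five-vertex directed graph below is a hypothetical one, chosen so that (as on slide 13) v5 is the first vertex selected:

```python
import math

def dijkstra(n, W):
    """Dijkstra's single-source shortest paths from vertex 0, with W the
    adjacency matrix (W[i][j] = weight of directed edge i->j, math.inf
    if absent). Returns (length, touch): distances and last-edge tree."""
    touch = [0] * n                          # touch[i]: predecessor of i
    length = [W[0][i] for i in range(n)]     # current best distance from v1
    length[0] = 0
    in_Y = [False] * n
    in_Y[0] = True
    for _ in range(n - 1):
        # greedy choice: nearest vertex outside Y
        near = min((i for i in range(n) if not in_Y[i]),
                   key=lambda i: length[i])
        in_Y[near] = True
        for i in range(n):                   # relax edges leaving `near`
            if not in_Y[i] and length[near] + W[near][i] < length[i]:
                length[i] = length[near] + W[near][i]
                touch[i] = near
    return length, touch

INF = math.inf
# Hypothetical 5-vertex directed graph (v1..v5 -> 0..4).
W = [[0, 7, 4, 6, 1],
     [INF, 0, INF, INF, INF],
     [INF, 2, 0, INF, INF],
     [INF, INF, INF, 0, INF],
     [INF, INF, INF, 1, 0]]
length, touch = dijkstra(5, W)
print(length)   # [0, 6, 4, 2, 1]
print(touch)    # [0, 2, 0, 4, 0]
```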

Digital Media Lab. 15

Scheduling

Minimizing total time in the system. Example: t1 = 5, t2 = 10, t3 = 4.

    Schedule     Total time in the system
    [1, 2, 3]    5 + (5+10) + (5+10+4) = 39
    [1, 3, 2]    5 + (5+4) + (5+4+10) = 33
    [2, 1, 3]    10 + (10+5) + (10+5+4) = 44
    [2, 3, 1]    10 + (10+4) + (10+4+5) = 43
    [3, 1, 2]    4 + (4+5) + (4+5+10) = 32   (optimal)
    [3, 2, 1]    4 + (4+10) + (4+10+5) = 37

Algorithm: sort the jobs in non-decreasing order of service time and schedule them in that order.

Scheduling with deadlines. Each job takes one time unit and must finish by its deadline to earn its profit.

    Job   Deadline   Profit
    1     2          30
    2     1          35
    3     2          25
    4     1          40

    Schedule   Total profit
    [1, 3]     30 + 25 = 55
    [2, 1]     35 + 30 = 65
    [2, 3]     35 + 25 = 60
    [3, 1]     25 + 30 = 55
    [4, 1]     40 + 30 = 70   (optimal)
    [4, 3]     40 + 25 = 65
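The total-time table above can be regenerated by brute force, and the greedy rule (shortest service time first) indeed picks the optimal order; a quick Python check (the function name is mine):

```python
from itertools import permutations

def total_time(order, t):
    """Total time in the system: each job's finish time is the sum of the
    service times of all jobs up to and including it."""
    return sum(sum(t[j] for j in order[:i + 1]) for i in range(len(order)))

t = {1: 5, 2: 10, 3: 4}
for order in permutations(t):
    print(order, total_time(order, t))

# Greedy: serve the shortest jobs first.
best = tuple(sorted(t, key=t.get))
print(best, total_time(best, t))   # (3, 1, 2) 32
```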

Digital Media Lab. 16

Strategy: sort the jobs in non-increasing order of profit, then schedule each job as late as possible, that is, in the latest still-open time slot at or before its deadline; discard a job if no slot remains. The resulting schedule is optimal.

[Example: a larger job table with deadlines and profits, worked on the slide.]

Time complexity: Θ(n log n). Disjoint-set-forest operations are needed to find the latest open slot efficiently!
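A sketch of the strategy in Python on slide 15's deadline example; a simple backward scan over the slots stands in for the disjoint-set forest the slide calls for:

```python
def schedule_with_deadlines(jobs):
    """Greedy job scheduling: `jobs` is a list of (profit, deadline) with
    unit-time jobs. Consider jobs in non-increasing profit order and put
    each in the latest free slot at or before its deadline.
    Returns (slots, total_profit)."""
    max_d = max(d for _, d in jobs)
    slots = [None] * (max_d + 1)          # slots[1..max_d]; slot 0 unused
    total = 0
    for profit, deadline in sorted(jobs, reverse=True):
        for t in range(min(deadline, max_d), 0, -1):   # latest slot first
            if slots[t] is None:
                slots[t] = (profit, deadline)
                total += profit
                break                     # job placed; otherwise discarded
    return slots[1:], total

# Slide 15's example as (profit, deadline) pairs for jobs 1..4.
jobs = [(30, 2), (35, 1), (25, 2), (40, 1)]
slots, total = schedule_with_deadlines(jobs)
print(total)    # 70: profit-40 job in slot 1, profit-30 job in slot 2
```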

Digital Media Lab. 17

G.A. versus D.P.: the Knapsack Problem

- Greedy approach: more efficient and simpler, but with a difficult proof.
- Dynamic programming: more powerful, with an easy proof.
- Example: greedy fails on the 0/1 knapsack problem; DP solves it.

The 0/1 knapsack problem:
S = {item 1, item 2, ..., item n};
w_i = weight of item i; p_i = profit of item i;
W = maximum weight the knapsack can hold (w_i, p_i, W positive integers).
Determine A ⊆ S such that Σ_{i∈A} p_i is maximized subject to Σ_{i∈A} w_i ≤ W.

Brute-force approach: consider all subsets. There are 2^n subsets, giving an exponential-time algorithm.

Digital Media Lab. 18

Greedy strategies for the 0/1 knapsack problem:
1. Largest profit first: incorrect.
   Example: (w1, w2, w3) = (25, 10, 10), (p1, p2, p3) = (10, 9, 9), W = 30.
   Greedy profit: 10; optimal: 18 (items 2 and 3).
2. Lightest item first: incorrect.
3. Largest profit per unit weight first: still incorrect for 0/1 knapsack.
   Example: (w1, w2, w3) = (5, 10, 20), (p1, p2, p3) = (50, 60, 140), W = 30.
   Greedy profit: 190 (items 1 and 3); optimal: 200 (items 2 and 3).

Greedy approach to the fractional knapsack problem: strategy 3 does guarantee an optimal solution (proof necessary).
Example (same data): take items 1 and 3 whole, plus 5/10 of item 2: 50 + 140 + (5/10)(60) = 220.
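For the fractional variant, strategy 3 can be sketched directly (the function name is mine; the data is slide 18's second example):

```python
def fractional_knapsack(items, W):
    """Greedy fractional knapsack: `items` is a list of (weight, profit).
    Take items in order of profit per unit weight, splitting the last
    item that does not fit whole. Returns the total profit."""
    total = 0.0
    for w, p in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if W <= 0:
            break
        take = min(w, W)              # whole item, or the fraction that fits
        total += p * take / w
        W -= take
    return total

# Slide 18's example: greedy is optimal for the fractional variant.
items = [(5, 50), (10, 60), (20, 140)]
print(fractional_knapsack(items, 30))   # 220.0
```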

Digital Media Lab. 19

D.P. Approach to the 0/1-Knapsack Problem

Principle of optimality? Yes. Let A be an optimal subset of S:
1. If item n ∉ A, then A is optimal for S - {item n}.
2. If item n ∈ A, then A - {item n} is optimal for S - {item n} with capacity W - w_n.

Let P[i, w] = optimal profit for {item 1, ..., item i} with w being the capacity left in the knapsack:
P[i, w] = max(P[i-1, w], p_i + P[i-1, w - w_i]) if w_i ≤ w; otherwise P[i-1, w].
The maximum profit is P[n, W], with P : array [0..n, 0..W] of integer.

Time: Θ(nW); space: Θ(nW) ⇒ a pseudo-polynomial-time algorithm.
(Note: the 0/1 knapsack problem is NP-complete.)
⇒ If W is big, terrible performance!
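A direct Python transcription of the recurrence, run on slide 18's two counterexamples, where DP recovers the optima that greedy missed:

```python
def knapsack_01(weights, profits, W):
    """0/1 knapsack by DP: P[i][w] = best profit using items 1..i with
    capacity w. Θ(nW) time and space (pseudo-polynomial)."""
    n = len(weights)
    P = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            P[i][w] = P[i - 1][w]                 # case: skip item i
            if weights[i - 1] <= w:               # case: take item i
                P[i][w] = max(P[i][w],
                              P[i - 1][w - weights[i - 1]] + profits[i - 1])
    return P[n][W]

# Slide 18's counterexamples to the greedy strategies.
print(knapsack_01([25, 10, 10], [10, 9, 9], 30))    # 18
print(knapsack_01([5, 10, 20], [50, 60, 140], 30))  # 200
```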

Digital Media Lab. 20

Refinement

Θ(nW) time and space may be too much! ⇒ Improve to O(2^n):
Idea: not every entry of each row of P[0..n, 0..W] needs to be computed.
- Row n: only P[n, W] (1 entry).
- Row n-1: only P[n-1, W] and P[n-1, W - w_n] (2 entries).
- Row n-2: at most 4 entries, and so on; stop when i = 1 or w ≤ 0.
(Because P[i, w] is computed only from P[i-1, w] and P[i-1, w - w_i].)
Total number of entries ≤ 1 + 2 + 4 + ... + 2^{n-1} = 2^n - 1 ⇒ O(2^n) time.
∴ Worst-case time complexity for the 0/1 knapsack problem using D.P.: O(min(2^n, nW)).