1
CS 3343: Analysis of Algorithms Lecture 19: Introduction to Greedy Algorithms
2
Outline
– Review of DP
– Greedy algorithms: similar to DP, not an actual algorithm but a meta-algorithm
3
Two steps to dynamic programming
1. Formulate the solution as a recurrence relation of solutions to subproblems.
2. Specify an order of evaluation for the recurrence so you always have what you need.
4
Restaurant location problem
You work in the fast food business. Your company plans to open up new restaurants in Texas along I-35. There are many towns along the highway; call them t_1, t_2, …, t_n. A restaurant at t_i has estimated annual profit p_i. No two restaurants can be located within 10 miles of each other due to regulation. Your boss wants to maximize the total profit; you want a big bonus.
[Figure: towns along the highway, with the 10-mile separation shown.]
5
A DP algorithm
Suppose you've already found the optimal solution. It will either include t_n or not include t_n.
Case 1: t_n not included in the optimal solution – the best solution is the same as the best solution for t_1, …, t_{n-1}.
Case 2: t_n included in the optimal solution – the best solution is p_n + the best solution for t_1, …, t_j, where j < n is the largest index such that dist(t_j, t_n) ≥ 10.
6
Recurrence formulation
Let S(i) be the total profit of the optimal solution when the first i towns are considered (not necessarily selected); S(n) is the optimal solution to the complete problem.
S(n) = max { S(n-1), S(j) + p_n }, where j < n & dist(t_j, t_n) ≥ 10
Generalize:
S(i) = max { S(i-1), S(j) + p_i }, where j < i & dist(t_j, t_i) ≥ 10
Number of sub-problems: n. Boundary condition: S(0) = 0. Dependency: S(i) depends on S(i-1) and S(j), j < i.
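To make the recurrence concrete, here is a minimal bottom-up sketch in Python. It is not from the lecture; the function name max_profit, the mile-marker representation of towns, and the inner scan that finds j are my own assumptions.

```python
def max_profit(positions, profits, min_dist=10):
    """DP for the restaurant location problem (illustrative sketch).

    positions: mile markers of towns t_1..t_n, sorted in increasing order
    profits:   estimated annual profit p_i for a restaurant at each town
    Returns S(n), the maximum total profit under the min_dist constraint.
    """
    n = len(positions)
    S = [0] * (n + 1)                  # S[0] = 0 is the boundary condition (dummy town)
    for i in range(1, n + 1):
        # Largest j < i with dist(t_j, t_i) >= min_dist; j = 0 if no such town exists.
        j = 0
        for k in range(i - 1, 0, -1):
            if positions[i - 1] - positions[k - 1] >= min_dist:
                j = k
                break
        S[i] = max(S[i - 1],                   # town i not selected
                   S[j] + profits[i - 1])      # town i selected
    return S[n]
```

For instance, with towns at mile markers [0, 5, 7, 9, 15] and profits [6, 7, 9, 8, 3], the sketch returns 10 (select the towns at miles 5 and 15). The inner scan for j is what gives the O(nk) running time discussed on the Complexity slide.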
7
Example
[Slide: a worked example with a row of towns, showing the distance (mi) between consecutive towns and each town's profit (in $100k). S(i) is filled in left to right using S(i) = max { S(i-1), S(j) + p_i }, j < i & dist(t_j, t_i) ≥ 10, starting from a dummy town with S(0) = 0. Optimal total profit: 26.]
8
Complexity
Time: O(nk), where k is the maximum number of towns that are within 10 miles to the left of any town. In the worst case this is O(n^2); it can be reduced to O(n) by pre-processing.
Memory: Θ(n)
9
Knapsack problem
Each item has a value and a weight. Objective: maximize value. Constraint: the knapsack has a weight limitation.
Three versions:
– 0-1 knapsack problem: take each item or leave it
– Fractional knapsack problem: items are divisible
– Unbounded knapsack problem: unlimited supplies of each item
Which one is easiest to solve? We studied the 0-1 problem.
10
Formal definition (0-1 problem)
The knapsack has weight limit W. Items are labeled 1, 2, …, n (arbitrarily) and have weights w_1, w_2, …, w_n and values v_1, v_2, …, v_n.
– Assume all weights are integers.
– For practical reasons, only consider items with w_i < W.
Objective: find a subset of items S such that Σ_{i∈S} w_i ≤ W and Σ_{i∈S} v_i is maximal among all such (feasible) subsets.
11
A DP algorithm
Suppose you've found the optimal solution S.
Case 1: item n is included – find an optimal solution using items 1, 2, …, n-1 with weight limit W - w_n (item n occupies w_n of the total weight limit W).
Case 2: item n is not included – find an optimal solution using items 1, 2, …, n-1 with weight limit W.
12
Recursive formulation
Let V[i, w] be the optimal total value when items 1, 2, …, i are considered for a knapsack with weight limit w => V[n, W] is the optimal solution.
V[n, W] = max { V[n-1, W - w_n] + v_n, V[n-1, W] }
Generalize:
V[i, w] = max { V[i-1, w - w_i] + v_i (item i is taken), V[i-1, w] (item i not taken) }
V[i, w] = V[i-1, w] if w_i > w (item i cannot be taken)
Boundary conditions: V[i, 0] = 0, V[0, w] = 0. Number of sub-problems = ?
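A bottom-up sketch of this recurrence in Python, with a traceback to recover the chosen items. This is illustrative code rather than the lecture's; the name knapsack_01 and the traceback step are my additions.

```python
def knapsack_01(weights, values, W):
    """0-1 knapsack DP. Returns (optimal value, chosen item indices, 1-based)."""
    n = len(weights)
    # V[i][w]: best value using items 1..i with weight limit w
    V = [[0] * (W + 1) for _ in range(n + 1)]      # V[0][w] = 0 and V[i][0] = 0
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            if wi > w:
                V[i][w] = V[i - 1][w]                    # item i cannot be taken
            else:
                V[i][w] = max(V[i - 1][w],               # item i not taken
                              V[i - 1][w - wi] + vi)     # item i taken
    # Trace back through the table to find which items were taken.
    chosen, w = [], W
    for i in range(n, 0, -1):
        if V[i][w] != V[i - 1][w]:      # the value changed, so item i was taken
            chosen.append(i)
            w -= weights[i - 1]
    return V[n][W], chosen
```

On the example of the next slides (weights 2, 4, 3, 5, 2, 6; values 2, 3, 3, 6, 4, 9; W = 10), this returns value 15 with items 6, 5, 1.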
13
Example
n = 6 (# of items), W = 10 (weight limit)
Items (weight, value): (2, 2), (4, 3), (3, 3), (5, 6), (2, 4), (6, 9)
14
[Slide: the empty DP table V[i, w] for this example, with rows i = 0..6 (each item i listed with its weight w_i and value v_i) and columns w = 0..10; row i = 0 and column w = 0 are all zeros. Entries are filled with V[i, w] = max { V[i-1, w - w_i] + v_i (item i taken), V[i-1, w] (item i not taken) }, and V[i, w] = V[i-1, w] if w_i > w.]
15
[Slide: the table filled in row by row with the same recurrence; the final entry is V[6, 10] = 15.]
16
[Slide: the same filled table with the traceback highlighted.]
Items taken: 6, 5, 1. Weight: 6 + 2 + 2 = 10. Value: 9 + 4 + 2 = 15. Optimal value: 15.
17
Time complexity
Θ(nW). Polynomial? Only pseudo-polynomial; it works well if W is small.
Consider the following items (weight, value): (10, 5), (15, 6), (20, 5), (18, 6), with weight limit 35.
– Optimal solution: items 2 and 4 (value = 12).
– Brute-force enumeration: 2^4 = 16 subsets.
– Dynamic programming: fill up a 4 x 35 = 140-entry table.
What's the problem? Many entries are unused: no such weight combination exists. Top-down (memoized) evaluation may be better.
18
Events scheduling problem
A list of events to schedule: e_i has start time s_i and finishing time f_i, indexed such that f_i < f_j if i < j (i.e., sorted by finishing time).
Each event has a value v_i.
Schedule to achieve the largest total value; you can attend only one event at any time.
[Figure: events e_1, …, e_9 drawn as intervals along a time line, each with its start time s_i and finishing time f_i.]
19
Events scheduling problem
V(i) is the optimal value that can be achieved when the first i events are considered.
V(n) = max { V(n-1) (e_n not selected), V(j) + v_n (e_n selected) }, where j < n and f_j < s_n (take the largest such j).
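A minimal sketch of this recurrence in Python, assuming the events are already sorted by finishing time as stated above; the function name and the use of bisect to find j are my own choices.

```python
import bisect

def max_event_value(starts, finishes, values):
    """Weighted event scheduling DP (illustrative sketch).

    Events are 1-based and sorted so that finishes[0] <= ... <= finishes[n-1].
    V[i] is the best total value considering the first i events.
    """
    n = len(starts)
    V = [0] * (n + 1)
    for i in range(1, n + 1):
        # Largest j < i with finishes[j-1] < starts[i-1]; binary search works
        # because the finishing times are sorted.
        j = bisect.bisect_left(finishes, starts[i - 1], 0, i - 1)
        V[i] = max(V[i - 1],                 # e_i not selected
                   V[j] + values[i - 1])     # e_i selected
    return V[n]
```

With the binary search this runs in O(n log n) once the events are sorted; a linear scan for j, as in the restaurant problem, would also work.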
20
Restaurant location problem 2
Now the objective is to maximize the number of new restaurants (subject to the distance constraint). In other words, we assume that each restaurant makes the same profit, no matter where it is opened.
21
A DP algorithm
Exactly as before, but with p_i = 1 for all i:
S(i) = max { S(i-1), S(j) + 1 }, where j < i & dist(t_j, t_i) ≥ 10
(compare with the general recurrence S(i) = max { S(i-1), S(j) + p_i }, j < i & dist(t_j, t_i) ≥ 10)
22
Example
Natural greedy: take each town as soon as the distance constraint allows, giving 1 + 1 + 1 + 1 = 4.
[Slide: the same row of towns with every profit equal to 1 ($100k); the DP table S(i), filled with S(i) = max { S(i-1), S(j) + 1 }, j < i & dist(t_j, t_i) ≥ 10 and a dummy S(0) = 0, also reaches the optimal value 4.]
Maybe greedy is OK here? Does it work for all cases?
23
Comparison
[Slide: the uniform-profit instance (all profits 1, optimal value 4) and the original instance (varying profits, optimal value 26) shown side by side, each with its distances and S(i) table.]
Uniform profits:
– Benefit of taking t_1 rather than t_2? t_1 gives you more choices for the future.
– Benefit of waiting to see t_2? None!
Varying profits:
– Benefit of taking t_1 rather than t_2? t_1 gives you more choices for the future.
– Benefit of waiting to see t_2? t_2 may have a bigger profit.
24
Moral of the story
If a better opportunity may come along next, you may want to hold off on your decision. Otherwise, grasp the current opportunity immediately, because there is no reason to wait.
25
Greedy algorithm
For certain problems, DP is overkill: a greedy algorithm may be guaranteed to give you the optimal solution, and it is much more efficient.
26
Formal argument
Claim 1: if A = [m_1, m_2, …, m_k] is an optimal solution to the restaurant location problem for a set of towns [t_1, …, t_n], where m_1 < m_2 < … < m_k are the indices of the selected towns, then B = [m_2, m_3, …, m_k] is an optimal solution to the sub-problem [t_j, …, t_n], where t_j is the first town that is at least 10 miles to the right of t_{m_1}.
Proof by contradiction: suppose B is not an optimal solution to the sub-problem, i.e., there is a better solution B' to the sub-problem. Then A' = m_1 || B' gives a better solution than A = m_1 || B => A is not optimal => contradiction => B is optimal.
[Figure: A drawn as m_1 followed by B, and the (imaginary) A' as m_1 followed by B'.]
27
Implication of Claim 1 If we know the first town that needs to be chosen, we can reduce the problem to a smaller sub-problem –This is similar to dynamic programming –Optimal substructure
28
Formal argument (cont'd)
Claim 2: for the uniform-profit restaurant location problem, there is an optimal solution that chooses t_1.
Proof by contradiction: suppose that no optimal solution can be obtained by choosing t_1. Say the first town chosen by the optimal solution S is t_i, i > 1. Replacing t_i with t_1 (call the result S') does not violate the distance constraint, and the total profit remains the same => S' is also an optimal solution => contradiction. Therefore Claim 2 is valid.
29
Implication of Claim 2
We can simply choose the first town as part of the optimal solution.
– This is different from DP: decisions are made immediately.
By Claim 1, we then only need to repeat this strategy on the remaining sub-problem.
30
Greedy algorithm for restaurant location problem
select t_1
d = 0
for i = 2 to n
    d = d + dist(t_i, t_{i-1})
    if d >= min_dist
        select t_i
        d = 0
    end
end
[Slide: a trace of d on the example; d accumulates the distance since the last selected town and is reset to 0 whenever a new town is selected.]
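For completeness, a runnable Python version of the pseudocode above. This is a sketch; the function name greedy_restaurants and the mile-marker representation are my own assumptions.

```python
def greedy_restaurants(positions, min_dist=10):
    """Greedy selection for the uniform-profit restaurant location problem.

    positions: mile markers of towns t_1..t_n, sorted in increasing order.
    Returns the 1-based indices of the selected towns.
    """
    selected = [1]              # always select t_1 (justified by Claim 2)
    d = 0
    for i in range(2, len(positions) + 1):
        d += positions[i - 1] - positions[i - 2]   # add dist(t_i, t_{i-1}) to the running distance
        if d >= min_dist:
            selected.append(i)
            d = 0               # reset: d measures distance since the last selected town
    return selected
```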
31
Complexity Time: Θ(n) Memory: –Θ(n) to store the input –Θ(1) for greedy selection
32
Events scheduling problem
Objective: schedule the maximal number of events.
We could let v_i = 1 for all i and solve by DP, but that is overkill.
Greedy strategy: choose the first-finishing event that is compatible with the previous selection (events 1, 2, 4, 6, 8 for the earlier example). Simple idea: attend the event that leaves you with the most time once it finishes.
Why is this a valid strategy?
– Claim 1: optimal substructure.
– Claim 2: there is an optimal solution that chooses e_1. Proof by contradiction: suppose that no optimal solution contains e_1. Say the first event chosen is e_i => all other chosen events start after e_i finishes. Replacing e_i by e_1 results in another optimal solution (e_1 finishes earlier than e_i) => contradiction.
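A sketch of this earliest-finishing-time greedy in Python. Events are assumed sorted by finishing time; the strict comparison mirrors the f_j < s_i compatibility condition used in the DP slide, and the names are my own.

```python
def schedule_max_events(starts, finishes):
    """Greedy selection of a maximum number of pairwise compatible events.

    Events are sorted by finishing time: finishes[0] <= ... <= finishes[n-1].
    Returns the 1-based indices of the selected events.
    """
    selected = []
    last_finish = float("-inf")
    for i, (s, f) in enumerate(zip(starts, finishes), start=1):
        if s > last_finish:        # compatible with everything chosen so far
            selected.append(i)
            last_finish = f
    return selected
```

The first event e_1 is always selected, exactly as Claim 2 above argues it can be.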
33
Knapsack problem
Each item has a value and a weight. Objective: maximize value. Constraint: the knapsack has a weight limitation.
Three versions:
– 0-1 knapsack problem: take each item or leave it
– Fractional knapsack problem: items are divisible
– Unbounded knapsack problem: unlimited supplies of each item
Which one is easiest to solve? We can solve the fractional knapsack problem using a greedy algorithm.
34
Greedy algorithm for fractional knapsack problem
Compute the value/weight ratio for each item and sort the items by this ratio in decreasing order. Call the remaining item with the highest ratio the most valuable item (MVI).
Iteratively:
– If the weight limit cannot be reached by adding the whole MVI, select all of it.
– Otherwise, select the MVI partially, up to the weight limit.
35
Example
Weight limit: 10
item  Weight (LB)  Value ($)  $ / LB
1     2            2          1
2     4            3          0.75
3     3            3          1
4     5            6          1.2
5     2            4          2
6     6            9          1.5
36
Example
Weight limit: 10. Items sorted by $ / LB in decreasing order:
item  Weight (LB)  Value ($)  $ / LB
5     2            4          2
6     6            9          1.5
4     5            6          1.2
1     2            2          1
3     3            3          1
2     4            3          0.75
Take item 5: 2 LB used, $4.
Take item 6: 8 LB used, $13.
Take 2 LB of item 4: 10 LB used, $15.4.
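A minimal Python sketch of this greedy procedure, checked against the example above; the function name and the sorting step are my own choices.

```python
def fractional_knapsack(weights, values, W):
    """Greedy fractional knapsack. Returns the maximum achievable value."""
    # Sort items by value/weight ratio, highest first: the head of the list is the MVI.
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)
    total_value, remaining = 0.0, W
    for w, v in items:
        if remaining <= 0:
            break
        take = min(w, remaining)           # whole item if it fits, otherwise a fraction
        total_value += v * (take / w)
        remaining -= take
    return total_value

# The lecture's example, weight limit 10:
print(fractional_knapsack([2, 4, 3, 5, 2, 6], [2, 3, 3, 6, 4, 9], 10))  # 15.4 (up to rounding)
```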
37
Why is the greedy algorithm for the fractional knapsack problem valid?
Claim: the optimal solution must contain as much of the MVI as possible (either up to the weight limit or until the MVI is exhausted).
Proof by contradiction: suppose that the optimal solution does not use all of the available MVI, i.e., there are still w (w < W) pounds of MVI left while other items are chosen. We can replace w pounds of less valuable items by MVI. The total weight stays the same, but the value is higher than the "optimal" one => contradiction.
38
Elements of a greedy algorithm
1. Optimal substructure.
2. A locally optimal decision leads to a globally optimal solution.
For most optimization problems, a greedy algorithm will not guarantee an optimal solution, but it may give you a good starting point for other optimization techniques.
Starting next week, we'll study several problems in graph theory that can actually be solved by greedy algorithms.