Lecture 7 Paradigm #5 Greedy Algorithms


Ref. CLRS, Chap. 16

Example 1 (Making change). Suppose you buy something costing less than $5, and you get your change in Canadian coins. How can you minimize the number of coins returned to you?

Formally: given x cents, where 0 ≤ x < 500, we wish to minimize t + l + q + d + n + p subject to x = 200t + 100l + 25q + 10d + 5n + p, where t, l, q, d, n, p are the (nonnegative) numbers of toonies, loonies, quarters, dimes, nickels, and pennies.

Greedy algorithm: choose as many toonies as possible (the largest t such that 200t ≤ x); then, for the remainder x - 200t, choose as many loonies as possible; and so on down the denominations.

Coin changing

Example: make change for $4.87. We take, in turn, t = 2, l = 0, q = 3, d = 1, n = 0, p = 2.

Theorem. In the Canadian system of coinage, the greedy algorithm always produces a solution in which the total number of coins is minimized. I will give proof ideas in class, and you will prove this formally in Assignment 3.

Note: this theorem does not necessarily hold for arbitrary coin systems. For example, consider a system with denominations (12, 5, 1) instead of (200, 100, 25, 10, 5, 1). The greedy algorithm provides the solution 15 = 1*12 + 0*5 + 3*1, using a total of four coins, but there is a better solution: 15 = 3*5, using only three coins.

Moral of the story: the greedy algorithm doesn't always give the best solution.
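As a sanity check, the greedy procedure can be sketched in a few lines of Python (a minimal illustration, not part of the original lecture; the denomination lists are the Canadian system and the (12, 5, 1) counterexample from above):

```python
# Greedy change-making for an arbitrary denomination list,
# given in decreasing order of value (amounts in cents).
def greedy_change(denoms, x):
    """Return the list of coin counts, one per denomination."""
    counts = []
    for d in denoms:
        counts.append(x // d)  # take as many of this denomination as possible
        x %= d                 # continue with the remainder
    return counts

CANADIAN = [200, 100, 25, 10, 5, 1]  # toonie, loonie, quarter, dime, nickel, penny

print(greedy_change(CANADIAN, 487))    # [2, 0, 3, 1, 0, 2] -- the $4.87 example
print(greedy_change([12, 5, 1], 15))   # [1, 0, 3] -- four coins, but 3*5 uses three
```

Running greedy on the (12, 5, 1) system shows exactly the failure described above: the greedy answer uses four coins where three suffice.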

How can you tell whether the greedy algorithm gives the optimal solution for every amount in a given coin system? It turns out that for the change-making problem, this question can be decided efficiently. See D. Pearson, "A polynomial-time algorithm for the change-making problem", Operations Research Letters 33(3) (2005), 231-234.

Scheduling competing activities (CLRS, Section 16.1)

We have n "activities". Activity i has a start time s(i) and a finish time f(i); activity i occupies the half-open interval [ s(i), f(i) ). We say activities i and j are compatible if s(i) ≥ f(j) OR s(j) ≥ f(i), i.e., their intervals do not overlap.

The activity scheduling problem: produce a schedule containing the maximum number of mutually compatible activities. The problem is also called "activity selection" or "interval scheduling".

Attempts

This problem suggests many possible "greedy" strategies, e.g.:
- select the activity that starts earliest and is compatible with the previous choices;
- select the activity of shortest duration among those remaining;
- select the activity with the smallest number of conflicts.

None of these is guaranteed to produce an optimal schedule. Can you find counterexamples?

A greedy algorithm that works

First, sort the activities by finish time. Then, starting with the first, repeatedly choose the next activity that is compatible with the ones already chosen. For example, suppose the activity numbers and start and finish times are as follows:

  act. #   start   finish
     1       1       4
     2       3       5
     3       0       6
     4       5       7
     5       3       8
     6       5       9
     7       6      10
     8       8      11
     9       8      12
    10       2      13
    11      12      14

The greedy algorithm first chooses 1; then 2 and 3 are out, so 4 is chosen; then 5, 6, 7 are not compatible with {1, 4}, so 8 is chosen; then 9 and 10 are no good, and 11 is chosen. Final schedule: 1, 4, 8, 11.
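The algorithm just described can be sketched in Python (an illustrative sketch, not from the lecture; the activity list is the table above, and the returned indices are the 1-based activity numbers):

```python
# Greedy activity selection: sort by finish time, then repeatedly take
# the next activity whose start is no earlier than the last chosen finish.
def select_activities(activities):
    """activities: list of (start, finish) pairs. Returns 1-based indices chosen."""
    order = sorted(range(len(activities)), key=lambda i: activities[i][1])
    chosen, last_finish = [], float("-inf")
    for i in order:
        start, finish = activities[i]
        if start >= last_finish:   # compatible with everything chosen so far
            chosen.append(i + 1)
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9),
        (6, 10), (8, 11), (8, 12), (2, 13), (12, 14)]
print(select_activities(acts))  # [1, 4, 8, 11]
```

With the sort done up front, the selection pass is a single linear scan, so the whole algorithm runs in O(n log n) time.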

Theorem. The greedy algorithm always produces a schedule with the maximum number of compatible activities.

Proof. Suppose the greedy schedule is, as above, (1, 4, 8, 11), and suppose there exists a schedule with more activities, say (a1, a2, a3, a4, a5). We show how to modify this (supposed) longer schedule into one that coincides with the greedy one on its first four activities, while retaining the same number of activities.

I claim (1, a2, a3, a4, a5) is a valid schedule. For a1 either equals 1 or finishes no earlier than 1 (since 1 has the earliest finish time). Since the activities a2, a3, a4, a5 all begin after the finish time of a1, they must begin after the finish time of 1.

Next, I claim (1, 4, a3, a4, a5) is a valid schedule. For a2 must finish no earlier than 4 does (since 4 was chosen greedily), so 4 is compatible with a3, a4, a5.

Continuing in this fashion, we see that (1, 4, 8, 11, a5) is a valid schedule. But this is impossible: if a5 were compatible with the others, the greedy algorithm would have chosen it after activity 11. This completes the proof.

Example 3: The knapsack problem

In the knapsack problem, we are given a collection of items numbered from 1 to n. Each item has a weight in kilos, say w_i, and a value in dollars, v_i. For simplicity, let's assume all these quantities are integers. Your knapsack can hold a maximum weight, say W. Which items should we choose to maximize the value the knapsack holds? (The knapsack problem could also be called the shoplifter's problem!)

Knapsack example

Suppose W = 100 and there are 6 items with values and weights as follows:

  item #          1      2      3      4      5      6
  value          80     70     85     40     75     65
  weight         25     40     70     15     20      5
  value/weight  3.20   1.75   1.21   2.67   3.75  13.00

You can think of many possible greedy strategies:
1. choose the most valuable item first
2. choose the heaviest item first
3. choose the lightest item first
4. choose the item with the highest ratio of value to weight first

None of these works.

Greedy strategies do not work for this knapsack problem

The best solution is to take items 1, 2, 5, and 6, for a total weight of 90 and a total value of 290. However:

Strategy 1 (most valuable first) chooses 3, then 1, then 6, for a total weight of 100 and a total value of 230.
Strategy 2 (heaviest first) chooses 3, then 1, then 6, the same as strategy 1.
Strategy 3 (lightest first) chooses 6, then 4, then 5, then 1, for a total weight of 65 and a total value of 260.
Strategy 4 (highest value/weight first) chooses 6, 5, 1, 4, the same set as strategy 3, chosen in a different order.
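Since the instance is small, the claimed optimum can be verified mechanically. The brute-force check below (added here for illustration; it is not part of the lecture, and is exponential in n, so it only works for small instances) enumerates all subsets and confirms that {1, 2, 5, 6} with value 290 is best:

```python
from itertools import combinations

values  = [80, 70, 85, 40, 75, 65]
weights = [25, 40, 70, 15, 20, 5]
W = 100

def best_subset():
    """Exhaustively find the most valuable subset of items fitting in W."""
    best_value, best_items = 0, ()
    n = len(values)
    for r in range(n + 1):
        for combo in combinations(range(n), r):
            if sum(weights[i] for i in combo) <= W:
                v = sum(values[i] for i in combo)
                if v > best_value:
                    best_value, best_items = v, combo
    return best_value, [i + 1 for i in best_items]

print(best_subset())  # (290, [1, 2, 5, 6])
```

Every greedy strategy above falls short of this 290, which is exactly why the 0/1 version of the problem needs a different technique (dynamic programming, covered earlier in the course).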

Now let's change the problem so that we are allowed fractional amounts of each item. Instead of representing a single physical object, each item now represents a sack of some substance that can be arbitrarily divided, such as sugar, salt, or gold dust. This is called the fractional knapsack problem.

Now there is a greedy strategy that works. Not surprisingly, it considers the items in decreasing order of the ratio value/weight, taking as much of each item, in order, as will fit in the knapsack. For the example above, we would take all of item 6, all of item 5, all of item 1, and all of item 4. At this point we have used 65 kilos and 35 remain. Item 2 has the next-highest ratio, and we take 35/40 of it (since it weighs 40). This contributes a value of 70 * 35/40 = 61.25, so our final load weighs 100 kilos and is worth 321.25.

Here is the algorithm:

greedy(n, v[1..n], w[1..n], W)
/* n is the number of items; v[i] and w[i] are the value and weight of item i;
   W is the knapsack capacity */
    sort the items in decreasing order of the ratio v[i]/w[i];
    free := W;
    sum := 0;
    for i := 1 to n do
        x := min(w[i], free);
        sum := sum + v[i]*(x/w[i]);
        free := free - x;
    return(sum);
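A direct Python translation of this pseudocode (an illustrative sketch; the variable names mirror the pseudocode, and the test data is the lecture's example):

```python
# Fractional knapsack: take items in decreasing value/weight order,
# splitting the last item taken if it does not fit entirely.
def fractional_knapsack(v, w, W):
    items = sorted(zip(v, w), key=lambda vw: vw[0] / vw[1], reverse=True)
    free, total = W, 0.0
    for value, weight in items:
        x = min(weight, free)          # how much of this item fits
        total += value * (x / weight)  # proportional share of its value
        free -= x
    return total

# The lecture's example: W = 100 yields total value 321.25.
print(fractional_knapsack([80, 70, 85, 40, 75, 65],
                          [25, 40, 70, 15, 20, 5], 100))  # 321.25
```

Note the sort dominates the running time, so the algorithm takes O(n log n) overall.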

Why does it work?

Suppose, for a contradiction, that some solution S is better than our greedy solution G. Then S must agree with G for some number of items (perhaps 0) and then differ. To make life simpler, let's assume the ratios v[i]/w[i] are all distinct, and recall that the items have been sorted in decreasing order of value/weight.

Consider the first place where S differs from G: say G chooses a certain amount of item k and S chooses a different amount. If S chose more of item k than G, then choosing more would have been possible without exceeding W, so G would have done that too. So S must choose less of item k: say G chooses g kilos of item k and S chooses s < g kilos.

Then we can exchange g - s kilos of later items chosen by S for g - s kilos of item k. Doing so can only increase the total value of S, since v[k]/w[k] > v[l]/w[l] for l > k. So S was not optimal after all, a contradiction.