Chapter 5 Fundamental Algorithm Design Techniques

Overview The Greedy Method (5.1) Divide and Conquer (5.2) Dynamic Programming (5.3)

Greedy Outline The Greedy Method Technique (§5.1) Fractional Knapsack Problem (§5.1.1) Task Scheduling (§5.1.2) Minimum Spanning Trees (§7.3) [future lecture]

A New Vending Machine? A new Coke machine in the lounge behind the ulab? Being efficient computer scientists, we want to return the smallest possible number of coins in change

The Greedy Method

The greedy method is a general algorithm design paradigm, built on the following elements:
  Configurations: different choices, collections, or values to find.
  Objective function: a score assigned to configurations, which we want to either maximize or minimize.
It works best when applied to problems with the greedy-choice property: a globally-optimal solution can always be found by a series of local improvements from a starting configuration.

Making Change
Problem: a dollar amount to reach and a collection of coin amounts to use to get there.
Configuration: a dollar amount yet to return to a customer plus the coins already returned.
Objective function: minimize the number of coins returned.
Greedy solution: always return the largest coin you can.
Example 1: coins are valued $.32, $.08, $.01. Has the greedy-choice property, since no amount over $.32 can be made with a minimum number of coins by omitting a $.32 coin (similarly for amounts over $.08, but under $.32).
Example 2: coins are valued $.30, $.20, $.05, $.01. Does not have the greedy-choice property, since $.40 is best made with two $.20's, but the greedy solution will pick three coins (which ones?).
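
A minimal sketch of the greedy rule just described, in Java (the class and method names are illustrative, not from the slides). It shows both behaviors: the coin system with the greedy-choice property gives an optimal answer, the other one does not.

import java.util.ArrayList;
import java.util.List;

public class GreedyChange {
    // denominations must be sorted in decreasing order
    public static List<Integer> makeChange(int amount, int[] denominations) {
        List<Integer> coins = new ArrayList<>();
        for (int d : denominations) {
            while (amount >= d) {      // always take the largest coin that still fits
                coins.add(d);
                amount -= d;
            }
        }
        return coins;                  // may be suboptimal if the greedy-choice property fails
    }

    public static void main(String[] args) {
        System.out.println(makeChange(40, new int[]{32, 8, 1}));      // [32, 8]
        System.out.println(makeChange(40, new int[]{30, 20, 5, 1}));  // [30, 5, 5]: 3 coins, but two $.20's would do
    }
}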

How to rob a bank…

The Fractional Knapsack Problem
Given: a set S of n items, with each item i having
  b_i - a positive benefit
  w_i - a positive weight
Goal: choose items with maximum total benefit but with weight at most W.
If we are allowed to take fractional amounts, then this is the fractional knapsack problem. In this case, we let x_i denote the amount we take of item i.
Objective: maximize Σ_{i∈S} b_i (x_i / w_i).
Constraint: Σ_{i∈S} x_i ≤ W.

Example
Given: a set S of n items, with each item i having a positive benefit b_i and a positive weight w_i. Goal: choose items with maximum total benefit but with weight at most W.
Items:      1      2      3      4      5
Weight:   4 ml   8 ml   2 ml   6 ml   1 ml
Benefit:   $12    $32    $40    $30    $50
Value:       3      4     20      5     50   ($ per ml)
Knapsack capacity: 10 ml.
Solution: 1 ml of item 5, 2 ml of item 3, 6 ml of item 4, 1 ml of item 2.

The Fractional Knapsack Algorithm
Greedy choice: keep taking the item with the highest value (benefit-to-weight ratio). Since the total benefit Σ_i b_i (x_i / w_i) = Σ_i v_i x_i, benefit is maximized by spending capacity on the highest-value items first.
Run time: O(n log n). Why?
Correctness: suppose there is a better solution. Then there is an item i with higher value than a chosen item j (that is, x_i < w_i, x_j > 0 and v_i > v_j). If we substitute some of i for j, we get a better solution. How much of i? min{w_i - x_i, x_j}. Thus, there is no better solution than the greedy one.

Algorithm fractionalKnapsack(S, W):
  Input: set S of items with benefit b_i and weight w_i; maximum weight W
  Output: amount x_i of each item i to maximize benefit with weight at most W
  for each item i in S do
    x_i ← 0
    v_i ← b_i / w_i          {value}
  w ← 0                      {total weight}
  while w < W do
    remove the item i with highest v_i
    x_i ← min{w_i, W - w}
    w ← w + min{w_i, W - w}
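
One possible Java rendering of the pseudocode above (names are illustrative; sorting once by value replaces the repeated "remove item with highest v_i"). Run on the example items it reproduces the solution shown two slides back.

import java.util.Arrays;

public class FractionalKnapsack {
    // Returns the amount taken of each item, maximizing benefit within capacity W.
    public static double[] fill(double[] benefit, double[] weight, double W) {
        int n = benefit.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        // Sort item indices by value (benefit per unit weight), highest first: O(n log n).
        Arrays.sort(order, (a, b) -> Double.compare(benefit[b] / weight[b], benefit[a] / weight[a]));

        double[] x = new double[n];
        double w = 0;                            // total weight taken so far
        for (int i : order) {
            if (w >= W) break;
            x[i] = Math.min(weight[i], W - w);   // take as much of the best remaining item as fits
            w += x[i];
        }
        return x;
    }

    public static void main(String[] args) {
        // Items from the example above: weights 4, 8, 2, 6, 1 ml; benefits $12, $32, $40, $30, $50; W = 10 ml.
        double[] x = fill(new double[]{12, 32, 40, 30, 50}, new double[]{4, 8, 2, 6, 1}, 10);
        System.out.println(Arrays.toString(x));  // [0.0, 1.0, 2.0, 6.0, 1.0]
    }
}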

Task Scheduling
Given: a set T of n tasks, each having a start time s_i and a finish time f_i (where s_i < f_i).
Goal: perform all the tasks using a minimum number of "machines."

Task Scheduling Algorithm
Greedy choice: consider tasks by their start time and use as few machines as possible with this order.
Run time: O(n log n). Why?
Correctness: suppose there is a better schedule that uses k - 1 machines while the algorithm uses k. Let i be the first task scheduled on machine k. Task i must conflict with k - 1 other tasks (one on each of the other machines at time s_i). But that means there is no non-conflicting schedule using k - 1 machines.

Algorithm taskSchedule(T):
  Input: set T of tasks with start time s_i and finish time f_i
  Output: non-conflicting schedule with minimum number of machines
  m ← 0                      {number of machines}
  while T is not empty do
    remove the task i with smallest s_i
    if there is a machine j with no conflicting task then
      schedule i on machine j
    else
      m ← m + 1
      schedule i on machine m
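
A small Java sketch of the same idea (illustrative names; a min-heap of machine finish times stands in for "is there a machine j for i"). A task starting exactly when another finishes is treated as non-conflicting, as the slides' s_i < f_i convention suggests.

import java.util.Arrays;
import java.util.PriorityQueue;

public class TaskScheduling {
    // Returns the minimum number of machines needed to run all tasks without conflicts.
    public static int minMachines(int[][] tasks) {
        // Consider tasks in order of start time: O(n log n).
        Arrays.sort(tasks, (a, b) -> Integer.compare(a[0], b[0]));
        // Min-heap of the finish times of the last task placed on each machine.
        PriorityQueue<Integer> machineFree = new PriorityQueue<>();
        for (int[] t : tasks) {
            if (!machineFree.isEmpty() && machineFree.peek() <= t[0]) {
                machineFree.poll();        // reuse the machine that frees up earliest
            }
            machineFree.add(t[1]);         // that machine is now busy until this task finishes
        }
        return machineFree.size();
    }

    public static void main(String[] args) {
        int[][] tasks = {{1,4},{1,3},{2,5},{3,7},{4,7},{6,9},{7,8}};  // the example on the next slide
        System.out.println(minMachines(tasks));                        // 3
    }
}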

Example
Given: a set T of n tasks, each having a start time s_i and a finish time f_i (where s_i < f_i):
[1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8]   (ordered by start)
Goal: perform all tasks on the minimum number of machines. Here the greedy schedule uses three machines.

Divide-and-Conquer (illustrated by the merge-sort recursion tree for the sequence 7 2 9 4)

Outline and Reading Divide-and-conquer paradigm (§5.2) Review Merge-sort (§4.1.1) Recurrence Equations (§5.2.1) Iterative substitution Recursion trees Guess-and-test The master method Integer Multiplication (§5.2.2)

Divide-and-Conquer
Divide-and-conquer is a general algorithm design paradigm:
  Divide: divide the input data S into two or more disjoint subsets S_1, S_2, …
  Recur: solve the subproblems recursively.
  Conquer: combine the solutions for S_1, S_2, …, into a solution for S.
The base cases for the recursion are subproblems of constant size. Analysis can be done using recurrence equations.

Binary Search
I have a number X between c_1 and c_2… Divide-and-conquer algorithm?
Analysis: each guess halves the range, so T(n) = T(n/2) + c. This is called a recurrence equation. How do we solve it?
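
A guessing-game sketch in Java (purely illustrative; the Oracle interface and the names are assumptions). Each iteration halves the candidate range, which is exactly where the recurrence T(n) = T(n/2) + c comes from.

public class GuessNumber {
    interface Oracle { boolean isLessThan(int guess); }   // true if the secret X < guess

    public static int find(int lo, int hi, Oracle oracle) {
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (oracle.isLessThan(mid + 1)) hi = mid;      // X <= mid: keep the lower half
            else lo = mid + 1;                             // X > mid: keep the upper half
        }
        return lo;   // O(log n) guesses, since each step halves the range
    }

    public static void main(String[] args) {
        final int secret = 42;
        System.out.println(find(1, 100, guess -> secret < guess));   // 42
    }
}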

Solving the Binary Search Recurrence
T(n) = T(n/2) + c
T(n/2) = T(n/4) + c
T(n/4) = T(n/8) + c
…
T(n) = T(n/2) + c
     = [T(n/4) + c] + c = T(n/2^2) + 2c
     = T(n/2^3) + 3c
     [telescope]
     = T(n/2^k) + kc
Done when k = lg n; then T(n/2^k) = T(1) is the base case, so T(n) = b + c lg n.

Merge Sort Recurrence Equation Analysis Recurrence Equation? We can analyze by finding a closed form solution. Method: expand and telescope

Solving the Merge Sort Recurrence
T(n) = 2T(n/2) + bn
T(n/2) = 2T(n/4) + bn/2
T(n/4) = 2T(n/8) + bn/4
…
T(n) = 2T(n/2) + bn
     = 2[2T(n/4) + bn/2] + bn = 2^2 T(n/2^2) + 2bn
     = 2^3 T(n/2^3) + 3bn
     [telescope]
     = 2^k T(n/2^k) + kbn
Done when k = lg n; then T(n/2^k) = T(1) is the base case, so T(n) = bn + bn lg n.

Min & Max
Given a set of n items, find the min and max. How many comparisons are needed?
Simple algorithm: n - 1 comparisons to find the min, n - 1 more for the max.
Recursive algorithm (a[] holds the items; i..j is the range under consideration):

void minmax(int i, int j, int& min0, int& max0) {
    if (j <= i+1) {                         // base case: one or two elements
        if (a[i] <= a[j]) { min0 = a[i]; max0 = a[j]; }
        else              { min0 = a[j]; max0 = a[i]; }
        return;                             // one comparison
    }
    int m = (i+j+1)/2;                      // split the range in half
    int min1, max1, min2, max2;
    minmax(i, m-1, min1, max1);
    minmax(m, j, min2, max2);
    min0 = min(min1, min2);                 // combine: two more comparisons
    max0 = max(max1, max2);
}

Recurrence: T(n) = 2T(n/2) + 2, T(2) = 1. Solution? 1.5n - 2.

Solving the Min & Max Recurrence
T(n) = 2T(n/2) + 2; T(2) = 1
T(n/2) = 2T(n/4) + 2
T(n/4) = 2T(n/8) + 2
…
T(n) = 2T(n/2) + 2
     = 2[2T(n/4) + 2] + 2 = 2^2 T(n/2^2) + 2^2 + 2
     = 2^3 T(n/2^3) + 2^3 + 2^2 + 2
     [telescope]
     = 2^k T(n/2^k) + (2^{k+1} - 2)
Done when k = (lg n) - 1; then T(n/2^k) = T(2) is the base case, so T(n) = n/2 + n - 2 = 1.5n - 2.

Hanoi runtime?

void hanoi(int n, char from, char to, char spare) {
    if (n > 0) {
        hanoi(n-1, from, spare, to);
        cout << from << " - " << to << endl;   // move one disk
        hanoi(n-1, spare, to, from);
    }
}

T(n) = 2T(n-1) + c, T(0) = b. Look it up: T(n) = O(2^n).

Master Method
Another way to solve recurrences: "look it up." The Master Theorem solves many of them:
If T(n) = c for n < d, and T(n) = aT(n/b) + f(n) for n ≥ d, then for small constants ε > 0, k ≥ 0, and δ < 1:
1. If f(n) is O(n^{log_b a - ε}), then T(n) is Θ(n^{log_b a}).
2. If f(n) is Θ(n^{log_b a} log^k n), then T(n) is Θ(n^{log_b a} log^{k+1} n).
3. If f(n) is Ω(n^{log_b a + ε}) and a f(n/b) ≤ δ f(n) for n ≥ d, then T(n) is Θ(f(n)).
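
For instance (applying the theorem to the recurrences already seen above): for merge sort, T(n) = 2T(n/2) + bn, so a = 2, b = 2, n^{log_b a} = n, and f(n) is Θ(n log^0 n); case 2 gives T(n) = Θ(n log n). For binary search, T(n) = T(n/2) + c, so a = 1, b = 2, n^{log_b a} = 1, and f(n) is Θ(1); case 2 gives T(n) = Θ(log n).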

An Important Use of Recurrence Relations, by Donald Knuth

More Divide and Conquer Examples pow(x,n) log(n,base) Point-in-convex-polygon Fractals

Creating Fractals Koch Snowflake Fractal Star Your own…

Integer Multiplication
Algorithm: multiply two n-bit integers I and J.
Divide step: split I and J into their high-order and low-order bits:
  I = I_h 2^{n/2} + I_l,   J = J_h 2^{n/2} + J_l
We can then define I*J by multiplying the parts and adding:
  I*J = I_h J_h 2^n + (I_h J_l + I_l J_h) 2^{n/2} + I_l J_l
So T(n) = 4T(n/2) + n, which implies T(n) is O(n^2). But that is no better than the algorithm we learned in grade school.

An Improved Integer Multiplication Algorithm
Algorithm: multiply two n-bit integers I and J.
Divide step: split I and J into their high-order and low-order bits, as before.
Observe that there is a different way to multiply the parts, using only three recursive multiplications:
  I*J = I_h J_h 2^n + [(I_h + I_l)(J_h + J_l) - I_h J_h - I_l J_l] 2^{n/2} + I_l J_l
So T(n) = 3T(n/2) + n, which implies T(n) is O(n^{log_2 3}) [by the master theorem]. Thus, T(n) is O(n^{1.585}).
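
A Karatsuba-style sketch of the three-multiplication idea in Java (illustrative only; java.math.BigInteger already multiplies efficiently on its own, so this is just to make the recursion concrete).

import java.math.BigInteger;

public class Karatsuba {
    public static BigInteger multiply(BigInteger I, BigInteger J) {
        int n = Math.max(I.bitLength(), J.bitLength());
        if (n <= 32) return I.multiply(J);        // base case: machine-sized numbers
        int half = n / 2;
        BigInteger Ih = I.shiftRight(half), Il = I.subtract(Ih.shiftLeft(half));  // high/low halves
        BigInteger Jh = J.shiftRight(half), Jl = J.subtract(Jh.shiftLeft(half));
        BigInteger p1 = multiply(Ih, Jh);                     // Ih*Jh
        BigInteger p2 = multiply(Il, Jl);                     // Il*Jl
        BigInteger p3 = multiply(Ih.add(Il), Jh.add(Jl));     // (Ih+Il)*(Jh+Jl)
        // I*J = p1*2^{2*half} + (p3 - p1 - p2)*2^{half} + p2: only 3 recursive multiplies
        return p1.shiftLeft(2 * half)
                 .add(p3.subtract(p1).subtract(p2).shiftLeft(half))
                 .add(p2);
    }

    public static void main(String[] args) {
        BigInteger a = new BigInteger("123456789012345678901234567890");
        BigInteger b = new BigInteger("987654321098765432109876543210");
        System.out.println(multiply(a, b).equals(a.multiply(b)));   // true
    }
}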

Practice Analyzing Recursive Functions

1. int fun1(int n) {
       if (n == 1) return 50;
       return fun1(n/2) * fun1(n/2) + 97;
   }

2. int fun2(int n) {
       if (n == 1) return 1;
       int sum = 0;
       for (int i = 1; i <= n; i++)
           sum += fun2(n-1);
       return sum;
   }
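
One possible reading, for reference (not stated on the slide): fun1 satisfies T(n) = 2T(n/2) + c, which telescopes to O(n); fun2 satisfies T(n) = n·T(n-1) + cn, which grows at least as fast as n!, i.e., exponentially.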

Dynamic Programming

Outline and Reading Fibonacci Making Change Matrix Chain-Product (§5.3.1) The General Technique (§5.3.2) 0-1 Knapsack Problem (§5.3.3)

Opinions?

import java.math.BigInteger;

public class Fib1 {
    public static BigInteger fibonacci(long n) {
        if (n <= 2) return new BigInteger("1");
        return fibonacci(n-1).add(fibonacci(n-2));   // recomputes the same subproblems over and over
    }
    public static void main(String args[]) {
        long n = Integer.parseInt(args[0]);
        System.out.println("Fib(" + n + ") = " + fibonacci(n).toString());
    }
}

Bottom up…

import java.math.BigInteger;

public class Fib2 {
    static BigInteger[] fibs;
    public static BigInteger fibonacci(int n) {
        fibs[1] = new BigInteger("1");
        fibs[2] = new BigInteger("1");
        for (int i = 3; i <= n; i++)
            fibs[i] = fibs[i-1].add(fibs[i-2]);   // each value computed once, from smaller ones
        return fibs[n];
    }
    public static void main(String args[]) {
        int n = Integer.parseInt(args[0]);
        fibs = new BigInteger[n+1];
        System.out.println("Fib(" + n + ") = " + fibonacci(n).toString());
    }
}

Memoize…

import java.math.BigInteger;

public class Fib3 {
    static BigInteger[] fibs;
    public static BigInteger fibonacci(int n) {
        if (fibs[n] != null) return fibs[n];             // already computed: reuse the stored answer
        fibs[n] = fibonacci(n-1).add(fibonacci(n-2));    // compute once, then remember it
        return fibs[n];
    }
    public static void main(String args[]) {
        int n = Integer.parseInt(args[0]);
        fibs = new BigInteger[n+1];
        fibs[1] = new BigInteger("1");
        fibs[2] = new BigInteger("1");
        System.out.println("Fib(" + n + ") = " + fibonacci(n));
    }
}

Dynamic Programming Applies to problems that can be solved recursively, but the subproblems overlap (Typically optimization problems) Method: Write definition of value you want to compute Express solution in terms of sub-problem solutions Compute bottom-up or “memoize”

Making Change
Being good Computer Scientists, we want to have a general solution for making change in our vending machine. Suppose that our coin values were 40, 30, 5, and 1 cent. How would we make change using the smallest number of coins?

Making Change – DP solution
Suppose we have coin values c_1, c_2, …, c_k and we want to make change for value V using the smallest number of coins. What sub-problem solutions would give us a solution to the whole problem?
Define NC(V) = the smallest number of coins that will add up to V. Base case? Run time? Some kind of exponential, if you're not careful… How can we implement this efficiently?

Dynamic Programming: Two methods for efficiency
Build a table bottom-up: compute NC(1), NC(2), …, NC(V), storing sub-problem results in a table as you go up.
Memoize: compute NC(V) recursively, but store new sub-problem results when you first compute them.
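
A bottom-up sketch in Java of the NC table just described (illustrative names; the coin values are the ones from the vending-machine slide above).

public class DPChange {
    // nc[v] = smallest number of coins adding up to v, or Integer.MAX_VALUE if v cannot be made.
    public static int minCoins(int V, int[] coins) {
        int[] nc = new int[V + 1];
        nc[0] = 0;                                   // base case: zero coins make zero cents
        for (int v = 1; v <= V; v++) {
            nc[v] = Integer.MAX_VALUE;
            for (int c : coins) {
                if (c <= v && nc[v - c] != Integer.MAX_VALUE) {
                    nc[v] = Math.min(nc[v], nc[v - c] + 1);   // use coin c last
                }
            }
        }
        return nc[V];
    }

    public static void main(String[] args) {
        int[] coins = {40, 30, 5, 1};
        System.out.println(minCoins(60, coins));   // 2 (30 + 30); greedy would return 5 coins (40 + 5 + 5 + 5 + 5)
    }
}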

Matrix Chain-Products
Dynamic programming is a general algorithm design paradigm. Rather than give the general structure, let us first give a motivating example: matrix chain-products.
Review: matrix multiplication. C = A*B, where A is d × e and B is e × f; computing C takes O(d·e·f) time, since each of the d·f entries C[i,j] is an inner product of length e.

Matrix Chain-Products
Matrix chain-product: compute A = A_0 * A_1 * … * A_{n-1}, where A_i is d_i × d_{i+1}.
Problem: how to parenthesize?
Example: B is 3 × 100, C is 100 × 5, D is 5 × 5.
(B*C)*D takes 1500 + 75 = 1575 ops; B*(C*D) takes 2500 + 1500 = 4000 ops.

An Enumeration Approach
Matrix chain-product algorithm: try all possible ways to parenthesize A = A_0 * A_1 * … * A_{n-1}, calculate the number of ops for each one, and pick the one that is best.
Running time: the number of parenthesizations is equal to the number of binary trees with n nodes. This is exponential! It is called the Catalan number, and it is almost 4^n. This is a terrible algorithm!

A Greedy Approach
Idea #1: repeatedly select the product that uses (up) the most operations.
Counter-example: A is 10 × 5, B is 5 × 10, C is 10 × 5, D is 5 × 10.
Greedy idea #1 gives (A*B)*(C*D), which takes 500 + 500 + 1000 = 2000 ops; A*((B*C)*D) takes 250 + 250 + 500 = 1000 ops.

Another Greedy Approach
Idea #2: repeatedly select the product that uses the fewest operations.
Counter-example: A is 101 × 11, B is 11 × 9, C is 9 × 100, D is 100 × 99.
Greedy idea #2 gives A*((B*C)*D), which takes 9900 + 108900 + 109989 = 228789 ops; (A*B)*(C*D) takes 9999 + 89100 + 89991 = 189090 ops. The greedy approach is not giving us the optimal value.

A “Recursive” Approach
Define subproblems: find the best parenthesization of A_i * A_{i+1} * … * A_j, and let N_{i,j} denote the number of operations done by this subproblem. The optimal solution for the whole problem is N_{0,n-1}.
Subproblem optimality: the optimal solution can be defined in terms of optimal subproblems. There has to be a final multiplication (the root of the expression tree) for the optimal solution. Say the final multiply is at index i: (A_0 * … * A_i) * (A_{i+1} * … * A_{n-1}). Then the optimal solution N_{0,n-1} is the sum of two optimal subproblems, N_{0,i} and N_{i+1,n-1}, plus the time for the last multiply. If the global optimum did not have these optimal subproblems, we could define an even better “optimal” solution.

A Characterizing Equation
The global optimum has to be defined in terms of optimal subproblems, depending on where the final multiply is. Let us consider all possible places for that final multiply. Recall that A_i is a d_i × d_{i+1} dimensional matrix. So, a characterizing equation for N_{i,j} is the following:
  N_{i,i} = 0
  N_{i,j} = min over i ≤ k < j of { N_{i,k} + N_{k+1,j} + d_i · d_{k+1} · d_{j+1} }   for i < j
Note that the subproblems are not independent--the subproblems overlap.

A Dynamic Programming Algorithm
Since subproblems overlap, we don't use recursion. Instead, we construct optimal subproblems "bottom-up." The N_{i,i}'s are easy, so start with them; then do the length 2, 3, … subproblems, and so on. Running time: O(n^3).

Algorithm matrixChain(S):
  Input: sequence S of n matrices to be multiplied
  Output: number of operations in an optimal parenthesization of S
  for i ← 0 to n-1 do
    N_{i,i} ← 0
  for b ← 1 to n-1 do
    for i ← 0 to n-b-1 do
      j ← i+b
      N_{i,j} ← +infinity
      for k ← i to j-1 do
        N_{i,j} ← min{ N_{i,j}, N_{i,k} + N_{k+1,j} + d_i·d_{k+1}·d_{j+1} }
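
A small Java rendering of the pseudocode above, as a concrete check (names are illustrative; the array d is assumed to hold the dimensions d_0 … d_n, so matrix A_i is d[i] × d[i+1]).

public class MatrixChain {
    public static long minOps(int[] d) {
        int n = d.length - 1;                       // number of matrices
        long[][] N = new long[n][n];                // N[i][j] = min ops for A_i * ... * A_j (diagonal starts at 0)
        for (int b = 1; b <= n - 1; b++) {          // b = subproblem length minus one
            for (int i = 0; i <= n - b - 1; i++) {
                int j = i + b;
                N[i][j] = Long.MAX_VALUE;
                for (int k = i; k <= j - 1; k++) {  // try every place for the final multiply
                    N[i][j] = Math.min(N[i][j],
                        N[i][k] + N[k + 1][j] + (long) d[i] * d[k + 1] * d[j + 1]);
                }
            }
        }
        return N[0][n - 1];
    }

    public static void main(String[] args) {
        // B (3 x 100), C (100 x 5), D (5 x 5) from the earlier example: the optimum (B*C)*D costs 1575 ops.
        System.out.println(minOps(new int[]{3, 100, 5, 5}));   // 1575
    }
}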

A Dynamic Programming Algorithm Visualization
The bottom-up construction fills in the N array by diagonals. N_{i,j} gets values from previous entries in the i-th row and j-th column. Filling in each entry in the N table takes O(n) time, so the total run time is O(n^3). The actual parenthesization can be recovered by remembering the best "k" for each N entry; the answer is N_{0,n-1}.

The General Dynamic Programming Technique Applies to a problem that at first seems to require a lot of time (possibly exponential), provided we have: Simple subproblems: the subproblems can be defined in terms of a few variables, such as j, k, l, m, and so on. Subproblem optimality: the global optimum value can be defined in terms of optimal subproblems Subproblem overlap: the subproblems are not independent, but instead they overlap (hence, should be constructed bottom-up).

The 0/1 Knapsack Problem
Given: a set S of n items, with each item i having
  b_i - a positive benefit
  w_i - a positive weight
Goal: choose items with maximum total benefit but with weight at most W.
If we are not allowed to take fractional amounts, then this is the 0/1 knapsack problem. In this case, we let T denote the set of items we take.
Objective: maximize Σ_{i∈T} b_i.
Constraint: Σ_{i∈T} w_i ≤ W.

Example
Given: a set S of n items, with each item i having a positive benefit b_i and a positive weight w_i. Goal: choose items with maximum total benefit but with weight at most W.
Items:      1      2      3      4      5
Weight:   4 in   2 in   2 in   6 in   2 in
Benefit:   $20     $3     $6    $25    $80
Knapsack capacity: 9 in.
Solution: item 5 (2 in), item 3 (2 in), item 1 (4 in).

A 0/1 Knapsack Algorithm, First Attempt
S_k: set of items numbered 1 to k. Define B[k] = best selection from S_k.
Problem: this definition does not have subproblem optimality. Consider S = {(3,2), (5,4), (8,5), (4,3), (10,9)} (weight, benefit) pairs: the best selection for S_4 is not contained in the best selection for S_5.

A 0/1 Knapsack Algorithm, Second Attempt
S_k: set of items numbered 1 to k. Define B[k,w] = best selection from S_k with weight exactly equal to w.
Good news: this does have subproblem optimality. That is, the best subset of S_k with weight exactly w is either the best subset of S_{k-1} with weight w, or the best subset of S_{k-1} with weight w - w_k plus item k.

Example: items with (weight, value) pairs (4,6), (3,8), (5,10), (8,13), (7,15). [The slide fills in the table B row by row, for S_1 through S_5.] How can we tell which objects were used for each box? Runtime?

The 0/1 Knapsack Algorithm
Recall the definition of B[k,w]:
  B[k,w] = B[k-1,w]                                  if w_k > w
  B[k,w] = max{ B[k-1,w], B[k-1,w-w_k] + b_k }       otherwise
Since B[k,w] is defined in terms of B[k-1,*], we can reuse the same array.
Running time: O(nW). This is not a polynomial-time algorithm if W is large; it is a pseudo-polynomial time algorithm.

Algorithm 01Knapsack(S, W):
  Input: set S of n items with benefit b_i and weight w_i; maximum weight W
  Output: benefit of the best subset with weight at most W
  for w ← 0 to W do
    B[w] ← 0
  for k ← 1 to n do
    for w ← W downto w_k do
      if B[w - w_k] + b_k > B[w] then
        B[w] ← B[w - w_k] + b_k
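
A Java rendering of the one-dimensional pseudocode above (illustrative names; the capacity W = 20 in the usage example is an assumption, since the earlier table slide does not state one).

public class ZeroOneKnapsack {
    // B[w] ends up holding the best benefit achievable with weight at most w.
    public static int maxBenefit(int[] weight, int[] benefit, int W) {
        int[] B = new int[W + 1];
        for (int k = 0; k < weight.length; k++) {
            // Go downward so each item is used at most once (0/1, not unbounded).
            for (int w = W; w >= weight[k]; w--) {
                B[w] = Math.max(B[w], B[w - weight[k]] + benefit[k]);
            }
        }
        return B[W];   // O(nW) time: pseudo-polynomial in the input size
    }

    public static void main(String[] args) {
        // Items (weight, value) from the table above: (4,6) (3,8) (5,10) (8,13) (7,15).
        System.out.println(maxBenefit(new int[]{4, 3, 5, 8, 7},
                                      new int[]{6, 8, 10, 13, 15}, 20));   // 39
    }
}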