1 Dynamic Programming

2
- Decomposes a problem into a series of subproblems
- Builds up correct solutions to larger and larger subproblems
- Examples of dynamic programming:
  - Coins
  - Binomial coefficients
  - Scheduling
  - Knapsack problem
  - Viterbi algorithm
  - Sequence alignment

3
- DP is a method for solving certain kinds of problems
- DP can be applied when the solution of a problem includes solutions to subproblems
- We need to find a recursive formula for the solution
- We can solve subproblems recursively, starting from the trivial case, and save their solutions in memory
- In the end we'll get the solution to the whole problem

4
- Simple subproblems: we should be able to break the original problem into smaller subproblems that have the same structure
- Optimal substructure: the solution to the problem must be a composition of subproblem solutions
- Subproblem overlap: optimal solutions to unrelated subproblems can contain subproblems in common

5
- How can we find the minimum number of coins to make up a specific amount?
- We could use the greedy method to find the minimum number of US coins to make any amount
- At each step, just choose the largest coin that does not overshoot the desired amount: 31¢ = 25¢ + 5¢ + 1¢
- What if the nickel (5¢) did not exist?
- The greedy method would produce 7 coins (25+1+1+1+1+1+1)
- The optimal solution is actually 4 coins (10+10+10+1)
- How can we find the minimum number of coins for any given coin set?
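A minimal sketch of the greedy method described above (the function name is my own; it assumes positive integer amounts and a 1¢ coin so the loop always terminates):

```python
def greedy_coins(amount, coins):
    """Count coins chosen greedily: always take the largest coin that fits."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c  # take as many of this coin as possible
        amount %= c           # remainder still to be made
    return count

print(greedy_coins(31, [25, 10, 5, 1]))  # → 3 (25 + 5 + 1)
print(greedy_coins(31, [25, 10, 1]))     # no nickel → 7 (25 + 1+1+1+1+1+1)
```

As the slide notes, the greedy answer of 7 without the nickel is not optimal; 10+10+10+1 uses only 4 coins.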

6
- For the following examples, we will assume coins in the denominations 1¢, 5¢, 10¢, 21¢, and 25¢
- We'll use 63¢ as our goal

7
- To make K cents:
  - If there is a K-cent coin, then that one coin is the minimum
  - Otherwise, for each value i < K:
    - Find the minimum number of coins needed to make i cents
    - Find the minimum number of coins needed to make K - i cents
    - Choose the i that minimizes this sum
- This algorithm can be viewed as divide-and-conquer, or as brute force
- This solution is very recursive and requires exponential work
- It is infeasible to solve for 63¢
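The pair-splitting recursion above can be sketched directly (the function name is illustrative; this is only safe for tiny amounts, since the work grows exponentially):

```python
def min_coins_split(k, coins):
    """Naive recursion: fewest coins to make k cents from the given coin set."""
    if k in coins:
        return 1  # a single coin covers the amount exactly
    # Otherwise try every split of k into i cents and k - i cents
    return min(min_coins_split(i, coins) + min_coins_split(k - i, coins)
               for i in range(1, k))

print(min_coins_split(6, {1, 5, 10, 21, 25}))  # → 2 (5 + 1)
```

Calling this for 63¢ is exactly the infeasible case the slide warns about.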

8
- We can reduce the problem recursively by choosing the first coin, and solving for the amount that is left
- For 63¢:
  - One 1¢ coin plus the best solution for 62¢
  - One 5¢ coin plus the best solution for 58¢
  - One 10¢ coin plus the best solution for 53¢
  - One 21¢ coin plus the best solution for 42¢
  - One 25¢ coin plus the best solution for 38¢
- Choose the best solution from among the 5 given above
- Instead of solving 62 recursive problems, we solve 5
- This is still a very expensive algorithm

9
- Idea: solve first for one cent, then two cents, then three cents, etc., up to the desired amount
- Save each answer in an array!
- For each new amount N, compute all the possible pairs of previous answers which sum to N
- For example, to find the solution for 13¢:
  - First, solve for all of 1¢, 2¢, 3¢, ..., 12¢
  - Next, choose the best solution among:
    - Solution for 1¢ + solution for 12¢
    - Solution for 2¢ + solution for 11¢
    - Solution for 3¢ + solution for 10¢
    - Solution for 4¢ + solution for 9¢
    - Solution for 5¢ + solution for 8¢
    - Solution for 6¢ + solution for 7¢

10
- Suppose the coins are 1¢, 3¢, and 4¢
- There's only one way to make 1¢ (one coin)
- To make 2¢, try 1¢ + 1¢ (one coin + one coin = 2 coins)
- To make 3¢, just use the 3¢ coin (one coin)
- To make 4¢, just use the 4¢ coin (one coin)
- To make 5¢, try:
  - 1¢ + 4¢ (1 coin + 1 coin = 2 coins)
  - 2¢ + 3¢ (2 coins + 1 coin = 3 coins)
  - The first solution is better, so the best solution is 2 coins
- To make 6¢, try:
  - 1¢ + 5¢ (1 coin + 2 coins = 3 coins)
  - 2¢ + 4¢ (2 coins + 1 coin = 3 coins)
  - 3¢ + 3¢ (1 coin + 1 coin = 2 coins), the best solution
- Etc.
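The bottom-up idea can be sketched with the per-coin formulation (the O(N*K) algorithm the slides analyze: for each amount, try extending each coin denomination rather than all pairs; the function name is my own):

```python
def min_coins(amount, coins):
    """Bottom-up DP: best[n] = fewest coins that make n cents."""
    INF = float('inf')
    best = [0] + [INF] * amount
    for n in range(1, amount + 1):
        for c in coins:
            if c <= n:
                # One coin c plus the saved best answer for the remainder
                best[n] = min(best[n], best[n - c] + 1)
    return best[amount]

print(min_coins(6, [1, 3, 4]))            # → 2  (3 + 3)
print(min_coins(63, [1, 5, 10, 21, 25]))  # → 3  (21 + 21 + 21)
```

Note that 63¢, infeasible for the naive recursion, is answered instantly once subproblem answers are saved.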

11
- The first algorithm is recursive, with a branching factor of up to 62
- The average branching factor is possibly somewhere around half of that (31)
- The algorithm takes exponential time, with a large base
- The second algorithm is much better: it has a branching factor of 5
- This is still exponential time, with base 5
- The dynamic programming algorithm is O(N*K), where N is the desired amount and K is the number of different kinds of coins

12
- (x + y)^2 = x^2 + 2xy + y^2; coefficients are 1, 2, 1
- (x + y)^3 = x^3 + 3x^2 y + 3xy^2 + y^3; coefficients are 1, 3, 3, 1
- (x + y)^4 = x^4 + 4x^3 y + 6x^2 y^2 + 4xy^3 + y^4; coefficients are 1, 4, 6, 4, 1
- (x + y)^5 = x^5 + 5x^4 y + 10x^3 y^2 + 10x^2 y^3 + 5xy^4 + y^5; coefficients are 1, 5, 10, 10, 5, 1
- The n+1 coefficients of (x + y)^n can be computed according to the formula C(n, k) = n! / (k! * (n - k)!)
- The repeated computation of all the factorials gets to be expensive
- We can use dynamic programming to save intermediate results as we go

13

   n | C(n,0) C(n,1) C(n,2) C(n,3) C(n,4) C(n,5) C(n,6)
   0 |    1
   1 |    1      1
   2 |    1      2      1
   3 |    1      3      3      1
   4 |    1      4      6      4      1
   5 |    1      5     10     10      5      1
   6 |    1      6     15     20     15      6      1

- Each row depends only on the preceding row
- Only linear space and quadratic time are needed
- This table is known as Pascal's Triangle
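The row-by-row computation can be sketched as follows, keeping only a single row in memory (linear space; the function name is my own):

```python
def pascal_row(n):
    """Row n of Pascal's triangle: [C(n,0), C(n,1), ..., C(n,n)]."""
    row = [1]
    for _ in range(n):
        # Each interior entry is the sum of the two entries above it
        row = [1] + [row[k] + row[k + 1] for k in range(len(row) - 1)] + [1]
    return row

print(pascal_row(6))  # → [1, 6, 15, 20, 15, 6, 1]
```

No factorials are ever computed; each row is built from the previous one in O(n) additions, for quadratic time overall.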

14
- Dynamic programming is a technique for finding an optimal solution
- The principle of optimality applies if the optimal solution to a problem always contains optimal solutions to all subproblems
- Example: consider the problem of making N¢ with the fewest number of coins
  - Either there is an N¢ coin, or
  - The set of coins making up an optimal solution for N¢ can be divided into two nonempty subsets, n1¢ and n2¢
  - If either subset, n1¢ or n2¢, could be made with fewer coins, then clearly N¢ could be made with fewer coins

15
- The principle of optimality holds if every optimal solution to a problem contains optimal solutions to all subproblems
- The principle of optimality does not say that if you have optimal solutions to all subproblems, then you can combine them to get an optimal solution
- Example: in US coinage,
  - The optimal solution for 7¢ is 5¢ + 1¢ + 1¢, and
  - The optimal solution for 6¢ is 5¢ + 1¢, but
  - The optimal solution for 13¢ is not 5¢ + 1¢ + 1¢ + 5¢ + 1¢
- But there is some way of dividing 13¢ into subsets with optimal solutions (say, 11¢ + 2¢) that will give an optimal solution for 13¢
- Hence, the principle of optimality holds for this problem

16
- Dynamic programming relies on working "from the bottom up" and saving the results of solving simpler problems
- These solutions to simpler problems are then used to compute the solution to more complex problems
- Dynamic programming solutions can often be quite complex and tricky
- Dynamic programming is used for optimization problems, especially ones that would otherwise take exponential time
- Only problems that satisfy the principle of optimality are suitable for dynamic programming solutions
- Since exponential time is unacceptable for all but the smallest problems, dynamic programming is sometimes essential

17
- Problem: scheduling a lecture hall
  - Each request has a start time, an end time, and a value
  - Goal: maximize total value
- Basic idea:
  - Sort requests by end time
  - Calculate p(request), the latest earlier request that does not overlap
  - Optimal schedule for requests 1..n:
    - If n is part of the optimal solution: the optimal schedule for p(n), plus n
    - Else: the optimal schedule for requests 1..n-1
  - Use subproblem solutions to build up the complete solution

18
- Weighted interval scheduling problem: given a collection of intervals I1, ..., In with weights w1, ..., wn, choose a maximum-weight set of non-overlapping intervals

[Figure: eight weighted intervals, a through h, laid out on a timeline from 0 to 11]

19
- Weighted interval scheduling problem: given a collection of intervals I1, ..., In with weights w1, ..., wn, choose a maximum-weight set of non-overlapping intervals
- p(j) = largest index i < j such that job i is compatible with job j
- For the running example: p(1) = 0, p(2) = 0, p(3) = 1, p(4) = 0, p(5) = 3, p(6) = 3

[Figure: six jobs on a timeline from 0 to 11, with weights w1 = 2, w2 = 4, w3 = 4, w4 = 7, w5 = 2, w6 = 1]

20
- Assume we know the optimal solution O: it either contains or doesn't contain the last request n
- Opt(j) = max(wj + Opt(p(j)), Opt(j-1))

[Figure: the same six jobs on a timeline from 0 to 11, with weights w1 = 2, w2 = 4, w3 = 4, w4 = 7, w5 = 2, w6 = 1 and p(1) = 0, p(2) = 0, p(3) = 1, p(4) = 0, p(5) = 3, p(6) = 3]

21
- Opt(j) = max(wj + Opt(p(j)), Opt(j-1))
- Opt(1) = max(w1 + Opt(0), Opt(0)) = max(2 + 0, 0) = 2
- Opt(2) = max(w2 + Opt(0), Opt(1)) = max(4 + 0, 2) = 4
- Opt(3) = max(w3 + Opt(1), Opt(2)) = max(4 + 2, 4) = 6
- Opt(4) = max(w4 + Opt(0), Opt(3)) = max(7 + 0, 6) = 7
- Opt(5) = max(w5 + Opt(3), Opt(4)) = max(2 + 6, 7) = 8
- Opt(6) = max(w6 + Opt(3), Opt(5)) = max(1 + 6, 8) = 8
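The recurrence can be filled in bottom-up in a single pass. A minimal sketch using the jobs from this example (the function name and the 1-indexed dummy-padded arrays are my own conventions):

```python
def weighted_interval_value(weights, p):
    """Max total weight of compatible jobs 1..n, sorted by end time.

    weights[j] and p[j] are for job j; index 0 is a dummy (empty schedule).
    """
    n = len(weights) - 1
    opt = [0] * (n + 1)  # opt[j] = best value using jobs 1..j
    for j in range(1, n + 1):
        # Either take job j plus the best schedule compatible with it,
        # or skip job j entirely
        opt[j] = max(weights[j] + opt[p[j]], opt[j - 1])
    return opt[n]

weights = [0, 2, 4, 4, 7, 2, 1]   # w1..w6 from the slides
p = [0, 0, 0, 1, 0, 3, 3]         # p(1)..p(6) from the slides
print(weighted_interval_value(weights, p))  # → 8
```

The loop reproduces exactly the Opt(1) through Opt(6) values computed on this slide.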

22
- A thief is robbing a house
- He has a bag to fill with items; this bag has a maximum weight capacity of W
- Each item he is considering stealing has a value vi and a weight wi
- His objective is to steal the items with maximum total value while staying within the weight limit of his bag

23
- In this case, what would a subproblem be? Does a solution for items 1..n include the solution for items 1..n-1?
- Assume we know the optimal solution O: it either contains or doesn't contain the last item n
- If it does:
  - Add the value of item n (vn) to the total value
  - O must contain an optimal solution to the problem consisting of items 1..n-1 and total weight W - wn
- If it doesn't:
  - O is equal to the optimal solution to the problem consisting of items 1..n-1 and total weight W
- If w < wi then Opt(i, w) = Opt(i-1, w); otherwise Opt(i, w) = max(Opt(i-1, w), vi + Opt(i-1, w - wi))

24
- i = n = 4, w = W = 5
  - 5 < 5? No. Opt(4, 5) = max(Opt(3, 5), 6 + Opt(3, 0))
- i = 3, w = 5
  - 5 < 4? No. Opt(3, 5) = max(Opt(2, 5), 5 + Opt(2, 1))
- i = 3, w = 0
  - 0 < 4? Yes. Opt(3, 0) = Opt(2, 0)
- ...

25
- Knapsack problem:
  - Given n objects and a "knapsack"
  - Item i weighs wi > 0 kilograms and has value vi > 0
  - Knapsack has a capacity of W kilograms
  - Goal: fill the knapsack so as to maximize total value
- Ex: { 3, 4 } has value 40

  Item | Value | Weight
     1 |     1 |      1
     2 |     6 |      2
     3 |    18 |      5
     4 |    22 |      6
     5 |    28 |      7

  W = 11

26
- Def. OPT(i) = max profit subset of items 1, ..., i
- Case 1: OPT does not select item i
  - OPT selects the best of { 1, 2, ..., i-1 }
- Case 2: OPT selects item i
  - Accepting item i does not immediately imply that we will have to reject other items
  - Without knowing what other items were selected before i, we don't even know if we have enough room for i
- Conclusion: we need more subproblems!

27
- Def. OPT(i, w) = max profit subset of items 1, ..., i with weight limit w
- Case 1: OPT does not select item i
  - OPT selects the best of { 1, 2, ..., i-1 } using weight limit w
- Case 2: OPT selects item i
  - New weight limit = w - wi
  - OPT selects the best of { 1, 2, ..., i-1 } using this new weight limit

28

  Capacity w:        0  1  2  3  4  5  6  7  8  9 10 11
  { }                0  0  0  0  0  0  0  0  0  0  0  0
  { 1 }              0  1  1  1  1  1  1  1  1  1  1  1
  { 1, 2 }           0  1  6  7  7  7  7  7  7  7  7  7
  { 1, 2, 3 }        0  1  6  7  7 18 19 24 25 25 25 25
  { 1, 2, 3, 4 }     0  1  6  7  7 18 22 24 28 29 29 40
  { 1, 2, 3, 4, 5 }  0  1  6  7  7 18 22 28 29 34 35 40

  Item | Value | Weight
     1 |     1 |      1
     2 |     6 |      2
     3 |    18 |      5
     4 |    22 |      6
     5 |    28 |      7

  W = 11; OPT: { 4, 3 }, value = 22 + 18 = 40
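The table above can be reproduced by a direct implementation of the two-case recurrence from the previous slide (the function name and 0-indexed input lists are my own conventions; the table itself is 1-indexed):

```python
def knapsack(values, weights, W):
    """0/1 knapsack: max value of a subset of items within weight limit W."""
    n = len(values)
    # opt[i][w] = max value using items 1..i with weight limit w
    opt = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        v, wt = values[i - 1], weights[i - 1]
        for w in range(W + 1):
            if wt > w:
                opt[i][w] = opt[i - 1][w]  # item i cannot fit
            else:
                # Skip item i, or take it and use the reduced weight limit
                opt[i][w] = max(opt[i - 1][w], v + opt[i - 1][w - wt])
    return opt[n][W]

print(knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))  # → 40
```

The table has (n + 1)(W + 1) entries, each filled in constant time, so the running time is O(nW).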

