1 Dynamic Programming
Optimization Problems
Dynamic Programming Paradigm
Example: Matrix Multiplication
Principle of Optimality
Exercise: Trading Post Problem
2 Optimization Problems
In an optimization problem, there are typically many feasible solutions for any input instance I.
For each solution S, we have a cost or value function f(S).
Typically, we wish to find a feasible solution S such that f(S) is either minimized or maximized.
Thus, when designing an algorithm to solve an optimization problem, we must prove the algorithm produces a best possible solution.
3 Example Problem
You have six hours to complete as many tasks as possible, all of which are equally important.
Task A - 2 hours     Task D - hours
Task B - 4 hours     Task E - 2 hours
Task C - 1/2 hour    Task F - 1 hour
How many can you get done?
Is this a minimization or a maximization problem?
Give one example of a feasible but not optimal solution along with its associated value.
Give an optimal solution and its associated value.
4 Dynamic Programming
Dynamic programming is a divide-and-conquer technique at heart.
That is, we solve larger problems by patching together solutions to smaller problems.
Dynamic programming can achieve efficiency by storing solutions to subproblems to avoid redundant computations.
We typically avoid redundant computations by computing solutions in a bottom-up fashion.
5 Efficiency Example: Fibonacci Numbers
F(n) = F(n-1) + F(n-2); F(0) = 0; F(1) = 1
Top-down recursive computation is very inefficient: many F(i) values are computed multiple times.
Bottom-up computation is much more efficient: compute F(2), then F(3), then F(4), etc., using the stored values of smaller F(i) to compute the next value.
Each F(i) value is computed just once.
6 Recursive Computation
F(n) = F(n-1) + F(n-2); F(0) = 0, F(1) = 1
[recursion tree for F(6) = 8, showing repeated subtrees for F(5), F(4), F(3), F(2), F(1), F(0)]
Implementing it as a recursive procedure is easy but slow!
We keep calculating the same value over and over!
7 Bottom-Up Computation
We can calculate F(n) in linear time by storing small values.
F[0] = 0
F[1] = 1
for i = 2 to n
    F[i] = F[i-1] + F[i-2]
return F[n]
Moral: we can sometimes trade space for time.
Dynamic programming is a technique for efficiently computing recurrences by storing partial results.
Once you understand dynamic programming, it is usually easier to reinvent certain algorithms than to try to look them up!
Dynamic programming is best understood by looking at a bunch of different examples.
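The bottom-up loop above can be written directly as a short function (a minimal Python sketch; the function name `fib` is ours, not from the slides):

```python
def fib(n):
    """Bottom-up Fibonacci: each F(i) is computed once from stored values."""
    if n < 2:
        return n
    F = [0] * (n + 1)
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]
```

This runs in linear time and linear space, trading space for the exponential time of the naive recursion.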
8 Key Implementation Steps
Identify subsolutions that may be useful in computing the whole solution (often need to introduce parameters).
Develop a recurrence relation (recursive solution).
Set up the table of values/costs to be computed:
the dimensionality is typically determined by the number of parameters;
the number of values should be polynomial.
Determine the order of computation of values.
Backtrack through the table to obtain the complete solution (not just the solution value).
9 Example: Matrix Multiplication
Input:
A list of n matrices to be multiplied together using traditional matrix multiplication.
The dimensions of the matrices suffice as input; the entries are not needed.
Task:
Compute the optimal ordering of multiplications to minimize the total number of scalar multiplications performed.
Observations:
Multiplying an X x Y matrix by a Y x Z matrix takes X·Y·Z scalar multiplications.
Matrix multiplication is associative but not commutative.
10 Example Input
Input: M1, M2, M3, M4
M1: 13 x 5
M2: 5 x 89
M3: 89 x 3
M4: 3 x 34
Feasible solutions and their values:
((M1 M2) M3) M4: 10,582 scalar multiplications
(M1 M2) (M3 M4): 54,201 scalar multiplications
(M1 (M2 M3)) M4: 2856 scalar multiplications
M1 ((M2 M3) M4): 4055 scalar multiplications
M1 (M2 (M3 M4)): 26,418 scalar multiplications
11 Identify Subsolutions
Often need to introduce parameters.
Define the dimensions to be (d0, d1, …, dn), where matrix Mi has dimensions d(i-1) x d(i).
Let M(i,j) be the matrix formed by multiplying matrices Mi through Mj.
Define C(i,j) to be the minimum cost for computing M(i,j).
12 Develop a Recurrence Relation
Definitions:
M(i,j): matrices Mi through Mj
C(i,j): the minimum cost for computing M(i,j)
Recurrence relation for C(i,j):
C(i,i) = ???
C(i,j) = ???
We want to express C(i,j) in terms of "smaller" C terms.
13 Set Up Table of Values
The dimensionality is typically determined by the number of parameters.
The number of values should be polynomial.
[4 x 4 table for C(i,j), 1 <= i, j <= 4]
Recurrence relation for C(i,j):
C(i,i) = 0
C(i,j) = min over k = i to j-1 of ( C(i,k) + C(k+1,j) + d(i-1)·d(k)·d(j) )
The last multiplication is between matrices M(i,k) and M(k+1,j).
14 Order of Computation of Values
Many orders are typically OK; they just need to obey some constraints.
What are valid orders for this table?
[6 x 6 table for C(i,j)]
Computing along the diagonals (in order of increasing j - i) works.
15 Representing the Optimal Solution
Table of C(i,j) values for the example input (C(i,i) = 0):
C      j=2    j=3    j=4
i=1   5785   1530   2856
i=2          1335   1845
i=3                 9078
P(i,j) records the intermediate multiplication k used to compute M(i,j). That is, P(i,j) = k if the last multiplication was M(i,k) · M(k+1,j).
P      j=2    j=3    j=4
i=1     1      1      3
i=2            2      3
i=3                   3
16 Pseudocode
int MatrixOrder()
    for i = 1 to n
        C[i, i] = 0
    for j = 2 to n
        for i = j-1 downto 1
            C[i, j] = min over i <= k <= j-1 of ( C[i, k] + C[k+1, j] + d(i-1)·d(k)·d(j) )
            P[i, j] = the k achieving this minimum
    return C[1, n]
17 Backtracking
Procedure ShowOrder(i, j)
    if i = j then
        write("Mi")
    else
        k = P[i, j]
        write("(")
        ShowOrder(i, k)
        write(" ")
        ShowOrder(k+1, j)
        write(")")
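The table computation and the backtracking step can be combined in one short sketch (a minimal Python version of the pseudocode; the names `matrix_order` and `show_order` are ours):

```python
def matrix_order(d):
    """d = (d0, d1, ..., dn); matrix Mi has dimensions d[i-1] x d[i].

    Returns (C, P): C[i][j] is the minimum scalar-multiplication cost
    of M(i,j); P[i][j] is the split k of the last multiplication.
    """
    n = len(d) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]
    P = [[0] * (n + 1) for _ in range(n + 1)]
    for diag in range(1, n):                 # diagonals order: j - i = diag
        for i in range(1, n - diag + 1):
            j = i + diag
            C[i][j], P[i][j] = min(
                (C[i][k] + C[k + 1][j] + d[i - 1] * d[k] * d[j], k)
                for k in range(i, j)
            )
    return C, P


def show_order(P, i, j):
    """Backtrack through P to build the optimal parenthesization."""
    if i == j:
        return f"M{i}"
    k = P[i][j]
    return f"({show_order(P, i, k)} {show_order(P, k + 1, j)})"
```

On the example input `d = (13, 5, 89, 3, 34)` this reproduces the optimal cost 2856 and the ordering (M1 (M2 M3)) M4 from the earlier slides.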
18 Principle of Optimality
In the book, this is termed "optimal substructure."
An optimal solution contains within it optimal solutions to subproblems.
More detailed explanation:
Suppose solution S is optimal for problem P.
Suppose we decompose P into subproblems P1 through Pk, and that S can be decomposed into pieces S1 through Sk corresponding to the subproblems.
Then solution Si is an optimal solution for subproblem Pi.
19 Example 1: Matrix Multiplication
In our solution for computing matrix M(1,n), we have a final step of multiplying matrices M(1,k) and M(k+1,n).
Our subproblems then would be to compute M(1,k) and M(k+1,n).
Our solution uses optimal solutions for computing M(1,k) and M(k+1,n) as part of the overall solution.
20 Example 2: Shortest Path Problem
Suppose a shortest path from s to t visits u.
We can decompose the path into s-u and u-t.
The s-u path must be a shortest path from s to u, and the u-t path must be a shortest path from u to t.
Conclusion: dynamic programming can be used for computing shortest paths.
21 Example 3: Longest Path Problem
Suppose a longest (simple) path from s to t visits u.
We can decompose the path into s-u and u-t.
Is it true that the s-u path must be a longest path from s to u?
Conclusion?
22 Example 4: The Traveling Salesman Problem
What recurrence relation will return the optimal solution to the Traveling Salesman Problem?
If T(i) is the optimal tour on the first i points, will this help us in solving larger instances of the problem?
Can we set T(i+1) to be T(i) with the additional point inserted in the position that results in the shortest tour?
24 Summary of Bad Examples
There almost always is a way to obtain optimal substructure if you expand your subproblems enough.
For longest path and TSP, the number of subproblems grows to exponential size.
This is not useful, as we do not want to compute an exponential number of solutions.
25 When Is Dynamic Programming Effective?
Dynamic programming works best on objects that are linearly ordered and cannot be rearranged:
characters in a string
files in a filing cabinet
points around the boundary of a polygon
the left-to-right order of leaves in a search tree
Whenever your objects are ordered in a left-to-right way, dynamic programming must be considered.
26 Efficient Top-Down Implementation
We can implement any dynamic programming solution top-down by storing computed values in the table (memoization).
If all values need to be computed anyway, bottom-up is more efficient.
If some do not need to be computed, top-down may be faster.
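A top-down version of the Fibonacci example, where the stored table is a cache filled on demand (a sketch using Python's standard-library memoization decorator):

```python
from functools import lru_cache


@lru_cache(maxsize=None)  # the cache plays the role of the DP table
def fib_td(n):
    """Top-down Fibonacci: each fib_td(i) is computed once, then cached."""
    return n if n < 2 else fib_td(n - 1) + fib_td(n - 2)
```

Only the values actually reached by the recursion get computed, which is where top-down can beat bottom-up when parts of the table are never needed.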
27 Trading Post Problem
Input:
n trading posts on a river
R(i,j) is the cost for renting at post i and returning at post j, for i < j
Note: you cannot paddle upstream, so i < j.
Task:
Output a minimum-cost route to get from trading post 1 to trading post n.
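One way to set up this exercise (a sketch, not necessarily the intended solution): let cost(j) be the minimum cost of reaching post j, so cost(1) = 0 and cost(j) = min over i < j of cost(i) + R(i,j). The function name and the dict representation of R are ours:

```python
def min_trip_cost(n, R):
    """R[(i, j)] = cost to rent at post i and return at post j, i < j.

    cost[j] = min over i < j of cost[i] + R[(i, j)], with cost[1] = 0.
    """
    INF = float("inf")
    cost = [INF] * (n + 1)
    cost[1] = 0
    for j in range(2, n + 1):
        for i in range(1, j):
            cost[j] = min(cost[j], cost[i] + R[(i, j)])
    return cost[n]
```

For example, with 3 posts and R = {(1,2): 3, (2,3): 3, (1,3): 10}, stopping at post 2 (cost 6) beats the direct trip (cost 10).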
28 Longest Common Subsequence Problem
Given two strings S and T, a common subsequence is a subsequence that appears in both S and T.
The longest common subsequence problem is to find a longest common subsequence (lcs) of S and T.
Subsequence: characters need not be contiguous (different from a substring).
Can you use dynamic programming to solve the longest common subsequence problem?
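One standard way to answer this (a sketch): let L(i,j) be the length of an lcs of the first i characters of S and the first j characters of T, and fill the table bottom-up:

```python
def lcs_length(S, T):
    """L[i][j] = length of an lcs of S[:i] and T[:j]."""
    m, n = len(S), len(T)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if S[i - 1] == T[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1   # match: extend the lcs
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])  # drop one character
    return L[m][n]
```

The table has the polynomial size (m+1)(n+1), and backtracking through it recovers an actual lcs, not just its length.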
29 Longest Increasing Subsequence Problem
Input: a sequence of n numbers x1, x2, …, xn
Task: find the longest increasing subsequence of the numbers.
Subsequence: numbers need not be contiguous.
Can you use dynamic programming to solve the longest increasing subsequence problem?
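A sketch of the quadratic-time DP: let best(i) be the length of the longest increasing subsequence ending at position i; then best(i) = 1 + max over j < i with xj < xi of best(j):

```python
def lis_length(xs):
    """best[i] = length of the longest increasing subsequence ending at i."""
    if not xs:
        return 0
    best = [1] * len(xs)
    for i in range(1, len(xs)):
        for j in range(i):
            if xs[j] < xs[i]:                 # xs[i] can extend the run at j
                best[i] = max(best[i], best[j] + 1)
    return max(best)
```

(An O(n log n) solution also exists, but this version matches the table-filling pattern of the other slides.)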
30 Book Stacking Problem
Input:
n books with heights hi and thicknesses ti
length of shelf L
Task:
Assign books to shelves minimizing the sum of the heights of the tallest book on each shelf.
Books must be stored in order to conform to the catalog system (i.e., books on the first shelf must be 1 through i, books on the second shelf i+1 through k, etc.).
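Because the books must stay in catalog order, this fits the linear-order pattern above. One possible formulation (a sketch; the function name is ours): let cost(i) be the minimum total height for shelving the first i books, minimizing over the choice of where the last shelf begins:

```python
def min_shelf_height(heights, thicknesses, L):
    """cost[i] = min total height to shelve books 1..i in order.

    The last shelf holds some suffix j..i whose thicknesses fit in L;
    it contributes the height of its tallest book.
    """
    n = len(heights)
    INF = float("inf")
    cost = [INF] * (n + 1)
    cost[0] = 0
    for i in range(1, n + 1):
        width = 0
        tallest = 0
        for j in range(i, 0, -1):             # last shelf = books j..i
            width += thicknesses[j - 1]
            if width > L:
                break                         # shelf overflows; stop extending
            tallest = max(tallest, heights[j - 1])
            cost[i] = min(cost[i], cost[j - 1] + tallest)
    return cost[n]
```

For instance, three books of heights (2, 3, 2) and unit thickness on shelves of length 2 are best split as {1, 2} and {3} (or {1} and {2, 3}), for total height 5.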