
Dynamic Programming (Ch. 15) Not a specific algorithm, but a technique (like divide-and-conquer). Developed back in the day when “programming” meant “tabular method”. It doesn’t really refer to computer programming.

Dynamic Programming
Dynamic programming can provide a good solution for problems that take exponential time to solve by brute-force methods. It is typically applied to optimization problems, where there are many possible solutions, each solution has a particular value, and we wish to find a solution with an optimal (minimal or maximal) value. For many of these problems, a brute-force method must consider all subsets of a possibly very large set, so there are 2^n possible solutions – too many to consider sequentially for large n.
Divide-and-conquer algorithms find a solution by partitioning a problem into independent subproblems, solving the subproblems recursively, and then combining the solutions to solve the original problem. Dynamic programming is applicable when the subproblems are not independent, i.e. when they share subsubproblems.

Dynamic Programming
This process takes advantage of the fact that subproblems have optimal solutions that combine to form an overall optimal solution. DP is often useful for problems with overlapping subproblems. The technique involves solving each subproblem once, recording the result in a table, and using the information from the table to solve larger problems. For example, computing the nth Fibonacci number is a non-optimization problem to which dynamic programming can be applied:
F(n) = F(n-1) + F(n-2) for n >= 2, with F(0) = 0 and F(1) = 1.

Fibonacci Numbers
A straightforward, but inefficient algorithm to compute the nth Fibonacci number would use a top-down approach:

RecFibonacci(n)
1. if n = 0 return 0
2. else if n = 1 return 1
3. else return RecFibonacci(n-1) + RecFibonacci(n-2)

A more efficient, bottom-up approach starts with 0 and works up to n, requiring only n values to be computed (because earlier values are stored in an array):

Fibonacci(n)
1. create array f[0…n]
2. f[0] = 0
3. f[1] = 1
4. for i = 2 … n
5.   f[i] = f[i-1] + f[i-2]
6. return f[n]
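The bottom-up pseudocode above translates almost line for line into Python; the function name and the use of a 0-indexed list are choices of this sketch, not part of the slides:

```python
def fib_bottom_up(n):
    """Bottom-up Fibonacci: fill a table f[0..n] in increasing order."""
    if n == 0:
        return 0
    f = [0] * (n + 1)  # f[i] will hold F(i)
    f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]  # each value is computed exactly once
    return f[n]
```

Each subproblem F(i) is solved once and looked up thereafter, so the running time is linear in n, in contrast to the exponential top-down recursion.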

Dynamic Programming Development of a dynamic-programming algorithm can be broken into 4 steps: 1) Characterize the structure of an optimal solution (in words) 2) Recursively define the value of an optimal solution 3) Compute the value of an optimal solution from the bottom up 4) Construct an optimal solution from computed information

Rod Cutting Problem
Problem: Find the most profitable way to cut a rod of length n.
Given: a rod of length n and an array of prices; assume a piece of length i has price p_i.
Find the best set of cuts to get the maximum price, where:
each cut is an integer length in inches;
any number of cuts may be used, from 0 to n - 1;
there is no cost for making a cut.
Example cuts for n = 4 (prices shown above each example)

Rod Cutting Problem
Problem: Find the most profitable way to cut a rod of length n.
Given: a rod of length n and an array of prices; assume a piece of length i has price p_i.
Output: the maximum revenue obtainable for pieces whose lengths sum to n, computed as the sum of the prices of the individual pieces. If p_n is large enough, an optimal solution might require no cuts at all, i.e., just leave the rod n inches long.
Example cuts for n = 4 (prices shown above each example)

Rod Cutting Problem
Example rod lengths and values:

Length i (inches):  1  2  3  4
Price p_i:          1  5  8  9

A rod of length n can be cut in 2^(n-1) ways, since each inch mark can have a cut or no cut and there are at most n-1 places to cut.
Example: rod of length 4:
1, 3:       1 + 8 = 9
2, 2:       5 + 5 = 10
3, 1:       8 + 1 = 9
1, 1, 2:    1 + 1 + 5 = 7
1, 2, 1:    1 + 5 + 1 = 7
2, 1, 1:    5 + 1 + 1 = 7
1, 1, 1, 1: 1 + 1 + 1 + 1 = 4
Best = two 2-inch pieces = p_2 + p_2 = 5 + 5 = 10

Calculating Maximum Revenue
Compute the maximum revenue r_i for rods of length i.
Top-down approach:
1: r_1 = p_1
2: compare 2, 1+1
3: compare 3, 1+2, 2+1, 1+1+1
4: compare 4, 1+3, 2+2, 3+1, 1+1+2, 1+2+1, 2+1+1, 1+1+1+1

Length i (inches):  1  2  3  4
Price p_i:          1  5  8  9

i   r_i   optimal solution
1   1     1 (no cuts)
2   5     2 (no cuts)
3   8     3 (no cuts)
4   10    2 + 2

Calculating Maximum Revenue
Optimal revenue r_n can be determined by taking the maximum of:
p_n: the revenue for the entire rod, with no cuts,
r_1 + r_{n-1}: the max revenue from a rod of length 1 and a rod of length n-1,
r_2 + r_{n-2}: the max revenue from a rod of length 2 and a rod of length n-2,
…
r_{n-1} + r_1.
That is, r_n = max(p_n, r_1 + r_{n-1}, r_2 + r_{n-2}, …, r_{n-1} + r_1).
Example: For n = 7, one of the optimal solutions makes a cut at 3 inches, giving two subproblems of lengths 3 and 4. We need to solve both subproblems optimally. The optimal solution for a problem of length 4, cutting into two pieces, each of length 2, is used in the optimal solution to the problem of length 7.

Simpler way to decompose the problem
Every optimal solution has a leftmost cut. In other words, there is some cut that gives a first piece of length i cut off the left end, and a remaining piece of length n - i on the right. We need to divide only the remainder, not the first piece. This leaves only one subproblem to solve, rather than 2 subproblems. Say that the solution with no cuts has first-piece size i = n with revenue p_n, and remainder size 0 with revenue r_0 = 0. This gives a simpler version of the equation for r_n:

r_n = max over 1 ≤ i ≤ n of (p_i + r_{n-i})

Recursive, Top-down Solution to Rod Cutting
Input: price array p, length n
Output: maximum revenue for rod of length n

Cut-Rod(p, n)
1. if n == 0 return 0
2. q = -∞
3. for i = 1 to n
4.   q = max(q, p[i] + Cut-Rod(p, n-i))
5. return q

This procedure works, but it is very inefficient. If you code it up and run it, it could take more than an hour for n = 40. The running time almost doubles each time n increases by 1.
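As a sketch in Python (the function name and 0-indexing convention are choices made here: p[i-1] holds the price of a piece of length i):

```python
def cut_rod(p, n):
    """Recursive Cut-Rod: max revenue for a rod of length n.
    p is a 0-indexed list; p[i-1] is the price of a piece of length i."""
    if n == 0:
        return 0
    q = float('-inf')
    for i in range(1, n + 1):
        # Try every size i for the leftmost piece, recurse on the remainder.
        q = max(q, p[i - 1] + cut_rod(p, n - i))
    return q
```

With the example price table p = [1, 5, 8, 9], `cut_rod(p, 4)` returns 10, matching the 2 + 2 solution shown earlier. As the slide warns, this version is exponentially slow for larger n.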

Recursive, Top-down Solution to Rod Cutting: Recursion Tree
Why so inefficient? Cut-Rod calls itself repeatedly, even on subproblems it has already solved. The tree of recursive calls for n = 4 shows this; inside each node is the value of n for the call represented by that node. The subproblem of size 2 is solved twice, the one of size 1 four times, and the one of size 0 eight times.

Recursive, Top-down Solution to Rod Cutting
Exponential growth: Let T(n) equal the number of calls to Cut-Rod with second parameter equal to n. Then

T(0) = 1
T(n) = 1 + sum for j = 0 to n-1 of T(j), for n ≥ 1

The summation counts the calls where the second parameter is j = n - i. The solution to this recurrence is T(n) = 2^n.
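One way to confirm the T(n) = 2^n claim empirically is to instrument the recursion with a call counter. This counter is an illustrative sketch, not part of the slides:

```python
def count_calls(n):
    """Count the calls made by the Cut-Rod recursion pattern for input n.
    Should equal 2**n, matching the recurrence T(n) = 1 + sum of T(j)."""
    calls = 0

    def rec(m):
        nonlocal calls
        calls += 1           # count this call
        if m == 0:
            return
        for i in range(1, m + 1):
            rec(m - i)       # same call structure as Cut-Rod

    rec(n)
    return calls
```

For example, `count_calls(4)` returns 16 = 2^4, in agreement with the recursion tree on the previous slide (the size-0 subproblem alone is solved eight times).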

Dynamic-programming Solution Instead of solving the same subproblems repeatedly, arrange to solve each subproblem just once. Save the solution to a subproblem in a table, and refer back to the table whenever the subproblem is revisited. “Store, don’t recompute” = a time/memory trade-off. Turns an exponential-time solution into a polynomial-time solution. Two basic approaches: top-down with memoization, and bottom-up. We will focus on the bottom-up solutions because they are just as efficient and are non-recursive.

Rod Cutting Problem
The bottom-up approach uses the natural ordering of subproblems: a problem of size i is “smaller” than a problem of size j if i < j. Solve subproblems j = 0, 1, …, n in that order.

Input: price array p, length n
Output: maximum revenue for rod of length n

Bottom-Up-Cut-Rod(p, n)
1. let r[0…n] be a new array
2. r[0] = 0
3. for j = 1 to n
4.   q = -∞
5.   for i = 1 to j
6.     q = max(q, p[i] + r[j-i])
7.   r[j] = q
8. return r[n]

Running time? T(n) = Θ(n^2)
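A Python rendering of Bottom-Up-Cut-Rod might look like this (a sketch; here p is kept 1-indexed, as in the pseudocode, by giving it a dummy entry p[0] = 0):

```python
def bottom_up_cut_rod(p, n):
    """Bottom-up rod cutting; r[j] = max revenue for a rod of length j.
    p is 1-indexed via a leading dummy entry p[0]."""
    r = [0] * (n + 1)                    # r[0] = 0: a rod of length 0 earns nothing
    for j in range(1, n + 1):            # solve subproblems in increasing size
        q = float('-inf')
        for i in range(1, j + 1):        # try every size i for the first piece
            q = max(q, p[i] + r[j - i])  # r[j - i] was already computed
        r[j] = q
    return r[n]
```

With p = [0, 1, 5, 8, 9], `bottom_up_cut_rod(p, 4)` returns 10 in Θ(n^2) time, since each r[j] only looks up previously filled table entries.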

Rod Cutting Problem
The following pseudocode for bottom-up rod cutting saves the optimal choices in a separate array so that these choices can be printed: s[j] records the size of the first piece cut off in an optimal solution for a problem of size j.

Input: price array p, length n
Output: maximum revenue for rod of length n

Extended-Bottom-Up-Cut-Rod(p, n)
1. let r[0…n] and s[0…n] be new arrays
2. r[0] = 0
3. for j = 1 to n
4.   q = -∞
5.   for i = 1 to j
6.     if q < p[i] + r[j-i]
7.       q = p[i] + r[j-i]
8.       s[j] = i
9.   r[j] = q
10. return r and s

Printing Optimal Solution
The following pseudocode uses the s array to print the optimal solution.

Input: price array p, length n
Output: side-effect printing of optimal cuts

Print-Cut-Rod-Solution(p, n)
1. (r, s) = Extended-Bottom-Up-Cut-Rod(p, n)
2. while n > 0
3.   print s[n]
4.   n = n - s[n]

Running time? T(n) = Θ(n)
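Putting the last two slides together, a possible Python sketch (again with a dummy p[0] so prices stay 1-indexed; returning the cut list instead of printing is a choice of this sketch):

```python
def extended_bottom_up_cut_rod(p, n):
    """Bottom-up rod cutting that also records, in s[j], the size of the
    first piece in an optimal cut of a rod of length j."""
    r = [0] * (n + 1)
    s = [0] * (n + 1)
    for j in range(1, n + 1):
        q = float('-inf')
        for i in range(1, j + 1):
            if q < p[i] + r[j - i]:
                q = p[i] + r[j - i]
                s[j] = i          # best first-piece size found so far
        r[j] = q
    return r, s

def optimal_cuts(p, n):
    """Return the piece lengths of one optimal solution, as Print-Cut-Rod-Solution
    would print them."""
    _, s = extended_bottom_up_cut_rod(p, n)
    cuts = []
    while n > 0:
        cuts.append(s[n])         # take the recorded first piece
        n -= s[n]                 # and repeat on the remainder
    return cuts
```

For p = [0, 1, 5, 8, 9], `optimal_cuts(p, 4)` yields [2, 2], i.e. two 2-inch pieces, with revenue r[4] = 10.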

Matrix-Chain Product
If A is an m × n matrix and B is an n × p matrix, then A × B = C is an m × p matrix, and the time needed to compute C is O(mnp):
there are mp elements of C;
each element of C requires n scalar multiplications and n-1 scalar additions.
Matrix-Chain Multiplication Problem: Given matrices A_1, A_2, A_3, …, A_n, where the dimension of A_i is p_{i-1} × p_i, determine the minimum number of scalar multiplications needed to compute the product A_1 × A_2 × … × A_n. This involves finding the optimal way to parenthesize the product. Since matrix multiplication is associative, for more than 2 matrices there is more than one order of multiplication that gives the same end result.

Matrix-Chain Product
The running time of a brute-force solution (exhaustively checking all possible ways to parenthesize) is governed by the number of parenthesizations P(n):

P(1) = 1
P(n) = sum for k = 1 to n-1 of P(k) · P(n-k), for n ≥ 2

which grows exponentially in n. Hopefully, we can do better using Dynamic Programming.
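The recurrence for P(n) can itself be tabulated bottom-up; this short sketch (function name mine) makes the exponential growth concrete:

```python
def num_parens(n):
    """Number of ways to fully parenthesize a chain of n matrices:
    P(1) = 1, P(n) = sum of P(k) * P(n - k) for k = 1 .. n-1."""
    P = [0] * (n + 1)
    P[1] = 1
    for m in range(2, n + 1):
        # Split the chain of m matrices between position k and k+1.
        P[m] = sum(P[k] * P[m - k] for k in range(1, m))
    return P[n]
```

For example, `num_parens(4)` is 5 and `num_parens(5)` is 14, and the values keep growing exponentially, which is why checking every parenthesization is hopeless for long chains.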

Matrix-Chain Product Example
A_1 (4 × 2), A_2 (2 × 5), A_3 (5 × 1); dimension array p = (4, 2, 5, 1).
Two ways to parenthesize this product:

(A_1 × A_2) × A_3:
M_1 = A_1 × A_2: requires 4 · 2 · 5 = 40 multiplications; M_1 is a 4 × 5 matrix
M_2 = M_1 × A_3: requires 4 · 5 · 1 = 20 multiplications; M_2 is a 4 × 1 matrix
–> total multiplications = 40 + 20 = 60

A_1 × (A_2 × A_3):
M_1 = A_2 × A_3: requires 2 · 5 · 1 = 10 multiplications; M_1 is a 2 × 1 matrix
M_2 = A_1 × M_1: requires 4 · 2 · 1 = 8 multiplications; M_2 is a 4 × 1 matrix
–> total multiplications = 10 + 8 = 18

Matrix-Chain Product
The optimal substructure of this problem can be shown with the following argument: Suppose an optimal way to parenthesize A_i A_{i+1} … A_j splits the product between A_k and A_{k+1}. Then the way the prefix subchain A_i A_{i+1} … A_k is parenthesized must be optimal. Why? If there were a less costly way to parenthesize A_i A_{i+1} … A_k, substituting that solution into the parenthesization of A_i A_{i+1} … A_j would give a solution with lower cost, contradicting the assumption that the original parenthesization was optimal. Therefore, the solutions to the subproblems must themselves be optimal.

Matrix-Chain Product – Recursive Solution
A_1 (4 × 2), A_2 (2 × 5), A_3 (5 × 1); p = (4, 2, 5, 1).
Let M[i,j] = the minimum number of multiplications needed to compute A_i × A_{i+1} × … × A_j, where the dimension of A_i × A_{i+1} × … × A_j is p_{i-1} × p_j. M[i,i] = 0 for i = 1 to n, and M[1,n] is the solution we want. p is indexed from 0 to n and p_i = p[i].
M[i,j] can be determined as follows:

M[i,j] = min over i ≤ k < j of (M[i,k] + M[k+1,j] + p_{i-1} p_k p_j)

That is, M[i,j] equals the minimum cost of computing the subproducts A_{i…k} and A_{k+1…j}, plus the cost of multiplying these two results together. Each matrix A_i has dimension p_{i-1} × p_i, so computing the product A_{i…k} × A_{k+1…j} takes p_{i-1} p_k p_j scalar multiplications.

Matrix-Chain Product – Recursive Solution
A_1 (4 × 2), A_2 (2 × 5), A_3 (5 × 1); p = (4, 2, 5, 1).

RMP(p, i, j)
1. if i == j return 0
2. M[i,j] = ∞
3. for k = i to j-1
4.   q = RMP(p, i, k) + RMP(p, k+1, j) + p_{i-1} p_k p_j
5.   if q < M[i,j] then M[i,j] = q
6. return M[i,j]

M[1,3] = min{ M[1,1] + M[2,3] + p_0 p_1 p_3 = 0 + p_1 p_2 p_3 + p_0 p_1 p_3 = 2·5·1 + 4·2·1 = 18,
              M[1,2] + M[3,3] + p_0 p_2 p_3 = p_0 p_1 p_2 + 0 + p_0 p_2 p_3 = 4·2·5 + 4·5·1 = 60 }
= 18

The recursion tree for RMP(p, 1, 3) visits the subproblems 1:3, 1:1, 2:3, 1:2, 3:3, 2:2 (some of them more than once). There is redundant computation because no intermediate results are stored.

Matrix-Chain Product – Bottom-up Solution

Matrix-Chain-Order(p)   // p is the array of dimensions
1. n = p.length - 1
2. let M[1…n, 1…n] and s[1…n-1, 2…n] be new tables
3. for i = 1 to n
4.   M[i,i] = 0   /** fill in main diagonal with 0s **/
5. for d = 2 to n   /** d is chain length **/
6.   for i = 1 to n - d + 1
7.     j = i + d - 1
8.     M[i,j] = ∞
9.     for k = i to j - 1
10.      q = M[i,k] + M[k+1,j] + p_{i-1} p_k p_j
11.      if q < M[i,j]
12.        M[i,j] = q
13.        s[i,j] = k
14. return M and s

The M table holds the lowest number of multiplications for each subchain, and s holds the best split point for that subchain. Table s is used to print the optimal solution.

Matrix-Chain Product – Bottom-up Solution
Example run of Matrix-Chain-Order with dimension array p = [30, 35, 15, 5, 10, 20, 25], i.e. six matrices A_1 … A_6 where A_i is p_{i-1} × p_i.
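A Python sketch of Matrix-Chain-Order (the tables are padded with an unused row/column 0 so the code can stay 1-indexed like the pseudocode):

```python
def matrix_chain_order(p):
    """Bottom-up matrix-chain multiplication.
    p: dimension array; matrix A_i is p[i-1] x p[i].
    Returns (M, s): M[i][j] = min scalar multiplications for A_i..A_j,
    s[i][j] = the split point k achieving that minimum."""
    n = len(p) - 1
    M = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for d in range(2, n + 1):            # d = chain length
        for i in range(1, n - d + 2):
            j = i + d - 1
            M[i][j] = float('inf')
            for k in range(i, j):        # try every split A_i..A_k | A_k+1..A_j
                q = M[i][k] + M[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < M[i][j]:
                    M[i][j] = q
                    s[i][j] = k
    return M, s
```

On the small example p = [4, 2, 5, 1], `matrix_chain_order(p)[0][1][3]` is 18, matching the hand calculation two slides back.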

Printing the Optimal Parenthesization
Input: s array from Matrix-Chain-Order
Output: side-effect printing of the optimal parenthesization

Print-Optimal-Parens(s, i, j)   // initially, i = 1 and j = n
1. if i == j
2.   print “A_i”
3. else
4.   print “(”
5.   Print-Optimal-Parens(s, i, s[i,j])
6.   Print-Optimal-Parens(s, s[i,j] + 1, j)
7.   print “)”

For p = [30, 35, 15, 5, 10, 20, 25] this prints “((A1 (A2 A3)) ((A4 A5) A6))”.

Matrix-Chain Product – Bottom-Up Solution
Complexity: O(n^3) time, because of the nested for loops with each of d, i, and k taking on at most n-1 values; O(n^2) space for the two n × n tables M and s.

Longest Common Subsequence Problem (§ 15.4)
Problem: Given X = ⟨x_1, x_2, …, x_m⟩ and Y = ⟨y_1, y_2, …, y_n⟩, find a longest common subsequence (LCS) of X and Y.
Example:
X = ⟨A, B, C, B, D, A, B⟩
Y = ⟨B, D, C, A, B, A⟩
LCS_XY = ⟨B, C, B, A⟩ (or also LCS_XY = ⟨B, D, A, B⟩)
Brute-force solution:
1. Enumerate all subsequences of X and check whether each appears, in order, in Y (characters in a subsequence are not necessarily consecutive).
2. Each subsequence of X corresponds to a subset of the indices {1, 2, …, m} of the elements of X, so there are 2^m subsequences of X. The time to check all of them against Y is Θ(n · 2^m).
3. Clearly, this is not a good approach… time to try dynamic programming!

Notation
Notation: X_i denotes the prefix ⟨x_1, x_2, …, x_i⟩ of X, and Y_j denotes the prefix ⟨y_1, y_2, …, y_j⟩ of Y.
For example, if X = ⟨A, C, G, T, T, G, C, A⟩, then X_4 = ⟨A, C, G, T⟩.

Optimal Substructure of an LCS
Theorem: Let X = ⟨x_1, x_2, …, x_m⟩ and Y = ⟨y_1, y_2, …, y_n⟩ be sequences and let Z = ⟨z_1, z_2, …, z_k⟩ be any LCS of X and Y.
Case 1: If x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
Case 2: If x_m ≠ y_n and z_k ≠ x_m, then Z is an LCS of X_{m-1} and Y.
Case 3: If x_m ≠ y_n and z_k ≠ y_n, then Z is an LCS of X and Y_{n-1}.
So if x_m = y_n, then we must find an LCS of X_{m-1} and Y_{n-1}. If x_m ≠ y_n, then we must solve 2 subproblems: finding an LCS of X_{m-1} and Y, and finding an LCS of X and Y_{n-1}.

Optimal Substructure of an LCS
Theorem: Let X = ⟨x_1, x_2, …, x_m⟩ and Y = ⟨y_1, y_2, …, y_n⟩ be sequences and let Z = ⟨z_1, z_2, …, z_k⟩ be any LCS of X and Y.
Case 1: If x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
Proof: First, show that z_k = x_m = y_n. Suppose not. Then form the subsequence Z’ = ⟨z_1, z_2, …, z_k, x_m⟩. It is a common subsequence of X and Y and has length k+1, so Z’ is longer than Z. Contradiction to Z being an LCS.
Now show that Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}: Clearly, it is a common subsequence. Now suppose there exists a common subsequence W of X_{m-1} and Y_{n-1} that is longer than Z_{k-1}. Form W’ by appending x_m to W. W’ is a common subsequence of X and Y of length ≥ k+1. Contradiction to Z being an LCS.

Optimal Substructure of an LCS
Theorem: Let X = ⟨x_1, x_2, …, x_m⟩ and Y = ⟨y_1, y_2, …, y_n⟩ be sequences and let Z = ⟨z_1, z_2, …, z_k⟩ be any LCS of X and Y.
Case 2: If x_m ≠ y_n and z_k ≠ x_m, then Z is an LCS of X_{m-1} and Y.
Proof: If z_k ≠ x_m, then Z is a common subsequence of X_{m-1} and Y. Suppose there exists a common subsequence W of X_{m-1} and Y with length > k. Then W is also a common subsequence of X and Y. Contradiction to Z being an LCS.
Case 3: If x_m ≠ y_n and z_k ≠ y_n, then Z is an LCS of X and Y_{n-1}.
Proof: Symmetric to Case 2.

Recursive Solution to LCS Problem
The recursive LCS formulation:
Let C[i,j] = length of the LCS of X_i and Y_j, where X_i = ⟨x_1, x_2, …, x_i⟩ and Y_j = ⟨y_1, y_2, …, y_j⟩.
Our goal: C[m,n] (consider the entire X and Y strings).
Basis: C[0,j] = 0 and C[i,0] = 0.
C[i,j] is calculated as shown below (two cases):
Case 1: x_i = y_j (i, j > 0). In this case, we can increase the size of the LCS of X_{i-1} and Y_{j-1} by one by appending x_i = y_j to it, i.e., C[i,j] = C[i-1,j-1] + 1.
Case 2: x_i ≠ y_j (i, j > 0). In this case, we take the LCS to be the longer of the LCS of X_{i-1} and Y_j and the LCS of X_i and Y_{j-1}, i.e., C[i,j] = max(C[i,j-1], C[i-1,j]).

Top-Down Recursive Solution to LCS
initialize C[i,0] = C[0,j] = 0 for i = 0…m and j = 0…n
initialize C[i,j] = NIL for i = 1…m and j = 1…n

LCS(i, j)   // input is an index into each of the 2 strings, of lengths m and n
1. if C[i,j] == NIL
2.   if x_i == y_j
3.     C[i,j] = LCS(i-1, j-1) + 1
4.   else
5.     C[i,j] = max(LCS(i, j-1), LCS(i-1, j))
6. return C[i,j]

C is a two-dimensional array holding the solutions to subproblems. To find an LCS of X and Y, we may need to find the LCS of X and Y_{n-1} and the LCS of X_{m-1} and Y. But both of these contain the subsubproblem of finding the LCS of X_{m-1} and Y_{n-1}. Overlapping subproblems!

Bottom-Up DP Solution to LCS Problem
To compute c[i,j], we need the solutions to:
c[i-1,j-1] (when x_i = y_j)
c[i-1,j] and c[i,j-1] (when x_i ≠ y_j)

LCS(X, Y)   // X and Y are strings
1. m = length[X]
2. n = length[Y]
3. let b[1…m, 1…n] and c[0…m, 0…n] be new tables
4. for i = 0 to m do c[i,0] = 0
5. for j = 0 to n do c[0,j] = 0
6. for i = 1 to m
7.   for j = 1 to n
8.     if x_i == y_j
9.       c[i,j] = c[i-1,j-1] + 1
10.      b[i,j] = Up&Left
11.    else if c[i-1,j] ≥ c[i,j-1]
12.      c[i,j] = c[i-1,j]
13.      b[i,j] = Up
14.    else
15.      c[i,j] = c[i,j-1]
16.      b[i,j] = Left
17. return b and c
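A Python sketch of the bottom-up LCS table fill; the pointer symbols '\\', '^', and '<' are this sketch’s encoding of Up&Left, Up, and Left:

```python
def lcs_length(X, Y):
    """Bottom-up LCS: c[i][j] = length of an LCS of X[:i] and Y[:j].
    b[i][j] records which neighbor produced c[i][j]:
    '\\' = up-and-left (match), '^' = up, '<' = left."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # row 0 / column 0 stay 0
    b = [[''] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:            # x_i == y_j
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = '\\'
            elif c[i - 1][j] >= c[i][j - 1]:    # tie defaults to "up"
                c[i][j] = c[i - 1][j]
                b[i][j] = '^'
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = '<'
    return c, b
```

For the slide example, `lcs_length("ABCBDAB", "BDCABA")[0][7][6]` is 4, the length of ⟨B, C, B, A⟩.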

Bottom-Up LCS DP
Running time = O(mn) (constant time for each entry in c).
This algorithm finds the length of the LCS, but how can we keep track of the characters in the LCS? We need to keep track of which neighboring table entry gave the optimal solution to a subproblem (break ties arbitrarily):
if x_i = y_j, the answer came from the upper left (diagonal);
if x_i ≠ y_j, the answer came from above or to the left, whichever value is larger (if equal, default to above).
Array b keeps “pointers” indicating how to traverse the c table.

Bottom-Up LCS (arrays b & c)
X = abba, Y = bab. The filled c table (b pointers follow the rules on the previous slide):

         j:  0   1   2   3
                 b   a   b
 i: 0        0   0   0   0
    1   a    0   0   1   1
    2   b    0   1   1   2
    3   b    0   1   1   2
    4   a    0   1   2   2

c[4,3] = 2, so an LCS of abba and bab has length 2 (e.g., ⟨a, b⟩, ⟨b, a⟩, or ⟨b, b⟩).

Constructing the LCS

Print-LCS(b, X, i, j)
1. if i == 0 or j == 0 then return
2. if b[i,j] == Up&Left
3.   Print-LCS(b, X, i-1, j-1)
4.   print x_i
5. else if b[i,j] == Up
6.   Print-LCS(b, X, i-1, j)
7. else Print-LCS(b, X, i, j-1)

The initial call is Print-LCS(b, X, len(X), len(Y)), where b is the table of pointers.
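An illustrative Python variant returns the LCS as a string instead of printing it. It rebuilds the c table locally so the sketch is self-contained, and walks back exactly as Print-LCS does: a character match plays the role of the Up&Left pointer, and ties prefer "up":

```python
def lcs_string(X, Y):
    """Return one longest common subsequence of X and Y as a string."""
    m, n = len(X), len(Y)
    # Fill the c table bottom-up (same recurrence as the LCS pseudocode).
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Walk back from (m, n); a match corresponds to the Up&Left pointer.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:   # tie defaults to "up"
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))
```

The walk visits at most m + n cells, so reconstruction adds only O(m + n) time on top of the O(mn) table fill.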

Complexity of LCS Algorithm The running time of the LCS algorithm is O(mn), since each table entry takes O(1) time to compute. The running time of the Print-LCS algorithm is O(m + n), since one of m or n is decremented in each step of the recursion.

End of Dynamic Programming Lectures

Binomial Coefficient
Bottom-up dynamic programming can be applied to this problem for a polynomial-time solution:
C(n,k) = C(n-1, k-1) + C(n-1, k) for n > k > 0, and C(n,0) = C(n,n) = 1.

Binomial(n, k)
1. for i = 0 to n
2.   for j = 0 to min(i, k)
3.     if j == 0 or j == i
4.       C[i,j] = 1
5.     else C[i,j] = C[i-1, j-1] + C[i-1, j]
6. return C[n,k]
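The Binomial pseudocode in Python might look like this (a sketch; the table is allocated up front rather than grown row by row):

```python
def binomial(n, k):
    """Pascal's-triangle DP for C(n, k); row i holds C(i, 0..min(i, k))."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                         # edges of Pascal's triangle
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```

Each entry is computed once from the row above, so the running time is O(nk), avoiding the exponential blowup of the naive recursion.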

Rod Cutting Problem
Dynamic programming has both a top-down and a bottom-up solution. The top-down approach uses memoization: recording a computed value in a table so we can look it up later instead of recomputing it.

Top-down approach
Input: price array p[1…n], length n
Output: maximum revenue for rod of length n

Memoized-Cut-Rod(p, n)
1. let r[0…n] be a new array
2. for i = 0 to n
3.   r[i] = -∞
4. return Memoized-Cut-Rod-Aux(p, n, r)

Memoized-Cut-Rod-Aux(p, n, r)
1. if r[n] ≥ 0 return r[n]
2. if n == 0, q = 0
3. else
4.   q = -∞
5.   for i = 1 to n
6.     q = max(q, p[i] + Memoized-Cut-Rod-Aux(p, n-i, r))
7. r[n] = q
8. return q
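A Python sketch of the memoized top-down solution; a nested helper plays the role of Memoized-Cut-Rod-Aux, and p again carries a dummy p[0] so prices stay 1-indexed:

```python
def memoized_cut_rod(p, n):
    """Top-down rod cutting with memoization.
    p is 1-indexed via a leading dummy entry p[0]."""
    r = [float('-inf')] * (n + 1)   # -inf marks "not yet computed"

    def aux(m):
        if r[m] >= 0:
            return r[m]             # already solved: table lookup
        if m == 0:
            q = 0
        else:
            q = max(p[i] + aux(m - i) for i in range(1, m + 1))
        r[m] = q                    # memoize before returning
        return q

    return aux(n)
```

Like the bottom-up version, this solves each subproblem once, so it also runs in Θ(n^2) time; the only difference is that the table is filled on demand rather than in order.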