
1 Chapter 15 Dynamic Programming. Lee, Hsiu-Hui. Ack: This presentation is based on the lecture slides from Hsu, Lih-Hsing, as well as various materials from the web.

2 20071214 chap15 Hsiu-Hui Lee
Introduction
Dynamic programming is typically applied to optimization problems. Such problems can have many solutions; each solution has a value, and we wish to find a solution with the optimal value.

3 The development of a dynamic-programming algorithm can be broken into a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.

4 15.1 Assembly-line scheduling

5 A manufacturing problem: find the fastest way through a factory with two assembly lines.
a_{i,j}: the assembly time required at station S_{i,j}
t_{i,j}: the time to transfer a chassis away from assembly line i after having gone through station S_{i,j}
e_i, x_i: the entry time onto line i and the exit time from line i (used in the procedure below)

6 (figure: the layout of the factory's two assembly lines, each with n stations)

7 An instance of the assembly-line problem with costs

8 Step 1: The structure of the fastest way through the factory

9 Optimal substructure
An optimal solution to a problem (the fastest way through S_{1,j}) contains within it an optimal solution to subproblems (the fastest way through S_{1,j-1} or S_{2,j-1}).
Use optimal substructure to construct an optimal solution to the problem from optimal solutions to subproblems: to find the fastest way through S_{1,j} and S_{2,j}, solve the subproblems of finding the fastest way through S_{1,j-1} and S_{2,j-1}.

10 Step 2: A recursive solution
f_i[j]: the fastest possible time to get a chassis from the starting point through station S_{i,j}
f*: the fastest time to get a chassis all the way through the factory: f* = min(f_1[n] + x_1, f_2[n] + x_2)
Base cases: f_1[1] = e_1 + a_{1,1} and f_2[1] = e_2 + a_{2,1}
For j ≥ 2:
f_1[j] = min(f_1[j-1] + a_{1,j}, f_2[j-1] + t_{2,j-1} + a_{1,j})
f_2[j] = min(f_2[j-1] + a_{2,j}, f_1[j-1] + t_{1,j-1} + a_{2,j})

11 l_i[j] = the line number (1 or 2) whose station j - 1 is used in a fastest way through S_{i,j}; that is, S_{l_i[j], j-1} precedes S_{i,j}. Defined for i = 1, 2 and j = 2, ..., n.
l* = the line number whose station n is used.

12 Step 3: Computing an optimal solution
Let r_i(j) be the number of references made to f_i[j] in a recursive algorithm.
r_1(n) = r_2(n) = 1
r_1(j) = r_2(j) = r_1(j+1) + r_2(j+1)
The total number of references to all f_i[j] values is Θ(2^n). We can do much better if we compute the f_i[j] values in a different order from the recursive way: observe that for j ≥ 2, each value of f_i[j] depends only on the values of f_1[j-1] and f_2[j-1].
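The exponential blow-up is easy to verify numerically. The short Python sketch below (the helper name ref_counts is ours) tabulates r(j) = r_1(j) = r_2(j) from the recurrence above:

```python
def ref_counts(n):
    # r[j] = number of references to f_i[j] made by the naive recursion;
    # r(n) = 1 and r(j) = r1(j+1) + r2(j+1) = 2 * r(j+1).
    r = [0] * (n + 1)
    r[n] = 1
    for j in range(n - 1, 0, -1):
        r[j] = 2 * r[j + 1]
    return r[1:]

print(ref_counts(5))  # [16, 8, 4, 2, 1], i.e. r(j) = 2^(n-j)
```

Summing a geometric series like this is what makes the total reference count Θ(2^n), while the table-driven order touches each f_i[j] once.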

13 FASTEST-WAY(a, t, e, x, n)
 1  f_1[1] ← e_1 + a_{1,1}
 2  f_2[1] ← e_2 + a_{2,1}
 3  for j ← 2 to n
 4      do if f_1[j-1] + a_{1,j} ≤ f_2[j-1] + t_{2,j-1} + a_{1,j}
 5            then f_1[j] ← f_1[j-1] + a_{1,j}
 6                 l_1[j] ← 1
 7            else f_1[j] ← f_2[j-1] + t_{2,j-1} + a_{1,j}
 8                 l_1[j] ← 2
 9         if f_2[j-1] + a_{2,j} ≤ f_1[j-1] + t_{1,j-1} + a_{2,j}

14
10            then f_2[j] ← f_2[j-1] + a_{2,j}
11                 l_2[j] ← 2
12            else f_2[j] ← f_1[j-1] + t_{1,j-1} + a_{2,j}
13                 l_2[j] ← 1
14  if f_1[n] + x_1 ≤ f_2[n] + x_2
15     then f* ← f_1[n] + x_1
16          l* ← 1
17     else f* ← f_2[n] + x_2
18          l* ← 2

15 Step 4: Constructing the fastest way through the factory
PRINT-STATIONS(l, l*, n)
1  i ← l*
2  print "line " i ", station " n
3  for j ← n downto 2
4      do i ← l_i[j]
5         print "line " i ", station " j - 1

Output:
line 1, station 6
line 2, station 5
line 2, station 4
line 1, station 3
line 2, station 2
line 1, station 1
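As a cross-check of the two procedures above, here is a 0-indexed Python sketch (the function name is ours); the cost instance at the bottom is the standard textbook example, which we assume matches the figure earlier in the deck:

```python
def fastest_way(a, t, e, x):
    """Sketch of FASTEST-WAY, 0-indexed: a[i][j] is the assembly time at
    station j of line i, t[i][j] the transfer time after station j of line i,
    e[i]/x[i] the entry/exit times for line i."""
    n = len(a[0])
    f = [[0] * n for _ in range(2)]
    l = [[0] * n for _ in range(2)]
    f[0][0], f[1][0] = e[0] + a[0][0], e[1] + a[1][0]
    for j in range(1, n):
        for i in range(2):
            stay = f[i][j - 1] + a[i][j]
            cross = f[1 - i][j - 1] + t[1 - i][j - 1] + a[i][j]
            # Prefer staying on the same line on ties, as in the pseudocode.
            f[i][j], l[i][j] = (stay, i) if stay <= cross else (cross, 1 - i)
    fstar, lstar = min((f[i][n - 1] + x[i], i) for i in range(2))
    # Reconstruct the stations used, PRINT-STATIONS style (1-based output).
    route, i = [], lstar
    for j in range(n - 1, -1, -1):
        route.append((i + 1, j + 1))
        if j > 0:
            i = l[i][j]
    return fstar, route

a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
fstar, route = fastest_way(a, t, e=(2, 4), x=(3, 2))
print(fstar)   # 38
print(route)   # [(1, 6), (2, 5), (2, 4), (1, 3), (2, 2), (1, 1)]
```

The reconstructed route matches the sample output on the slide: line 1 for stations 1, 3, 6 and line 2 for stations 2, 4, 5.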

16 15.2 Matrix-chain multiplication
A product of matrices is fully parenthesized if it is either a single matrix, or a product of two fully parenthesized matrix products, surrounded by parentheses.

17 How to compute the chain product A_1 A_2 ⋯ A_n, where A_i is a p_{i-1} × p_i matrix for every i.
Example: A_1 A_2 A_3 A_4 can be fully parenthesized in five distinct ways:
(A_1 (A_2 (A_3 A_4))), (A_1 ((A_2 A_3) A_4)), ((A_1 A_2)(A_3 A_4)), ((A_1 (A_2 A_3)) A_4), (((A_1 A_2) A_3) A_4)

18 MATRIX-MULTIPLY(A, B)
1  if columns[A] ≠ rows[B]
2     then error "incompatible dimensions"
3     else for i ← 1 to rows[A]
4              do for j ← 1 to columns[B]
5                     do C[i, j] ← 0
6                        for k ← 1 to columns[A]
7                            do C[i, j] ← C[i, j] + A[i, k] · B[k, j]
8  return C

19 Complexity: Let A be a p × q matrix and B a q × r matrix. Then MATRIX-MULTIPLY(A, B) performs p·q·r scalar multiplications, so its complexity is Θ(pqr).

20 Example: A_1 is a 10 × 100 matrix, A_2 is a 100 × 5 matrix, and A_3 is a 5 × 50 matrix. Then ((A_1 A_2) A_3) takes 10·100·5 + 10·5·50 = 7500 scalar multiplications. However, (A_1 (A_2 A_3)) takes 100·5·50 + 10·100·50 = 75000 scalar multiplications.
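The dimensions in this example (10 × 100, 100 × 5, 5 × 50 in the standard textbook version, assumed here) make the comparison a few lines of arithmetic:

```python
# Multiplying a u x v matrix by a v x w matrix costs u*v*w scalar multiplications.
p0, p1, p2, p3 = 10, 100, 5, 50        # A1: 10x100, A2: 100x5, A3: 5x50

left = p0 * p1 * p2 + p0 * p2 * p3     # ((A1 A2) A3): 5000 + 2500
right = p1 * p2 * p3 + p0 * p1 * p3    # (A1 (A2 A3)): 25000 + 50000
print(left, right)                     # 7500 75000
```

A factor-of-ten gap from parenthesization alone is what motivates optimizing the order.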

21 The matrix-chain multiplication problem: Given a chain ⟨A_1, A_2, ..., A_n⟩ of n matrices, where for i = 1, 2, ..., n, matrix A_i has dimension p_{i-1} × p_i, fully parenthesize the product A_1 A_2 ⋯ A_n in a way that minimizes the number of scalar multiplications.

22 Counting the number of parenthesizations:
P(n) = 1 if n = 1
P(n) = Σ_{k=1}^{n-1} P(k)·P(n-k) if n ≥ 2
The solution is a Catalan number: P(n) = C(n-1) = (1/n)·binom(2n-2, n-1) = Ω(4^n / n^{3/2}), so brute-force enumeration is infeasible.
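A small Python sketch (the function name is ours) confirms that the number of full parenthesizations grows as the Catalan numbers:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def num_parens(n):
    # P(n): number of ways to fully parenthesize a chain of n matrices.
    if n == 1:
        return 1
    return sum(num_parens(k) * num_parens(n - k) for k in range(1, n))

print([num_parens(n) for n in range(1, 7)])   # [1, 1, 2, 5, 14, 42]
# P(n) equals the Catalan number C(n-1) = binom(2n-2, n-1) / n
assert all(num_parens(n) == comb(2 * n - 2, n - 1) // n for n in range(1, 12))
```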

23 Step 1: The structure of an optimal parenthesization
If an optimal parenthesization of A_i A_{i+1} ⋯ A_j splits the product between A_k and A_{k+1}, then the parenthesizations of the prefix A_i ⋯ A_k and of the suffix A_{k+1} ⋯ A_j within it must themselves be optimal (otherwise a cheaper parenthesization could be pasted in).

24 Step 2: A recursive solution
Define m[i, j] = the minimum number of scalar multiplications needed to compute the matrix A_{i..j} = A_i A_{i+1} ⋯ A_j:
m[i, j] = 0 if i = j
m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j } if i < j
Goal: m[1, n]

25 Step 3: Computing the optimal costs

26 MATRIX-CHAIN-ORDER(p)
 1  n ← length[p] - 1
 2  for i ← 1 to n
 3      do m[i, i] ← 0
 4  for l ← 2 to n
 5      do for i ← 1 to n - l + 1
 6             do j ← i + l - 1
 7                m[i, j] ← ∞
 8                for k ← i to j - 1
 9                    do q ← m[i, k] + m[k+1, j] + p_{i-1} p_k p_j
10                       if q < m[i, j]
11                          then m[i, j] ← q
12                               s[i, j] ← k
13  return m and s
Complexity: O(n^3) time, Θ(n^2) space.
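A direct Python transcription of MATRIX-CHAIN-ORDER, using 1-indexed tables padded with a row 0. The dimension vector at the bottom is the standard six-matrix textbook instance (an assumption, but consistent with the m[2,5] computation on a later slide):

```python
import math

def matrix_chain_order(p):
    """m[i][j] = min scalar multiplications for A_i..A_j (1-indexed);
    s[i][j] = an optimal split point k."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # l in the pseudocode
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

p = [30, 35, 15, 5, 10, 20, 25]   # the six-matrix instance from the example
m, s = matrix_chain_order(p)
print(m[2][5], m[1][6])           # 7125 15125
```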

27 Example: a chain of six matrices with dimensions A_1: 30 × 35, A_2: 35 × 15, A_3: 15 × 5, A_4: 5 × 10, A_5: 10 × 20, A_6: 20 × 25, i.e. p = ⟨30, 35, 15, 5, 10, 20, 25⟩.

28 The m and s tables computed by MATRIX-CHAIN-ORDER for n = 6

29 m[2, 5] = min {
m[2, 2] + m[3, 5] + p_1 p_2 p_5 = 0 + 2500 + 35·15·20 = 13000,
m[2, 3] + m[4, 5] + p_1 p_3 p_5 = 2625 + 1000 + 35·5·20 = 7125,
m[2, 4] + m[5, 5] + p_1 p_4 p_5 = 4375 + 0 + 35·10·20 = 11375
} = 7125

30 Step 4: Constructing an optimal solution

31 PRINT-OPTIMAL-PARENS(s, i, j)
1  if j = i
2     then print "A" i
3     else print "("
4          PRINT-OPTIMAL-PARENS(s, i, s[i, j])
5          PRINT-OPTIMAL-PARENS(s, s[i, j] + 1, j)
6          print ")"

32 Example: PRINT-OPTIMAL-PARENS(s, 1, 6)
Output: ((A1(A2A3))((A4A5)A6))
Recursion tree of the calls: [1,6] splits into [1,3] and [4,6]; [1,3] into [1,1] (A1) and [2,3]; [2,3] into [2,2] (A2) and [3,3] (A3); [4,6] into [4,5] and [6,6] (A6); [4,5] into [4,4] (A4) and [5,5] (A5).
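The same printer in Python, returning a string instead of printing. The split points in s below are read off the recursion tree shown above (so they are reconstructed by hand, not computed here):

```python
def print_optimal_parens(s, i, j):
    # Mirrors the slide's recursion, building the string bottom-up.
    if i == j:
        return f"A{i}"
    k = s[(i, j)]
    return ("(" + print_optimal_parens(s, i, k)
            + print_optimal_parens(s, k + 1, j) + ")")

# Split points taken from the recursion tree for the six-matrix example
s = {(1, 6): 3, (1, 3): 1, (2, 3): 2, (4, 6): 5, (4, 5): 4}
print(print_optimal_parens(s, 1, 6))   # ((A1(A2A3))((A4A5)A6))
```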

33 15.3 Elements of dynamic programming

34 Optimal substructure
We say that a problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.
Example: the matrix-chain multiplication problem.

35 1. You show that a solution to the problem consists of making a choice. Making this choice leaves one or more subproblems to be solved.
2. You suppose that for a given problem, you are given the choice that leads to an optimal solution.
3. Given this choice, you determine which subproblems ensue and how to best characterize the resulting space of subproblems.
4. You show that the solutions to the subproblems used within the optimal solution to the problem must themselves be optimal, by using a "cut-and-paste" technique.

36 Optimal substructure varies across problem domains in two ways:
1. how many subproblems are used in an optimal solution to the original problem, and
2. how many choices we have in determining which subproblem(s) to use in an optimal solution.

37 Overlapping subproblems. Example: MATRIX-CHAIN-ORDER

38 RECURSIVE-MATRIX-CHAIN(p, i, j)
1  if i = j
2     then return 0
3  m[i, j] ← ∞
4  for k ← i to j - 1
5      do q ← RMC(p, i, k) + RMC(p, k+1, j) + p_{i-1} p_k p_j
6         if q < m[i, j]
7            then m[i, j] ← q
8  return m[i, j]

39 The recursion tree for the computation of RECURSIVE-MATRIX-CHAIN(p, 1, 4)

40 We can prove that T(n) = Ω(2^n) using the substitution method.

41 Solutions: 1. bottom-up 2. memoization (memoize the natural, but inefficient, recursive algorithm)

42 MEMOIZED-MATRIX-CHAIN(p)
1  n ← length[p] - 1
2  for i ← 1 to n
3      do for j ← 1 to n
4             do m[i, j] ← ∞
5  return LOOKUP-CHAIN(p, 1, n)

43 LOOKUP-CHAIN(p, i, j)
1  if m[i, j] < ∞
2     then return m[i, j]
3  if i = j
4     then m[i, j] ← 0
5     else for k ← i to j - 1
6              do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
7                 if q < m[i, j]
8                    then m[i, j] ← q
9  return m[i, j]
Time complexity: O(n^3).
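In Python, functools.lru_cache can play the role of the m table initialized to ∞. This sketch (function names ours) mirrors MEMOIZED-MATRIX-CHAIN plus LOOKUP-CHAIN:

```python
from functools import lru_cache

def memoized_matrix_chain(p):
    """Top-down counterpart of MATRIX-CHAIN-ORDER; the cache stands in
    for the m table, and each (i, j) is solved at most once."""
    n = len(p) - 1

    @lru_cache(maxsize=None)
    def lookup(i, j):
        if i == j:
            return 0
        return min(lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    return lookup(1, n)

print(memoized_matrix_chain((30, 35, 15, 5, 10, 20, 25)))   # 15125
```

It visits O(n^2) distinct subproblems, each taking O(n) work, matching the O(n^3) bound.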

44 15.4 Longest common subsequence
X = ⟨A, B, C, B, D, A, B⟩, Y = ⟨B, D, C, A, B, A⟩
⟨B, C, A⟩ is a common subsequence of both X and Y.
⟨B, C, B, A⟩ or ⟨B, D, A, B⟩ is a longest common subsequence of X and Y.

45 Longest-common-subsequence problem: Given two sequences X = ⟨x_1, x_2, ..., x_m⟩ and Y = ⟨y_1, y_2, ..., y_n⟩, find a maximum-length common subsequence (LCS) of X and Y.
Define X_i: the ith prefix of X, X_i = ⟨x_1, x_2, ..., x_i⟩.
E.g., X = ⟨A, B, C, B, D, A, B⟩, X_4 = ⟨A, B, C, B⟩, X_0 = the empty sequence.

46 Theorem 15.1 (Optimal substructure of an LCS) Let X = ⟨x_1, ..., x_m⟩ and Y = ⟨y_1, ..., y_n⟩ be sequences, and let Z = ⟨z_1, ..., z_k⟩ be any LCS of X and Y.
1. If x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
2. If x_m ≠ y_n, then z_k ≠ x_m implies that Z is an LCS of X_{m-1} and Y.
3. If x_m ≠ y_n, then z_k ≠ y_n implies that Z is an LCS of X and Y_{n-1}.

47 A recursive solution to subproblems
Define c[i, j] as the length of an LCS of X_i and Y_j:
c[i, j] = 0 if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1 if i, j > 0 and x_i = y_j
c[i, j] = max(c[i, j-1], c[i-1, j]) if i, j > 0 and x_i ≠ y_j

48 Computing the length of an LCS
LCS-LENGTH(X, Y)
 1  m ← length[X]
 2  n ← length[Y]
 3  for i ← 1 to m
 4      do c[i, 0] ← 0
 5  for j ← 0 to n
 6      do c[0, j] ← 0
 7  for i ← 1 to m
 8      do for j ← 1 to n

49
 9             do if x_i = y_j
10                   then c[i, j] ← c[i-1, j-1] + 1
11                        b[i, j] ← "↖"
12                   else if c[i-1, j] ≥ c[i, j-1]
13                           then c[i, j] ← c[i-1, j]
14                                b[i, j] ← "↑"
15                           else c[i, j] ← c[i, j-1]
16                                b[i, j] ← "←"
17  return c and b

50 Complexity: O(mn)

51 PRINT-LCS(b, X, i, j)
1  if i = 0 or j = 0
2     then return
3  if b[i, j] = "↖"
4     then PRINT-LCS(b, X, i-1, j-1)
5          print x_i
6  elseif b[i, j] = "↑"
7     then PRINT-LCS(b, X, i-1, j)
8     else PRINT-LCS(b, X, i, j-1)
Complexity: O(m + n)
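A compact Python version of LCS-LENGTH plus reconstruction. Instead of storing the b arrows it re-derives the direction from the c table with the same tie-breaking, so it prints the same subsequence. The sample strings are the standard textbook pair, assumed here:

```python
def lcs(X, Y):
    # c[i][j] = length of an LCS of X[:i] and Y[:j]
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Walk back through the table, mimicking the b arrows.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))   # BCBA
```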

52 15.5 Optimal binary search trees
(figure: two binary search trees for the same set of keys; the first has expected search cost 2.80, while the second, with cost 2.75, is optimal)

53 Expected cost
Given keys k_1 < k_2 < ... < k_n with search probabilities p_i, and dummy keys d_0, d_1, ..., d_n (standing for unsuccessful searches) with probabilities q_i, the expected cost of a search in a tree T is
E[search cost in T] = Σ_{i=1}^{n} (depth_T(k_i) + 1)·p_i + Σ_{i=0}^{n} (depth_T(d_i) + 1)·q_i
= 1 + Σ_{i=1}^{n} depth_T(k_i)·p_i + Σ_{i=0}^{n} depth_T(d_i)·q_i

54 For a given set of probabilities, our goal is to construct a binary search tree whose expected search cost is smallest. We call such a tree an optimal binary search tree.

55 Step 1: The structure of an optimal binary search tree
Consider any subtree of a binary search tree. It must contain keys in a contiguous range k_i, ..., k_j, for some 1 ≤ i ≤ j ≤ n. In addition, a subtree that contains keys k_i, ..., k_j must also have as its leaves the dummy keys d_{i-1}, ..., d_j.
If an optimal binary search tree T has a subtree T' containing keys k_i, ..., k_j, then this subtree T' must be optimal as well for the subproblem with keys k_i, ..., k_j and dummy keys d_{i-1}, ..., d_j.

56 Step 2: A recursive solution
e[i, j]: the expected cost of searching an optimal binary search tree containing the keys k_i, ..., k_j
w(i, j) = Σ_{l=i}^{j} p_l + Σ_{l=i-1}^{j} q_l
e[i, j] = q_{i-1} if j = i - 1
e[i, j] = min_{i ≤ r ≤ j} { e[i, r-1] + e[r+1, j] + w(i, j) } if i ≤ j

57 Step 3: Computing the expected search cost of an OBST
OPTIMAL-BST(p, q, n)
 1  for i ← 1 to n + 1
 2      do e[i, i-1] ← q_{i-1}
 3         w[i, i-1] ← q_{i-1}
 4  for l ← 1 to n
 5      do for i ← 1 to n - l + 1
 6             do j ← i + l - 1
 7                e[i, j] ← ∞
 8                w[i, j] ← w[i, j-1] + p_j + q_j

58
 9                for r ← i to j
10                    do t ← e[i, r-1] + e[r+1, j] + w[i, j]
11                       if t < e[i, j]
12                          then e[i, j] ← t
13                               root[i, j] ← r
14  return e and root
The OPTIMAL-BST procedure takes Θ(n^3) time, just like MATRIX-CHAIN-ORDER.
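A Python transcription of OPTIMAL-BST. The probability vectors at the bottom are the standard five-key textbook instance, which we assume is the one behind the 2.75 figure earlier in the deck:

```python
def optimal_bst(p, q, n):
    """e[i][j], w[i][j], root[i][j] as in OPTIMAL-BST; p[1..n] and q[0..n]
    follow the slides' indexing (p[0] is an unused placeholder)."""
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]       # empty subtree: just the dummy key
        w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            e[i][j] = float("inf")
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j], root[i][j] = t, r
    return e, root

# Assumed instance: five keys with these p/q values
p = [0.0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
e, root = optimal_bst(p, q, 5)
print(round(e[1][5], 2))   # 2.75
```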

59 The tables e[i, j], w[i, j], and root[i, j] computed by OPTIMAL-BST on the key distribution.

60 Knuth has shown that there are always roots of optimal subtrees such that root[i, j-1] ≤ root[i+1, j] for all 1 ≤ i < j ≤ n. We can use this fact to modify the OPTIMAL-BST procedure to run in Θ(n^2) time.

