Chapter 7 Dynamic Programming
7.1 Introduction
7.2 The Longest Common Subsequence Problem
7.3 Matrix Chain Multiplication
7.4 The Dynamic Programming Paradigm
7.5 The All-Pairs Shortest Path Problem
7.6 The Knapsack Problem
7.1 Introduction
Main idea: Dynamic programming is a powerful algorithm design technique that is widely used to solve combinatorial optimization problems. An algorithm that employs this technique is not recursive by itself, but the underlying solution of the problem is usually stated in the form of a recursive function. Rather than evaluating this recurrence top-down, the technique evaluates it bottom-up, saving intermediate results that are used later on to compute the desired solution.
Example 7.1 One of the most popular examples used to introduce recursion and induction is the problem of computing the Fibonacci sequence. The recursive procedure looks like the following:

1. procedure f(n)
2.   if (n = 1) or (n = 2) then return 1
3.   else return f(n-1) + f(n-2)

Time complexity:
(1) Evaluating the recurrence directly (top-down recursion): T(n) = Θ(φ^n), where φ = (1 + √5)/2 is the golden ratio.
(2) Using dynamic programming (bottom-up evaluation): Θ(n).
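As a concrete illustration, here is a minimal sketch of the bottom-up evaluation in Python (the function name fib_dp and the use of a full table are illustrative choices, not part of the text):

```python
def fib_dp(n):
    """Compute the nth Fibonacci number bottom-up in Theta(n) time."""
    if n <= 2:
        return 1
    f = [0] * (n + 1)          # f[i] will hold the ith Fibonacci number
    f[1] = f[2] = 1
    for i in range(3, n + 1):  # evaluate the recurrence bottom-up
        f[i] = f[i - 1] + f[i - 2]
    return f[n]

# Example: fib_dp(10) == 55
```

Saving each f[i] in the table is exactly what removes the exponential blow-up of the naive recursion.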
7.2 The Longest Common Subsequence Problem
Problem: Given two strings A and B of lengths n and m, respectively, over an alphabet Σ, determine the length of the longest subsequence that is common to both A and B. Here, a subsequence of A = a_1 a_2 … a_n is a string of the form a_{i_1} a_{i_2} … a_{i_k}, where each i_j is between 1 and n and i_1 < i_2 < … < i_k.
7.2 The Longest Common Subsequence Problem
Solution: In order to make use of the dynamic programming technique, we first find a recursive formula for the length of the longest common subsequence. Let A = a_1 a_2 … a_n and B = b_1 b_2 … b_m. Let L[i,j] denote the length of a longest common subsequence of a_1 a_2 … a_i and b_1 b_2 … b_j. Note that i or j may be zero, in which case one or both of a_1 a_2 … a_i and b_1 b_2 … b_j may be the empty string. Naturally, if i = 0 or j = 0, then L[i,j] = 0.
7.2 The Longest Common Subsequence Problem
Observation 7.1 Suppose that both i and j are greater than 0. Then
If a_i = b_j, L[i,j] = L[i-1,j-1] + 1.
If a_i ≠ b_j, L[i,j] = max{L[i,j-1], L[i-1,j]}.
So we have the recurrence

L[i,j] = 0                              if i = 0 or j = 0
L[i,j] = L[i-1,j-1] + 1                 if i, j > 0 and a_i = b_j
L[i,j] = max{L[i,j-1], L[i-1,j]}        if i, j > 0 and a_i ≠ b_j
Algorithm 7.1 LCS
Input: Two strings A and B of lengths n and m, respectively, over an alphabet Σ.
Output: The length of the longest common subsequence of A and B.
1. for i ← 0 to n
2.   L[i,0] ← 0
3. end for
4. for j ← 0 to m
5.   L[0,j] ← 0
6. end for
7. for i ← 1 to n
8.   for j ← 1 to m
9.     if a_i = b_j then L[i,j] ← L[i-1,j-1] + 1
10.    else L[i,j] ← max{L[i,j-1], L[i-1,j]}
11.    end if
12.  end for
13. end for
14. return L[n,m]
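For reference, a direct transcription of Algorithm 7.1 into Python might look like the sketch below (function and variable names are illustrative; the table is 0-indexed on the Python side):

```python
def lcs_length(A, B):
    """Length of a longest common subsequence of strings A and B (Algorithm 7.1)."""
    n, m = len(A), len(B)
    # L[i][j] = length of an LCS of A[0..i-1] and B[0..j-1]
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i][j - 1], L[i - 1][j])
    return L[n][m]

# Example: lcs_length("ABCBDAB", "BDCABA") == 4  (one LCS is "BCBA")
```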
Theorem 7.1 An optimal solution to the longest common subsequence problem can be found in Θ(nm) time and Θ(min{m,n}) space.
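The Θ(min{m,n}) space bound comes from the fact that each row of the table depends only on the previous row, so it suffices to keep two rows over the shorter string. A hedged sketch of this refinement (not spelled out in the slides; the function name is illustrative) could be:

```python
def lcs_length_linear_space(A, B):
    """LCS length using only Theta(min(m, n)) extra space."""
    if len(A) < len(B):
        A, B = B, A            # make B the shorter string
    m = len(B)
    prev = [0] * (m + 1)       # row i-1 of the table
    curr = [0] * (m + 1)       # row i of the table
    for i in range(1, len(A) + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(curr[j - 1], prev[j])
        prev, curr = curr, prev  # the just-computed row becomes "previous"
    return prev[m]
```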
7.3 Matrix Chain Multiplication
Example: Suppose we want to compute the product M_1 M_2 M_3 of three matrices M_1, M_2 and M_3 of dimensions 2×10, 10×2 and 2×10, using the standard method of matrix multiplication. If we multiply M_1 and M_2 and then multiply the result by M_3, the number of scalar multiplications will be 2×10×2 + 2×2×10 = 80. If, instead, we multiply M_1 by the result of multiplying M_2 and M_3, then the number of scalar multiplications becomes 10×2×10 + 2×10×10 = 400. Thus, carrying out the multiplication M_1(M_2 M_3) costs five times as much as the multiplication (M_1 M_2)M_3.
7.3 Matrix Chain Multiplication
Problem: Let M_1, M_2, …, M_n be n matrices. Find an order of multiplication that minimizes the cost of multiplying the chain of n matrices.
Time complexity:
Using the brute-force method (trying all orders): exponential, since the number of ways to parenthesize the chain is the Catalan number C_{n-1} = Ω(4^n / n^{3/2}).
Using dynamic programming: Θ(n³).
7.3 Matrix Chain Multiplication
Solution: Let M_1, M_2, …, M_n be n matrices and let r_1, r_2, …, r_{n+1} be positive integers, where r_i and r_{i+1} are, respectively, the number of rows and columns in matrix M_i, 1 ≤ i ≤ n. Let M_{i,j} denote the product M_i M_{i+1} … M_j, and let C[i,j] denote the minimum number of scalar multiplications needed to compute M_{i,j}. Let k be an index between i+1 and j, and compute the two matrices M_{i,k-1} = M_i M_{i+1} … M_{k-1} and M_{k,j} = M_k M_{k+1} … M_j. Then M_{i,j} = M_{i,k-1} M_{k,j}.
7.3 Matrix Chain Multiplication
The cost of computing M_{i,j} by splitting at index k is then

C[i,k-1] + C[k,j] + r_i r_k r_{j+1}

It follows that in order to find the minimum number of scalar multiplications required to perform the matrix multiplication M_1 M_2 … M_n, we only need to solve the recurrence

C[i,i] = 0,                                                    1 ≤ i ≤ n
C[i,j] = min_{i < k ≤ j} {C[i,k-1] + C[k,j] + r_i r_k r_{j+1}}, 1 ≤ i < j ≤ n

and return C[1,n].
Algorithm 7.2 MATCHAIN
Input: An array r[1..n+1] of positive integers corresponding to the dimensions of a chain of n matrices, where r[1..n] are the numbers of rows in the n matrices and r[n+1] is the number of columns in M_n.
Output: The least number of scalar multiplications required to multiply the n matrices.
Algorithm 7.2 MATCHAIN
1. for i ← 1 to n {fill in diagonal d_0}
2.   C[i,i] ← 0
3. end for
4. for d ← 1 to n-1 {fill in diagonals d_1 to d_{n-1}}
5.   for i ← 1 to n-d {fill in entries in diagonal d_d}
6.     j ← i+d
7.     comment: The next three lines compute C[i,j]
8.     C[i,j] ← ∞
9.     for k ← i+1 to j
10.      C[i,j] ← min{C[i,j], C[i,k-1] + C[k,j] + r[i]r[k]r[j+1]}
11.    end for
12.  end for
13. end for
14. return C[1,n]
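A Python rendering of Algorithm 7.2 might look like the following sketch; here the dimension array r is passed as an ordinary 0-indexed list of length n+1, so the pseudocode's r[i], r[k], r[j+1] become r[i-1], r[k-1], r[j]:

```python
def matchain(r):
    """Least number of scalar multiplications to multiply a chain of matrices.

    r has length n + 1; matrix i (1 <= i <= n) has dimensions r[i-1] x r[i].
    """
    n = len(r) - 1
    # C[i][j] = least cost of computing M_i M_{i+1} ... M_j (1-indexed)
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for d in range(1, n):                  # fill in diagonals d_1 to d_{n-1}
        for i in range(1, n - d + 1):
            j = i + d
            C[i][j] = float("inf")
            for k in range(i + 1, j + 1):  # try every split point k
                C[i][j] = min(C[i][j],
                              C[i][k - 1] + C[k][j] + r[i - 1] * r[k - 1] * r[j])
    return C[1][n]

# Example from the text: matchain([2, 10, 2, 10]) == 80
```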
Theorem 7.2 The minimum number of scalar multiplications required to multiply a chain of n matrices can be found in Θ(n³) time and Θ(n²) space.
7.4 The Dynamic Programming Paradigm
Save solutions to subproblems in order to avoid their recomputation.
All the table entries generated by the algorithm represent optimal solutions to the subinstances considered by the algorithm.
Principle of optimality: Given an optimal sequence of decisions, each subsequence must be an optimal sequence of decisions by itself.
7.5 The All-Pairs Shortest Path Problem
Problem: Let G = (V,E) be a directed graph in which each edge (i,j) has a nonnegative length l[i,j]. If there is no edge from vertex i to vertex j, then l[i,j] = ∞. The problem is to find the distance from each vertex to all other vertices, where the distance from vertex x to vertex y is the length of a shortest path from x to y.
7.5 The All-Pairs Shortest Path Problem
Solution: Assume that V = {1, 2, …, n}, and let i and j be two different vertices in V. Define d^k_{i,j} to be the length of a shortest path from i to j that does not pass through any vertex in {k+1, k+2, …, n}. Thus

d^0_{i,j} = l[i,j]
d^k_{i,j} = min{d^{k-1}_{i,j}, d^{k-1}_{i,k} + d^{k-1}_{k,j}},   1 ≤ k ≤ n

and the distance from i to j is d^n_{i,j}.
Algorithm 7.3 FLOYD
Input: An n×n matrix l[1..n,1..n] such that l[i,j] is the length of the edge (i,j) in a directed graph G = ({1, 2, …, n}, E).
Output: A matrix D with D[i,j] = the distance from i to j.
1. D ← l {copy the input matrix l into D}
2. for k ← 1 to n
3.   for i ← 1 to n
4.     for j ← 1 to n
5.       D[i,j] ← min{D[i,j], D[i,k] + D[k,j]}
6.     end for
7.   end for
8. end for
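A compact Python sketch of Algorithm 7.3, using 0-indexed vertices and float('inf') for ∞ (the function name is illustrative):

```python
def floyd(l):
    """All-pairs shortest distances for an n x n length matrix l (Algorithm 7.3).

    l[i][j] is the length of edge (i, j), or float('inf') if there is no edge.
    """
    n = len(l)
    D = [row[:] for row in l]      # copy the input matrix l into D
    for k in range(n):             # allow paths through vertices 0..k
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D
```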
7.6 The Knapsack Problem
Problem: Let U = {u_1, u_2, …, u_n} be a set of n items to be packed in a knapsack of size C. For 1 ≤ j ≤ n, let s_j and v_j be the size and value of the jth item, respectively, where C and the s_j, v_j, 1 ≤ j ≤ n, are all positive integers. The objective is to fill the knapsack with some items from U whose total size is at most C and such that their total value is maximum. More formally, given U of n items, we want to find a subset S ⊆ U such that Σ_{u_j ∈ S} v_j is maximized subject to the constraint Σ_{u_j ∈ S} s_j ≤ C.
7.6 The Knapsack Problem
Solution: Let V[i,j] denote the value obtained by filling a knapsack of size j with items taken from the first i items {u_1, u_2, …, u_i} in an optimal way.
7.6 The Knapsack Problem
Observation 7.2 V[i,j] is the maximum of the following two quantities:
V[i-1,j]: the maximum value obtained by filling a knapsack of size j with items taken from {u_1, u_2, …, u_{i-1}} only in an optimal way.
V[i-1, j-s_i] + v_i: the maximum value obtained by filling a knapsack of size j - s_i with items taken from {u_1, u_2, …, u_{i-1}} in an optimal way, plus the value of item u_i. This case applies only if j ≥ s_i, and it amounts to adding item u_i to the knapsack.
7.6 The Knapsack Problem
Solution: Observation 7.2 implies the following recurrences for finding the value of an optimal packing:

V[i,0] = V[0,j] = 0,                              0 ≤ i ≤ n, 0 ≤ j ≤ C
V[i,j] = V[i-1,j]                                 if j < s_i
V[i,j] = max{V[i-1,j], V[i-1, j-s_i] + v_i}       if j ≥ s_i

for 1 ≤ i ≤ n and 1 ≤ j ≤ C, and the answer is V[n,C].
Algorithm 7.4 KNAPSACK
Input: A set of items U = {u_1, u_2, …, u_n} with sizes s_1, s_2, …, s_n and values v_1, v_2, …, v_n, and a knapsack capacity C.
Output: The maximum value of Σ_{u_i ∈ S} v_i subject to Σ_{u_i ∈ S} s_i ≤ C for some subset of items S ⊆ U.
1. for i ← 0 to n
2.   V[i,0] ← 0
3. end for
4. for j ← 0 to C
5.   V[0,j] ← 0
6. end for
7. for i ← 1 to n
8.   for j ← 0 to C
9.     V[i,j] ← V[i-1,j]
10.    if j ≥ s_i then V[i,j] ← max{V[i,j], V[i-1, j-s_i] + v_i}
11.  end for
12. end for
13. return V[n,C]
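A Python sketch of Algorithm 7.4; the sizes and values are passed as 0-indexed lists, and the example data at the end is made up for illustration:

```python
def knapsack(s, v, C):
    """Maximum total value of items fitting in a knapsack of capacity C (Algorithm 7.4)."""
    n = len(s)
    # V[i][j] = best value using the first i items with capacity j
    V = [[0] * (C + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(C + 1):
            V[i][j] = V[i - 1][j]                  # skip item i
            if j >= s[i - 1]:                      # item i fits: consider taking it
                V[i][j] = max(V[i][j], V[i - 1][j - s[i - 1]] + v[i - 1])
    return V[n][C]

# Example: knapsack([2, 3, 4], [3, 4, 5], 5) == 7  (take the first two items)
```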
Theorem 7.3 An optimal solution to the knapsack problem can be found in Θ(nC) time and Θ(C) space.
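The Θ(C) space bound follows because row i of the table depends only on row i-1. A hedged sketch of a single-row variant (filling each row from right to left so the previous row's entries are still available; not part of Algorithm 7.4 as stated) could be:

```python
def knapsack_linear_space(s, v, C):
    """Knapsack value in Theta(nC) time and Theta(C) space."""
    V = [0] * (C + 1)
    for i in range(len(s)):
        # scan j downward so V[j - s[i]] still refers to the previous row
        for j in range(C, s[i] - 1, -1):
            V[j] = max(V[j], V[j - s[i]] + v[i])
    return V[C]
```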