
1
**Dynamic Programming**

Dynamic programming algorithms address problems whose solution is recursive in nature but has the following property: the direct implementation of the recursive solution results in identical recursive calls being executed more than once. Dynamic programming implements such algorithms by evaluating the recurrence in a bottom-up manner, saving intermediate results that are later used in computing the desired solution.

2
**Fibonacci Numbers**

What is the recursive algorithm that computes Fibonacci numbers? What is its time complexity? Note that it can be shown that F(n) = Θ(φⁿ), where φ = (1 + √5)/2 ≈ 1.618, so the direct recursion performs an exponential number of calls.
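The contrast can be sketched in Python (a minimal illustration, not from the slides; the function names are mine): the naive recursion repeats identical calls, while a bottom-up loop computes each value once.

```python
def fib_naive(n):
    # Direct recursive definition: identical calls are repeated,
    # giving an exponential number of calls in total.
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_dp(n):
    # Bottom-up dynamic programming: each value is computed once,
    # O(n) time and O(1) extra space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both agree on small inputs (e.g., both return 55 for n = 10), but only the bottom-up version remains fast as n grows.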

3
**Computing the Binomial Coefficient**

Recursive definition: C(n, k) = C(n−1, k−1) + C(n−1, k), with base cases C(n, 0) = C(n, n) = 1. Actual value: C(n, k) = n! / (k!(n−k)!).

4
**Computing the Binomial Coefficient**

What is the direct recursive algorithm for computing the binomial coefficient? How much does it cost? Note that the recursion tree has C(n, k) leaves, so the direct algorithm performs Ω(C(n, k)) recursive calls.

5
**Dynamic Programming**

Development of a dynamic programming solution to an optimization problem involves the following steps:
• Characterize the structure of an optimal solution:
• Optimal substructure, where an optimal solution consists of sub-solutions that are themselves optimal.
• Overlapping sub-problems.
• Recursively define the value of an optimal solution.
• Compute the value of an optimal solution in a bottom-up manner.
• Construct an optimal solution from the computed optimal value.

6
**Longest Common Subsequence**

Problem definition: Given two strings A and B over an alphabet Σ, determine the length of the longest subsequence that is common to A and B. A subsequence of A = a1a2…an is a string of the form ai1ai2…aik where 1 ≤ i1 < i2 < … < ik ≤ n.
Example: Let Σ = {x, y, z}, A = xyxyxxzy, B = yxyyzxy, and C = zzyyxyz.
LCS(A, B) = yxyzy, hence the length = 5.
LCS(B, C) = ? Hence the length = ?
LCS(A, C) = ? Hence the length = ?

7
**Straight-Forward Solution**

Brute-force search: How many subsequences exist in a string of length n? How much time is needed to check whether a string is a subsequence of another string of length m? What is the time complexity of the brute-force algorithm for finding the length of the longest common subsequence of two strings of sizes n and m?

8
**Dynamic Programming Solution**

Let L[i, j] denote the length of the longest common subsequence of a1a2…ai and b1b2…bj, which are prefixes of A and B of lengths n and m, respectively. Then:
L[i, j] = 0 when i = 0 or j = 0
L[i, j] = L[i−1, j−1] + 1 when i > 0, j > 0, ai = bj
L[i, j] = max(L[i−1, j], L[i, j−1]) when i > 0, j > 0, ai ≠ bj

9
**Algorithm LCS(A, B)**

Input: A and B, strings of length n and m
Output: length of an LCS of A and B

Initialize L[i, 0] and L[0, j] to zero
for i ← 1 to n do
    for j ← 1 to m do
        if ai = bj then L[i, j] ← 1 + L[i−1, j−1]
        else L[i, j] ← max(L[i−1, j], L[i, j−1])
return L[n, m]
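The pseudocode above translates directly into Python (a sketch; the function name is mine, and Python strings are 0-indexed so ai corresponds to A[i−1]):

```python
def lcs_length(A, B):
    # L[i][j] = length of the LCS of the prefixes A[:i] and B[:j].
    n, m = len(A), len(B)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:
                L[i][j] = 1 + L[i - 1][j - 1]
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]
```

On the slide's example strings A = xyxyxxzy and B = yxyyzxy this returns 5, matching LCS(A, B) = yxyzy.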

10
Example (Q7.5 pp. 220) Find the length of the longest common subsequence of A=xzyzzyx and B=zxyyzxz

11
Example (cont.): the L table for A = xzyzzyx and B = zxyyzxz, filled in row by row (table not reproduced in this transcript).

12
**Complexity Analysis of LCS Algorithm**

What is the time and space complexity of the algorithm? Both are Θ(nm): the algorithm fills an (n+1) × (m+1) table, spending constant time per entry.

13
**Matrix Chain Multiplication**

Assume matrices A, B, and C have dimensions 2×10, 10×2, and 2×10, respectively. Using the standard matrix multiplication algorithm, the number of scalar multiplications for (AB)C is 2·10·2 + 2·2·10 = 80, while for A(BC) it is 10·2·10 + 2·10·10 = 400.
Problem statement: Find the order of multiplying n matrices in which the number of scalar multiplications is minimum.

14
**Straight-Forward Solution**

Again, let us consider the brute-force method. We need to compute the number of different ways we can parenthesize the product of n matrices, e.g., how many different orderings do we have for the product of four matrices? Let f(n) denote the number of ways to parenthesize the product M1, M2, …, Mn. Splitting the product at the outermost level as (M1M2…Mk)(Mk+1Mk+2…Mn) gives f(n) = Σk f(k)·f(n−k). What are f(1), f(2), and f(3)?

15
**Catalan Numbers**

f(n) is the (n−1)-st Catalan number, f(n) = (1/n)·C(2n−2, n−1), which is approximately 4ⁿ/n^(3/2) up to constant factors, i.e., f(n) = Θ(4ⁿ/n^(3/2)).
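As a sanity check, the exact count can be computed from the closed form (a small sketch; `parenthesizations` is a name I chose):

```python
from math import comb

def parenthesizations(n):
    # f(n) = (n-1)-st Catalan number = binom(2n-2, n-1) / n:
    # the number of ways to fully parenthesize a product of n matrices.
    return comb(2 * n - 2, n - 1) // n
```

The first few values are f(1) = 1, f(2) = 1, f(3) = 2, f(4) = 5, f(5) = 14, growing roughly like 4ⁿ.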

16
**Cost of Brute Force Method**

How many possibilities do we have for parenthesizing n matrices? Θ(4ⁿ/n^(3/2)), by the Catalan-number count. How much does it cost to find the number of scalar multiplications for one parenthesized expression? Θ(n). Therefore, the total cost is Θ(n · 4ⁿ/n^(3/2)) = Θ(4ⁿ/√n), which is exponential.

17
**The Recursive Solution**

Since the number of columns of each matrix Mi is equal to the number of rows of Mi+1, we only need to specify the number of rows of all the matrices plus the number of columns of the last matrix: r1, r2, …, rn+1, respectively. Let the cost of multiplying the chain Mi…Mj (denoted Mi,j) be C[i, j]. If k is an index between i+1 and j, the cost of computing Mi,j by multiplying Mi,k−1 with Mk,j is C[i, k−1] + C[k, j] + ri·rk·rj+1. Therefore:
C[i, j] = min over i < k ≤ j of { C[i, k−1] + C[k, j] + ri·rk·rj+1 }, with C[i, i] = 0, and the answer is C[1, n].

18
**The Dynamic Programming Algorithm**

19
**Algorithm MatChain**

Input: r[1..n+1] of positive integers (the matrix dimensions)
Output: least number of scalar multiplications required

for i ← 1 to n do C[i, i] ← 0            // diagonal d0
for d ← 1 to n−1 do                      // for diagonals d1 to d(n−1)
    for i ← 1 to n−d do
        j ← i + d
        C[i, j] ← ∞
        for k ← i+1 to j do
            C[i, j] ← min{C[i, j], C[i, k−1] + C[k, j] + r[i]·r[k]·r[j+1]}
return C[1, n]
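An equivalent Python version (a sketch, using the common 0-based convention where matrix Mi has dimensions r[i−1] × r[i], so the indexing differs slightly from the pseudocode above):

```python
def mat_chain(r):
    # r[0..n] holds the dimensions: matrix M_i is r[i-1] x r[i].
    n = len(r) - 1
    INF = float("inf")
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for d in range(1, n):                 # diagonal = chain length - 1
        for i in range(1, n - d + 1):
            j = i + d
            C[i][j] = INF
            for k in range(i, j):         # split as (M_i..M_k)(M_{k+1}..M_j)
                cost = C[i][k] + C[k + 1][j] + r[i - 1] * r[k] * r[j]
                C[i][j] = min(C[i][j], cost)
    return C[1][n]
```

For the 2×10, 10×2, 2×10 example above, `mat_chain([2, 10, 2, 10])` returns 80, the cost of the (AB)C order.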

20
Example (Q7.11): Given as input the dimensions 2, 3, 6, 4, 2, 7, compute the minimum number of scalar multiplications.

21
**Time and Space Complexity of MatChain**

Time complexity: Θ(n³), since there are Θ(n²) table entries and each takes O(n) time to compute. Space complexity: Θ(n²) for the table C.

22
**Assembly-Line Scheduling**

Two parallel assembly lines in a factory, lines 1 and 2. Each line has n stations Si,1, …, Si,n. For each j, S1,j does the same thing as S2,j, but it may take a different amount of assembly time ai,j. Transferring away from line i after station j costs ti,j. There are also an entry time ei and an exit time xi at the beginning and end of line i.

23
**Assembly Lines**

Brute-force algorithm: try all 2ⁿ ways of choosing a line at each of the n stations. Time complexity: O(n·2ⁿ).

24
**Finding the Subproblem**

Pick some convenient stage of the process.

Say, just before the last station What’s the next decision to make? Whether the last station should be S1,n or S2,n What do you need to know to decide which option is better? What the fastest times are for S1,n & S2,n

25
**Recursive Formula for Subproblem**

Fastest time to any given station = min( fastest time through the previous station on the same line, fastest time through the previous station on the other line + the time it takes to switch lines )

26
**Recursive Formula (II)**

Let fi[j] denote the fastest possible time to get the chassis through Si,j. We have the following formulas:
f1[1] = e1 + a1,1 and f2[1] = e2 + a2,1
f1[j] = min( f1[j−1] + a1,j, f2[j−1] + t2,j−1 + a1,j )
f2[j] = min( f2[j−1] + a2,j, f1[j−1] + t1,j−1 + a2,j )
Total time: f* = min( f1[n] + x1, f2[n] + x2 )
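These recurrences can be sketched in Python (0-indexed lines and stations; the function name and argument layout are my choices):

```python
def fastest_way(a, t, e, x):
    # a[i][j]: assembly time at station j of line i (i in {0, 1})
    # t[i][j]: transfer time leaving line i after station j (length n-1)
    # e[i], x[i]: entry and exit times for line i
    n = len(a[0])
    f = [[0] * n for _ in range(2)]
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        # stay on the same line, or come from the other line and pay the transfer
        f[0][j] = min(f[0][j - 1], f[1][j - 1] + t[1][j - 1]) + a[0][j]
        f[1][j] = min(f[1][j - 1], f[0][j - 1] + t[0][j - 1]) + a[1][j]
    return min(f[0][n - 1] + x[0], f[1][n - 1] + x[1])
```

On a standard textbook instance (not from these slides) with e = [2, 4], a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]], t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]], and x = [3, 2], it returns 38.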

28
Analysis: the only loop is lines 3–13 of the algorithm, which iterates n−1 times, so the algorithm is O(n). The array l records which line is used for each station number.

29
**All-Pairs Shortest Paths**

Problem statement: Let G = (V, E) be a directed graph in which each edge (i, j) has a nonnegative length l[i, j]. The all-pairs shortest path problem is to find the length of the shortest path from each vertex to all other vertices. The set of vertices is V = {1, 2, …, n}, and l[i, j] = ∞ if (i, j) ∉ E, i ≠ j. Brute-force algorithm? O(n³): run Dijkstra's shortest-path algorithm n times, once from each source.

30
**Dynamic Programming Solution**

Optimal substructure property: a shortest path from a vertex to another vertex consists of the concatenation of shortest sub-paths through the intermediate vertices.
Definition: A k-path from vertex i to vertex j is a path that does not pass through any vertex in {k + 1, k + 2, …, n}. What is a 0-path? A 1-path? … An n-path?
A nice property: an r-path is also an s-path whenever r ≤ s.

31
**Dynamic Programming Solution**

Let p be the shortest path from i to j containing only vertices from the set {1, …, k}. If vertex k is not in p, then the shortest (k−1)-path is also the shortest k-path. If k is an intermediate vertex of p, then we break p down into p1 (from i to k) and p2 (from k to j).

32
**Algorithm Floyd**

Input: an n×n matrix length[1..n, 1..n] such that length[i, j] is the weight of the edge (i, j) in a directed graph G = ({1, 2, …, n}, E)
Output: a matrix D with D[i, j] = δ[i, j], the length of a shortest path from i to j

D ← length                // copy the input matrix length into D
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min{D[i, j], D[i, k] + D[k, j]}
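In Python, with ∞ represented as `float("inf")` (a direct sketch of the pseudocode above):

```python
INF = float("inf")

def floyd(length):
    # After iteration k, D[i][j] is the shortest i -> j distance
    # using only intermediate vertices from {0, ..., k}.
    n = len(length)
    D = [row[:] for row in length]   # copy the input matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D
```

For instance, on the 3-vertex graph with edges 1→2 (3), 2→3 (1), 3→1 (2), the distance from vertex 1 to vertex 3 comes out as 3 + 1 = 4.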

33
Example: a directed graph on vertices 1, 2, 3, 4 with edge weights 8, 11, 5, 12, 15 (diagram not reproduced in this transcript).

34
Example (cont.): the distance matrices after restricting to 0-paths, 1-paths, …, 4-paths; entries shrink as more intermediate vertices are allowed (tables not reproduced in this transcript).

35
**Time and Space Complexity**

Time complexity: Θ(n³), for the three nested loops. Space complexity: Θ(n²), since D is updated in place.

36
**Greedy vs. DP**

Greedy:
• Make a choice at each step.
• Make the choice before solving the subproblems.
• Solve top-down.

Dynamic programming:
• The choice depends on knowing optimal solutions to subproblems; solve subproblems first.
• Solve bottom-up.

37
**Coin Changing**

The greedy algorithm works fine for the coin set [100, 25, 10, 5, 1] (prove the greedy-choice property; see Rosen, section 2.1, pages 128–129). But the greedy method does not work in all cases. For the coin set {8, 5, 1} and change = 10, the greedy solution is {8, 1, 1}, while the optimal solution is {5, 5}. What if the coin set is {10, 6, 1} and change = 12?
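The greedy failure is easy to demonstrate (a small sketch; the function name is mine):

```python
def greedy_change(coins, amount):
    # Repeatedly take the largest coin that still fits.
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used
```

`greedy_change([8, 5, 1], 10)` returns [8, 1, 1] — three coins, even though two coins ({5, 5}) suffice; similarly `greedy_change([10, 6, 1], 12)` returns [10, 1, 1] instead of {6, 6}.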

38
**Coin Changing: Dyn. Prog.**

A = 12, denom = [10, 6, 1]. What could be the sub-problems, and which parameters describe them? How do we solve a sub-problem? How do we solve the trivial sub-problems? In which order do the sub-problems have to be solved?
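One possible answer, sketched in Python: take "fewest coins for amount a" as the sub-problem, solve the trivial case a = 0 first, and then the amounts a = 1, 2, …, A in increasing order (names are mine):

```python
def min_coins(denoms, A):
    # C[a] = fewest coins needed to make amount a (inf if impossible).
    INF = float("inf")
    C = [0] + [INF] * A
    for a in range(1, A + 1):
        for d in denoms:
            # Using one coin d reduces the problem to amount a - d.
            if d <= a and C[a - d] + 1 < C[a]:
                C[a] = C[a - d] + 1
    return C[A]
```

For the example above, `min_coins([10, 6, 1], 12)` returns 2 (two 6-coins), where greedy needed three coins.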

39
**0/1 Knapsack Problem**

The greedy approach does not give an optimal solution:

n = 3, W = 30
weights = (20, 10, 5), values = (180, 80, 50)
ratios = (180/20, 80/10, 50/5) = (9, 8, 10)
Greedy solution: (1, 0, 1) = 180 + 50 = 230
Optimal solution: (1, 1, 0) = 180 + 80 = 260

40
**0-1 Knapsack problem: brute-force approach**

Let's first solve this problem with a straightforward algorithm. Since there are n items, there are 2ⁿ possible combinations of items. We go through all combinations and find the one with the largest total value whose total weight is less than or equal to W. The running time is O(2ⁿ).

41
**0-1 Knapsack problem: brute-force approach**

Can we do better? Yes, with an algorithm based on dynamic programming. We need to carefully identify the subproblems. Let's try this: if the items are labeled 1..n, then a subproblem would be to find an optimal solution for Sk = {items labeled 1, 2, …, k}.

42
**Defining a Subproblem**

If the items are labeled 1..n, then a subproblem would be to find an optimal solution for Sk = {items labeled 1, 2, …, k}. Let's add another parameter: w, which represents the weight limit for the subset of items. The subproblem is to compute B[k, w].

43
**Recursive Formula**

The best subset of Sk with weight limit w either contains item k or not.
First case: wk > w. Item k can't be part of the solution, since if it were, the total weight would exceed w, which is unacceptable. Hence B[k, w] = B[k−1, w].
Second case: wk ≤ w. Item k may be in the solution, and we choose whichever case has the greater value: B[k, w] = max(B[k−1, w], B[k−1, w−wk] + bk).

44
**The 0/1 Knapsack Algorithm**

B[k, w] = value of the best selection from items 1..k with total weight at most w. Base case: k = 0, no items to choose from, so the total value is 0. The answer is B[n, W], the entry in the last row and last column. Running time: O(nW). Note: this is not a polynomial-time algorithm if W is large.

45
**The 0/1 Knapsack Algorithm**

Algorithm 0-1Knapsack(S, W):
Input: set S of n items with benefit bi and weight wi; maximum weight W
Output: value of the best subset with weight ≤ W

for w ← 0 to W do B[0, w] ← 0
for k ← 1 to n do
    for w ← 0 to wk−1 do B[k, w] ← B[k−1, w]      // item k does not fit
    for w ← wk to W do
        B[k, w] ← max(B[k−1, w], B[k−1, w−wk] + bk)
return B[n, W]
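A Python sketch of the table-filling algorithm (0-indexed item lists; the function name is mine):

```python
def knapsack(weights, values, W):
    # B[k][w] = best value using items 1..k with total weight at most w.
    n = len(weights)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):
        for w in range(W + 1):
            if weights[k - 1] <= w:
                # Item k fits: take the better of skipping or including it.
                B[k][w] = max(B[k - 1][w],
                              B[k - 1][w - weights[k - 1]] + values[k - 1])
            else:
                B[k][w] = B[k - 1][w]   # item k does not fit
    return B[n][W]
```

On the greedy counterexample above (weights (20, 10, 5), values (180, 80, 50), W = 30) it returns the optimal 260 rather than the greedy 230.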

46
**Example**

Let's run our algorithm on the following data:

n = 4 (# of elements) W = 5 (max weight) Elements (weight, benefit): (2,3), (3,4), (4,5), (5,6)

47
Example: the initialization loop (for w ← 0 to W do B[0, w] ← 0) fills row k = 0 of the table with zeros (table not reproduced in this transcript).

48
**Example**

Items: 1: (2, 3), 2: (3, 4), 3: (4, 5), 4: (5, 6)

| k \ w | 0 | 1 | 2 | 3 | 4 | 5 |
|-------|---|---|---|---|---|---|
| 1     | 0 | 0 | 3 | 3 | 3 | 3 |
| 2     | 0 | 0 | 3 | 4 | 4 | 7 |
| 3     | 0 | 0 | 3 | 4 | 5 | 7 |
| 4     | 0 | 0 | 3 | 4 | 5 | 7 |

B[k, w] ← max(B[k−1, w], B[k−1, w−wk] + bk)

49
**Improvements**

Running time: O(nW). Note: this is not a polynomial-time algorithm if W is large (e.g., if W = n!, it is worse than 2ⁿ).
Improvement: B[k, w] is computed only from B[k−1, w] and B[k−1, w−wk]. Start from B[n, W] and compute only the entries B[i, j] that are actually needed. At most 1 + 2 + 4 + … + 2ⁿ⁻¹ = 2ⁿ − 1 entries are touched, giving a worst-case complexity of O(min{nW, 2ⁿ}).
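This improvement can be realized with memoized top-down recursion, which evaluates only the entries actually reachable from B[n, W] (a sketch; it assumes Python's `functools.lru_cache` and my own function names):

```python
from functools import lru_cache

def knapsack_topdown(weights, values, W):
    # Memoized recursion: only the (k, w) states actually reached are
    # computed, so the work is bounded by both O(nW) and O(2^n).
    @lru_cache(maxsize=None)
    def best(k, w):
        if k == 0:
            return 0                      # no items left to choose from
        skip = best(k - 1, w)
        if weights[k - 1] <= w:
            take = best(k - 1, w - weights[k - 1]) + values[k - 1]
            return max(skip, take)
        return skip
    return best(len(weights), W)
```

It returns the same values as the bottom-up table, while touching at most min{nW, 2ⁿ} states.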

50
**Summary**

Steps in a dynamic programming solution:
• Characterize the structure of an optimal solution.
• Recursively define the value of an optimal solution.
• Compute the value of an optimal solution in a bottom-up manner.
• Construct an optimal solution.

We discussed DP solutions for (a) LCS, (b) matrix chain multiplication, (c) assembly-line scheduling, (d) all-pairs shortest paths, (e) knapsack, and (f) coin change.
