Algorithm Course Dr. Aref Rashad

Presentation on theme: "Algorithm Course Dr. Aref Rashad"— Presentation transcript:

1 Algorithm Course, Dr. Aref Rashad
Part 7: Dynamic Programming (February 2013)

2 Dynamic Programming
The dynamic programming (DP) approach is similar in spirit to the divide-and-conquer technique. DP is used primarily for optimisation problems. However, DP is efficient only if the problem has a certain amount of structure that we can exploit.

3 Basic idea
Optimal substructure: an optimal solution to the problem consists of optimal solutions to subproblems. Overlapping subproblems: there are few distinct subproblems in total, but many recurring instances of each. Solve bottom-up, building a table of solved subproblems that are then used to solve larger ones.

4 Shortest Paths: Dynamic Programming
[Figure: weighted graph with start node a, end node b, and intermediate nodes c, d, e, f, g. Edge weights, as read off the recurrences on the later slides: a-c 5, a-d 3, a-g 14, c-d 11, c-e 3, c-f 2, d-e 7, d-g 6, e-g 7, b-e 5, b-f 7, b-g 6.]

5 General characteristics of Dynamic Programming
The problem structure is divided into stages. Each stage has a number of states associated with it. Making decisions at one stage transforms one state of the current stage into a state in the next stage. Given the current state, the optimal decision for each of the remaining states does not depend on the previous states or decisions; this is known as the principle of optimality for dynamic programming. The principle of optimality allows the problem to be solved stage by stage, recursively.

6 Division into stages
The problem is divided into smaller subproblems, each of them represented by a stage. The stages are defined in many different ways depending on the context of the problem. If the problem is about the long-term development of a system, then the stages naturally correspond to time periods. If the goal of the problem is to move some objects from one location to another on a map, then partitioning the map into several geographical regions might be the natural division into stages. Generally, if the accomplishment of a certain task can be considered as a multi-step process, then each stage can be defined as a step in the process.

7 States
Each stage has a number of states associated with it. Depending on what decisions are made in one stage, the system might end up in different states in the next stage. If a geographical region corresponds to a stage, then the states associated with it could be particular locations (cities, warehouses, etc.) in that region. In other situations a state might correspond to amounts of certain resources which are essential for optimizing the system.

8 Decisions Making decisions at one stage transforms one state of the current stage into a state in the next stage. In a geographical example, it could be a decision to go from one city to another. In resource allocation problems, it might be a decision to create or spend a certain amount of a resource.

9 Optimal Policy and Principle of Optimality
The goal of the solution procedure is to find an optimal policy for the overall problem, i.e., an optimal policy decision at each stage for each of the possible states. Given the current state, the optimal decision for each of the remaining states does not depend on the previous states or decisions. This is known as the principle of optimality for dynamic programming. For example, in the geographical setting the principle works as follows: the optimal route from a current city to the final destination does not depend on the way we got to the city. A system can be formulated as a dynamic programming problem only if the principle of optimality holds for it.

10 Recursive solution to the problem
The principle of optimality allows the problem to be solved stage by stage, recursively. The solution procedure: finds the optimal policy for the last stage (the solution for the last stage is normally trivial); then establishes a recursive relationship which identifies the optimal policy for stage t, given that stage t+1 has already been solved; and moves backward stage by stage until it finds the optimal policy starting at the initial stage.

11 Solving Inventory Problems by DP
Main characteristics: Time is broken up into periods. The demands for all periods are known in advance. At the beginning of each period, the firm must determine how many units should be produced. Production and storage capacities are limited. Each period’s demand must be met on time from inventory or current production. During any period in which production takes place, a fixed cost of production as well as a variable per-unit cost is incurred. The firm’s goal is to minimize the total cost of meeting all demands on time.

12 Inventory Problems: Example
Producing airplanes. 3 production periods; no inventory at the beginning. Can produce at most 3 airplanes in each period and keep at most 2 airplanes in inventory. Set-up cost for each period is 10. Determine a production schedule that minimizes the total cost. [Table: demand and unit production cost for each of the three periods; the specific values are garbled in this transcript.]
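A minimal Python sketch of this production-scheduling DP, added for illustration. The demand and unit-cost figures in the slide's table did not survive the transcript, so the numbers below are hypothetical placeholders; only the structure (stages = periods, state = inventory carried into the period, decision = how many airplanes to produce) follows the slides.

```python
# Hedged sketch: demand and unit_cost values are made-up placeholders.
from functools import lru_cache

demand = [1, 3, 2]         # hypothetical demand per period
unit_cost = [5, 4, 4]      # hypothetical variable cost per airplane, per period
SETUP, MAX_PROD, MAX_INV = 10, 3, 2

@lru_cache(maxsize=None)
def best(period, inventory):
    """Minimum cost of meeting all remaining demand, starting this period with `inventory`."""
    if period == len(demand):
        return 0
    best_cost = float("inf")
    for produce in range(MAX_PROD + 1):
        left = inventory + produce - demand[period]   # inventory carried to the next period
        if 0 <= left <= MAX_INV:                      # demand met on time, storage respected
            cost = (SETUP if produce > 0 else 0) + unit_cost[period] * produce
            best_cost = min(best_cost, cost + best(period + 1, left))
    return best_cost

print(best(0, 0))   # minimum total cost, starting with no inventory
```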

13 Resource Allocation Problems
Limited resources must be allocated to different activities. Each activity has a benefit value which is variable and depends on the amount of the resource assigned to the activity. The goal is to determine how to allocate the resources to the activities such that the total benefit is maximized.

14 Resource Allocation Problems: Example
A college student has 6 days remaining before final exams begin in his 4 courses. He wants to allocate the study time as effectively as possible. He needs at least 1 day for each course and wants to concentrate on just one course each day, so 1, 2, or 3 days should be allocated to each course. He estimates that the alternative allocations for each course would yield the number of grade points shown in the following table. How many days should be allocated to each course? [Table: estimated grade points for each course under allocations of 1, 2, or 3 days; the specific values are garbled in this transcript.]
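The same stage/state idea applies here: stages are the courses, the state is how many study days remain. The grade-point table on the slide is garbled in this transcript, so the `points` values in this sketch are made-up placeholders, not the course's actual numbers.

```python
# Hedged sketch: the grade-point values below are hypothetical.
from functools import lru_cache

points = {                       # points[course][days allocated]
    0: {1: 3, 2: 5, 3: 6},
    1: {1: 5, 2: 5, 3: 6},
    2: {1: 2, 2: 4, 3: 7},
    3: {1: 6, 2: 7, 3: 9},
}
N_COURSES, TOTAL_DAYS = 4, 6

@lru_cache(maxsize=None)
def best(course, days_left):
    """Maximum total grade points for the remaining courses with `days_left` days to spend."""
    if course == N_COURSES:
        return 0 if days_left == 0 else float("-inf")   # all days must be used
    best_pts = float("-inf")
    for d in (1, 2, 3):                                  # each course gets 1, 2, or 3 days
        if d <= days_left:
            best_pts = max(best_pts, points[course][d] + best(course + 1, days_left - d))
    return best_pts

print(best(0, TOTAL_DAYS))
```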

15 0-1 Knapsack problem
Given a knapsack with maximum capacity W, and a set S consisting of n items. Each item i has some weight wi and benefit value bi (all wi, bi and W are integer values). Problem: how to pack the knapsack to achieve the maximum total value of the packed items? That is, choose a subset T ⊆ S that maximizes the sum of the bi over T, subject to the sum of the wi over T being at most W.

16 0-1 Knapsack problem
Knapsack capacity: W = 20. Items, as (weight wi, benefit bi) pairs: (2, 3), (3, 4), (4, 5), (5, 8), (9, 10).

17 0-1 Knapsack problem: brute-force approach
Since there are n items, there are 2^n possible combinations of items. We go through all combinations and find the one with the largest total value whose total weight is at most W. The running time is O(2^n).
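A small sketch of this 2^n enumeration, using the (weight, benefit) pairs read off the garbled table on slide 16, so the item data itself is a best-effort reconstruction.

```python
# Brute-force 0-1 knapsack: try every subset and keep the best feasible one.
from itertools import combinations

items = [(2, 3), (3, 4), (4, 5), (5, 8), (9, 10)]   # (wi, bi), reconstructed from slide 16
W = 20

best_value, best_subset = 0, ()
for r in range(len(items) + 1):
    for subset in combinations(items, r):            # all 2^n subsets of the items
        weight = sum(w for w, b in subset)
        value = sum(b for w, b in subset)
        if weight <= W and value > best_value:
            best_value, best_subset = value, subset

print(best_value, best_subset)
```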

18 Defining a Subproblem
Let’s add another parameter, w, which represents a weight limit for the subset of items considered. The subproblem is then to compute B[k,w]: the best benefit achievable using only the first k items with total weight at most w.

19 Recursive Formula
The best subset of the first k items under weight limit w either contains item k or it does not. First case: wk > w. Item k can’t be part of the solution, since if it were, the total weight would be greater than w, which is unacceptable; so B[k,w] = B[k-1,w]. Second case: wk <= w. Then item k can be in the solution, and we choose the case with the greater value: B[k,w] = max(B[k-1,w], B[k-1,w-wk] + bk).
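A bottom-up sketch of this recurrence: B[k][w] is the best value achievable using the first k items under weight limit w, with the two cases above implemented directly.

```python
# 0-1 knapsack DP table: B[k][w] = best benefit using the first k items, weight limit w.
def knapsack(weights, benefits, W):
    n = len(weights)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):
        wk, bk = weights[k - 1], benefits[k - 1]
        for w in range(W + 1):
            if wk > w:                                  # case 1: item k cannot fit
                B[k][w] = B[k - 1][w]
            else:                                       # case 2: skip item k or take it
                B[k][w] = max(B[k - 1][w], B[k - 1][w - wk] + bk)
    return B[n][W]

# With the item data reconstructed from slide 16 this prints 26.
print(knapsack([2, 3, 4, 5, 9], [3, 4, 5, 8, 10], 20))
```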

20 Shortest Path Using Dynamic Programming

21 Shortest Paths: Dynamic Programming
Def. OPT(i, v) = length of the shortest v-t path P using at most i edges. Case 1: P uses at most i-1 edges; then OPT(i, v) = OPT(i-1, v). Case 2: P uses exactly i edges; if (v, w) is the first edge, then OPT pays the cost of (v, w) and then selects the best w-t path using at most i-1 edges. Combining the two cases: OPT(i, v) = min( OPT(i-1, v), min over edges (v, w) of { c(v, w) + OPT(i-1, w) } ).
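A memoized sketch of this recurrence (one possible implementation, not necessarily the one used in the course): opt(i, v) is the length of the shortest path from v to the target using at most i edges, and `graph` is an adjacency map from each node to its neighbours and edge costs.

```python
# Shortest path via the OPT(i, v) recurrence (Bellman-Ford style).
from functools import lru_cache

def shortest_path_length(graph, source, target):
    """graph: dict mapping node -> {neighbour: edge cost}."""
    n = len(graph)

    @lru_cache(maxsize=None)
    def opt(i, v):
        if v == target:
            return 0
        if i == 0:
            return float("inf")
        best = opt(i - 1, v)                         # case 1: use at most i-1 edges
        for w, cost in graph[v].items():             # case 2: first edge is (v, w)
            best = min(best, cost + opt(i - 1, w))
        return best

    return opt(n - 1, source)                        # a shortest path uses at most |V|-1 edges
```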

22 Shortest Paths: Dynamic Programming
[Figure: the same weighted graph (start node a, end node b) shown on slide 4.]

23 Shortest Paths: Dynamic Programming
[Figure: the end node b with its edges to e (5), f (7), and g (6).] b = min(6+g, 5+e, 7+f)

24 Shortest Paths: Dynamic Programming
[Figure: the middle of the graph around nodes c, d, e, f, g.] g = min(6+d, 14); e = min(3+c, 7+d, 7+g); f = 2+c

25 Shortest Paths: Dynamic Programming
[Figure: the start node a with its edges to c and d.] c = min(5, 11+d); d = 3

26 Shortest Paths: Dynamic Programming
b = min(6+g, 5+e, 7+f) g = min(6+d, 14) e = min(3+c, 7+d, 7+g) f = 2+c c = min(5, 11+d) d = 3 via “a to d”

28 Shortest Paths: Dynamic Programming
b = min(6+g, 5+e, 7+f) g = min(6+3, 14) e = min(3+c, 7+3, 7+g) f = 2+c c = min(5, 11+3) d = 3 via “a to d”

29 Shortest Paths: Dynamic Programming
b = min(6+g, 5+e, 7+f) g = min(9, 14) e = min(3+c, 10, 7+g) f = 2+c c = min(5, 14) d = 3 via “a to d”

31 Shortest Paths: Dynamic Programming
b = min(6+g, 5+e, 7+f) g = 9 via “a to d to g” e = min(3+c, 10, 7+g) f = 2+c c = 5 via “a to c” d = 3 via “a to d” [Figure: nodes a, c, d with edge weights 5, 11, 3, showing how c and d were determined.]

32 Shortest Paths: Dynamic Programming
b = min(6+g, 5+e, 7+f) g = 9 via “a to d to g” e = min(3+c, 10, 7+g) f = 2+c c = 5 via “a to c” d = 3 via “a to d” [Figure: nodes a, d, g with edge weights 14 and 6, showing how g was determined.]

33 Shortest Paths: Dynamic Programming
b = min(6+9, 5+e, 7+f) g = 9 via “a to d to g” e = min(3+5, 10, 7+9) f = 2+5 c = 5 via “a to c” d = 3 via “a to d”

34 Shortest Paths: Dynamic Programming
b = min(15, 5+e, 7+f) g = 9 via “a to d to g” e = min(8, 10, 16) f = 7 via “a to c to f” c = 5 via “a to c” d = 3 via “a to d”

36 Shortest Paths: Dynamic Programming
b = min(15, 5+e, 7+f) g = 9 via “a to d to g” e = 8 via “a to c to e” f = 7 via “a to c to f” c = 5 via “a to c” d = 3 via “a to d”

38 Shortest Paths: Dynamic Programming
b = min(15, 5+8, 7+7) g = 9 via “a to d to g” e = 8 via “a to c to e” f = 7 via “a to c to f” c = 5 via “a to c” d = 3 via “a to d”

39 Shortest Paths: Dynamic Programming
b = min(15, 13, 14) g = 9 via “a to d to g” e = 8 via “a to c to e” f = 7 via “a to c to f” c = 5 via “a to c” d = 3 via “a to d”

41 Shortest Paths: Dynamic Programming
b = 13 via “a to c to e to b” g = 9 via “a to d to g” e = 8 via “a to c to e” f = 7 via “a to c to f” c = 5 via “a to c” d = 3 via “a to d”

42 Shortest Paths: Dynamic Programming
[Figure: the full graph with the shortest path a to c to e to b highlighted.] Analysis: O(VE) time, O(V^2) space. Shortest Path = 13.
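As a check on the worked example, here is a self-contained bottom-up version run on the edge list reconstructed from the slide figure (the edges are a best-effort reading of the garbled graph, treated as undirected); it should reproduce the value 13 for the distance from a to b.

```python
# Bottom-up relaxation over the reconstructed example graph.
edges = [("a", "c", 5), ("a", "d", 3), ("a", "g", 14), ("c", "d", 11),
         ("c", "e", 3), ("c", "f", 2), ("d", "e", 7), ("d", "g", 6),
         ("e", "g", 7), ("e", "b", 5), ("f", "b", 7), ("g", "b", 6)]
nodes = {u for e in edges for u in e[:2]}

dist = {v: float("inf") for v in nodes}
dist["a"] = 0
for _ in range(len(nodes) - 1):        # a shortest path uses at most |V|-1 edges
    for u, v, w in edges:              # relax every edge in both directions
        dist[v] = min(dist[v], dist[u] + w)
        dist[u] = min(dist[u], dist[v] + w)

print(dist["b"])   # expected: 13, via a -> c -> e -> b
```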

43 Longest Common Subsequence
Problem: Given 2 sequences, X = x1,...,xm and Y = y1,...,yn, find a common subsequence whose length is maximum. Example pairs: springtime / printing, ncaa tournament / north carolina, basketball / krzyzewski. A subsequence need not be consecutive, but must be in order.

44 Other sequence problems
Edit distance: Given 2 sequences, X = x1,...,xm and Y = y1,...,yn, what is the minimum number of deletions, insertions, and changes needed to transform one into the other? Protein sequence alignment: Given a score matrix on amino acid pairs, s(a,b) for a,b ∈ A, and 2 amino acid sequences, X = x1,...,xm ∈ A^m and Y = y1,...,yn ∈ A^n, find the alignment with the lowest score…
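For the edit-distance problem just stated, a minimal DP sketch assuming unit cost for every insertion, deletion, and change (the exact cost model used in the course may differ).

```python
# Edit distance (Levenshtein) with unit costs.
def edit_distance(x, y):
    m, n = len(x), len(y)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all of x[:i]
    for j in range(n + 1):
        d[0][j] = j                      # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            change = 0 if x[i - 1] == y[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + change)  # match or change
    return d[m][n]

print(edit_distance("springtime", "printing"))
```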

45 Longest Common Subsequence (LCS)
Application: comparison of two DNA strings. Example: X = {A B C B D A B}, Y = {B D C A B A}. A longest common subsequence is B C B A (length 4): X = A B C B D A B, Y = B D C A B A. A brute-force algorithm would compare each subsequence of X with the symbols in Y.

46 LCS Algorithm
If |X| = m and |Y| = n, then there are 2^m subsequences of X; checking each against Y takes O(n) comparisons. So the running time of the brute-force algorithm is O(n · 2^m). Notice that the LCS problem has optimal substructure: solutions of subproblems are parts of the final solution. Subproblems: “find the LCS of pairs of prefixes of X and Y”.

47 LCS Algorithm
First we’ll find the length of the LCS; later we’ll modify the algorithm to find the LCS itself. Define Xi and Yj to be the prefixes of X and Y of length i and j, respectively. Define c[i,j] to be the length of the LCS of Xi and Yj. Then the length of the LCS of X and Y will be c[m,n].

48 LCS recursive solution
We start with i = j = 0 (empty prefixes of X and Y). Since X0 and Y0 are empty strings, their LCS is always empty (i.e. c[0,0] = 0). The LCS of an empty string and any other string is empty, so for every i and j: c[0,j] = c[i,0] = 0.

49 LCS recursive solution
When we calculate c[i,j], we consider two cases. First case: x[i] = y[j]. One more symbol in strings X and Y matches, so the length of the LCS of Xi and Yj equals the length of the LCS of the smaller strings Xi-1 and Yj-1, plus 1: c[i,j] = c[i-1,j-1] + 1.

50 LCS recursive solution
Second case: x[i] != y[j]. As the symbols don’t match, our solution is not improved, and the length of LCS(Xi, Yj) is the same as before, i.e. the maximum of LCS(Xi, Yj-1) and LCS(Xi-1, Yj): c[i,j] = max(c[i-1,j], c[i,j-1]).

51 LCS Length Algorithm
LCS-Length(X, Y)
    m = length(X)                      // get the # of symbols in X
    n = length(Y)                      // get the # of symbols in Y
    for i = 1 to m: c[i,0] = 0         // special case: Y0
    for j = 1 to n: c[0,j] = 0         // special case: X0
    for i = 1 to m                     // for all Xi
        for j = 1 to n                 // for all Yj
            if ( Xi == Yj )
                c[i,j] = c[i-1,j-1] + 1
            else
                c[i,j] = max( c[i-1,j], c[i,j-1] )
    return c
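A direct Python transcription of the LCS-Length pseudocode above, returning the whole c table so that it can also be reused for the traceback described later.

```python
# LCS length table, following the pseudocode on slide 51.
def lcs_length(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]       # c[i][0] = c[0][j] = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:                # Xi == Yj
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

print(lcs_length("ABCB", "BDCAB")[4][5])   # 3, the LCS length for the example below
```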

52 LCS Example
We’ll see how the LCS algorithm works on the following example: X = ABCB, Y = BDCAB. What is the longest common subsequence of X and Y? LCS(X, Y) = BCB. X = A B C B, Y = B D C A B.

53 LCS Example (0)
X = ABCB; m = |X| = 4. Y = BDCAB; n = |Y| = 5. Allocate array c[5,4]. [Table: empty c[i,j] grid with rows indexed by Xi = A, B, C, B and columns indexed by Yj = B, D, C, A, B.]

54 LCS Example (1)
Initialize the boundary: for i = 1 to m, c[i,0] = 0; for j = 1 to n, c[0,j] = 0. [Table: c grid with row 0 and column 0 filled with zeros.]

55 LCS Example (2)
X = ABCB, Y = BDCAB. Filling starts at c[1,1], using the recurrence: if (Xi == Yj) c[i,j] = c[i-1,j-1] + 1, else c[i,j] = max(c[i-1,j], c[i,j-1]). X1 = A and Y1 = B do not match, so c[1,1] = 0.

56 LCS Example (3)
X1 = A does not match Y2 = D or Y3 = C either, so c[1,2] = c[1,3] = 0.

57 LCS Example (4)
X1 = A matches Y4 = A, so c[1,4] = c[0,3] + 1 = 1.

58 LCS Example (5)
X1 = A does not match Y5 = B, so c[1,5] = max(c[0,5], c[1,4]) = 1. Row 1 is now 0 0 0 1 1.

59 LCS Example (6)
X2 = B matches Y1 = B, so c[2,1] = c[1,0] + 1 = 1.

60 LCS Example (7)
X2 = B does not match Y2 = D, Y3 = C, or Y4 = A, so c[2,2] = c[2,3] = c[2,4] = 1.

61 LCS Example (8)
X2 = B matches Y5 = B, so c[2,5] = c[1,4] + 1 = 2. Row 2 is now 1 1 1 1 2.

62 LCS Example (10)
X3 = C does not match Y1 = B or Y2 = D, so c[3,1] = c[3,2] = 1.

63 LCS Example (11)
X3 = C matches Y3 = C, so c[3,3] = c[2,2] + 1 = 2.

64 LCS Example (12)
X3 = C does not match Y4 = A or Y5 = B, so c[3,4] = c[3,5] = 2. Row 3 is now 1 1 2 2 2.

65 LCS Example (13)
X4 = B matches Y1 = B, so c[4,1] = 1.

66 LCS Example (14)
X4 = B does not match Y2 = D, Y3 = C, or Y4 = A, so c[4,2] = 1, c[4,3] = 2, and c[4,4] = 2 (each the maximum of the neighbours above and to the left).

67 LCS Example (15)
X4 = B matches Y5 = B, so c[4,5] = c[3,4] + 1 = 3, which is the length of the LCS. The completed table:
    j:       0  1(B)  2(D)  3(C)  4(A)  5(B)
    i=0:     0    0     0     0     0     0
    i=1 (A): 0    0     0     0     1     1
    i=2 (B): 0    1     1     1     1     2
    i=3 (C): 0    1     1     2     2     2
    i=4 (B): 0    1     1     2     2     3

68 LCS Algorithm Running Time
The LCS algorithm calculates the value of each entry of the array c. So what is the running time? O(m·n), since each c[i,j] is calculated in constant time and there are m·n entries in the array.

69 How to find actual LCS
So far we have just found the length of the LCS, but not the LCS itself. We want to modify this algorithm to make it output the longest common subsequence of X and Y. Each c[i,j] depends on c[i-1,j] and c[i,j-1], or on c[i-1,j-1]. For each c[i,j] we can record how it was acquired; for example, a cell with c[i,j] = c[i-1,j-1] + 1 = 2 + 1 = 3 was obtained from the match case. [Figure: a 2x2 corner of the table with the values 2, 2, 2 surrounding the 3.]

70 How to find actual LCS - continued
Remember the recurrence: c[i,j] = c[i-1,j-1] + 1 when x[i] = y[j], and c[i,j] = max(c[i-1,j], c[i,j-1]) otherwise. So we can start from c[m,n] and go backwards. Whenever c[i,j] = c[i-1,j-1] + 1, remember x[i] (because x[i] is a part of the LCS). When i = 0 or j = 0 (i.e. we have reached the beginning), output the remembered letters in reverse order.
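A sketch of this traceback, reusing the c table produced by the lcs_length sketch given after slide 51: whenever the match case applied we record the character and step diagonally, otherwise we move toward the larger neighbouring entry.

```python
# Recover an LCS by walking the c table backwards from c[m][n].
def lcs_backtrack(c, X, Y):
    i, j, out = len(X), len(Y), []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:            # match case: this character is in the LCS
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:    # otherwise follow the larger neighbour
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))           # letters were collected in reverse order

# Uses lcs_length from the sketch after slide 51.
print(lcs_backtrack(lcs_length("ABCB", "BDCAB"), "ABCB", "BDCAB"))   # "BCB"
```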

71 Finding LCS
Start at c[4,5] = 3 and trace back through the table. [Table: the completed c grid with the traceback path highlighted.]

72 Finding LCS (2)
LCS (reversed order): B C B. LCS (straight order): B C B. [Table: the completed c grid with the matched cells for B, C, B highlighted.]

73 Another Example
Compute the LCS of the two strings: Y = CGATAATTGAGA, X = GTTCCTAATA.

74 [Table: the completed c[i,j] array for X = GTTCCTAATA and Y = CGATAATTGAGA.] LCS = GTAATA
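Running the lcs_length and lcs_backtrack sketches from the earlier slides on this second example should report length 6 and recover a subsequence such as GTAATA, matching the answer on the slide.

```python
# Usage of the earlier LCS sketches on the second example.
X, Y = "GTTCCTAATA", "CGATAATTGAGA"
c = lcs_length(X, Y)
print(c[len(X)][len(Y)])        # 6
print(lcs_backtrack(c, X, Y))   # a length-6 common subsequence such as "GTAATA"
```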

75 Summary: Dynamic programming
DP is a method for solving certain kinds of problems. DP can be applied when the solution of a problem includes solutions to subproblems. We need to find a recursive formula for the solution. We can then solve the subproblems recursively, starting from the trivial case, and save their solutions in memory. In the end we’ll get the solution of the whole problem.

76 Summary: Properties of a problem that can be solved with dynamic programming
Simple Subproblems: we should be able to break the original problem into smaller subproblems that have the same structure. Optimal Substructure: the solution to the problem must be a composition of subproblem solutions. Subproblem Overlap: optimal solutions to unrelated subproblems can contain subproblems in common.

77 Summary: Longest Common Subsequence (LCS)
Problem: how to find the longest pattern of characters that is common to two text strings X and Y. Dynamic programming algorithm: solve subproblems until we get the final solution. Subproblem: first find the LCS of prefixes of X and Y. This problem has optimal substructure: the LCS of two prefixes is always a part of the LCS of the bigger strings.

78 Summary: Longest Common Subsequence (LCS)
Define Xi, Yj to be prefixes of X and Y of length i and j; m = |X|, n = |Y|. We store the length of LCS(Xi, Yj) in c[i,j]. Trivial cases: LCS(X0, Yj) and LCS(Xi, Y0) are empty (so c[0,j] = c[i,0] = 0). Recursive formula for c[i,j]: c[i,j] = c[i-1,j-1] + 1 if x[i] = y[j], and c[i,j] = max(c[i-1,j], c[i,j-1]) otherwise. c[m,n] is the final solution.

79 Summary: Longest Common Subsequence (LCS)
After we have filled the array c, we can use this data to find the characters that constitute the longest common subsequence. The algorithm runs in O(m·n) time, which is much better than the O(n · 2^m) of the brute-force algorithm.

80 More problems
Matrix chain multiplication, minimum convex decomposition of a polygon, weighted interval scheduling, RNA sequence secondary structure, knapsack problem, shortest paths.

