Dynamic Programming Credits: Many of these slides were originally authored by Jeff Edmonds, York University. Thanks Jeff!


1 Dynamic Programming Credits: Many of these slides were originally authored by Jeff Edmonds, York University. Thanks Jeff!

2 COSC 3101E 2 Optimization Problems For most, the best known algorithm runs in exponential time. Some have quick Greedy or Dynamic Programming algorithms.

3 COSC 3101E 3 What is Dynamic Programming? Dynamic programming solves optimization problems by combining solutions to subproblems. “Programming” refers to a tabular method with a series of choices, not “coding”.

4 COSC 3101E 4 What is Dynamic Programming? A set of choices must be made to arrive at an optimal solution As choices are made, subproblems of the same form arise frequently The key is to store the solutions of subproblems to be reused in the future

5 COSC 3101E 5 Example 1 Fibonacci numbers are defined by: F(0) = 0, F(1) = 1, and F(n) = F(n-1) + F(n-2) for n ≥ 2.

6 COSC 3101E 6 Fibonacci Example Time?

7 COSC 3101E 7 Fibonacci Example Time: exponential. The naive recursion wastes time redoing the same work.

8 COSC 3101E 8 Memoization Definition: An algorithmic technique which saves (memoizes) a computed answer for later reuse, rather than recomputing the answer. Memo functions were invented by Professor Donald Michie of Edinburgh University. The idea was further developed by Robin Popplestone in his Pop2 language, and it was later integrated into LISP. The same principle is found at the hardware level in computer architectures which use a cache to store recently accessed memory locations. ["'Memo' functions and machine learning", Donald Michie, Nature, 218, 19-22, 1968]

9 COSC 3101E 9 Memoization in Optimization Remember the solutions for the subinstances. If the same subinstance needs to be solved again, the same answer can be used.

10 COSC 3101E 10 Memoization Memoization reduces the complexity from exponential to linear!
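As a concrete sketch of the idea (in Python; the function name and dictionary-based memo table are illustrative choices, not the course's code), memoized Fibonacci looks like this:

```python
# Memoized Fibonacci: each value F(n) is computed at most once,
# then looked up on every later request.
def fib(n, memo={0: 0, 1: 1}):
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

With the memo table, fib(100) finishes instantly, whereas the naive recursion would take on the order of 2^n steps.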

11 COSC 3101E 11 From Memoization to Dynamic Programming Determine the set of subinstances that need to be solved. Instead of recursing from top to bottom, solve each of the required subinstances in smallest to largest order, storing results along the way.

12 COSC 3101E 12 Dynamic Programming First determine the complete set of subinstances: {100, 99, 98, …, 0}. Compute them in an order such that no friend must wait: smallest to largest.

13 COSC 3101E 13 Dynamic Programming Fill out a table containing an optimal solution for each subinstance.

n:    0, 1, 2, 3, 4, 5, …, 99, 100
F(n): 0, 1, 1, 2, 3, 5, …, 2.19 × 10^20, 3.54 × 10^20
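The same table can be filled bottom-up without any recursion; a minimal Python sketch (names are illustrative):

```python
# Bottom-up Fibonacci: solve subinstances smallest to largest,
# storing each result in the table as we go.
def fib_table(n):
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```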

14 COSC 3101E 14 Dynamic Programming Time Complexity? Linear!

15 COSC 3101E 15 Dynamic Programming vs Divide-and-Conquer Recall the divide-and-conquer approach –Partition the problem into independent subproblems –Solve the subproblems recursively –Combine solutions of subproblems –e.g., mergesort, quicksort This contrasts with the dynamic programming approach

16 COSC 3101E 16 Dynamic Programming vs Divide-and-Conquer Dynamic programming is applicable when subproblems are not independent –i.e., subproblems share subsubproblems –Solve every subsubproblem only once and store the answer for use when it reappears A divide-and-conquer approach will do more work than necessary

17 COSC 3101E 17 A Sequence of 3 Steps A dynamic programming approach consists of a sequence of 3 steps 1.Characterize the structure of an optimal solution 2.Recursively define the value of an optimal solution 3.Compute the value of an optimal solution in a bottom-up fashion

18 COSC 3101E 18 Elements of Dynamic Programming For dynamic programming to be applicable, an optimization problem must have: 1.Optimal substructure –An optimal solution to the problem contains within it optimal solutions to subproblems (but this may also mean a greedy strategy applies) 2.Overlapping subproblems –The space of subproblems must be small; i.e., the same subproblems are encountered over and over

19 COSC 3101E 19 Elements of Dynamic Programming Dynamic programming uses optimal substructure from the bottom up: –First find optimal solutions to subproblems –Then choose which to use in an optimal solution to the problem.

20 Example 2. Making Change

21 COSC 3101E 21 Making Change To find the minimum number of Canadian coins to make any amount, the greedy method always works –At each step, just choose the largest coin that does not overshoot the desired amount The greedy method would not work if we did not have 5¢ coins –For 31 cents, the greedy method gives seven coins (25+1+1+1+1+1+1), but we can do it with four (10+10+10+1) The greedy method also would not work if we had a 21¢ coin –For 63 cents, the greedy method gives six coins (25+25+10+1+1+1), but we can do it with three (21+21+21) How can we find the minimum number of coins for any given set of denominations?
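The failure cases above are easy to reproduce; a small Python sketch of the greedy rule (illustrative, not the course's code):

```python
# Greedy change-making: repeatedly take the largest coin that fits.
def greedy_change(denoms, amount):
    coins = []
    for d in sorted(denoms, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins
```

greedy_change([1, 10, 25], 31) returns seven coins, and greedy_change([1, 5, 10, 21, 25], 63) returns six, even though four and three coins respectively suffice.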

22 COSC 3101E 22 Example We assume coins in the following denominations: 1¢ 5¢ 10¢ 21¢ 25¢ We’ll use 63¢ as our goal

23 COSC 3101E 23 A simple solution We always need a 1¢ coin, otherwise no solution exists for making one cent To make K cents: –If there is a K-cent coin, then that one coin is the minimum –Otherwise, for each value i < K, Find the minimum number of coins needed to make i cents Find the minimum number of coins needed to make K - i cents –Choose the i that minimizes this sum This algorithm can be viewed as divide-and-conquer, or as brute force –This solution is very recursive –It requires exponential work –It is infeasible to solve for 63¢

24 COSC 3101E 24 Another solution We can reduce the problem recursively by choosing the first coin, and solving for the amount that is left For 63¢: –One 1¢ coin plus the best solution for 62¢ –One 5¢ coin plus the best solution for 58¢ –One 10¢ coin plus the best solution for 53¢ –One 21¢ coin plus the best solution for 42¢ –One 25¢ coin plus the best solution for 38¢ Choose the best solution from among the 5 given above Instead of solving 62 recursive problems, we solve 5 This is still a very expensive algorithm

25 End of Lecture 18 Nov 15, 2007

26 COSC 3101E 26

27 COSC 3101E 27 A dynamic programming solution Idea: Solve first for one cent, then two cents, then three cents, etc., up to the desired amount –Save each answer in an array! For each new amount N, compute all the possible pairs of previous answers which sum to N –For example, to find the solution for 13¢, First, solve for all of 1¢, 2¢, 3¢,..., 12¢ Next, choose the best solution among: –Solution for 1¢ + solution for 12¢ –Solution for 2¢ + solution for 11¢ –Solution for 3¢ + solution for 10¢ –Solution for 4¢ + solution for 9¢ –Solution for 5¢ + solution for 8¢ –Solution for 6¢ + solution for 7¢

28 COSC 3101E 28 An even better dynamic programming solution In fact, we can do a bit better than this, since the coins come in only a small number of denominations (1¢, 5¢, 10¢, 21¢, 25¢) For each new amount N, compute cost of solution based on smaller sum + one additional coin. –For example, to find the solution for 13¢, First, solve for all of 1¢, 2¢, 3¢,..., 12¢ Next, choose the best solution among: –solution for 12¢ + 1¢ coin –solution for 8¢ + 5¢ coin –solution for 3¢ + 10¢ coin

29 COSC 3101E 29 Making Change: Recurrence Relation

30 COSC 3101E 30

function coins = makechange(d, sum)
% precondition: d = set of denominations (must include penny), sum = change to be made
% postcondition: coins = a minimal set of coins summing to sum
mincoins(0) = 0
mincoins(1...sum) = Inf
for i = 1:sum
    % LI: mincoins(0...i-1) holds the min number of coins required to make change of (0...i-1).
    %     onecoin(1...i-1) holds the value of one coin in a minimal set of coins making the correct change.
    for j = 1:length(d)                  % try each denomination
        if d(j) <= i & mincoins(i-d(j)) + 1 < mincoins(i)
            mincoins(i) = mincoins(i-d(j)) + 1   % best solution so far
            onecoin(i) = d(j)
        end
    end
end
ncoins = mincoins(sum)
change = sum
for i = 1:ncoins                         % recover coins in an optimal set
    coins(i) = onecoin(change)
    change = change - coins(i)
end
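For reference, the same algorithm as runnable Python (a translation sketch; variable names mirror the pseudocode above, and the function name is mine):

```python
from math import inf

# mincoins[i] = fewest coins making i cents; onecoin[i] = one coin
# in such a minimal set, used to recover the coins at the end.
def make_change(denoms, total):
    mincoins = [0] + [inf] * total
    onecoin = [0] * (total + 1)
    for i in range(1, total + 1):
        for d in denoms:                       # try each denomination
            if d <= i and mincoins[i - d] + 1 < mincoins[i]:
                mincoins[i] = mincoins[i - d] + 1
                onecoin[i] = d
    coins, change = [], total
    while change > 0:                          # recover coins in an optimal set
        coins.append(onecoin[change])
        change -= onecoin[change]
    return coins
```

For the slides' denominations {1, 5, 10, 21, 25} and 63¢, this returns three 21¢ coins.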

31 COSC 3101E 31 How good is the algorithm? The first algorithm is exponential, with a base proportional to sum (e.g., 63). The second algorithm is much better: exponential with a base proportional to the number of denominations (e.g., 5). The dynamic programming algorithm is better still: it runs in Θ(d × sum) time, filling in one table entry per (denomination, amount) pair.

32 COSC 3101E 32 Elements of Dynamic Programming For dynamic programming to be applicable, an optimization problem must have: 1.Optimal substructure –An optimal solution to the problem contains within it optimal solutions to subproblems (but this may also mean a greedy strategy applies)

33 COSC 3101E 33 Elements of Dynamic Programming Dynamic programming uses optimal substructure from the bottom up: –First find optimal solutions to subproblems –Then choose which to use in an optimal solution to the problem.

34 COSC 3101E 34 Example Proof of Optimal Substructure Consider the problem of making N¢ with the fewest number of coins –Either there is an N¢ coin, or –The set of coins n making up an optimal solution for N¢ can be divided into two nonempty subsets, n_1 and n_2, which make N_1¢ and N_2¢ in change respectively, where N_1¢ + N_2¢ = N¢. –If either N_1¢ or N_2¢ could be made with fewer coins, then clearly N¢ could be made with fewer coins, so the solution would not have been optimal. –Thus each subset n_1 and n_2 must itself be an optimal solution to the subproblem of making N_1¢ or N_2¢ in change, respectively.

35 COSC 3101E 35 Optimal Substructure Optimal substructure means that –Every optimal solution to a problem contains... –...optimal solutions to subproblems Optimal substructure does not mean that –If you have optimal solutions to all subproblems... –...then you can combine any of them to get an optimal solution to a larger problem. Example: In Canadian coinage, –The optimal solution to 7¢ is 5¢ + 1¢ + 1¢, and –The optimal solution to 6¢ is 5¢ + 1¢, but –The optimal solution to 13¢ is not 5¢ + 1¢ + 1¢ + 5¢ + 1¢ But there is some way of dividing up 13¢ into subsets with optimal solutions (say, 11¢ + 2¢) that will give an optimal solution for 13¢ –Hence, the making change problem exhibits optimal substructure.

36 COSC 3101E 36 Thus the step of choosing which subsolutions to combine is a key part of a dynamic programming algorithm. Optimal Substructure

37 Don’t all problems have this optimal substructure property?

38 COSC 3101E 38 Longest simple path Consider a graph on vertices A, B, C, D with weighted edges [figure omitted]. The longest simple path (path not containing a cycle) from A to D is A → B → C → D. However, the subpath A → B is not the longest simple path from A to B (A → C → B is longer). The principle of optimality is not satisfied for this problem. Hence, the longest simple path problem cannot be solved by a dynamic programming approach.

39 COSC 3101E 39 Example 3. Knapsack Problem Get as much value as you can into the knapsack

40 COSC 3101E 40 The (General) 0-1 Knapsack Problem 0-1 knapsack problem: n items. Item i is worth $v_i and weighs w_i pounds. Find a most valuable subset of items with total weight ≤ W. v_i, w_i and W are all integers. We have to either take an item or not take it; we can't take part of it. Is there a greedy solution to this problem?

41 COSC 3101E 41 What are good greedy local choices? Select most valuable object? Select smallest object? Select object most valuable by weight?

42 COSC 3101E 42 Some example problem instances Select most valuable object? Select smallest object? Select object most valuable by weight? All Fail!

43 COSC 3101E 43 Simplified 0-1 Knapsack Problem The general 0-1 knapsack problem cannot be solved by a greedy algorithm. What if we make the problem simpler: Can this simplified knapsack problem be solved by a greedy algorithm? No!

44 COSC 3101E 44 Some example problem instances Select largest (most valuable) object? Select smallest object? Both Fail!

45 COSC 3101E 45 Approximate Greedy Solution For the simplified knapsack problem: the greedy solution (taking the most valuable object first) isn’t that bad:

46 End of Lecture 19 Nov 20, 2007

47 COSC 3101E 47

48 COSC 3101E 48 Approximate Greedy Solution

49 COSC 3101E 49 Dynamic Programming Solution The General 0-1 Knapsack Problem can be solved by dynamic programming.

50 COSC 3101E 50 Correctness One of these must be true!

51 COSC 3101E 51 Bottom-Up Computation

52 COSC 3101E 52 Integer Knapsack Solution Value=0 if no items or no knapsack.

53 COSC 3101E 53 Integer Knapsack Solution Fill in table row-wise Recurrence relation

54 COSC 3101E 54 Example

Items:
i:  1  2  3  4  5  6
v:  1  3  5  2  6 10
w:  2  3  1  5  3  5

c[i,w]:
i  Allowed Items    w = 0  1  2  3  4  5  6
0  {}                   0  0  0  0  0  0  0
1  {1}                  0  0  1  1  1  1  1
2  {1 2}                0  0  1  3  3  4  4
3  {1 2 3}              0  5  5  6  8  8  9
4  {1 2 3 4}            0  5  5  6  8  8  9
5  {1 2 3 4 5}          0  5  5  6 11 11 12
6  {1 2 3 4 5 6}        0  5  5  6 11 11 15
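The table comes from the recurrence c[i,w] = max(c[i-1,w], v_i + c[i-1, w-w_i]); a Python sketch that reproduces it (names are illustrative):

```python
# 0-1 knapsack by the row-wise recurrence from the slides:
# c[i][w] = best value using items 1..i with capacity w.
def knapsack(items, W):
    # items: list of (value, weight) pairs, in item order
    c = [[0] * (W + 1) for _ in range(len(items) + 1)]
    for i, (v, w) in enumerate(items, start=1):
        for cap in range(W + 1):
            c[i][cap] = c[i - 1][cap]                     # don't take item i
            if w <= cap:                                  # or take it
                c[i][cap] = max(c[i][cap], v + c[i - 1][cap - w])
    return c
```

knapsack([(1, 2), (3, 3), (5, 1), (2, 5), (6, 3), (10, 5)], 6) yields the rows of the example table, with c[6][6] = 15.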

55 COSC 3101E 55 Solving for the Items to Pack

i:  1  2  3  4  5  6
v:  1  3  5  2  6 10
w:  2  3  1  5  3  5

56 COSC 3101E 56 Second Example 5 4 3 6 2 3

c[i,w]:
i  Allowed Items    w = 0  1  2  3  4  5  6
0  {}                   0  0  0  0  0  0  0
1  {1}                  0  0  1  1  1  1  1
2  {1 2}                0  0  1  4  4  5  5
3  {1 2 3}              0  2  2  4  6  6  7
4  {1 2 3 4}            0  2  2  4  6  7  7
5  {1 2 3 4 5}          0  2  2  4  6  7  8
6  {1 2 3 4 5 6}        0  2  2  4  6  7  8

57 COSC 3101E 57 Knapsack Problem: Running Time Running time Θ(n × W) (cf. making change: Θ(d × sum)). –Not polynomial in input size!

58 End of Lecture 20 Nov 22, 2007

59 COSC 3101E 59 Recall: Knapsack Problem 5 4 3 6 2 3

c[i,w]:
i  Allowed Items    w = 0  1  2  3  4  5  6
0  {}                   0  0  0  0  0  0  0
1  {1}                  0  0  1  1  1  1  1
2  {1 2}                0  0  1  4  4  5  5
3  {1 2 3}              0  2  2  4  6  6  7
4  {1 2 3 4}            0  2  2  4  6  7  7
5  {1 2 3 4 5}          0  2  2  4  6  7  8
6  {1 2 3 4 5 6}        0  2  2  4  6  7  8

60 COSC 3101E 60 Observation from Last Day (Jonathon): We could still implement this recurrence relation directly as a recursive program.

61 COSC 3101E 61 Recall: Memoization in Optimization Remember the solutions for the subinstances. If the same subinstance needs to be solved again, the same answer can be used.

62 COSC 3101E 62 Memoization Memoization reduces the complexity from exponential to linear!

63 COSC 3101E 63 From Memoization to Dynamic Programming Determine the set of subinstances that need to be solved. Instead of recursing from top to bottom, solve each of the required subinstances in smallest to largest order, storing results along the way.

64 COSC 3101E 64 Dynamic Programming Examples 1.Fibonacci numbers 2.Making change 3.0-1 Knapsack problem 4.Activity Scheduling with profits

65 COSC 3101E 65 Recall: The Activity (Job/Event) Selection Problem Ingredients: Instances: Events with starting and finishing times <(s_1,f_1), (s_2,f_2), …, (s_n,f_n)>. Solutions: A set of events that do not overlap. Value of Solution: The number of events scheduled. Goal: Given a set of events, schedule as many as possible.

66 COSC 3101E 66 From Previous Lecture: the problem can be solved by a greedy algorithm. Greedy Criterion: Earliest Finishing Time. Motivation: schedule the event that will free up your room for someone else as soon as possible. It works!

67 COSC 3101E 67 But what if activities have different values? Activity Selection with Profits:

68 COSC 3101E 68 Will a greedy algorithm based on finishing time still work? No!

69 COSC 3101E 69 Dynamic Programming Solution Time?

70 COSC 3101E 70 Step 1. Define an array of values to compute

71 COSC 3101E 71 Step 2. Provide a Recurrent Solution Decide not to schedule activity i Profit from scheduling activity i Optimal profit from scheduling activities that end before activity i begins One of these must be true!

72 COSC 3101E 72 Step 3. Provide an Algorithm

function A = actselwithp(g, H, n)
% assumes inputs sorted by finishing time
A(0) = 0
for i = 1:n
    A(i) = max(A(i-1), g(i) + A(H(i)))
end

Running time? O(n)

73 COSC 3101E 73 Step 4. Compute Optimal Solution

function actstring = printasp(A, H, i, actstring)
if i == 0
    return
end
if A(i) > A(i-1)
    actstring = printasp(A, H, H(i), actstring)
    actstring = [actstring, sprintf('%d ', i)]
else
    actstring = printasp(A, H, i-1, actstring)
end

Invoke with: printasp(A, H, n, 'Activities to Schedule: ')
Running time? O(n)
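A Python sketch of the value computation (illustrative names; unlike the slides, which assume H is given, this version also computes H(i), the last activity finishing no later than activity i starts):

```python
# Activity selection with profits. Activities are sorted by finishing
# time; A[i] = best profit using only activities 1..i.
def act_sel_with_profits(s, f, g):
    n = len(s)
    H = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(1, i):
            if f[j - 1] <= s[i - 1]:        # activity j ends before i starts
                H[i] = j
    A = [0] * (n + 1)
    for i in range(1, n + 1):
        A[i] = max(A[i - 1],                # skip activity i
                   g[i - 1] + A[H[i]])      # or schedule it
    return A, H
```

For the example that follows (starts 0, 2, 3, 2; finishes 3, 6, 6, 10; profits 20, 30, 20, 30) this gives H = 0, 0, 1, 0 and an optimal profit of 40.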

74 COSC 3101E 74 Example
Activity i:   1   2   3   4
Start s_i:    0   2   3   2
Finish f_i:   3   6   6  10
Profit g_i:  20  30  20  30
H(i):         ?   ?   ?   ?

75 COSC 3101E 75 Example
Activity i:   1   2   3   4
Start s_i:    0   2   3   2
Finish f_i:   3   6   6  10
Profit g_i:  20  30  20  30
H(i):         0   ?   ?   ?

76 COSC 3101E 76 Example
Activity i:   1   2   3   4
Start s_i:    0   2   3   2
Finish f_i:   3   6   6  10
Profit g_i:  20  30  20  30
H(i):         0   0   ?   ?

77 COSC 3101E 77 Example
Activity i:   1   2   3   4
Start s_i:    0   2   3   2
Finish f_i:   3   6   6  10
Profit g_i:  20  30  20  30
H(i):         0   0   1   ?

78 COSC 3101E 78 Example
Activity i:   1   2   3   4
Start s_i:    0   2   3   2
Finish f_i:   3   6   6  10
Profit g_i:  20  30  20  30
H(i):         0   0   1   0

79 COSC 3101E 79 Dynamic Programming Examples 1.Fibonacci numbers 2.Making change 3.0-1 Knapsack problem 4.Activity scheduling with profits 5.Longest common subsequence

80 COSC 3101E 80 Longest Common Subsequence Input: 2 sequences, X = x_1, ..., x_m and Y = y_1, ..., y_n. Output: a subsequence common to both whose length is longest. Note: A subsequence doesn't have to be consecutive, but it has to be in order.

81 COSC 3101E 81 Example 5. Longest Common Subsequence

82 COSC 3101E 82 Brute-force Algorithm

83 COSC 3101E 83 Step 1. Define Data Structure Input: 2 sequences, X = x_1, ..., x_m and Y = y_1, ..., y_n.

84 COSC 3101E 84 Step 2. Define Recurrence Case 1. Input sequence is empty

85 COSC 3101E 85 Recurrence Case 2. Last elements match (x_m = y_n): the matching element must be part of an LCS.

86 COSC 3101E 86 Recurrence Case 3. Last elements don't match: at most one of them is part of an LCS. Choose!

87 COSC 3101E 87 Proof of Optimal Substructure Part 1

88 COSC 3101E 88 Proof of Optimal Substructure Part 2

89 COSC 3101E 89 Proof of Optimal Substructure Part 3

90 COSC 3101E 90 Step 3. Provide an Algorithm Running time? O(mn)
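The algorithm (an m × n table filled by the three-case recurrence, then a trace-back for step 4) can be sketched in Python as follows (names are illustrative):

```python
# Longest common subsequence via the standard DP table.
def lcs(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:            # last elements match
                c[i][j] = c[i - 1][j - 1] + 1
            else:                               # drop one of the two
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # trace back through the table to recover one LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))
```

Filling the table is O(mn); the trace-back is O(m+n).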

91 COSC 3101E 91 Step 4. Compute Optimal Solution Running time?O(m+n)

92 COSC 3101E 92 Example

93 COSC 3101E 93 Dynamic Programming Examples 1.Fibonacci numbers 2.Making change 3.0-1 Knapsack problem 4.Activity scheduling with profits 5.Longest common subsequence 6.Longest increasing subsequence

94 COSC 3101E 94 Example 6: Longest Increasing Subsequence Input: 1 sequence, X = x_1, ..., x_n. Output: the longest increasing subsequence of X. Note: A subsequence doesn't have to be consecutive, but it has to be in order.

95 COSC 3101E 95 Step 1. Define an array of values to compute

96 COSC 3101E 96 Step 2. Provide a Recurrent Solution

97 COSC 3101E 97 Step 3. Provide an Algorithm

function A = LIS(X)
for i = 1:length(X)
    m = 0;
    for j = 1:i-1
        if X(j) < X(i) & A(j) > m
            m = A(j);
        end
    end
    A(i) = m + 1;
end

Running time? O(n^2)

98 COSC 3101E 98 Step 4. Compute Optimal Solution

function lis = printLIS(X, A)
[m, mi] = max(A);
lis = printLISm(X, A, mi, 'LIS: ');
lis = [lis, sprintf('%d', X(mi))];

function lis = printLISm(X, A, mi, lis)
if A(mi) > 1
    i = mi - 1;
    while ~(X(i) < X(mi) & A(i) == A(mi)-1)
        i = i - 1;
    end
    lis = printLISm(X, A, i, lis);
    lis = [lis, sprintf('%d ', X(i))];
end

Running time? O(n)

99 COSC 3101E 99 LIS Example X = 96 24 61 49 90 77 46 2 83 45 A = 1 1 2 2 3 3 2 1 4 2 > printLIS(X,A) > LIS: 24 49 77 83
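A Python version of the same computation, including the recovery step (illustrative names; the walk-back mirrors printLISm above):

```python
# Longest increasing subsequence in O(n^2):
# A[i] = length of the longest increasing subsequence ending at X[i].
def lis(X):
    n = len(X)
    A = [1] * n
    for i in range(n):
        for j in range(i):
            if X[j] < X[i]:
                A[i] = max(A[i], A[j] + 1)
    # recover one LIS by walking back through A
    best = max(range(n), key=lambda i: A[i])
    seq, need = [X[best]], A[best] - 1
    for i in range(best - 1, -1, -1):
        if A[i] == need and X[i] < seq[-1]:
            seq.append(X[i])
            need -= 1
    return A, seq[::-1]
```

On the example X above, this reproduces A = 1 1 2 2 3 3 2 1 4 2 and the subsequence 24 49 77 83.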

100 COSC 3101E 100 Dynamic Programming Examples 1.Fibonacci numbers 2.Making change 3.0-1 Knapsack problem 4.Activity scheduling with profits 5.Longest common subsequence 6.Longest increasing subsequence 7.Rock Climbing!

101 COSC 3101E 101 Example 7. Rock Climbing Problem A rock climber wants to get from the bottom of a rock to the top by the safest possible path. At every step, he reaches for handholds above him; some holds are safer than others. From every place, he can only reach a few of the nearest handholds.

102 COSC 3101E 102 Rock climbing (cont) At every step our climber can reach exactly three handholds: above, above and to the right, and above and to the left. Suppose we have a wall instead of the rock. There is a table of "danger ratings" provided. The "danger" of a path is the sum of the danger ratings of all handholds on the path.

103 COSC 3101E 103 Rock Climbing (cont) We represent the wall as a table. Every cell of the table contains the danger rating of the corresponding block:

C(i,j):
 2  8  9  5  8
 4  4  6  2  3
 5  7  5  6  1
 3  2  5  4  8

(the bottom row is the first layer of holds)

The obvious greedy algorithm does not give an optimal solution: it climbs holds rated 2, 5, 4, 2 (bottom to top), for a total rating of 13, while the rating of an optimal path (holds rated 4, 1, 2, 5) is 12. However, we can solve this problem by a dynamic programming strategy in polynomial time.

104 Idea: once we know the minimum total danger of getting to every handhold on a layer, we can easily compute the minimum danger of getting to holds on the next layer. For the top layer, that gives us an answer to the problem itself.

105 For every handhold, there is only one “path” danger rating. Once we have reached a hold, we don’t need to know how we got there to move to the next level. (Optimal substructure property: Once we know optimal solutions to subproblems, we can compute an optimal solution to the problem itself.)

106 COSC 3101E 106 Rock climbing: step 1. Step 1: Describe an array of values you want to compute. For 1 ≤ i ≤ n and 1 ≤ j ≤ m, define A(i,j) to be the cumulative rating of the least dangerous path from the bottom to the hold (i,j). The rating of the best path to the top will be the minimal value in the last row of the array.

107 COSC 3101E 107 Rock climbing: step 2. Step 2: Give a recurrence for computing later values from earlier (bottom-up). Let C(i,j) be the rating of the hold (i,j). There are three cases for A(i,j): Left (j=1): C(i,j)+min{A(i-1,j),A(i-1,j+1)} Right (j=m): C(i,j)+min{A(i-1,j-1),A(i-1,j)} Middle: C(i,j)+min{A(i-1,j-1),A(i-1,j),A(i-1,j+1)} For the first row (i=1), A(i,j)=C(i,j).

108 COSC 3101E 108 Rock climbing: simpler step 2 Add an initialization row: A(0,j) = 0. There is no danger to stand on the ground. Add two initialization columns: A(i,0) = A(i,m+1) = ∞. It is infinitely dangerous to try to hold on to the air where the wall ends. Now the recurrence becomes, for every i, j: A(i,j) = C(i,j) + min{A(i-1,j-1), A(i-1,j), A(i-1,j+1)}
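With the sentinels in place, the whole computation is one double loop; a Python sketch (illustrative names; C is given bottom row first):

```python
from math import inf

# A[i][j] = min total danger to reach hold (i, j); A[0][*] = 0 (ground),
# A[*][0] = A[*][m+1] = infinity (off the edge of the wall).
def safest_path(C):
    n, m = len(C), len(C[0])
    A = [[inf] * (m + 2) for _ in range(n + 1)]
    A[0][1:m + 1] = [0] * m
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            A[i][j] = C[i - 1][j - 1] + min(A[i - 1][j - 1],
                                            A[i - 1][j],
                                            A[i - 1][j + 1])
    return A
```

For the example wall (bottom row 3 2 5 4 8), the minimum of the last row of A is 12, the rating of the best path to the top.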

109 COSC 3101E 109 Rock climbing: example

C(i,j):
 2  8  9  5  8
 4  4  6  2  3
 5  7  5  6  1
 3  2  5  4  8

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞                       ∞
 2     ∞                       ∞
 3     ∞                       ∞
 4     ∞                       ∞

Initialization: A(i,0) = A(i,m+1) = ∞, A(0,j) = 0

110 COSC 3101E 110 Rock climbing: example

C(i,j):
 2  8  9  5  8
 4  4  6  2  3
 5  7  5  6  1
 3  2  5  4  8

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞

The values in the first row are the same as C(i,j).

111 COSC 3101E 111 Rock climbing: example

C(i,j):
 2  8  9  5  8
 4  4  6  2  3
 5  7  5  6  1
 3  2  5  4  8

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7

A(2,1) = 5 + min{∞, 3, 2} = 7.

112 COSC 3101E 112 Rock climbing: example

C(i,j):
 2  8  9  5  8
 4  4  6  2  3
 5  7  5  6  1
 3  2  5  4  8

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7   9

A(2,1) = 5 + min{∞, 3, 2} = 7. A(2,2) = 7 + min{3, 2, 5} = 9.

113 COSC 3101E 113 Rock climbing: example

C(i,j):
 2  8  9  5  8
 4  4  6  2  3
 5  7  5  6  1
 3  2  5  4  8

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7   9   7

A(2,1) = 5 + min{∞, 3, 2} = 7. A(2,2) = 7 + min{3, 2, 5} = 9. A(2,3) = 5 + min{2, 5, 4} = 7.

114 COSC 3101E 114 Rock climbing: example

C(i,j):
 2  8  9  5  8
 4  4  6  2  3
 5  7  5  6  1
 3  2  5  4  8

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7   9   7  10   5   ∞

The best cumulative rating on the second row is 5.

115 COSC 3101E 115 Rock climbing: example

C(i,j):
 2  8  9  5  8
 4  4  6  2  3
 5  7  5  6  1
 3  2  5  4  8

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7   9   7  10   5   ∞
 3     ∞  11  11  13   7   8   ∞

The best cumulative rating on the third row is 7.

116 COSC 3101E 116 Rock climbing: example

C(i,j):
 2  8  9  5  8
 4  4  6  2  3
 5  7  5  6  1
 3  2  5  4  8

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7   9   7  10   5   ∞
 3     ∞  11  11  13   7   8   ∞
 4     ∞  13  19  16  12  15   ∞

The best cumulative rating on the last row is 12.

117 COSC 3101E 117 Rock climbing: example

C(i,j):
 2  8  9  5  8
 4  4  6  2  3
 5  7  5  6  1
 3  2  5  4  8

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7   9   7  10   5   ∞
 3     ∞  11  11  13   7   8   ∞
 4     ∞  13  19  16  12  15   ∞

The best cumulative rating on the last row is 12. So the rating of the best path to the top is 12.

118 COSC 3101E 118 Rock climbing example: step 4

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7   9   7  10   5   ∞
 3     ∞  11  11  13   7   8   ∞
 4     ∞  13  19  16  12  15   ∞

To find the actual path we need to retrace backwards the decisions made during the calculation of A(i,j).

119 COSC 3101E 119 Rock climbing example: step 4

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7   9   7  10   5   ∞
 3     ∞  11  11  13   7   8   ∞
 4     ∞  13  19  16  12  15   ∞

The last hold was (4,4). To find the actual path we need to retrace backwards the decisions made during the calculation of A(i,j).

120 COSC 3101E 120 Rock climbing example: step 4

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7   9   7  10   5   ∞
 3     ∞  11  11  13   7   8   ∞
 4     ∞  13  19  16  12  15   ∞

The hold before the last was (3,4), since min{13, 7, 8} was 7.

121 COSC 3101E 121 Rock climbing example: step 4

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7   9   7  10   5   ∞
 3     ∞  11  11  13   7   8   ∞
 4     ∞  13  19  16  12  15   ∞

The hold before that was (2,5), since min{7, 10, 5} was 5.

122 COSC 3101E 122 Rock climbing example: step 4

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7   9   7  10   5   ∞
 3     ∞  11  11  13   7   8   ∞
 4     ∞  13  19  16  12  15   ∞

Finally, the first hold was (1,4), since min{4, 8} was 4.

123 COSC 3101E 123 Rock climbing example: step 4

A(i,j):
i\j    0   1   2   3   4   5   6
 0     ∞   0   0   0   0   0   ∞
 1     ∞   3   2   5   4   8   ∞
 2     ∞   7   9   7  10   5   ∞
 3     ∞  11  11  13   7   8   ∞
 4     ∞  13  19  16  12  15   ∞

We are done!

