
1 Dynamic Programming (Chapter 3)

2 Learning Objectives

After completing this chapter, students will be able to:
1. Understand the overall approach of dynamic programming.
2. Use dynamic programming to solve the shortest-route problem.
3. Develop dynamic programming stages.
4. Describe important dynamic programming terminology.
5. Describe the use of dynamic programming in solving knapsack problems.

3 Chapter Outline

1. Introduction
2. Shortest-Route Problem Solved by Dynamic Programming
3. Dynamic Programming Terminology
4. Dynamic Programming Notation
5. Knapsack Problem

4 1. Introduction

Dynamic programming is a quantitative analytic technique applied to large, complex problems that involve sequences of decisions. It divides a problem into a number of decision stages; the outcome of a decision at one stage affects the decisions at each later stage. The technique is useful in many multi-period business problems, such as:
- smoothing production employment,
- allocating capital funds,
- allocating salespeople to marketing areas, and
- evaluating investment opportunities.

5 Dynamic Programming vs. Linear Programming

Dynamic programming differs from linear programming in two ways. First, there is no single algorithm (like the simplex method) that can be programmed to solve all problems. Instead, dynamic programming is a technique that allows a difficult problem to be broken down into a sequence of easier sub-problems, which are then evaluated by stages.

6 Second, linear programming gives single-stage (i.e., one-time-period) solutions. Dynamic programming can determine the optimal solution over, say, a one-year horizon by breaking the problem into twelve smaller one-month problems and solving each of them optimally. Hence, it uses a multistage approach.

7 Four Steps of Dynamic Programming

1. Divide the original problem into sub-problems called stages.
2. Solve the last stage of the problem for all possible conditions or states.
3. Working backward from that last stage, solve each intermediate stage. This is done by determining optimal policies from that stage to the end of the problem.
4. Obtain the optimal solution for the original problem by solving all stages sequentially.
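The four steps above can be sketched as a generic backward recursion. This is an illustrative sketch, not code from the chapter; the names (backward_dp, states, decisions, transition, ret) are hypothetical.

```python
def backward_dp(num_stages, states, decisions, transition, ret):
    """Generic backward dynamic programming for a minimization problem.

    f[n][s] holds the best total cost from state s at stage n to the end;
    stage 0 represents the end of the problem (the boundary of step 2).
    """
    f = {0: {s: 0 for s in states(0)}}        # reaching the end costs nothing more
    best = {}
    for n in range(1, num_stages + 1):        # step 3: work backward by stage
        f[n], best[n] = {}, {}
        for s in states(n):                   # step 2: every possible state
            costs = {d: ret(n, s, d) + f[n - 1][transition(n, s, d)]
                     for d in decisions(n, s)}
            best[n][s] = min(costs, key=costs.get)
            f[n][s] = costs[best[n][s]]
    return f, best                            # step 4: read the solution forward
```

Step 4 then amounts to starting at the first-stage state and following best[n][s] through the transformations to the end.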

8 Types of Dynamic Programming Problems

Two types of DP problems serve as examples:
1. network problems
2. non-network problems

The shortest-route problem is a network problem that can be solved by dynamic programming. The knapsack problem is an example of a non-network problem that can be solved using dynamic programming.

9 2. SHORTEST-ROUTE PROBLEM SOLVED BY DYNAMIC PROGRAMMING

George Yates is to travel from Rice, Georgia (node 1) to Dixieville, Georgia (node 7). George wants to find the shortest route. There are small towns between Rice and Dixieville; the road map is on the next slide. The circles (nodes) on the map represent cities such as Rice, Athens, Georgetown, Dixieville, and Brown. The arrows (arcs) represent highways between the cities.

10 [Figure: partial road map showing Brown and Dixieville]

11 Figure M2.1: Highway Map between Rice and Dixieville. [Network figure: nodes 1 (Rice) through 7 (Dixieville), with the towns Lakecity, Athens, Hope, Georgetown, and Brown; the arc distances are listed in Table M2.1.]

12 We could solve this small problem by inspection, but it is instructive to see dynamic programming applied here, to show how more complex problems are solved.

13 Step 1: Divide the problem into sub-problems or stages.

Figure M2.2 (next slide) reveals the stages of this problem. In dynamic programming (backward procedure), we start with the last part of the problem, Stage 1, and work backward to the beginning of the problem or network, which is Stage 3 in this problem. Table M2.1 summarizes the arcs and arc distances for each stage.

14 Figure M2.2: The Stages for the George Yates Problem. [The network of Figure M2.1 divided into three stages: Stage 3 holds the arcs leaving node 1 (Rice), Stage 2 the arcs from nodes 2, 3, and 4, and Stage 1 the arcs into node 7 (Dixieville).]

15 Table M2.1: Distance Along Each Arc

STAGE | ARC | ARC DISTANCE (miles)
  1   | 5-7 | 14
  1   | 6-7 |  2
  2   | 4-5 | 10
  2   | 3-5 | 12
  2   | 3-6 |  6
  2   | 2-5 |  4
  2   | 2-6 | 10
  3   | 1-4 |  4
  3   | 1-3 |  5
  3   | 1-2 |  2

16 Step 2: Solve the Last Stage (Stage 1)

Solve Stage 1, the last part of the network. This is usually trivial. The objective is to find the shortest distance to the end of the network, node 7 in this problem.

17 At Stage 1, the paths from nodes 5 and 6 to node 7 (arcs 5-7 and 6-7) are the only paths, so they are also the shortest; mark these arcs (in red or with a dash). Also note in Figure M2.3 (next slide) that the minimum distances are enclosed in boxes at the nodes entering Stage 1, nodes 5 and 6.

18 Figure M2.3: Stage 1. [The network with the minimum distance to node 7 boxed at each node entering Stage 1: 14 at node 5 and 2 at node 6.]

19 Stage 1 summary:

BEGINNING NODE | SHORTEST DISTANCE TO NODE 7 | ARCS ALONG THIS PATH
      5        |             14              | 5-7
      6        |              2              | 6-7

20 Step 3: Moving Backward, Solve the Intermediate Stages

Moving backward, we now solve Stages 2 and 3 in the same manner. For Stage 2, use Figure M2.4 (next slide).

21 Figure M2.4: Solution for Stage 2. [Boxed minimum distances to node 7: 24 at node 4, 8 at node 3, and 12 at node 2.]

22 Figure M2.4 Analysis

If we are at node 4, the shortest (and only) route to node 7 is arcs 4-5 and 5-7, with a total minimum distance of 24 miles (10 + 14). At node 3, the shortest route is arcs 3-6 and 6-7, with a total minimum distance of 8 miles = min{(12 + 14), (6 + 2)}. If we are at node 2, the shortest route is arcs 2-6 and 6-7, with a total minimum distance of 12 miles = min{(4 + 14), (10 + 2)}.

23 Stage 2 summary:

BEGINNING NODE | SHORTEST DISTANCE TO NODE 7 | ARCS ALONG THIS PATH
      4        |             24              | 4-5, 5-7
      3        |              8              | 3-6, 6-7
      2        |             12              | 2-6, 6-7

24 For Stage 2, we have:

1. State variables are the entering nodes: (a) node 2, (b) node 3, (c) node 4.
2. Decision variables are the arcs or routes: (a) 4-5, (b) 3-5, (c) 3-6, (d) 2-5, (e) 2-6.
3. The decision criterion is the minimization of the total distance traveled.
4. The optimal policy for any beginning condition is shown in Figure M2.6.

25 Figure M2.6. [Stage 2 of the network, annotated: the state variables are the entering nodes; the decision variables are all the arcs; the optimal policy is the arc, for any entering node, that minimizes the total distance to the destination at this stage.]

26 Solution for the Third Stage. [Figure: the boxed minimum distance to node 7 from node 1 is 13 miles.]

27 Stage 3 summary:

BEGINNING NODE | SHORTEST DISTANCE TO NODE 7 | ARCS ALONG THIS PATH
      1        |             13              | 1-3, 3-6, 6-7

28 Step 4: Final Step

The final step is to obtain the optimal solution after all stages have been solved. To obtain the optimal solution at any stage, we only consider the arcs to the next stage and the optimal solution at that next stage. For Stage 3, we only have to consider the three arcs to Stage 2 (1-2, 1-3, and 1-4) and the optimal policies at Stage 2.
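Using the arc distances of Table M2.1, the whole backward procedure fits in a few lines of Python. This sketch (variable names are illustrative) reproduces the stage results above: 24, 8, and 12 miles at nodes 4, 3, and 2, and 13 miles at node 1.

```python
# Arc distances in miles, grouped by stage (Table M2.1).
arcs = {
    1: {(5, 7): 14, (6, 7): 2},
    2: {(4, 5): 10, (3, 5): 12, (3, 6): 6, (2, 5): 4, (2, 6): 10},
    3: {(1, 4): 4, (1, 3): 5, (1, 2): 2},
}

INF = float("inf")
f = {7: 0}        # shortest known distance from each node to node 7
nxt = {}          # best outgoing arc at each node (the optimal policy)
for stage in (1, 2, 3):                      # backward: last stage first
    for (i, j), dist in arcs[stage].items():
        total = dist + f.get(j, INF)         # arc + best distance beyond it
        if total < f.get(i, INF):
            f[i], nxt[i] = total, j

# Trace the optimal route from node 1 (Rice) to node 7 (Dixieville).
route, node = [1], 1
while node != 7:
    node = nxt[node]
    route.append(node)
print(f[1], route)   # prints: 13 [1, 3, 6, 7]
```

Each stage only needs the boxed values of the stage already solved, which is exactly why the arcs can be processed one stage at a time.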

29 More Complicated Networks 29


31 3. DYNAMIC PROGRAMMING TERMINOLOGY

1. Stage: a period or a logical sub-problem.
2. State variables: possible beginning situations or conditions of a stage. These have also been called the input variables.
3. Decision variables: alternatives or possible decisions that exist at each stage.
4. Decision criterion: a statement concerning the objective of the problem.
5. Optimal policy: a set of decision rules, developed as a result of the decision criteria, that gives optimal decisions for any entering condition at any stage.
6. Transformation: normally, an algebraic statement that reveals the relationship between stages.

32 Shortest-Route Problem Transformation Calculation

In the shortest-route problem, the following transformation can be given:

(distance from the beginning of a given stage to the last node) = (distance from the beginning of the previous stage to the last node) + (distance from the given stage to the previous stage)

33 4. Dynamic Programming Notation

In addition to terminology, mathematical notation can be used to describe any dynamic programming problem. An input, a decision, an output, and a return are specified for each stage. This helps in setting up and solving the problem. Consider Stage 2 of the George Yates problem. This stage can be represented by the diagram shown in Figure M2.7, as could any stage of any dynamic programming problem.

34 Input, Decision, Output, and Return for Stage 2 in George Yates's Problem

[Figure M2.7: a stage drawn as a box, with input s_2, decision d_2, output s_1, and return r_2.]

s_n = input to stage n        (M2-1)
d_n = decision at stage n     (M2-2)
r_n = return at stage n       (M2-3)

Note that the input to one stage is also the output from another stage; e.g., the input to Stage 2, s_2, is also the output from Stage 3. This leads to the following equation:

s_(n-1) = output from stage n   (M2-4)

35 Transformation Function

A transformation function allows us to go from one stage to another:

t_n = transformation function at stage n   (M2-5)

The following general formula allows us to go from one stage to another using the transformation function:

s_(n-1) = t_n(s_n, d_n)   (M2-6)

The total return function allows us to keep track of the total profit or costs at each stage:

f_n = total return at stage n   (M2-7)

36 Dynamic Programming Key Equations

s_n     = input to stage n
d_n     = decision at stage n
r_n     = return at stage n
s_(n-1) = input to stage n-1
t_n     = transformation function at stage n
s_(n-1) = t_n(s_n, d_n)   (general relationship between stages)
f_n     = total return at stage n

37 5. KNAPSACK PROBLEM

38 The "knapsack problem" involves the maximization or minimization of a value, such as profit or cost. As in a linear programming problem, there are restrictions. Imagine a knapsack or pouch that can hold only a certain weight or volume. We can place different types of items in the knapsack. Our objective is to place items in the knapsack so as to maximize total value without breaking the knapsack because of too much weight, or a similar restriction.

39 Example

40 One possible solution

41 Types of Knapsack Problems

Examples include choosing items to place in the cargo compartment of an airplane and selecting which payloads to put on the next NASA space shuttle. The restriction can be volume, weight, or both. Some scheduling problems are also knapsack problems. For example, we may want to determine which jobs to complete in the next two weeks. The two-week period is the knapsack, and we want to load it with jobs so as to maximize profit or minimize cost. The restriction is the number of days or hours in the two-week period.

42 Roller's Air Transport Service Problem

Roller's Air Transport Service ships cargo by plane in the United States and Canada. The remaining capacity on one of the flights from Seattle to Vancouver is 10 tons. There are four different items to ship. Each item has a weight in tons, a net profit in thousands of dollars, and a total number available, as shown in Table M2.2.

Table M2.2: Items to Be Shipped

ITEM | WEIGHT (tons) | PROFIT/UNIT (10^3 $) | NUMBER AVAILABLE
  1  |       1       |          3           |        6
  2  |       4       |          9           |        1
  3  |       3       |          8           |        2
  4  |       2       |          5           |        2


44 Table M2.3: Relationship Between Items and Stages

ITEM | NUMBER AVAILABLE | WEIGHT (tons) | STAGE
  1  |        6         |       1       |   4
  2  |        1         |       4       |   3
  3  |        2         |       3       |   2
  4  |        2         |       2       |   1

Assigning items to stages is arbitrary; any item can be assigned to any stage. In the backward procedure, however, it is recommended to start with the items having lower availability and/or higher weight.

45 Figure M2.8: Roller's Air Transport Service Problem. [The four stages drawn in sequence: input s_4 enters Stage 4 with decision d_4 and return r_4, producing s_3; then Stage 3 (d_3, r_3) produces s_2, Stage 2 (d_2, r_2) produces s_1, and Stage 1 (d_1, r_1) produces s_0.]

46 Items, stages, and decision bounds:

ITEM | STAGE | WEIGHT/UNIT (tons) | PROFIT/UNIT | MAXIMUM VALUE OF DECISION
  1  |   4   |         1          |      3      |             6
  2  |   3   |         4          |      9      |             1
  3  |   2   |         3          |      8      |             2
  4  |   1   |         2          |      5      |             2

47 The Transformation Functions

The general transformation function for the knapsack problem is:

s_(n-1) = (a_n x s_n) + (b_n x d_n) + c_n

where a_n, b_n, and c_n are coefficients (a_n = 1 and c_n = 0 for this problem). The output of Stage 4 (s_3) is the weight remaining in the plane after that stage: the remaining weight before Stage 4 (s_4) minus the weight taken in Stage 4 (1 x d_4):

s_3 = s_4 - 1*d_4   (Stage 4)   (a)
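The transformation can be written directly as code. A minimal sketch, using the weights per stage from Table M2.3 (the names weight and transform are illustrative, not from the text):

```python
# Weight per unit of the item assigned to each stage (Table M2.3);
# since a_n = 1 and c_n = 0, the transformation is s_(n-1) = s_n - w_n * d_n.
weight = {4: 1, 3: 4, 2: 3, 1: 2}   # tons per unit at stages 4, 3, 2, 1

def transform(n, s_n, d_n):
    """Capacity remaining after shipping d_n units at stage n."""
    return s_n - weight[n] * d_n

print(transform(4, 10, 6))   # prints: 4 (6 units of item 1 leave 4 tons)
```

The same function, called with each stage's weight, yields all four stage transformations used later in the example.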

48 Coefficients of the Transformation Functions

STAGE | ITEM | a_n | b_n | c_n
  4   |  1   |  1  | -1  |  0
  3   |  2   |  1  | -4  |  0
  2   |  3   |  1  | -3  |  0
  1   |  4   |  1  | -2  |  0

(b_n is the negative of the weight per unit of the item shipped at stage n.)

49 The transformation functions for the four stages are:

s_3 = s_4 - 1*d_4   (Stage 4)   (a)
s_2 = s_3 - 4*d_3   (Stage 3)   (b)
s_1 = s_2 - 3*d_2   (Stage 2)   (c)
s_0 = s_1 - 2*d_1   (Stage 1)   (d)

50 The Return Function

The general form of the return function is:

r_n = (a_n x s_n) + (b_n x d_n) + c_n

where a_n, b_n, and c_n are the coefficients of the return function. For this example, a_n = c_n = 0, so:

r_n = b_n x d_n

where b_n is the profit per unit and d_n is the decision (units shipped).

51 The return values table:

STAGE | ITEM | DECISION BOUNDS | b_n (profit/unit)
  4   |  1   |  0 ≤ d_4 ≤ 6    |  3
  3   |  2   |  0 ≤ d_3 ≤ 1    |  9
  2   |  3   |  0 ≤ d_2 ≤ 2    |  8
  1   |  4   |  0 ≤ d_1 ≤ 2    |  5

The upper bound on each decision is the number of units available. Example: r_4 = 3*d_4.

52 The return at each stage (profit/unit times units shipped) is:

r_4 = 3*d_4
r_3 = 9*d_3
r_2 = 8*d_2
r_1 = 5*d_1

53 Stage 1 (item 4: weight/unit = 2 tons, maximum units = 2, profit = 5/unit)

For every possible number of tons available, s_1 = 0, 1, ..., 10, evaluate each feasible decision d_1 (units shipped), the return r_1 = 5*d_1, and the total profit f_1 = r_1 + f_0, where f_0 = 0 because nothing is shipped beyond Stage 1. The optimal decision for each s_1 is shown in parentheses.

 s_1  | feasible d_1 | r_1 values | optimal d_1 | f_1
  0   |      0       |     0      |     (0)     |  0
  1   |      0       |     0      |     (0)     |  0
  2   |     0, 1     |    0, 5    |     (1)     |  5
  3   |     0, 1     |    0, 5    |     (1)     |  5
 4-10 |   0, 1, 2    |  0, 5, 10  |     (2)     | 10

Rows s_1 = 5, 6, ..., 10 are identical to the row for s_1 = 4.

57 Stage 2 (item 3: weight/unit = 3 tons, maximum units = 2, profit = 8/unit)

f_2 = r_2 + f_1, where f_1 is the optimal Stage 1 profit for the remaining capacity s_1 = s_2 - 3*d_2. The optimal decision is in parentheses.

s_2 | optimal d_2 | r_2 | f_1 | f_2
 0  |     (0)     |  0  |  0  |  0
 1  |     (0)     |  0  |  0  |  0
 2  |     (0)     |  0  |  5  |  5
 3  |     (1)     |  8  |  0  |  8
 4  |     (0)     |  0  | 10  | 10
 5  |     (1)     |  8  |  5  | 13
 6  |     (2)     | 16  |  0  | 16
 7  |     (1)     |  8  | 10  | 18
 8  |     (2)     | 16  |  5  | 21
 9  |     (2)     | 16  |  5  | 21
 10 |     (2)     | 16  | 10  | 26

59 Stage 3 (item 2: weight/unit = 4 tons, maximum units = 1, profit = 9/unit)

f_3 = r_3 + f_2, with s_2 = s_3 - 4*d_3.

s_3 | optimal d_3 | r_3 | f_2 | f_3
 0  |     (0)     |  0  |  0  |  0
 1  |     (0)     |  0  |  0  |  0
 2  |     (0)     |  0  |  5  |  5
 3  |     (0)     |  0  |  8  |  8
 4  |     (0)     |  0  | 10  | 10
 5  |     (0)     |  0  | 13  | 13
 6  |     (0)     |  0  | 16  | 16
 7  |     (0)     |  0  | 18  | 18
 8  |     (0)     |  0  | 21  | 21
 9  |     (1)     |  9  | 13  | 22
 10 |     (0)     |  0  | 26  | 26

60 Stage 4 (item 1: weight/unit = 1 ton, maximum units = 6, profit = 3/unit)

Only s_4 = 10 needs to be evaluated; f_4 = r_4 + f_3, with s_3 = 10 - d_4.

d_4 | r_4 | s_3 | f_3 | f_4
 0  |  0  | 10  | 26  |  26
 1  |  3  |  9  | 22  |  25
 2  |  6  |  8  | 21  |  27
 3  |  9  |  7  | 18  |  27
(4) | 12  |  6  | 16  | (28)
(5) | 15  |  5  | 13  | (28)
(6) | 18  |  4  | 10  | (28)

Three possible decisions (d_4 = 4, 5, or 6) give the same highest profit of 28.

61 Tracing One Possible Optimal Solution

Start at Stage 4 with s_4 = 10, pick an optimal decision, and follow the transformation functions forward through the remaining stages:

STAGE (ITEM)     | s_n | optimal d_n | r_n
Stage 4 (item 1) | 10  |     (6)     | 18
Stage 3 (item 2) |  4  |     (0)     |  0
Stage 2 (item 3) |  4  |     (0)     |  0
Stage 1 (item 4) |  4  |     (2)     | 10

63 Final Solution (one possible optimal solution)

STAGE (n) | ITEM | OPTIMAL DECISION (d_n) | OPTIMAL RETURN (r_n)
    4     |  1   |           6            |          18
    3     |  2   |           0            |           0
    2     |  3   |           0            |           0
    1     |  4   |           2            |          10
    Total |      |                        |          28

Ship 6 units of item 1 and 2 units of item 4, for a total profit of 28 (i.e., $28,000).

65 Second Possible Optimal Solution

STAGE (ITEM)     | s_n | optimal d_n | r_n
Stage 4 (item 1) | 10  |     (5)     | 15
Stage 3 (item 2) |  5  |     (0)     |  0
Stage 2 (item 3) |  5  |     (1)     |  8
Stage 1 (item 4) |  2  |     (1)     |  5

Ship 5 units of item 1, 1 unit of item 3, and 1 unit of item 4: profit = 15 + 8 + 5 = 28.

66 Third Possible Optimal Solution

STAGE (ITEM)     | s_n | optimal d_n | r_n
Stage 4 (item 1) | 10  |     (4)     | 12
Stage 3 (item 2) |  6  |     (0)     |  0
Stage 2 (item 3) |  6  |     (2)     | 16
Stage 1 (item 4) |  0  |     (0)     |  0

Ship 4 units of item 1 and 2 units of item 3: profit = 12 + 16 = 28.
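All four stage tables follow from one backward pass over the stages. The sketch below (the names stages, f, and best are illustrative) recomputes them from the data of Tables M2.2 and M2.3 and confirms that f_4(10) = 28, with three equally good Stage 4 decisions:

```python
# stage: (weight/unit in tons, profit/unit, maximum units)
stages = {1: (2, 5, 2),   # item 4
          2: (3, 8, 2),   # item 3
          3: (4, 9, 1),   # item 2
          4: (1, 3, 6)}   # item 1

f = {0: {s: 0 for s in range(11)}}           # nothing is shipped beyond stage 1
best = {}
for n in (1, 2, 3, 4):                       # backward procedure, stage by stage
    w, p, m = stages[n]
    f[n], best[n] = {}, {}
    for s in range(11):                      # s = tons still available
        totals = {d: p * d + f[n - 1][s - w * d]       # r_n + f_(n-1)
                  for d in range(m + 1) if w * d <= s}
        top = max(totals.values())
        f[n][s] = top
        best[n][s] = sorted(d for d, v in totals.items() if v == top)

print(f[4][10], best[4][10])   # prints: 28 [4, 5, 6]
```

Tracing any of the three tied Stage 4 decisions forward through the transformations recovers the three optimal shipping plans shown above.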


71 Solution Using Software: Integer Programming Model

The problem can also be formulated as an integer program from the data of Table M2.2:

ITEM | WEIGHT (tons) | PROFIT/UNIT (10^3 $) | NUMBER AVAILABLE
  1  |       1       |          3           |        6
  2  |       4       |          9           |        1
  3  |       3       |          8           |        2
  4  |       2       |          5           |        2
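The integer program implied by Table M2.2 is: maximize 3x1 + 9x2 + 8x3 + 5x4 subject to 1x1 + 4x2 + 3x3 + 2x4 <= 10, with 0 <= x1 <= 6, 0 <= x2 <= 1, 0 <= x3 <= 2, 0 <= x4 <= 2, all integer. With only 7 x 2 x 3 x 3 = 126 candidate solutions, a brute-force check (a sketch independent of any particular solver) confirms the dynamic programming answer of 28:

```python
from itertools import product

# Enumerate every feasible integer plan and keep the most profitable one.
best_profit, best_plan = max(
    (3*x1 + 9*x2 + 8*x3 + 5*x4, (x1, x2, x3, x4))
    for x1, x2, x3, x4 in product(range(7), range(2), range(3), range(3))
    if 1*x1 + 4*x2 + 3*x3 + 2*x4 <= 10      # capacity: 10 tons
)
print(best_profit)   # prints: 28
```

Any real solver (QM for Windows or Excel's Solver, as the following slides show) will report the same optimum, possibly returning a different one of the three tied plans.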

72 Solution Using QM for Windows


76 Using Excel

ITEM | WEIGHT (tons) | PROFIT/UNIT (10^3 $) | NUMBER AVAILABLE
  1  |       1       |          3           |        6
  2  |       4       |          9           |        1
  3  |       3       |          8           |        2
  4  |       2       |          5           |        2

77 Excel formulas:

Cell F5: =SUMPRODUCT(B5:E5;B2:E2)
Cell F6: =SUMPRODUCT(B6:E6;B2:E2)
Cell F7: =SUMPRODUCT(B7:E7;B2:E2)
Cell F8: =SUMPRODUCT(B8:E8;B2:E2)
Cell F9: =SUMPRODUCT(B9:E9;B2:E2)

79 Integer Variables

80 Solution

81 Lab Exercise

Solve the knapsack example using Excel and QM for Windows.

82 GLOSSARY

Decision Criterion. A statement concerning the objective of a dynamic programming problem.
Decision Variable. The alternatives or possible decisions that exist at each stage of a dynamic programming problem.
Dynamic Programming. A quantitative technique that works backward from the end of the problem to the beginning in determining the best decision for a number of interrelated decisions.

83 Glossary (continued)

Optimal Policy. A set of decision rules, developed as a result of the decision criteria, that gives optimal decisions at any stage of a dynamic programming problem.
Stage. A logical sub-problem in a dynamic programming problem.
State Variable. A term used in dynamic programming to describe the possible beginning situations or conditions of a stage.
Transformation. An algebraic statement that shows the relationship between stages in a dynamic programming problem.

