
Dynamic Programming: Chapter 3

Learning Objectives

After completing this chapter, students will be able to:
1. Understand the overall approach of dynamic programming.
2. Use dynamic programming to solve the shortest-route problem.
3. Develop dynamic programming stages.
4. Describe important dynamic programming terminology.
5. Describe the use of dynamic programming in solving knapsack problems.

Chapter Outline
1. Introduction
2. Shortest-Route Problem Solved by Dynamic Programming
3. Dynamic Programming Terminology
4. Dynamic Programming Notation
5. Knapsack Problem

1. Introduction

Dynamic programming is a quantitative analytic technique applied to large, complex problems that require a sequence of decisions. It divides a problem into a number of decision stages; the outcome of a decision at one stage affects the decisions at each subsequent stage. The technique is useful in many multi-period business problems, such as:
- Smoothing production employment,
- Allocating capital funds,
- Allocating salespeople to marketing areas, and
- Evaluating investment opportunities.

Dynamic Programming vs. Linear Programming

Dynamic programming differs from linear programming in two ways. First, there is no single algorithm (like the simplex method) that can be programmed to solve all problems. Instead, dynamic programming is a technique that breaks a difficult problem into a sequence of easier sub-problems, which are then evaluated by stages.

Second, linear programming gives single-stage (i.e., one-time-period) solutions. Dynamic programming can determine the optimal solution over, say, a one-year time horizon by breaking the problem into twelve smaller one-month problems and solving each optimally. Hence, it uses a multistage approach.

Four Steps of Dynamic Programming
1. Divide the original problem into sub-problems called stages.
2. Solve the last stage of the problem for all possible conditions or states.
3. Working backward from the last stage, solve each intermediate stage by determining optimal policies from that stage to the end of the problem.
4. Obtain the optimal solution for the original problem by solving all stages sequentially.

Types of Dynamic Programming Problems

Two types of DP problems serve as examples:
1. Network problems: the shortest-route problem is a network problem that can be solved by dynamic programming.
2. Non-network problems: the knapsack problem is a non-network problem that can be solved by dynamic programming.

2. SHORTEST-ROUTE PROBLEM SOLVED BY DYNAMIC PROGRAMMING

George Yates is to travel from Rice, Georgia (node 1) to Dixieville, Georgia (node 7). George wants to find the shortest route. There are small towns between Rice and Dixieville; the road map is on the next slide. The circles (nodes) on the map represent cities such as Rice, Athens, Georgetown, Dixieville, and Brown. The arrows (arcs) represent highways between the cities.


Figure M2.1: Highway map between Rice and Dixieville, showing the towns (Rice, Athens, Lakecity, Hope, Georgetown, Brown, Dixieville) and the arc distances in miles.

We could solve this problem by inspection, but it is instructive to see dynamic programming used here, to show how to solve more complex problems.

Step 1: Divide the problem into sub-problems or stages.

Figure M2.2 (next slide) reveals the stages of this problem. In dynamic programming (backward procedure), we start with the last part of the problem, Stage 1, and work backward to the beginning of the problem or network, which is Stage 3 in this problem. Table M2.1 summarizes the arcs and arc distances for each stage.

Figure M2.2: The stages for the George Yates problem. Stage 1 covers the arcs into Dixieville (node 7), Stage 2 the arcs between the intermediate towns, and Stage 3 the arcs leaving Rice (node 1).

Table M2.1: Distance along each arc (the Stage 3 distances are not legible in the source)

STAGE   ARC   ARC DISTANCE (miles)
1       5-7   14
1       6-7   2
2       2-5   4
2       2-6   10
2       3-5   12
2       3-6   6
2       4-5   10
3       1-2   -
3       1-3   -
3       1-4   -

Step 2: Solve the last stage, Stage 1

Solve Stage 1, the last part of the network. This is usually trivial. Find the shortest path to the end of the network, node 7 in this problem. The objective is to find the shortest distance to node 7.

At Stage 1, the shortest paths from nodes 5 and 6 to node 7 are the only paths (5-7 and 6-7); mark these arcs (in red or with a dash). Also note in Figure M2.3 (next slide) that the minimum distances are enclosed in boxes beside the nodes entering Stage 1, node 5 and node 6.

Figure M2.3: Stage 1 solution. The minimum distance to node 7 is 14 from node 5 and 2 from node 6.

Stage 1 summary:

BEGINNING NODE   SHORTEST DISTANCE TO NODE 7   ARCS ALONG THIS PATH
5                14                            5-7
6                2                             6-7

Step 3: Moving backward, solve the intermediate stages

Moving backward, now solve Stages 2 and 3 in the same manner. For Stage 2, use Figure M2.4 (next slide).

Figure M2.4: Solution for Stage 2, showing the minimum distances to node 7 from nodes 2, 3, and 4.

Analysis of Figure M2.4

If we are at node 4, the shortest (and only) route to node 7 is arcs 4-5 and 5-7, with a total minimum distance of 24 miles (10 + 14). At node 3, the shortest route is arcs 3-6 and 6-7, with a total minimum distance of 8 miles = min{(12 + 14), (6 + 2)}. If we are at node 2, the shortest route is arcs 2-6 and 6-7, with a minimum total distance of 12 miles = min{(4 + 14), (10 + 2)}.

Stage 2 summary:

BEGINNING NODE   SHORTEST DISTANCE TO NODE 7   ARCS ALONG THIS PATH
2                12                            2-6, 6-7
3                8                             3-6, 6-7
4                24                            4-5, 5-7

For Stage 2, we have:
1. State variables are the entering nodes: (a) node 2, (b) node 3, (c) node 4.
2. Decision variables are the arcs or routes: (a) 4-5, (b) 3-5, (c) 3-6, (d) 2-5, (e) 2-6.
3. The decision criterion is the minimization of the total distance traveled.
4. The optimal policy for any beginning condition is shown in Figure M2.6.

Figure M2.6: State variables are the entering nodes; decision variables are all the arcs. The optimal policy is the arc, for any entering node, that minimizes the total distance to the destination at this stage.

Solution for the third stage: the minimum distance to node 7 from node 1.

Stage 3 summary table: beginning node 1, its shortest distance to node 7, and the arcs along this path (the values are not legible in the source).

Step 4: Find the optimal solution

The final step is to find the optimal solution after all stages have been solved. To obtain the optimal solution at any stage, consider only the arcs to the next stage and the optimal solution at that next stage. For Stage 3, we only have to consider the three arcs to Stage 2 (1-2, 1-3, and 1-4) and the optimal policies at Stage 2.
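The backward procedure of Steps 2 through 4 can be sketched in code. This is a minimal sketch, not the textbook's software: the arc distances for Stages 1 and 2 come from the worked analysis above, but the Stage 3 distances (arcs 1-2, 1-3, 1-4) are garbled in the source, so the values used for them here are assumptions chosen purely for illustration.

```python
# Backward DP for the George Yates shortest-route network.
# Stage 1 and Stage 2 distances are from the analysis in the text;
# the three Stage 3 distances are ASSUMED for illustration only.
arcs = {
    (1, 2): 4, (1, 3): 2, (1, 4): 5,   # assumed: not legible in the source
    (2, 5): 4, (2, 6): 10,
    (3, 5): 12, (3, 6): 6,
    (4, 5): 10,
    (5, 7): 14, (6, 7): 2,
}

def backward_dp(arcs, destination=7):
    """Return (shortest-distance table, best-next-node policy), working backward."""
    f = {destination: 0}   # f[node] = shortest distance from node to destination
    policy = {}
    # In this staged network, higher-numbered nodes are closer to the
    # destination, so processing nodes in reverse order solves Stage 1 first.
    for node in sorted({i for i, _ in arcs}, reverse=True):
        choices = [(d + f[j], j) for (i, j), d in arcs.items()
                   if i == node and j in f]
        f[node], policy[node] = min(choices)
    return f, policy

f, policy = backward_dp(arcs)
# Recover the optimal route by following the policy forward from node 1.
route, node = [1], 1
while node != 7:
    node = policy[node]
    route.append(node)
print(f[1], route)
```

With these assumed Stage 3 distances the recursion reproduces the stage tables above: f[5] = 14, f[6] = 2, f[4] = 24, f[3] = 8, and f[2] = 12 match the Stage 1 and Stage 2 summaries exactly.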

More Complicated Networks


3. DYNAMIC PROGRAMMING TERMINOLOGY

1. Stage: a period or a logical sub-problem.
2. State variables: possible beginning situations or conditions of a stage. These have also been called the input variables.
3. Decision variables: alternatives or possible decisions that exist at each stage.
4. Decision criterion: a statement concerning the objective of the problem.
5. Optimal policy: a set of decision rules, developed as a result of the decision criterion, that gives optimal decisions for any entering condition at any stage.
6. Transformation: normally, an algebraic statement that reveals the relationship between stages.

Shortest-Route Problem Transformation

In the shortest-route problem, the following transformation can be given:

Distance from the beginning of a given stage to the last node = Distance from the beginning of the previous stage to the last node + Distance from the given stage to the previous stage

4. Dynamic Programming Notation

In addition to terminology, mathematical notation can be used to describe any dynamic programming problem. Here, an input, decision, output, and return are specified for each stage. This helps to set up and solve the problem. Consider Stage 2 of the George Yates problem. This stage can be represented by the diagram shown in Figure M2.7 (as could any stage of any dynamic programming problem).

Input, Decision, Output, and Return for Stage 2 in George Yates's Problem

Figure M2.7: Stage 2, with input s_2, decision d_2, output s_1, and return r_2.

s_n = input to stage n      (M2-1)
d_n = decision at stage n   (M2-2)
r_n = return at stage n     (M2-3)

Note that the input to one stage is also the output from another stage; for example, the input to Stage 2, s_2, is also the output from Stage 3. This leads to the following equation:

s_{n-1} = output from stage n   (M2-4)

Transformation Function

A transformation function allows us to go from one stage to another:

t_n = transformation function at stage n   (M2-5)

The following general formula allows us to go from one stage to another using the transformation function:

s_{n-1} = t_n(s_n, d_n)   (M2-6)

The total return function allows us to keep track of the total profit or cost at each stage:

f_n = total return at stage n   (M2-7)
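To make the notation concrete, here is a tiny illustrative sketch of one hypothetical stage, with the transformation and return written as functions. The numbers (capacity consumed per unit, profit per unit, the entering state) are invented for illustration and are not part of the George Yates problem.

```python
# Illustrative only: one hypothetical stage using the linear forms in the
# text.  The state s_n is a remaining capacity; each unit of the decision
# d_n consumes 2 units of capacity and returns a profit of 5.
def t_n(s_n, d_n):       # transformation function (M2-6): the next state
    return s_n - 2 * d_n

def r_n(d_n):            # return function: profit earned at this stage
    return 5 * d_n

s_n, d_n = 10, 3               # enter with 10 units of capacity, decide d_n = 3
s_n_minus_1 = t_n(s_n, d_n)    # output state = input to stage n-1 (M2-4)
print(s_n_minus_1, r_n(d_n))   # prints: 4 15
```

The output state s_{n-1} = 4 then becomes the input to the next stage solved, exactly as equation (M2-4) states.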

Dynamic Programming Key Equations

- s_n = input to stage n
- d_n = decision at stage n
- r_n = return at stage n
- s_{n-1} = input to stage n-1
- t_n = transformation function at stage n
- s_{n-1} = t_n(s_n, d_n): the general relationship between stages
- f_n = total return at stage n

5. KNAPSACK PROBLEM

The knapsack problem involves the maximization or minimization of a value, such as profit or cost. As in a linear programming problem, there are restrictions. Imagine a knapsack or pouch that can hold only a certain weight or volume. We can place different types of items in the knapsack. Our objective is to place items in the knapsack so as to maximize total value without breaking the knapsack because of too much weight or a similar restriction.

Example

One possible solution

Types of Knapsack Problems

Examples include choosing items to place in the cargo compartment of an airplane, or selecting which payloads to put on the next NASA space shuttle. The restriction can be volume, weight, or both. Some scheduling problems are also knapsack problems. For example, we may want to determine which jobs to complete in the next two weeks. The two-week period is the knapsack, and we want to load it with jobs so as to maximize profit or minimize cost. The restriction is the number of days or hours during the two-week period.

Roller's Air Transport Service Problem

Roller's Air Transport Service ships cargo by plane in the United States and Canada. The remaining capacity on one of the flights from Seattle to Vancouver is 10 tons. There are four different items to ship. Each item has a weight in tons, a net profit in thousands of dollars, and a total number available. This information is presented in Table M2.2.

Table M2.2: Items to be shipped

ITEM   WEIGHT (tons)   PROFIT/UNIT (10^3 $)   NUMBER AVAILABLE
1      1               3                      6
2      4               9                      1
3      3               8                      2
4      2               5                      2


Table M2.3: Relationship between items and stages

ITEM   NUMBER AVAILABLE   WEIGHT (tons)   STAGE
1      6                  1               4
2      1                  4               3
3      2                  3               2
4      2                  2               1

Assigning items to stages is arbitrary; any item can be assigned to any stage. In the backward procedure, however, it is recommended to start with the items having lower availability and/or higher weight.

Figure M2.8: Roller's Air Transport Service problem represented as four stages. Each stage n has input s_n, decision d_n, return r_n, and output s_{n-1}.

ITEM   STAGE   WEIGHT/UNIT (tons)   PROFIT/UNIT ($)   MAXIMUM VALUE OF DECISION
1      4       1                    3                 6
2      3       4                    9                 1
3      2       3                    8                 2
4      1       2                    5                 2

The Transformation Functions

The general transformation function for the knapsack problem is

s_{n-1} = (a_n × s_n) + (b_n × d_n) + c_n

where a_n, b_n, and c_n are coefficients (a_n = 1 and c_n = 0 for this problem). The output of Stage 4, s_3, is the remaining weight in the plane after that stage:

s_3 = (remaining weight before Stage 4, s_4) - (weight taken in Stage 4, 1 × d_4)

s_3 = s_4 - 1·d_4   (a) Stage 4

The coefficient of d_4 is the weight per unit of the item assigned to Stage 4.

Coefficients of the transformation function s_{n-1} = (a_n × s_n) + (b_n × d_n) + c_n (b_n is the negative of the weight per unit):

STAGE   ITEM   a_n   b_n   c_n
4       1      1     -1    0
3       2      1     -4    0
2       3      1     -3    0
1       4      1     -2    0

The four transformation equations, one per stage (the coefficient of d_n is the weight per unit of that stage's item):

s_3 = s_4 - 1·d_4   (a) Stage 4
s_2 = s_3 - 4·d_3   (b) Stage 3
s_1 = s_2 - 3·d_2   (c) Stage 2
s_0 = s_1 - 2·d_1   (d) Stage 1

The Return Function

The general form for the return function is

r_n = (a_n × s_n) + (b_n × d_n) + c_n

where a_n, b_n, and c_n are the coefficients for the return function. For this example, a_n = c_n = 0, so

r_n = b_n × d_n

where b_n is the profit per unit and d_n is the decision (number of units shipped).

The return values table (b_n is the profit per unit; the decision d_n ranges from a lower bound of 0 up to the number available):

STAGE   ITEM   b_n   LOWER   UPPER
4       1      3     0       6
3       2      9     0       1
2       3      8     0       2
1       4      5     0       2

Example: r_4 = 3·d_4, with 0 ≤ d_4 ≤ 6.

The return at each stage (profit per unit × units shipped):

r_4 = 3·d_4
r_3 = 9·d_3
r_2 = 8·d_2
r_1 = 5·d_1
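The staged backward recursion worked out in the following tables can be sketched compactly in code. This is a minimal sketch assuming only the item data of Table M2.2 and the stage assignment of Table M2.3; it computes f_n(s) = max over feasible d_n of r_n(d_n) + f_{n-1}(s - w_n·d_n) for every state s.

```python
# Staged DP for Roller's Air Transport knapsack problem.
# Each stage is (weight/unit, profit/unit, number available); Stage 1 first,
# matching Table M2.3 (stage 1 = item 4, ..., stage 4 = item 1).
stages = [
    (2, 5, 2),   # stage 1: item 4
    (3, 8, 2),   # stage 2: item 3
    (4, 9, 1),   # stage 3: item 2
    (1, 3, 6),   # stage 4: item 1
]

def solve(stages, capacity):
    # f_0(s) = 0 for every state: nothing left to ship after the last stage.
    f_prev = {s: 0 for s in range(capacity + 1)}
    for w, p, avail in stages:
        f = {}
        for s in range(capacity + 1):
            best = 0
            # Try every feasible decision d at this stage.
            for d in range(min(avail, s // w) + 1):
                best = max(best, p * d + f_prev[s - w * d])
            f[s] = best
        f_prev = f
    return f_prev[capacity]

print(solve(stages, 10))   # prints: 28
```

Running it on the 10-ton capacity reproduces the optimal total profit of 28 (thousand dollars) found in the stage tables below.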

Stage 1 (item 4: weight/unit = 2 tons, maximum units = 2, profit = 5/unit)

For every possible number of tons available entering Stage 1 (s_1 = 0, 1, ..., 10), try each feasible decision d_1, compute the return r_1 = 5·d_1 and the total f_1 = r_1 + f_0, where f_0 = 0 (nothing remains to be shipped after Stage 1):

s_1    OPTIMAL d_1   r_1   s_0       f_1 = r_1 + f_0
0      0             0     0         0
1      0             0     1         0
2      1             5     0         5
3      1             5     1         5
4-10   2             10    s_1 - 4   10

Stage 2 (item 3: weight/unit = 3 tons, maximum units = 2, profit = 8/unit)

f_2 = r_2 + f_1, taking the optimal f_1 from Stage 1 for the resulting state s_1 = s_2 - 3·d_2:

s_2   OPTIMAL d_2   f_2
0     0             0
1     0             0
2     0             5
3     1             8
4     0             10
5     1             13
6     2             16
7     1             18
8     2             21
9     2             21
10    2             26

Stage 3 (item 2: weight/unit = 4 tons, maximum units = 1, profit = 9/unit)

s_3   OPTIMAL d_3   f_3
0     0             0
1     0             0
2     0             5
3     0             8
4     0             10
5     0             13
6     0             16
7     0             18
8     0             21
9     1             22
10    0             26

Stage 4 (item 1: weight/unit = 1 ton, maximum units = 6, profit = 3/unit)

Only s_4 = 10 needs to be evaluated, since the full remaining capacity is available when the first decision is made. f_4 = r_4 + f_3, with f_3 taken from Stage 3 depending on s_3 = 10 - d_4:

d_4   r_4 = 3·d_4   s_3   f_3   f_4 = r_4 + f_3
0     0             10    26    26
1     3             9     22    25
2     6             8     21    27
3     9             7     18    27
4     12            6     16    28
5     15            5     13    28
6     18            4     10    28

Three decisions (d_4 = 4, 5, or 6) give the same highest profit, f_4 = 28.

One possible optimal solution (tracing back through the stages):

STAGE (n)   ITEM   s_n   OPTIMAL d_n   r_n   s_{n-1}   f_n
4           1      10    6             18    4         28
3           2      4     0             0     4         10
2           3      4     0             0     4         10
1           4      4     2             10    0         10

FINAL SOLUTION (one possible solution): ship 6 units of item 1 and 2 units of item 4, for a total profit of 18 + 0 + 0 + 10 = 28 (thousand dollars).

Second possible optimal solution:

STAGE (n)   ITEM   s_n   OPTIMAL d_n   r_n   s_{n-1}   f_n
4           1      10    5             15    5         28
3           2      5     0             0     5         13
2           3      5     1             8     2         13
1           4      2     1             5     0         5

Ship 5 units of item 1, 1 unit of item 3, and 1 unit of item 4; total profit = 15 + 0 + 8 + 5 = 28.

Third possible optimal solution:

STAGE (n)   ITEM   s_n   OPTIMAL d_n   r_n   s_{n-1}   f_n
4           1      10    4             12    6         28
3           2      6     0             0     6         16
2           3      6     2             16    0         16
1           4      0     0             0     0         0

Ship 4 units of item 1 and 2 units of item 3; total profit = 12 + 0 + 16 + 0 = 28.


Solution Using Software

Mathematical model (integer programming), with x_i = number of units of item i to ship (data from Table M2.2):

Maximize 3·x_1 + 9·x_2 + 8·x_3 + 5·x_4
subject to
1·x_1 + 4·x_2 + 3·x_3 + 2·x_4 ≤ 10   (capacity, tons)
x_1 ≤ 6, x_2 ≤ 1, x_3 ≤ 2, x_4 ≤ 2   (availability)
x_1, x_2, x_3, x_4 ≥ 0 and integer
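Before handing the model to a solver, the integer program above is small enough to check by brute-force enumeration. This sketch is only a sanity check of the formulation, not how QM for Windows or Excel Solver work internally; it enumerates every feasible integer plan and keeps the most profitable one.

```python
# Brute-force check of the integer-programming formulation for Roller's
# Air Transport problem: maximize 3x1 + 9x2 + 8x3 + 5x4 subject to the
# 10-ton capacity and per-item availability, all variables integer.
from itertools import product

best_profit, best_plan = -1, None
for x1, x2, x3, x4 in product(range(7), range(2), range(3), range(3)):
    if 1 * x1 + 4 * x2 + 3 * x3 + 2 * x4 <= 10:   # weight constraint
        profit = 3 * x1 + 9 * x2 + 8 * x3 + 5 * x4
        if profit > best_profit:
            best_profit, best_plan = profit, (x1, x2, x3, x4)

print(best_profit, best_plan)
```

The optimum is 28, matching the dynamic programming result, and the maximizing plan found is one of the three alternative optimal solutions listed above.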

Solution Using QM for Windows


Using Excel

The worksheet holds the decision variables in cells B2:E2 and the item data from Table M2.2 (weights, profits, and numbers available) in the rows below.

Cell formulas:

F5: =SUMPRODUCT(B5:E5;B2:E2)
F6: =SUMPRODUCT(B6:E6;B2:E2)
F7: =SUMPRODUCT(B7:E7;B2:E2)
F8: =SUMPRODUCT(B8:E8;B2:E2)
F9: =SUMPRODUCT(B9:E9;B2:E2)


Integer Variables

Solution

Lab Exercise

Solve the knapsack example using Excel and QM for Windows.

GLOSSARY

Decision Criterion. A statement concerning the objective of a dynamic programming problem.
Decision Variable. The alternatives or possible decisions that exist at each stage of a dynamic programming problem.
Dynamic Programming. A quantitative technique that works backward from the end of the problem to the beginning in determining the best decision for a number of interrelated decisions.

Glossary (continued)

Optimal Policy. A set of decision rules, developed as a result of the decision criterion, that gives optimal decisions at any stage of a dynamic programming problem.
Stage. A logical sub-problem in a dynamic programming problem.
State Variable. A term used in dynamic programming to describe the possible beginning situations or conditions of a stage.
Transformation. An algebraic statement that shows the relationship between stages in a dynamic programming problem.