
1
Operation and Planning Control (U7MEA37)
Prepared by Mr. Prabhu, Assistant Professor, Mechanical Department, VelTech Dr.RR & Dr.SR Technical University

2
Unit I: Linear Programming

3
Introduction to Operations Research Operations research/management science – Winston: “a scientific approach to decision making, which seeks to determine how best to design and operate a system, usually under conditions requiring the allocation of scarce resources.” – Kimball & Morse: “a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control.”

4
Introduction to Operations Research
Provides a rational basis for decision making:
– Solves the types of complex problems that arise in the modern business environment
– Builds mathematical and computer models of organizational systems composed of people, machines, and procedures
– Uses analytical and numerical techniques to make predictions and decisions based on these models

5
Introduction to Operations Research Draws upon – engineering, management, mathematics Closely related to the "decision sciences" – applied mathematics, computer science, economics, industrial engineering and systems engineering

6
Methodology of Operations Research: The Seven Steps to a Good OR Analysis
1. Identify the Problem or Opportunity
2. Understand the System
3. Formulate a Mathematical Model
4. Verify the Model
5. Select the Best Alternative
6. Present the Results of the Analysis
7. Implement and Evaluate

7
Step 1 (Identify the Problem or Opportunity): What are the objectives? Is the proposed problem too narrow? Is it too broad?

8
Step 2 (Understand the System): What data should be collected? How will data be collected? How do different components of the system interact with each other?

9
Step 3 (Formulate a Mathematical Model): What kind of model should be used? Is the model accurate? Is the model too complex?

10
Step 4 (Verify the Model): Do outputs match current observations for current inputs? Are outputs reasonable? Could the model be erroneous?

11
Step 5 (Select the Best Alternative): What if there are conflicting objectives? Inherently the most difficult step; this is where software tools help us.

12
Step 6 (Present the Results of the Analysis): Results must be communicated in layman's terms, and the system must be user friendly.

13
Step 7 (Implement and Evaluate): Users must be trained on the new system, and the system must be observed over time to ensure it works properly.

14
Linear Programming

15
Objectives
– Requirements for a linear programming model
– Graphical representation of linear models
– Linear programming results: unique optimal solution, alternate optimal solutions, unbounded models, infeasible models
– Extreme point principle

16
Objectives (continued)
– Sensitivity analysis concepts: reduced costs, range of optimality (covered lightly), shadow prices, range of feasibility (covered lightly), complementary slackness, added constraints / variables
– Computer solution of linear programming models: WINQSB, EXCEL, LINDO

17
3.1 Introduction to Linear Programming
A Linear Programming model seeks to maximize or minimize a linear function, subject to a set of linear constraints. The linear model consists of the following components:
– A set of decision variables
– An objective function
– A set of constraints

18
The Importance of Linear Programming
– Many real static problems lend themselves to linear programming formulations.
– Many real problems can be approximated by linear models.
– The output generated by linear programs provides useful "what's best" and "what-if" information.

19
Assumptions of Linear Programming
– The decision variables are continuous or divisible, meaning that 3.333 eggs or 4.266 airplanes is an acceptable solution.
– The parameters are known with certainty.
– The objective function and constraints exhibit constant returns to scale (i.e., linearity).
– There are no interactions between decision variables.

20
Methodology of Linear Programming
– Determine and define the decision variables
– Formulate an objective function: verbal characterization, then mathematical characterization
– Formulate each constraint

21
MODEL FORMULATION
Decision variables:
– X1 = production level of Space Rays (in dozens per week)
– X2 = production level of Zappers (in dozens per week)
Objective function: weekly profit, to be maximized

22
The Objective Function
Each dozen Space Rays realizes $8 in profit, so total profit from Space Rays is 8X1. Each dozen Zappers realizes $5 in profit, so total profit from Zappers is 5X2. The total profit contribution of both is 8X1 + 5X2. (The profit contributions are additive because of the linearity assumption.)

23
The Linear Programming Model
Max 8X1 + 5X2 (weekly profit)
subject to
2X1 + 1X2 <= 1200 (plastic)
3X1 + 4X2 <= 2400 (production time)
X1 + X2 <= 800 (total production)
X1 - X2 <= 450 (mix)
Xj >= 0, j = 1, 2 (nonnegativity)
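This small model can be checked numerically using the extreme point principle listed in the objectives: an optimal solution lies at a vertex of the feasible region, so enumerating the intersections of constraint boundaries is enough here. A minimal sketch (not one of the course tools such as WINQSB, Excel, or LINDO):

```python
from itertools import combinations

# Each constraint a1*X1 + a2*X2 <= c as a tuple (a1, a2, c);
# nonnegativity is written as -X1 <= 0 and -X2 <= 0.
constraints = [
    (2, 1, 1200),   # plastic
    (3, 4, 2400),   # production time
    (1, 1, 800),    # total production
    (1, -1, 450),   # mix
    (-1, 0, 0),     # X1 >= 0
    (0, -1, 0),     # X2 >= 0
]

def feasible(x1, x2, eps=1e-9):
    return all(a*x1 + b*x2 <= c + eps for a, b, c in constraints)

# Extreme point principle: check every intersection of two boundaries.
best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1*b2 - a2*b1
    if abs(det) < 1e-12:
        continue                        # parallel boundaries: no vertex
    x1 = (c1*b2 - c2*b1) / det          # Cramer's rule
    x2 = (a1*c2 - a2*c1) / det
    if feasible(x1, x2):
        profit = 8*x1 + 5*x2
        if best is None or profit > best[0]:
            best = (profit, x1, x2)

profit, x1, x2 = best
# Optimal mix: X1 = 480, X2 = 240 dozen per week, weekly profit $5040.
```

Note that the optimum sits where the plastic and production-time constraints are both binding, which is exactly what the graphical solution shows.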

24
3.4 The Set of Feasible Solutions for Linear Programs
The set of all points that satisfy all the constraints of the model is called the FEASIBLE REGION.

25
Using a graphical presentation we can represent all the constraints, the objective function, and the three types of feasible points.

26
[Graph of the feasible region in the (X1, X2) plane: the plastic constraint 2X1 + X2 <= 1200, the production time constraint 3X1 + 4X2 <= 2400, the total production constraint X1 + X2 <= 800, and the production mix constraint X1 - X2 <= 450, with feasible and infeasible sides marked.]
There are three types of feasible points: interior points, boundary points, and extreme points.

27
Linear Programming: the Simplex Method
"...finding the maximum or minimum of linear functions in which many variables are subject to constraints." (dictionary.com)
A linear program is a "problem that requires the minimization of a linear form subject to linear constraints..." (Dantzig vii)

28
Important Note
Linear programming requires linear inequalities, in other words first-degree inequalities only.
Good: ax + by + cz < 3
Bad: ax^2 + log2(y) > 7

29
Let's look at an example...
A farm produces apples (x) and oranges (y). Each crop needs land, fertilizer, and time:
– 6 acres of land: 3x + y <= 6
– 6 tons of fertilizer: 2x + 3y <= 6
– 8-hour work day: x + 5y <= 8
Apples sell for twice as much as oranges, and we want to maximize profit (z): z = 2x + y.
We can't produce negative amounts: x >= 0, y >= 0.

30
Traditional Method
Graph the inequalities and look at the line we're trying to maximize.
x = 1.71, y = 0.86, z = 4.29 (exactly x = 12/7, y = 6/7, z = 30/7)

31
Problems with the graphical approach: What if there are more variables? What if we cannot eyeball the answer?

32
Simplex Method
– Developed by George B. Dantzig (1951)
– Need to convert the inequalities to equations
– Uses slack variables

33
Performing the Conversion
-z + 2x + y = 0 (objective equation)
s1 + x + 5y = 8
s2 + 2x + 3y = 6
s3 + 3x + y = 6
This gives an initial feasible solution.

34
More definitions
Non-basic variables: x, y
Basic variables: s1, s2, s3, z
Current solution: set the non-basic variables to 0; then -z + 2x + y = 0 gives z = 0. Valid, but not good!

35
Next step...
Select a non-basic variable to enter the basis:
– In -z + 2x + 1y = 0, x has the higher coefficient.
Select a basic variable to leave:
– s1 + 1x + 5y = 8, ratio 1/8
– s2 + 2x + 3y = 6, ratio 2/6
– s3 + 3x + y = 6, ratio 3/6
3/6 is the highest, so use the equation with s3. (This is equivalent to the usual minimum-ratio test: 6/3 is the smallest right-hand side per unit of x.)

36
New set of equations
Solve for x: x = 2 - (1/3)s3 - (1/3)y
Substitute into the other equations to get:
– -z - (2/3)s3 + (1/3)y = -4
– s1 - (1/3)s3 + (14/3)y = 6
– s2 - (2/3)s3 + (7/3)y = 2
– x + (1/3)s3 + (1/3)y = 2

37
Redefine everything...
Update variables. Non-basic: s3 and y. Basic: s1, s2, z, and x.
Current solution:
– -z - (2/3)s3 + (1/3)y = -4, so z = 4
– x + (1/3)s3 + (1/3)y = 2, so x = 2
– y = 0
Better, but not quite there.

38
Do it again!
Repeat this process. Stop repeating when the coefficients in the objective equation are all negative.
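The pivoting loop above can be sketched as a tableau simplex. This is a minimal sketch for maximization with <= constraints and nonnegative right-hand sides, assuming a bounded problem; one sign convention differs from the slides: instead of carrying -z through the equations, the bottom row holds the negated objective coefficients, so we stop when no entry in that row is negative.

```python
def simplex_max(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (all b >= 0, bounded)."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; bottom row: [-c | 0 | 0] (reduced costs).
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-cj for cj in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))          # slack variables start basic
    while True:
        # Entering variable: most negative reduced cost; stop if none.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break
        # Leaving variable: minimum-ratio test over positive pivot entries.
        rows = [i for i in range(m) if T[i][col] > 1e-9]
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[-1][-1]     # bottom-right entry is the optimal value

# The farm example: maximize z = 2x + y.
x, z = simplex_max([2.0, 1.0],
                   [[3.0, 1.0], [2.0, 3.0], [1.0, 5.0]],
                   [6.0, 6.0, 8.0])
```

After the first pivot the tableau reads z = 4, x = 2, matching the walkthrough above; one more pivot on y reaches the optimum x = 12/7, y = 6/7, z = 30/7.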

39
Improvements
– Different kinds of inequalities
– Minimization instead of maximization
– L. G. Khachiyan's algorithm proved linear programming solvable in polynomial time

40
Artificial Variable Technique (The Big-M Method)

41
Big-M Method of solving an LPP
The Big-M method of handling instances with artificial variables is the "commonsense approach". Essentially, the notion is to make the artificial variables, through their coefficients in the objective function, so costly or unprofitable that any feasible solution to the real problem would be preferred, unless the original instance possessed no feasible solutions at all. This means we assign the artificial variables objective-function coefficients that are very large in magnitude: very negative for a maximization problem, very positive for a minimization problem. Whatever this value, let us call it Big M. In fact, this notion is an old trick in optimization in general; we simply associate a penalty value with variables that we do not want to be part of an ultimate solution (unless such an outcome is unavoidable).

42
Indeed, the penalty is so costly that unless a variable's inclusion is warranted algorithmically, such variables will never be part of any feasible solution. This method removes artificial variables from the basis: we assign large, undesirable (unacceptable penalty) coefficients to the artificial variables from the objective function's point of view. If the objective function Z is to be minimized, then a very large positive price (penalty, M) is assigned to each artificial variable, and if Z is to be maximized, then a very large negative price is assigned. The penalty is designated +M for a minimization problem and -M for a maximization problem, with M > 0.

43
Example: Minimize Z = 600X1 + 500X2
subject to
2X1 + X2 >= 80
X1 + 2X2 >= 60
X1, X2 >= 0
Step 1: Convert the LP problem into a system of linear equations. We rewrite the constraint inequalities as equations by subtracting surplus variables and adding artificial variables, assigning them 0 and +M coefficients respectively in the objective function:
Z = 600X1 + 500X2 + 0.S1 + 0.S2 + M.A1 + M.A2
subject to
2X1 + X2 - S1 + A1 = 80
X1 + 2X2 - S2 + A2 = 60
X1, X2, S1, S2, A1, A2 >= 0

44
Step 2: Obtain a basic solution to the problem. We do this by setting X1 = X2 = S1 = S2 = 0, so that A1 = 80 and A2 = 60. These are the initial values of the artificial variables.
Step 3: Form the initial tableau. [Tableau figure not reproduced in this transcript.]

45
It is clear from the tableau that X2 will enter and A2 will leave the basis. Hence 2 is the key element in the pivotal column. The new row operations are:
R2(new) = R2(old) / 2
R1(new) = R1(old) - 1 * R2(new)

46
It is clear from the tableau that X1 will enter and A1 will leave the basis. Hence 3/2 is the key element in the pivotal column. The new row operations are:
R1(new) = R1(old) * 2/3
R2(new) = R2(old) - (1/2) * R1(new)

47
Since all the values of (Cj - Zj) are either zero or positive and both artificial variables have been removed from the basis, an optimum solution has been reached: X1 = 100/3, X2 = 40/3, and Z = 80,000/3.
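The tableau result can be cross-checked without carrying M through the arithmetic: since the minimum of a linear function over this feasible region is attained at a vertex, enumerating the intersections of the constraint boundaries suffices. A minimal sketch (a verification of the Big-M answer, not the Big-M method itself):

```python
from itertools import combinations

# Boundary lines a1*X1 + a2*X2 = c for the two constraints and the axes.
lines = [(2, 1, 80), (1, 2, 60), (1, 0, 0), (0, 1, 0)]

def feasible(x1, x2, eps=1e-9):
    return (2*x1 + x2 >= 80 - eps and x1 + 2*x2 >= 60 - eps
            and x1 >= -eps and x2 >= -eps)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1*b2 - a2*b1
    if abs(det) < 1e-12:
        continue                        # parallel lines: no vertex
    x1 = (c1*b2 - c2*b1) / det          # Cramer's rule
    x2 = (a1*c2 - a2*c1) / det
    if feasible(x1, x2):
        z = 600*x1 + 500*x2
        if best is None or z < best[0]:
            best = (z, x1, x2)

z, x1, x2 = best
# X1 = 100/3, X2 = 40/3, Z = 80000/3, matching the tableau result.
```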

48
Unit II: Dynamic Programming, Characteristics and Examples

49
Overview
What is dynamic programming? Examples. Applications.

50
What is Dynamic Programming?
A design technique for 'optimization' problems (a sequence of related decisions).
– 'Programming' does not mean coding in this context; it means solving by making a chart, or using an array to save intermediate steps. Some books call this 'memoization' (see below).
– Similar to divide and conquer, BUT subproblem solutions are SAVED and NEVER recomputed.
– Principle of optimality: the optimal solution to the problem contains optimal solutions to the subproblems. (Is this true for EVERYTHING?)

51
Characteristics
– Optimal substructure: unweighted shortest path? Unweighted longest simple path?
– Overlapping subproblems: what happens in recursion (divide and conquer) when this happens?
– Memoization (not a typo!): saving solutions of subproblems (as we did for Fibonacci) to avoid recomputation
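The Fibonacci example mentioned above makes both characteristics concrete: the naive recursion re-solves the same overlapping subproblems exponentially many times, while memoization or a bottom-up chart solves each once. A minimal sketch:

```python
from functools import lru_cache

def fib_naive(n):
    # Plain divide and conquer: recomputes the same subproblems
    # exponentially many times.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoized version: each subproblem is solved once and saved.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_table(n):
    # Bottom-up DP ("solve by making a chart"): fill an array instead.
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

All three agree on the values; only the running times differ (exponential for the naive version, linear for the other two).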

52
Examples
– Matrix chain multiplication
– Longest common subsequence (called the LCS problem)

53
Review of Technique
You have already applied dynamic programming and understand why it may result in a good algorithm:
– Fibonacci
– Ackermann
– Combinations

54
Principle of Optimality
Often called the "optimality condition". What it means in plain English: you apply the divide-and-conquer technique so the subproblems are SMALLER VERSIONS OF THE ORIGINAL PROBLEM. If you "optimize" the answers to the small problems, does that automatically mean that the solution to the big problem is also optimized? If the answer is yes, then DP applies to this problem.

55
Example 1
Assume your problem is to draw a straight line between two points A and B. You solve this by divide and conquer, drawing a line from A to the midpoint and from the midpoint to B. QUESTION: if you paste the two smaller lines together, will the resulting line from A to B be the shortest distance from A to B?

56
Example 2
Say you want to buy Halloween candy: 100 candy bars. You plan to do this by divide and conquer, buying 10 sets of 10 bars. Is this necessarily less expensive per bar than just buying 2 packages of 50? Or perhaps 1 package of 100?

57
How To Apply DP
There are TWO ways to apply dynamic programming:
– METHOD 1: solve the problem at hand recursively, notice where the same subproblem is being re-solved, and implement the algorithm as a TABLE (example: Fibonacci)
– METHOD 2: generate all feasible solutions to the problem but prune (eliminate) the solutions that cannot be optimal (example: shortest path)

58
Practice Is the Only Way to Learn This Technique
See the class webpage for homework. Do the problems in the textbook that were not assigned, for practice, even after the term ends. It took me two years after I took this course before I could apply DP in the real world.

59
Dynamic programming
Design technique, like divide-and-conquer.
Example: Longest Common Subsequence (LCS). Given two sequences x[1..m] and y[1..n], find a longest subsequence common to them both.
x: ABCBDAB
y: BDCABA
BCBA = LCS(x, y). Note it is "a" longest common subsequence, not "the": LCS(x, y) is functional notation, but not a function.

60
Review: Dynamic programming
– DP is a method for solving certain kinds of problems
– DP can be applied when the solution of a problem includes solutions to subproblems
– We need to find a recursive formula for the solution
– We can recursively solve subproblems, starting from the trivial case, and save their solutions in memory
– In the end we'll get the solution of the whole problem

61
Properties of a problem that can be solved with dynamic programming
– Simple subproblems: we should be able to break the original problem into smaller subproblems that have the same structure
– Optimal substructure: the solution to the problem must be a composition of subproblem solutions
– Subproblem overlap: optimal solutions to unrelated subproblems can contain subproblems in common

62
Review: Longest Common Subsequence (LCS)
Problem: how to find the longest pattern of characters that is common to two text strings X and Y.
Dynamic programming algorithm: solve subproblems until we get the final solution.
Subproblem: first find the LCS of prefixes of X and Y. This problem has optimal substructure: an LCS of two prefixes is always part of an LCS of the bigger strings.

63
Review: Longest Common Subsequence (LCS) continued
Define Xi, Yj to be prefixes of X and Y of length i and j; m = |X|, n = |Y|. We store the length of LCS(Xi, Yj) in c[i,j].
Trivial cases: LCS(X0, Yj) and LCS(Xi, Y0) are empty, so c[0,j] = c[i,0] = 0.
Recursive formula for c[i,j]:
c[i,j] = c[i-1,j-1] + 1 if x[i] = y[j]
c[i,j] = max(c[i-1,j], c[i,j-1]) otherwise
c[m,n] is the final solution.

64
Review: Longest Common Subsequence (LCS)
After we have filled the array c[ ], we can use this data to find the characters that constitute the Longest Common Subsequence.
The algorithm runs in O(m*n), which is much better than the brute-force algorithm's O(n * 2^m).
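The table fill and the backtrack can be sketched directly from the recursive formula; run on the lecture's example strings it recovers the LCS "BCBA" of length 4 (one of possibly several longest common subsequences, depending on how ties are broken):

```python
def lcs(x, y):
    """c[i][j] = length of an LCS of x[:i] and y[:j]; then backtrack."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Backtrack from c[m][n] to recover one LCS ("a", not "the").
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1          # tie-break: move up the table
        else:
            j -= 1
    return ''.join(reversed(out))

s = lcs("ABCBDAB", "BDCABA")   # the lecture's example: "BCBA"
```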

65
0-1 Knapsack problem
Given a knapsack with maximum capacity W, and a set S consisting of n items. Each item i has some weight wi and benefit value bi (all wi, bi and W are integer values).
Problem: how to pack the knapsack to achieve maximum total value of packed items?

66
0-1 Knapsack problem: a picture
Max weight: W = 20
Items (weight, benefit value): (2, 3), (3, 4), (4, 5), (5, 8), (9, 10)

67
0-1 Knapsack problem
The problem, in other words, is to find
max sum of bi*xi subject to sum of wi*xi <= W, with each xi in {0, 1}
The problem is called a "0-1" problem because each item must be entirely accepted or rejected. Another version of this problem is the "Fractional Knapsack Problem", where we can take fractions of items.

68
0-1 Knapsack problem: brute-force approach
Let's first solve this problem with a straightforward algorithm. Since there are n items, there are 2^n possible combinations of items. We go through all combinations and find the one with the most total value and with total weight less than or equal to W. The running time will be O(2^n).

69
0-1 Knapsack problem: brute-force approach
Can we do better? Yes, with an algorithm based on dynamic programming. We need to carefully identify the subproblems.
Let's try this: if items are labeled 1..n, then a subproblem would be to find an optimal solution for Sk = {items labeled 1, 2, ..., k}.

70
Defining a Subproblem
If items are labeled 1..n, then a subproblem would be to find an optimal solution for Sk = {items labeled 1, 2, ..., k}. This is a valid subproblem definition. The question is: can we describe the final solution (for Sn) in terms of the subproblems (Sk)? Unfortunately, we can't. An explanation follows.

71
Defining a Subproblem
Max weight: W = 20
For S4 = {(2,3), (3,4), (4,5), (5,8)}: take all four items; total weight 14, total benefit 20.
For S5 (adds the item (9,10)): the best packing takes (2,3), (4,5), (5,8), (9,10); total weight 20, total benefit 26.
The solution for S4 is not part of the solution for S5!

72
Defining a Subproblem (continued)
As we have seen, the solution for S4 is not part of the solution for S5. So our definition of a subproblem is flawed and we need another one! Let's add another parameter: w, the weight limit for each subset of items. The subproblem will then be to compute B[k,w].

73
Recursive Formula for subproblems
B[k,w] = B[k-1,w] if wk > w
B[k,w] = max(B[k-1,w], B[k-1,w-wk] + bk) otherwise
It means that the best subset of Sk that has total weight at most w is one of the two: (1) the best subset of Sk-1 with weight limit w, or (2) the best subset of Sk-1 with weight limit w - wk, plus item k.

74
Recursive Formula
The best subset of Sk that has total weight at most w either contains item k or not.
First case: wk > w. Item k can't be part of the solution, since if it were, the total weight would be > w, which is unacceptable.
Second case: wk <= w. Then item k can be in the solution, and we choose the case with the greater value.

75
0-1 Knapsack Algorithm
for w = 0 to W
    B[0,w] = 0
for i = 1 to n
    B[i,0] = 0
    for w = 1 to W
        if wi <= w                          // item i can be part of the solution
            if bi + B[i-1,w-wi] > B[i-1,w]
                B[i,w] = bi + B[i-1,w-wi]
            else
                B[i,w] = B[i-1,w]
        else
            B[i,w] = B[i-1,w]               // wi > w

76
Running time
The initialization loop is O(W). The main loop repeats the O(W) inner loop n times, so the total running time is O(n*W). Remember that the brute-force algorithm takes O(2^n).

77
Example
Let's run our algorithm on the following data:
n = 4 (number of items), W = 5 (max weight)
Items (weight, benefit): (2,3), (3,4), (4,5), (5,6)
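The algorithm above can be run on this data as a direct transcription, with a backtrack added to recover which items achieve the maximum (the extension mentioned at the end of this section):

```python
def knapsack(items, W):
    """items: list of (weight, benefit) pairs.
    Returns (best total benefit, chosen item numbers 1..n)."""
    n = len(items)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, bi = items[i - 1]
        for w in range(1, W + 1):
            if wi <= w and bi + B[i - 1][w - wi] > B[i - 1][w]:
                B[i][w] = bi + B[i - 1][w - wi]   # item i is in the best subset
            else:
                B[i][w] = B[i - 1][w]             # item i is left out
    # Backtrack: a row change at (i, w) means item i was taken.
    chosen, w = [], W
    for i in range(n, 0, -1):
        if B[i][w] != B[i - 1][w]:
            chosen.append(i)
            w -= items[i - 1][0]
    return B[n][W], sorted(chosen)

best, chosen = knapsack([(2, 3), (3, 4), (4, 5), (5, 6)], 5)
# best benefit 7, achieved by items 1 and 2 (weights 2 + 3 = 5)
```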

78
Example (continued)
The first loop (for w = 0 to W: B[0,w] = 0) zeroes row i = 0, and the initialization B[i,0] = 0 zeroes column w = 0. The main loop then fills the table row by row (i = 1..4, w = 1..5), each cell by the recursive formula. For instance, at i = 1, w = 1 we have w1 = 2 > w, so B[1,1] = B[0,1] = 0; at i = 2, w = 5 we have w - w2 = 2 and b2 + B[1,2] = 4 + 3 = 7 > B[1,5] = 3, so B[2,5] = 7. The completed table:

      w=0   1   2   3   4   5
i=0     0   0   0   0   0   0
i=1     0   0   3   3   3   3
i=2     0   0   3   4   4   7
i=3     0   0   3   4   5   7
i=4     0   0   3   4   5   7

The maximum benefit is B[4,5] = 7.

95
Comments
This algorithm only finds the maximum possible value that can be carried in the knapsack. To know which items make up this maximum value, an addition to the algorithm is necessary. See the LCS algorithm from the previous lecture for an example of how to extract this data from the table we built.

96
Conclusion
Dynamic programming is a useful technique for solving certain kinds of problems. When the solution can be recursively described in terms of partial solutions, we can store these partial solutions and re-use them as necessary.
Running time (dynamic programming algorithm vs. naive algorithm):
– LCS: O(m*n) vs. O(n * 2^m)
– 0-1 Knapsack problem: O(W*n) vs. O(2^n)

97
Unit III Network Models

98
Chapter Outline
12.1 Introduction
12.2 Minimal-Spanning Tree Technique
12.3 Maximal-Flow Technique
12.4 Shortest-Route Technique

99
Learning Objectives
Students will be able to:
– Connect all points of a network while minimizing total distance using the minimal-spanning tree technique.
– Determine the maximum flow through a network using the maximal-flow technique.
– Find the shortest path through a network using the shortest-route technique.
– Understand the important role of software in solving network problems.

100
Minimal-Spanning Tree Technique
Determines the set of arcs that connects all the points of the network while minimizing the total distance.

101
Minimal-Spanning Tree Steps
1. Select any node in the network.
2. Connect this node to the nearest node that minimizes the total distance.
3. Considering all of the nodes that are now connected, find and connect the nearest node that is not connected.
4. Repeat step 3 until all nodes are connected.
5. If there is a tie in step 3 and two or more unconnected nodes are equally near, arbitrarily select one and continue. A tie suggests that there might be more than one optimal solution.
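The steps above are the greedy rule known as Prim's algorithm: repeatedly attach the nearest not-yet-connected node. A minimal sketch using a heap of candidate arcs, run on a small hypothetical network (the distances below are invented for illustration, not the Lauderdale Construction data):

```python
import heapq

def minimal_spanning_tree(edges, start):
    """edges: list of (u, v, dist); returns (total distance, chosen arcs)."""
    adj = {}
    for u, v, d in edges:
        adj.setdefault(u, []).append((d, u, v))
        adj.setdefault(v, []).append((d, v, u))
    connected = {start}                  # step 1: select any node
    heap = list(adj[start])
    heapq.heapify(heap)
    total, tree = 0, []
    while heap and len(connected) < len(adj):
        d, u, v = heapq.heappop(heap)    # nearest candidate arc (steps 2-3)
        if v in connected:
            continue                     # would form a cycle, skip
        connected.add(v)
        total += d
        tree.append((u, v, d))
        for arc in adj[v]:               # step 4: repeat with the new node
            if arc[2] not in connected:
                heapq.heappush(heap, arc)
    return total, tree

# Hypothetical 5-node network:
total, tree = minimal_spanning_tree(
    [(1, 2, 3), (1, 3, 2), (2, 3, 2), (2, 4, 4), (3, 4, 5), (4, 5, 1)],
    start=1)
```

Ties (equal distances in the heap) are broken arbitrarily by pop order, matching step 5: a tie can mean more than one optimal tree of the same total distance.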

102
Minimal-Spanning Tree: Lauderdale Construction
[Network diagram of 8 nodes with arc distances; figure not reproduced in this transcript.]
Iterations 1 through 7 each connect the nearest unconnected node to the growing tree until all eight nodes are connected.
Minimum total distance: 16

107
The Maximal-Flow Technique
1. Pick any path (streets from west to east) with some flow.
2. Increase the flow (number of cars) as much as possible.
3. Adjust the flow capacity numbers on the path (streets).
4. Repeat the above steps until an increase in flow is no longer possible.
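The repeat-until-no-path loop above is the augmenting-path idea; a minimal sketch using breadth-first search to pick each path (the Edmonds-Karp variant). The 4-node capacity matrix below is invented for illustration, not the Waukesha data:

```python
from collections import deque

def maximal_flow(cap, s, t):
    """cap[u][v] = remaining capacity from u to v; s = source, t = sink."""
    n, flow = len(cap), 0
    while True:
        # Step 1: find any path with spare capacity (BFS from s to t).
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow                  # step 4: no path left, done
        # Step 2: increase flow by the smallest capacity on the path.
        add, v = float('inf'), t
        while v != s:
            u = parent[v]
            add = min(add, cap[u][v])
            v = u
        # Step 3: adjust capacities (the reverse arc lets flow be undone).
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= add
            cap[v][u] += add
            v = u
        flow += add

cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
f = maximal_flow(cap, 0, 3)   # west point = node 0, east point = node 3
```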

108
Maximal-Flow: Road Network for Waukesha
[Directed network of 6 nodes with street capacities from the west point to the east point; figure not reproduced in this transcript.]
The iterations add flows of 2, 1, and 2 cars along successive augmenting paths, subtracting the amounts from the remaining capacities each time, for a maximal flow of 5.

113
The Shortest-Route Technique
1. Find the nearest node to the origin (plant). Put the distance in a box by the node. In some cases, several paths will have to be checked to find the nearest node.
2. Repeat this process until you have gone through the entire network. The last distance, at the ending node, is the distance of the shortest route.
Note that the distances placed in the boxes by each node are the shortest routes to those nodes. These distances are used as intermediate results in finding the next nearest node.
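Labeling each node with its shortest distance, nearest node first, is Dijkstra's algorithm. A minimal sketch on a hypothetical plant-to-warehouse network in the spirit of the Ray Design example; the exact arcs below are assumptions chosen to reproduce the labels 100, 150, 190, 290, not taken from the slides:

```python
import heapq

def shortest_route(edges, source, target):
    """edges: list of (u, v, dist) two-way roads.
    Returns the shortest distance from source to target."""
    adj = {}
    for u, v, d in edges:
        adj.setdefault(u, []).append((v, d))
        adj.setdefault(v, []).append((u, d))
    dist = {source: 0}                  # the "boxes by each node"
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)      # nearest unfinished node first
        if d > dist.get(u, float('inf')):
            continue                    # stale heap entry, skip
        for v, w in adj[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist[target]

roads = [(1, 2, 100), (1, 3, 200), (2, 3, 50), (2, 4, 200),
         (3, 5, 40), (4, 6, 150), (5, 6, 100)]
d = shortest_route(roads, 1, 6)   # 1 -> 2 -> 3 -> 5 -> 6 = 100+50+40+100
```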

114
Shortest-Route Problem: Ray Design, Inc.
[Road network from Ray's plant (node 1) to the warehouse (node 6), with intermediate nodes and road distances of 100, 150, 100, 40, 50, and 200 miles; figure not reproduced in this transcript.]
The iterations label the nearest nodes in turn with shortest distances of 100, 150, and 190 miles; the fourth iteration reaches the warehouse, so the shortest route is 290 miles.

119
Project Management - CPM/PERT

120
Project Scheduling and Control Techniques
– Gantt Chart
– Critical Path Method (CPM)
– Program Evaluation and Review Technique (PERT)

121
History of CPM/PERT
Critical Path Method (CPM)
– E. I. du Pont de Nemours & Co. (1957), for construction of a new chemical plant and maintenance shut-downs
– Deterministic task times
– Activity-on-node network construction
– Repetitive nature of jobs
Program Evaluation and Review Technique (PERT)
– U.S. Navy (1958), for the POLARIS missile program
– Multiple task time estimates (probabilistic nature)
– Activity-on-arrow network construction
– Non-repetitive jobs (R&D work)

122
Project Network
Network analysis is the general name given to certain specific techniques which can be used for the planning, management and control of projects. It uses nodes and arrows.
– Arrow: leads from tail to head directionally; indicates an ACTIVITY, a time-consuming effort that is required to perform part of the work.
– Node: represented by a circle; indicates an EVENT, a point in time where one or more activities start and/or finish.
– Activity: a task or a certain amount of work required in the project; requires time to complete; represented by an arrow.
– Dummy activity: indicates only precedence relationships; does not require any time or effort.

123
123 CPM calculation Path – A connected sequence of activities leading from the starting event to the ending event Critical Path – The longest path (time); determines the project duration Critical Activities – All of the activities that make up the critical path

124
124 Forward Pass Earliest start time (ES) – earliest time an activity can start – ES = maximum EF of immediate predecessors Earliest finish time (EF) – earliest time an activity can finish – EF = ES + t Backward Pass Latest finish time (LF) – latest time an activity can be completed without delaying the critical path time – LF = minimum LS of immediate successors Latest start time (LS) – latest time an activity can start without delaying the critical path time – LS = LF - t

125
125 CPM analysis Draw the CPM network Analyze the paths through the network Determine the float for each activity – Compute the activity's float: float = LS - ES = LF - EF – Float is the maximum amount of time that an activity can be delayed before it becomes critical, i.e., delays completion of the project The critical path is the sequence of activities and events with no slack (zero float) – the longest path through the network The project duration is the minimum project completion time
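The forward pass, backward pass and float computation can be sketched directly in code. Run on the 10-activity example that follows, it gives a project duration of 33 with critical path b-d-j:

```python
def cpm(activities):
    """activities: {name: (duration, [immediate predecessors])}."""
    es, ef = {}, {}
    remaining = dict(activities)
    while remaining:                                  # forward pass
        for name in list(remaining):
            dur, preds = remaining[name]
            if all(p in ef for p in preds):
                es[name] = max((ef[p] for p in preds), default=0)
                ef[name] = es[name] + dur             # EF = ES + t
                del remaining[name]
    duration = max(ef.values())                       # project completion time
    succs = {n: [m for m in activities if n in activities[m][1]]
             for n in activities}
    ls, lf, flt = {}, {}, {}
    for name in sorted(activities, key=lambda n: -ef[n]):  # backward pass
        lf[name] = min((ls[s] for s in succs[name]), default=duration)
        ls[name] = lf[name] - activities[name][0]     # LS = LF - t
        flt[name] = ls[name] - es[name]               # total float = LS - ES
    critical = [n for n in activities if flt[n] == 0]
    return duration, flt, critical

acts = {"a": (6, []), "b": (8, []), "c": (5, []),
        "d": (13, ["b"]), "e": (9, ["c"]), "f": (15, ["a"]),
        "g": (17, ["a"]), "h": (9, ["f"]), "i": (6, ["g"]),
        "j": (12, ["d", "e"])}
duration, total_float, critical = cpm(acts)
print(duration, critical)   # 33 ['b', 'd', 'j']
```

Processing activities in decreasing EF order is a valid reverse topological order here, since every successor finishes later than its predecessors.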

126
Consider the table below summarizing the details of a project involving 10 activities

Activity  Immediate predecessor  Duration
a         -                      6
b         -                      8
c         -                      5
d         b                      13
e         c                      9
f         a                      15
g         a                      17
h         f                      9
i         g                      6
j         d, e                   12

Construct the CPM network. Determine the critical path and project completion time. Also compute total floats and free floats for the non-critical activities 126

127
127 CPM Example: CPM Network a, 6 f, 15 b, 8 c, 5 e, 9 d, 13 g, 17 h, 9 i, 6 j, 12

128
128 CPM Example ES and EF Times a, 6 f, 15 b, 8 c, 5 e, 9 d, 13 g, 17 h, 9 i, 6 j, 12 [Network diagram: forward pass begun – ES/EF times 0-6, 0-8 and 0-5 entered at the starting activities]

129
129 CPM Example ES and EF Times a, 6 f, 15 b, 8 c, 5 e, 9 d, 13 g, 17 h, 9 i, 6 j, 12 [Network diagram: ES/EF annotations extended to the middle activities]

130
130 CPM Example ES and EF Times a, 6 f, 15 b, 8 c, 5 e, 9 d, 13 g, 17 h, 9 i, 6 j, 12 [Network diagram: ES/EF annotations completed] Project's EF = 33

131
131 CPM Example LS and LF Times a, 6 f, 15 b, 8 c, 5 e, 9 d, 13 g, 17 h, 9 i, 6 j, 12 [Network diagram: backward pass begun – LF = 33 entered at the finishing activities]

132
132 CPM Example LS and LF Times a, 6 f, 15 b, 8 c, 5 e, 9 d, 13 g, 17 h, 9 i, 6 j, 12 [Network diagram: LS/LF annotations completed for all activities]

133
133 CPM Example Float a, 6 f, 15 b, 8 c, 5 e, 9 d, 13 g, 17 h, 9 i, 6 j, 12 [Network diagram annotated with the total float of each activity; the zero-float activities form the critical path]

134
134 CPM Example Critical Path a, 6 f, 15 b, 8 c, 5 e, 9 d, 13 g, 17 h, 9 i, 6 j, 12 [Network diagram: critical path b → d → j highlighted; project duration 33]

135
135 PERT PERT is based on the assumption that an activity's duration follows a probability distribution instead of being a single value Three time estimates are required to compute the parameters of an activity's duration distribution: – pessimistic time (t p ) - the time the activity would take if things did not go well – most likely time (t m ) - the consensus best estimate of the activity's duration – optimistic time (t o ) - the time the activity would take if things did go well Mean (expected time): t e = (t o + 4 t m + t p ) / 6 Variance: V t = σ² = [ (t p - t o ) / 6 ]²
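The three-estimate formulas map to a few lines of code. As a check, activity A of the later example (estimates 4, 6, 8 hours) gives an expected time of 6 and a variance of 4/9:

```python
def pert_estimates(t_o, t_m, t_p):
    """Mean and variance of an activity's duration from optimistic,
    most likely and pessimistic estimates (beta-distribution assumption)."""
    t_e = (t_o + 4 * t_m + t_p) / 6      # expected time
    var = ((t_p - t_o) / 6) ** 2         # variance
    return t_e, var

print(pert_estimates(4, 6, 8))   # activity A: t_e = 6.0, variance = 4/9
```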

136
136 PERT analysis Draw the network. Analyze the paths through the network and find the critical path. The length of the critical path is the mean of the project duration probability distribution which is assumed to be normal The standard deviation of the project duration probability distribution is computed by adding the variances of the critical activities (all of the activities that make up the critical path) and taking the square root of that sum Probability computations can now be made using the normal distribution table.

137
137 Probability computation Determine probability that project is completed within specified time Z = (x - μ) / σ where μ = t p = project mean time σ = project standard deviation x = (proposed) specified time

138
138 Normal Distribution of Project Time [Figure: normal curve with mean μ = t p ; the shaded area up to the specified time x, at z standard deviations from the mean, gives the probability of completing by x]

139
139 PERT Example

Activity  Immed. Predec.  Optimistic Time (Hr.)  Most Likely Time (Hr.)  Pessimistic Time (Hr.)
A         --              4                      6                       8
B         --              1                      4.5                     5
C         A               3                      3                       3
D         A               4                      5                       6
E         A               0.5                    1                       1.5
F         B,C             3                      4                       5
G         B,C             1                      1.5                     5
H         E,F             5                      6                       7
I         E,F             2                      5                       8
J         D,H             2.5                    2.75                    4.5
K         G,I             3                      5                       7

140
140 PERT Example A D C B F E G I H K J PERT Network

141
141 PERT Example Activity Expected Time Variance A 6 4/9 B 4 4/9 C 3 0 D 5 1/9 E 1 1/36 F 4 1/9 G 2 4/9 H 6 1/9 I 5 1 J 3 1/9 K 5 4/9

142
142 PERT Example Activity ES EF LS LF Slack A 0 6 0 6 0 *critical B 0 4 5 9 5 C 6 9 6 9 0 * D 6 11 15 20 9 E 6 7 12 13 6 F 9 13 9 13 0 * G 9 11 16 18 7 H 13 19 14 20 1 I 13 18 13 18 0 * J 19 22 20 23 1 K 18 23 18 23 0 *

143
143 PERT Example V path = V A + V C + V F + V I + V K = 4/9 + 0 + 1/9 + 1 + 4/9 = 2 σ path = √2 = 1.414 z = (24 - 23)/1.414 = 0.71 From the Standard Normal Distribution table: P(z < 0.71) = 0.5 + 0.2612 = 0.7612
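Instead of a table lookup, the standard normal CDF can be evaluated directly with math.erf. A sketch using the example's numbers (project mean 23, σ = √2, specified time 24):

```python
from math import erf, sqrt

def prob_completion(mean, sigma, x):
    """P(project duration <= x) under the normal approximation."""
    z = (x - mean) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF at z

p = prob_completion(23, sqrt(2), 24)
print(round(p, 4))   # about 0.7602; the two-decimal table value z = 0.71 gives 0.7612
```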

144
PROJECT COST

145
145 Cost consideration in project Project managers may have the option or requirement to crash the project, or accelerate the completion of the project. This is accomplished by reducing the length of the critical path(s). The length of the critical path is reduced by reducing the duration of the activities on the critical path. If each activity requires the expenditure of an amount of money to reduce its duration by one unit of time, then the project manager selects the least cost critical activity, reduces it by one time unit, and traces that change through the remainder of the network. As a result of a reduction in an activity’s time, a new critical path may be created. When there is more than one critical path, each of the critical paths must be reduced. If the length of the project needs to be reduced further, the process is repeated.

146
146 Project Crashing Crashing – reducing project time by expending additional resources Crash time – the amount of time by which an activity is reduced Crash cost – the cost of reducing activity time Goal – reduce project duration at minimum cost

147
147 Time-Cost Relationship Crashing costs increase as project duration decreases Indirect costs increase as project duration increases Reduce project length as long as crashing costs are less than indirect costs [Time-cost tradeoff graph: direct cost falls and indirect cost rises as project time increases; the minimum of the total project cost curve marks the optimal project time]

148
148 Benefits of CPM/PERT Useful at many stages of project management Mathematically simple Give critical path and slack time Provide project documentation Useful in monitoring costs CPM/PERT can answer the following important questions: How long will the entire project take to be completed? What are the risks involved? Which are the critical activities or tasks in the project which could delay the entire project if they were not completed on time? Is the project on schedule, behind schedule or ahead of schedule? If the project has to be finished earlier than planned, what is the best way to do this at the least cost?

149
149 Limitations to CPM/PERT Clearly defined, independent and stable activities Specified precedence relationships Overemphasis on critical paths Deterministic CPM model Activity time estimates are subjective and depend on judgment PERT assumes a beta distribution for these time estimates, but the actual distribution may be different PERT consistently underestimates the expected project completion time due to alternate paths becoming critical To overcome this limitation, Monte Carlo simulations can be performed on the network to eliminate the optimistic bias

150
150 PERT vs CPM

CPM: uses an activity-oriented network. PERT: uses an event-oriented network.
CPM: durations of activities may be estimated with a fair degree of accuracy. PERT: estimates of time for activities are not so accurate and definite.
CPM: used extensively in construction projects. PERT: used mostly in research and development projects, particularly projects of a non-repetitive nature.
CPM: a deterministic concept is used. PERT: a probabilistic model concept is used.
CPM: can control both time and cost when planning. PERT: is basically a tool for planning.
CPM: cost optimization is given prime importance; the time for completion of the project depends upon cost optimization. The cost is not directly proportional to time, so cost is the controlling factor. PERT: it is assumed that cost varies directly with time, so attention is given to minimizing time so that minimum cost results; time is the controlling factor.

151
Unit IV Inventory Management The objective of inventory management is to strike a balance between inventory investment and customer service

152
Inventory control It means maintaining an adequate quantity and kind of stock, so that materials are available whenever required and wherever required. Scientific inventory control results in an optimal balance

153
What is inventory? Inventory is the raw materials, component parts, work-in-process, or finished products that are held at a location in the supply chain.

154
Basic inventory model [Diagram: input from the Material Management department flows into inventory (money tied up in goods in stores, work-in-progress, finished products, equipment, etc.), which supplies output to the Production department]

156
Zero Inventory? Reducing amounts of raw materials and purchased parts and subassemblies by having suppliers deliver them directly. Reducing the amount of works-in process by using just-in-time production. Reducing the amount of finished goods by shipping to markets as soon as possible.

157
Importance of Inventory One of the most expensive assets of many companies representing as much as 50% of total invested capital Operations managers must balance inventory investment and customer service

158
FUNCTIONS OF INVENTORY To meet anticipated demand. To smooth production requirements. To decouple operations. [Diagram: inventory buffers between the supply process and product demand]

159
Functions Of Inventory (Cont’d) To protect against stock-outs. To take advantage of order cycles. To help hedge against price increases. To permit operations. To take advantage of quantity discounts.

160
Types of Inventory Raw material Purchased but not processed Work-in-process Undergone some change but not completed A function of cycle time for a product Maintenance/repair/operating (MRO) Necessary to keep machinery and processes productive Finished goods Completed product awaiting shipment

161
The Material Flow Cycle Input → wait for inspection → wait to be moved → move time → wait in queue for operator → setup time → run time → Output. Run time is only about 5% of the total cycle time; the remaining 95% is spent waiting and moving.

162
Safety Stock Safety stock = (safety factor z)(std deviation in lead-time demand) Read z from the Normal table for a given service level [Figure: trade-off between service level and probability of stock-out]
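The safety-stock formula maps directly to code; statistics.NormalDist supplies the z value in place of a printed Normal table. The numbers here are hypothetical (95% service level, lead-time demand standard deviation of 20 units):

```python
from statistics import NormalDist

def safety_stock(service_level, sigma_lt_demand):
    """Safety stock = (safety factor z) x (std deviation in lead-time demand)."""
    z = NormalDist().inv_cdf(service_level)   # z from the Normal table
    return z * sigma_lt_demand

print(round(safety_stock(0.95, 20), 1))   # z ~ 1.645, so about 32.9 units
```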

163
Average Inventory = (Order Qty)/2 + Safety Stock [Diagram: sawtooth inventory level over time – an order is placed, received after the lead time, and the order quantity is drawn down toward the safety stock (SS); average inventory sits at EOQ/2 + SS]

164
Managing Inventory 1.How inventory items can be classified 2.How accurate inventory records can be maintained

165
Inventory Models for Independent Demand 1.Basic economic order quantity 2.Production order quantity 3.Quantity discount model Need to determine when and how much to order

166
Economic order of quantity EOQ = Average Monthly Consumption × Lead Time [in months] + Buffer Stock – Stock on hand

167
Re-order level: stock level at which fresh order is placed. Average consumption per day x lead time + buffer stock Lead time: Duration time between placing an order & receipt of material Ideal – 2 to 6 weeks.
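The re-order level rule above is a one-liner in code. The figures used are hypothetical (consumption of 50 units/day, a 14-day lead time, a buffer of 200 units):

```python
def reorder_level(avg_daily_consumption, lead_time_days, buffer_stock):
    """Stock level at which a fresh order is placed:
    average consumption per day x lead time + buffer stock."""
    return avg_daily_consumption * lead_time_days + buffer_stock

print(reorder_level(50, 14, 200))   # 900 units
```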

168
Basic EOQ Model 1.Demand is known, constant, and independent 2.Lead time is known and constant 3.Receipt of inventory is instantaneous and complete 4.Quantity discounts are not possible 5.Only variable costs are setup and holding 6.Stockouts can be completely avoided Important assumptions

169
An EOQ Example Determine the optimal number of needles to order D = 1,000 units S = $10 per order H = $.50 per unit per year Q* = √(2DS/H) = √(2(1,000)(10)/0.50) = √40,000 = 200 units

170
An EOQ Example Determine the optimal number of needles to order D = 1,000 units Q* = 200 units S = $10 per order H = $.50 per unit per year Expected number of orders N = Demand / Order quantity = D/Q* = 1,000/200 = 5 orders per year

171
An EOQ Example Determine the optimal number of needles to order D = 1,000 units Q* = 200 units S = $10 per order N = 5 orders per year H = $.50 per unit per year Expected time between orders T = (Number of working days per year)/N = 250/5 = 50 days between orders
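The three EOQ slides above can be reproduced in one short sketch (D = 1,000 units, S = $10 per order, H = $0.50 per unit per year, 250 working days per year):

```python
from math import sqrt

def eoq(D, S, H, working_days=250):
    q = sqrt(2 * D * S / H)     # Q* = sqrt(2DS/H)
    n = D / q                   # expected number of orders per year
    t = working_days / n        # expected time between orders
    return q, n, t

q, n, t = eoq(1000, 10, 0.50)
print(q, n, t)   # 200.0 units, 5.0 orders/year, 50.0 days between orders
```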

172
Classification of Materials for Inventory Control

Classification  Criteria
A-B-C           Annual value of consumption of the items
V-E-D           Critical nature of the components with respect to products
H-M-L           Unit price of material
F-S-N           Issue from stores
S-D-E           Purchasing problems in regard to availability
S-O-S           Seasonality
G-O-L-F         Channel for procuring the material
X-Y-Z           Inventory value of items stored

173
Relevant Inventory Costs Holding costs - the costs of holding or “carrying” inventory over time Ordering costs - the costs of placing an order and receiving goods Setup costs - the cost to prepare a machine or process for manufacturing an order

174
Ordering Costs Stationery Clerical and processing, salaries/rentals Postage Processing of bills Staff work in expediting/receiving/inspection and documentation

175
Holding/Carrying Costs Storage space (rent/depreciation) Property tax on warehousing Insurance Deterioration/Obsolescence Material handling and maintenance, equipment Stock taking, security and documentation Capital blocked (interest/opportunity cost) Quality control

176
Stock out Costs Loss of business / profit / market / goodwill Additional expenditure due to urgency of purchases a) telegraph / telephone charges b) purchase at premium c) air transport charges Loss of labor hours

177
Balancing Carrying against Ordering Costs

178
Unit V Queuing Theory

179
Outline Introduction Single server model Multi server model

180
Introduction Involves the mathematical study of queues or waiting line. The formulation of queues occur whenever the demand for a service exceeds the capacity to provide that service. Decisions regarding the amount of capacity to provide must be made frequently in industry and elsewhere. Queuing theory provides a means for decision makers to study and analyze characteristics of the service facility for making better decisions.

181
Basic structure of queuing model Customers requiring service are generated over time by an input source. These customers enter the queuing system and join a queue. At certain times, a member of the queue is selected for service by some rule known as the service discipline. The required service is then performed for the customer by the service mechanism, after which the customer leaves the queuing system

182
The basic queuing process [Diagram: customers generated by an input source join the queue; the service mechanism serves them; served customers leave the queuing system]

183
Characteristics of queuing models Input or arrival (interarrival) distribution Output or departure (service) distribution Service channels Service discipline Maximum number of customers allowed in the system Calling source

184
Kendall and Lee’s Notation Kendall and Lee introduced a useful notation representing the 6 basic characteristics of a queuing model. Notation: a/b/c/d/e/f where a = arrival (or interarrival) distribution b = departure (or service time) distribution c = number of parallel service channels in the system d = service discipline e = maximum number allowed in the system (service + waiting) f = calling source

185
Conventional Symbols for a, b M = Poisson arrival or departure distribution (or equivalently exponential distribution or service times distribution) D = Deterministic interarrival or service times Ek = Erlangian or gamma interarrival or service time distribution with parameter k GI = General independent distribution of arrivals (or interarrival times) G = General distribution of departures (or service times)

186
Conventional Symbols for d FCFS = First come, first served LCFS = Last come, first served SIRO = Service in random order GD = General service discipline

187
Transient and Steady States Transient state The system is in this state when its operating characteristics vary with time. Occurs at the early stages of the system’s operation where its behavior is dependent on the initial conditions. Steady state The system is in this state when the behavior of the system becomes independent of time. Most attention in queuing theory analysis has been directed to the steady state results.

188
Queuing Model Symbols n = Number of customers in the system s = Number of servers pn(t) = Transient state probabilities of exactly n customers in the system at time t pn = Steady state probabilities of exactly n customers in the system λ = Mean arrival rate (number of customers arriving per unit time) μ = Mean service rate per busy server (number of customers served per unit time)

189
Queuing Model Symbols (Cont’d) ρ = λ/μ = Traffic intensity W = Expected waiting time per customer in the system Wq = Expected waiting time per customer in the queue L = Expected number of customers in the system Lq = Expected number of customers in the queue

190
Relationship Between L and W If λ n is a constant λ for all n, it can be shown that L = λW L q = λW q If λ n are not constant, then λ in the above equations can be replaced by λ̄, the average arrival rate over the long run. If μ n is a constant μ for all n, then W = W q + 1/μ
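These relationships are easy to sanity-check numerically. The rates below are hypothetical, and W is taken from the single-server M/M/1 result W = 1/(μ − λ):

```python
lam, mu = 4.0, 6.0        # hypothetical: 4 arrivals/hr, 6 services/hr
W = 1 / (mu - lam)        # M/M/1 expected time in system
L = lam * W               # L = lambda * W  (Little's formula)
Wq = W - 1 / mu           # from W = Wq + 1/mu
Lq = lam * Wq             # Lq = lambda * Wq
print(L, Wq, Lq)          # L = 2 customers; Wq = 1/3 hr; Lq = 4/3 customers
```

Finding any one of L, W, Lq, Wq analytically pins down the other three, exactly as the next slide notes.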

191
Relationship Between L and W (cont’d) These relationships are important because: They enable all four of the fundamental quantities L, W, L q and W q to be determined as long as one of them is found analytically. The expected queue lengths are much easier to find than the expected waiting times when solving a queuing model from basic principles.

192
Single server queuing models M/M/1/FCFS/∞/∞ Model When the mean arrival rates λ n and mean service rates μ n are all constant (λ n = λ, μ n = μ), the steady state probabilities are p n = (1 - ρ)ρ^n, where ρ = λ/μ < 1

193
Single server queuing models (cont’d) Consequently L = ρ/(1 - ρ) = λ/(μ - λ) and L q = ρ²/(1 - ρ) = λ²/[μ(μ - λ)]

194
Single server queuing models (cont’d) By Little’s formula, W = L/λ = 1/(μ - λ) and W q = L q /λ = ρ/(μ - λ)
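A sketch collecting the standard M/M/1 steady-state measures (valid only when ρ = λ/μ < 1); the rates in the example call are hypothetical:

```python
def mm1_metrics(lam, mu):
    """Steady-state measures for the M/M/1/FCFS/inf/inf queue."""
    if lam >= mu:
        raise ValueError("steady state requires rho = lam/mu < 1")
    rho = lam / mu                        # traffic intensity
    L = rho / (1 - rho)                   # expected number in system
    Lq = rho ** 2 / (1 - rho)             # expected number in queue
    W = 1 / (mu - lam)                    # expected time in system
    Wq = rho / (mu - lam)                 # expected time in queue
    p = lambda n: (1 - rho) * rho ** n    # P(exactly n customers in system)
    return L, Lq, W, Wq, p

L, Lq, W, Wq, p = mm1_metrics(3, 4)       # hypothetical lam = 3, mu = 4
print(L, Lq, W, Wq, p(0))   # 3.0 2.25 1.0 0.75 0.25
```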

195
Multi server queuing models M/M/s/FCFS/∞/∞ Model When the mean arrival rates λ n and mean service rates μ n are all constant, the rate diagram has arrival rate λ in every state, and service rate nμ when n ≤ s customers are in the system and sμ when n > s

196
Multi server queuing models (cont’d)
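The M/M/s formulas, which were lost from the slide, can be sketched via the Erlang C probability that an arrival must wait; the example rates are hypothetical (λ = 3, μ = 4, s = 2):

```python
from math import factorial

def mms_metrics(lam, mu, s):
    """Erlang C sketch for the M/M/s/FCFS queue."""
    a = lam / mu                          # offered load
    rho = a / s                           # per-server utilization
    if rho >= 1:
        raise ValueError("steady state requires rho = lam/(s*mu) < 1")
    p0 = 1 / (sum(a ** n / factorial(n) for n in range(s))
              + a ** s / (factorial(s) * (1 - rho)))
    C = a ** s / (factorial(s) * (1 - rho)) * p0   # P(arrival waits), Erlang C
    Lq = C * rho / (1 - rho)              # expected number in queue
    Wq = Lq / lam                         # via Little's formula
    return C, Lq, Wq

C, Lq, Wq = mms_metrics(3, 4, 2)
print(round(C, 4), round(Lq, 4), round(Wq, 4))
```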

198
Thank You 198
