
1 Introduction to Optimization
Dr. Md. Golam Hossain, Professor, Department of Statistics, University of Rajshahi, Rajshahi-6205, Bangladesh

2 Module Outline
- Mathematical Concepts for Optimization
- Introduction to Linear Programming
- Graphical Method
- Simplex Method for Maximization (constraints of type "≤" only)
- Simplex Method for Minimization or Maximization (constraints of type "≥" or mixed)
- Duality in Linear Programming

3 Mathematical Concepts for Optimization
In mathematics, statistics, the empirical sciences, computer science and management science, mathematical optimization (alternatively, optimization or mathematical programming) is the selection of a best element (with regard to some criterion) from a set of available alternatives. More generally, optimization means finding the "best available" values of some objective function over a defined domain, for a variety of different types of objective functions and domains.

4 Optimization Problem
An optimization problem can be represented in the following way:
Given: a function f : A → R from some set A to the real numbers.
Sought: an element x0 in A such that f(x0) ≤ f(x) for all x in A (minimization), or such that f(x0) ≥ f(x) for all x in A (maximization).
Such a formulation is called an optimization problem or a mathematical programming problem.

5 We say that a function f(x) has a relative maximum value at x = a if f(a) is greater than any value in its immediate neighborhood. Similarly, f(x) has a relative minimum value at x = b if f(b) is less than any value in its immediate neighborhood. The value of the function (the value of y) at either a maximum or a minimum is called an extreme value.

6 Sufficient Conditions
We can now state sufficient conditions for extreme values of a function at a critical value a:
- The function has a minimum value at x = a if f′(a) = 0 and f″(a) is a positive number.
- The function has a maximum value at x = a if f′(a) = 0 and f″(a) is a negative number.
Example. Let f(x) = x² − 6x + 5. Are there any critical values, i.e. any turning points? If so, do they determine a maximum or a minimum? And what are the coordinates of that maximum or minimum on the graph?
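The example can be checked numerically. A small sketch (my own illustration, not part of the slides): locate the critical value of f(x) = x² − 6x + 5 by bisection on a finite-difference approximation of f′, then apply the second-derivative test.

```python
# Numerical second-derivative test for f(x) = x^2 - 6x + 5.
# Analytically: f'(x) = 2x - 6, critical point x = 3, f(3) = -4, f''(3) = 2 > 0.

def f(x):
    return x**2 - 6*x + 5

def d(fn, x, h=1e-6):
    """Central-difference approximation of fn'(x)."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

# Locate the root of f' by bisection on [0, 6] (f'(0) < 0 < f'(6)).
lo, hi = 0.0, 6.0
for _ in range(60):
    mid = (lo + hi) / 2
    if d(f, mid) < 0:
        lo = mid
    else:
        hi = mid
x_crit = (lo + hi) / 2

second = d(lambda t: d(f, t), x_crit, h=1e-4)  # approximates f''(x_crit)
print(x_crit, f(x_crit), "minimum" if second > 0 else "maximum")
```

The script confirms the analytic answer: a turning point at x = 3 with f(3) = −4, and f″ > 0, so (3, −4) is a minimum.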

7 History of Optimization
Greek mathematicians solved some optimization problems related to their geometrical studies.
Euclid ["Father of Geometry"]. Residence: Alexandria, Egypt. Field: Mathematics. Known for: Euclidean geometry, Euclid's Elements. Contribution: around 300 BC Euclid proved that the square has the greatest area among all rectangles with a given total edge length.
Heron. Residence: Alexandria, Roman Egypt. Field: Mathematics. Contribution: around 100 BC Heron proved that light reflecting from a mirror travels between two points along the path of shortest length.

8 17th and 18th Centuries
Before the invention of the calculus of variations, only isolated optimization problems were investigated.
Johannes Kepler (Germany). Field: Astronomy, Astrology, Mathematics and Natural Philosophy. Contribution: in 1615 he calculated the optimal dimensions of a wine barrel.
Pierre de Fermat (France). Field: Mathematics and Law. Contribution: in 1646 P. de Fermat showed that light travels between two points in minimal time.

9 I. Newton (1660s) and G. W. von Leibniz (1670s) created mathematical analysis, which forms the basis of calculus. Some isolated finite optimization problems were also considered.
Isaac Newton (England). Field: Physics, Mathematics, Astronomy, Natural Philosophy, Alchemy, Christian Theology. Contribution: in 1687 Newton studied the body of minimal resistance.
Gottfried Wilhelm Leibniz (Germany). Field: Mathematics, Metaphysics, Theodicy. Contribution: in the 1670s he studied some isolated finite optimization problems in calculus.

10 Leonhard Euler (Switzerland). Field: Mathematics and Physics. Contribution: in 1740 he was the first researcher to publish research on the general theory of the calculus of variations.
Joseph-Louis Lagrange (France). Field: Mathematics and Mathematical Physics. Contribution: in 1760 he formulated Plateau's problem, the problem of minimal surfaces.
Gaspard Monge (France). Field: Mathematics and Engineering Education. Contribution: in 1784 G. Monge investigated a combinatorial optimization problem known as the transportation problem.

11 19th Century
The first optimization algorithms are presented. K. T. W. Weierstrass, J. Steiner, W. R. Hamilton and C. G. J. Jacobi further develop the calculus of variations.
Karl Theodor Wilhelm Weierstrass (Germany). Field: Mathematics.
William Rowan Hamilton (Ireland) [1805–1865]. Field: Physics, Astronomy and Mathematics.
Carl Gustav Jacob Jacobi (Germany). Field: Mathematics.

12 Jean Baptiste Joseph Fourier (France). Fields: Mathematics, Physics and History. Contribution: in 1826 J. B. J. Fourier formulated an LP problem for solving problems arising in mechanics and probability theory.
Léon Walras (France). Field: Economics, Marginalism. Contribution: Walras and Cournot shifted the focus of economists to utility-maximizing individuals; optimization became an integral part of economic theory.

13 20th Century
Johan Ludwig William Valdemar Jensen (Denmark). Field: Mathematics. Contribution: he introduced convex functions.
Harris Hancock (America). Field: Mathematics. Contribution: H. Hancock published the first textbook on optimization, Theory of Maxima and Minima.
Leonid Kantorovich (Soviet Russia). Field: Mathematics. Contribution: in 1939 L. V. Kantorovich presented the LP model and an algorithm for solving it. In 1975 Kantorovich and T. C. Koopmans received the Nobel Memorial Prize in Economics for their contributions to the LP problem.

14 After World War II, optimization developed simultaneously with operations research. J. von Neumann is an important figure behind the development of operations research. The field of algorithmic research expanded as electronic computation developed.
John von Neumann (United States). Field: Mathematics and Computer Science. Contribution: among other things, he established the theory of duality for LP problems.
George Bernard Dantzig (America) [1914–2005]. Field: Mathematics, Operations Research, Computer Science, Economics and Statistics. Contribution: in 1947 G. B. Dantzig designed the "simplex method" for solving linear programming formulations of U.S. Air Force planning problems. The first international congress on optimization, the International Symposium on Mathematical Programming, was held in Chicago; 34 papers were presented there.

15 Linear programming has nothing to do with computer programming. The use of the word "programming" here means "choosing a course of action." Linear programming involves choosing a course of action when the mathematical model of the problem contains only linear functions.

16 Historical Perspective for LP
1928 – John von Neumann published the central theorem of game theory.
1936 – W. W. Leontief published "Quantitative Input and Output Relations in the Economic Systems of the US," a linear model without an objective function.
1939 – Kantorovich (Russia) actually formulated and solved an LP problem.
1941 – Hitchcock posed the transportation problem (a special LP).
1944 – Von Neumann and Morgenstern published Theory of Games and Economic Behavior.
WWII – Allied forces formulated and solved several LP problems related to the military.
A breakthrough occurred in 1947.

17 SCOOP
The US Air Force wanted to investigate the feasibility of applying mathematical techniques to military budgeting and planning. George Dantzig had proposed that the interrelations between the activities of a large organization can be viewed as an LP model and that the optimal program (solution) can be obtained by minimizing a (single) linear objective function. The Air Force initiated project SCOOP (Scientific Computing of Optimum Programs).
NOTE: SCOOP began in June 1947, and by the end of the same summer Dantzig and his associates had developed:
1) an initial mathematical model of the general linear programming problem;
2) a general method of solution called the simplex method.

18 Requirements of Linear Programming
In general, linear programming can be used for optimization problems if the following conditions are satisfied:
- There must be a well-defined objective function (profit, cost or quantities produced) which is to be either maximized or minimized and which can be expressed as a linear function of the decision variables.
- There must be restrictions on the amount or extent of attainment of the objective, and these restrictions must be capable of being expressed as linear equalities or inequalities in terms of the variables.
- There must be alternative courses of action. For example, a given product may be processed by two different machines, and the problem may be how much of the product to allocate to which machine.
- Another necessary requirement is that the decision variables should be interrelated and non-negative. The non-negativity condition shows that linear programming deals with real-life situations, for which negative quantities are generally illogical.
- The resources must be in limited supply. For example, if a firm starts producing a greater number of a particular product, it must make a smaller number of other products, as the total production capacity is limited.

19 Linear Programming (LP) Problem
If both the objective function and the constraints are linear, the problem is referred to as a linear programming problem. Linear functions are functions in which each variable appears in a separate term raised to the first power and multiplied by a constant (which could be 0). Linear constraints are linear functions that are restricted to be "less than or equal to", "equal to", or "greater than or equal to" a constant.

20 Guidelines for Model Formulation
- Understand the problem thoroughly.
- Describe the objective.
- Describe each constraint.
- Define the decision variables.
- Write the objective in terms of the decision variables.
- Write the constraints in terms of the decision variables.

21 Example 1: Iron Works, Inc. seeks to maximize profit by making two products from steel. It has just received this month's allocation of 19 pounds of steel. It takes 2 pounds of steel to make a unit of product 1 and 3 pounds of steel to make a unit of product 2. The physical plant has the capacity to make at most 6 units of product 1, and at most 8 units of total product (product 1 plus product 2). Product 1 has unit profit 5 and product 2 has unit profit 7. Formulate the linear program to maximize profit.
Answer: Here is a mathematical formulation of the objective. Let x1 and x2 denote this month's production levels of product 1 and product 2. Then
total monthly profit = (profit per unit of product 1) × (monthly production of product 1) + (profit per unit of product 2) × (monthly production of product 2) = 5x1 + 7x2.
Maximize total monthly profit: Max z = 5x1 + 7x2.

22 Here is a mathematical formulation of the constraints. The total amount of steel used during monthly production = (steel used per unit of product 1) × (monthly production of product 1) + (steel used per unit of product 2) × (monthly production of product 2) = 2x1 + 3x2.
That quantity must be less than or equal to the allocated 19 pounds of steel (the inequality ≤ in the constraint below assumes excess steel can be freely disposed of; if disposal is impossible, use equality =):
2x1 + 3x2 ≤ 19
The constraint that the physical plant has the capacity to make at most 6 units of product 1 is formulated as
x1 ≤ 6
The constraint that the physical plant has the capacity to make at most 8 units of total product (product 1 plus product 2) is
x1 + x2 ≤ 8

23 Adding the non-negativity of production completes the formulation:
Max z = 5x1 + 7x2 (objective function)
s.t. x1 ≤ 6
2x1 + 3x2 ≤ 19
x1 + x2 ≤ 8 ("regular" constraints)
x1 ≥ 0 and x2 ≥ 0 (non-negativity constraints)
"Max" means maximize, and "s.t." means subject to.

24 A linear programming problem can be solved by:
Graphical Method
- Corner-point solution method
- Isoprofit-line solution method
Simplex Method
- Simplex Method I (maximization, for constraints of type ≤)
- Simplex Method II (minimization or maximization, for constraints of type ≥ or mixed)

25 Some important mathematical items related to methods for solving an LPP
Hyperplane in R^n: The set of points x = [x1, x2, …, xn] satisfying the equation z = c1x1 + c2x2 + … + cnxn, where the ci are constants not all zero and z is a constant, is called a hyperplane, determined by the vector c = (c1, c2, …, cn) and the scalar z.
Half-spaces: In R^n, let c·x = z be a hyperplane; then the two sets of points K+ = {x | c·x ≥ z} and K− = {x | c·x ≤ z} are called the closed half-spaces determined by c·x = z.
Convex set: A subset S ⊆ R^n is said to be convex if, for each pair of points x, y in S, the line segment [x : y] is contained in S.
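The half-space definition can be illustrated numerically. A small sketch (my own, with an arbitrarily chosen c and z) checks that sampled convex combinations of points of a closed half-space K− stay in K−; this is a spot check, not a proof.

```python
# Numerical illustration that a closed half-space K = {x : c.x <= z} is convex:
# every convex combination of two points of K should again lie in K.
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

c, z = [2.0, -1.0, 3.0], 5.0          # arbitrary hyperplane c.x = z
in_K = lambda x: dot(c, x) <= z + 1e-12

random.seed(0)
ok = True
for _ in range(1000):
    # rejection-sample two points of K from the cube [-5, 5]^3
    pts = []
    while len(pts) < 2:
        p = [random.uniform(-5, 5) for _ in range(3)]
        if in_K(p):
            pts.append(p)
    t = random.random()
    comb = [t * a + (1 - t) * b for a, b in zip(*pts)]
    ok = ok and in_K(comb)
print(ok)  # True
```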

26 Some Basic Definitions
Objective function: The linear function z = c1x1 + c2x2 + … + cnxn which is to be minimized or maximized is called the objective function of the GLPP.
Constraints (linear constraints): linear functions that are restricted to be "less than or equal to", "equal to", or "greater than or equal to" a constant. Thus 2x1 + 3x2 ≤ 19 is a linear constraint, but 2x1 + 3x1x2 ≤ 19 is not.
Solution: Values of the decision variables xj (j = 1, 2, …, n) which satisfy the constraints of a GLPP are called a solution to that LPP.
Feasible solution: Any solution to a GLPP which also satisfies the non-negativity restrictions of the problem is called a feasible solution to that problem.
Optimum solution: Any feasible solution which optimizes (minimizes or maximizes) the objective function of a GLPP is called an optimum solution to that problem.

27 Example 1: Graphical Solution — First Constraint Graphed
[Figure: the vertical line x1 = 6 through (6, 0); the shaded region contains all feasible points for this constraint.]

28 Example 1: Graphical Solution — Second Constraint Graphed
[Figure: the line 2x1 + 3x2 = 19 through (0, 19/3) and (19/2, 0); the shaded region contains all feasible points for this constraint.]

29 Example 1: Graphical Solution — Third Constraint Graphed
[Figure: the line x1 + x2 = 8 through (0, 8) and (8, 0); the shaded region contains all feasible points for this constraint.]

30 Example 1: Graphical Solution — Combined-Constraint Graph Showing Feasible Region
[Figure: the lines x1 = 6, 2x1 + 3x2 = 19 and x1 + x2 = 8 together bound the feasible region.]

31 Example 1: Graphical Solution — Objective Function Line
[Figure: the objective function line 5x1 + 7x2 = 35 through (7, 0) and (0, 5).]

32 Example 1: Graphical Solution — Selected Objective Function Lines
[Figure: selected parallel objective function lines, including 5x1 + 7x2 = 39 and 5x1 + 7x2 = 42, moving outward across the feasible region.]

33 Example 1: Graphical Solution — Optimal Solution
[Figure: the maximum objective function line 5x1 + 7x2 = 46 touches the feasible (convex) region at the optimal solution (x1 = 5, x2 = 3).]
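The graphical result can be double-checked in code. A small sketch (my own illustration, following the corner-point idea): enumerate the intersections of the constraint boundary lines, keep the feasible ones, and take the best value of z = 5x1 + 7x2.

```python
# Corner-point check of Example 1: max z = 5*x1 + 7*x2
# s.t. x1 <= 6, 2*x1 + 3*x2 <= 19, x1 + x2 <= 8, x1, x2 >= 0.
from itertools import combinations

# boundary lines as a1*x1 + a2*x2 = b (last two are the axes x1 = 0, x2 = 0)
lines = [(1, 0, 6), (2, 3, 19), (1, 1, 8), (1, 0, 0), (0, 1, 0)]

def feasible(x1, x2, eps=1e-9):
    return (x1 <= 6 + eps and 2*x1 + 3*x2 <= 19 + eps
            and x1 + x2 <= 8 + eps and x1 >= -eps and x2 >= -eps)

best = None
for (a1, a2, b1), (c1, c2, b2) in combinations(lines, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        continue  # parallel boundaries: no corner point
    x1 = (b1 * c2 - a2 * b2) / det
    x2 = (a1 * b2 - b1 * c1) / det
    if feasible(x1, x2):
        z = 5 * x1 + 7 * x2
        if best is None or z > best[0]:
            best = (z, x1, x2)

print(best)  # (46.0, 5.0, 3.0)
```

This reproduces the slide's optimum: z = 46 at (x1, x2) = (5, 3).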

34 Problem. A firm manufactures pain-relieving pills in two sizes, A and B. Size A contains 4 grains of element a, 7 grains of element b and 2 grains of element c; size B contains 2 grains of element a, 10 grains of element b and 8 grains of element c. It is found by users that it requires at least 12 grains of element a, 74 grains of element b and 24 grains of element c to provide immediate relief. Determine the least number of pills a patient should take to get immediate relief. Formulate the problem as a standard LPP.


36 Graphical Solution Algorithm
- Formulate the given linear programming problem in mathematical format.
- Construct the graph for the problem so formulated.
- Identify the convex region (solution space) and the vertices of the convex region.
- Evaluate the values of the objective function at these vertices of the convex region.
- Determine the vertex at which the objective function attains its maximum or minimum (optimum).
- Interpret the optimum solution so obtained.

37 Slack and Surplus Variables
A linear program in which all the variables are non-negative and all the constraints are equalities is said to be in standard form. Standard form is attained by adding slack variables to "less than or equal to" constraints and by subtracting surplus variables from "greater than or equal to" constraints. Slack and surplus variables represent the difference between the left and right sides of the constraints. Slack and surplus variables have objective function coefficients equal to 0.

38 Example 1 in Standard Form — Slack Variables (for ≤ constraints)
Max Z = 5x1 + 7x2 + 0s1 + 0s2 + 0s3
s.t. x1 + s1 = 6
2x1 + 3x2 + s2 = 19
x1 + x2 + s3 = 8
x1, x2, s1, s2, s3 ≥ 0
where s1, s2 and s3 are slack variables.
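The conversion to standard form is mechanical. A small sketch (my own illustration): appending one identity column per "≤" constraint of Example 1 turns Ax ≤ b into [A | I][x; s] = b.

```python
# Adding slack variables to the "<=" constraints of Example 1:
# one identity column per constraint row.
A = [[1, 0],   # x1           <= 6
     [2, 3],   # 2x1 + 3x2    <= 19
     [1, 1]]   # x1 + x2      <= 8
b = [6, 19, 8]

m = len(A)
standard = [row + [1 if i == j else 0 for j in range(m)] for i, row in enumerate(A)]
for row in standard:
    print(row)
# [1, 0, 1, 0, 0]
# [2, 3, 0, 1, 0]
# [1, 1, 0, 0, 1]
```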

39 Surplus Variables (for ≥ constraints)
Write down the following LPP in standard form:
Minimize Z = 3x1 + 2x2 + 5x3
Subject to the constraints:
2x1 + 3x2 ≥ 3
x1 + 2x2 + 3x3 ≥ 10
2x1 + 3x3 ≥ 8
x1, x2, x3 ≥ 0
The standard-form LPP is:
Minimize Z = 3x1 + 2x2 + 5x3 + 0·s1 + 0·s2 + 0·s3
Subject to the constraints:
2x1 + 3x2 − s1 = 3
x1 + 2x2 + 3x3 − s2 = 10
2x1 + 3x3 − s3 = 8
x1, x2, x3, s1, s2, s3 ≥ 0
where s1, s2 and s3 are surplus variables.

40 Some important definitions
Standard form of an LPP: Let x, c ∈ R^n and let z = c^T x be a linear function on R^n. Let A be an m × n real matrix of rank m. Then the problem of determining x so as to maximize z = c^T x, subject to the constraints Ax = b, x ≥ 0, where b is an m × 1 real matrix, is said to be a linear programming problem written in its standard form.
Basic solution: Given a system of m simultaneous linear equations in n unknowns, Ax = b, x ∈ R^n, where A is an m × n real matrix of rank m (m < n), a solution obtained by setting n − m of the variables equal to zero and solving for the remaining m variables (provided the corresponding m columns of A are linearly independent) is called a basic solution; the m variables solved for are called the basic variables.

41 Degenerate solution: A basic solution to the system Ax = b is called degenerate if one or more of the basic variables vanish.
Basic feasible solution: A feasible solution to the LPP which is also a basic solution to the problem is called a basic feasible solution to the LPP.
Associated cost vector: Let xB be a basic feasible solution to the LPP: maximize z = c^T x, subject to the constraints Ax = b, x ≥ 0. Then the vector cB = [cB1, cB2, …, cBm], where the cBi are the components of c associated with the basic variables, is called the cost vector associated with the basic feasible solution xB.
Optimum basic feasible solution: A basic feasible solution xB to the LPP maximize z = c^T x, subject to the constraints Ax = b, x ≥ 0, is called an optimum basic feasible solution if z0 = cB^T xB ≥ z*, where z* is the value of the objective function for any feasible solution.

42 Example: Find the basic feasible solutions to the system of equations:
2x1 + x2 − x3 = 2
3x1 + 2x2 + x3 = 3
Solution. The given system can be written as Ax = b, where A = [2 1 −1; 3 2 1], x = [x1, x2, x3]^T and b = [2, 3]^T. Since the rank of A is 2, the maximum number of linearly independent columns of A is 2. The 2 × 2 submatrices of A are B1 = [2 1; 3 2], B2 = [1 −1; 2 1] and B3 = [2 −1; 3 1]; the variables not associated with the columns of these submatrices are, respectively, x3, x1 and x2.
Considering B1, a basic solution to the given system is obtained by setting x3 = 0 and solving 2x1 + x2 = 2, 3x1 + 2x2 = 3. Thus one basic solution is: (basic) x1 = 1, x2 = 0; (non-basic) x3 = 0. Similarly, the other two solutions are: (basic) x2 = 5/3, x3 = −1/3; (non-basic) x1 = 0, and (basic) x1 = 1, x3 = 0; (non-basic) x2 = 0.
In two of these basic solutions at least one basic variable is zero; hence two of the basic solutions are degenerate. The solution [0, 5/3, −1/3] is not a feasible solution (x3 < 0). The two basic solutions [1, 0, 0] and [1, 0, 0] are basic feasible solutions.
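The three basic solutions above can be re-derived with exact arithmetic. A small sketch (my own; the helper name solve2 is hypothetical, not from the slides) applies Cramer's rule to each 2 × 2 submatrix:

```python
# Basic solutions of 2x1 + x2 - x3 = 2, 3x1 + 2x2 + x3 = 3,
# obtained by dropping one variable at a time and solving the 2x2 system.
from fractions import Fraction as F

def solve2(a, b, c, d, e, f):
    """Solve [a b; c d] [u, v]^T = [e, f]^T exactly by Cramer's rule."""
    det = a * d - b * c
    return F(e * d - b * f, det), F(a * f - e * c, det)

# drop x3 (columns 1 and 2 of A)
print(solve2(2, 1, 3, 2, 2, 3))   # (1, 0)        -> x = [1, 0, 0]
# drop x1 (columns 2 and 3 of A)
print(solve2(1, -1, 2, 1, 2, 3))  # (5/3, -1/3)   -> x = [0, 5/3, -1/3]
# drop x2 (columns 1 and 3 of A)
print(solve2(2, -1, 3, 1, 2, 3))  # (1, 0)        -> x = [1, 0, 0]
```

The second solution has a negative component, confirming that it is not feasible, while the other two coincide at the degenerate basic feasible solution [1, 0, 0].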

43 Some Important Theorems for the Simplex Method (without proof)
Theorem 1: The set of feasible solutions to an LPP is a convex set.
Theorem 2 [Fundamental theorem of LP]: If the feasible region of an LPP is a convex polyhedron, then there exists an optimal solution to the LPP and at least one basic feasible solution must be optimal.
Theorem 3 [Reduction of a feasible solution to a basic feasible solution]: If an LPP has a feasible solution, then it also has a basic feasible solution.
Theorem 4 [Replacement of a basic vector]: Let an LPP have a basic feasible solution. If we drop one of the basic vectors and introduce a non-basic vector into the basic set, then the new solution obtained is also a basic feasible solution, provided these vectors are suitably selected.

44 Sample of a Simplex Table
The table has one row per basic variable. The leftmost columns hold the cost-vector entries cBi, the basic variables xBi, and the values of the xBi; the remaining columns, headed x1, x2, …, xn with costs c1, c2, …, cn, hold the coefficient matrix A (entries yij). The last row is the index row of net evaluations zj − cj.
zj = Σ_{i=1}^{m} cBi·yij is called the evaluation, and the number zj − cj is called the net evaluation corresponding to each column.

45 Theorem 5 [Conditions of optimality]: A sufficient condition for a basic feasible solution to an LPP to be an optimum (maximum) is that zj − cj ≥ 0 for all j for which the column vector aj ∈ A is not in the basis B.
Theorem 6 [Unbounded solution]: Let there exist a basic feasible solution to a given LPP. If, for at least one j for which yij ≤ 0 (i = 1, 2, …, m), zj − cj is negative, then there does not exist any optimum solution to this LPP.
Example: Solve the following LPP by the simplex method: Max z = x1 + 2x2, s.t. x1 − x2 ≤ 4, x1 − 5x2 ≤ 8, x1, x2 ≥ 0.
Solution: The standard form of the LPP is:
Objective function: Max z = x1 + 2x2 + 0·x3 + 0·x4
Subject to the constraints:
x1 − x2 + x3 + 0·x4 = 4
x1 − 5x2 + 0·x3 + x4 = 8
x1, x2, x3, x4 ≥ 0
Initial simplex table: cj = (1, 2, 0, 0); basic variables x3 = 4 and x4 = 8; zj = (0, 0, 0, 0); index row zj − cj = (−1, −2, 0, 0).
Here z2 − c2 = −2 is negative while every element of the x2 column (−1 and −5) is non-positive, so by Theorem 6 this LPP has no optimum solution (it is unbounded).
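Theorem 6 can be illustrated numerically for this example. A quick sketch (my own, not from the slides): along the feasible ray x1 = x2 = t, both constraints stay satisfied while z = 3t grows without bound.

```python
# Unboundedness of Max z = x1 + 2*x2, s.t. x1 - x2 <= 4, x1 - 5*x2 <= 8, x >= 0:
# the ray (t, t), t >= 0, is feasible and z = 3t -> infinity.

def feasible(x1, x2):
    return x1 - x2 <= 4 and x1 - 5 * x2 <= 8 and x1 >= 0 and x2 >= 0

zs = []
for t in [0, 10, 100, 1000]:
    assert feasible(t, t)       # every point on the ray is feasible
    zs.append(t + 2 * t)        # z = x1 + 2*x2 = 3t
print(zs)  # [0, 30, 300, 3000]
```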

46 FLOW CHART: Simplex Algorithm for Maximization
1) Reformulate the given LPP in standard form.
2) Obtain an initial basic feasible solution of the problem.
3) Compute the net evaluations and set up the initial table.
4) Examine the index row of net evaluations. Is there any negative net evaluation?
- NO: the optimum solution has been attained; STOP.
- YES: Are all elements of such a column non-positive while the corresponding net evaluation is negative? If YES, there exists an unbounded solution to the problem; STOP. If NO, continue.
5) Choose the most negative net evaluation; the corresponding column is the KEY column.
6) Select the positive elements of the key column and divide the corresponding values of the current basic variables by them.
7) Choose the smallest ratio; the row containing the smallest ratio is the KEY row.
8) Identify the pivot/key element, where the key column and the key row meet.
9) Update the simplex table by the appropriate operation (pivoting), and return to step 4.
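The flow chart above can be sketched as a compact tableau routine. This is an illustrative sketch under simplifying assumptions (all "≤" constraints with b ≥ 0, dense Python lists, no degeneracy or anticycling handling), not production code; the function name simplex_max is my own.

```python
# Minimal tableau simplex for: max c.x  s.t.  A x <= b, x >= 0, with b >= 0.

def simplex_max(c, A, b):
    m, n = len(A), len(c)
    # tableau rows [A | I | b]; objective row starts as -c (zj - cj form)
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]] for i in range(m)]
    z = [-ci for ci in c] + [0.0] * (m + 1)
    basis = [n + i for i in range(m)]          # slacks start in the basis
    while True:
        col = min(range(n + m), key=lambda j: z[j])   # key column
        if z[col] >= -1e-9:
            break                                     # all net evaluations >= 0: optimal
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("unbounded LP")          # Theorem 6 case
        _, row = min(ratios)                          # key row (smallest ratio)
        piv = T[row][col]                             # pivot/key element
        T[row] = [v / piv for v in T[row]]
        for i in range(m):                            # eliminate key column elsewhere
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [a - f * p for a, p in zip(T[i], T[row])]
        f = z[col]
        z = [a - f * p for a, p in zip(z, T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, sum(ci * xi for ci, xi in zip(c, x))

# Example 1 from the earlier slides: expected optimum x1 = 5, x2 = 3, z = 46.
x, zval = simplex_max([5, 7], [[1, 0], [2, 3], [1, 1]], [6, 19, 8])
print(x, zval)
```

On Example 1 the routine reproduces the graphical answer, x ≈ (5, 3) with z ≈ 46.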

47 Example 2: A firm manufactures 3 products, A, B and C. The profits are $3, $2 and $4 per unit, respectively. The firm has 2 machines, G and H, with 2000 and 2500 machine-minutes available, respectively. The required processing times in minutes per unit are: machine G — A: 4, B: 3, C: 5; machine H — A: 2, B: 2, C: 4. How many units of each product should be produced in order to gain the maximum profit?

48 Solution: Define the objective: maximize profit. Define the decision variables: x1 = number of products of type A, x2 = number of products of type B, x3 = number of products of type C.
Objective function: Max Z = 3x1 + 2x2 + 4x3
Subject to the constraints:
4x1 + 3x2 + 5x3 ≤ 2000
2x1 + 2x2 + 4x3 ≤ 2500
x1, x2, x3 ≥ 0
The standard form of the LPP is:
Objective function: Max Z = 3x1 + 2x2 + 4x3 + 0·x4 + 0·x5
Subject to the constraints:
4x1 + 3x2 + 5x3 + x4 + 0·x5 = 2000
2x1 + 2x2 + 4x3 + 0·x4 + x5 = 2500
x1, x2, x3, x4, x5 ≥ 0
An initial basic feasible solution is x1 = x2 = x3 = 0, giving x4 = 2000 and x5 = 2500.

49 Initial Simplex Table
cj = (3, 2, 4, 0, 0). Basis: x4 = 2000 (cB = 0) and x5 = 2500 (cB = 0). Minimum ratios for the x3 column: 2000/5 = 400 and 2500/4 = 625.
zj row: (0, 0, 0, 0, 0); index row (zj − cj): (−3, −2, −4, 0, 0).
Since the index row contains negative elements, we have not yet attained the optimum solution; the current solution can be further improved. The x3 column is the key column (most negative net evaluation, −4), the x4 row is the key row (smallest ratio, 400), and 5 is the pivot/key element.

50 Algorithm for Iteration
Step 1: Identify the KEY column, the KEY row and the leading (pivot/key) element from the previous table.
Step 2: Determine the departing variable: the basic variable corresponding to the key row leaves the basis.
Step 3: Determine the entering variable: the non-basic variable corresponding to the key column enters the basis in the next table.
Step 4: Compute the values of the key row for the new table: simply divide every number in the key row of the previous table by the pivot element [ŷkj = ykj / ykr].
Step 5: Compute the values of each remaining row for the new table, using the formula: new row number = (number in old row) − (corresponding number in key row) × (fixed ratio), i.e. ŷij = yij − (yir / ykr)·ykj, where fixed ratio = old number in the key column / key element.
Step 6: Put the new values in the new simplex table (first iteration).
Step 7: Test for optimality and decide after examining the index row.

51 Simplex Table 1 (First Iteration)
Key row (divided by the pivot 5): x3 enters the basis with value 2000/5 = 400; its row becomes 4/5, 3/5, 5/5 = 1, 1/5, 0.
Remaining row (fixed ratio = 4/5): x5 = 2500 − 2000 × 4/5 = 900; its entries are 2 − 4 × 4/5 = −6/5, 2 − 3 × 4/5 = −2/5, 4 − 5 × 4/5 = 0, 0 − 1 × 4/5 = −4/5, 1 − 0 × 4/5 = 1.
zj row: (16/5, 12/5, 4, 4/5, 0); index row (zj − cj): (1/5, 2/5, 0, 4/5, 0).
We observe that all values in the index row of the above table are non-negative; thus we have reached the optimum solution: x1 = 0, x2 = 0, x3 = 400, with maximum profit z = $1600.
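The tableau result can be sanity-checked by brute force. A small sketch (my own, not the slide's method): verify that (0, 0, 400) is feasible with profit 1600, and that a coarse grid search over the feasible region finds nothing better.

```python
# Check of Example 2: max 3*x1 + 2*x2 + 4*x3
# s.t. 4*x1 + 3*x2 + 5*x3 <= 2000, 2*x1 + 2*x2 + 4*x3 <= 2500, x >= 0.

def feasible(x1, x2, x3):
    return 4*x1 + 3*x2 + 5*x3 <= 2000 and 2*x1 + 2*x2 + 4*x3 <= 2500

def profit(x1, x2, x3):
    return 3*x1 + 2*x2 + 4*x3

assert feasible(0, 0, 400) and profit(0, 0, 400) == 1600

best, step = 0, 25
for x1 in range(0, 501, step):
    for x2 in range(0, 676, step):
        for x3 in range(0, 401, step):
            if feasible(x1, x2, x3):
                best = max(best, profit(x1, x2, x3))
print(best)  # 1600
```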

52 Simplex Method II (≥ constraints; or ≤ and ≥ constraints; or ≥ and = constraints; or ≥, ≤ and = constraints)
Recall the surplus-variable example:
Minimize Z = 3x1 + 2x2 + 5x3
Subject to the constraints:
2x1 + 3x2 ≥ 3
x1 + 2x2 + 3x3 ≥ 10
2x1 + 3x3 ≥ 8
x1, x2, x3 ≥ 0
The standard-form LPP is:
Minimize Z = 3x1 + 2x2 + 5x3 + 0·s1 + 0·s2 + 0·s3
Subject to the constraints:
2x1 + 3x2 − s1 = 3
x1 + 2x2 + 3x3 − s2 = 10
2x1 + 3x3 − s3 = 8
x1, x2, x3, s1, s2, s3 ≥ 0
where s1, s2 and s3 are surplus variables.

53 There are two procedures for solving such problems:
- Method of Penalties or Big M Method, developed by A. Charnes
- Two-Phase Method, developed by Dantzig, Orden and Wolfe
Method of Penalties (Big M Method): The standard-form LPP becomes
Minimize Z = 3x1 + 2x2 + 5x3 + 0·s1 + 0·s2 + 0·s3 + MA1 + MA2 + MA3
Subject to the constraints:
2x1 + 3x2 − s1 + 0·s2 + 0·s3 + A1 + 0·A2 + 0·A3 = 3
x1 + 2x2 + 3x3 + 0·s1 − s2 + 0·s3 + 0·A1 + A2 + 0·A3 = 10
2x1 + 3x3 + 0·s1 + 0·s2 − s3 + 0·A1 + 0·A2 + A3 = 8
x1, x2, x3, s1, s2, s3, A1, A2, A3 ≥ 0
where s1, s2 and s3 are surplus variables and A1, A2 and A3 are artificial variables. To get an initial basic solution we put x1 = x2 = x3 = s1 = s2 = s3 = 0, which gives A1 = 3, A2 = 10 and A3 = 8 [basic].

54 Example 3: A small jewelry manufacturing company employs a person who is a highly skilled gem cutter, and it wishes to use this person at least 6 hours per day for this purpose. On the other hand, the polishing facilities can be used in any amount up to 8 hours per day. The company specializes in three kinds of semiprecious stones, P, Q and R. The relevant cutting, polishing and cost requirements are:
Cutting: P — 2 hr, Q — 1 hr, R — 1 hr. Polishing: P — 1 hr, Q — 1 hr, R — 2 hr. Cost per stone: P — $30, Q — $30, R — $10.
How many gemstones of each type should be processed each day to minimize the cost of the finished stones? What is the minimum cost? Solve this problem by the simplex method.

55 Solution: Let x1, x2 and x3 represent the number of type P, Q and R stones finished per day, respectively. The mathematical problem to solve is:
Minimize z = 30x1 + 30x2 + 10x3
Subject to the constraints:
2x1 + x2 + x3 ≥ 6
x1 + x2 + 2x3 ≤ 8
x1, x2, x3 ≥ 0
Introducing slack, surplus and artificial variables, the modified problem is:
Minimize z = 30x1 + 30x2 + 10x3 + 0·s1 + MA + 0·s2
Subject to the constraints:
2x1 + x2 + x3 − s1 + A = 6
x1 + x2 + 2x3 + s2 = 8
x1, x2, x3, s1, s2 and A ≥ 0
From the above system of equations, an initial basic feasible solution is A = 6 and s2 = 8, which can be displayed in the following table:

56 Simplex Tableaux I, II and III (Big M method, minimization)
Initial tableau: basis A = 6 (cost M) and s2 = 8 (cost 0); index row Cj − Zj: (30 − 2M, 30 − M, 10 − M, M, 0, 0). The x1 column has the most negative net evaluation, and the minimum ratio 6/2 = 3 removes A from the basis.
Second tableau: basis x1 = 3 and s2 = 5; x3 now enters (negative net evaluation), and the minimum ratio 5/(3/2) = 10/3 removes s2 from the basis.
Final tableau: basis x1 = 4/3 and x3 = 10/3; all net evaluations are now non-negative, so the solution is optimal.
Thus, the optimal solution for the problem is x1 = 4/3, x2 = 0, x3 = 10/3, with min z = 220/3. The minimum cost of $220/3 is achieved by finishing 4/3 units of P, 0 units of Q and 10/3 units of R per day.
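The reported optimum can be verified with exact rational arithmetic; a quick check (my own sketch, not part of the slides):

```python
# Exact check of the Big-M result for Example 3:
# x1 = 4/3, x2 = 0, x3 = 10/3 satisfies both constraints (with equality)
# and costs 220/3.
from fractions import Fraction as F

x1, x2, x3 = F(4, 3), F(0), F(10, 3)

assert 2*x1 + x2 + x3 >= 6    # cutting: at least 6 hours (binds: equals 6)
assert x1 + x2 + 2*x3 <= 8    # polishing: at most 8 hours (binds: equals 8)
cost = 30*x1 + 30*x2 + 10*x3
print(cost)  # 220/3
```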

57 Artificial Variables
In an LPP in which one or more constraints are of the "≥" or "=" type (after ensuring that all the b's are non-negative), we introduce another type of variable, known as artificial variables, in order to get the initial basic feasible solution.
What is the physical meaning of an artificial variable?
Ans: These variables are fictitious and have no physical meaning or economic significance. They are merely a device to get a starting basic feasible solution so that the simplex algorithm may be applied. These variables are required because in such problems the basis matrix is not obtained as an identity matrix in the starting simplex table.
Remark: By assigning a very large per-unit penalty to each of the artificial variables in the objective function, these variables can be removed from the simplex table as soon as they become non-basic. Such a penalty is designated −M for maximization problems and +M for minimization problems, where M > 0 is very large.

58
58 Procedure for the Big-M Method
Step 1: If the right-hand side of any constraint is negative, multiply that constraint by −1 to make it positive.
Step 2: Introduce a slack variable for each "≤" constraint, a surplus and an artificial variable for each "≥" constraint, and an artificial variable for each "=" constraint.
Step 3: For each artificial variable Ai, add −M·Ai to the objective function in case of maximization and +M·Ai in case of minimization. Use the same constant M for every artificial variable.
Step 4: Form the simplex tableau for the modified problem.
Step 5: Solve the modified problem by the simplex method, as described earlier.
Step 6: Relate the solution of the modified problem to the original problem: (i) if the modified problem has no solution, the original problem has no solution; (ii) if any artificial variable is non-zero in the solution of the modified problem even though all elements of the index row are non-negative, the original problem has no feasible solution.
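The steps above can be sketched in code. Below is a minimal dense-tableau simplex for maximization with a numeric penalty M, applied to the gemstone problem from the earlier example (minimization is handled by maximizing −z). This is an illustrative sketch under floating-point arithmetic, not production code, and the helper name `simplex_max` is my own:

```python
def simplex_max(c, A, b, basis):
    """Maximize c.x subject to A x = b, x >= 0, given an initial basis
    (list of column indices forming an identity sub-matrix in A)."""
    m, n = len(A), len(c)
    T = [A[i][:] + [b[i]] for i in range(m)]          # tableau [A | b]
    while True:
        cb = [c[j] for j in basis]
        # reduced costs c_j - z_j
        red = [c[j] - sum(cb[i] * T[i][j] for i in range(m)) for j in range(n)]
        jp = max(range(n), key=lambda j: red[j])
        if red[jp] <= 1e-9:                           # all c_j - z_j <= 0: optimal
            break
        rows = [(T[i][n] / T[i][jp], i) for i in range(m) if T[i][jp] > 1e-9]
        if not rows:
            raise ValueError("unbounded")
        _, ip = min(rows)                             # minimum-ratio rule
        piv = T[ip][jp]
        T[ip] = [v / piv for v in T[ip]]              # pivot on (ip, jp)
        for i in range(m):
            if i != ip:
                f = T[i][jp]
                T[i] = [a - f * p for a, p in zip(T[i], T[ip])]
        basis[ip] = jp
    x = [0.0] * n
    for i in range(m):
        x[basis[i]] = T[i][n]
    return x, sum(c[j] * x[j] for j in range(n))

# Gemstone problem in equation form; columns: x1, x2, x3, s1, s2, A.
M = 1e6                                               # the "big M" penalty
c = [-30.0, -30.0, -10.0, 0.0, 0.0, -M]               # maximize -z
A = [[2.0, 1.0, 1.0, -1.0, 0.0, 1.0],
     [1.0, 1.0, 2.0,  0.0, 1.0, 0.0]]
b = [6.0, 8.0]
x, neg_z = simplex_max(c, A, b, basis=[5, 4])         # start with A, s2 basic
print(x[:3], round(-neg_z, 4))   # x1 ~ 4/3, x2 = 0, x3 ~ 10/3, min z ~ 73.3333
```

Using a finite numeric M (here 10^6) instead of a symbolic one is a common shortcut; it works as long as M dwarfs every legitimate cost in the problem.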

59
59 Two-Phase Simplex Method
The two-phase simplex method is an alternative procedure for solving LPPs involving artificial variables.
Phase 1: Test the feasibility (consistency) of the given problem. If the coefficient matrix A contains a unit sub-matrix Im, where m is the rank of A, it can serve as an initial basis; proceed directly to Phase 2. Otherwise, consider the auxiliary LPP of determining (x, x_art) so as to maximize Z* = −(x_a1 + x_a2 + … + x_ap), subject to the constraints Ax + Ip·x_art = b; x ≥ 0, x_art ≥ 0. In this auxiliary LPP the artificial variables are assigned a cost of −1 each and all other variables a cost of 0. Clearly max Z* ≤ 0 (since all x_art ≥ 0), so this LPP cannot have an unbounded solution. Two cases now arise:
Case 1: If max Z* < 0, the original LPP has no feasible solution.
Case 2: If max Z* = 0, the original LPP has a feasible solution.

60
Phase 2: The final simplex table of Phase 1 is converted into the initial simplex table of Phase 2 by deleting the non-basic artificial columns and re-computing Z* and the net evaluations in accordance with the original objective function. Assign the actual costs to the variables in the objective function and a zero cost to every artificial variable that appears in the basis at zero level. This new objective function is then maximized subject to the given constraints by the simplex method.

61
Example 4: Use the two-phase simplex method to maximize z = 2x1 + x2 − x3
Subject to the constraints: 4x1 + 6x2 + 3x3 ≤ 8, 3x1 − 6x2 − 4x3 ≤ 1, 2x1 + 3x2 − 5x3 ≥ 4, x1, x2, x3 ≥ 0.
Solution: The standard form of the LPP is: 4x1 + 6x2 + 3x3 + x4 = 8, 3x1 − 6x2 − 4x3 + x5 = 1, 2x1 + 3x2 − 5x3 − x6 + A = 4, where x4 and x5 are slack variables, x6 is a surplus variable and A is an artificial variable, and x1, x2, x3, x4, x5, x6, A ≥ 0. The initial basic feasible solution is xB = [8, 1, 4], i.e. x4 = 8, x5 = 1, A = 4.
Phase 1: The objective function of the auxiliary LPP is Z* = Σ 0·xj − A.

62
Initial Simplex Table (auxiliary costs: cj = 0 for x1, …, x6 and −1 for A):
  CB   Basis  Soln |   x1    x2     x3     x4    x5    x6     A | Min ratio
   0    x4      8  |    4     6*     3      1     0     0     0 |  8/6 = 4/3  <- x4 leaves
   0    x5      1  |    3    -6     -4      0     1     0     0 |  -
  -1     A      4  |    2     3     -5      0     0    -1     1 |  4/3
     zj-cj         |   -2    -3      5      0     0     1     0    (x2 enters)

First iteration: introduce x2 and drop x4.
  CB   Basis  Soln |   x1    x2     x3     x4    x5    x6     A
   0    x2    4/3  |  2/3     1    1/2    1/6     0     0     0
   0    x5      9  |    7     0     -1      1     1     0     0
  -1     A      0  |    0     0  -13/2   -1/2     0    -1     1
     zj-cj         |    0     0   13/2    1/2     0     1     0

63
Since all numbers in the index row are non-negative, an optimum solution of the auxiliary LPP has been attained: max Z* = 0, and the artificial vector A appears in the basis at zero level. Thus Phase 1 has found a starting BFS of the given LPP; Phase 2 will start from this BFS and move towards optimality.
Phase 2: We consider the actual costs associated with the original variables and assign a cost of zero to the artificial variable A, which appeared at zero level in Phase 1. The new objective function is therefore Z = 2x1 + x2 − x3 + 0·x4 + 0·x5 + 0·x6 + 0·A. We maximize this new objective function subject to the given constraints by the simplex method. The optimum basic feasible solution thus attained, if any, is an optimum basic feasible solution of the original LPP.

Starting Table (cj = 2, 1, −1, 0, 0, 0, 0):
  CB   Basis  Soln |    x1    x2     x3     x4    x5    x6     A
   1    x2    4/3  |   2/3     1    1/2    1/6     0     0     0
   0    x5      9  |    7*     0     -1      1     1     0     0
   0     A      0  |     0     0  -13/2   -1/2     0    -1     1
     zj-cj  Z=4/3  |  -4/3     0    3/2    1/6     0     0     0    (x1 enters)

64
First iteration: introduce x1 and drop x5.
  CB   Basis   Soln  |   x1    x2     x3     x4     x5    x6     A
   1    x2    10/21  |    0     1  25/42   1/14  -2/21     0     0
   2    x1     9/7   |    1     0   -1/7    1/7    1/7     0     0
   0     A      0    |    0     0  -13/2   -1/2     0     -1     1
     zj-cj  Z=64/21  |    0     0  55/42   5/14   4/21     0     0

We observe from the above table that all numbers in the index row are non-negative, so an optimum basic feasible solution has been attained. Hence an optimum basic feasible solution of the given LPP is x1 = 9/7, x2 = 10/21, x3 = 0, with max Z = 64/21.
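Substituting the Phase 2 optimum back into the original constraints of Example 4 confirms feasibility and the objective value (a quick check in Python with exact fractions):

```python
from fractions import Fraction as F

# Optimum from the final Phase 2 table: x1 = 9/7, x2 = 10/21, x3 = 0.
x1, x2, x3 = F(9, 7), F(10, 21), F(0)

assert 4*x1 + 6*x2 + 3*x3 <= 8      # = 8, binding
assert 3*x1 - 6*x2 - 4*x3 <= 1      # = 1, binding
assert 2*x1 + 3*x2 - 5*x3 >= 4      # = 4, binding

z = 2*x1 + x2 - x3
assert z == F(64, 21)               # max Z = 64/21
print(z)                            # 64/21
```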

65
Duality
The term duality implies that every linear programming problem, whether of maximization or minimization, has associated with it another linear programming problem based on the same data. The original (given) problem is called the primal problem, while the other is called its dual problem. Given the primal problem (in standard form)
Maximize z = cᵀx
Subject to Ax ≤ b, x ≥ 0,
the dual problem is the LPP
Minimize z* = bᵀy
Subject to Aᵀy ≥ c, y ≥ 0.

66
66 We thus note the following:
1. In the dual there are as many (decision) variables as there are constraints in the primal. We usually say yi is the dual variable associated with the ith constraint of the primal.
2. There are as many constraints in the dual as there are variables in the primal.
3. If the primal is a maximization then the dual is a minimization and all dual constraints are "≥". If the primal is a minimization then the dual is a maximization and all dual constraints are "≤".
4. In the primal all variables are ≥ 0; in the dual of an equality-constrained primal the variables are unrestricted in sign, while in the symmetric form above they are ≥ 0.
5. The objective function coefficients cj of the primal are the RHS constants of the dual constraints.
6. The RHS constants bi of the primal constraints are the objective function coefficients of the dual.
7. The coefficient matrix of the constraints of the dual is the transpose of the coefficient matrix of the constraints of the primal.

67
67 Example 5: Obtain the dual of the primal problem: Minimize z = 16x1 + 9x2 + 21x3, subject to the constraints x1 + x2 + 3x3 ≥ 16, 2x1 + x2 + x3 ≥ 12, x1, x2, x3 ≥ 0.
Solution:
Step 1. Form the matrix A of constraint coefficients:
A = [ 1  1  3
      2  1  1 ]
Step 2. Form the matrix B, the transpose of A:
B = Aᵀ = [ 1  2
           1  1
           3  1 ]
Step 3. State the dual problem:
Maximize z* = 16y1 + 12y2
Subject to the constraints: y1 + 2y2 ≤ 16, y1 + y2 ≤ 9, 3y1 + y2 ≤ 21, y1, y2 ≥ 0.
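The three steps above are purely mechanical, so they are easy to express in code (a small sketch; the variable names are my own):

```python
# Primal data of Example 5: min c.x  s.t.  A x >= b,  x >= 0.
c = [16, 9, 21]                     # primal costs -> dual constraint RHS
A = [[1, 1, 3],
     [2, 1, 1]]                     # primal constraint matrix
b = [16, 12]                        # primal RHS   -> dual objective coefficients

B = [list(col) for col in zip(*A)]  # Step 2: B = A transposed
# Step 3: the dual is  max z* = b.y  s.t.  B y <= c,  y >= 0.
for row, rhs in zip(B, c):
    print(row, "<=", rhs)
# [1, 2] <= 16
# [1, 1] <= 9
# [3, 1] <= 21
```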

68
68 Some Important Theorems on Duality (without proof)
Theorem: The dual of the dual is the primal.
Theorem [Basic Duality Theorem]: Let a primal problem be Maximize f(x) = cᵀx subject to Ax ≤ b, x ≥ 0, with x, c ∈ Rⁿ, and the associated dual be Minimize g(w) = bᵀw subject to Aᵀw ≥ c, w ≥ 0, with w, b ∈ Rᵐ. If x0 (w0) is an optimum solution to the primal (dual), then there exists a feasible solution w0 (x0) to the dual (primal) such that cᵀx0 = bᵀw0.
Theorem [Fundamental Theorem of Duality]: Let a primal problem be Maximize f(x) = cᵀx subject to Ax ≤ b, x ≥ 0, with x, c ∈ Rⁿ, and its dual be Minimize g(w) = bᵀw subject to Aᵀw ≥ c, w ≥ 0, with w, b ∈ Rᵐ. A necessary and sufficient condition for a feasible solution x0 (w0) of the primal (dual) problem to be optimal is that there exists a feasible solution w0 (x0) of the dual (primal) problem such that cᵀx0 = bᵀw0. The feasible solution w0 (x0) is then itself an optimum solution of the dual (primal) problem; that is, max f(x) = min g(w) holds for all such pairs x0 and w0.
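The fundamental theorem gives a practical optimality certificate: a feasible primal solution and a feasible dual solution with equal objective values are both optimal. For Example 5 this can be verified with the pair x = (0, 10, 2), y = (6, 3); these values are worked out by hand here for illustration and do not appear on the slides:

```python
# Example 5 data: primal min 16x1+9x2+21x3, dual max 16y1+12y2.
x = (0, 10, 2)   # candidate primal solution (hand-computed, illustrative)
y = (6, 3)       # candidate dual solution (hand-computed, illustrative)

# Primal feasibility: x1+x2+3x3 >= 16, 2x1+x2+x3 >= 12, x >= 0.
assert x[0] + x[1] + 3*x[2] >= 16
assert 2*x[0] + x[1] + x[2] >= 12

# Dual feasibility: y1+2y2 <= 16, y1+y2 <= 9, 3y1+y2 <= 21, y >= 0.
assert y[0] + 2*y[1] <= 16
assert y[0] + y[1] <= 9
assert 3*y[0] + y[1] <= 21

cx = 16*x[0] + 9*x[1] + 21*x[2]   # c^T x
by = 16*y[0] + 12*y[1]            # b^T y
assert cx == by == 132            # equal objective values -> both optimal
print(cx, by)                     # 132 132
```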

69
Example 6: A firm makes two products A and B. Each product requires processing time on each of two machines:

              Product A   Product B   Available (hours)
Machine M1        6           4              60
Machine M2        1           2              22

The total time available is 60 hours on machine M1 and 22 hours on machine M2. Products A and B contribute $3 and $4 per unit, respectively. Determine the optimum product mix in order to gain the maximum profit. Write the dual of this problem and give its economic interpretation.
Solution: Mathematical formulation. The primal and its dual linear programming problems are:
Primal problem: Maximize z = 3x1 + 4x2, subject to the constraints 6x1 + 4x2 ≤ 60, x1 + 2x2 ≤ 22, x1, x2 ≥ 0, where x1 = number of units of product A and x2 = number of units of product B.
Dual problem: Minimize z* = 60w1 + 22w2, subject to the constraints 6w1 + w2 ≥ 3, 4w1 + 2w2 ≥ 4, w1, w2 ≥ 0, where w1 = cost of one hour on machine M1 and w2 = cost of one hour on machine M2.

70
70 Solution to the primal problem
The optimum solution of the primal problem is easily obtained by the simplex method. The optimum simplex table is given below (x3 and x4 are the slack variables of the two constraints):

                        cj:    3     4     0      0
  CB   Basis  Soln |    x1    x2    x3     x4
   3    x1      4  |     1     0   1/4   -1/2
   4    x2      9  |     0     1  -1/8    3/4
     zj-cj         |     0     0   1/4    3/2

Final iteration: the optimum solution is x1 = 4, x2 = 9, with maximum profit = $48. Using duality, the optimum solution of the dual problem can be read directly from this optimum simplex table: w1 = $0.25 per hour, w2 = $1.50 per hour, with minimum cost = $48.
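Reading the dual solution off the primal tableau can be checked numerically: the basic duality theorem requires cᵀx0 = bᵀw0 (a quick check with exact fractions):

```python
from fractions import Fraction as F

x = (4, 9)                      # primal optimum (units of A and B)
w = (F(1, 4), F(3, 2))          # dual optimum ($0.25/h on M1, $1.50/h on M2)

primal_profit = 3*x[0] + 4*x[1]   # c^T x0
dual_cost = 60*w[0] + 22*w[1]     # b^T w0
assert primal_profit == dual_cost == 48
print(primal_profit, dual_cost)   # 48 48
```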

71
71 Economic interpretation of the dual: As mentioned above, shadow prices are the opportunity costs that indicate the potential profit lost by not having an additional unit of the respective right-hand side (resource), assuming that all right-hand side values are used optimally. Thus w1 = 0.25 and w2 = 1.50 mean that an additional processing hour on machine M1 or M2 will increase the profit by $0.25 and $1.50, respectively. If we increase the total available hours on machine M1 from 60 to 61, the new set of constraints becomes 6x1 + 4x2 ≤ 61 and x1 + 2x2 ≤ 22. Solving the primal problem with this new set of constraints, the optimum solution becomes x1 = 4.25 and x2 = 8.875, with maximum z = $48.25, an increase of exactly $0.25 = w1.
Shadow price: A shadow price is the maximum price that management is willing to pay for an extra unit of a given limited resource.
Dual simplex algorithm: Since any LPP can be solved by the simplex method, the method is applicable to the primal as well as to its dual.
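The shadow-price calculation can be reproduced by re-solving the 2×2 system of binding constraints with the enlarged right-hand side (a sketch using Cramer's rule; the helper name `solve2` is my own, and we rely on both constraints being binding at the optimum, as the tableau shows):

```python
from fractions import Fraction as F

def solve2(a11, a12, a21, a22, b1, b2):
    """Cramer's rule for a 2x2 linear system."""
    det = a11*a22 - a12*a21
    return F(b1*a22 - b2*a12, det), F(a11*b2 - a21*b1, det)

def profit(x1, x2):
    return 3*x1 + 4*x2

# Binding constraints at the optimum: 6x1 + 4x2 = b1, x1 + 2x2 = 22.
x1, x2 = solve2(6, 4, 1, 2, 60, 22)
assert (x1, x2) == (4, 9) and profit(x1, x2) == 48

# One extra hour on M1 (b1 = 61): profit rises by the shadow price w1 = 0.25.
x1, x2 = solve2(6, 4, 1, 2, 61, 22)
print(x1, x2, profit(x1, x2))   # 17/4 71/8 193/4  (i.e. 4.25, 8.875, $48.25)
```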

72
References
Dantzig, G. B. (1963). Linear Programming and Extensions. Princeton University Press.
Dewdney, A. K. (1989). Linear Programming. In The New Turing Omnibus. Computer Science Press.
Kapoor, V. K. (1990). Operations Research. Sultan Chand & Sons, New Delhi, India.
Gupta, P. K. and Mohan, M. (2001). Linear Programming and Theory of Games. Sultan Chand & Sons, New Delhi, India.

73
Questions/Queries? Thank you for your kind attention.
