Chapter 5. The Duality Theorem


Given an LP, we define another LP derived from the same data but with a different structure. This LP is called the dual problem. The main purpose of considering the dual is to obtain an upper bound (estimate) on the optimal objective value of the given LP (for a maximization problem) without solving it to optimality. The dual problem also provides optimality conditions for a solution $x^*$ of an LP and helps explain the behavior of the simplex method. It is a very important concept for understanding the properties of LPs and of the simplex method.

Preliminaries
Taking a nonnegative linear combination of inequality constraints: consider the two constraints
$x_1 + 2x_2 \le 3$ and $2x_1 + x_2 \le 4$ .... (1')
In vector notation: $a_1'x \le 3$, $a_2'x \le 4$, where $a_1 = (1, 2)'$, $a_2 = (2, 1)'$.
If we multiply both sides of the first constraint by a scalar $y_1 \ge 0$ and the second by $y_2 \ge 0$, and add the left-hand sides and right-hand sides respectively, we get
$(y_1 + 2y_2)x_1 + (2y_1 + y_2)x_2 \le 3y_1 + 4y_2$ ..... (2')
In vector notation, $(y_1 a_1' + y_2 a_2')x \le 3y_1 + 4y_2$.
Any vector that satisfies (1') also satisfies (2'), but the converse is not true. Moreover, the coefficient vector on the l.h.s. of (2') is a nonnegative linear combination of the coefficient vectors in (1').
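As a quick numerical illustration (not part of the original slides), the sketch below checks, for a few sample points and arbitrarily chosen nonnegative weights, that any point satisfying both original constraints also satisfies the combined inequality (2'); the specific points and weights are illustrative only.

```python
import numpy as np

# Original constraints: a1'x <= 3 and a2'x <= 4
a1, b1 = np.array([1.0, 2.0]), 3.0
a2, b2 = np.array([2.0, 1.0]), 4.0

def satisfies_combined(x, y1, y2):
    """Check (y1*a1 + y2*a2)'x <= 3*y1 + 4*y2 for nonnegative weights y1, y2."""
    lhs = (y1 * a1 + y2 * a2) @ x
    rhs = y1 * b1 + y2 * b2
    return lhs <= rhs + 1e-9

# Points satisfying both constraints (e.g., the vertex (5/3, 2/3)) must satisfy
# every nonnegative combination of them.
for x in [np.array([5/3, 2/3]), np.array([0.0, 0.0]), np.array([1.0, 1.0])]:
    assert a1 @ x <= b1 and a2 @ x <= b2          # feasible for (1')
    for (y1, y2) in [(1.0, 0.0), (0.0, 1.0), (2.5, 0.7)]:
        assert satisfies_combined(x, y1, y2)      # hence feasible for (2')
print("combined inequality holds for all sampled feasible points and weights")
```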

[Figure: the constraints $x_1 + 2x_2 \le 3$ and $2x_1 + x_2 \le 4$ in the $(x_1, x_2)$ plane, meeting at the point $(5/3, 2/3)$; the coefficient vectors $(1, 2)$ and $(2, 1)$; and the family of combined constraints $(y_1 a_1' + y_2 a_2')x \le 3y_1 + 4y_2$ for $y_1, y_2 \ge 0$.]

Getting the Dual Problem
ex) max $4x_1 + x_2 + 5x_3 + 3x_4$
s.t. $x_1 - x_2 - x_3 + 3x_4 \le 1$
$5x_1 + x_2 + 3x_3 + 8x_4 \le 55$
$-x_1 + 2x_2 + 3x_3 - 5x_4 \le 3$
$x_1, x_2, x_3, x_4 \ge 0$
Lower bound on the optimal value: consider feasible solutions
$(0, 0, 1, 0)$ $\Rightarrow$ $z^* \ge 5$
$(3, 0, 2, 0)$ $\Rightarrow$ $z^* \ge 22$
Upper bound: consider the inequality obtained by multiplying the 1st constraint by 0, the 2nd by 1, and the 3rd by 1, and adding the left-hand sides and right-hand sides respectively:
$4x_1 + 3x_2 + 6x_3 + 3x_4 \le 58$ ...... (1)
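The bounds quoted above can be reproduced mechanically. The short sketch below (an illustration, not from the slides) evaluates the objective at the two feasible points and forms the combined inequality with multipliers $(0, 1, 1)$.

```python
import numpy as np

c = np.array([4.0, 1.0, 5.0, 3.0])
A = np.array([[ 1.0, -1.0, -1.0,  3.0],
              [ 5.0,  1.0,  3.0,  8.0],
              [-1.0,  2.0,  3.0, -5.0]])
b = np.array([1.0, 55.0, 3.0])

# Lower bounds: objective values of feasible points.
for x in [np.array([0.0, 0.0, 1.0, 0.0]), np.array([3.0, 0.0, 2.0, 0.0])]:
    assert np.all(A @ x <= b) and np.all(x >= 0)
    print("z* >=", c @ x)        # prints 5.0 and 22.0

# Upper bound: combine the constraints with multipliers y = (0, 1, 1).
y = np.array([0.0, 1.0, 1.0])
print("combined coefficients:", y @ A)   # [4. 3. 6. 3.]
print("z* <=", y @ b)                    # 58.0
```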

Since we multiplied both sides of the inequalities by nonnegative numbers, any vector that satisfies the three constraints (in particular, any feasible solution to the LP) also satisfies
$4x_1 + 3x_2 + 6x_3 + 3x_4 \le 58$. ...... (1)
Note that points satisfying (1) do not necessarily satisfy the three constraints of the LP. Furthermore, any feasible solution to the LP must satisfy
$4x_1 + x_2 + 5x_3 + 3x_4 \le 4x_1 + 3x_2 + 6x_3 + 3x_4 \;(\le 58)$,
since the variables are nonnegative and each coefficient in the second expression is greater than or equal to the corresponding coefficient in the first expression (the objective function). So 58 is an upper bound on the optimal value of the LP.

Now, more generally, we may use a nonnegative weight $y_i$ for the $i$-th constraint of
max $4x_1 + x_2 + 5x_3 + 3x_4$
s.t. $x_1 - x_2 - x_3 + 3x_4 \le 1$
$5x_1 + x_2 + 3x_3 + 8x_4 \le 55$
$-x_1 + 2x_2 + 3x_3 - 5x_4 \le 3$
$x_1, x_2, x_3, x_4 \ge 0$
Combining the constraints with weights $y_1, y_2, y_3 \ge 0$ gives
$(y_1 + 5y_2 - y_3)x_1 + (-y_1 + y_2 + 2y_3)x_2 + (-y_1 + 3y_2 + 3y_3)x_3 + (3y_1 + 8y_2 - 5y_3)x_4 \le y_1 + 55y_2 + 3y_3$.
In vector notation, $\big(y_1(1,-1,-1,3) + y_2(5,1,3,8) + y_3(-1,2,3,-5)\big)'x \le y_1 + 55y_2 + 3y_3$.
Recall that the objective function of the LP is $4x_1 + x_2 + 5x_3 + 3x_4$.

Hence, as long as the nonnegative weights $y_i$ satisfy
$y_1 + 5y_2 - y_3 \ge 4$, $\quad -y_1 + y_2 + 2y_3 \ge 1$,
$-y_1 + 3y_2 + 3y_3 \ge 5$, $\quad 3y_1 + 8y_2 - 5y_3 \ge 3$,
we can use $y_1 + 55y_2 + 3y_3$ as an upper bound on the optimal value. To find the most accurate (smallest) upper bound, we want to minimize $y_1 + 55y_2 + 3y_3$ over all such weights; this minimization problem is the dual problem. Note that the objective value of any feasible solution to the dual problem provides an upper bound on the optimal value of the given LP (called the primal problem).
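Collecting the objective and the four constraints on the weights, the dual problem of this example can be written out as:

```latex
\begin{align*}
\text{minimize}\quad & y_1 + 55y_2 + 3y_3\\
\text{subject to}\quad & y_1 + 5y_2 - y_3 \ge 4\\
& -y_1 + y_2 + 2y_3 \ge 1\\
& -y_1 + 3y_2 + 3y_3 \ge 5\\
& 3y_1 + 8y_2 - 5y_3 \ge 3\\
& y_1,\, y_2,\, y_3 \ge 0
\end{align*}
```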

General form (P: primal problem, D: dual problem)

(P)  maximize $\sum_{j=1}^{n} c_j x_j$  subject to  $\sum_{j=1}^{n} a_{ij} x_j \le b_i$, $i = 1, \dots, m$;  $x_j \ge 0$, $j = 1, \dots, n$
(in matrix notation: maximize $c'x$ subject to $Ax \le b$, $x \ge 0$)

(D)  minimize $\sum_{i=1}^{m} b_i y_i$  subject to  $\sum_{i=1}^{m} a_{ij} y_i \ge c_j$, $j = 1, \dots, n$;  $y_i \ge 0$, $i = 1, \dots, m$
(in matrix notation: minimize $b'y$ subject to $y'A \ge c'$, $y \ge 0$)
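For an LP in this inequality standard form, the data of (D) are just the transposed data of (P). The sketch below (illustrative; the function name and interface are my own, not from the slides) returns the dual data so that (D) reads min $b'y$ s.t. $A'y \ge c$, $y \ge 0$.

```python
import numpy as np

def dual_of_standard_max(c, A, b):
    """Given (P): max c'x s.t. Ax <= b, x >= 0,
    return the data (b, A^T, c) of (D): min b'y s.t. A^T y >= c, y >= 0."""
    c, A, b = np.asarray(c, float), np.asarray(A, float), np.asarray(b, float)
    return b, A.T, c   # dual objective, dual constraint matrix, dual r.h.s.

# Example: the LP from the previous slides.
c = np.array([4.0, 1.0, 5.0, 3.0])
A = np.array([[ 1.0, -1.0, -1.0,  3.0],
              [ 5.0,  1.0,  3.0,  8.0],
              [-1.0,  2.0,  3.0, -5.0]])
b = np.array([1.0, 55.0, 3.0])
d_obj, d_lhs, d_rhs = dual_of_standard_max(c, A, b)
print(d_obj)   # [ 1. 55.  3.]  -> minimize y1 + 55*y2 + 3*y3
print(d_lhs)   # A^T: its rows give y1 + 5*y2 - y3 >= 4, etc.
```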

Thm (Weak duality relation): Suppose $(x_1, \dots, x_n)$ is a feasible solution to the primal problem (P) and $(y_1, \dots, y_m)$ is a feasible solution to the dual problem (D). Then
$\sum_{j=1}^{n} c_j x_j \le \sum_{i=1}^{m} b_i y_i$.    (5.4)
pf) $\sum_{j=1}^{n} c_j x_j \le \sum_{j=1}^{n} \big(\sum_{i=1}^{m} a_{ij} y_i\big) x_j = \sum_{i=1}^{m} \big(\sum_{j=1}^{n} a_{ij} x_j\big) y_i \le \sum_{i=1}^{m} b_i y_i$. (The first inequality uses dual feasibility and $x \ge 0$; the last uses primal feasibility and $y \ge 0$.) $\square$
Cor: If we can find a feasible $x^*$ for (P) and a feasible $y^*$ for (D) such that $\sum_{j=1}^{n} c_j x_j^* = \sum_{i=1}^{m} b_i y_i^*$, then $x^*$ is an optimal solution to (P) and $y^*$ is an optimal solution to (D).
pf) For every feasible solution $x$ to (P), we have $\sum_{j=1}^{n} c_j x_j \le \sum_{i=1}^{m} b_i y_i^* = \sum_{j=1}^{n} c_j x_j^*$. Similarly, for every feasible $y$ to (D), we have $\sum_{i=1}^{m} b_i y_i^* = \sum_{j=1}^{n} c_j x_j^* \le \sum_{i=1}^{m} b_i y_i$. $\square$
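As a concrete check of (5.4) on the running example (illustrative only), take the primal feasible point $x = (3, 0, 2, 0)$ and the dual feasible weights $y = (0, 1, 1)$ used earlier; the primal objective 22 is indeed bounded above by the dual objective 58.

```python
import numpy as np

c = np.array([4.0, 1.0, 5.0, 3.0])
A = np.array([[ 1.0, -1.0, -1.0,  3.0],
              [ 5.0,  1.0,  3.0,  8.0],
              [-1.0,  2.0,  3.0, -5.0]])
b = np.array([1.0, 55.0, 3.0])

x = np.array([3.0, 0.0, 2.0, 0.0])   # primal feasible point
y = np.array([0.0, 1.0, 1.0])        # dual feasible weights

assert np.all(A @ x <= b) and np.all(x >= 0)        # x feasible for (P)
assert np.all(A.T @ y >= c) and np.all(y >= 0)      # y feasible for (D)
assert c @ x <= b @ y                               # weak duality: 22 <= 58
print(c @ x, "<=", b @ y)
```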

Thm 5.1 [Strong Duality Theorem]: If (P) has an optimal solution $(x_1^*, \dots, x_n^*)$, then (D) also has an optimal solution, say $(y_1^*, \dots, y_m^*)$, and $\sum_{j=1}^{n} c_j x_j^* = \sum_{i=1}^{m} b_i y_i^*$ (i.e., there is no duality gap: dual optimal value $-$ primal optimal value $= 0$).
Note that the strong duality theorem says that if (P) has an optimal solution, then the dual (D) is neither unbounded nor infeasible, but always has an optimal solution.
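The theorem can be checked numerically on the running example. The sketch below (assuming SciPy is available; scipy.optimize.linprog minimizes, so the primal objective is negated) solves both (P) and (D) and verifies that the two optimal values coincide, matching the dual objective $y_1 + 55y_2 + 3y_3 = 29$ at the solution $y = (11, 0, 6)$ read later from the optimal tableau.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([4.0, 1.0, 5.0, 3.0])
A = np.array([[ 1.0, -1.0, -1.0,  3.0],
              [ 5.0,  1.0,  3.0,  8.0],
              [-1.0,  2.0,  3.0, -5.0]])
b = np.array([1.0, 55.0, 3.0])

# (P): max c'x s.t. Ax <= b, x >= 0  (linprog minimizes, so pass -c)
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 4, method="highs")

# (D): min b'y s.t. A'y >= c, y >= 0  (rewrite A'y >= c as -A'y <= -c)
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3, method="highs")

z_primal = -primal.fun     # undo the sign flip on the primal objective
z_dual = dual.fun
print(z_primal, z_dual)    # both should equal 29: no duality gap
assert abs(z_primal - z_dual) < 1e-8
```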

Idea of the proof: read the optimal solution of the dual problem from the coefficients of the slack variables in the zeroth equation of the optimal dictionary (tableau). ex) Note that the dual variables $y_1, y_2, y_3$ match naturally with the slack variables $x_5, x_6, x_7$: $x_5$ is the slack variable for the first constraint and $y_1$ is the dual variable for the first constraint, and so on.

At optimality, in the zeroth equation of the tableau, the coefficients of the slack variables are $-11$ for $x_5$, $0$ for $x_6$, and $-6$ for $x_7$. Assigning these values with reversed signs to the corresponding dual variables, we obtain the desired optimal solution of the dual: $y_1 = 11$, $y_2 = 0$, $y_3 = 6$.

Idea of the proof: the coefficients of $x_5$, $x_6$, and $x_7$ in the zeroth equation show which multiples of the corresponding equations were added to the zeroth equation over the course of the elementary row operations (the net effect of many row operations).
ex) Suppose we performed the row operations (row 2) $\leftarrow$ 2(row 1) + (row 2), and then (row 0) $\leftarrow$ 3(row 2) + (row 0). The net effect on the zeroth equation is adding 6(row 1) + 3(row 2) to it, and these scalars can be read from the coefficients of $x_5$ and $x_6$ in the zeroth equation.
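This bookkeeping claim can be verified with a small numerical sketch (the matrix entries below are arbitrary illustrative numbers, not the tableau from the slides): start with slack columns forming an identity in rows 1 and 2 and zeros in row 0, apply the two stated row operations, and the slack-column entries of row 0 become exactly the accumulated multipliers 6 and 3.

```python
import numpy as np

# Columns: [x1, x2, slack1, slack2, rhs]; rows: [row 0, row 1, row 2].
# Arbitrary structural coefficients; slack columns start as 0 in row 0
# and as the identity in rows 1 and 2.
T = np.array([[ 2.0, 3.0, 0.0, 0.0, 0.0],   # row 0
              [ 1.0, 1.0, 1.0, 0.0, 4.0],   # row 1 (slack1)
              [ 2.0, 5.0, 0.0, 1.0, 9.0]])  # row 2 (slack2)

T[2] = 2 * T[1] + T[2]      # (row 2) <- 2*(row 1) + (row 2)
T[0] = 3 * T[2] + T[0]      # (row 0) <- 3*(row 2) + (row 0)

# Net effect on row 0 is adding 6*(row 1) + 3*(row 2); the multipliers
# can be read from the slack columns of row 0.
print(T[0, 2:4])            # [6. 3.]
```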

(ex-continued) Apply (row 1) $\times$ 2 + (row 2) $\rightarrow$ (row 2). [Tableau omitted in this transcript.]

(ex-continued) Apply (row 2) $\times$ 3 + (row 0) $\rightarrow$ (row 0). [Tableau omitted in this transcript.]

Example: initial tableau and optimal tableau. [Tableaux omitted in this transcript.]

Let $y_i$ be the scalar by which the $i$-th equation was multiplied and added to the zeroth equation as the net effect of many simplex iterations (elementary row operations). Then the coefficients of the slack variables in the zeroth equation are exactly these values $y_i$, $i = 1, \dots, m$. Also, the coefficients of the structural variables in the zeroth equation are $c_j + \sum_{i=1}^{m} y_i a_{ij}$, $j = 1, \dots, n$.
Now, in the optimal tableau all the coefficients in the zeroth equation are $\le 0$, which implies
$y_i \le 0$, $i = 1, \dots, m$ (for slack variables)
$c_j + \sum_{i=1}^{m} y_i a_{ij} \le 0$, $j = 1, \dots, n$ (for structural variables)
If we take $(-y_i)$, $i = 1, \dots, m$, as a dual solution, it is dual feasible:
$(-y_i) \ge 0$, $i = 1, \dots, m$
$\sum_{i=1}^{m} (-y_i) a_{ij} \ge c_j$, $j = 1, \dots, n$
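For the running example this can be checked with the dual values read from the optimal tableau, $y = (11, 0, 6)$: the sketch below (illustrative) computes the zeroth-equation coefficients $c_j - \sum_i y_i a_{ij}$ for the structural variables and $-y_i$ for the slacks and confirms that all of them are $\le 0$, i.e., $(y_1, y_2, y_3)$ is dual feasible.

```python
import numpy as np

c = np.array([4.0, 1.0, 5.0, 3.0])
A = np.array([[ 1.0, -1.0, -1.0,  3.0],
              [ 5.0,  1.0,  3.0,  8.0],
              [-1.0,  2.0,  3.0, -5.0]])

y_dual = np.array([11.0, 0.0, 6.0])   # dual solution from the optimal tableau

# Zeroth-equation coefficients: c_j - sum_i y_i a_ij for structural variables,
# -y_i for the slack variables.  At optimality all of them must be <= 0.
structural = c - A.T @ y_dual
slack = -y_dual
print(structural)                     # [-1.  0. -2.  0.]
print(slack)                          # [-11.  -0.  -6.]
assert np.all(structural <= 1e-9) and np.all(slack <= 1e-9)
# Equivalently, A'y >= c and y >= 0: this y is dual feasible.
```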

Also, the constant term in the zeroth equation equals $\sum_{i=1}^{m} y_i b_i = -\sum_{i=1}^{m} (-y_i) b_i$, which is the negative of the dual objective value of the dual solution $(-y_i)$, $i = 1, \dots, m$. Note that the constant term in the zeroth equation is also the negative of the objective value of the current primal feasible solution. So we have found a feasible dual solution whose dual objective value equals the objective value of the current primal feasible solution. By the previous Corollary, the dual solution and the primal solution are optimal for the dual and the primal problem, respectively. This is the idea of the proof.

pf of strong duality theorem) Suppose we introduce slack variables $x_{n+i} = b_i - \sum_{j=1}^{n} a_{ij} x_j$ ($i = 1, \dots, m$), solve the LP by the simplex method, and obtain an optimal dictionary with
$z = z^* + \sum_{k=1}^{n+m} \bar{c}_k x_k$, $\quad \bar{c}_k \le 0$ for all $k$, $\quad z^* = \sum_{j=1}^{n} c_j x_j^*$.
Let $y_i^* = -\bar{c}_{n+i}$, $i = 1, \dots, m$. We claim that $y_i^*$, $i = 1, \dots, m$, is an optimal dual solution, i.e., it satisfies the dual constraints and $\sum_{i=1}^{m} b_i y_i^* = z^*$.
Equate $z = \sum_{j=1}^{n} c_j x_j$ and $z = z^* + \sum_{j=1}^{n} \bar{c}_j x_j + \sum_{j=n+1}^{n+m} \bar{c}_j x_j$:
$\sum_{j=1}^{n} c_j x_j = z^* + \sum_{j=1}^{n} \bar{c}_j x_j + \sum_{j=n+1}^{n+m} \bar{c}_j x_j$.
This equation must be satisfied by any $x$ that satisfies the equations of the dictionary (excluding the nonnegativity constraints), since the set of solutions satisfying the equations does not change under the row operations.

We have $\sum_{j=1}^{n} c_j x_j = z^* + \sum_{j=1}^{n} \bar{c}_j x_j + \sum_{j=n+1}^{n+m} \bar{c}_j x_j$. Substitute $x_{n+i} = b_i - \sum_{j=1}^{n} a_{ij} x_j$, $i = 1, \dots, m$, into this equation (any feasible solution to the dictionary must satisfy these defining equations), using $\bar{c}_{n+i} = -y_i^*$:
$\sum_{j=1}^{n} c_j x_j = z^* + \sum_{j=1}^{n} \bar{c}_j x_j - \sum_{i=1}^{m} y_i^* \big(b_i - \sum_{j=1}^{n} a_{ij} x_j\big)$
$\Rightarrow \sum_{j=1}^{n} c_j x_j = z^* - \sum_{i=1}^{m} b_i y_i^* + \sum_{j=1}^{n} \big(\bar{c}_j + \sum_{i=1}^{m} a_{ij} y_i^*\big) x_j$
Now this equation must hold for all solutions of the dictionary equations (disregarding the nonnegativity constraints). From the initial dictionary, we know that any such solution can be obtained by assigning arbitrary values to $x_1, \dots, x_n$ and setting $x_{n+i} = b_i - \sum_{j=1}^{n} a_{ij} x_j$ ($i = 1, \dots, m$). Use these solutions. Since the variables $x_{n+i}$ no longer appear in the above equation, it must hold for arbitrary values of $x_j$, $j = 1, \dots, n$.

$\sum_{j=1}^{n} c_j x_j = z^* - \sum_{i=1}^{m} b_i y_i^* + \sum_{j=1}^{n} \big(\bar{c}_j + \sum_{i=1}^{m} a_{ij} y_i^*\big) x_j$
The equality must hold for all values of $x_1, \dots, x_n$. Hence, comparing the constant terms and the coefficients of each $x_j$,
$z^* = \sum_{i=1}^{m} b_i y_i^*$, $\quad c_j = \bar{c}_j + \sum_{i=1}^{m} a_{ij} y_i^*$, $j = 1, \dots, n$.
Since $\bar{c}_k \le 0$ for all $k$:
$\bar{c}_j = c_j - \sum_{i=1}^{m} a_{ij} y_i^* \le 0 \;\Rightarrow\; \sum_{i=1}^{m} a_{ij} y_i^* \ge c_j$, $j = 1, \dots, n$;
$\bar{c}_{n+i} = -y_i^* \le 0 \;\Rightarrow\; y_i^* \ge 0$, $i = 1, \dots, m$.
Hence $y^*$ is dual feasible. Also, we have $\sum_{i=1}^{m} b_i y_i^* = z^* = \sum_{j=1}^{n} c_j x_j^*$. Since $\sum_{i=1}^{m} b_i y_i^* = \sum_{j=1}^{n} c_j x_j^* \le \sum_{i=1}^{m} b_i y_i$ for every dual feasible $y$ (weak duality), $y_i^*$, $i = 1, \dots, m$, is an optimal dual solution and $\sum_{i=1}^{m} b_i y_i^* = \sum_{j=1}^{n} c_j x_j^*$. $\square$