
1 Linear Programming Jeff Edmonds York University COSC 3101 Lecture 5 Topics: Definition and Hot Dog Example; Network Flow Definition; Matrix View of Linear Programming; Hill Climbing; Simplex Method; Dual Solution as Witness to Optimality; Defining the Dual Problem; Buy Fruit or Sell Vitamins; Duality; Primal-Dual Hill Climbing

2 Linear Programming Linear Program: an optimization problem whose constraints and cost function are linear functions. Goal: find a solution that optimizes the cost. E.g. maximize the cost function 21x1 - 6x2 - 100x3 - 100x4 subject to the constraint functions 5x1 + 2x2 + 31x3 - 20x4 ≤ 21; 1x1 - 4x2 + 3x3 + 10x4 ≤ 56; 6x1 + 60x2 - 31x3 - 15x4 ≤ 200; … Applied in various industrial fields (manufacturing, supply chain, logistics, marketing) to save money and increase profits!

3 A Hotdog A combination of pork, grain, and sawdust, …

4 Constraints: amount of moisture, amount of protein, …

5 The Hotdog Problem Given today’s prices, what is a fast algorithm to find the cheapest hotdog?

6 Abstract Out Essential Details Ingredients: pork, grain, water, sawdust. Cost per unit: 29, 8, 1, 2. Amount to add: x1, x2, x3, x4. Constraints (moisture, protein, …): 3x1 + 4x2 - 7x3 + 8x4 ≥ 12; 2x1 - 8x2 + 4x3 - 3x4 ≥ 24; -8x1 + 2x2 - 3x3 - 9x4 ≥ 8; x1 + 2x2 + 9x3 - 3x4 ≥ 31. Cost of hotdog: 29x1 + 8x2 + 1x3 + 2x4.

7 Abstract Out Essential Details Minimize: 29x1 + 8x2 + 1x3 + 2x4. Subject to: 3x1 + 4x2 - 7x3 + 8x4 ≥ 12; 2x1 - 8x2 + 4x3 - 3x4 ≥ 24; -8x1 + 2x2 - 3x3 - 9x4 ≥ 8; x1 + 2x2 + 9x3 - 3x4 ≥ 31.
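As a concrete illustration, the abstracted hotdog program can be written down as plain data. A minimal sketch in Python, using the slide’s made-up coefficients (the helper names are ours, and the code only checks and prices a candidate recipe rather than solving the LP):

```python
# The hotdog LP from the slides, written as plain Python data.
# A candidate recipe x is feasible if it meets every nutrition
# constraint A[i]·x >= b[i]; its price is cost·x.

cost = [29, 8, 1, 2]                 # price per unit: pork, grain, water, sawdust
A = [[ 3,  4, -7,  8],               # moisture
     [ 2, -8,  4, -3],               # protein
     [-8,  2, -3, -9],               # ...
     [ 1,  2,  9, -3]]
b = [12, 24, 8, 31]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def is_feasible(x):
    """Non-negative amounts meeting every '>=' constraint."""
    return all(xi >= 0 for xi in x) and all(dot(row, x) >= bi
                                            for row, bi in zip(A, b))

def price(x):
    return dot(cost, x)
```

(Amusingly, these particular toy coefficients admit no feasible recipe at all: adding four times the third constraint to the second gives -30x1 - 8x3 - 39x4 ≥ 56, impossible for non-negative amounts. The checker will reject every candidate.)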

8 Network Flow as a Linear Program Given an instance of Network Flow, express it as a Linear Program. The variables: a flow F⟨u,v⟩ for each edge ⟨u,v⟩. Maximize: rate(F), the net flow into the sink t, i.e. Σ_u F⟨u,t⟩ - Σ_v F⟨t,v⟩. Subject to: ∀⟨u,v⟩: F⟨u,v⟩ ≤ c⟨u,v⟩ (flow can’t exceed capacity); ∀v: Σ_u F⟨u,v⟩ = Σ_w F⟨v,w⟩ (flow in = flow out).
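The translation can be sketched mechanically. The tiny network below is made up for illustration, and the code only assembles the LP data (objective vector, capacity rows, conservation rows) in the shape the slide describes, rather than solving it:

```python
# Sketch: turning a flow network into LP data (maximize net flow into the
# sink, subject to capacity bounds and conservation). Example network is ours.

capacity = {('s','a'): 4, ('s','b'): 2, ('a','b'): 1, ('a','t'): 3, ('b','t'): 4}
edges = list(capacity)
source, sink = 's', 't'

# One LP variable per edge; objective = net flow into the sink.
obj = [(1 if v == sink else 0) - (1 if u == sink else 0) for (u, v) in edges]

# Capacity constraints: f_e <= capacity[e]  (plus f_e >= 0 as variable bounds).
cap_rows = [[1 if e == e2 else 0 for e2 in edges] for e in edges]
cap_rhs  = [capacity[e] for e in edges]

# Conservation: for every internal node w, flow in minus flow out equals 0.
internal = {u for e in edges for u in e} - {source, sink}
cons_rows = [[(1 if v == w else 0) - (1 if u == w else 0) for (u, v) in edges]
             for w in sorted(internal)]
```

Any LP solver fed (obj, cap_rows, cap_rhs, cons_rows) in this shape would return a maximum flow.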

9 Linear Programming Linear Program: Minimize C^T·X subject to M·X ≥ N. There are n variables x_j whose values we are looking for. An optimization function: each variable x_j has a coefficient c_j, and the dot product C^T·X gives the one value to minimize. There are m constraints: some linear combination of the variables must be at least some set value, i.e. ∀i, M_i·X ≥ N_i. It is generally implied that the variables are non-negative: x_j ≥ 0.

10 Linear Programming These are linear programs in “standard” form: Minimize C^T·X subject to M·X ≥ N and X ≥ 0, or Maximize C^T·X subject to M·X ≤ N and X ≥ 0. But you could mix and match.
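The “mix and match” remark can be made concrete: negating the objective, the matrix, and the bounds turns one standard form into the other (the optimal value just flips sign). A minimal sketch; the example numbers are the fruit LP that appears later in the deck:

```python
# Converting between the two "standard" forms from the slide:
# maximize C·X s.t. M·X <= N  ==  -(minimize (-C)·X s.t. (-M)·X >= -N).

def max_form_to_min_form(C, M, N):
    """maximize C·X s.t. M·X <= N  ->  minimize C'·X s.t. M'·X >= N'."""
    negC = [-c for c in C]
    negM = [[-a for a in row] for row in M]
    negN = [-n for n in N]
    return negC, negM, negN

negC, negM, negN = max_form_to_min_form([5, 7], [[2, 4], [3, 3]], [100, 90])
```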

11 Simplex Algorithm Invented by George Dantzig in 1947. A hill climbing algorithm. (Figure: a landscape with a local max and the global max.)

12 Simplex Algorithm Invented by George Dantzig in 1947. A hill climbing algorithm, yet guaranteed to find a global optimal solution for Linear Programs. (Figure: the global max.)

13 Simplex Algorithm Computes a solution to a Linear Program by evaluating the vertices where constraints intersect each other. Worst case exponential time, but very fast in practice. The Ellipsoid algorithm (1979) was the first polynomial-time, O(n^4·L), algorithm.

14 Simplex Algorithm Minimize Cost = 5x1 + 7x2. Constraint functions: C1: 2x1 + 4x2 ≥ 100; C2: 3x1 + 3x2 ≥ 90; x1, x2 ≥ 0. With n variables x1, x2, …, xn, there are n dimensions (here n = 2). Each constraint is an (n-1)-dimensional plane (here a 1-dim line). Each point in this space is a solution. It is a valid solution if it is on the correct side of each constraint plane.

15 Simplex Algorithm Cost function: Cost = 5x1 + 7x2. Constraint functions: C1: 2x1 + 4x2 ≥ 100; C2: 3x1 + 3x2 ≥ 90; x1, x2 ≥ 0. Solutions on one line of constant cost all have one value of the objective function; a parallel line has another. Points on the wrong side of a constraint are not valid. The optimal value is found where the best constant-cost line still touches the valid region.

16 Simplex Algorithm Cost function: Cost = 5x1 + 7x2. Constraint functions: C1: 2x1 + 4x2 ≥ 100; C2: 3x1 + 3x2 ≥ 90; x1, x2 ≥ 0. The arrow shows the direction in which the objective function increases. Note that the solution is a vertex (simplex). Each simplex is the intersection of n constraints (here n = # of variables = 2).
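Since the optimum sits at a vertex, this tiny two-variable program can even be solved by brute force: intersect every pair of constraints, keep the feasible vertices, and take the cheapest. A sketch of that idea (not the simplex method itself):

```python
from itertools import combinations

# Minimize 5x + 7y subject to the slide's constraints, each written a·(x,y) >= b.
cons = [((2, 4), 100), ((3, 3), 90), ((1, 0), 0), ((0, 1), 0)]
cost = (5, 7)

def intersect(c1, c2):
    """Solve the 2x2 system where both constraints are tight (None if parallel)."""
    (a1, b1), r1 = c1
    (a2, b2), r2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

feasible = lambda p: all(a * p[0] + b * p[1] >= r - 1e-9 for (a, b), r in cons)
verts = [p for c1, c2 in combinations(cons, 2)
         if (p := intersect(c1, c2)) is not None and feasible(p)]
best = min(verts, key=lambda p: cost[0] * p[0] + cost[1] * p[1])
```

Here `best` comes out as the vertex (10, 20) with cost 190. Enumerating all intersections is exponential in general (that is why simplex hill-climbs between vertices instead), but it is fine for two variables.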

17 Simplex Algorithm The arrow shows the direction in which the objective function increases. Note that the solution is a vertex (simplex). Each simplex is the intersection of n constraints (here n = # of variables = 2). The simplex method takes hill climbing steps, from one simplex (valid solution) to another.

18 Simplex Algorithm Maximize Cost: 21x1 - 6x2 - 100x3. Constraint functions: 5x1 + 2x2 + 31x3 ≤ 21; 1x1 - 4x2 + 3x3 ≤ 56; 6x1 + 60x2 - 31x3 ≤ 200; -5x1 + 3x2 + 4x3 ≤ 8; x1, x2, x3 ≥ 0. With n variables x1, x2, x3, …, xn, there are n dimensions (here n = 3). Each constraint is an (n-1)-dimensional plane (here the 2-dim triangles). Each simplex (vertex) is the intersection of n such constraints (here it looks like 6 meet, but generally only n = 3 do). A simplex is specified by a subset of n of the m tight (=) constraints. All other constraints must be satisfied.

19 Simplex Algorithm Maximize Cost: 21x1 - 6x2 - 100x3. Constraint functions: 5x1 + 2x2 + 31x3 ≤ 21; 1x1 - 4x2 + 3x3 ≤ 56; 6x1 + 60x2 - 31x3 ≤ 200; -5x1 + 3x2 + 4x3 ≤ 8; x1, x2, x3 ≥ 0. A simplex is specified by a subset of n of the m tight (=) constraints. If we slacken one of our n tight constraints, our solution slides along a 1-dim edge. Head in the direction that increases the potential function.

20 Simplex Algorithm Maximize Cost: 21x1 - 6x2 - 100x3. Constraint functions: 5x1 + 2x2 + 31x3 ≤ 21; 1x1 - 4x2 + 3x3 ≤ 56; 6x1 + 60x2 - 31x3 ≤ 200; -5x1 + 3x2 + 4x3 ≤ 8; x1, x2, x3 ≥ 0. A simplex is specified by a subset of n of the m tight (=) constraints. If we slacken one of our n tight constraints, our solution slides along a 1-dim edge. Head in the direction that increases the potential function. Keep sliding until we tighten some constraint. This is one hill climbing step.

21 Simplex Algorithm Maximize Cost: 21x1 - 6x2 - 100x3. Constraint functions: 5x1 + 2x2 + 31x3 ≤ 21; 1x1 - 4x2 + 3x3 ≤ 56; 6x1 + 60x2 - 31x3 ≤ 200; -5x1 + 3x2 + 4x3 ≤ 8; x1, x2, x3 ≥ 0. A simplex is specified by a subset of n of the m tight (=) constraints. If we slacken one of our n tight constraints, our solution slides along a 1-dim edge. Head in the direction that increases the potential function. Keep sliding until we tighten some constraint. This is one hill climbing step.

22 Simplex Algorithm Do all Linear Programs have optimal solutions? No! There are three types of Linear Programs: 1. Has an optimal solution with a finite cost value, e.g. the nutrition problem. 2. Unbounded, e.g. maximize x subject to x ≥ 0: the cost can grow without limit. 3. Infeasible, e.g. maximize x subject to x ≥ 2, x ≤ 1, x ≥ 0: no point satisfies every constraint.

23 Primal Dual

24 Hill Climbing Make small local changes to your solution to construct a slightly better solution. We have a valid solution (not necessarily optimal). Take a step that goes up. Measure progress: the value of our solution. Initially we have the “zero” solution. Problems: when do we exit? When we can’t take a step that goes up. Running time? If you take small steps, it could be exponential time. Can our Network Flow Algorithm get stuck in a local maximum? (Figure: local max vs. global max.)

25 Hill Climbing Avoiding getting stuck in a local maximum. Good execution: made better choices of direction. Bad execution: stuck at a local maximum. This is hard to avoid: back up and retry (exponential time), or define a bigger step.

26 Network Flow Can our Simplex Algorithm get stuck in a local max? No! How? We need to prove that for every linear program, for every choice of steps, an optimal solution is found!

27 Primal-Dual Hill Climbing The Mars settlement has a hilly landscape and many layers of roofs.

28 Primal-Dual Hill Climbing Primal problem: exponentially many locations to stand; find a highest one. Dual problem: exponentially many roofs; find a lowest one.

29 Primal-Dual Hill Climbing Prove: every roof is above every location to stand. ∀R ∀L, height(R) ≥ height(L). Hence height(R_min) ≥ height(L_max). Is there a gap?

30 Primal-Dual Hill Climbing Prove: for every location to stand, either the alg takes a step up, or the alg gives a reason that explains why not, by giving a ceiling of equal height. i.e. ∀L [∃L′ height(L′) > height(L) or ∃R height(R) = height(L)]. But ∀R ∀L, height(R) ≥ height(L). No gap.

31 Primal-Dual Hill Climbing Prove: for every location to stand, either the alg takes a step up, or the alg gives a reason that explains why not, by giving a ceiling of equal height. i.e. ∀L [∃L′ height(L′) > height(L) or ∃R height(R) = height(L)]. Can’t go up from this location and no matching ceiling? Can’t happen!

32 Primal-Dual Hill Climbing Prove: for every location to stand, either the alg takes a step up, or the alg gives a reason that explains why not, by giving a ceiling of equal height. i.e. ∀L [∃L′ height(L′) > height(L) or ∃R height(R) = height(L)]. No local maximum!

33 Primal-Dual Hill Climbing Claim: primal and dual have the same optimal value: height(R_min) = height(L_max). Proved: ∀R ∀L, height(R) ≥ height(L). Proved: the alg runs until it provides L_alg and R_alg with height(R_alg) = height(L_alg), i.e. no gap. Hence height(R_min) ≤ height(R_alg) = height(L_alg) ≤ height(L_max) ≤ height(R_min), so height(R_min) = height(L_max). L_alg is a witness that height(L_max) is no smaller; R_alg is a witness that it is no bigger. Exit.

34 Duality Linear Program: Minimize C^T·X subject to M·X ≥ N. There are n variables x_j whose values we are looking for. An optimization function: each variable x_j has a coefficient c_j, and the dot product C^T·X gives the one value to minimize. There are m constraints: some linear combination of the variables must be at least some set value, i.e. ∀i, M_i·X ≥ N_i. It is generally implied that the variables are non-negative: x_j ≥ 0.

35 Duality These are the linear programs in “standard” form: Minimize C^T·X subject to M·X ≥ N and X ≥ 0, or Maximize C^T·X subject to M·X ≤ N and X ≥ 0. But you could mix and match.

36 Duality Dual Linear Program: For every primal linear program, we define its dual linear program. Everything is turned upside down. The matrix of coefficients is transposed. For each constraint, a variable: form the dual’s objective-function vector from the primal’s constraint vector N. Generally the constraint is ‘≥’, and then the dual variable satisfies Y_i ≥ 0. Primal Linear Program: Minimize C^T·X subject to M·X ≥ N, X ≥ 0.

37 Duality Dual Linear Program: For every primal linear program, we define its dual linear program. Everything is turned upside down. The matrix of coefficients is transposed. For each constraint, a variable: form a dual constraint vector from the primal objective-function vector. Generally the constraint is ‘≥’, and then the dual variable satisfies Y_i ≥ 0. But if it is ‘=’, then the variable Y_i is unconstrained. Primal Linear Program: Minimize C^T·X subject to M·X ≥ N, X ≥ 0.

38 Duality Dual Linear Program: For every primal linear program, we define its dual linear program. Everything is turned upside down. The matrix of coefficients is transposed. For each constraint, a variable; for each variable, a constraint: form the dual constraint vector from the primal objective-function vector. Max ↔ Min and ≥ ↔ ≤. Primal: Minimize C^T·X subject to M·X ≥ N. Dual: Maximize N^T·Y subject to M^T·Y ≤ C.

39 Duality For every primal linear program, we define its dual linear program. Everything is turned upside down: Max Flow ↔ Min Cut; buyer of nutrients in fruit ↔ seller of nutrients in vitamins. Primal: Minimize C^T·X subject to M·X ≥ N. Dual: Maximize N^T·Y subject to M^T·Y ≤ C.

40 Duality Primal: Minimize C^T·X subject to M·X ≥ N. Dual: Maximize N^T·Y subject to M^T·Y ≤ C. Every solution X of the primal is above every solution Y of the dual: ∀X ∀Y, C^T·X ≥ N^T·Y, hence C^T·X_min ≥ N^T·Y_max. We will prove equality. The dual of the dual is the primal itself!
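The transformation on this slide is mechanical enough to write down. A hedged sketch (it ignores the min/max and ≥/≤ flips, which ride along with the swap) showing that transposing twice and swapping the vectors twice recovers the primal; the numbers are the fruit LP from later slides:

```python
# The slide's recipe, as code: the dual of (min C·X s.t. M·X >= N, X >= 0)
# is (max N·Y s.t. Mᵀ·Y <= C, Y >= 0). Taking the dual twice gets the primal back.

def dual_of(C, M, N):
    Mt = [list(col) for col in zip(*M)]   # transpose the coefficient matrix
    return N, Mt, C                       # objective vector and bound vector swap roles

C = [5, 7]                 # cost of apples, bananas
M = [[2, 4], [3, 3]]       # calories / vitamins per fruit
N = [100, 90]              # daily requirements

Nd, Mt, Cd = dual_of(C, M, N)
C2, M2, N2 = dual_of(Nd, Mt, Cd)          # dual of the dual = the primal
```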

41 The Nutrition Problem Each fruit contains different nutrients. Each fruit has a different cost. An apple a day keeps the doctor away, but apples are costly! A customer’s goal is to fulfill daily nutrition requirements at lowest cost.

42 The Nutrition Problem (cont’d) Let’s take a simpler case of just apples and bananas. One must take at least 100 units of calories and 90 units of vitamins for good nutrition. A customer’s goal is to buy fruits in a quantity that minimizes cost but fulfills nutrition.

          Calories  Vitamins  Cost ($)
  Apple      2         3         5
  Banana     4         3         7

Cost function: Cost = 5x1 + 7x2. Constraint functions: C1: 2x1 + 4x2 ≥ 100; C2: 3x1 + 3x2 ≥ 90; x1, x2 ≥ 0.

43 The Nutrition Problem (cont’d) Matrix representation. Constraints: 2x1 + 4x2 ≥ 100; 3x1 + 3x2 ≥ 90. Non-negativity: x1, x2 ≥ 0. Cost function: 5x1 + 7x2. In matrix form: minimize C^T·X subject to M·X ≥ N. Real-life problems may have many variables and constraints!

44 Semantics of Duality A customer’s goal is to buy fruits in a quantity that minimizes cost but fulfills nutrition. Primal LP: minimize C·X subject to Q·X ≥ N. The coefficients in each column of Q represent the amounts of the nutrients in a particular food. C: cost of each fruit. N: daily nutrition. X: quantity of each fruit.

45 Semantics of Duality Dual LP: maximize N^T·Y subject to Q^T·Y ≤ C^T. The coefficients in each row represent the amounts of the nutrients in a particular fruit. But what are the Y_i in the dual? The price of each nutrient! N: daily nutrition. C: cost of each fruit. Imagine a salesman trying to sell supplements in place of each fruit.

46 Semantics of Duality Primal problem (customer): buy fruits in a quantity that minimizes cost but fulfills nutrition. Dual problem (salesman): set a price on each nutrient so as to maximize profit while keeping the supplements cheaper than the fruits. (Otherwise who will buy them?!)
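The customer/salesman story can be checked numerically with the apple/banana numbers from these slides. A small sketch; the helper names are made up for illustration:

```python
# Customer buys quantities x = (apples, bananas); salesman prices nutrients
# y = (per calorie, per vitamin). Supplements must undercut each fruit.

nutrients = {'apple': (2, 3), 'banana': (4, 3)}   # (calories, vitamins) per fruit
fruit_cost = {'apple': 5, 'banana': 7}
need = (100, 90)                                   # daily (calories, vitamins)

def customer_cost(x):
    return 5 * x[0] + 7 * x[1]

def salesman_revenue(y):
    """Revenue from selling the full daily requirement at nutrient prices y."""
    return need[0] * y[0] + need[1] * y[1]

def undercuts_fruit(y):
    """Synthetic nutrients of each fruit must cost no more than the fruit."""
    return all(a * y[0] + v * y[1] <= fruit_cost[f]
               for f, (a, v) in nutrients.items())

x = (10, 20)        # a feasible purchase: 100 calories, 90 vitamins, cost 190
y = (1, 1)          # valid prices: 2 + 3 = 5 <= 5 and 4 + 3 = 7 <= 7
```

Here the customer pays 190 and the salesman earns 190: weak duality says the salesman can never earn more than any feasible purchase costs, and matching values certify that both choices are optimal.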

47 Primal-Dual Hill Climbing Prove: for every solution x of the primal, either the alg takes a step up to a better primal solution, or the alg gives a reason that explains why not, by giving a solution y of the dual of equal value. i.e. ∀x [∃x′ C^T·x′ > C^T·x or ∃y C^T·x = N^T·y]. But ∀x ∀y, C^T·x ≥ N^T·y. No gap.

48 Primal-Dual Hill Climbing Prove: for every solution x of the primal, either the alg takes a step up to a better primal solution, or the alg gives a reason that explains why not, by giving a solution y of the dual of equal value. i.e. ∀x [∃x′ C^T·x′ > C^T·x or ∃y C^T·x = N^T·y]. Can’t go up from this solution and no matching dual solution? Can’t happen!

49 Primal-Dual Hill Climbing Prove: for every solution x of the primal, either the alg takes a step up to a better primal solution, or the alg gives a reason that explains why not, by giving a solution y of the dual of equal value. i.e. ∀x [∃x′ C^T·x′ > C^T·x or ∃y C^T·x = N^T·y]. No local maximum!

50 Primal-Dual Hill Climbing Claim: the primal and the dual have the same optimal value: C^T·x_min = N^T·y_max. No gap: x_alg witnesses that the optimal value is no smaller; y_alg witnesses that it is no bigger. Exit.

51 Simplex Algorithm Maximize Cost: 21x1 - 6x2 - 100x3. Constraint functions: 5x1 + 2x2 + 31x3 ≤ 21; 1x1 - 4x2 + 3x3 ≤ 56; 6x1 + 60x2 - 31x3 ≤ 200; -5x1 + 3x2 + 4x3 ≤ 8; x1, x2, x3 ≥ 0. Given any solution x of the primal. A simplex is specified by a subset of n of the m tight (=) constraints. All other constraints must be satisfied.

52 Simplex Algorithm Maximize Cost: 21x1 - 6x2 - 100x3. Constraint functions: 5x1 + 2x2 + 31x3 ≤ 21; 1x1 - 4x2 + 3x3 ≤ 56; 6x1 + 60x2 - 31x3 ≤ 200; -5x1 + 3x2 + 4x3 ≤ 8; x1, x2, x3 ≥ 0. Given any solution x of the primal. If we slacken one of our n tight constraints, our solution slides along a 1-dim edge. Head in the direction that increases the potential function. A simplex is specified by a subset of n of the m tight (=) constraints.

53 Simplex Algorithm Maximize Cost: 21x1 - 6x2 - 100x3. Constraint functions: 5x1 + 2x2 + 31x3 ≤ 21; 1x1 - 4x2 + 3x3 ≤ 56; 6x1 + 60x2 - 31x3 ≤ 200; -5x1 + 3x2 + 4x3 ≤ 8; x1, x2, x3 ≥ 0. Given any solution x of the primal. If we slacken one of our n tight constraints, our solution slides along a 1-dim edge. Head in the direction that increases the potential function. Keep sliding until we tighten some constraint, giving a new solution x′. A simplex is specified by a subset of n of the m tight (=) constraints.

54 Simplex Algorithm Maximize Cost: 21x1 - 6x2 - 100x3. Constraint functions: 5x1 + 2x2 + 31x3 ≤ 21; 1x1 - 4x2 + 3x3 ≤ 56; 6x1 + 60x2 - 31x3 ≤ 200; -5x1 + 3x2 + 4x3 ≤ 8; x1, x2, x3 ≥ 0. A simplex is specified by a subset of n of the m tight (=) constraints. Given any solution x of the primal. If we slacken one of our n tight constraints, our solution slides along a 1-dim edge. Head in the direction that increases the potential function. Keep sliding until we tighten some constraint, giving a new solution x′.

55 Simplex Algorithm Maximize Cost: 21x1 - 6x2 - 100x3. Constraint functions: 5x1 + 2x2 + 31x3 ≤ 21; 1x1 - 4x2 + 3x3 ≤ 56; 6x1 + 60x2 - 31x3 ≤ 200; -5x1 + 3x2 + 4x3 ≤ 8; x1, x2, x3 ≥ 0. A simplex is specified by a subset of n of the m tight (=) constraints. Given any solution x of the primal, either: step to another solution x′, or none of these “steps” increases the potential function, and the alg gives a reason that explains why not, by giving a solution y of the dual of equal value.

56 Simplex Algorithm Maximize Cost: 21x1 - 6x2 - 100x3. Constraint functions: 5x1 + 2x2 + 31x3 ≤ 21; 1x1 - 4x2 + 3x3 ≤ 56; 6x1 + 60x2 - 31x3 ≤ 200; -5x1 + 3x2 + 4x3 ≤ 8; x1, x2, x3 ≥ 0. A simplex is specified by a subset of n of the m tight (=) constraints. Worst case exponential time, but in practice it tends to be fast. The bread and butter of optimization in industry.

57 Thank You! Questions? End

58 Simplex Algorithm Do all Linear Programs have optimal solutions? No! There are three types of Linear Programs: 1. Has an optimal solution with a finite cost value, e.g. the nutrition problem. 2. Unbounded, e.g. maximize x subject to x ≥ 0: the cost can grow without limit. 3. Infeasible, e.g. maximize x subject to x ≥ 2, x ≤ 1, x ≥ 0: no point satisfies every constraint.

59 Simplex Cost function: P = 5x + 7y. Constraint functions: C1: 2x + 4y ≤ 100; C2: 3x + 3y ≤ 90. Non-negativity: x, y ≥ 0. Recall we need to evaluate our cost function at the vertices where the constraint functions intersect each other.

60 Simplex (cont’d) We don’t want to deal with complex inequalities, so we introduce 2 new variables called slack variables. Our equations: P = 5x + 7y; C1: 2x + 4y ≤ 100; C2: 3x + 3y ≤ 90; x, y ≥ 0. Slack form: P = 5x + 7y; s1 = 100 - 2x - 4y; s2 = 90 - 3x - 3y; x, y, s1, s2 ≥ 0.

61 Simplex (cont’d) P = 5x + 7y; s1 = 100 - 2x - 4y; s2 = 90 - 3x - 3y; s1, s2, x, y ≥ 0. STEP 1: we want an initial point. Let’s put x = 0, y = 0. Feasible solution: x = 0, y = 0, P = 0.

62 Simplex (cont’d) P = 5x + 7y; s1 = 100 - 2x - 4y; s2 = 90 - 3x - 3y; s1, s2, x, y ≥ 0. STEP 2: we want the next point. Let’s try to increase x. x can be increased to at most 30 (then s2 becomes zero). Rewrite the equations (pivoting): x = 30 - y - (1/3)s2; s1 = 40 + (2/3)s2 - 2y; P = 150 - (5/3)s2 + 2y. Now put y = 0 and s2 = 0. Feasible solution: x = 30, y = 0, P = 150.

63 Simplex (cont’d) P = 150 - (5/3)s2 + 2y; x = 30 - y - (1/3)s2; s1 = 40 + (2/3)s2 - 2y; s1, s2, x, y ≥ 0. STEP 3: we want the next point. (We don’t increase s2, because that would decrease P.) Let’s try to increase y. y can be increased to at most 20 (then s1 becomes zero). Rewrite the equations (pivoting): y = 20 + (1/3)s2 - (1/2)s1; x = 10 + (1/2)s1 - (2/3)s2; P = 190 - s1 - s2. Now put s1 = 0 and s2 = 0. Feasible solution: x = 10, y = 20, P = 190. Note that we cannot increase s1 or s2 without decreasing P, so we stop! Is this solution optimal, or have we run into a local maximum?
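The three pivoting steps above are exactly what a tableau implementation automates. A self-contained sketch (one illustrative implementation, not the lecture’s code), run on the same program:

```python
# A compact tableau simplex for: maximize c·x subject to A·x <= b, x >= 0,
# assuming b >= 0 so the all-slack starting point is feasible (as on the slides).

def simplex(c, A, b):
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; last row holds the reduced costs [-c | 0 | 0].
    T = [A[i] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = [n + i for i in range(m)]          # start with the slack variables basic
    while True:
        # Entering variable: a column with a negative reduced cost (can still go up).
        piv_col = next((j for j in range(n + m) if T[-1][j] < -1e-9), None)
        if piv_col is None:
            break                              # no improving direction: optimal
        # Leaving variable: tightest ratio, i.e. the first constraint to go tight.
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-9]
        if not ratios:
            raise ValueError("unbounded")
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):                 # eliminate the column elsewhere
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [a - f * r for a, r in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, T[-1][-1]                        # solution and optimal value P

x, P = simplex([5, 7], [[2, 4], [3, 3]], [100, 90])   # the slides' example
```

On the slides’ program it reaches the same optimum, x = 10, y = 20 with P = 190, in two pivots, mirroring STEP 2 and STEP 3.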

