
Chapter 6. Large Scale Optimization




1 Chapter 6. Large Scale Optimization
6.1 Delayed column generation min c'x, Ax = b, x ≥ 0. A has full row rank and a large number of columns, so it is impractical to have all columns available initially. Instead, we want to generate (find) an entering nonbasic variable (column) as needed (the column generation technique). If the reduced cost c̄i = ci − p'Ai < 0 (p: simplex multipliers), then xi can enter the basis. Hence solve min c̄i over all i. If min c̄i < 0, we have found an entering variable (column). If min c̄i ≥ 0, no entering column exists, hence the current basis is optimal. If an entering column is found, solve the restricted problem to optimality: min Σi∈I cixi, s.t. Σi∈I Aixi = b, x ≥ 0 (I: index set of variables we have at hand). Then continue to find entering columns. Linear Programming 2011
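The pricing step described above can be sketched as follows. This is a minimal illustration, not the full algorithm: the function name, the explicit column list, and the toy data are all mine; in a real implementation the columns would be generated on demand rather than enumerated.

```python
def price_columns(p, columns, costs):
    """Pricing step of delayed column generation: return the index of the
    column with the most negative reduced cost c_i - p'A_i, or None if
    every reduced cost is nonnegative (current basis is optimal)."""
    best_i, best_rc = None, 0.0
    for i, (c_i, A_i) in enumerate(zip(costs, columns)):
        rc = c_i - sum(p_j * a_j for p_j, a_j in zip(p, A_i))
        if rc < best_rc:
            best_i, best_rc = i, rc
    return best_i

# toy data: two rows, three columns (hypothetical numbers)
p = [1.0, 2.0]
columns = [[1, 0], [0, 1], [1, 1]]
costs = [1.0, 2.5, 2.5]
print(price_columns(p, columns, costs))  # column 2: reduced cost 2.5 - 3.0 = -0.5
```

If the returned index is None, the current basis is optimal for the full problem; otherwise the returned column is added to the restricted problem, which is then reoptimized.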

2 6.2. Cutting stock problem [figure: a raw of width W = 70 cut into three finals of width 17, one final of width 15, and scrap]

3 bi rolls of width wi , i = 1, 2, … , m need to be produced.
Rolls of paper of width W (called raws) are to be cut into smaller pieces (called finals). bi rolls of width wi, i = 1, 2, …, m, need to be produced. How should the raws be cut so as to minimize the number of raws used while satisfying the orders? ex) If W = 70, then 3 finals of w1 = 17 and 1 final of w2 = 15 can be produced from one raw. This way of production can be represented as the pattern (3, 1, 0, 0, …, 0). A vector (a1j, a2j, …, amj)' is a feasible pattern (the j-th pattern) if Σi aijwi ≤ W.

4 Formulation
Formulation: min Σj=1,…,n xj s.t. Σj=1,…,n aijxj ≥ bi, i = 1, …, m, xj ≥ 0 integer, where aij is the number of i-th finals produced in the j-th pattern and xj is the number of raws cut according to the j-th pattern. Note that the number of possible cutting patterns can be very large. We need an integer solution, but the LP relaxation can be used to find a good approximate solution if the solution value is large. For an initial b.f.s., for j = 1, …, m, let the j-th pattern consist of one final of width wj and none of the other widths.
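The pattern-feasibility condition Σi aijwi ≤ W and the simple initial patterns can be sketched as follows (a toy illustration using the W = 70 example above; the function names are mine):

```python
def is_feasible_pattern(a, w, W):
    """A cutting pattern a = (a_1, ..., a_m) is feasible if its finals fit in one raw."""
    return sum(ai * wi for ai, wi in zip(a, w)) <= W

def initial_patterns(m):
    """Initial b.f.s. patterns: the j-th pattern cuts one final of width w_j only."""
    return [[1 if i == j else 0 for i in range(m)] for j in range(m)]

w = [17, 15]   # final widths
W = 70         # raw width
print(is_feasible_pattern([3, 1], w, W))   # 3*17 + 1*15 = 66 <= 70 -> True
print(is_feasible_pattern([4, 1], w, W))   # 4*17 + 1*15 = 83 >  70 -> False
print(initial_patterns(2))                 # [[1, 0], [0, 1]]
```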

5 ⇒ max p'Aj over all patterns (integer knapsack problem)
After computing the p vector, we try to find an entering nonbasic variable (column). A candidate entering column (pattern) is any nonbasic variable with reduced cost (1 − p'Aj) < 0, hence solve min (1 − p'Aj) over all possible patterns ⇔ max p'Aj over all patterns, i.e. the integer knapsack problem: max Σi=1,…,m piai s.t. Σi=1,…,m wiai ≤ W, ai ≥ 0 integer. Use a dynamic programming algorithm for the integer knapsack problem (can assume pi, wi > 0, integer; knapsack is NP-hard, so no polynomial time algorithm is known).

6 Let F(v) be optimal value of the problem when knapsack capacity is v.
Let wmin = mini { wi }. For v < wmin, F(v) = 0. For v ≥ wmin, F(v) = maxi=1,…,m { F(v − wi) + pi : v ≥ wi }. F(v) is the true optimal value when the knapsack capacity is v: (≥) Suppose a0 is an optimal solution when the r.h.s. is v − wi; then a0 + ei is a feasible solution when the r.h.s. is v. Hence F(v) ≥ F(v − wi) + pi, i = 1, …, m, v ≥ wi. (≤) Suppose a* is an optimal solution when the r.h.s. is v ≥ wmin; then there exists some k with a*k > 0 and v ≥ wk. Hence a* − ek is a feasible solution when the r.h.s. is v − wk. So F(v − wk) ≥ F(v) − pk, i.e., F(v) ≤ F(v − wk) + pk for some k.
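The recursion above, together with backtracking to recover an optimal pattern, can be sketched as the following O(mW) dynamic program (a minimal sketch; the function name and toy data are mine):

```python
def integer_knapsack(p, w, W):
    """Solve max p'a s.t. w'a <= W, a >= 0 integer, via the recursion
    F(v) = max_i { F(v - w_i) + p_i : v >= w_i }, with F(v) = 0 for v < min_i w_i.
    Returns (optimal value, optimal solution a) using backtracking."""
    m = len(w)
    F = [0.0] * (W + 1)
    choice = [-1] * (W + 1)      # item added at capacity v, for backtracking
    for v in range(1, W + 1):
        for i in range(m):
            if w[i] <= v and F[v - w[i]] + p[i] > F[v]:
                F[v] = F[v - w[i]] + p[i]
                choice[v] = i
    # backtrack through the recursion to recover an optimal solution a
    a, v = [0] * m, W
    while choice[v] != -1:
        a[choice[v]] += 1
        v -= w[choice[v]]
    return F[W], a

# toy pricing data: duals p = (3, 2), widths w = (17, 15), raw width W = 70
print(integer_knapsack([3, 2], [17, 15], 70))  # (12.0, [4, 0]): four finals of width 17
```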

7 Actual solution recovered by backtracking the recursion.
The running time of the algorithm is O(mW), which is not polynomial in the length of the encoding. This is called pseudopolynomial running time (polynomial in the magnitude of the data W itself). Note: the running time would be polynomial if it were polynomial with respect to m and log2W, but W = 2^(log2 W), which is not polynomial in log2W. Many practical problems can be naturally formulated similarly to the cutting stock problem, especially 0-1 IPs with many columns. For the cutting stock problem we only obtained an approximate fractional solution. But for a 0-1 IP a fractional solution can be of little help, and we need a mechanism to find an optimal integer solution (the branch-and-price approach: column generation combined with branch-and-bound).

8 6.3. Cutting plane methods Dual of column generation (constraint generation). Consider max p'b, s.t. p'Ai ≤ ci, i = 1, …, n (1) (n can be very large). Solve max p'b, s.t. p'Ai ≤ ci, i ∈ I, I ⊆ {1, …, n} (2) and get an optimal solution p* to (2). If p* is feasible to (1), then it is also optimal to (1). If p* is infeasible to (1), find a violated constraint in (1) and add it to (2), then reoptimize (2). Repeat. Recall the TSP formulation with subtour elimination constraints.

9 Solve min ci − (p*)'Ai over all i. If optimal value ≥ 0 ⇒ p* ∈ P
Separation problem: Given a polyhedron P (described by possibly many inequalities) and a vector p*, determine whether p* ∈ P. If p* ∉ P, find a (valid) inequality violated by p*. Solve min ci − (p*)'Ai over all i. If the optimal value ≥ 0 ⇒ p* ∈ P. If the optimal value < 0 ⇒ ci < (p*)'Ai for some i (a violated constraint).
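For the finitely generated case P = { p : p'Ai ≤ ci, i = 1, …, n }, a separation oracle can be sketched as below (a minimal illustration; real cutting plane methods solve the separation problem without enumerating all constraints, and the names and data here are mine):

```python
def separate(p_star, columns, costs):
    """Separation oracle: return None if p* is in P, otherwise the index
    of a violated inequality p'A_i <= c_i (an i with c_i - p*'A_i < 0)."""
    for i, (c_i, A_i) in enumerate(zip(costs, columns)):
        if c_i - sum(p * a for p, a in zip(p_star, A_i)) < 0:
            return i
    return None

columns = [[1, 0], [0, 1], [1, 1]]
costs = [1.0, 1.0, 1.5]
print(separate([0.5, 0.5], columns, costs))  # all inequalities hold -> None
print(separate([1.0, 1.0], columns, costs))  # 1.5 - 2.0 < 0 -> constraint 2 violated
```

The returned index is exactly the constraint that would be added to the restricted problem (2) before reoptimizing.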

10 6.4. Dantzig-Wolfe decomposition
Use of the decomposition (resolution) theorem to represent a specially structured LP problem in a different form; the column generation technique is then used to solve it. Consider an LP in the following form: min c1'x1 + c2'x2 s.t. D1x1 + D2x2 = b0, F1x1 = b1, F2x2 = b2, x1, x2 ≥ 0 (x1, x2 of dimension n1, n2; b0, b1, b2 of dimension m0, m1, m2). Let Pi = { xi ≥ 0 : Fixi = bi }, i = 1, 2, and assume Pi ≠ ∅. Note that the nonnegativity constraints guarantee that Pi is pointed, hence S = {0} in the decomposition P = S + K + Q.

11 xi  Pi can be represented as
 min xi  Pi can be represented as Plug into (2)  get master problem min Linear Programming 2011

12 Alternatively, its columns can be viewed as
Alternatively, its columns can be viewed as [ (D1x1j)', 1, 0 ]' and [ (D1w1k)', 0, 0 ]' for block 1, and [ (D2x2j)', 0, 1 ]' and [ (D2w2k)', 0, 0 ]' for block 2. The new formulation has many variables (columns), but it can be solved by the column generation technique. The actual solution x1, x2 can be recovered from λ and θ: xi is expressed as a convex combination of extreme points of Pi plus a conical combination of extreme rays of Pi.
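Recovering xi from a master solution can be sketched as follows (a minimal illustration; the function name, the list representation of extreme points/rays, and the numbers are all mine):

```python
def recover_x(extreme_points, lambdas, extreme_rays, thetas):
    """x = sum_j lambda_j x_j + sum_k theta_k w_k, with sum_j lambda_j = 1
    and lambda, theta >= 0: a convex combination of extreme points plus a
    conical combination of extreme rays."""
    assert abs(sum(lambdas) - 1.0) < 1e-9, "convexity constraint must hold"
    n = len(extreme_points[0])
    x = [0.0] * n
    for lam, pt in zip(lambdas, extreme_points):
        for idx in range(n):
            x[idx] += lam * pt[idx]
    for th, ray in zip(thetas, extreme_rays):
        for idx in range(n):
            x[idx] += th * ray[idx]
    return x

# hypothetical data: two extreme points, one extreme ray
print(recover_x([[0, 0], [4, 2]], [0.5, 0.5], [[1, 0]], [3.0]))  # [5.0, 1.0]
```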

13 Decomposition algorithm
Suppose we have a b.f.s. to the master problem with dual values p = (q, r1, r2), q ∈ R^m0, r1, r2 ∈ R. Then the reduced costs are (c1' − q'D1)x1j − r1 for λ1j and (c1' − q'D1)w1k for θ1k. A variable can enter if its reduced cost is < 0. Hence solve min (c1' − q'D1)x1, x1 ∈ P1 (the subproblem).

14 simplex returns extreme ray w1k with ( c1’ – q’D1 )w1k < 0.
(a) optimal cost is −∞ ⇒ simplex returns an extreme ray w1k with (c1' − q'D1)w1k < 0. Generate the column for θ1k, i.e. [ (D1w1k)', 0, 0 ]'. (b) optimal cost finite and < r1 ⇒ simplex returns an extreme point x1j with (c1' − q'D1)x1j < r1. Generate the column for λ1j, i.e. [ (D1x1j)', 1, 0 ]'. (c) optimal cost ≥ r1 ⇒ (c1' − q'D1)x1j ≥ r1 for all x1j and (c1' − q'D1)w1k ≥ 0 for all w1k ⇒ no entering variable among the λ1j, θ1k. Perform the same for the λ2j, θ2k. The method can also be used when there are more than 2 blocks, or just one block, in the constraints.

15 Starting the algorithm
Find extreme points x11 and x21 of P1 and P2. We may assume D1x11 + D2x21 ≤ b0; then solve an auxiliary (phase I) master problem that minimizes the sum of artificial variables introduced into the coupling constraints, to obtain an initial b.f.s. for the master problem.

16 Termination and computational experience
Fast improvement is observed in early iterations, but convergence becomes slow in the tail of the sequence. The revised simplex method on the original problem is often more competitive in terms of running time; decomposition is suitable for large, structured problems.

17 Bounds on the optimal cost
Thm 6.1: Suppose the optimal cost z* is finite. Let z be the cost of the current b.f.s. to the master problem (an upper bound on z*), ri the current dual value for the i-th convexity constraint, and zi the finite optimal cost of the i-th subproblem. Then z + Σi (zi − ri) ≤ z* ≤ z. pf) Modify the current dual solution into a dual feasible solution by decreasing the value of ri to zi. The dual of the master problem is: max q'b0 + r1 + r2 s.t. q'D1x1j + r1 ≤ c1'x1j, j ∈ J1, q'D1w1k ≤ c1'w1k, k ∈ K1, q'D2x2j + r2 ≤ c2'x2j, j ∈ J2, q'D2w2k ≤ c2'w2k, k ∈ K2.

18 Suppose have a b.f.s. to master problem with z and ( q, r1, r2 ).
We have q'b0 + r1 + r2 = z. If the optimal cost z1 of the first subproblem is finite, then minj∈J1 (c1'x1j − q'D1x1j) = z1 and mink∈K1 (c1'w1k − q'D1w1k) ≥ 0. Note that currently minj (c1' − q'D1)x1j < r1 (the reduced cost (c1' − q'D1)x1j − r1 < 0 for the entering variable). If we use z1 in place of r1, we get dual feasibility for the first two groups of dual constraints; similarly, use z2 in place of r2. The cost of this dual feasible solution is q'b0 + z1 + z2, hence z* ≥ q'b0 + z1 + z2 = q'b0 + r1 + r2 + (z1 − r1) + (z2 − r2) = z + (z1 − r1) + (z2 − r2). ∎
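A toy numeric check of the Theorem 6.1 bound (all numbers here are hypothetical, chosen only to illustrate the arithmetic):

```python
def dw_lower_bound(z, r, sub_z):
    """Lower bound from Thm 6.1: z + sum_i (z_i - r_i) <= z* <= z,
    where z is the current master cost, r_i the convexity duals,
    and z_i the finite subproblem optimal costs."""
    return z + sum(zi - ri for zi, ri in zip(sub_z, r))

# current master cost z = 10, duals (r1, r2) = (4, 3),
# finite subproblem optima (z1, z2) = (2, 1)
print(dw_lower_bound(10.0, [4.0, 3.0], [2.0, 1.0]))  # 10 + (2-4) + (1-3) = 6.0
```

The gap z − (lower bound) = Σi (ri − zi) shrinks as the subproblem optima approach the convexity duals, giving a natural termination test for the decomposition algorithm.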

