Linear Programming
Chapter 9. Interior Point Methods

Three major variants:
- Affine scaling algorithm: easy concept, good performance
- Potential reduction algorithm: polynomial time
- Path following algorithm: polynomial time, good performance, theoretically elegant

The primal path following algorithm

Primal: min c'x s.t. Ax = b, x ≥ 0.    Dual: max p'b s.t. p'A + s' = c', s ≥ 0.

The nonnegativity constraints make the problem difficult, so we put a barrier term in the objective and treat the problem as unconstrained within the affine space (Ax = b for the primal, p'A + s' = c' for the dual).

Barrier function: B_μ(x) = c'x − μ Σ_{j=1}^n log x_j,  μ > 0.
Note that B_μ(x) → +∞ if x_j → 0 for some j.

Solve: min B_μ(x) s.t. Ax = b.    (9.15)

B_μ(x) is strictly convex, hence it has a unique minimum point if a minimum exists.

Example) min x s.t. x ≥ 0. Here B_μ(x) = x − μ log x, and setting the derivative 1 − μ/x to 0 shows that the minimum is attained at x = μ.

[Figure: plot of B_μ(x) = x − μ log x against x, with its minimum at x = μ]
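A quick numerical check of this one-dimensional example (a minimal sketch; the grid and the value of μ are arbitrary illustration choices):

```python
import numpy as np

mu = 0.5
x = np.linspace(1e-4, 5, 100_000)   # stay away from x = 0, where the barrier blows up
B = x - mu * np.log(x)              # barrier function B_mu(x) = x - mu*log(x)

print(x[np.argmin(B)])              # numerical minimizer, approximately mu = 0.5
```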

min B_μ(x) = c'x − μ Σ_{j=1}^n log x_j  s.t. Ax = b

Let x(μ) be the optimal solution for a given μ > 0. As μ varies, the points x(μ) trace out the central path (hence the name path following). It can be shown that lim_{μ→0} x(μ) = x*, an optimal solution to the LP. When μ = ∞, x(μ) is called the analytic center.

For the dual problem, the barrier problem is

max p'b + μ Σ_{j=1}^n log s_j  s.t. p'A + s' = c'    (9.16)

(equivalent to min −p'b − μ Σ_{j=1}^n log s_j, i.e., minimizing a convex function).

[Figure 9.4: The central path and the analytic center, showing x(10), x(1), x(0.1), x(0.01) approaching x* along the central path, with the objective direction c]
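The central path can be traced numerically by solving the barrier problem for a decreasing sequence of μ values. A minimal sketch using scipy's general-purpose solver on a tiny made-up LP (in practice one would use the tailored Newton method developed below):

```python
import numpy as np
from scipy.optimize import minimize

# Tiny LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0.  Optimal solution: x* = (1, 0).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.array([0.5, 0.5])                                # interior feasible start
for mu in [10.0, 1.0, 0.1, 0.01, 0.001]:
    res = minimize(
        lambda x: c @ x - mu * np.sum(np.log(x)),       # barrier objective B_mu(x)
        x,
        constraints=[{"type": "eq", "fun": lambda x: A @ x - b}],
        bounds=[(1e-9, None)] * 2,                      # keep x strictly positive
        method="SLSQP",
    )
    x = res.x                                           # warm start for the next mu
    print(mu, x)                                        # x(mu) -> x* = (1, 0) as mu -> 0
```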

Results from nonlinear programming:

(NLP)  min f(x)  s.t.  g_i(x) ≤ 0, i = 1, …, m;  h_i(x) = 0, i = 1, …, p,

where f, g_i, h_i: R^n → R are all twice continuously differentiable (gradients are written as column vectors).

Thm (Karush 1939, Kuhn-Tucker 1951; first order necessary optimality conditions): If x* is a local minimum for (NLP) and some condition (called a constraint qualification) holds at x*, then there exist u ∈ R_+^m and v ∈ R^p such that
(1) ∇f(x*) + Σ_{i=1}^m u_i ∇g_i(x*) + Σ_{i=1}^p v_i ∇h_i(x*) = 0
(2) u ≥ 0, g_i(x*) ≤ 0 for i = 1, …, m, and Σ_{i=1}^m u_i g_i(x*) = 0
(3) h_i(x*) = 0, i = 1, …, p.

Remark:
- (2) is the complementary slackness (CS) condition; it implies that u_i = 0 for every non-active constraint g_i.
- (1) says that ∇f(x*) is a nonnegative linear combination of the −∇g_i(x*) for the active constraints, plus a linear combination of the ∇h_i(x*) (compare to the strong duality theorem on p. 173 and its figure).
- The CS conditions for LP are exactly the KKT conditions.
- The KKT conditions are necessary for optimality, but they are also sufficient in some situations. One such case is when the objective function is convex and the constraints are linear, which includes our barrier problem.
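To illustrate the LP case, a small sketch checking complementary slackness at an LP optimum. It assumes scipy ≥ 1.7, whose HiGHS-based linprog exposes equality-constraint duals as res.eqlin.marginals (sign convention assumed: marginals = sensitivity of the optimal value to b):

```python
import numpy as np
from scipy.optimize import linprog

# min c'x  s.t.  Ax = b, x >= 0
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 2, method="highs")
x = res.x
p = res.eqlin.marginals        # dual variables for Ax = b (assumed sign convention)
s = c - A.T @ p                # dual slacks (reduced costs); s >= 0 at optimality

print(x, p, s, s @ x)          # s'x = 0: complementary slackness holds
```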

Deriving the KKT conditions for the barrier problem: min B_μ(x) = c'x − μ Σ_{j=1}^n log x_j s.t. Ax = b (x > 0).

∇f(x) = c − μ X^{-1} e,  ∇h_i(x) = a_i

(a_i is the i-th row vector of A, expressed as a column vector; X^{-1} = diag(1/x_1, …, 1/x_n); e is the vector with 1 in every component). Note that h_i(x) = a_i'x − b_i, so ∇h_i(x) = a_i (h(x) = Ax − b: R^n → R^m).

Using a (Lagrangian) multiplier p_i for each ∇h_i(x), we get c − μ X^{-1} e = A'p (ignoring the sign of p). If we define s = μ X^{-1} e, the KKT conditions become

A'p + s = c,  Ax = b,  XSe = μe  (x > 0, s > 0),

where S = diag(s_1, …, s_n).

For the dual barrier problem, min −p'b − μ Σ_{j=1}^n log s_j s.t. p'A + s' = c' (s > 0).

(A_j is the j-th column vector of A and e_j is the j-th unit vector, so the constraints are h_j(p, s) = A_j'p + s_j − c_j = 0.) Using a (Lagrangian) multiplier −x_j for each ∇h_j(p, s), stationarity in p gives Σ_j x_j A_j = b, i.e., Ax = b, and stationarity in each s_j gives μ/s_j = x_j. Now Σ_j x_j e_j = Xe, hence we have the conditions

A'p + s = c,  Ax = b,  XSe = μe  (x > 0, s > 0),

which are the same conditions we obtained from the primal barrier function.

The conditions are given in the text as

Ax(μ) = b,  x(μ) ≥ 0
A'p(μ) + s(μ) = c,  s(μ) ≥ 0    (9.17)
X(μ)S(μ)e = μe,

where X(μ) = diag(x_1(μ), …, x_n(μ)) and S(μ) = diag(s_1(μ), …, s_n(μ)). Note that when μ = 0, these are precisely primal feasibility, dual feasibility, and the complementary slackness conditions. Note also that along the central path the duality gap is s(μ)'x(μ) = e'X(μ)S(μ)e = nμ.

Lemma 9.5: If x*, p*, and s* satisfy conditions (9.17), then they are optimal solutions to problems (9.15) and (9.16).

Pf) Let x*, p*, and s* satisfy (9.17), and let x be an arbitrary vector with x ≥ 0 and Ax = b. Then

B_μ(x) = c'x − μ Σ_{j=1}^n log x_j
      = c'x − (p*)'(Ax − b) − μ Σ_{j=1}^n log x_j
      = (s*)'x + (p*)'b − μ Σ_{j=1}^n log x_j
      ≥ nμ + (p*)'b − μ Σ_{j=1}^n log(μ / s_j*),

where the inequality holds because each term s_j* x_j − μ log x_j attains its minimum at x_j = μ / s_j*, with value μ − μ log(μ / s_j*). Equality holds iff x_j = μ / s_j* = x_j* for all j. Hence B_μ(x*) ≤ B_μ(x) for all feasible x; in particular, x* is the unique optimal solution and x* = x(μ). The argument for p* and s* in the dual barrier problem is similar. □

Primal path following algorithm

Starting from some μ_0 and primal and dual feasible x^0 > 0, s^0 > 0, p^0, solve the barrier problem iteratively while driving μ → 0.

To solve the barrier problem, we use a quadratic approximation (2nd order Taylor expansion) of the barrier function and take the minimizer of the approximation as the next iterate. The Taylor expansion is

B_μ(x + d) ≈ B_μ(x) + ∇B_μ(x)'d + (1/2) d'∇²B_μ(x)d
          = B_μ(x) + (c − μX^{-1}e)'d + (μ/2) d'X^{-2}d.

We also need to maintain A(x + d) = b, i.e., Ad = 0.
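A small numerical sanity check of this quadratic model (a sketch with made-up data; d is chosen small and in the null space of A = [1, 1]):

```python
import numpy as np

def B(x, c, mu):
    return c @ x - mu * np.sum(np.log(x))   # barrier function B_mu(x)

c = np.array([1.0, 2.0]); mu = 1.0
x = np.array([0.6, 0.4])
d = np.array([0.01, -0.01])                 # A d = 0 for A = [1, 1]

grad = c - mu / x                           # grad B_mu(x) = c - mu X^{-1} e
hess = mu * np.diag(1.0 / x**2)             # Hessian of B_mu(x) = mu X^{-2}
quad = B(x, c, mu) + grad @ d + 0.5 * d @ hess @ d
print(B(x + d, c, mu), quad)                # nearly equal for small d
```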

Using the KKT conditions, the solution to this problem is

d(μ) = (I − X²A'(AX²A')^{-1}A)(Xe − (1/μ)X²c)
p(μ) = (AX²A')^{-1}A(X²c − μXe).

The duality gap is c'x − p'b = (p'A + s')x − p'(Ax) = s'x. Hence we stop the algorithm once (s^k)'x^k < ε.

We still need a scheme to obtain an initial feasible solution.
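A minimal numpy sketch of these formulas (forming the dense matrices explicitly, which is fine for illustration but not for large sparse problems); the tiny data at the bottom is made up:

```python
import numpy as np

def primal_newton_step(A, c, x, mu):
    """Compute d(mu) and p(mu) from the closed-form expressions above."""
    X2 = np.diag(x ** 2)                                  # X^2 = diag(x_j^2)
    M = A @ X2 @ A.T                                      # A X^2 A'
    p = np.linalg.solve(M, A @ (X2 @ c - mu * x))         # p(mu); note Xe = x
    v = x - (1.0 / mu) * (X2 @ c)                         # Xe - (1/mu) X^2 c
    d = v - X2 @ A.T @ np.linalg.solve(M, A @ v)          # d(mu) = (I - X^2 A'(AX^2A')^{-1}A) v
    return d, p

A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, 2.0, 3.0])
x = np.array([0.3, 0.3, 0.4])                             # interior point with Ax = 1
d, p = primal_newton_step(A, c, x, mu=1.0)
print(A @ d)                                              # ~[0.]: the step stays in Ax = b
```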

The primal path following algorithm

1. (Initialization) Start with some primal and dual feasible x^0 > 0, s^0 > 0, p^0, and set k = 0.
2. (Optimality test) If (s^k)'x^k < ε, stop; else go to Step 3.
3. Let X_k = diag(x_1^k, …, x_n^k) and μ_{k+1} = αμ_k (0 < α < 1).
4. (Computation of directions) Solve the linear system μ_{k+1} X_k^{-2} d − A'p = μ_{k+1} X_k^{-1} e − c, Ad = 0, for p and d.
5. (Update of solutions) Let x^{k+1} = x^k + d, p^{k+1} = p, s^{k+1} = c − A'p.
6. Let k := k+1 and go to Step 2.
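Putting the steps together, a compact sketch of the loop, reusing the hypothetical primal_newton_step helper from the previous snippet (α, ε, and the damping caveat are illustration choices, not prescribed by the text):

```python
import numpy as np

def primal_path_following(A, b, c, x, mu=10.0, alpha=0.5, eps=1e-8, max_iter=200):
    """Sketch of the primal path following algorithm; x must satisfy Ax = b, x > 0."""
    for _ in range(max_iter):
        mu *= alpha                                 # Step 3: shrink the barrier parameter
        d, p = primal_newton_step(A, c, x, mu)      # Step 4: Newton step on the barrier problem
        x = x + d                                   # Step 5 (careful code damps d to keep x > 0)
        s = c - A.T @ p
        if s @ x < eps:                             # Step 2: duality gap test
            return x, p, s
    return x, p, s
```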

The primal-dual path following algorithm

Find Newton directions in both the primal and the dual space. Instead of minimizing a quadratic approximation of the barrier function, it solves the KKT system

Ax(μ) = b  (x(μ) ≥ 0)
A'p(μ) + s(μ) = c  (s(μ) ≥ 0)    (9.26)
X(μ)S(μ)e = μe.

This is a system of nonlinear equations because of the last block.

Let F: R^r → R^r. We want z* such that F(z*) = 0. We use the first order Taylor approximation around z^k,

F(z^k + d) ≈ F(z^k) + J(z^k)d,

where J(z^k) is the r × r Jacobian matrix whose (i, j)-th element is ∂F_i/∂z_j evaluated at z^k.

We try to find d that satisfies F(z^k) + J(z^k)d = 0; such a d is called a Newton direction. Here, with z = (x, p, s), F(z) is given by

F(z) = ( Ax − b,  A'p + s − c,  XSe − μe ).

Since the current iterate is primal and dual feasible, the first two residual blocks vanish, and the Newton system is equivalent to

A d_x^k = 0    (9.28)
A'd_p^k + d_s^k = 0    (9.29)
S_k d_x^k + X_k d_s^k = μ_k e − X_k S_k e    (9.30)
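The Newton system can be assembled and solved directly as one block linear system; a dense numpy illustration (production codes instead eliminate variables down to normal equations, as in the formulas on the next slide):

```python
import numpy as np

def newton_directions(A, b, c, x, p, s, mu):
    """Solve F(z^k) + J(z^k) d = 0 for z = (x, p, s) as one dense block system."""
    m, n = A.shape
    J = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],  # rows for Ax - b
        [np.zeros((n, n)), A.T,              np.eye(n)],         # rows for A'p + s - c
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],        # rows for XSe - mu*e
    ])
    F = np.concatenate([A @ x - b, A.T @ p + s - c, x * s - mu * np.ones(n)])
    d = np.linalg.solve(J, -F)
    return d[:n], d[n:n + m], d[n + m:]   # d_x, d_p, d_s
```

For feasible iterates the first two residual blocks of F are zero, and this system is exactly (9.28)-(9.30).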

The solution to the previous system is

d_x^k = D_k(I − P_k)v_k(μ_k)
d_p^k = −(A D_k² A')^{-1} A D_k v_k(μ_k)
d_s^k = D_k^{-1} P_k v_k(μ_k),

where D_k² = X_k S_k^{-1}, P_k = D_k A'(A D_k² A')^{-1} A D_k, and v_k(μ_k) = X_k^{-1} D_k(μ_k e − X_k S_k e).

We also limit the step length to ensure x^{k+1} > 0 and s^{k+1} > 0.
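A numpy sketch of these closed-form directions, with a check that they satisfy (9.28) and (9.29) on made-up data:

```python
import numpy as np

def primal_dual_directions(A, x, s, mu):
    """Newton directions via the closed-form expressions above."""
    D = np.diag(np.sqrt(x / s))                           # D_k, with D_k^2 = X_k S_k^{-1}
    M = A @ D @ D @ A.T                                   # A D^2 A'
    v = (mu - x * s) / np.sqrt(x * s)                     # v_k = X^{-1} D (mu e - XSe)
    P = D @ A.T @ np.linalg.solve(M, A @ D)               # projection matrix P_k
    dx = D @ (v - P @ v)                                  # d_x = D (I - P) v
    dp = -np.linalg.solve(M, A @ D @ v)                   # d_p
    ds = np.linalg.solve(D, P @ v)                        # d_s = D^{-1} P v
    return dx, dp, ds

A = np.array([[1.0, 1.0, 1.0]])
x = np.array([0.2, 0.3, 0.5]); s = np.array([1.0, 2.0, 3.0])
dx, dp, ds = primal_dual_directions(A, x, s, mu=0.1)
print(A @ dx, A.T @ dp + ds)    # both ~0, confirming (9.28) and (9.29)
```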

The primal-dual path following algorithm

1. (Initialization) Start with some feasible x^0 > 0, s^0 > 0, p^0, and set k = 0.
2. (Optimality test) If (s^k)'x^k < ε, stop; else go to Step 3.
3. (Computation of Newton directions) Let μ_k = (s^k)'x^k / n, X_k = diag(x_1^k, …, x_n^k), S_k = diag(s_1^k, …, s_n^k). Solve the linear system (9.28)-(9.30) for d_x^k, d_p^k, and d_s^k.
4. (Find step lengths) Let (0 < α < 1)

β_P^k = min{ 1, α min_{i: (d_x^k)_i < 0} ( −x_i^k / (d_x^k)_i ) }
β_D^k = min{ 1, α min_{i: (d_s^k)_i < 0} ( −s_i^k / (d_s^k)_i ) }

(continued)

5. (Solution update) Update the solution vectors according to x^{k+1} = x^k + β_P^k d_x^k, p^{k+1} = p^k + β_D^k d_p^k, s^{k+1} = s^k + β_D^k d_s^k.
6. Let k := k+1 and go to Step 2.
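A compact sketch of the whole loop, reusing the hypothetical primal_dual_directions helper above (α and ε are illustration choices):

```python
import numpy as np

def primal_dual_path_following(A, b, c, x, p, s, alpha=0.95, eps=1e-8, max_iter=100):
    """Sketch of the primal-dual path following algorithm for feasible x > 0, s > 0."""
    n = len(x)
    for _ in range(max_iter):
        if s @ x < eps:                                   # Step 2: duality gap small enough
            break
        mu = (s @ x) / n                                  # Step 3
        dx, dp, ds = primal_dual_directions(A, x, s, mu)
        # Step 4: ratio test keeps the new iterates strictly positive
        bP = min(1.0, alpha * min((-x[i] / dx[i] for i in range(n) if dx[i] < 0), default=np.inf))
        bD = min(1.0, alpha * min((-s[i] / ds[i] for i in range(n) if ds[i] < 0), default=np.inf))
        x, p, s = x + bP * dx, p + bD * dp, s + bD * ds   # Step 5
    return x, p, s
```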

Infeasible primal-dual path following methods

A variation of primal-dual path following. It starts from x^0 > 0, s^0 > 0, p^0 that need not be feasible for either the primal or the dual, i.e., Ax^0 ≠ b and/or A'p^0 + s^0 ≠ c. The iteration is the same as in primal-dual path following, except that feasibility is not maintained at each iteration. Excellent performance in practice.

Self-dual method

An alternative method to find an initial feasible solution without using big-M. Given an initial, possibly infeasible point (x^0, p^0, s^0) with x^0 > 0 and s^0 > 0, consider the problem

minimize ((x^0)'s^0 + 1)θ
subject to Ax − bτ + b̄θ = 0
           −A'p + cτ − c̄θ − s = 0    (9.33)
           b'p − c'x + z̄θ − κ = 0
           −b̄'p + c̄'x − z̄τ = −((x^0)'s^0 + 1)
           x ≥ 0, τ ≥ 0, s ≥ 0, κ ≥ 0,

where b̄ = b − Ax^0, c̄ = c − A'p^0 − s^0, z̄ = c'x^0 + 1 − b'p^0. This LP is self-dual. Note that (x, p, s, τ, θ, κ) = (x^0, p^0, s^0, 1, 1, 1) is a feasible interior solution to (9.33).
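A short numpy check that the stated point is feasible for (9.33); the residuals b̄, c̄, z̄ are built exactly as defined above, and all the data is made up:

```python
import numpy as np

A = np.array([[1.0, 2.0]]); b = np.array([3.0]); c = np.array([1.0, 1.0])
x0 = np.ones(2); s0 = np.ones(2); p0 = np.zeros(1)

b_bar = b - A @ x0                    # primal infeasibility residual
c_bar = c - A.T @ p0 - s0             # dual infeasibility residual
z_bar = c @ x0 + 1.0 - b @ p0         # gap residual

tau = theta = kappa = 1.0             # the canonical interior point sets tau = theta = kappa = 1
print(A @ x0 - b * tau + b_bar * theta)                        # ~0
print(-A.T @ p0 + c * tau - c_bar * theta - s0)                # ~0
print(b @ p0 - c @ x0 + z_bar * theta - kappa)                 # ~0
print(-b_bar @ p0 + c_bar @ x0 - z_bar * tau + (x0 @ s0 + 1))  # ~0
```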

Since both the primal and the dual are feasible, they have optimal solutions, and the optimal value is 0.

The primal-dual path following method finds an optimal solution (x*, p*, s*, τ*, θ*, κ*) that satisfies

θ* = 0, x* + s* > 0, τ* + κ* > 0, (s*)'x* = 0, τ*κ* = 0

(i.e., it satisfies strict complementarity).

Depending on the values of τ* and κ*, we can either recover an optimal solution or determine unboundedness (see Thm 9.8).

Running time: worst case O(√n log(ε^0/ε)) iterations; observed in practice O(log n log(ε^0/ε)).