Inexact SQP Methods for Equality Constrained Optimization
Frank Edward Curtis, Department of IE/MS, Northwestern University
with Richard Byrd and Jorge Nocedal
INFORMS Annual Meeting, November 6, 2006
Outline
- Introduction
  - Problem formulation
  - Motivation for inexactness
  - Unconstrained optimization and nonlinear equations
- Algorithm Development
  - Step computation
  - Step acceptance
- Global Analysis
  - Merit function and sufficient decrease
  - Satisfying first-order conditions
- Conclusions/Final remarks
Equality constrained optimization
- Goal: solve the problem
- Define: the Lagrangian
- Define: the derivatives
- Goal: satisfy the KKT conditions
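In the standard notation for this problem class, these objects read:

```latex
\min_{x \in \mathbb{R}^n} \ f(x) \quad \text{subject to} \quad c(x) = 0,
\qquad
\mathcal{L}(x,\lambda) = f(x) + \lambda^T c(x),
```

with gradient $g(x) = \nabla f(x)$, constraint Jacobian $A(x) = \nabla c(x)^T$, and Lagrangian Hessian $W(x,\lambda) = \nabla^2_{xx}\mathcal{L}(x,\lambda)$. The KKT conditions are

```latex
\nabla \mathcal{L}(x,\lambda) =
\begin{bmatrix} g(x) + A(x)^T \lambda \\ c(x) \end{bmatrix} = 0 .
```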
Equality constrained optimization
Algorithm: Newton's method / Algorithm: the SQP subproblem — two "equivalent" step computation techniques.
For large problems the KKT matrix cannot be formed or factored, so the linear system must be solved with an iterative method — which introduces inexactness.
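The two "equivalent" techniques, written out in standard notation (g_k, A_k, W_k, c_k evaluated at iterate k): a Newton step on the KKT conditions solves the primal-dual system

```latex
\begin{bmatrix} W_k & A_k^T \\ A_k & 0 \end{bmatrix}
\begin{bmatrix} d_k \\ \delta_k \end{bmatrix}
= - \begin{bmatrix} g_k + A_k^T \lambda_k \\ c_k \end{bmatrix},
```

while the SQP subproblem

```latex
\min_{d} \ g_k^T d + \tfrac{1}{2} d^T W_k d
\quad \text{subject to} \quad A_k d + c_k = 0
```

has that same linear system as its first-order optimality conditions (with δ_k the change in the multipliers).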
Unconstrained optimization
Goal: minimize a nonlinear objective. Algorithm: Newton's method with the conjugate gradient method (Newton-CG).
Note: choosing any intermediate CG step ensures global convergence to a local solution of the NLP (Steihaug, 1983).
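The truncated-CG idea can be sketched in a few lines of numpy. This is an illustrative routine, not code from the talk; the function name and parameter values are ours. It approximately minimizes the quadratic model inside a trust region and stops early on negative curvature or at the boundary — any such intermediate iterate still decreases the model:

```python
import numpy as np

def steihaug_cg(B, g, delta, tol=1e-8, max_iter=50):
    """Truncated CG (Steihaug, 1983): approximately minimize the model
    m(d) = g.T d + 0.5 d.T B d subject to ||d|| <= delta.  Stopping early
    (boundary hit, negative curvature, or small residual) is allowed."""
    d = np.zeros_like(g)
    r = g.copy()              # residual of B d = -g at d = 0
    p = -r                    # first direction: steepest descent
    if np.linalg.norm(r) < tol:
        return d
    for _ in range(max_iter):
        Bp = B @ p
        curv = p @ Bp
        if curv <= 0:         # negative curvature: go to the boundary
            return _to_boundary(d, p, delta)
        alpha = (r @ r) / curv
        d_next = d + alpha * p
        if np.linalg.norm(d_next) >= delta:
            return _to_boundary(d, p, delta)
        r_next = r + alpha * Bp
        if np.linalg.norm(r_next) < tol:
            return d_next
        beta = (r_next @ r_next) / (r @ r)
        p = -r_next + beta * p
        d, r = d_next, r_next
    return d

def _to_boundary(d, p, delta):
    # solve ||d + tau p|| = delta for tau >= 0
    a, b, c = p @ p, 2 * (d @ p), d @ d - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return d + tau * p
```

With a large trust region this reduces to exact Newton-CG; with a small one it returns a truncated step on the boundary that still gives model decrease.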
Nonlinear equations
Goal: solve a nonlinear system. Algorithm: inexact Newton's method.
Note: choosing any step whose linear-system residual is sufficiently small relative to the nonlinear residual ensures global convergence (Dembo, Eisenstat, and Steihaug, 1982; Eisenstat and Walker, 1994).
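An illustrative inexact Newton iteration under the Dembo–Eisenstat–Steihaug forcing condition ||F(x) + F'(x)d|| <= eta ||F(x)||, sketched with numpy. The inner solver choice here (CG on the normal equations) and all parameter values are our own stand-ins:

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.5, tol=1e-10, max_iter=50):
    """Inexact Newton: at each iterate, find a step d satisfying the
    forcing condition ||F(x) + J(x) d|| <= eta * ||F(x)|| instead of
    solving J d = -F exactly.  Inner solver: CG on the normal equations,
    stopped as soon as the forcing condition holds."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        nF = np.linalg.norm(Fx)
        if nF < tol:
            return x
        Jx = J(x)
        d = np.zeros_like(x)
        r = Fx.copy()                 # r = F + J d
        s = Jx.T @ r                  # gradient of 0.5 ||F + J d||^2
        p = -s
        for _ in range(10 * x.size):
            if np.linalg.norm(r) <= eta * nF:
                break                 # forcing condition satisfied
            Jp = Jx @ p
            denom = Jp @ Jp
            if denom == 0.0:
                break
            alpha = (s @ s) / denom
            d += alpha * p
            r += alpha * Jp
            s_new = Jx.T @ r
            beta = (s_new @ s_new) / (s @ s)
            p = -s_new + beta * p
            s = s_new
        x = x + d
    return x
```

Any eta in [0, 1) gives local convergence; smaller forcing terms recover Newton-like speed at higher per-iteration cost.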
Outline
- Introduction/Motivation
  - Unconstrained optimization
  - Nonlinear equations
  - Constrained optimization
- Algorithm Development
  - Step computation
  - Step acceptance
- Global Analysis
  - Merit function and sufficient decrease
  - Satisfying first-order conditions
- Conclusions/Final remarks
Equality constrained optimization
Algorithm: Newton's method / Algorithm: the SQP subproblem — two "equivalent" step computation techniques.
Question: can we ensure convergence to a local solution by choosing any step within the ball (i.e., any sufficiently accurate inexact step)?
Globalization strategy: an exact merit function … with an Armijo line search condition.
Step computation: an inexact SQP step.
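A standard choice of exact merit function and Armijo condition for this setting (an illustrative reconstruction; the constants are generic):

```latex
\phi_\pi(x) = f(x) + \pi \, \|c(x)\|,
\qquad
\phi_\pi(x_k + \alpha_k d_k) \ \le\ \phi_\pi(x_k) + \eta \, \alpha_k \, D\phi_\pi(x_k; d_k),
```

where π > 0 is the penalty parameter, η ∈ (0, 1), and Dφ_π(x_k; d_k) denotes the directional derivative of the merit function along the step.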
First attempt
Proposition: require only a sufficiently small residual.
Test: 61 problems from the CUTEr test set, with residual tolerances of 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, and 1e-1.
Success rates (tightest to loosest tolerance): 100%, 97%, 90%, 85%, 72%, 38%
Failure rates: 0%, 3%, 10%, 15%, 28%, 62%
First attempt … not robust
Proposition: a sufficiently small residual is not enough for complete robustness.
We have multiple goals (feasibility and optimality), and the Lagrange multipliers may be completely off.
Second attempt
Recall the line search condition. Step computation: an inexact SQP step.
We can show … but how negative should this be?
Quadratic/linear model of the merit function
- Create the model
- Quantify the reduction obtained from a step
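In one common notation (g_k, W_k, A_k, c_k for the objective gradient, Lagrangian Hessian, constraint Jacobian, and constraints at iterate k, and π the penalty parameter), such a model and the reduction it attributes to a step d can be written:

```latex
m_\pi(d) = f_k + g_k^T d + \tfrac{1}{2}\, d^T W_k d + \pi \, \|c_k + A_k d\|,
```

```latex
\Delta m_\pi(d) = m_\pi(0) - m_\pi(d)
= - g_k^T d - \tfrac{1}{2}\, d^T W_k d + \pi \left( \|c_k\| - \|c_k + A_k d\| \right).
```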
Exact case
The exact step minimizes the objective on the linearized constraints … which may lead to an increase in the objective (but that's OK).
Inexact case
Option #1: current penalty parameter
The step is acceptable if it yields a sufficient reduction in the model for the current penalty parameter.
Option #2: new penalty parameter
The step is acceptable if it yields a sufficient model reduction for some new (increased) penalty parameter.
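One representative form of such an acceptance test (the constants here are illustrative, not necessarily those on the slide): with model reduction Δm_π(d) = -g_k^T d - ½ d^T W_k d + π(||c_k|| - ||c_k + A_k d||), accept the inexact step d_k if, for some τ ∈ (0, 1),

```latex
\Delta m_\pi(d_k) \ \ge\ \tau \, \pi \left( \|c_k\| - \|c_k + A_k d_k\| \right),
```

where π is either the current penalty parameter (Option #1) or a new value large enough that the inequality holds (Option #2), e.g.

```latex
\pi \ \ge\ \frac{g_k^T d_k + \tfrac{1}{2}\, d_k^T W_k d_k}{(1 - \tau)\left( \|c_k\| - \|c_k + A_k d_k\| \right)}
```

whenever the denominator is positive.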
Algorithm outline
for k = 0, 1, 2, …
- Iteratively solve the KKT system until one of two termination tests is satisfied
- Update the penalty parameter
- Perform a backtracking line search
- Update the iterate
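The outline might be sketched as follows. This is a simplified illustration, not the authors' algorithm: the inner rule (tighten the inexact solve until the model reduction is positive), the penalty update, the inner solver (CG on the normal equations), and all constants are our own stand-ins, and a smooth problem with positive definite Lagrangian Hessian is assumed:

```python
import numpy as np

def inexact_sqp(f, grad_f, c, jac_c, hess_L, x0, lam0,
                kappa=1e-2, tau=0.5, eta=1e-4, tol=1e-8, max_iter=100):
    """Sketch of an inexact SQP loop: solve each primal-dual (KKT) system
    only approximately, tightening the inner tolerance until the step gives
    positive reduction in a model of the l1 merit function
    phi(x) = f(x) + pi*||c(x)||_1 (raising pi when necessary), then run a
    backtracking Armijo line search on the merit function itself."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    n, pi = x.size, 1.0
    for _ in range(max_iter):
        g, cx, A = grad_f(x), c(x), jac_c(x)
        rho = np.concatenate([g + A.T @ lam, cx])     # KKT residual
        nr = np.linalg.norm(rho)
        if nr < tol:
            break
        W, m = hess_L(x, lam), cx.size
        K = np.block([[W, A.T], [A, np.zeros((m, m))]])
        kap = kappa
        while True:                                   # inexact KKT solve
            z = _cgls(K, -rho, kap * nr)
            d, dlam = z[:n], z[n:]
            quad = g @ d + 0.5 * d @ (W @ d)          # objective model change
            lin = np.sum(np.abs(cx)) - np.sum(np.abs(cx + A @ d))
            if lin > 0 and quad > tau * pi * lin:
                pi = quad / (tau * lin) + 1.0         # raise penalty parameter
            dm = -quad + pi * lin                     # model reduction
            if dm > 0 or kap < 1e-10:
                break
            kap *= 0.1                # unproductive step: tighten the solve
        phi = lambda y: f(y) + pi * np.sum(np.abs(c(y)))
        alpha, phi0 = 1.0, phi(x)
        for _ in range(30):                           # backtracking Armijo
            if phi(x + alpha * d) <= phi0 - eta * alpha * dm:
                break
            alpha *= 0.5
        x, lam = x + alpha * d, lam + alpha * dlam
    return x, lam

def _cgls(A, b, tol_abs, max_inner=500):
    """Minimize ||A z - b|| by CG on the normal equations, stopping once
    the residual norm drops below tol_abs (an inexact linear solve)."""
    z = np.zeros_like(b)
    r = b.copy()
    s = A.T @ r
    p = s.copy()
    for _ in range(max_inner):
        if np.linalg.norm(r) <= tol_abs:
            break
        Ap = A @ p
        pp, ss = Ap @ Ap, s @ s
        if pp == 0.0 or ss == 0.0:
            break
        alpha = ss / pp
        z += alpha * p
        r -= alpha * Ap
        s = A.T @ r
        p = s + (s @ s / ss) * p
    return z
```

On a small convex test problem (quadratic objective, linear constraint) this drives the KKT residual to zero in a handful of outer iterations even though no inner solve is exact.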
Termination test
Observe the KKT conditions.
Assumptions
The sequence of iterates is contained in a convex set over which the following hold:
- the objective function is bounded below
- the objective and constraint functions and their first and second derivatives are uniformly bounded in norm
- the constraint Jacobian has full row rank and its smallest singular value is bounded below by a positive constant
- the Hessian of the Lagrangian is positive definite with smallest eigenvalue bounded below by a positive constant
Sufficient reduction to sufficient decrease
A Taylor expansion of the merit function yields … The accepted step satisfies …
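In symbols, the two displayed facts plausibly read as follows (an illustrative reconstruction, with Δm_π the model reduction from the earlier slides and γ > 0 a constant): the Taylor expansion gives

```latex
\phi_\pi(x_k + \alpha d_k) - \phi_\pi(x_k) \ \le\ - \alpha \, \Delta m_\pi(d_k) + \gamma \, \alpha^2 \|d_k\|^2,
```

so backtracking terminates with a steplength α_k bounded away from zero, and the accepted step satisfies

```latex
\phi_\pi(x_{k+1}) \ \le\ \phi_\pi(x_k) - \eta \, \alpha_k \, \Delta m_\pi(d_k).
```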
Intermediate results
- … is bounded below by a positive constant
- … is bounded above
Sufficient decrease in merit function
Step in dual space
Therefore (for sufficiently small … and …), we converge to an optimal primal solution, and …
Conclusion/Final remarks
Review:
- Defined a globally convergent inexact SQP algorithm
- Requires only inexact solutions of the KKT system
- Requires only matrix-vector products involving objective and constraint function derivatives
- Results also apply when only the reduced Hessian of the Lagrangian is assumed to be positive definite
Future challenges:
- Implementation and appropriate parameter values
- Nearly singular constraint Jacobian
- Inexact derivative information
- Negative curvature
- etc.