Inexact SQP Methods for Equality Constrained Optimization
Frank Edward Curtis, Department of IE/MS, Northwestern University
with Richard Byrd and Jorge Nocedal
November 6, 2006, INFORMS Annual Meeting 2006

Outline
Introduction
 Problem formulation
 Motivation for inexactness
 Unconstrained optimization and nonlinear equations
Algorithm Development
 Step computation
 Step acceptance
Global Analysis
 Merit function and sufficient decrease
 Satisfying first-order conditions
Conclusions/Final remarks

Equality constrained optimization
Goal: solve the problem
Define: the Lagrangian
Define: the derivatives
Goal: solve the KKT conditions
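The formulas on this slide were images and did not survive transcription; in standard notation, consistent with the rest of the talk, they read:

    \min_x f(x) \quad \text{subject to} \quad c(x) = 0
    \mathcal{L}(x,\lambda) = f(x) + \lambda^T c(x)
    g(x) = \nabla f(x), \quad A(x) = \nabla c(x)^T \ \text{(constraint Jacobian)}, \quad W(x,\lambda) = \nabla_{xx}^2 \mathcal{L}(x,\lambda)
    \text{KKT conditions:} \quad g(x) + A(x)^T \lambda = 0, \quad c(x) = 0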

Equality constrained optimization
Two “equivalent” step computation techniques:
Algorithm: Newton’s method
Algorithm: the SQP subproblem

Equality constrained optimization
Two “equivalent” step computation techniques:
Algorithm: Newton’s method
Algorithm: the SQP subproblem
KKT matrix: cannot be formed; cannot be factored

Equality constrained optimization
Two “equivalent” step computation techniques:
Algorithm: Newton’s method
Algorithm: the SQP subproblem
KKT matrix: cannot be formed; cannot be factored
Linear system solve: iterative method, hence inexactness
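In the notation above, the two “equivalent” techniques (reconstructed here; the slide’s formulas were images) are the Newton step on the KKT conditions,

    \begin{bmatrix} W_k & A_k^T \\ A_k & 0 \end{bmatrix} \begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = - \begin{bmatrix} g_k + A_k^T \lambda_k \\ c_k \end{bmatrix},

and the SQP subproblem

    \min_d \; g_k^T d + \tfrac{1}{2} d^T W_k d \quad \text{s.t.} \quad c_k + A_k d = 0,

whose optimality conditions are exactly the Newton system. For large problems this KKT matrix cannot be formed or factored, so the system is solved iteratively and therefore only inexactly.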

Unconstrained optimization
Goal: minimize a nonlinear objective
Algorithm: Newton’s method with CG
Note: choosing any intermediate CG iterate as the step ensures global convergence to a local solution of the NLP (Steihaug, 1983)
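A sketch of the setting (standard Newton-CG, not verbatim from the slide): the Newton system

    \nabla^2 f(x_k)\, d = -\nabla f(x_k)

is solved by the conjugate gradient method, and any intermediate CG iterate d^{(j)} (with d^{(0)} = 0) is a descent direction for f at x_k, which is what makes truncating the inner iteration safe.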

Nonlinear equations
Goal: solve a nonlinear system
Algorithm: Newton’s method
Note: choosing any step with a sufficiently small linear residual and a sufficient decrease in the nonlinear residual ensures global convergence (Dembo, Eisenstat, and Steihaug, 1982; Eisenstat and Walker, 1994)
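The conditions lost from this slide are, in all likelihood, the classical inexact Newton conditions: accept any step d_k with

    \|F(x_k) + F'(x_k) d_k\| \le \eta_k \|F(x_k)\|, \qquad \eta_k \le \eta < 1

(Dembo, Eisenstat, and Steihaug, 1982), and, for the globalized version,

    \|F(x_k + d_k)\| \le \left(1 - t(1 - \eta_k)\right) \|F(x_k)\|, \qquad t \in (0,1)

(Eisenstat and Walker, 1994), which bound the linear-system residual and enforce actual decrease in the nonlinear residual.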

Outline
Introduction/Motivation
 Unconstrained optimization
 Nonlinear equations
 Constrained optimization
Algorithm Development
 Step computation
 Step acceptance
Global Analysis
 Merit function and sufficient decrease
 Satisfying first-order conditions
Conclusions/Final remarks

Equality constrained optimization
Two “equivalent” step computation techniques:
Algorithm: Newton’s method
Algorithm: the SQP subproblem
Question: can we ensure convergence to a local solution by choosing any step into the ball (i.e., any inexact step whose residual keeps it within a ball around the exact step)?

Globalization strategy
Step computation: inexact SQP step
Globalization strategy: exact merit function … with Armijo line search condition
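The merit function and line search condition were images on the slide; the standard forms assumed here are

    \phi(x;\pi) = f(x) + \pi \|c(x)\|

with penalty parameter \pi > 0, together with the Armijo condition

    \phi(x_k + \alpha_k d_k;\pi) \le \phi(x_k;\pi) + \eta\,\alpha_k\, D\phi(d_k;\pi), \qquad \eta \in (0,1),

where D\phi(d_k;\pi) is a (negative) measure of the expected decrease along d_k.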

First attempt
Proposition: a sufficiently small residual
Test: 61 problems from the CUTEr test set

    Residual tolerance   1e-8   1e-7   1e-6   1e-5   1e-4   1e-3   1e-2   1e-1
    Success              100%   100%   100%    97%    90%    85%    72%    38%
    Failure                0%     0%     0%     3%    10%    15%    28%    62%

First attempt… not robust
Proposition: a sufficiently small residual is not enough for complete robustness
 We have multiple goals (feasibility and optimality)
 Lagrange multipliers may be completely off

Second attempt
Step computation: inexact SQP step
Recall the line search condition
We can show …

Second attempt
Step computation: inexact SQP step
Recall the line search condition
We can show … but how negative should this be?
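What “we can show” is, in standard analyses of this merit function, the directional-derivative bound

    D\phi(d_k;\pi) \le g_k^T d_k + \pi \left( \|c_k + A_k d_k\| - \|c_k\| \right).

For an exact SQP step the right-hand side reduces to g_k^T d_k - \pi\|c_k\|; for an inexact step with residual r_k = c_k + A_k d_k it may fail to be sufficiently negative, which is exactly the question the next slides address.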

Quadratic/linear model of merit function
Create model
Quantify reduction obtained from step
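The model and its reduction were images on the slide; a standard reconstruction in the notation above is

    m(d;\pi) = f_k + g_k^T d + \tfrac{1}{2} d^T W_k d + \pi \|c_k + A_k d\|,
    \Delta m(d;\pi) = m(0;\pi) - m(d;\pi) = -g_k^T d - \tfrac{1}{2} d^T W_k d + \pi \left( \|c_k\| - \|c_k + A_k d\| \right).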

Exact case

Exact step minimizes the objective on the linearized constraints

Exact case
Exact step minimizes the objective on the linearized constraints… which may lead to an increase in the objective (but that’s OK)
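Why the increase is acceptable (a standard penalty-parameter argument, not verbatim from the talk): for the exact step, \|c_k + A_k d_k\| = 0, so

    \Delta m(d_k;\pi) = -g_k^T d_k - \tfrac{1}{2} d_k^T W_k d_k + \pi \|c_k\|,

and choosing \pi \ge (g_k^T d_k + \tfrac{1}{2} d_k^T W_k d_k) / ((1-\sigma)\|c_k\|) for some \sigma \in (0,1) guarantees \Delta m(d_k;\pi) \ge \sigma \pi \|c_k\| > 0 even when the objective model increases.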

Inexact case

Option #1: current penalty parameter

Step is acceptable if the model reduction condition holds for the current penalty parameter

Option #2: new penalty parameter

Step is acceptable if the model reduction condition holds for a new penalty parameter

Option #2: new penalty parameter
Step is acceptable if the model reduction condition holds for the new (increased) penalty parameter
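The acceptance conditions on these option slides were images; one classical form of such a test (an assumption here, not necessarily the exact condition used in this work) requires, for some \sigma \in (0,1),

    \Delta m(d_k;\pi) \ge \sigma\, \pi\, \|c_k\|,

checked with the current penalty parameter under Option #1, and with a new, increased penalty parameter (chosen so the inequality holds) under Option #2.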

Algorithm outline
for k = 0, 1, 2, …
 Iteratively solve the primal-dual (KKT) system
 Until one termination test or the other is satisfied
 Update penalty parameter
 Perform backtracking line search
 Update iterate
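A minimal sketch of this loop in code, using the model and merit function reconstructed above. Every name and parameter value here is illustrative (assumed, not from the talk), and the dense linear solve stands in for the truncated iterative solver that is the point of the method:

    import numpy as np

    def inexact_sqp(f, grad_f, c, jac_c, hess_lag, x, lam,
                    sigma=0.1, eta=1e-4, tau=0.5, tol=1e-6, max_iter=100):
        # Illustrative sketch of an inexact SQP line-search loop.
        pi = 1.0  # penalty parameter
        for _ in range(max_iter):
            g, ck = grad_f(x), c(x)
            A, W = jac_c(x), hess_lag(x, lam)
            # Terminate when the KKT residual is small
            if max(np.linalg.norm(g + A.T @ lam), np.linalg.norm(ck)) <= tol:
                break
            # Primal-dual system [W A^T; A 0][d; dlam] = -[g + A^T lam; c];
            # a dense solve replaces the truncated iterative solver for brevity.
            m = A.shape[0]
            K = np.block([[W, A.T], [A, np.zeros((m, m))]])
            rhs = -np.concatenate([g + A.T @ lam, ck])
            sol = np.linalg.solve(K, rhs)
            d, dlam = sol[:x.size], sol[x.size:]
            # Increase pi until the model reduction is sufficiently large
            nc = np.linalg.norm(ck)
            def dm(p):
                return -g @ d - 0.5 * d @ (W @ d) + p * (nc - np.linalg.norm(ck + A @ d))
            while nc > 0 and dm(pi) < sigma * pi * nc:
                pi *= 10.0
            # Backtracking Armijo search on the merit function phi = f + pi*||c||
            phi = lambda z: f(z) + pi * np.linalg.norm(c(z))
            alpha, phi0, red = 1.0, phi(x), dm(pi)
            while phi(x + alpha * d) > phi0 - eta * alpha * red:
                alpha *= tau
            x, lam = x + alpha * d, lam + alpha * dlam
        return x, lam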

Termination test
Observe KKT conditions
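In the spirit of the inexact Newton condition above, a natural termination test for the inner iteration (a reconstruction; the slide’s formulas were images) stops once the residual of the primal-dual system is small relative to the current KKT residual:

    \left\| \begin{bmatrix} g_k + A_k^T (\lambda_k + \delta) + W_k d \\ c_k + A_k d \end{bmatrix} \right\| \le \kappa \left\| \begin{bmatrix} g_k + A_k^T \lambda_k \\ c_k \end{bmatrix} \right\|, \qquad \kappa \in (0,1).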

Outline
Introduction/Motivation
 Unconstrained optimization
 Nonlinear equations
 Constrained optimization
Algorithm Development
 Step computation
 Step acceptance
Global Analysis
 Merit function and sufficient decrease
 Satisfying first-order conditions
Conclusions/Final remarks

Assumptions
The sequence of iterates is contained in a convex set over which the following hold:
 the objective function is bounded below
 the objective and constraint functions and their first and second derivatives are uniformly bounded in norm
 the constraint Jacobian has full row rank and its smallest singular value is bounded below by a positive constant
 the Hessian of the Lagrangian is positive definite with smallest eigenvalue bounded below by a positive constant

Sufficient reduction to sufficient decrease
Taylor expansion of merit function yields …
Accepted step satisfies …
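A standard reconstruction of the two lost displays: the Taylor expansion gives

    \phi(x_k + \alpha d_k;\pi) - \phi(x_k;\pi) \le -\alpha\, \Delta m(d_k;\pi) + O(\alpha^2 \|d_k\|^2),

so the backtracking line search terminates with \alpha_k bounded away from zero, and the accepted step satisfies

    \phi(x_{k+1};\pi) \le \phi(x_k;\pi) - \eta\, \alpha_k\, \Delta m(d_k;\pi).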

Intermediate results
[the steplength] is bounded below by a positive constant
[the penalty parameter] is bounded above

Sufficient decrease in merit function
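The decrease result this slide’s lost display expresses is, in standard form: since the merit function is bounded below under the assumptions, summing the Armijo inequality over k gives

    \sum_k \alpha_k\, \Delta m(d_k;\pi) < \infty,

and with the steplengths bounded below, \Delta m(d_k;\pi) \to 0.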

Step in dual space
(for sufficiently small … and …)
Therefore, we converge to an optimal primal solution and, via the step in the dual space, to an optimal dual solution.
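In standard form, this is the first-order convergence guarantee that “satisfying first-order conditions” in the outline refers to (reconstructed):

    \lim_{k \to \infty} \left\| \begin{bmatrix} g_k + A_k^T \lambda_k \\ c_k \end{bmatrix} \right\| = 0.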

Outline
Introduction/Motivation
 Unconstrained optimization
 Nonlinear equations
 Constrained optimization
Algorithm Development
 Step computation
 Step acceptance
Global Analysis
 Merit function and sufficient decrease
 Satisfying first-order conditions
Conclusions/Final remarks

Conclusion/Final remarks
Review
 Defined a globally convergent inexact SQP algorithm
 Requires only inexact solutions of the KKT system
 Requires only matrix-vector products involving objective and constraint function derivatives
 Results also apply when only the reduced Hessian of the Lagrangian is assumed to be positive definite
Future challenges
 Implementation and appropriate parameter values
 Nearly singular constraint Jacobians
 Inexact derivative information
 Negative curvature
 etc., etc., etc.…