Computational Optimization


General Nonlinear Equality Constraints

General equality problem: minimize f(x) subject to h(x) = 0.

Sequential Quadratic Programming (SQP). Basic idea: QPs with constraints are easy; for any guess of the active constraints, we just have to solve a system of equations. So why not solve the general problem as a series of constrained QPs? Which QP should be used?

Use the KKT conditions via the Lagrangian. For min f(x) s.t. h(x) = 0, the Lagrangian is L(x, λ) = f(x) − λ'h(x), and the first-order KKT conditions are ∇f(x) − ∇h(x)λ = 0 together with h(x) = 0.

Solve using Newton's method. Applying Newton's method to the KKT system gives the Newton step (p, λ⁺) satisfying ∇²L(x, λ) p − ∇h(x) λ⁺ = −∇f(x) and ∇h(x)' p = −h(x), where p is the step in x, λ⁺ is the new multiplier estimate, and ∇²L is the Hessian of the Lagrangian with respect to x.

SQP: these equations are exactly the first-order KKT conditions of the QP subproblem min_p ∇f(x)'p + ½ p'∇²L(x, λ)p s.t. h(x) + ∇h(x)'p = 0. First algorithm: solve the QP subproblem for (p, λ⁺) and add p to the iterate. But, like Newton's method, it needs a line search to converge from remote starting points.
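To make one step concrete, here is a minimal NumPy sketch of a single Newton-KKT (SQP) step; the example problem min x1² + x2² s.t. x1 + x2 = 1 and all function names are my own illustrative assumptions, not from the slides.

```python
import numpy as np

# Illustrative problem (an assumption, not from the slides):
#   min x1^2 + x2^2   s.t.   h(x) = x1 + x2 - 1 = 0
f_grad = lambda x: 2.0 * x                        # gradient of f
L_hess = lambda x: 2.0 * np.eye(2)                # Hessian of the Lagrangian (h is linear)
h      = lambda x: np.array([x[0] + x[1] - 1.0])  # equality constraint
h_jac  = lambda x: np.array([[1.0, 1.0]])         # Jacobian of h, shape (1, 2)

def sqp_step(x, lam):
    """One Newton step on the KKT system of min f(x) s.t. h(x) = 0."""
    W, A = L_hess(x), h_jac(x)
    m = A.shape[0]
    # KKT system (convention L = f - lam'h):
    #   [ W   -A' ] [ p       ]   [ -grad f(x) ]
    #   [ A    0  ] [ lam_new ] = [ -h(x)      ]
    K = np.block([[W, -A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-f_grad(x), -h(x)])
    sol = np.linalg.solve(K, rhs)
    return x + sol[:2], sol[2:]

x, lam = sqp_step(np.array([2.0, -1.0]), np.zeros(1))
print(x, lam)  # one step reaches x* = (0.5, 0.5), lam* = 1, since this problem is itself a QP
```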

Usual tricks: use a line search, but with a merit function, to force progress toward feasibility; use modified Cholesky to ensure descent directions; use a quasi-Newton approximation of the Hessian of the Lagrangian. Linearizations of the inequality constraints g(x) can be added too.

General problem case, SQP: with inequality constraints g(x) ≤ 0 included, use the QP subproblem min_p ∇f(x)'p + ½ p'∇²L(x, λ)p s.t. h(x) + ∇h(x)'p = 0 and g(x) + ∇g(x)'p ≤ 0, together with a merit function, for example the l1 merit function φ(x; r) = f(x) + r Σ|h_i(x)| + r Σ max(0, g_i(x)), to measure progress toward both optimality and feasibility.
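A hedged sketch of a merit-function line search; the l1 merit and all parameter defaults are common choices I am assuming, not taken from the slides.

```python
import numpy as np

def l1_merit(x, f, h, r):
    """l1 merit function phi(x) = f(x) + r * sum_i |h_i(x)| (one common choice)."""
    return f(x) + r * np.sum(np.abs(h(x)))

def merit_line_search(x, p, f, h, r=10.0, beta=0.5, max_halvings=30):
    """Backtrack along the SQP direction p until the merit function decreases,
    so the accepted step improves optimality and feasibility together."""
    phi0 = l1_merit(x, f, h, r)
    alpha = 1.0
    for _ in range(max_halvings):
        if l1_merit(x + alpha * p, f, h, r) < phi0:
            return x + alpha * p
        alpha *= beta  # halve the step and try again
    return x  # no acceptable step found; caller should adjust r or the model
```

For the merit function to accept steps toward feasibility, r must exceed the magnitudes of the Lagrange multipliers.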

Reduced gradient: works similarly to SQP but maintains feasibility at every iterate. In some sense it moves in the SQP direction but then corrects back to feasibility by solving a system of equations; this keeps every iterate feasible but makes each iteration more expensive.

Penalty and Barrier Methods. Active set methods have an inherent combinatorial difficulty: which constraints are active? Moreover, nonlinear constraints can be tricky. The idea of penalty and barrier methods is to trade the combinatorial search for the active constraints for a more difficult objective function.

Barrier methods create a function that is infinite on the boundary of the feasible region and whose minimizer asymptotically approaches the minimizer of the original problem; for example, the log barrier B(x, μ) = f(x) − μ Σ log(−g_i(x)) for constraints g_i(x) ≤ 0, with μ → 0.

The most famous example of a barrier method is the interior-point method, for linear programming and many other problems.
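A minimal log-barrier sketch; the one-dimensional example problem and the μ schedule are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative problem (an assumption): min (x - 2)^2  s.t.  g(x) = x - 1 <= 0.
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0

def barrier_min(mu):
    """Minimize the barrier function f(x) - mu * log(-g(x)) over the interior g(x) < 0."""
    B = lambda x: f(x) - mu * np.log(-g(x))
    return minimize_scalar(B, bounds=(-10.0, 1.0 - 1e-9), method="bounded").x

for mu in [1.0, 0.1, 0.01, 0.001]:
    print(mu, barrier_min(mu))  # minimizers stay interior and approach x* = 1 as mu -> 0
```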

Exterior penalties. Idea: construct a function that penalizes infeasibility but imposes no penalty in the feasible region. Asymptotically, the minimizers of such penalty functions approach a solution of the original problem from the exterior of the feasible region.

Sample penalty function: for example, the quadratic penalty P(x, ρ) = f(x) + (ρ/2) Σ h_i(x)² + (ρ/2) Σ max(0, g_i(x))². The penalty term is positive at every infeasible point and zero at every feasible point.

Exterior penalties, pros: they handle nonlinear equality constraints, which have no interior; there is no need to maintain feasibility; and they avoid the highly nonlinear transformations otherwise needed even for simple constraints.

Example: form the transformed (penalized) problem and derive its first-order necessary conditions (FONC).

Exterior point algorithm: for an increasing sequence of penalty parameters ρ, solve the penalized problem min_x P(x, ρ). One can show that the algorithm converges asymptotically as ρ goes to infinity; however, it may require an infinite penalty, and a large ρ can make the problem ill-conditioned.
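A sketch of the exterior-point loop; the example problem and the ρ schedule are my own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem (an assumption): min x1^2 + x2^2  s.t.  h(x) = x1 + x2 - 1 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0

def penalized(x, rho):
    """Quadratic exterior penalty P(x, rho) = f(x) + (rho/2) * h(x)^2."""
    return f(x) + 0.5 * rho * h(x) ** 2

x = np.zeros(2)
for rho in [1.0, 10.0, 100.0, 1000.0]:              # drive rho upward
    x = minimize(lambda y: penalized(y, rho), x).x  # warm-start from the last solution
    print(rho, x, h(x))  # iterates approach x* = (0.5, 0.5) from outside the feasible set
```

Note how h(x) is still slightly negative at every ρ: the minimizers are infeasible and reach feasibility only in the limit, which is exactly the ill-conditioning trade-off described above.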

Exact penalty functions avoid the ill-conditioning caused by ρ going to infinity. For example, for the l1 penalty P(x, r) = f(x) + r Σ|h_i(x)| + r Σ max(0, g_i(x)), one can show that minimizing P(x, r) solves the problem exactly for sufficiently large, finite r.

In-class exercise. Consider the given problem; its exact penalty problem is min_x P(x, r). Plot f(x), P(x, 10), P(x, 100), and P(x, 1000); try using the ezplot command with hold on between plots. Compare these functions near x*.
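A hedged stand-in for this exercise, with matplotlib in place of MATLAB's ezplot; the placeholder problem min (x − 2)² s.t. x − 1 = 0 is my own assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder problem (an assumption): min (x - 2)^2  s.t.  x - 1 = 0, so x* = 1.
f = lambda x: (x - 2.0) ** 2
P = lambda x, r: f(x) + r * np.abs(x - 1.0)  # exact l1 penalty

x = np.linspace(0.0, 2.0, 400)
plt.plot(x, f(x), label="f(x)")
for r in [10, 100, 1000]:
    plt.plot(x, P(x, r), label=f"P(x,{r})")  # each P has its minimum exactly at x* = 1
plt.legend(); plt.ylim(0, 25); plt.show()
```

Near x*, every P(x, r) has a kink: the penalty is exact for finite r, but not differentiable there.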


Exact penalty. Pros: finite penalty parameter; the solution of the penalty problem solves the original problem; makes the problem more convex. Cons: the function is not differentiable, so nonsmooth optimization methods (e.g., subgradient methods) must be used.

NLP family of algorithms: a method is assembled by choosing one option from each category.
Basic method: sequential quadratic programming; sequential linear programming; augmented Lagrangian; projection or reduced gradient.
Directions: steepest descent; Newton; quasi-Newton; conjugate gradient.
Space: direct; null; range.
Constraints: active set; barrier / interior point; penalty.
Step size: line search; trust region.


Augmented Lagrangian. Consider min f(x) s.t. h(x) = 0. Start with the Lagrangian L(x, λ) = f(x) − λ'h(x) and add a penalty: L(x, λ, c) = f(x) − λ'h(x) + (c/2)||h(x)||². The penalty term helps ensure that the minimizer is feasible.

Lagrange multiplier estimate. From L(x, λ, c) = f(x) − λ'h(x) + (c/2)||h(x)||², the gradient in x is ∇f(x) − ∇h(x)(λ − c·h(x)). If we define λ̃ = λ − c·h(x), the stationarity condition reads ∇f(x) − ∇h(x)λ̃ = 0: λ̃ looks like the Lagrange multiplier!

In-class exercise. For the given problem, find x* and λ* satisfying the KKT conditions, and write down the augmented Lagrangian L(x, λ*, c). Plot f(x), L(x, λ*), L(x, λ*, 4), L(x, λ*, 16), and L(x, λ*, 40). Compare these functions near x*.
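A hedged stand-in for this exercise as well, reusing the placeholder problem min (x − 2)² s.t. x − 1 = 0, for which x* = 1 and λ* = −2 under the L = f − λ'h convention; the problem itself is my assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder problem (an assumption): min (x - 2)^2  s.t.  h(x) = x - 1 = 0.
# KKT: 2(x* - 2) - lam* = 0 at x* = 1  =>  lam* = -2.
f = lambda x: (x - 2.0) ** 2
lam_star = -2.0
L = lambda x, c: f(x) - lam_star * (x - 1.0) + 0.5 * c * (x - 1.0) ** 2

x = np.linspace(-1.0, 3.0, 400)
plt.plot(x, f(x), label="f(x)")
plt.plot(x, L(x, 0.0), label="L(x, lam*)")  # ordinary Lagrangian, stationary at x* = 1
for c in [4, 16, 40]:
    plt.plot(x, L(x, c), label=f"L(x, lam*, {c})")  # curvature at x* grows with c
plt.legend(); plt.show()
```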

The AL has nice properties. The penalty term can improve conditioning and convexity: if x* is a regular point and the second-order sufficiency conditions (SOSC) hold at x*, then the Hessian of the augmented Lagrangian is positive definite for c_k sufficiently large. It also automatically gives estimates of the Lagrange multipliers.

AL: method of multipliers. Given x_0, λ_0, c_0: for k = 1 to max iterations, stop if converged (in both x and λ); otherwise set x_{k+1} = argmin_x L(x, λ_k, c_k), update λ_{k+1} = λ_k − c_k h(x_{k+1}), and choose c_{k+1} ≥ c_k.
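A runnable sketch of this loop; the example problem, the tolerance, and the penalty-doubling rule are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem (an assumption): min x1^2 + x2^2  s.t.  h(x) = x1 + x2 - 1 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0

def aug_lag(x, lam, c):
    """L(x, lam, c) = f(x) - lam * h(x) + (c/2) * h(x)^2, the slides' sign convention."""
    return f(x) - lam * h(x) + 0.5 * c * h(x) ** 2

x, lam, c = np.zeros(2), 0.0, 1.0
for k in range(20):
    x = minimize(lambda y: aug_lag(y, lam, c), x).x  # x_{k+1} = argmin_x L(x, lam_k, c_k)
    if abs(h(x)) < 1e-10:                            # stop once (nearly) feasible
        break
    lam = lam - c * h(x)                             # multiplier update from the slides
    c *= 2.0                                         # c_{k+1} >= c_k
print(x, lam)  # converges to x* = (0.5, 0.5), lam* = 1
```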

Inequality problems. The method of multipliers can be extended to inequality constraints using a penalty parameter t; one standard form, for constraints g_i(x) ≤ 0 with nonnegative multipliers, is L(x, λ, t) = f(x) + (1/(2t)) Σ_i [max(0, λ_i + t·g_i(x))² − λ_i²]. If strict complementarity holds, this function is twice differentiable near the solution.

A KKT point of the augmented Lagrangian is a KKT point of the original problem, and under the convention above the estimate of the Lagrange multiplier is λ̃_i = max(0, λ_i + t·g_i(x)).
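That estimate as code; the max(0, ·) form is a standard convention for constraints g(x) ≤ 0, assumed here rather than taken from the slides.

```python
import numpy as np

def multiplier_estimate(lam, g_vals, t):
    """Clipped multiplier estimate for inequality constraints g_i(x) <= 0:
    lam_i <- max(0, lam_i + t * g_i(x)).  The clipping keeps the estimates
    nonnegative, as the KKT conditions require for inequality multipliers."""
    return np.maximum(0.0, lam + t * np.asarray(g_vals))

print(multiplier_estimate(np.array([0.5, 0.2]), [0.1, -0.5], t=2.0))  # -> [0.7, 0.0]
```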



Hybrid approaches: a method can be any combination of these building blocks. MINOS, for example: for linear programs it uses the simplex method; its generalization to nonlinear programs with linear constraints is the reduced gradient method; nonlinear constraints are handled via the augmented Lagrangian; and a BFGS estimate of the Hessian is used.