CS5321 Numerical Optimization


CS5321 Numerical Optimization
Chapter 17: Penalty and Augmented Lagrangian Methods

Outline

Both active-set methods and interior-point methods require a feasible initial point; penalty methods do not. This chapter covers:
- Quadratic penalty method
- Nonsmooth exact penalty method
- Augmented Lagrangian methods

1. Quadratic penalty function

For the equality-constrained problem

    \min_x f(x) \quad \text{s.t.} \quad c_i(x) = 0, \; i \in E,

the quadratic penalty function is

    Q(x; \mu) = f(x) + \frac{\mu}{2} \sum_{i \in E} c_i(x)^2,

where \mu > 0 is the penalty parameter. For the general problem with additional inequality constraints c_i(x) \ge 0, i \in I, the squared violations are penalized as well:

    Q(x; \mu) = f(x) + \frac{\mu}{2} \sum_{i \in E} c_i(x)^2 + \frac{\mu}{2} \sum_{i \in I} ([c_i(x)]^-)^2,

where [y]^- = \max\{0, -y\}.
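To make this concrete, here is a minimal Python sketch for a made-up toy problem (minimize x_1 + x_2 subject to x_1^2 + x_2^2 = 2, whose solution is x^* = (-1, -1)); the problem data are illustrative assumptions, not from the slides:

```python
import numpy as np

# Hypothetical toy problem:  min x1 + x2   s.t.  x1^2 + x2^2 - 2 = 0.
def f(x):
    return x[0] + x[1]

def grad_f(x):
    return np.array([1.0, 1.0])

def c(x):                      # equality constraint values c_i(x), i in E
    return np.array([x[0]**2 + x[1]**2 - 2.0])

def jac_c(x):                  # A(x): row i is the gradient of c_i
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

def Q(x, mu):                  # Q(x; mu) = f(x) + (mu/2) * sum_i c_i(x)^2
    return f(x) + 0.5 * mu * np.sum(c(x)**2)

def grad_Q(x, mu):             # grad Q = grad f + mu * A(x)^T c(x)
    return grad_f(x) + mu * jac_c(x).T @ c(x)
```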

Quadratic penalty method Given 0, {k|k0, k >0}, starting point x0 For k = 0,1,2,… Find a solution xk of Q (:, k) s.t. ||xQ(:, k)|| k. If converged, stop Choose k+1 > k and another starting xk. Theorem 17.1: If xk is a global exact solution to step 2(a), and k, xk converges to the global solution x* of the original problem. 11/28/2018

The Hessian matrix

Let A(x)^T = [\nabla c_i(x)]_{i \in E}. The Hessian of Q is

    \nabla_{xx}^2 Q(x; \mu_k) = \nabla^2 f(x) + \sum_{i \in E} \mu_k c_i(x) \nabla^2 c_i(x) + \mu_k A(x)^T A(x).

Step 2(a) needs to solve the Newton system \nabla_{xx}^2 Q(x; \mu_k) \, p = -\nabla_x Q(x; \mu_k). Since A^T A has rank only m (with m < n), the last term makes this system increasingly ill-conditioned as \mu_k grows. A remedy is to solve an equivalent, better-conditioned larger system:

    \begin{bmatrix} \nabla^2 f(x) + \sum_{i \in E} \mu_k c_i(x) \nabla^2 c_i(x) & A(x)^T \\ A(x) & -\frac{1}{\mu_k} I \end{bmatrix} \begin{bmatrix} p \\ \zeta \end{bmatrix} = \begin{bmatrix} -\nabla_x Q(x; \mu_k) \\ 0 \end{bmatrix},

where \zeta = \mu_k A(x) p.
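A numpy sketch of this larger system; it assumes the caller supplies B = \nabla^2 f + \sum_i \mu_k c_i \nabla^2 c_i, the constraint Jacobian A (m rows), and the gradient g = \nabla_x Q:

```python
import numpy as np

def penalty_newton_step(B, A, g, mu):
    """Solve the regularized system
        [ B     A^T        ] [ p    ]   [ -g ]
        [ A    -(1/mu) I_m ] [ zeta ] = [  0 ],
    which is equivalent to (B + mu * A^T A) p = -g but avoids forming
    the ill-conditioned term mu * A^T A explicitly."""
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, A.T],
                  [A, -(1.0 / mu) * np.eye(m)]])
    rhs = np.concatenate([-g, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]            # the Newton step p; sol[n:] equals zeta = mu * A @ p
```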

2. Nonsmooth exact penalty function

The \ell_1 exact penalty function is

    \phi_1(x; \mu) = f(x) + \mu \sum_{i \in E} |c_i(x)| + \mu \sum_{i \in I} [c_i(x)]^-,

where [y]^- = \max\{0, -y\}. \phi_1 itself is not differentiable, but the functions inside the absolute values and negative parts are. Approximating them by their linearizations (and adding a quadratic term \frac{1}{2} p^T W p) gives the model

    q(p; \mu) = f(x) + \nabla f(x)^T p + \frac{1}{2} p^T W p + \mu \sum_{i \in E} |c_i(x) + \nabla c_i(x)^T p| + \mu \sum_{i \in I} [c_i(x) + \nabla c_i(x)^T p]^-.
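A short numpy sketch evaluating \phi_1 and its model q; the callables for f, its gradient, and the constraints are assumed to be supplied by the caller:

```python
import numpy as np

def neg_part(y):               # [y]^- = max{0, -y}, applied elementwise
    return np.maximum(0.0, -y)

def phi1(x, mu, f, c_eq, c_ineq):
    """l1 exact penalty: f + mu * sum_E |c_i| + mu * sum_I [c_i]^-."""
    return f(x) + mu * np.sum(np.abs(c_eq(x))) + mu * np.sum(neg_part(c_ineq(x)))

def q_model(p, x, mu, f, grad_f, W, c_eq, jac_eq, c_ineq, jac_ineq):
    """Model of phi1 at x: linearized constraints plus a quadratic term in p."""
    lin_eq = c_eq(x) + jac_eq(x) @ p
    lin_ineq = c_ineq(x) + jac_ineq(x) @ p
    return (f(x) + grad_f(x) @ p + 0.5 * p @ W @ p
            + mu * np.sum(np.abs(lin_eq)) + mu * np.sum(neg_part(lin_ineq)))
```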

Smoothed objective function

Minimizing the model q(p; \mu) can be rewritten as a smooth quadratic program by introducing elastic variables r, s, t:

    \min_{p, r, s, t} f(x) + \nabla f(x)^T p + \frac{1}{2} p^T W p + \mu \sum_{i \in E} (r_i + s_i) + \mu \sum_{i \in I} t_i
    \text{s.t. } c_i(x) + \nabla c_i(x)^T p = r_i - s_i, \; i \in E,
    \qquad c_i(x) + \nabla c_i(x)^T p \ge -t_i, \; i \in I,
    \qquad r, s, t \ge 0.
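This smooth reformulation can be handed to any QP solver. The sketch below uses cvxpy, which is an assumed dependency here, with inputs g = \nabla f(x), a symmetric positive semidefinite W, and the linearized constraint data; the function and argument names are illustrative:

```python
import cvxpy as cp

def solve_elastic_qp(fx, g, W, ce, Ae, ci, Ai, mu):
    """min  fx + g^T p + (1/2) p^T W p + mu*sum(r+s) + mu*sum(t)
       s.t. ce + Ae p = r - s,   ci + Ai p >= -t,   r, s, t >= 0."""
    n, me, mi = len(g), len(ce), len(ci)
    p = cp.Variable(n)
    r = cp.Variable(me, nonneg=True)
    s = cp.Variable(me, nonneg=True)
    t = cp.Variable(mi, nonneg=True)
    objective = (fx + g @ p + 0.5 * cp.quad_form(p, W)   # W must be PSD
                 + mu * cp.sum(r + s) + mu * cp.sum(t))
    constraints = [ce + Ae @ p == r - s,
                   ci + Ai @ p >= -t]
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return p.value
```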

3. Augmented Lagrangian

For the equality-constrained problem \min_x f(x) s.t. c_i(x) = 0, i \in E, define the augmented Lagrangian function

    \mathcal{L}_A(x; \lambda, \mu) = f(x) - \sum_{i \in E} \lambda_i c_i(x) + \frac{\mu}{2} \sum_{i \in E} c_i(x)^2.

Theorem 17.5: If x^* is a solution of the original problem at which the constraint gradients \nabla c_i(x^*) are linearly independent and the second-order sufficient conditions are satisfied with multipliers \lambda^*, then there is a threshold \bar{\mu} such that for all \mu \ge \bar{\mu}, x^* is a local minimizer of \mathcal{L}_A(x; \lambda^*, \mu).
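For the toy problem from the first sketch, \mathcal{L}_A is a few lines of Python (a minimal sketch reusing f and c):

```python
def LA(x, lam, mu):
    """L_A(x; lam, mu) = f(x) - sum_i lam_i c_i(x) + (mu/2) sum_i c_i(x)^2."""
    cx = c(x)
    return f(x) - lam @ cx + 0.5 * mu * (cx @ cx)
```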

Lagrange multiplier updates

The gradient of \mathcal{L}_A(x; \lambda, \mu) is

    \nabla_x \mathcal{L}_A(x_k; \lambda^k, \mu_k) = \nabla f(x_k) - \sum_{i \in E} [\lambda_i^k - \mu_k c_i(x_k)] \nabla c_i(x_k).

Comparing this with the stationarity condition of the ordinary Lagrangian, \nabla f(x^*) = \sum_{i \in E} \lambda_i^* \nabla c_i(x^*), suggests the multiplier estimate \lambda_i^* \approx \lambda_i^k - \mu_k c_i(x_k), that is,

    c_i(x_k) \approx -\frac{1}{\mu_k} (\lambda_i^* - \lambda_i^k).

Thus, to satisfy c_i(x) = 0, either \mu_k \to \infty or \lambda_i^k \to \lambda_i^*. The penalty methods above rely only on \mu_k \to \infty to force c_i(x) \to 0; the augmented Lagrangian method instead updates the multiplier estimates as

    \lambda_i^{k+1} = \lambda_i^k - \mu_k c_i(x_k).
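A minimal sketch of the resulting method: for fixed (\lambda^k, \mu_k), minimize \mathcal{L}_A in x with scipy, then apply the first-order multiplier update above; it reuses LA and the toy-problem helpers from the earlier sketches:

```python
from scipy.optimize import minimize

def grad_LA(x, lam, mu):
    # grad_x L_A = grad f - A(x)^T (lam - mu * c(x)), matching the formula above
    return grad_f(x) - jac_c(x).T @ (lam - mu * c(x))

def augmented_lagrangian_method(x0, mu=10.0, tol=1e-8, max_outer=30):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(c(x)))
    for k in range(max_outer):
        res = minimize(LA, x, args=(lam, mu), jac=grad_LA, method='BFGS')
        x = res.x
        if np.linalg.norm(c(x)) < tol:
            break
        lam = lam - mu * c(x)   # first-order multiplier update
        # mu may stay fixed or grow moderately; unlike the pure penalty
        # method, convergence does not require mu -> infinity.
    return x, lam
```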

Inequality constraints

For inequality constraints, add slack variables:

    c_i(x) - s_i = 0, \; s_i \ge 0 \text{ for all } i \in I.

This yields the bound-constrained Lagrangian (BCL) formulation

    \min_x \mathcal{L}_A(x; \lambda, \mu) = f(x) - \sum_{i=1}^m \lambda_i c_i(x) + \frac{\mu}{2} \sum_{i=1}^m c_i(x)^2 \quad \text{s.t. } l \le x \le u.

How to solve this bound-constrained subproblem (with the gradient projection method) is discussed in Chapter 18.
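A minimal sketch of one BCL subproblem: the variables are z = (x, s), the only simple bounds here are s \ge 0 (general bounds l \le x \le u would be handled the same way), and scipy's L-BFGS-B stands in for the gradient projection method the slide refers to:

```python
import numpy as np
from scipy.optimize import minimize

def bcl_subproblem(x0, lam, mu, f, grad_f, c_ineq, jac_ineq):
    """Minimize L_A(x, s; lam, mu) over (x, s) subject to s >= 0,
    where the constraint residuals are c_i(x) - s_i, i in I."""
    n, m = len(x0), len(lam)

    def obj(z):
        x, s = z[:n], z[n:]
        r = c_ineq(x) - s                      # residuals c_i(x) - s_i
        val = f(x) - lam @ r + 0.5 * mu * (r @ r)
        gx = grad_f(x) + jac_ineq(x).T @ (mu * r - lam)
        gs = lam - mu * r                      # derivative with respect to s
        return val, np.concatenate([gx, gs])

    bounds = [(None, None)] * n + [(0.0, None)] * m
    z0 = np.concatenate([x0, np.maximum(c_ineq(np.asarray(x0)), 0.0)])
    res = minimize(obj, z0, jac=True, method='L-BFGS-B', bounds=bounds)
    return res.x[:n], res.x[n:]                # optimized x and slacks s
```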