Optimization with Constraints

Optimization with Constraints
Standard formulation. Why are constraints a problem? They create narrow canyons in design space, and equality constraints are particularly bad.
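One common statement of the standard formulation referred to here (the sign convention is an assumption, chosen to match the c(x) <= 0 and ceq(x) = 0 convention that fmincon uses later in these slides):

\[
\begin{aligned}
\min_{x}\; & f(x) \\
\text{such that}\; & g_j(x) \le 0, \quad j = 1,\dots,n_g, \\
& h_k(x) = 0, \quad k = 1,\dots,n_e, \\
& x_L \le x \le x_U .
\end{aligned}
\]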

Derivative-based optimizers
All are predicated on the assumption that function evaluations are expensive and that gradients can be calculated. The situation is similar to a person placed at night on a hill and directed to find the lowest point of an adjacent valley using a flashlight with a limited battery. Basic strategy: flash the light to get the derivative and select a direction; go straight in that direction until you start going up or hit a constraint; repeat until converged. Pure steepest descent does not work well; for unconstrained problems use quasi-Newton methods. A minimal sketch of the basic loop follows.
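As an illustration only, here is a minimal MATLAB sketch of that basic strategy; the handles fun and gradfun, the iteration limit, and the tolerances are assumptions for the example, not part of the original slides.

function x = descent_sketch(fun, gradfun, x0)
% Minimal "flash light, walk downhill, repeat" loop: steepest descent
% with a crude backtracking step length. Not a robust optimizer.
x = x0;
for k = 1:100
    d = -gradfun(x);                  % flash the light: evaluate the gradient
    if norm(d) < 1e-6, break; end     % gradient essentially zero: stop
    alpha = 1;                        % walk straight in that direction ...
    while fun(x + alpha*d) >= fun(x)  % ... backtrack if the step goes uphill
        alpha = alpha/2;
        if alpha < 1e-12, return; end
    end
    x = x + alpha*d;                  % accept the step and repeat
end
end

% Example use on the quadratic f = x1^2 + 10*x2^2:
% x = descent_sketch(@(x) x(1)^2 + 10*x(2)^2, @(x) [2*x(1); 20*x(2)], [3; 1]);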

Gradient projection and reduced gradient methods
Find a good direction tangent to the active constraints, move some distance along it, and then restore the design to the constraint boundaries. How do we decide when to stop and restore?
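For reference, one common way of forming the tangent direction (a textbook statement, not necessarily the exact expression on the slide): with N the matrix whose columns are the gradients of the active constraints, the projected steepest-descent direction is

\[
d = -\left( I - N \left( N^T N \right)^{-1} N^T \right) \nabla f(x),
\]

i.e. the negative gradient with its component normal to the active constraints removed.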

The feasible directions method
Choose a search direction that compromises between avoiding the active constraints and reducing the objective; a typical direction-finding subproblem is sketched below.
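One common statement of the direction-finding subproblem, with push-off factors theta_j; this particular form is an assumption based on standard treatments, not necessarily the one on the slide:

\[
\begin{aligned}
\max_{d,\,\beta}\; & \beta \\
\text{such that}\; & \nabla f^T d + \beta \le 0, \\
& \nabla g_j^T d + \theta_j \beta \le 0 \quad \text{for active } g_j, \\
& -1 \le d_i \le 1 .
\end{aligned}
\]

Larger push-off factors push the direction away from the constraint boundaries, while theta_j = 0 merely keeps the direction from immediately violating them.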

Penalty function methods
The quadratic penalty function replaces the constrained problem by unconstrained minimizations. A gradual rise of the penalty parameter leads to a sequence of such minimizations, the sequential unconstrained minimization technique (SUMT). Why is the gradual rise important?
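A common form of the quadratic penalty for the formulation above (the bracket notation is an assumption; the slide's own definition is not in the transcript):

\[
\phi(x, r) = f(x) + r \left[ \sum_k h_k(x)^2 + \sum_j \max\bigl(0,\, g_j(x)\bigr)^2 \right],
\]

minimized for an increasing sequence r_1 < r_2 < ... As r grows, the unconstrained minima approach the constrained optimum, but the contours become increasingly ill-conditioned, as the next slides illustrate.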

Example 5.7.1
The problem plotted below, as read off from the penalty expression in the MATLAB commands, is the minimization of f = x1^2 + 10*x2^2 subject to the equality constraint x1 + x2 = 4.

Contours for r = 1
>> r = 1;
>> x = linspace(1,4,40);
>> y = linspace(0.1,0.5,41);
>> [X,Y] = meshgrid(x,y);
>> Z = X.^2 + 10*Y.^2 + r*(4-X-Y).^2;   % penalized objective for Example 5.7.1
>> cs = contour(X,Y,Z);
>> clabel(cs);

Contours for r = 10 (same commands with r = 10).

Contours for r = 100.

Contours for r = 1000. For non-derivative methods the ill-conditioning can be avoided by making the penalty proportional to the absolute value of the violation instead of its square; the absolute-value penalty is exact for a finite penalty parameter, but it is not differentiable, which is why it suits non-derivative methods.

Multiplier methods
By adding Lagrange multiplier terms to the penalty function, multiplier (augmented Lagrangian) methods avoid the ill-conditioning associated with a very high penalty parameter.
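For equality constraints, a standard form of the augmented function (a textbook statement; the sign and scaling conventions of the multiplier and penalty terms vary between references) is

\[
\mathcal{L}_A(x, \lambda, r) = f(x) + \sum_k \lambda_k h_k(x) + r \sum_k h_k(x)^2 ,
\]

which attains the constrained optimum for a finite value of r once the multipliers are correct.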

Estimating the Lagrange multipliers
Compare the stationarity condition of the augmented function with the stationarity condition of the problem without the penalty term; matching the two suggests an update formula for the multipliers, which is then iterated. See the example in the textbook; a reconstruction of the argument is sketched below.
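A sketch of the standard argument, reconstructed in the usual notation since the slide's own equations are not in the transcript: stationarity of the augmented function at the unconstrained minimizer x*_i of iteration i gives

\[
\nabla f + \sum_k \bigl( \lambda_k^{(i)} + 2 r\, h_k(x^*_i) \bigr) \nabla h_k = 0 ,
\]

while the optimality condition without the penalty is \(\nabla f + \sum_k \lambda_k^* \nabla h_k = 0\). Comparing the two suggests the iteration

\[
\lambda_k^{(i+1)} = \lambda_k^{(i)} + 2 r\, h_k(x^*_i).
\]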

5.9: Projected Lagrangian methods
Sequential quadratic programming (SQP): find the search direction by solving a quadratic programming subproblem, then find the step length alpha by a one-dimensional minimization along that direction. A standard form of the subproblem is given below.
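A standard statement of the direction-finding QP (a reconstruction in the usual notation, since the slide's equations are not in the transcript): with B an approximation of the Hessian of the Lagrangian,

\[
\begin{aligned}
\min_{d}\; & \nabla f(x)^T d + \tfrac{1}{2}\, d^T B\, d \\
\text{such that}\; & g_j(x) + \nabla g_j(x)^T d \le 0, \\
& h_k(x) + \nabla h_k(x)^T d = 0 ,
\end{aligned}
\]

after which the step length alpha is typically chosen by minimizing a merit function (for example an exact penalty function) along x + alpha*d.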

Matlab function fmincon
FMINCON attempts to solve problems of the form:
min F(X) subject to: A*X <= B, Aeq*X = Beq (linear constraints)
C(X) <= 0, Ceq(X) = 0 (nonlinear constraints)
LB <= X <= UB (bounds)
X = FMINCON(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON) subjects the minimization to the constraints defined in NONLCON. The function NONLCON accepts X and returns the vectors C and Ceq, representing the nonlinear inequalities and equalities respectively. (Set LB=[] and/or UB=[] if no bounds exist.)
[X,FVAL] = FMINCON(FUN,X0,...) returns the value of the objective function FUN at the solution X.
What does the order of arguments in fmincon tell you about Matlab's expectations regarding the most common problems to be solved?

Useful additional information
[X,FVAL,EXITFLAG] = fmincon(FUN,X0,...) returns an EXITFLAG that describes the exit condition of fmincon. Possible values of EXITFLAG:
1 First-order optimality conditions satisfied.
0 Too many function evaluations or iterations.
-1 Stopped by output/plot function.
-2 No feasible point found.
[X,FVAL,EXITFLAG,OUTPUT,LAMBDA] = fmincon(FUN,X0,...) returns the Lagrange multipliers at the solution X: LAMBDA.lower for LB, LAMBDA.upper for UB, LAMBDA.ineqlin for the linear inequalities, LAMBDA.eqlin for the linear equalities, LAMBDA.ineqnonlin for the nonlinear inequalities, and LAMBDA.eqnonlin for the nonlinear equalities.
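A short usage sketch tying these outputs together, using the quad2/ring example defined on the next slide (the comments describe typical behavior, not output from an actual run):

>> [x,fval,exitflag,output,lambda] = fmincon(@quad2,x0,[],[],[],[],[],[],@ring);
>> exitflag            % 1 means first-order optimality conditions were satisfied
>> output.message      % text explaining why the run stopped
>> lambda.ineqnonlin   % multipliers of the two nonlinear ring constraints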

Ring Example
Quadratic objective and ring (annulus) constraint:

function f = quad2(x)
global a
f = x(1)^2 + a*x(2)^2;

function [c,ceq] = ring(x)
global ri ro
c(1) = ri^2 - x(1)^2 - x(2)^2;   % keep the point outside the inner radius
c(2) = x(1)^2 + x(2)^2 - ro^2;   % keep the point inside the outer radius
ceq = [];

>> global a ri ro                % shared parameters must also be declared global here
>> x0 = [1, 10]; a = 10; ri = 10.; ro = 20;
>> [x,fval] = fmincon(@quad2,x0,[],[],[],[],[],[],@ring)
x = 10.0000   -0.0000
fval = 100.0000
The optimum sits on the inner circle at (ri, 0), where f = ri^2 = 100, consistent with the reported fval.

Making it harder for fmincon
>> [x,fval] = fmincon(@quad2,x0,[],[],[],[],[],[],@ring)
Warning: Trust-region-reflective method does not currently solve this type of problem, using active-set (line search) instead.
> In fmincon at 437
Maximum number of function evaluations exceeded; increase OPTIONS.MaxFunEvals.
x = 4.6355    8.9430
fval = 109.4628

Restart sometimes helps
>> x0 = x
x0 = 4.6355    8.9430
>> [x,fval] = fmincon(@quad2,x0,[],[],[],[],[],[],@ring)
x = 10.0000    0.0000
fval = 100.0000
Why is restarting from the last solution often more effective than allowing a large number of iterations in the original search?