Optimality conditions for constrained local optima, Lagrange multipliers and their use for sensitivity of optimal solutions.



Constrained optimization
[Figure: contours of decreasing f(x) in the (x1, x2) plane; inequality constraints g1(x) and g2(x) bound the feasible region, the remaining area is infeasible, and the optimum is marked.]

Equality constraints
We will develop the optimality conditions for equality constraints and then generalize them to inequality constraints.
Give an example of an engineering equality constraint.

Lagrangian and stationarity
Lagrangian function:
  L(x, λ) = f(x) + Σ_j λ_j h_j(x)
where the λ_j are unknown Lagrange multipliers.
Stationary point conditions for equality constraints:
  ∂L/∂x_i = ∂f/∂x_i + Σ_j λ_j ∂h_j/∂x_i = 0
  ∂L/∂λ_j = h_j(x) = 0

Example
Quadratic objective and constraint:
  minimize f = x1² + 10x2²  subject to  h = x1² + x2² − 100 = 0
Lagrangian:
  L = x1² + 10x2² + λ(x1² + x2² − 100)
Stationarity conditions:
  2x1(1 + λ) = 0,  2x2(10 + λ) = 0,  x1² + x2² = 100
Two stationary points (up to sign): x = (±10, 0) with λ = −1 (the minimum, f = 100) and x = (0, ±10) with λ = −10 (the maximum, f = 1000).
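The stationarity system can also be solved symbolically; the sketch below assumes MATLAB's Symbolic Math Toolbox and the example as reconstructed above.
  % Solve the stationarity conditions of the Lagrangian symbolically
  % (a sketch, assuming the Symbolic Math Toolbox and the example above).
  syms x1 x2 lam
  L = x1^2 + 10*x2^2 + lam*(x1^2 + x2^2 - 100);   % Lagrangian
  S = solve(gradient(L, [x1, x2, lam]) == 0, [x1, x2, lam]);
  [S.x1, S.x2, S.lam]   % stationary points (+/-10, 0, -1) and (0, +/-10, -10)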

Inequality constraints
Inequality constraints g_j(x) ≤ 0 require transformation to equality constraints:
  g_j(x) + t_j² = 0
where t_j is a slack variable. This yields the following Lagrangian:
  L(x, λ, t) = f(x) + Σ_j λ_j [g_j(x) + t_j²]
Why is the slack variable squared?

Karush-Kuhn-Tucker conditions
Conditions for stationary points are then:
  ∂L/∂x_i = ∂f/∂x_i + Σ_j λ_j ∂g_j/∂x_i = 0
  ∂L/∂λ_j = g_j(x) + t_j² = 0
  ∂L/∂t_j = 2λ_j t_j = 0
If an inequality constraint is inactive (t_j ≠ 0), then its Lagrange multiplier λ_j = 0.
For a minimum, the multipliers must be non-negative: λ_j ≥ 0.

Convex problems
A convex optimization problem has:
– a convex objective function
– a convex feasible domain, which is guaranteed if all inequality constraints g_j are convex functions and all equality constraints are linear
– only one optimum
For convex problems the Karush-Kuhn-Tucker conditions are not only necessary but also sufficient for a global minimum.
Why do the equality constraints have to be linear?

Example extended to inequality constraints
Minimize the quadratic objective in a ring:
  minimize f = x1² + 10x2²  subject to  ri² ≤ x1² + x2² ≤ ro²
Will use Matlab's fmincon.

Matlab’s fmincon
  function f=quad2(x)
  f=x(1)^2+10*x(2)^2;
  function [c,ceq]=ring(x)
  global ri ro
  c(1)=ri^2-x(1)^2-x(2)^2;
  c(2)=x(1)^2+x(2)^2-ro^2;
  ceq=[];
  global ri ro          % also needed in the calling workspace for the globals to be shared
  x0=[1,10]; ri=10.; ro=20;
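The slide does not show the call that produces the output on the next slide; the lambda structure there implies fmincon was called with five output arguments. A minimal driver sketch (an assumption, not from the slides):
  % Assumed driver call; quad2 and ring as defined above.
  [x,fval,exitflag,output,lambda] = ...
      fmincon(@quad2,x0,[],[],[],[],[],[],@ring);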

Output message
Warning: The default trust-region-reflective algorithm does not solve problems with the constraints you have specified. FMINCON will use the active-set algorithm instead.
Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the default value of the function tolerance, and constraints are satisfied to within the default value of the constraint tolerance.
What assumption do you think Matlab makes in selecting the default value of the constraint tolerance?

Solution
  x = [10.0000, 0.0000]
  fval = 100.0000
  lambda =
    lower: [2x1 double]
    upper: [2x1 double]
    eqlin: [0x1 double]
    eqnonlin: [0x1 double]
    ineqlin: [0x1 double]
    ineqnonlin: [2x1 double]
  lambda.ineqnonlin’ = [1.0000, 0]
Only the inner-ring constraint is active at the optimum, so its multiplier is positive and the outer one is zero.
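As a quick sanity check (a sketch, not from the slides), the KKT stationarity condition ∇f + Σ_j λ_j ∇g_j = 0 can be verified numerically at the reported optimum:
  % Verify KKT stationarity at x* = (10, 0): grad f + lam1*grad g1 = 0.
  % g2 (the outer ring) is inactive there, so its multiplier is zero.
  xs = [10, 0];
  gradf  = [2*xs(1), 20*xs(2)];    % gradient of x1^2 + 10*x2^2
  gradg1 = [-2*xs(1), -2*xs(2)];   % gradient of ri^2 - x1^2 - x2^2
  disp(gradf + 1.0*gradg1)         % multiplier 1 from lambda.ineqnonlin -> [0 0]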

Sensitivity of optimum solution to problem parameters
Assume the problem fitness and constraints depend on a parameter p:
  minimize f(x, p)  subject to  h_j(x, p) = 0
The solution is x*(p), and the corresponding fitness value is f*(p) = f(x*(p), p).

Sensitivity of optimum solution to problem parameters (contd.)
We would like to obtain the derivative of f* with respect to p. After manipulating the governing equations we obtain
  df*/dp = ∂L/∂p = ∂f/∂p + Σ_j λ_j ∂h_j/∂p
The Lagrange multipliers are called “shadow prices” because they provide the price of imposing the constraints.
Why do we have an ordinary derivative on the left side and partial derivatives on the right side?

Example
A simpler version of the ring problem:
  minimize f = x1² + 10x2²  subject to  h = x1² + x2² − p = 0
For p = 100 we found x* = (±10, 0) and λ = −1.
Here it is easy to see that the solution is f*(p) = p, which agrees with df*/dp = ∂L/∂p = −λ = 1.
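The shadow-price interpretation can be checked numerically by re-solving the equality-constrained problem for a perturbed p and differencing (a sketch; the deal-based anonymous constraint is an assumption, not from the slides):
  % Finite-difference check that df*/dp = -lambda = 1.
  f   = @(x) x(1)^2 + 10*x(2)^2;
  con = @(x, p) deal([], x(1)^2 + x(2)^2 - p);   % c = [], ceq = 0
  dp  = 1e-3;
  [~, f0] = fmincon(f, [1, 10], [],[],[],[],[],[], @(x) con(x, 100));
  [~, f1] = fmincon(f, [1, 10], [],[],[],[],[],[], @(x) con(x, 100 + dp));
  (f1 - f0) / dp    % approximately 1, matching -lambda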