
Optimality conditions for constrained local optima, Lagrange multipliers and their use for sensitivity of optimal solutions.



2 Optimality conditions for constrained local optima, Lagrange multipliers and their use for sensitivity of optimal solutions

3 Constrained optimization
[Figure: design space with axes x1 and x2, showing the inequality constraints g1(x) and g2(x), the infeasible regions, the feasible region, contours of decreasing f(x), and the optimum.]

4 Equality constraints
We will develop the optimality conditions for equality constraints and then generalize them to inequality constraints. Give an example of an engineering equality constraint.

5 Lagrangian and stationarity
Lagrangian function:
L(x, λ) = f(x) + Σ_j λ_j h_j(x)
where the λ_j are unknown Lagrange multipliers.
Stationary point conditions for equality constraints:
∂L/∂x_i = ∂f/∂x_i + Σ_j λ_j ∂h_j/∂x_i = 0
∂L/∂λ_j = h_j(x) = 0

6 Example
Quadratic objective and constraint:
minimize f(x) = x1² + 10 x2²   subject to   h(x) = p − x1² − x2² = 0
Lagrangian:
L(x, λ) = x1² + 10 x2² + λ(p − x1² − x2²)
Stationarity conditions:
∂L/∂x1 = 2 x1 (1 − λ) = 0
∂L/∂x2 = 2 x2 (10 − λ) = 0
∂L/∂λ = p − x1² − x2² = 0
Two stationary points (up to sign): x = (±√p, 0) with λ = 1 and f = p, and x = (0, ±√p) with λ = 10 and f = 10p.
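A small numerical cross-check of these stationarity conditions (an added sketch, not part of the original slides; it assumes the Optimization Toolbox's fsolve and the value p = 100 used later in the sensitivity example):

% Roots of the stationarity system of L = x1^2 + 10*x2^2 + lam*(p - x1^2 - x2^2),
% solved for z = [x1; x2; lam] with fsolve
p = 100;
statL = @(z) [2*z(1)*(1 - z(3));       % dL/dx1
              2*z(2)*(10 - z(3));      % dL/dx2
              p - z(1)^2 - z(2)^2];    % dL/dlam
zA = fsolve(statL, [9; 1; 0])          % starting guess biased toward the x1 axis
zB = fsolve(statL, [1; 9; 0])          % starting guess biased toward the x2 axis

The two starting guesses are intended to pick up the two families of stationary points, x = (±10, 0) with λ = 1 and x = (0, ±10) with λ = 10.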

7 Inequality constraints
Inequality constraints require transformation to equality constraints by adding slack variables t_j:
g_j(x) ≤ 0   →   g_j(x) + t_j² = 0
This yields the following Lagrangian:
L(x, λ, t) = f(x) + Σ_j λ_j [ g_j(x) + t_j² ]
Why is the slack variable squared?

8 Karush-Kuhn-Tucker conditions
Conditions for stationary points are then:
∂L/∂x_i = ∂f/∂x_i + Σ_j λ_j ∂g_j/∂x_i = 0
∂L/∂λ_j = g_j(x) + t_j² = 0
∂L/∂t_j = 2 λ_j t_j = 0
If an inequality constraint is inactive (t_j ≠ 0), then its Lagrange multiplier λ_j = 0.
For a minimum, the multipliers must be non-negative: λ_j ≥ 0.

9 Convex problems
A convex optimization problem has
– a convex objective function
– a convex feasible domain, which requires that all inequality constraint functions g_j are convex and all equality constraints are linear
– only one optimum
For convex problems the Karush-Kuhn-Tucker conditions are necessary and also sufficient for a global minimum.
Why do the equality constraints have to be linear?

10 Example extended to inequality constraints
Minimize the quadratic objective in a ring:
minimize f(x) = x1² + 10 x2²   subject to   r_i² ≤ x1² + x2² ≤ r_o²
Will use r_i = 10, r_o = 20.

11 Matlab's fmincon

function f=quad2(x)
f=x(1)^2+10*x(2)^2;

function [c,ceq]=ring(x)
global ri ro
c(1)=ri^2-x(1)^2-x(2)^2;
c(2)=x(1)^2+x(2)^2-ro^2;
ceq=[];

global ri ro                     % make ri, ro visible to ring() from the calling workspace
x0=[1,10]; ri=10.; ro=20;
[x,fval,exitflag,output,lambda]=fmincon(@quad2,x0,[],[],[],[],[],[],@ring)
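The same call can be written without global variables (an added sketch, not from the original slides): the radii are passed through an anonymous function, and ring2 is a hypothetical variant of ring that takes them as explicit arguments.

function [c,ceq]=ring2(x,ri,ro)        % ring constraints with explicit radii
c(1)=ri^2-x(1)^2-x(2)^2;               % stay outside the inner circle
c(2)=x(1)^2+x(2)^2-ro^2;               % stay inside the outer circle
ceq=[];

ri=10; ro=20; x0=[1,10];
nonlcon=@(x) ring2(x,ri,ro);           % anonymous function captures ri and ro
[x,fval,exitflag,output,lambda]=fmincon(@quad2,x0,[],[],[],[],[],[],nonlcon)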

12 Output message
Warning: The default trust-region-reflective algorithm does not solve problems with the constraints you have specified. FMINCON will use the active-set algorithm instead.
Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the default value of the function tolerance, and constraints are satisfied to within the default value of the constraint tolerance.
What assumption do you think Matlab makes in selecting the default value of the constraint tolerance?
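If the default tolerances are not appropriate, they can be set explicitly (an added sketch; optimset with TolCon/TolFun matches the Matlab release that prints the warning above, while newer releases use optimoptions instead):

% Choose the active-set algorithm up front and tighten the tolerances
opts = optimset('Algorithm','active-set','TolCon',1e-8,'TolFun',1e-8);
[x,fval,exitflag,output,lambda] = fmincon(@quad2,x0,[],[],[],[],[],[],@ring,opts)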

13 Solution
x = 10.0000   -0.0000
fval = 100.0000
lambda =
         lower: [2x1 double]
         upper: [2x1 double]
         eqlin: [0x1 double]
      eqnonlin: [0x1 double]
       ineqlin: [0x1 double]
    ineqnonlin: [2x1 double]
lambda.ineqnonlin' = 1.0000   0
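A quick check that this output satisfies the KKT conditions of slide 8 (an added sketch using hand-computed gradients of this particular problem, not part of the original slides):

% Verify stationarity and complementary slackness at the reported solution
ri=10; ro=20;
xs  = [10 0];                          % x returned by fmincon
lam = [1 0];                           % lambda.ineqnonlin
gradf  = [2*xs(1); 20*xs(2)];          % gradient of x1^2 + 10*x2^2
gradc1 = [-2*xs(1); -2*xs(2)];         % gradient of ri^2 - x1^2 - x2^2
gradc2 = [ 2*xs(1);  2*xs(2)];         % gradient of x1^2 + x2^2 - ro^2
stat = gradf + lam(1)*gradc1 + lam(2)*gradc2      % [0; 0] -> stationarity holds
comp = lam .* [ri^2-xs*xs', xs*xs'-ro^2]          % [0  0] -> complementary slackness holds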

14 Sensitivity of optimum solution to problem parameters
Assume the problem fitness and constraints depend on a parameter p:
minimize f(x, p)   subject to   g_j(x, p) ≤ 0
The solution is x*(p), and the corresponding fitness value is f*(p) = f(x*(p), p).

15 Sensitivity of optimum solution to problem parameters (contd.)
We would like to obtain the derivative of f* with respect to p. After manipulating the governing equations we obtain
df*/dp = ∂L/∂p = ∂f/∂p + Σ_j λ_j ∂g_j/∂p
Lagrange multipliers are called "shadow prices" because they provide the price of imposing the constraints.
Why do we have an ordinary derivative on the left side and partial derivatives on the right side?
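A finite-difference illustration of this result (an added sketch, not from the original slides): re-solve the problem of slide 11 with the inner radius squared perturbed from p to p + dp and compare the change in f* with the multiplier of the active constraint. It reuses the hypothetical ring2 variant sketched after slide 11.

% df*/dp should match the multiplier of the active inner constraint (about 1)
p = 100; dp = 1; ro = 20; x0 = [1,10];
[xa,fa,ea,oa,la] = fmincon(@quad2,x0,[],[],[],[],[],[],@(x) ring2(x,sqrt(p),ro));
[xb,fb]          = fmincon(@quad2,x0,[],[],[],[],[],[],@(x) ring2(x,sqrt(p+dp),ro));
dfdp_fd = (fb - fa)/dp                 % finite-difference estimate of df*/dp
la.ineqnonlin(1)                       % multiplier of the active inner constraint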

16 Example
A simpler version of the ring problem:
minimize f(x) = x1² + 10 x2²   subject to   g(x) = p − x1² − x2² ≤ 0
For p = 100 we found x* = (±10, 0), λ = 1, and f* = 100.
Here it is easy to see that the solution is f*(p) = p, so that df*/dp = 1, which agrees with df*/dp = ∂L/∂p = λ = 1.

