Presentation on theme: "Engineering Optimization"— Presentation transcript:
1 Engineering Optimization: Concepts and Applications
The pictures show convex polyhedra (found at …); these resemble the feasible design spaces of linear programming problems in three-dimensional design problems.
Fred van Keulen, Matthijs Langelaar, CLA H21.1
3 Inequality constrained problems
Consider a problem with only inequality constraints:

    min f(x)  subject to  gj(x) ≤ 0,  j = 1, …, m

At the optimum, only the active constraints matter; the optimality conditions are similar to the equality constrained case.
(Figure: contours of f with constraints g1, g2, g3 in the (x1, x2)-plane.)
4 Inequality constraints
First order optimality: consider a feasible local variation δx around the optimum. For every active constraint, feasibility of the perturbation requires

    ∇gjᵀ δx ≤ 0   (feasible perturbation)

and since the optimum lies on the boundary, f cannot decrease for any such variation:

    ∇fᵀ δx ≥ 0   (boundary optimum)
5 Optimality condition
Multipliers must be non-negative:

    −∇f = Σj μj ∇gj,  μj ≥ 0   (active constraints j)

Interpretation: the negative gradient (a descent direction) lies in the cone spanned by the constraint gradients with positive multipliers. This interpretation is given in Haftka.
6 Optimality condition (2)
Equivalent interpretation: no descent direction exists within the cone of feasible directions. For a direction s:

    feasible direction:  ∇gjᵀ s ≤ 0   (active constraints j)
    descent direction:   ∇fᵀ s < 0

At the optimum, no direction satisfies both. This interpretation is given in Belegundu.
7 Karush-Kuhn-Tucker conditions
First order optimality conditions for the constrained problem, with Lagrangian L = f + Σi λi hi + Σj μj gj:

    ∇f + Σi λi ∇hi + Σj μj ∇gj = 0
    hi = 0,  gj ≤ 0
    μj ≥ 0,  μj gj = 0

Note: these conditions apply only at regular points, i.e. where the constraint gradients are not linearly dependent.
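As a minimal sketch, the four KKT conditions can be checked numerically at a candidate optimum. The example problem below (minimize x1² + x2² subject to 1 − x1 − x2 ≤ 0) is a hypothetical illustration, not one from the slides:

```python
import numpy as np

# Hypothetical example: minimize f(x) = x1^2 + x2^2
# subject to g(x) = 1 - x1 - x2 <= 0.
# The unconstrained minimum (0, 0) is infeasible, so the constraint is active.

def grad_f(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def g(x):
    return 1.0 - x[0] - x[1]

def grad_g(x):
    return np.array([-1.0, -1.0])

x_star = np.array([0.5, 0.5])   # candidate optimum on the boundary
mu = 1.0                        # candidate multiplier

# KKT: stationarity, primal feasibility, dual feasibility, complementarity.
stationarity = grad_f(x_star) + mu * grad_g(x_star)
print(np.allclose(stationarity, 0.0))  # True
print(g(x_star) <= 1e-12)              # primal feasible (active): True
print(mu >= 0.0)                       # dual feasible: True
print(abs(mu * g(x_star)) < 1e-12)     # complementarity: True
```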
8 Sufficiency
KKT conditions are necessary conditions for local constrained minima. For sufficiency, consider the second order condition based on the active constraints:

    sᵀ ∇²L s > 0  for all s in the tangent subspace of h and the active g.

Special case: for a convex objective and a convex feasible region, the KKT conditions are sufficient for global optimality.
9 Significance of multipliers
Consider the case where the optimization problem depends on a parameter a:

    min f(x, a)  subject to  h(x, a) = 0

with Lagrangian L = f + λh and KKT condition ∇f + λ∇h = 0. We are looking for the sensitivity df/da of the optimal objective.
10 Significance of multipliers (3)
Lagrange multipliers describe the sensitivity of the objective to changes in the constraints: at the optimum, df/da = ∂L/∂a. Similar equations can be derived for multiple constraints and for inequalities. The multipliers give the "price of raising the constraint". Note: this makes it logical that at an optimum, the multipliers of inequality constraints must be non-negative!
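This "price" interpretation can be verified numerically. In the hypothetical example below (not from the slides), we minimize x² subject to x ≥ a: the optimum is x* = a with f* = a², and the KKT stationarity condition 2x* − μ = 0 gives μ = 2a, which should equal the sensitivity of the optimal objective to the constraint level a:

```python
# Hypothetical example: the multiplier equals the sensitivity of the
# optimal objective to the constraint level.
# minimize f(x) = x^2  subject to  g(x) = a - x <= 0  (i.e. x >= a), a > 0.
# Optimum: x* = a, f* = a^2; stationarity 2x* - mu = 0 gives mu = 2a.

def f_opt(a):
    return a * a  # optimal objective as a function of the constraint level

a = 1.5
mu = 2.0 * a  # multiplier from the KKT stationarity condition

# Finite-difference sensitivity of the optimum w.r.t. a:
eps = 1e-6
df_da = (f_opt(a + eps) - f_opt(a - eps)) / (2 * eps)

print(abs(df_da - mu) < 1e-6)  # True: mu is the "price" of the constraint
```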
12 Constrained optimization methods
Approaches:
- Transformation methods (penalty / barrier functions) plus unconstrained optimization algorithms
- Random methods / Simplex-like methods
- Feasible direction methods
- Reduced gradient methods
- Approximation methods (SLP, SQP)
Penalty and barrier methods were treated before. Note: constrained problems can also have interior optima!
13 Augmented Lagrangian method
Recall the penalty method:

    Φ(x, p) = f(x) + p Σi hi(x)²

Disadvantages: a high penalty factor is needed for accurate results, and a high penalty factor causes ill-conditioning and slow convergence.
14 Augmented Lagrangian method
Basic idea: add the penalty term to the Lagrangian,

    LA(x, λ, p) = f(x) + Σi λi hi(x) + p Σi hi(x)²

and use estimates and updates of the multipliers, e.g. λi ← λi + 2p hi(x). This is also possible for inequality constraints. The multiplier update rules determine convergence; exact convergence is obtained for moderate values of p. The penalty term helps to make the Hessian of L positive definite.
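The idea above can be sketched in a few lines. The problem below (minimize x1² + x2² subject to x1 + x2 = 1, exact solution (0.5, 0.5) with λ = −1) and the plain gradient-descent inner loop are illustrative assumptions, not the slides' example:

```python
import numpy as np

# Minimal augmented-Lagrangian sketch on a hypothetical problem:
# minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
# Exact solution: x* = (0.5, 0.5), lambda* = -1.

def f(x):      return x[0]**2 + x[1]**2
def h(x):      return x[0] + x[1] - 1.0
def grad_f(x): return 2.0 * x
grad_h = np.array([1.0, 1.0])

def minimize_LA(lam, p, x0, steps=2000, lr=0.05):
    # Inner loop: gradient descent on L_A(x) = f + lam*h + p*h^2
    x = x0.copy()
    for _ in range(steps):
        grad = grad_f(x) + (lam + 2.0 * p * h(x)) * grad_h
        x -= lr * grad
    return x

x = np.zeros(2)
lam, p = 0.0, 1.0          # a moderate penalty factor suffices
for _ in range(20):
    x = minimize_LA(lam, p, x)
    lam += 2.0 * p * h(x)  # multiplier update rule

print(np.allclose(x, [0.5, 0.5], atol=1e-4))  # True
```

Note that p stays fixed at a moderate value; the multiplier updates, not an ever-growing penalty, drive the iterates to the exact constrained optimum.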
16 Feasible direction methods
Moving along the boundary: Rosen's gradient projection method and Zoutendijk's method of feasible directions. Basic idea:
- move along the steepest descent direction until constraints are encountered;
- obtain the step direction by projecting the steepest descent direction onto the tangent plane;
- repeat until a KKT point is found.
Both methods involve line searches along the feasible directions. (The picture shows professor J.B. Rosen.)
17 1. Gradient projection method
Iterations follow the constraint boundary h = 0. For simplicity, consider a linearly equality-constrained problem:

    min f(x)  subject to  h(x) = Ax − b = 0

For nonlinear constraints, a mapping back to the constraint surface is needed, in the normal space.
19 Gradient projection method (3)
Search direction in the tangent space:

    s = −P ∇f,  with projection matrix  P = I − Aᵀ(A Aᵀ)⁻¹A

(rows of A are the constraint gradients). In the nonlinear case, each tangent step is followed by a correction in the normal space.
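A small numeric sketch of the projection step, with hypothetical data (one linear constraint in three variables, chosen for illustration):

```python
import numpy as np

# Project the steepest descent direction onto the tangent space of the
# linear constraints A x = b, using P = I - A^T (A A^T)^-1 A.
# A and grad_f below are hypothetical illustration data.

A = np.array([[1.0, 1.0, 0.0]])          # constraint Jacobian (rows = gradients)
grad_f = np.array([2.0, 0.0, 1.0])       # objective gradient at x_k

P = np.eye(3) - A.T @ np.linalg.inv(A @ A.T) @ A
s = -P @ grad_f                          # projected steepest descent direction

print(np.allclose(A @ s, 0.0))           # True: s lies in the tangent space
```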
20 Correction to constraint boundary
The correction is made in the normal subspace, e.g. using Newton iterations. With the first order Taylor approximation

    h(x + δx) ≈ h(x) + ∇hᵀ δx = 0

the iterations are

    x ← x − ∇h (∇hᵀ∇h)⁻¹ h(x)

until the constraint is satisfied.
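The Newton correction above can be demonstrated on a hypothetical nonlinear constraint, the unit circle h(x) = x1² + x2² − 1 = 0 (an assumption for illustration):

```python
import numpy as np

# Newton correction back to a nonlinear constraint surface, stepping in the
# normal space: dx = -grad_h * h(x) / (grad_h . grad_h).
# Constraint (unit circle) and starting point are hypothetical.

def h(x):      return x[0]**2 + x[1]**2 - 1.0
def grad_h(x): return np.array([2.0 * x[0], 2.0 * x[1]])

x = np.array([1.2, 0.9])              # point off the constraint surface
for _ in range(10):                   # Newton iterations in normal space
    gh = grad_h(x)
    x = x - gh * h(x) / (gh @ gh)

print(abs(h(x)) < 1e-10)              # True: back on the constraint
```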
21 Practical aspects
How to deal with inequality constraints? Use an active set strategy:
- keep a set of active inequality constraints;
- treat these as equality constraints;
- update the set regularly (heuristic rules).
In the gradient projection method, if s = 0, check the multipliers: this could be a KKT point. If any μi < 0, that constraint is inactive and can be removed from the active set.
22 Slack variables
An alternative way of dealing with inequality constraints is to use slack variables:

    gj(x) + sj² = 0

turns each inequality into an equality. Disadvantages: all constraints are considered all the time, and the number of design variables increases.
23 2. Zoutendijk's feasible directions
Basic idea:
- move along the steepest descent direction until constraints are encountered;
- at the constraint surface, solve a subproblem to find a descending feasible direction;
- repeat until a KKT point is found.
Subproblem: minimize α over directions s with

    descending:  ∇fᵀ s ≤ α
    feasible:    ∇gjᵀ s ≤ α   (active constraints j)

The subproblem is an LP problem and can be solved efficiently; it gives the best search direction, provided α is negative. See Belegundu p. 168 for details.
24 Zoutendijk's method
The subproblem is linear, so it is solved efficiently; determine the active set before solving the subproblem! When α = 0, a KKT point has been found. The method needs a feasible starting point. Dr. Zoutendijk worked at the University of Leiden and invented this method around 1970. Nonlinear equality constraints cannot be handled, because they have no interior and this method requires one: for a descent direction, α must be slightly negative, which pushes the design slightly into the feasible region. See Belegundu for details.
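The direction-finding LP can be written out explicitly. The gradients below are hypothetical data at some current point, and the box normalization −1 ≤ si ≤ 1 is one common choice:

```python
import numpy as np
from scipy.optimize import linprog

# Zoutendijk-style direction-finding LP at a hypothetical point:
# minimize alpha  s.t.  grad_f . s <= alpha,  grad_g . s <= alpha (active g),
# with normalization -1 <= s_i <= 1. A strictly negative optimal alpha
# means a feasible descent direction exists.

grad_f = np.array([1.0, 2.0])    # objective gradient (illustration data)
grad_g = np.array([-1.0, 0.0])   # gradient of the active constraint

# Variables: z = (s1, s2, alpha); objective: minimize alpha.
c = np.array([0.0, 0.0, 1.0])
A_ub = np.array([
    [grad_f[0], grad_f[1], -1.0],   # grad_f . s - alpha <= 0
    [grad_g[0], grad_g[1], -1.0],   # grad_g . s - alpha <= 0
])
b_ub = np.zeros(2)
bounds = [(-1, 1), (-1, 1), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
s, alpha = res.x[:2], res.x[2]
print(alpha < 0)                        # True: descent direction found
print(grad_f @ s <= alpha + 1e-6)      # descending
print(grad_g @ s <= alpha + 1e-6)      # feasible
```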
26 Reduced gradient methods
Basic idea:
- choose a set of n − m decision variables d (the remaining m are state variables s);
- use the reduced gradient in an unconstrained gradient-based method:

    df/dd = ∂f/∂d − (∂f/∂s)(∂h/∂s)⁻¹(∂h/∂d)   (reduced gradient)

The state variables s are determined from h(d, s) = 0 (iteratively for nonlinear constraints): write h(d, s) = 0 as a first order Taylor approximation and iterate on s (basically a Newton method).
27 Reduced gradient method
For nonlinear constraints, Newton iterations return the design to the constraint surface (determining s):

    s ← s − (∂h/∂s)⁻¹ h(d, s)   until convergence

A note on the selection of the variables is given in Papalambros (p. …). The cost of the back-to-constraint mapping procedure depends strongly on the partitioning. Variants using second order information also exist. Drawback: the selection of decision variables (but some procedures exist).
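A minimal reduced-gradient sketch on a hypothetical problem (minimize x1² + x2² subject to x1 + 2x2 − 2 = 0, with decision variable d = x1 and state variable s = x2; for this linear constraint the state follows in closed form, where a nonlinear h would need the Newton iterations above):

```python
import numpy as np

# Hypothetical problem: minimize f = x1^2 + x2^2
# subject to h = x1 + 2*x2 - 2 = 0; d = x1 (decision), s = x2 (state).
# Reduced gradient: df/dd = df/dd - (df/ds)(dh/ds)^-1 (dh/dd).

def solve_state(d):
    # Closed form for this linear constraint; Newton iterations otherwise.
    return (2.0 - d) / 2.0

def reduced_grad(d, s):
    df_dd, df_ds = 2.0 * d, 2.0 * s      # partial derivatives of f
    dh_dd, dh_ds = 1.0, 2.0              # partial derivatives of h
    return df_dd - df_ds * dh_dd / dh_ds

d = 0.0
for _ in range(200):                     # steepest descent on d only
    s = solve_state(d)
    d -= 0.1 * reduced_grad(d, s)

s = solve_state(d)
# Analytic optimum of this problem: x = (2/5, 4/5)
print(np.allclose([d, s], [0.4, 0.8], atol=1e-6))  # True
```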
31 SLP points of attention
SLP solves an LP problem in every cycle, so it is efficient only when the analysis cost is relatively high. It has a tendency to diverge; the solution is a trust region (move limits).
32 SLP points of attention (2)
An infeasible starting point can result in an unsolvable LP problem. Solution: relax the constraints in the first cycles, gj ≤ β, with β sufficiently large to force the solution into the feasible region. The feasible domain is enlarged by β, which allows a certain amount of constraint violation.
33 SLP points of attention (3)
Cycling can occur when the optimum lies on a curved constraint. Solution: a move limit reduction strategy.
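The SLP ideas above (linearize, solve an LP, shrink move limits to suppress cycling) can be sketched on a hypothetical problem; the quadratic objective, linear constraint, and the 0.7 reduction factor are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

# Minimal SLP sketch on a hypothetical problem:
# minimize f = (x1-2)^2 + (x2-2)^2  subject to  g = x1 + x2 - 2 <= 0.
# Each cycle linearizes f and g at x_k and solves an LP restricted by
# move limits; the move limits shrink to damp cycling on the constraint.

def f(x):      return (x[0] - 2)**2 + (x[1] - 2)**2
def grad_f(x): return np.array([2*(x[0] - 2), 2*(x[1] - 2)])
def g(x):      return x[0] + x[1] - 2.0
grad_g = np.array([1.0, 1.0])

x = np.zeros(2)
move = 0.5                                   # move limit per cycle
for _ in range(40):
    # LP in the step dx: min grad_f . dx
    #   s.t. g + grad_g . dx <= 0, |dx_i| <= move
    res = linprog(grad_f(x),
                  A_ub=grad_g.reshape(1, -1), b_ub=[-g(x)],
                  bounds=[(-move, move)] * 2)
    x = x + res.x
    move *= 0.7                              # move limit reduction strategy

# True optimum is (1, 1) with f = 2; the iterates stay feasible and the
# shrinking move limits kill the oscillation along the curved level sets.
print(g(x) < 1e-6)
print(abs(f(x) - 2.0) < 1e-2)
```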
34 Method of Moving Asymptotes
A first order method, by Svanberg (1987). It builds a convex approximate problem, approximating the responses using:

    f̃(x) = R + Σi ( Pi / (Ui − xi) + Qi / (xi − Li) )

R, Pi, Qi, Ui and Li are determined based on the values of the gradient and objective, and on the history of the optimization process. See also p. 325 of Papalambros. The approximate problem is solved efficiently. MMA is a popular method in topology optimization.
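A one-variable sketch of such an approximation: the rule below for choosing P and Q from the sign of the slope is one common special case (slope folded entirely into the term toward the relevant asymptote), and the function, expansion point, and asymptotes are hypothetical:

```python
# MMA-type convex approximation (1 variable, hypothetical data):
# f_approx(x) = R + P/(U - x) + Q/(x - L), with asymptotes L < x < U.
# P, Q are set from the slope at x0 so the approximation matches f and
# df/dx there (common special case: slope >= 0 goes into P, else into Q).

def mma_approx(x, x0, f0, dfdx0, L, U):
    if dfdx0 >= 0.0:
        P, Q = dfdx0 * (U - x0) ** 2, 0.0
    else:
        P, Q = 0.0, -dfdx0 * (x0 - L) ** 2
    R = f0 - P / (U - x0) - Q / (x0 - L)
    return R + P / (U - x) + Q / (x - L)

# Match value and slope of f(x) = x^2 at x0 = 1, asymptotes L = -2, U = 4:
x0, f0, dfdx0, L, U = 1.0, 1.0, 2.0, -2.0, 4.0
print(abs(mma_approx(x0, x0, f0, dfdx0, L, U) - f0) < 1e-12)  # matches f
eps = 1e-6
slope = (mma_approx(x0 + eps, x0, f0, dfdx0, L, U)
         - mma_approx(x0 - eps, x0, f0, dfdx0, L, U)) / (2 * eps)
print(abs(slope - dfdx0) < 1e-4)                              # matches df/dx
```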
35 Sequential Approximate Optimization
A zeroth order method:
1. Determine the initial trust region
2. Generate sampling points (design of experiments)
3. Build a response surface (e.g. least squares, Kriging, …)
4. Optimize the approximate problem
5. Check convergence, update the trust region, repeat from 2
Many variants exist! See also Lecture 4.
36 Sequential Approximate Optimization
A good approach for expensive models: the response surface dampens noise, and the method is versatile.
(Figure: response surface over the design domain, showing the trust region, a sub-optimal point, and the optimum.)
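One cycle of the steps above can be sketched as follows; the noisy "expensive model", the sample count, and the quadratic surface are hypothetical choices:

```python
import numpy as np

# One sequential-approximate-optimization cycle on a hypothetical noisy
# model f(x) = (x - 0.7)^2 + noise: sample the trust region, fit a
# quadratic response surface by least squares, minimize the surface.

rng = np.random.default_rng(0)

def expensive_model(x):
    return (x - 0.7) ** 2 + 0.001 * rng.standard_normal()

lo, hi = 0.0, 2.0                        # current trust region
xs = np.linspace(lo, hi, 9)              # design of experiments (sampling)
ys = np.array([expensive_model(x) for x in xs])

# Least-squares fit of y = a*x^2 + b*x + c:
A = np.vstack([xs**2, xs, np.ones_like(xs)]).T
(a, b, c), *_ = np.linalg.lstsq(A, ys, rcond=None)

x_next = -b / (2 * a)                    # minimizer of the response surface
print(a > 0)                             # surface is convex
print(abs(x_next - 0.7) < 0.1)           # close to the true optimum
```

Note how the least-squares fit averages out the noise in the samples, which is exactly the "RS dampens noise" point above.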
40 SQP (3)
Quadratic subproblem for finding the search direction sk:

    min over s:  ½ sᵀ ∇²L s + ∇fᵀ s   subject to  h + ∇hᵀ s = 0

Note: the optimality conditions of this subproblem are the (linearized) KKT conditions of the original problem.
41 Quadratic subproblem
A quadratic subproblem with linear constraints can be solved efficiently. General case:

    min over s:  ½ sᵀ W s + cᵀ s   subject to  A s = b

The KKT conditions form a single linear system:

    [ W  Aᵀ ] [ s ]   [ −c ]
    [ A  0  ] [ λ ] = [  b ]

and its solution gives the search direction and multiplier estimates. Efficient specialized algorithms exist to solve this system (Papalambros p. 318).
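The KKT system can be assembled and solved directly; W, c, A, and b below are hypothetical illustration data:

```python
import numpy as np

# Solve an equality-constrained QP subproblem (hypothetical data):
# min 0.5 s^T W s + c^T s  s.t.  A s = b, via the single KKT system
# [W A^T; A 0] [s; lam] = [-c; b].

W = np.array([[2.0, 0.0], [0.0, 2.0]])   # (approximate) Hessian, pos. def.
c = np.array([-2.0, -4.0])               # gradient term
A = np.array([[1.0, 1.0]])               # linearized constraint Jacobian
b = np.array([1.0])

n, m = W.shape[0], A.shape[0]
K = np.block([[W, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(K, rhs)
s, lam = sol[:n], sol[n:]

print(np.allclose(A @ s, b))                    # constraint satisfied
print(np.allclose(W @ s + c + A.T @ lam, 0.0))  # stationarity
```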
42 Basic SQP algorithm
1. Choose an initial point x0 and initial multiplier estimates λ0
2. Set up the matrices for the QP subproblem
3. Solve the QP subproblem → sk, λk+1
4. Set xk+1 = xk + sk
5. Check the convergence criteria; if not converged, repeat from 2, otherwise finished.
43 SQP refinements
For convergence of the Newton method, the Hessian approximation must be positive definite. A line search along sk improves robustness; it uses a special "merit function" to locate the best point. To avoid computing Hessian information, quasi-Newton approaches (DFP, BFGS) can be used (these also ensure positive definiteness). For dealing with inequality constraints, various active set strategies exist; they operate either on the original problem or on the quadratic subproblem.
44 Comparison

    Method                    AugLag   Zoutendijk   GRG   SQP
    Feasible starting point?  No       Yes          Yes   No
    Nonlinear constraints?    Yes      Yes          Yes   Yes
    Equality constraints?     Yes      Hard         Yes   Yes
    Uses active set?          Yes      Yes          No    Yes
    Iterates feasible?        No       Yes          No    No
    Derivatives needed?       Yes      Yes          Yes   Yes

SQP is generally seen as the best general-purpose method for constrained problems.
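In practice, an off-the-shelf SQP-type solver such as SciPy's SLSQP can be used; the small problem below is a hypothetical illustration (note SciPy's 'ineq' convention is fun(x) ≥ 0, and, as the table indicates for SQP, an infeasible starting point is allowed):

```python
import numpy as np
from scipy.optimize import minimize

# SQP in practice: SciPy's SLSQP on a hypothetical problem.
# Minimize f = x1^2 + x2^2  subject to  x1 + x2 >= 1.
res = minimize(lambda x: x[0]**2 + x[1]**2,
               x0=np.array([0.0, 0.0]),            # infeasible start is fine
               method="SLSQP",
               constraints=[{"type": "ineq",
                             "fun": lambda x: x[0] + x[1] - 1.0}])

print(res.success)                                  # True
print(np.allclose(res.x, [0.5, 0.5], atol=1e-3))    # True
```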