Engineering Optimization

Presentation transcript:

Engineering Optimization - Concepts and Applications. Fred van Keulen, Matthijs Langelaar. CLA H21.1, A.vanKeulen@tudelft.nl

Recap / overview
Optimization problem: definition, checking, negative null form, model
Special topics: linear / convex problems, sensitivity analysis, topology optimization
Solution methods: unconstrained problems, constrained problems; for each, optimality criteria and optimization algorithms

Summary optimality conditions. Conditions for a local minimum of an unconstrained problem: First Order Necessity Condition: the gradient of f vanishes at x*. Second Order Sufficiency Condition: H positive definite at x*. For convex f in a convex feasible domain, the condition for a global minimum: the first order condition is already sufficient.
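The formulas themselves did not survive the transcript; in the usual notation they read (a standard reconstruction, not copied from the slide):

```latex
\nabla f(\mathbf{x}^*) = \mathbf{0} \quad \text{(first order, necessary)}, \qquad
\mathbf{H}(\mathbf{x}^*) \ \text{positive definite} \quad \text{(second order, sufficient)}, \qquad
f \ \text{convex}: \ \nabla f(\mathbf{x}^*) = \mathbf{0} \ \Rightarrow \ \mathbf{x}^* \ \text{is a global minimum}.
```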

Stationary point nature summary:
Definiteness of H         Nature of x*
Positive definite         Minimum
Positive semi-definite    Valley
Indefinite                Saddle point
Negative semi-definite    Ridge
Negative definite         Maximum

Complex eigenvalues? Question: what is the nature of a stationary point when H has complex eigenvalues? Answer: this situation never occurs, because H is symmetric by definition, and symmetric matrices have real eigenvalues (spectral theorem).

Nature of stationary points. The nature of the initial position depends on the load (buckling). (Figure: two-spring system with stiffnesses k1 and k2, length l, loaded by a force F.)

Nature of stationary points (2)

Unconstrained optimization algorithms Single-variable methods 0th order (involving only f ) 1st order (involving f and f ’ ) 2nd order (involving f, f ’ and f ” ) Multiple variable methods

Why optimization algorithms? The optimality conditions often cannot be used: the function is not explicitly known (e.g. it comes from a simulation), or the conditions cannot be solved analytically. Example: stationary points:

0th order methods: pros/cons. Weaknesses: (usually) less efficient than higher order methods (many function evaluations). Strengths: no derivatives needed; also work for discontinuous / non-differentiable functions; easy to program; robust.

Minimization with one variable. Why? It is the simplest case and a good starting point, and it is used within multi-variable methods during line search. Setting: an iterative process in which the optimizer proposes a new x and the model returns f(x).

Termination criteria. Stop the optimization iterations when: the solution is sufficiently accurate (check the optimality criteria), progress becomes too slow, the maximum resources have been spent, the solution diverges, or cycling occurs.
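As a rough illustration of how such checks can be combined in code (a minimal sketch; the function name, arguments and tolerances are illustrative and not taken from the slides):

```python
def should_stop(x_prev, x, f_prev, fx, dfx, k,
                xtol=1e-8, ftol=1e-12, gtol=1e-8, max_iter=200):
    """Typical termination tests for an iterative 1-D minimizer (sketch)."""
    if abs(dfx) < gtol:            # sufficiently accurate: f'(x) approximately zero
        return True
    if abs(x - x_prev) < xtol:     # progress in x has become too slow
        return True
    if abs(fx - f_prev) < ftol:    # progress in f has become too slow
        return True
    if k >= max_iter:              # maximum resources have been spent
        return True
    return False
```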

Brute-force approach. Simple approach: exhaustive search. Evaluate f at n equally spaced points over the starting interval of length L0; with n points the final interval containing the minimum has size Ln = 2 L0 / (n + 1). Disadvantage: rather inefficient.
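A minimal sketch of this exhaustive search (function name and return convention are illustrative, not from the slides):

```python
def exhaustive_search(f, a, b, n):
    """Brute-force search (sketch): evaluate f at n equally spaced interior
    points of [a, b] and return the interval around the best point.
    The final interval size is 2*(b - a)/(n + 1), so a small interval
    requires many evaluations: rather inefficient."""
    h = (b - a) / (n + 1)
    xs = [a + (i + 1) * h for i in range(n)]
    fs = [f(x) for x in xs]
    best = min(range(n), key=lambda j: fs[j])   # index of the best point
    return max(a, xs[best] - h), min(b, xs[best] + h)
```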

Basic strategy of 0th order methods for the single-variable case:
1. Find an interval [a0, b0] that contains the minimum (bracketing).
2. Iteratively reduce the size of the interval [ak, bk] (sectioning).
3. Approximate the minimum by the minimum of a simple interpolation function over the final interval [aN, bN].
Sectioning methods: dichotomous search, Fibonacci method, golden section method.

Bracketing the minimum. Take expanding steps x2 = x1 + D, x3 = x2 + g*D, x4 = x3 + g^2*D, ... until an increase in f brackets the minimum in an interval [a0, b0]. The starting point x1, stepsize D and expansion parameter g are user-defined.
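A minimal sketch of such a bracketing routine (names and the downhill-direction check are illustrative assumptions, not from the slides):

```python
def bracket_minimum(f, x1, step, gamma=2.0, max_iter=50):
    """Bracketing (sketch): from x1 take growing steps D, g*D, g^2*D, ...
    until the function value increases, so the last three points bracket
    a minimum. x1, step D and expansion parameter gamma are user-defined."""
    x_prev, f_prev = x1, f(x1)
    x, fx = x1 + step, f(x1 + step)
    if fx > f_prev:                       # wrong direction: search downhill instead
        step = -step
        x, fx = x1 + step, f(x1 + step)
    for _ in range(max_iter):
        step *= gamma
        x_next, f_next = x + step, f(x + step)
        if f_next > fx:                   # increase found: bracket established
            return min(x_prev, x_next), max(x_prev, x_next)
        x_prev, f_prev = x, fx
        x, fx = x_next, f_next
    raise RuntimeError("no bracket found within max_iter steps")
```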

Unimodality. Bracketing and sectioning methods work best for unimodal functions: "A unimodal function consists of exactly one monotonically decreasing part followed by one monotonically increasing part", i.e. it has a single minimum in the interval.

Dichotomous search ("dichotomous": dividing into two parts). Conceptually simple idea: try to split the interval [a0, b0] of length L0 roughly in half in each step, by evaluating f at two points a small distance d apart (d << L0) around the midpoint.
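A minimal sketch of one possible implementation (function name and default value of d are illustrative assumptions):

```python
def dichotomous_search(f, a, b, m, delta=1e-6):
    """Dichotomous search (sketch): in each of m steps, evaluate f at two
    points a small distance delta apart around the midpoint of [a, b] and
    discard (almost) half of the interval."""
    for _ in range(m):
        mid = 0.5 * (a + b)
        x1, x2 = mid - 0.5 * delta, mid + 0.5 * delta
        if f(x1) < f(x2):
            b = x2               # minimum lies in [a, x2]
        else:
            a = x1               # minimum lies in [x1, b]
    return a, b
```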

Dichotomous search (2) Interval size after 1 step (2 evaluations): Interval size after m steps (2m evaluations): Proper choice for d :
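The formulas this slide refers to were lost in the transcript; for the scheme above they are commonly given as (a standard reconstruction, with d chosen as small as the accuracy of f allows):

```latex
L_1 = \frac{L_0}{2} + \frac{\delta}{2}, \qquad
L_m = \frac{L_0}{2^m} + \delta\left(1 - \frac{1}{2^m}\right).
```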

Dichotomous search (3). Example: m = 10. (Figure: interval reduction versus m, compared with the ideal interval reduction.)

Sectioning - Fibonacci. Situation: the minimum is bracketed between x1 and x3, with an interior point x2. Test a new point x4 and reduce the interval. Question: what is the optimal point placement?

Optimal sectioning. The Fibonacci method is the optimal sectioning method. Given: the initial interval [a0, b0] and a predefined total number of evaluations N, or a desired final interval size e.

Fibonacci sectioning - basic idea. Start from the final interval and use symmetry and maximum interval reduction (d << IN): IN-1 = 2 IN, IN-2 = 3 IN, IN-3 = 5 IN, IN-4 = 8 IN, IN-5 = 13 IN, ...; the factors are Fibonacci numbers. (Figure: the yellow point is the point added in the previous iteration.)
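A minimal sketch of Fibonacci sectioning, assuming n >= 3 evaluations (the small offset eps at the final step, where the two interior points would otherwise coincide, corresponds to the d << IN on the slide; names are illustrative):

```python
def fibonacci_search(f, a, b, n, eps=1e-6):
    """Fibonacci sectioning (sketch): minimize a unimodal f on [a, b]
    using exactly n function evaluations; returns the best point found."""
    F = [1, 1]
    for _ in range(2, n + 1):
        F.append(F[-1] + F[-2])           # Fibonacci numbers F[0..n]

    x1 = a + F[n - 2] / F[n] * (b - a)    # two symmetric interior points
    x2 = a + F[n - 1] / F[n] * (b - a)
    f1, f2 = f(x1), f(x2)                 # first 2 evaluations

    for k in range(1, n - 1):             # n - 2 further evaluations
        if f1 > f2:                       # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + F[n - k - 1] / F[n - k] * (b - a)
            if k == n - 2:                # last step: offset coinciding points
                x2 += eps * (b - a)
            f2 = f(x2)
        else:                             # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + F[n - k - 2] / F[n - k] * (b - a)
            if k == n - 2:
                x1 -= eps * (b - a)
            f1 = f(x1)
    return x1 if f1 < f2 else x2
```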

Sectioning - Golden Section. For large N, the Fibonacci fraction FN-1/FN converges to the golden section ratio f (0.618034...). The golden section method uses this constant interval reduction ratio f in every iteration.
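A minimal sketch of the golden section method (the tolerance-based stopping rule is an illustrative choice, not from the slides):

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Golden section sectioning (sketch): minimize a unimodal f on [a, b]
    with a constant interval reduction ratio of about 0.618 per evaluation."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0    # 0.618034...: reduction ratio
    x1 = b - phi * (b - a)
    x2 = a + phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 > f2:                       # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + phi * (b - a)
            f2 = f(x2)
        else:                             # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - phi * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)

# Example use: golden_section(lambda x: (x - 1.3)**2, 0.0, 4.0) returns about 1.3
```

Note that only one new function evaluation is needed per iteration, because one of the two interior points is reused; this reuse is exactly what fixes the ratio to the golden section.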

Sectioning - Golden Section (2). Origin of the golden section: successive intervals have a constant ratio, I2 = f I1, I3 = f I2, and so on; the final interval after N evaluations is IN = f^(N-1) I1.
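A brief reconstruction of where the value of f comes from (standard derivation, assuming one interior point is reused so that I1 = I2 + I3):

```latex
I_1 = I_2 + I_3 = \varphi I_1 + \varphi^2 I_1
\;\Rightarrow\; \varphi^2 + \varphi - 1 = 0
\;\Rightarrow\; \varphi = \frac{\sqrt{5} - 1}{2} \approx 0.618.
```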

Comparison of sectioning methods. Example: reduction to 2% of the original interval size.
Method            Evaluations N
Fibonacci         8
Golden section    9
Dichotomous       12
(Exhaustive       99)
Conclusion: golden section is simple and near-optimal. (Figure: interval size versus number of evaluations N for the ideal dichotomous reduction, Fibonacci and golden section methods.)

Quadratic interpolation. The three points of the bracket [ai, bi] define an interpolating quadratic function; the new point xnew is placed at the minimum of this parabola and defines the reduced bracket [ai+1, bi+1]. For a minimum the leading coefficient must be positive (a > 0). Shift xnew slightly when it is very close to an existing point.
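A minimal sketch of one interpolation step (the function name and the use of numpy.polyfit are illustrative choices; a full method would also update the bracket and shift near-coincident points):

```python
import numpy as np

def quadratic_step(f, x1, x2, x3):
    """One quadratic-interpolation step (sketch): fit a parabola through
    the three bracket points and return its minimizer as the new point."""
    xs = np.array([x1, x2, x3])
    fs = np.array([f(x1), f(x2), f(x3)])
    c2, c1, c0 = np.polyfit(xs, fs, 2)     # f ~ c2*x^2 + c1*x + c0
    if c2 <= 0.0:                          # parabola must open upwards (a > 0)
        raise ValueError("interpolating parabola has no minimum")
    return -c1 / (2.0 * c2)                # stationary point of the parabola
```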

Unconstrained optimization algorithms Single-variable methods 0th order (involving only f ) 1st order (involving f and f ’ ) 2nd order (involving f, f ’ and f ” ) Multiple variable methods

Cubic interpolation. Similar to quadratic interpolation, but a cubic is fitted through the two bracket points ai and bi using function values and derivative information at both points.
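A minimal sketch of one cubic-interpolation step, assuming a < b (the approach of solving for the cubic coefficients directly, and the midpoint fallback, are illustrative choices rather than the slide's formula):

```python
import numpy as np

def cubic_step(f, df, a, b):
    """One cubic-interpolation step (sketch): fit a cubic to f(a), f(b),
    f'(a), f'(b) and return the minimizer of that cubic inside [a, b]."""
    # Solve for (c3, c2, c1, c0) in c3*x^3 + c2*x^2 + c1*x + c0
    A = np.array([[a**3,   a**2, a,   1.0],
                  [b**3,   b**2, b,   1.0],
                  [3*a**2, 2*a,  1.0, 0.0],
                  [3*b**2, 2*b,  1.0, 0.0]])
    c3, c2, c1, c0 = np.linalg.solve(A, np.array([f(a), f(b), df(a), df(b)]))
    # Stationary points of the cubic: roots of 3*c3*x^2 + 2*c2*x + c1
    for x in np.roots([3*c3, 2*c2, c1]):
        if abs(x.imag) < 1e-12 and 6*c3*x.real + 2*c2 > 0 and a <= x.real <= b:
            return x.real                  # real minimizer inside the bracket
    return 0.5 * (a + b)                   # fall back to the midpoint
```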

Bisection method. Optimality condition: the minimum is at a stationary point, so the problem becomes root finding of f'. Similar to the sectioning methods, but uses the derivative: the interval is halved in each iteration based on the sign of f'. Note that this is better than any of the direct (0th order) methods.
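A minimal sketch (assumes the bracket already satisfies f'(a) < 0 < f'(b); names are illustrative):

```python
def bisection_min(df, a, b, tol=1e-8):
    """Bisection on f' (sketch): halve the bracket around a stationary
    point of f in each iteration, assuming df(a) < 0 < df(b)."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if df(m) > 0.0:        # minimum lies to the left of m
            b = m
        else:                  # minimum lies to the right of m
            a = m
    return 0.5 * (a + b)
```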

Secant method. Also based on root finding of f'; uses linear interpolation of f' between the bracket points. The interval is possibly reduced by even more than half in each iteration, making it the best of the interval-based first-order methods discussed here.
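A minimal sketch of the bracket-keeping (false position) variant on f', which matches the interval picture on the slide (names and tolerances are illustrative):

```python
def secant_min(df, a, b, tol=1e-8, max_iter=100):
    """Secant / false-position on f' (sketch): linearly interpolate f'
    between the bracket ends and keep the sub-interval containing the
    sign change. Assumes df(a) < 0 < df(b)."""
    da, db = df(a), df(b)
    for _ in range(max_iter):
        x = b - db * (b - a) / (db - da)   # zero of the linear interpolant of f'
        dx = df(x)
        if abs(dx) < tol or (b - a) < tol:
            return x
        if dx > 0.0:                       # stationary point lies in [a, x]
            b, db = x, dx
        else:                              # stationary point lies in [x, b]
            a, da = x, dx
    return x
```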

Unconstrained optimization algorithms Single-variable methods 0th order (involving only f ) 1st order (involving f and f ’ ) 2nd order (involving f, f ’ and f ” ) Multiple variable methods

Newton's method. Again, root finding of f'. Basis: a Taylor (linear) approximation of f' around the current point, f'(x) ≈ f'(xk) + f''(xk)(x − xk); setting this to zero gives the new guess xk+1 = xk − f'(xk) / f''(xk).
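A minimal sketch (function name, tolerances and the failure behaviour are illustrative assumptions):

```python
def newton_min(df, d2f, x0, tol=1e-10, max_iter=50):
    """Newton's method on f' (sketch): x_{k+1} = x_k - f'(x_k) / f''(x_k).
    Very fast near a minimum with f''(x) > 0, but it may diverge from a
    poor starting point x0."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")
```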

Newton's method (2). Best convergence of all methods, unless it diverges. Note that the iterates xk, xk+1, xk+2, ... jump from point to point and are not contained in an interval; this is dangerous, because the iteration may diverge. (Figures: a converging and a diverging Newton iteration on f'.)

Summary of single-variable methods:
0th order: bracketing + dichotomous sectioning, Fibonacci sectioning, golden ratio sectioning, quadratic interpolation
1st order: cubic interpolation, bisection method, secant method
2nd order: Newton method
And many, many more! In practice, additional "tricks" are needed to deal with multimodality, strong fluctuations, round-off errors and divergence.