Optimization Introduction & 1-D Unconstrained Optimization

Mathematical Background
Objective: Find x = {x1, x2, …, xn} that maximizes or minimizes f(x) subject to
  di(x) ≤ ai,  i = 1, 2, …, m   (inequality constraints)
  ei(x) = bi,  i = 1, 2, …, p   (equality constraints)
where f(x) is the objective function and the ai and bi are constants.
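For example (a simple illustrative case, not from the slides): minimize f(x1, x2) = x1^2 + x2^2 subject to the single equality constraint x1 + x2 = 1. The unconstrained minimum is at (0, 0), but the constrained minimum is at (0.5, 0.5).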

Classification of Optimization Problems
If f(x) and the constraints are linear, we have linear programming.
  e.g., maximize x + y subject to 3x + 4y ≤ 2 and y ≤ 5.
If f(x) is quadratic and the constraints are linear, we have quadratic programming.
If f(x) is not linear or quadratic and/or the constraints are nonlinear, we have nonlinear programming.

Classification of Optimization Problems
When constraints (the inequality and equality equations above) are included, we have a constrained optimization problem. Otherwise, we have an unconstrained optimization problem.

Optimization Methods
One-Dimensional Unconstrained Optimization
  Golden-Section Search
  Quadratic Interpolation
  Newton's Method
Multi-Dimensional Unconstrained Optimization
  Non-gradient or direct methods
  Gradient methods
Linear Programming (Constrained)
  Graphical Solution
  Simplex Method

Global and Local Optima
A function is said to be multimodal on a given interval if there is more than one minimum or maximum point in the interval. The single best point over the whole interval is the global optimum; the others are local optima.
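For example, f(x) = x^4 - 2x^2 is multimodal: it has local minima at x = -1 and x = 1 (where f = -1) and a local maximum at x = 0 (where f = 0).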

Characteristics of Optima
To find the optima, we can find the zeros of f'(x). At a maximum f''(x) < 0, and at a minimum f''(x) > 0; if f''(x) = 0 the point may be an inflection point rather than an optimum.
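For example, for f(x) = x^3 - 6x^2 + 9x we have f'(x) = 3x^2 - 12x + 9 = 3(x - 1)(x - 3), which is zero at x = 1 and x = 3. Since f''(x) = 6x - 12, f''(1) = -6 < 0 gives a local maximum (f(1) = 4) and f''(3) = 6 > 0 gives a local minimum (f(3) = 0).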

Newton's Method
Let g(x) = f'(x). Then the zeros of g(x) are the optima of f(x). Substituting g(x) into the updating formula of the Newton-Raphson method, we have
  x_{i+1} = x_i - f'(x_i) / f''(x_i)
Note: other root-finding methods will also work.
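A minimal Python sketch of this iteration (the test function, its derivatives, and the starting point are illustrative assumptions, not from the slides):

def newton_optimize(df, d2f, x0, tol=1e-8, max_iter=50):
    """Find a stationary point of f by applying Newton-Raphson to g(x) = f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)     # f'(x_i) / f''(x_i)
        x = x - step              # x_{i+1} = x_i - f'(x_i) / f''(x_i)
        if abs(step) < tol:       # stop when the update is negligible
            break
    return x

# Example: f(x) = x^3 - 6x^2 + 9x has a local maximum at x = 1.
df = lambda x: 3 * x**2 - 12 * x + 9     # f'(x)
d2f = lambda x: 6 * x - 12               # f''(x)
print(newton_optimize(df, d2f, x0=0.1))  # converges to 1.0

Note that Newton's method converges to whichever stationary point is nearby; the sign of f''(x) there tells us whether it is a maximum or a minimum.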

Newton's Method
Shortcomings:
  Need to derive f'(x) and f''(x).
  May diverge.
  May "jump" to another solution far away.
Advantages:
  Fast convergence rate near the solution.
Hybrid approach: use a bracketing method to find an approximation near the solution, then switch to Newton's method.

Bracketing Method
[Figure: a unimodal f(x) on the interval [xl, xu]]
Suppose f(x) is unimodal on the interval [xl, xu]; that is, there is only one local maximum in [xl, xu].
Objective: gradually narrow down the interval by eliminating the sub-interval that does not contain the maximum.

Bracketing Method
[Figure: interior points xa and xb inside the interval [xl, xu], before and after one elimination step]
Let xa and xb be two points in (xl, xu) where xa < xb.
If f(xa) > f(xb), then the maximum cannot reside in the interval [xb, xu], so we can eliminate the portion to the right of xb. In other words, in the next iteration we can make xb the new xu.

Generic Bracketing Method (Pseudocode)
// xl, xu: Lower and upper bounds of the interval
// es: Acceptable relative error
function BracketingMax(xl, xu, es) {
  optimal = -infinity;
  do {
    prev_optimal = optimal;
    Select xa and xb s.t. xl <= xa < xb <= xu;
    if (f(xa) < f(xb))
      xl = xa;    // the maximum cannot be in [xl, xa]
    else
      xu = xb;    // the maximum cannot be in [xb, xu]
    optimal = max(f(xa), f(xb));
    ea = abs((optimal - prev_optimal) / optimal);  // approximate relative error
  } while (ea > es);   // stop once the error is acceptably small
  return optimal;
}

Bracketing Method
How should we select xa and xb (with the objective of minimizing computation)?
Option 1: Eliminate as much of the interval as possible in each iteration. Set xa and xb close to the center so that we can nearly halve the interval each time. Drawback: function evaluation is usually a costly operation, and this choice needs two new evaluations per iteration.
Option 2: Minimize the number of function evaluations. Select xa and xb such that one of them can be reused in the next iteration, so that we only need to evaluate f(x) once per iteration. How should we select such points?

[Figure: the current interval of length l0 = l1 + l2 with interior points xa and xb, and the next interval of length l'0 with interior points x'a and x'b]
Objective: if we compute xa and xb from a fixed ratio R of the current interval length in each iteration, then we can reuse one of xa and xb in the next iteration. In this example, xa is reused as x'b in the next iteration, so in the next iteration we only need to evaluate f(x'a).

Golden Ratio
[Figure: the same two intervals as above, with lengths l0 = l1 + l2 in the current iteration and l'0 = l1 in the next]
For the reused point to sit at the same relative position in the new interval, the lengths must satisfy l0 = l1 + l2 and l2 / l1 = l1 / l0 = R. Dividing l0 = l1 + l2 by l1 gives 1/R = 1 + R, i.e., R^2 + R - 1 = 0, whose positive root is R = (√5 - 1)/2 ≈ 0.618, the golden ratio.

Golden-Section Search
Start with two initial guesses, xl and xu. Two interior points xa and xb are calculated based on the golden ratio as
  d = R (xu - xl),  xa = xu - d,  xb = xl + d,  where R = (√5 - 1)/2 ≈ 0.618.
In the first iteration, both xa and xb need to be calculated. In subsequent iterations, xl and xu are updated accordingly and only one of the two interior points needs to be calculated (the other one is inherited from the previous iteration).
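A minimal Python sketch of golden-section search for a maximum; the unimodal test function and interval at the end are illustrative assumptions, not part of the slides:

import math

def golden_section_max(f, xl, xu, tol=1e-6, max_iter=100):
    """Golden-section search for the maximum of a unimodal f on [xl, xu]."""
    R = (math.sqrt(5) - 1) / 2           # golden ratio, about 0.618
    d = R * (xu - xl)
    xa, xb = xu - d, xl + d              # interior points, xa < xb
    fa, fb = f(xa), f(xb)
    for _ in range(max_iter):
        if fa > fb:                      # maximum cannot be in [xb, xu]
            xu, xb, fb = xb, xa, fa      # old xa is reused as the new xb
            xa = xu - R * (xu - xl)
            fa = f(xa)                   # only one new evaluation per iteration
        else:                            # maximum cannot be in [xl, xa]
            xl, xa, fa = xa, xb, fb      # old xb is reused as the new xa
            xb = xl + R * (xu - xl)
            fb = f(xb)
        if xu - xl < tol:                # interval is small enough
            break
    return (xl + xu) / 2

# Example: maximize f(x) = 2 sin(x) - x^2/10 on [0, 4]; the maximum is near x = 1.43.
f = lambda x: 2 * math.sin(x) - x**2 / 10
print(golden_section_max(f, 0.0, 4.0))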

Golden-Section Search
In each iteration the interval is reduced to about 61.8% (the golden ratio) of its previous length. After 10 iterations, the interval is shrunk to about (0.618)^10, or 0.8%, of its initial length. After 20 iterations, it is shrunk to about (0.618)^20, or 0.0066%.

Quadratic Interpolation
[Figure: f(x) and a parabola g(x) fitted through x0, x1, x2; the optimum of g(x), at x3, approximates the optimum of f(x)]
Idea: (i) approximate f(x) using a quadratic function g(x) = ax^2 + bx + c; (ii) the optimum of g(x) approximates the optimum of f(x).

Quadratic Interpolation
Near an optimum, the shape of f(x) typically looks like a parabola, so we can approximate the original function f(x) by a quadratic function g(x) = ax^2 + bx + c.
At the optimum point of g(x), g'(x) = 2ax + b = 0. Let x3 be the optimum point; then x3 = -b / (2a).
How do we compute a and b? Two points determine a unique straight line (1st-order polynomial); three points determine a unique parabola (2nd-order polynomial). So we need to pick three points that surround the optimum. Let these points be x0, x1, and x2 such that x0 < x1 < x2.

Quadratic Interpolation
a, b, and c can be obtained by solving the system of linear equations
  a x0^2 + b x0 + c = f(x0)
  a x1^2 + b x1 + c = f(x1)
  a x2^2 + b x2 + c = f(x2)
Substituting a and b into x3 = -b / (2a) yields
  x3 = [f(x0)(x1^2 - x2^2) + f(x1)(x2^2 - x0^2) + f(x2)(x0^2 - x1^2)] / {2[f(x0)(x1 - x2) + f(x1)(x2 - x0) + f(x2)(x0 - x1)]}
(the denominator must not be zero, i.e., the three points must not be collinear).

Quadratic Interpolation
The process can be repeated to improve the approximation.
Next step: decide which sub-interval to discard. Assuming f(x3) > f(x1):
  If x3 > x1, discard the interval to the left of x1, i.e., set x0 = x1 and x1 = x3.
  If x3 < x1, discard the interval to the right of x1, i.e., set x2 = x1 and x1 = x3.
Then calculate a new x3 based on the new x0, x1, x2.
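A minimal Python sketch of this successive quadratic interpolation scheme, using the vertex formula above; the test function and starting points are illustrative assumptions:

import math

def quadratic_interp_max(f, x0, x1, x2, tol=1e-8, max_iter=50):
    """Successive quadratic interpolation for a maximum bracketed by x0 < x1 < x2."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    for _ in range(max_iter):
        # Vertex of the parabola through (x0, f0), (x1, f1), (x2, f2).
        num = f0 * (x1**2 - x2**2) + f1 * (x2**2 - x0**2) + f2 * (x0**2 - x1**2)
        den = 2.0 * (f0 * (x1 - x2) + f1 * (x2 - x0) + f2 * (x0 - x1))
        if den == 0.0:                   # the three points are collinear; stop
            break
        x3 = num / den
        f3 = f(x3)
        converged = abs(x3 - x1) < tol
        if x3 > x1:                      # discard the sub-interval left of x1
            x0, f0 = x1, f1
        else:                            # discard the sub-interval right of x1
            x2, f2 = x1, f1
        x1, f1 = x3, f3
        if converged:
            break
    return x1

# Example: maximize f(x) = 2 sin(x) - x^2/10 starting from x0 = 0, x1 = 1, x2 = 4.
f = lambda x: 2 * math.sin(x) - x**2 / 10
print(quadratic_interp_max(f, 0.0, 1.0, 4.0))   # converges toward x = 1.43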

Summary
Basics
  Minimizing f(x) is equivalent to maximizing -f(x).
  If f'(x) exists, then to find the optima of f(x) we can find the zeros of f'(x); beware of inflection points of f(x).
Bracketing methods
  Golden-Section Search and Quadratic Interpolation
  How to select points and discard intervals