Optimization Methods: Multi-Dimensional Unconstrained Optimization (Part I: Non-gradient Methods)

Optimization: Multi-Dimensional Unconstrained Optimization, Part I: Non-gradient Methods

Optimization Methods
One-Dimensional Unconstrained Optimization
- Golden-Section Search
- Quadratic Interpolation
- Newton's Method
Multi-Dimensional Unconstrained Optimization
- Non-gradient or direct methods
- Gradient methods
Linear Programming (Constrained)
- Graphical Solution
- Simplex Method

Multidimensional Unconstrained Optimization
Techniques to find the minimum and maximum of f(x1, x2, x3, …, xn).
Two classes of techniques:
- Those that do not require derivative evaluation: non-gradient or direct methods.
- Those that require derivative evaluation: gradient or descent (or ascent) methods.

2-D Contour View of f(x, y)

Direct Methods: Random Search

max = -∞
for i = 1 to N
    for each variable xj
        xj = a value randomly selected from a given interval
    if f(x1, x2, x3, …, xn) > max
        max = f(x1, x2, x3, …, xn)

Notes: N has to be sufficiently large, and the random numbers have to be evenly (uniformly) distributed over the interval.
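
The pseudocode above translates almost directly into code. Below is a minimal Python/NumPy sketch; the function name random_search, the search box, the sample count, and the seed are illustrative choices, not part of the slides.

```python
import numpy as np

def random_search(f, lower, upper, n_samples=100_000, seed=0):
    """Maximize f over the box [lower, upper] by uniform random sampling."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    best_x, best_f = None, -np.inf
    for _ in range(n_samples):
        x = rng.uniform(lower, upper)      # one uniformly random point in the box
        fx = f(x)
        if fx > best_f:                    # keep the best point seen so far
            best_x, best_f = x, fx
    return best_x, best_f

# Illustrative test: the two-variable function used later in these slides.
f = lambda v: v[1] - v[0] - 2*v[0]**2 - 2*v[0]*v[1] - v[1]**2
x_best, f_best = random_search(f, lower=[-2, -2], upper=[2, 2])
print(x_best, f_best)   # should land near the true maximum f(-1, 1.5) = 1.25
```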

Random Search: Advantages and Disadvantages
Advantages:
- Works even for discontinuous and non-differentiable functions.
- Given enough samples, it finds the global optimum rather than a local optimum.
Disadvantages:
- As the number of independent variables grows, the task can become onerous.
- Not efficient: it does not account for the behavior of the underlying function.

Finding the Optimum Systematically
Basic idea (like climbing a mountain): if we keep moving upward, we will eventually reach the peak. But which path should you take?
Question: if we start from an arbitrary point, how should we "move" so that we can locate the peak in the shortest amount of time?
- Make a good guess of the direction toward the peak.
- Minimize computation.
(Figure: you are here; the peak is covered by the cloud.)

General Optimization Algorithm
All the methods discussed subsequently are iterative methods that can be generalized as:

Start at x0 = { x1, x2, …, xn }
Repeat
    Select a direction Si
    xi+1 = optimal point reached by traveling from xi in the direction Si
Until ( |(f(xi+1) - f(xi)) / f(xi+1)| < es1  or  ||xi+1 - xi|| / ||xi+1|| < es2 )
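
As a rough Python sketch of this generic loop, the code below leaves the direction selection and the one-dimensional line search as pluggable callables, since each method in the remaining slides supplies its own. The function name, tolerance defaults, and iteration cap are illustrative, and guards against division by zero are omitted.

```python
import numpy as np

def iterate_to_optimum(f, x0, select_direction, line_search,
                       es1=1e-6, es2=1e-6, max_iter=200):
    """Generic iterative optimizer: pick a direction, move to the best point
    along it, and stop when the relative change in f or x is small enough."""
    x = np.asarray(x0, dtype=float)
    for i in range(max_iter):
        s = select_direction(i, x)        # e.g. a coordinate axis or a conjugate direction
        x_new = line_search(f, x, s)      # best point along the ray x + alpha * s
        # Relative-change stopping tests from the slide (no zero-division guards here).
        f_change = abs((f(x_new) - f(x)) / f(x_new))
        x_change = np.linalg.norm(x_new - x) / np.linalg.norm(x_new)
        x = x_new
        if f_change < es1 or x_change < es2:
            break
    return x
```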

Univariate Search
Idea: travel in alternating directions that are parallel to the coordinate axes. In each direction, we travel until we reach the peak along that direction, and then select a new direction.

Univariate Search
More efficient than random search, and it still does not require derivative evaluation. The basic strategy is:
- Change one variable at a time while the other variables are held constant.
- The problem is thus reduced to a sequence of one-dimensional searches.
The search becomes less efficient as you approach the maximum. (Why?)

Univariate Search: Example
f(x, y) = y - x - 2x^2 - 2xy - y^2

Step 1: Set x = 0 (pick a value as the starting point).
Maximize f(0, y) = y - y^2.
Solving f' = 0: 1 - 2y = 0, so ymax = 0.5.

Step 2: Set y = 0.5.
Maximize f(x, 0.5) = 0.5 - x - 2x^2 - x - 0.25.
Solving f' = 0: -1 - 4x - 1 = 0, so xmax = -0.5.

Step 3: Set x = -0.5.
Maximize f(-0.5, y) = y + 0.5 - 0.5 + y - y^2.
Solving f' = 0: 1 + 1 - 2y = 0, so ymax = 1.

… Repeat until xi+1 = xi or yi+1 = yi, or until ea < es.
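
These steps can also be checked numerically. The sketch below alternates exact one-dimensional maximizations over y and x; it uses scipy.optimize.minimize_scalar on the negated function as the line search, which is an implementation choice rather than something from the slides.

```python
from scipy.optimize import minimize_scalar

f = lambda x, y: y - x - 2*x**2 - 2*x*y - y**2

x, y = 0.0, 0.0                          # Step 1 starts by fixing x = 0
for step in range(6):
    if step % 2 == 0:                    # even steps: hold x, maximize over y
        y = minimize_scalar(lambda t: -f(x, t)).x
    else:                                # odd steps: hold y, maximize over x
        x = minimize_scalar(lambda t: -f(t, y)).x
    print(f"step {step + 1}: x = {x:.4f}, y = {y:.4f}, f = {f(x, y):.4f}")

# The first iterations match the hand calculation (y = 0.5, x = -0.5, y = 1, ...)
# and the sequence zigzags toward the true maximum f(-1, 1.5) = 1.25.
```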

Pattern Search Methods
Observation: lines connecting alternating points (1:3, 2:4, 3:5, etc.) give a better indication of where the peak is than the lines parallel to the coordinate axes. The general directions that point toward the optimum are known as pattern directions. Optimization methods that use pattern directions to improve the convergence rate are known as pattern search methods.

Powell's Method
Powell's method (a well-known pattern search method) is based on the observation that if points 1 and 2 are obtained by one-dimensional searches in the same direction but from different starting points, then the line formed by points 1 and 2 will be directed toward the maximum. The directions represented by such lines are called conjugate directions.

Start at point 0 and pre-select two initial directions, S1 and S2. Let S1 be parallel to the y-axis and S2 be parallel to the x-axis. (Each direction will only be used twice.)

Move from point 0 in direction S1 to point 1, then from point 1 in direction S2 to point 2. (Each direction has been traversed once so far.)

Move from point 2 in direction S1 to point 3. S3 = conjugate direction formed by points 1 and 3. Drop S1 (as it has been traversed twice) and add S3.

Move from point 3 in direction S3 to point 4. (We want to move in a conjugate direction whenever one becomes available.)

Move from point 4 in direction S2 to point 5 (we already moved in direction S3 in the previous step). S4 = conjugate direction formed by points 2 and 5. Drop S2 (as it has been used twice) and add S4.

Move from point 5 in direction S4 to point 6.

Move from point 6 in direction S3 to point 7. S5 = conjugate direction formed by points 4 and 7. Drop S3 (as it has been used twice) and add S5. The process continues until it converges.
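
For practical use, SciPy ships an optimizer based on Powell's conjugate-direction idea, so the walkthrough above can be exercised without coding the bookkeeping by hand. The sketch below maximizes the example function by minimizing its negative; the starting point is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda v: v[1] - v[0] - 2*v[0]**2 - 2*v[0]*v[1] - v[1]**2

res = minimize(lambda v: -f(v), x0=np.array([0.0, 0.0]), method="Powell")
print(res.x, -res.fun)   # expected to be close to (-1.0, 1.5) and 1.25
```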

Quadratically Convergent
Definition: if an optimization method, using exact arithmetic, can find the optimum of a quadratic function in n variables in n steps, the method is called quadratically convergent.
If f(x) is a quadratic function, a sequential search along conjugate directions will converge quadratically, that is, in a finite number of steps, regardless of the starting point.

Conjugate-based Methods
Since general non-linear functions can often be reasonably approximated by a quadratic function, methods based on conjugate directions are usually quite efficient and are in fact quadratically convergent as they approach the optimum.

Summary
- Random Search
- General algorithm for locating an optimum point: guess a direction, then find the maximum point in the guessed direction
- Univariate Search
- Conjugate directions
- Powell's Method