1 Unconstrained Optimization
Objective: Find the minimum of F(X), where X is a vector of design variables.
- We may know lower and upper bounds for the optimum
- No constraints

2 Outline
- General optimization strategy
- Optimization of second-degree polynomials
- Zero-order methods
  - Random search
  - Powell's method
- First-order methods
  - Steepest descent
  - Conjugate gradient
- Second-order methods

3 General optimization strategy
Start from an initial design x_0 and set q = 0. Then repeat:
1. Set q = q + 1 and pick a search direction S_q.
2. Perform a one-dimensional search along S_q: x_q = x_(q-1) + α*_q S_q.
3. If converged, exit; otherwise return to step 1.
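A minimal Python sketch of this loop. The pick_direction and line_search arguments are hypothetical placeholders (not part of the slides); later slides supply concrete choices such as Powell's directions or the negative gradient, and scipy.optimize.minimize_scalar can serve as the one-dimensional search.

```python
import numpy as np

def minimize_by_line_searches(F, x0, pick_direction, line_search,
                              tol=1e-8, max_iter=200):
    """Generic strategy: repeatedly pick a direction S_q and do a 1-D search."""
    x = np.asarray(x0, dtype=float)
    for q in range(1, max_iter + 1):
        S = pick_direction(F, x, q)                   # search direction S_q
        alpha = line_search(lambda a: F(x + a * S))   # one-dimensional search for alpha*
        x_new = x + alpha * S                         # x_q = x_(q-1) + alpha* S_q
        if np.linalg.norm(x_new - x) < tol:           # convergence test
            return x_new
        x = x_new
    return x
```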

4 Optimization of second-degree polynomials
Quadratic: F(X) = a_11 x_1^2 + a_12 x_1 x_2 + ... + a_nn x_n^2 = {X}^T [A] {X}
- [A] is equal to one half the Hessian matrix [H]
- There is a linear transformation {X} = [S]{Y} such that F(Y) = λ_1 y_1^2 + ... + λ_n y_n^2 (no coupling terms)
- The columns of [S] are the eigenvectors of [A]: S_1, ..., S_n
- S_1, ..., S_n are also eigenvectors of [H]
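A small numerical check of the decoupling, using an arbitrary illustrative 2x2 matrix [A] (the values are assumptions for the demo, not from the slides):

```python
import numpy as np

# Illustrative symmetric matrix [A] (one half of the Hessian [H])
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

def F(X):
    return X @ A @ X                    # F(X) = {X}^T [A] {X}

lam, S = np.linalg.eigh(A)              # eigenvalues lambda_i, eigenvectors as columns of [S]

def F_decoupled(Y):
    return np.sum(lam * Y**2)           # F(Y) = lambda_1 y_1^2 + ... + lambda_n y_n^2

Y = np.array([0.7, -1.3])               # any point in the transformed coordinates
X = S @ Y                               # {X} = [S]{Y}
print(F(X), F_decoupled(Y))             # the two values agree (up to round-off)
```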

5 Optimization of second-degree polynomials
- Define conjugate directions S_1, ..., S_n
- S_1, ..., S_n are orthogonal (i.e. their dot products are zero) because matrix [A] is symmetric
  - Note that conjugate directions are also linearly independent. Orthogonality is a stronger property than linear independence: orthogonal vectors are always linearly independent, but linearly independent vectors are not necessarily orthogonal.
- λ_i: eigenvalues of [A], which are equal to one half of the eigenvalues of the Hessian matrix
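The slide does not spell out the defining condition; for completeness, the usual definition of conjugacy with respect to [A] is:

```latex
% S_i and S_j are conjugate with respect to the symmetric matrix [A] if
S_i^{\mathsf{T}} A \, S_j = 0 \quad \text{for } i \neq j .
% The eigenvectors of a symmetric [A] satisfy this condition and are, in addition, mutually orthogonal.
```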

6 Optimization of second-degree polynomials
- We can find the exact minimum of a second-degree polynomial by performing n one-dimensional searches in the conjugate directions S_1, ..., S_n
- If all eigenvalues of [A] are positive, then the second-degree polynomial has a unique minimum
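A quick numerical verification of this claim, using an illustrative shifted quadratic so the minimum is away from the origin (the matrix and the shift are assumed for the demo):

```python
import numpy as np

# Illustrative positive-definite [A] and a shifted quadratic with known minimum x_star
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
x_star = np.array([1.0, -2.0])

def F(x):
    d = x - x_star
    return d @ A @ d

def grad(x):
    return 2.0 * A @ (x - x_star)              # Hessian is [H] = 2[A]

lam, S = np.linalg.eigh(A)                     # conjugate (eigenvector) directions S_1, ..., S_n
x = np.array([5.0, 4.0])                       # arbitrary starting point
for j in range(len(x)):                        # n exact one-dimensional searches
    s = S[:, j]
    alpha = -(grad(x) @ s) / (s @ (2.0 * A) @ s)   # exact minimizer along s for a quadratic
    x = x + alpha * s
print(x, x_star)                               # x lands on the true minimum after n searches
```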

7 Zero-order methods: random search
- Random number generator: generates samples of the variables drawn from a specified probability distribution. Available in most programming languages.
- Idea: For F(x_1, ..., x_n), generate random n-tuples {x_1^1, ..., x_n^1}, {x_1^2, ..., x_n^2}, ..., {x_1^N, ..., x_n^N}. Evaluate F at each and keep the minimum.
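A possible implementation sketch, assuming uniform sampling between the lower and upper bounds mentioned on slide 1 (the bounds and sample size below are illustrative):

```python
import numpy as np

def random_search(F, lower, upper, N=10000, rng=None):
    """Zero-order random search: sample N points within the bounds and keep the best.
    The uniform distribution is an assumption; any specified distribution could be used."""
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x, best_f = None, np.inf
    for _ in range(N):
        x = lower + rng.random(lower.size) * (upper - lower)   # random n-tuple
        f = F(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Example: minimize a simple quadratic within the box [-5, 5]^2
x_min, f_min = random_search(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2,
                             lower=[-5, -5], upper=[5, 5])
```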

8 Powell's method
- Efficient, reliable, popular
- Based on conjugate directions, although it does not use the Hessian matrix

9 Searching for the optimum in Powell's method
[Figure: sequence of search directions S_1 through S_6 in the design space]
- First iteration: S_1 to S_3
- Second iteration: S_4 to S_6
- Directions S_3 and S_6 are conjugate
- Present iteration: use the last two search directions from the previous iteration

10 Powell's method: algorithm
Start from x_0. Define a set of n search directions S_q, q = 1, ..., n, initially the coordinate unit vectors. Set x = x_0, y = x, q = 0.
1. Set q = q + 1. Find α* to minimize F(x_(q-1) + α* S_q) and set x_q = x_(q-1) + α* S_q.
2. If q < n, return to step 1.
3. Find the conjugate direction S_(n+1) = x_n - y. Find α* to minimize F(x_n + α* S_(n+1)) and set x_(n+1) = x_n + α* S_(n+1).
4. If converged, exit. Otherwise update the search directions (S_q = S_(q+1), q = 1, ..., n), set y = x_(n+1), and start the next iteration.
One iteration requires n+1 one-dimensional searches.
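A simplified sketch of this basic flowchart (it omits the direction-replacement safeguards of the full Powell method; in practice one might simply call scipy.optimize.minimize with method='Powell'):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def powell_basic(F, x0, tol=1e-8, max_iter=100):
    """Basic Powell iteration: n searches along the current direction set,
    one search along the new conjugate direction x_n - y, then shift the set."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    S = list(np.eye(n))                            # start with coordinate unit vectors
    for _ in range(max_iter):
        y = x.copy()
        for q in range(n):                         # n one-dimensional searches
            alpha = minimize_scalar(lambda a: F(x + a * S[q])).x
            x = x + alpha * S[q]
        S_new = x - y                              # conjugate direction S_(n+1) = x_n - y
        alpha = minimize_scalar(lambda a: F(x + a * S_new)).x
        x = x + alpha * S_new                      # (n+1)-th search
        if np.linalg.norm(alpha * S_new) < tol:    # converged?
            return x
        S = S[1:] + [S_new]                        # update: S_q = S_(q+1), append new direction
    return x
```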

11 Powell's method
- For a second-degree polynomial, the optimum is reached in n iterations
- Each iteration involves n+1 one-dimensional searches
- n(n+1) one-dimensional searches in total

12 First-order methods: steepest descent
- Idea: Search in the direction of the negative gradient, -∇F(x)
  - Starting from a design x, move by a small amount; the objective function decreases most rapidly along -∇F(x)

13 Algorithm
Perform a one-dimensional minimization in the steepest descent direction:
1. Start from x_0.
2. Determine the steepest descent direction S = -∇F(x).
3. Find α* to minimize F(x + α* S) and update the design: x = x + α* S.
4. If converged, stop; otherwise return to step 2.
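A minimal sketch of this loop, using scipy's scalar minimizer for the one-dimensional search (illustrative, not the lecture's reference implementation; the example matrix reuses the values assumed earlier):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(F, grad_F, x0, tol=1e-8, max_iter=500):
    """Steepest descent with an exact line search along S = -grad F(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        S = -grad_F(x)                                      # steepest descent direction
        if np.linalg.norm(S) < tol:                         # gradient ~ 0: converged
            break
        alpha = minimize_scalar(lambda a: F(x + a * S)).x   # find alpha*
        x = x + alpha * S                                   # update design
    return x

# Example on the quadratic used above (illustrative values)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
x_opt = steepest_descent(lambda x: x @ A @ x,
                         lambda x: 2.0 * A @ x,
                         x0=[4.0, -3.0])
```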

14 Steepest descent
- Pros: Easy to implement, robust, makes quick progress at the beginning of the optimization
- Cons: Too slow toward the end of the optimization