UNCONSTRAINED MULTIVARIABLE OPTIMIZATION

Chapter 6: Unconstrained Multivariable Optimization

Chapter 6 covers:
6.1 Function Values Only
6.2 First Derivatives of f (gradient and conjugate direction methods)
6.3 Second Derivatives of f (e.g., Newton's method)
6.4 Quasi-Newton Methods

General Strategy for Gradient Methods
(1) Calculate a search direction.
(2) Select a step length in that direction that reduces f(x).

Steepest Descent
The search direction is the negative gradient, s = -∇f(x); it does not need to be normalized. The method terminates at any stationary point, i.e., any x where ∇f(x) = 0. Why?
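A minimal sketch of this two-step strategy with the steepest-descent direction; the numerical line search via scipy.optimize.minimize_scalar and the quadratic test function are illustrative choices, not part of the slides.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=500):
    """Generic gradient method: pick a direction, then a step length."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:       # stationary point: gradient ~ 0
            break
        s = -g                            # steepest-descent search direction
        # one-dimensional search for the step length alpha
        alpha = minimize_scalar(lambda a: f(x + a * s)).x
        x = x + alpha * s
    return x

# illustrative quadratic test problem (not from the slides)
f = lambda x: x[0]**2 + 4 * x[1]**2
grad = lambda x: np.array([2 * x[0], 8 * x[1]])
print(steepest_descent(f, grad, [2.0, 2.0]))   # -> approximately [0, 0]
```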

So the procedure can stop at a saddle point. To confirm a minimum, you also need to show that the Hessian H(x*) is positive definite there.

Step Length
How does one pick the step length α? Either analytically or numerically.


Analytical Method
How does one minimize a function along a search direction using an analytical method? The direction s is fixed, and you pick α, the step length, to minimize f(x + αs). For a quadratic approximation of f, setting df(x + αs)/dα = 0 gives

α = -[∇f(x)^T s] / [s^T H(x) s]   (6.9)

This yields the minimum of the approximating (quadratic) function.
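For a purely quadratic objective the formula above is exact; a small numerical check with illustrative values (the matrix Q, vector b, and point x are assumptions, not from the slides):

```python
import numpy as np

# illustrative quadratic f(x) = 1/2 x^T Q x + b^T x (not from the slides)
Q = np.array([[2.0, 0.0], [0.0, 8.0]])
b = np.array([0.0, 0.0])

x = np.array([2.0, 2.0])
g = Q @ x + b                    # gradient of f at x
s = -g                           # e.g., the steepest-descent direction

# analytical step length, Eq. (6.9); exact here because f is quadratic
alpha = -(g @ s) / (s @ Q @ s)
print(alpha)                     # optimal step along s
print(x + alpha * s)             # the resulting point
```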

Numerical Method
Use a coarse search first: either a fixed α (α = 1) or a variable α (α = 1, 2, ½, etc.).

Options for optimizing α (a region-elimination sketch follows below):
(1) Interpolation, such as quadratic or cubic
(2) Region elimination (golden section search)
(3) Newton, secant, quasi-Newton methods
(4) Random search
(5) Analytical optimization

Options (1), (3), and (5) are preferred. However, it may not be desirable to optimize α exactly; the effort is often better spent generating new search directions.
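A minimal sketch of option (2), region elimination by golden section search, applied to a one-dimensional function φ(α) = f(x + αs); the bracketing interval and the particular φ are illustrative assumptions.

```python
import math

def golden_section(phi, a, b, tol=1e-6):
    """Minimize a unimodal 1-D function phi(alpha) on [a, b] by region elimination."""
    invphi = (math.sqrt(5) - 1) / 2          # ~0.618, golden-ratio reduction factor
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while (b - a) > tol:
        if phi(c) < phi(d):                  # the minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # the minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# illustrative line-search problem: f(x) = x1^2 + 4*x2^2 along s = -grad f at x = [2, 2]
phi = lambda alpha: (2 - 4 * alpha) ** 2 + 4 * (2 - 16 * alpha) ** 2
print(golden_section(phi, 0.0, 2.0))         # optimal alpha on [0, 2]
```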

Suppose we calculate the gradient at the point x^T = [2 2].


Termination Criteria
(Two sketches of f(x) versus x illustrate the pitfalls.)
- A big change in f(x) but little change in x: the code will stop prematurely if Δx is the sole criterion.
- A big change in x but little change in f(x): the code will stop prematurely if Δf(x) is the sole criterion.

For minimization you can use up to three criteria for termination, typically of the form:
(1) |f(x^(k+1)) - f(x^k)| / |f(x^k)| < ε1   (relative change in f)
(2) ||x^(k+1) - x^k|| / ||x^k|| < ε2   (relative change in x)
(3) ||∇f(x^(k+1))|| < ε3   (gradient norm)
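A sketch of how such tests could be combined in code; the tolerance values and the choice to require all three conditions are illustrative assumptions.

```python
import numpy as np

def converged(x_old, x_new, f_old, f_new, g_new,
              eps_f=1e-8, eps_x=1e-8, eps_g=1e-6):
    """Combine the three termination criteria: relative change in f,
    relative change in x, and the gradient norm. Requiring all three
    guards against stopping on a single misleading criterion."""
    small_df = abs(f_new - f_old) <= eps_f * max(abs(f_old), 1.0)
    small_dx = np.linalg.norm(x_new - x_old) <= eps_x * max(np.linalg.norm(x_old), 1.0)
    small_g = np.linalg.norm(g_new) <= eps_g
    return small_df and small_dx and small_g
```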

Conjugate Search Directions
- An improvement over the gradient method for general quadratic functions
- The basis for many NLP techniques

Two search directions s^i and s^j are conjugate relative to Q if

(s^i)^T Q s^j = 0 for i ≠ j

To minimize f(x) of n variables when the Hessian H is a constant matrix (= Q), you are guaranteed to reach the optimum in n conjugate-direction stages if you minimize exactly at each stage (one-dimensional search).
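A quick numerical illustration of the conjugacy condition; the matrix Q and the two directions are illustrative assumptions.

```python
import numpy as np

Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # illustrative symmetric positive-definite Q

s0 = np.array([1.0, 0.0])
s1 = np.array([1.0, -2.0])       # chosen so that s0^T Q s1 = 0

print(s0 @ Q @ s1)               # 0.0 -> s0 and s1 are conjugate relative to Q
print(s0 @ Q @ s0, s1 @ Q @ s1)  # nonzero, as expected for i = j
```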


Conjugate Gradient Method
Step 1. At x^0 calculate f(x^0) and take the first search direction as the steepest-descent direction, s^0 = -∇f(x^0).
Step 2. Compute x^1 = x^0 + α^0 s^0 by minimizing f(x) with respect to α in the s^0 direction (i.e., carry out a unidimensional search for α^0).
Step 3. Calculate ∇f(x^1). The new search direction is a linear combination of s^0 and -∇f(x^1); for the kth iteration the relation is

s^(k+1) = -∇f(x^(k+1)) + s^k [∇f(x^(k+1))^T ∇f(x^(k+1))] / [∇f(x^k)^T ∇f(x^k)]   (6.6)

For a quadratic function it can be shown that these successive search directions are conjugate. After n iterations (k = n), the quadratic function is minimized. For a nonquadratic function, the procedure cycles again with x^(n+1) becoming x^0.
Step 4. Test for convergence to the minimum of f(x). If convergence is not attained, return to Step 3.
Step n. Terminate the algorithm when ||∇f(x^k)|| is less than some prescribed tolerance.
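A minimal sketch of these steps, using the weighting of Eq. (6.6) (the Fletcher-Reeves form discussed later in the chapter); the numerical line search and the quadratic test function are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def conjugate_gradient(f, grad, x0, tol=1e-8, max_cycles=20):
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_cycles):                    # restart (cycle) every n iterations
        g = grad(x)
        s = -g                                     # Step 1: steepest-descent start
        for _ in range(n):
            if np.linalg.norm(g) < tol:
                return x
            # Step 2: unidimensional search for alpha along s
            alpha = minimize_scalar(lambda a: f(x + a * s)).x
            x = x + alpha * s
            g_new = grad(x)
            # Step 3: Eq. (6.6), weighting of the previous search direction
            w = (g_new @ g_new) / (g @ g)
            s = -g_new + w * s
            g = g_new
    return x

# illustrative quadratic; the exact minimum is reached in at most n = 2 stages
f = lambda x: x[0]**2 + 4 * x[1]**2
grad = lambda x: np.array([2 * x[0], 8 * x[1]])
print(conjugate_gradient(f, grad, [2.0, 2.0]))
```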


Example. Minimize the function given on the slide by the method of conjugate gradients, with the stated initial point x^0. Writing the problem in vector notation, the steepest-descent direction at x^0 is s^0 = -∇f(x^0).

Steepest Descent Step (1-D Search). The objective function can be expressed as a function of α^0; minimizing f(α^0), we obtain f = 3.1594 at α^0 = 0.0555. Hence x^1 = x^0 + α^0 s^0.

Calculate Weighting of the Previous Step. The new gradient ∇f(x^1) can now be determined, and the weight w^0 can be computed as

w^0 = [∇f(x^1)^T ∇f(x^1)] / [∇f(x^0)^T ∇f(x^0)]

Generate the New (Conjugate) Search Direction: s^1 = -∇f(x^1) + w^0 s^0.

One-Dimensional Search. Solving for α^1 as before [i.e., expressing f(x^1 + α^1 s^1) as a function of α^1 and minimizing with respect to α^1] yields f = 5.91 × 10^-10 at α^1 = 0.4986. Hence x^2 = x^1 + α^1 s^1, which is the optimum (reached in 2 steps, in agreement with the theory).


Fletcher-Reeves Conjugate Gradient Method

Derivation:

Imposing conjugacy of successive search directions and solving for the weighting factor gives

w^k = [∇f(x^(k+1))^T ∇f(x^(k+1))] / [∇f(x^k)^T ∇f(x^k)]

which is the coefficient used in Eq. (6.6).

Linear vs. Quadratic Approximation of f(x)


Marquardt's Method
Marquardt's method combines steepest descent and Newton's method: the search direction s^k is obtained from [H(x^k) + β^k I] s^k = -∇f(x^k), where β^k is a positive parameter. β is decreased when a step reduces f (so the method approaches Newton's method near the optimum) and increased when it does not (so the step tends toward scaled steepest descent far from the optimum). The detailed procedure is organized as Steps 1 through 9.
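A minimal sketch of a Marquardt-style iteration following the description above; the starting β, the halving/doubling factors, and the Rosenbrock test problem are illustrative assumptions, not the constants from the slides.

```python
import numpy as np

def marquardt(f, grad, hess, x0, beta=1e3, tol=1e-6, max_iter=500):
    """Marquardt-style method: Newton step with beta*I added to the Hessian.
    beta is decreased after a successful step and increased otherwise."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # solve [H + beta*I] s = -grad f for the search direction
        s = np.linalg.solve(hess(x) + beta * np.eye(len(x)), -g)
        if f(x + s) < f(x):
            x = x + s
            beta *= 0.5          # step accepted: move toward Newton's method
        else:
            beta *= 2.0          # step rejected: move toward steepest descent
    return x

# illustrative test problem (Rosenbrock function), not from the slides
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(marquardt(f, grad, hess, [-1.2, 1.0]))   # -> approximately [1, 1]
```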

Secant Methods
Recall that for a one-dimensional search the secant method uses only values of f(x) and f′(x), not the second derivative. The multivariable analogues (quasi-Newton methods) likewise build up curvature information from gradient values rather than computing the Hessian directly.


• Probably the best update formula is the BFGS update (Broyden-Fletcher-Goldfarb-Shanno), ca. 1970.
• BFGS is the basis for the unconstrained optimizer in the Excel Solver.
• It does not require inverting the Hessian matrix; instead it approximates the inverse Hessian using the changes in x and in the gradient ∇f(x) from successive iterations.
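A minimal sketch of a quasi-Newton iteration with the BFGS update of the inverse-Hessian approximation, built from the changes Δx and Δ∇f mentioned above; the numerical line search, the identity starting matrix, and the Rosenbrock test problem are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def bfgs(f, grad, x0, tol=1e-6, max_iter=200):
    x = np.asarray(x0, dtype=float)
    n = len(x)
    Hinv = np.eye(n)                      # initial approximation to the inverse Hessian
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        s = -Hinv @ g                     # quasi-Newton search direction
        alpha = minimize_scalar(lambda a: f(x + a * s)).x
        x_new = x + alpha * s
        g_new = grad(x_new)
        dx = x_new - x                    # change in x
        dg = g_new - g                    # change in the gradient
        rho = 1.0 / (dg @ dx)
        # BFGS update of the inverse-Hessian approximation
        I = np.eye(n)
        Hinv = (I - rho * np.outer(dx, dg)) @ Hinv @ (I - rho * np.outer(dg, dx)) \
               + rho * np.outer(dx, dx)
        x, g = x_new, g_new
    return x

# illustrative test problem (Rosenbrock function), not from the slides
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(bfgs(f, grad, [-1.2, 1.0]))         # -> approximately [1, 1]
```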
