Engineering Optimization Chapter 3 : Functions of Several Variables (Part 1) Presented by: Rajesh Roy Networks Research Lab, University of California, Davis June 18, 2010

Introduction
- x is a vector of design variables of dimension N.
- There are no constraints on x.
- ƒ is a scalar objective function.
- ƒ and its derivatives exist and are continuous everywhere.
- We will be satisfied to identify local optima x*.
Static Question: Test candidate points to see whether they are (or are not) minima, maxima, saddle points, or none of the above.
Dynamic Question: Given x^(0), a point that does not satisfy the above-mentioned optimality criteria, what is a better estimate x^(1) of the solution x*?
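In symbols, the problem treated throughout this chapter is the unconstrained minimization problem

    minimize ƒ(x),   x = (x_1, x_2, …, x_N)^T ∈ R^N,

with no side conditions on the design variables.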

Introduction
The nonlinear objective ƒ will typically not be convex and may therefore be multimodal, i.e., possess more than one local optimum. Example:
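A classic multimodal function (an illustrative stand-in, not necessarily the example pictured on the original slide) is Himmelblau's function

    ƒ(x_1, x_2) = (x_1² + x_2 − 11)² + (x_1 + x_2² − 7)²,

which has four distinct local minimizers, each with ƒ = 0, so a local search started from different points can converge to different solutions.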

Optimality Criteria
We examine optimality criteria for two reasons:
(1) they are necessary in order to recognize solutions, and
(2) they provide motivation for most of the useful methods.
Consider the Taylor expansion of a function of several variables:
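The expansion in question is the standard second-order Taylor expansion about a candidate point x*:

    ƒ(x) = ƒ(x*) + ∇ƒ(x*)^T Δx + ½ Δx^T ∇²ƒ(x*) Δx + O(‖Δx‖³),   Δx = x − x*,

where ∇ƒ is the gradient (the vector of first partial derivatives) and ∇²ƒ is the Hessian matrix of second partial derivatives.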

Necessary and Sufficient Conditions
Necessary Conditions: ∇ƒ(x*) = 0 and ∇²ƒ(x*) positive semidefinite.
Sufficient Conditions: ∇ƒ(x*) = 0 and ∇²ƒ(x*) positive definite; in that case x* is a strict local minimum. (For a maximum, replace positive by negative (semi)definiteness.)
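In practice, definiteness of the Hessian is tested through its eigenvalues or leading principal minors:

    ∇²ƒ(x*) is positive definite  ⟺  all of its eigenvalues are > 0  ⟺  all of its leading principal minors are > 0.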

Static Question: Example
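A minimal sketch of this kind of candidate-point test (the objective and the candidate point below are hypothetical, chosen only for illustration):

```python
import numpy as np

def f(x):
    # Hypothetical objective: f(x1, x2) = x1^2 + 2*x2^2 - 2*x1*x2 - 4*x1
    return x[0]**2 + 2*x[1]**2 - 2*x[0]*x[1] - 4*x[0]

def grad_f(x):
    # Analytical gradient of f
    return np.array([2*x[0] - 2*x[1] - 4, 4*x[1] - 2*x[0]])

def hess_f(x):
    # Analytical Hessian of f (constant, since f is quadratic)
    return np.array([[2.0, -2.0], [-2.0, 4.0]])

def classify(x, tol=1e-8):
    """Apply the first- and second-order optimality tests at a candidate point."""
    g, H = grad_f(x), hess_f(x)
    if np.linalg.norm(g) > tol:
        return "not a stationary point"
    eig = np.linalg.eigvalsh(H)
    if np.all(eig > tol):
        return "local minimum"
    if np.all(eig < -tol):
        return "local maximum"
    return "saddle point or inconclusive"

# Stationary point: solving grad_f(x) = 0 gives x* = (4, 2)
x_star = np.array([4.0, 2.0])
print(classify(x_star))   # -> "local minimum"
```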

Dynamic Question: Searching for x*
The methods can be classified into three broad categories:
1. Direct-search methods, which use only function values:
   - The S² (Simplex Search) Method
   - Hooke–Jeeves Pattern Search Method
   - Powell’s Conjugate Direction Method
2. Gradient methods, which require estimates of the first derivative of ƒ(x):
   - Cauchy’s Method
3. Second-order methods, which require estimates of the first and second derivatives of ƒ(x):
   - Newton’s Method
   - …
Motivation behind the different classes of methods:
- Available computer storage is limited.
- Function evaluations are very time consuming.
- Great accuracy in the final solution is desired.
- Sometimes it is either impossible or very time consuming to obtain analytical expressions for derivatives.

The S² (Simplex Search) Method
* In N dimensions, a regular simplex is a polyhedron composed of N + 1 equidistant points, which form its vertices.
1. Set up a regular simplex* in the space of the independent variables and evaluate the function at each vertex.
2. Locate the vertex with the highest function value.
3. Reflect this ‘‘worst’’ vertex through the centroid of the remaining vertices to generate a new point, which is used to complete the next simplex.
4. Return to Step 2 as long as the performance index decreases smoothly.
Reflection: Suppose x^(j) is the point to be reflected. Then the centroid of the remaining N points is

    x_c = (1/N) Σ_{i ≠ j} x^(i).

All points on the line from x^(j) through x_c are given by

    x = x^(j) + λ (x_c − x^(j)),   λ ≥ 0.

New Vertex Point: the value λ = 2 places the new vertex at the reflection of x^(j) through the centroid,

    x_new = 2 x_c − x^(j).
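A minimal sketch of the reflection step in Python (the test function is hypothetical; the full S² method also includes rules for avoiding cycling and for contracting the simplex, which are omitted here):

```python
import numpy as np

def f(x):
    # Hypothetical test objective with minimum at (1, -2)
    return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

def reflect_worst(simplex, fvals):
    """One S^2 step: reflect the worst vertex through the centroid of the others."""
    j = int(np.argmax(fvals))                  # index of the worst (highest) vertex
    others = np.delete(simplex, j, axis=0)     # the remaining N vertices
    x_c = others.mean(axis=0)                  # centroid of the remaining vertices
    x_new = 2.0 * x_c - simplex[j]             # lambda = 2 on the line from x(j) through x_c
    simplex[j], fvals[j] = x_new, f(x_new)
    return simplex, fvals

# Initial simplex in R^2 (N + 1 = 3 vertices; not exactly regular, which is fine for a sketch)
simplex = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
fvals = np.array([f(v) for v in simplex])
for _ in range(10):
    simplex, fvals = reflect_worst(simplex, fvals)
print(simplex[np.argmin(fvals)], fvals.min())   # best vertex found so far
```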

Hooke–Jeeves Pattern Search Method
Exploratory Moves:
1. Given a specified step size, the exploration proceeds from an initial point by perturbing it by the step size in each coordinate direction in turn.
2. If the function value does not increase, the step is considered successful.
3. Otherwise, the step is retracted and replaced by a step in the opposite direction, which in turn is retained or rejected depending upon whether it succeeds or fails.
Pattern Moves:
1. A single step is taken from the present base point along the line from the previous base point to the current base point, as sketched in the code below.
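A compact sketch of the Hooke–Jeeves logic (the step-reduction factor, stopping rule, and test function are implementation choices for illustration, not taken from the slides):

```python
import numpy as np

def exploratory_move(f, base, step):
    """Perturb each coordinate by +/- step, keeping any change that does not increase f."""
    x = base.copy()
    fx = f(x)
    for i in range(len(x)):
        for delta in (step, -step):
            trial = x.copy()
            trial[i] += delta
            ft = f(trial)
            if ft <= fx:                 # success: keep this step
                x, fx = trial, ft
                break                    # move on to the next coordinate
    return x, fx

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    base = np.asarray(x0, dtype=float)
    fbase = f(base)
    for _ in range(max_iter):
        x, fx = exploratory_move(f, base, step)
        if fx < fbase:
            # Pattern move: step from the new base along (new base - old base),
            # then explore around the resulting pattern point.
            pattern = x + (x - base)
            base, fbase = x, fx
            xp, fp = exploratory_move(f, pattern, step)
            if fp < fbase:
                base, fbase = xp, fp
        else:
            step *= shrink               # exploration failed: reduce the step size
            if step < tol:
                break
    return base, fbase

f = lambda x: (x[0] - 3.0)**2 + 10.0 * (x[1] + 1.0)**2   # hypothetical test function
print(hooke_jeeves(f, [0.0, 0.0]))       # expected to end up near ([3, -1], 0)
```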

Hooke-Jeeves Pattern Search Method

Hooke-Jeeves Pattern Search Method Example:
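As a usage illustration (a hypothetical example, not necessarily the one worked on the original slide), the sketch above can be applied to a quadratic whose minimizer is known:

```python
# Continues the hooke_jeeves sketch above (hypothetical example)
f = lambda x: 8.0 * x[0]**2 + 4.0 * x[0] * x[1] + 5.0 * x[1]**2   # minimum at the origin
x_best, f_best = hooke_jeeves(f, [4.0, 4.0])
print(x_best, f_best)   # expected to be close to [0, 0] and 0
```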

Powell’s Conjugate Direction Method
If a quadratic function in N variables can be transformed so that it is just the sum of perfect squares, then the optimum can be found after exactly N single-variable searches, one with respect to each of the transformed variables.
Theorem: Given a quadratic function q(x), two arbitrary but distinct points x^(1) and x^(2), and a direction d, if y^(1) is the solution to min_λ q(x^(1) + λd) and y^(2) is the solution to min_λ q(x^(2) + λd), then the direction y^(2) − y^(1) is C-conjugate to d, where C is the Hessian of q; that is, (y^(2) − y^(1))^T C d = 0.
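A small numerical check of this parallel-subspace property (the quadratic, the direction, and the starting points below are arbitrary choices for illustration):

```python
import numpy as np

# Quadratic q(x) = 0.5 x^T C x + b^T x with an arbitrary positive-definite C
C = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, 2.0])

def line_minimize(x, d):
    """Exact minimizer of q along x + lam * d (closed form for a quadratic)."""
    lam = -(d @ (C @ x + b)) / (d @ C @ d)
    return x + lam * d

d = np.array([1.0, -2.0])
y1 = line_minimize(np.array([0.0, 0.0]), d)   # line search from x(1)
y2 = line_minimize(np.array([5.0, 1.0]), d)   # line search from x(2)

# By the theorem, (y2 - y1) is C-conjugate to d: the product below is ~0 (round-off only)
print((y2 - y1) @ C @ d)
```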

Gradient-Based Methods
All of the methods considered here employ a similar iteration procedure:
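The shared iteration can be written as

    x^(k+1) = x^(k) + α^(k) s(x^(k)),

where s(x^(k)) is the search direction at the k-th iterate, built from first- (and possibly second-) derivative information, and α^(k) is a step-length parameter, usually determined by a line search along s(x^(k)).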

Cauchy’s Method
Taylor expansion of the objective about x:

    ƒ(x + Δx) ≈ ƒ(x) + ∇ƒ(x)^T Δx.

For a step of given length, the greatest negative scalar product ∇ƒ(x)^T Δx results from the choice

    Δx = −α ∇ƒ(x),   α > 0.

This is the motivation for the simple gradient (steepest-descent) method:

    x^(k+1) = x^(k) − α^(k) ∇ƒ(x^(k)).

Example:
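A minimal steepest-descent sketch (the fixed step size, the stopping rule, and the test function are illustrative choices; in practice α^(k) would come from a line search):

```python
import numpy as np

def f(x):
    # Hypothetical objective with minimum at (2, -1)
    return (x[0] - 2.0)**2 + 4.0 * (x[1] + 1.0)**2

def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 8.0 * (x[1] + 1.0)])

def cauchy(x0, alpha=0.1, tol=1e-6, max_iter=1000):
    """Simple gradient (steepest-descent) method with a fixed step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:     # (approximately) stationary point reached
            break
        x = x - alpha * g               # x(k+1) = x(k) - alpha * grad f(x(k))
    return x

print(cauchy([0.0, 0.0]))   # expected to approach [2, -1]
```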

Newton’s Method
Consider again the Taylor expansion of the objective:

    ƒ(x^(k) + Δx) = ƒ(x^(k)) + ∇ƒ(x^(k))^T Δx + ½ Δx^T ∇²ƒ(x^(k)) Δx + O(‖Δx‖³).

We form a quadratic approximation to ƒ(x) by dropping the terms of order 3 and higher, and we force x^(k+1), the next point in the sequence, to be a point where the gradient of this approximation is zero. Therefore,

    ∇ƒ(x^(k)) + ∇²ƒ(x^(k)) Δx = 0,

so, according to Newton’s optimization method,

    x^(k+1) = x^(k) − [∇²ƒ(x^(k))]^(-1) ∇ƒ(x^(k)).
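A corresponding Newton sketch (same hypothetical test function as in the Cauchy example above; the pure Newton step shown here assumes the Hessian is positive definite):

```python
import numpy as np

def newton(grad_f, hess_f, x0, tol=1e-8, max_iter=50):
    """Newton's optimization method: solve hess * dx = -grad at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        dx = np.linalg.solve(hess_f(x), -g)   # Newton step (avoids forming the explicit inverse)
        x = x + dx
    return x

# Hypothetical objective: f(x) = (x1 - 2)^2 + 4*(x2 + 1)^2
grad_f = lambda x: np.array([2.0 * (x[0] - 2.0), 8.0 * (x[1] + 1.0)])
hess_f = lambda x: np.array([[2.0, 0.0], [0.0, 8.0]])
print(newton(grad_f, hess_f, [0.0, 0.0]))   # converges in one step for a quadratic: [2, -1]
```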