Method of Hooke and Jeeves

Multidimensional Search Without Using Derivatives In this section we consider the problem of minimizing a function f of several variables without using derivatives. The methods described here proceed in the following manner.

Multidimensional Search Without Using Derivatives Given a vector x, a suitable direction d is first determined, and then f is minimized from x in the direction d by one of the techniques discussed earlier. Here x = (x1, x2, …, xn), d is the search direction, and f(x + λd) is minimized over the step length λ.

What is the problem? We are required to solve a line search problem of the form: minimize f(x + λd) subject to λ ∈ L, where L is typically of the form L = { λ : λ ≥ 0 }, L = ℝ, or L = { λ : a ≤ λ ≤ b }.

What is the problem? In the statements of the algorithms, for simplicity we have assumed that a minimizing point exists. However, this may not be the case. The optimal objective value of the line search problem may be:
Unbounded: then the original problem is unbounded and we may stop.
Finite but not achieved at any particular λ: then a value λ̄ could be chosen such that f(x + λ̄d) is sufficiently close to inf { f(x + λd) : λ ∈ L }.
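The line search subproblem can be solved numerically. Below is a minimal sketch assuming a bounded interval L = [a, b]; the function name line_search and the default bounds are illustrative choices, not part of the original slides.

# A minimal sketch of the line search subproblem: minimize f(x + lam*d) over lam in L = [a, b].
import numpy as np
from scipy.optimize import minimize_scalar

def line_search(f, x, d, bounds=(-10.0, 10.0)):
    """Return a step length lam that approximately minimizes f(x + lam*d) over bounds."""
    phi = lambda lam: f(np.asarray(x, dtype=float) + lam * np.asarray(d, dtype=float))
    return minimize_scalar(phi, bounds=bounds, method="bounded").x

# Example: a line search along the first coordinate direction of a simple quadratic.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
lam = line_search(f, [0.0, 0.0], [1.0, 0.0])   # approximately 1.0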

Method of Hooke and Jeeves The method of Hooke and Jeeves performs two types of search: an exploratory search and a pattern search.

Method of Hooke and Jeeves Given x1, an exploratory search along the coordinate directions produces the point x2. Now a pattern search along the direction x2 − x1 leads to the point y. (Figure: exploratory search along the coordinate axes from x1 to x2, followed by a pattern search to y.)

Method of Hooke and Jeeves Another exploratory search starting from y gives the point x3. The next pattern search is conducted along the direction x3 − x2, yielding y'. The process is then repeated. (Figure: exploratory searches along the coordinate axes and pattern searches through x1, x2, x3, with intermediate points y and y'.)

Summary of the Method of Hooke and Jeeves Using Line Searches As originally proposed by Hooke and Jeeves, the method does not perform any line search but rather takes discrete steps along the search directions, as we discuss later. Here we present a continuous version of the method using line searches along the coordinate directions d1 ,..., dn and the pattern direction.

Initialization Step Choose a scalar ε > 0 to be used in terminating the algorithm. Choose a starting point x1, let y1 = x1, and let k = j = 1. Go to the Main Step.

Main Step Step 1: Let λj be an optimal solution to the problem: minimize f(yj + λdj) subject to λ ∈ ℝ, and let yj+1 = yj + λj dj. If j < n, replace j by j + 1 and repeat Step 1. Otherwise, if j = n, let xk+1 = yn+1. If ||xk+1 − xk|| < ε, stop; otherwise, go to Step 2.

Main Step Step 2: Let d = xk+1 − xk, and let λ̂ be an optimal solution to the problem: minimize f(xk+1 + λd) subject to λ ∈ ℝ. Let y1 = xk+1 + λ̂d, let j = 1, replace k by k + 1, and go to Step 1.
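Steps 1 and 2 can be put together as follows. This is a minimal sketch, not the original authors' code: the unconstrained line search over λ ∈ ℝ is approximated by a bounded scalar minimization, and the function name, bounds, and iteration cap are illustrative additions.

# Sketch of the Hooke and Jeeves method using line searches (Steps 1 and 2 above).
import numpy as np
from scipy.optimize import minimize_scalar

def hooke_jeeves_line_search(f, x1, eps=1e-3, lam_bounds=(-10.0, 10.0), max_iter=100):
    n = len(x1)
    x = np.asarray(x1, dtype=float)          # current point x_k
    y = x.copy()                             # y_1
    for _ in range(max_iter):
        # Step 1: exploratory search, line searches along d_1, ..., d_n.
        for j in range(n):
            d = np.zeros(n)
            d[j] = 1.0
            lam = minimize_scalar(lambda t: f(y + t * d),
                                  bounds=lam_bounds, method="bounded").x
            y = y + lam * d
        x_new = y                            # x_{k+1} = y_{n+1}
        if np.linalg.norm(x_new - x) < eps:
            return x_new
        # Step 2: pattern search along d = x_{k+1} - x_k.
        d = x_new - x
        lam = minimize_scalar(lambda t: f(x_new + t * d),
                              bounds=lam_bounds, method="bounded").x
        y = x_new + lam * d                  # new y_1
        x = x_new                            # k <- k + 1
    return x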

Example I Consider the following problem: minimize (x1 − 2)^4 + (x1 − 2x2)^2. Note that the optimal solution is (2, 1), with objective value equal to zero. At each iteration, an exploratory search along the coordinate directions gives the points y2 and y3, and a pattern search along the direction d = xk+1 − xk gives the point y1 (except at iteration k = 1, where y1 = x1). Four iterations were required to move from the initial point to the optimal point (2, 1), whose objective value is zero. At this point, ||x5 − x4|| = 0.045 and the procedure is terminated.
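As an illustration only (the slides give no code), Example I can be run through the sketch above. It reuses hooke_jeeves_line_search from the previous sketch; the starting point (0.0, 3.0) is an assumption borrowed from the later discrete-step example.

# Illustrative run of the line-search sketch on Example I.
import numpy as np

f = lambda x: (x[0] - 2.0) ** 4 + (x[0] - 2.0 * x[1]) ** 2
x_star = hooke_jeeves_line_search(f, np.array([0.0, 3.0]), eps=0.05)
print(x_star)   # should be close to the optimal point (2, 1)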

Summary of Computations for the Method of Hooke and Jeeves Using Line Searches

Method of Hooke and Jeeves using line searches.

Point The pattern search has substantially improved the convergence behavior by moving along a direction that is almost parallel to the valley shown by the dashed lines.

Convergence of the Method of Hooke and Jeeves Suppose that f is differentiable, and let the solution set be Ω = { x̄ : ∇f(x̄) = 0 }. Note: Each iteration of the method of Hooke and Jeeves consists of an application of the cyclic coordinate method followed by a pattern search. Let the cyclic coordinate search be denoted by the map B and the pattern search by the map C. If the minimum of f along any line is unique, then, letting α = f, we have α(y) < α(x) for x ∉ Ω and y ∈ B(x). By the definition of C, α(z) ≤ α(y) for z ∈ C(y). Assuming that the level set Λ = { x : f(x) ≤ f(x1) }, where x1 is the starting point, is compact, convergence of the procedure is established.

Method of Hooke and Jeeves with Discrete Steps The method of Hooke and Jeeves, as originally proposed, does not perform line searches but, instead, adopts a simple scheme involving functional evaluations. A summary of the method is given below:

Initialization Step Let d1, ..., dn be the coordinate directions. Choose a scalar ε > 0 to be used for terminating the algorithm, an initial step size Δ ≥ ε, and an acceleration factor α > 0. Choose a starting point x1, let y1 = x1, and let k = j = 1. Go to the Main Step.

Main Step Step 1 (comparing f(yj + Δdj) and f(yj)): If f(yj + Δdj) < f(yj), the trial is termed a success; let yj+1 = yj + Δdj and go to Step 2. If f(yj + Δdj) ≥ f(yj), the trial is deemed a failure. In this case, if f(yj − Δdj) < f(yj), let yj+1 = yj − Δdj and go to Step 2; if f(yj − Δdj) ≥ f(yj), let yj+1 = yj and go to Step 2.

Main Step Step 2 (comparing j and n): If j < n, replace j by j + 1 and repeat Step 1. Otherwise, if f(yn+1) < f(xk), go to Step 3; if f(yn+1) ≥ f(xk), go to Step 4.
Step 3: Let xk+1 = yn+1 and y1 = xk+1 + α(xk+1 − xk). Replace k by k + 1, let j = 1, and go to Step 1.

Main Step Step 4 (comparing Δ and ε): If Δ ≤ ε, stop; xk is the prescribed solution. Otherwise, replace Δ by Δ/2, let y1 = xk, let xk+1 = xk, replace k by k + 1, let j = 1, and repeat Step 1.
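The four steps above can be summarized in code. This is a minimal sketch under the same notation; the function name and the max_iter safeguard are illustrative additions, not part of the original method.

# Sketch of the Hooke and Jeeves method with discrete steps (Steps 1-4 above).
import numpy as np

def hooke_jeeves_discrete(f, x1, delta=0.2, alpha=1.0, eps=0.1, max_iter=1000):
    n = len(x1)
    x = np.asarray(x1, dtype=float)          # x_k
    y = x.copy()                             # y_1
    for _ in range(max_iter):
        # Steps 1 and 2: exploratory search with discrete steps +/- delta along each axis.
        for j in range(n):
            d = np.zeros(n)
            d[j] = delta
            if f(y + d) < f(y):              # success
                y = y + d
            elif f(y - d) < f(y):            # failure at +d, success at -d
                y = y - d
            # otherwise both trials fail and y_j is kept
        if f(y) < f(x):
            # Step 3: accept y_{n+1} and take the acceleration (pattern) step.
            x_new = y
            y = x_new + alpha * (x_new - x)
            x = x_new
        else:
            # Step 4: no improvement; stop or halve the step size.
            if delta <= eps:
                return x
            delta = delta / 2.0
            y = x.copy()
    return x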

Note Steps 1 and 2 describe an exploratory search. Step 3 is an acceleration step along the direction xk+1 − xk. A decision whether to accept or reject the acceleration step is not made until after an exploratory search is performed. In Step 4, the step size Δ is reduced. The procedure could easily be modified so that different step sizes are used along the different directions; this is sometimes adopted for the purpose of scaling.

Example II: Solve the problem using the method of Hooke and Jeeves with discrete steps. Consider the following problem: minimize (x1 − 2)^4 + (x1 − 2x2)^2, in which the parameters α and Δ are chosen as 1.0 and 0.2, respectively. The algorithm starts from (0.0, 3.0). The points generated are numbered sequentially. The acceleration step that is rejected is shown by the dashed lines.
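A run of the discrete-step sketch above with the slide's parameters might look like the following; it is illustrative only and reuses hooke_jeeves_discrete defined earlier.

# Illustrative run with the slide's parameters: alpha = 1.0, delta = 0.2,
# starting point (0.0, 3.0), termination parameter eps = 0.1.
import numpy as np

f = lambda x: (x[0] - 2.0) ** 4 + (x[0] - 2.0 * x[1]) ** 2
x_final = hooke_jeeves_discrete(f, np.array([0.0, 3.0]),
                                delta=0.2, alpha=1.0, eps=0.1)
print(x_final, f(x_final))   # should end near the optimum (2, 1)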

Method of Hooke and Jeeves using discrete steps starting from (0.0, 3.0). The numbers denote the order in which points are generated.

The following table gives a more comprehensive illustration. It summarizes the computations starting from the new initial point (2.0, 3.0). Here, (S) denotes that a trial is a success and (F) denotes that a trial is a failure. Whenever f(y3) ≥ f(xk), the vector y1 is taken as xk; otherwise, y1 = 2xk+1 − xk. Note: At the end of iteration k = 10, the point (1.70, 0.80) is reached, having an objective value of 0.02. The procedure is stopped here with the termination parameter ε = 0.1. If a greater degree of accuracy is required, Δ should be reduced to 0.05.

The Summary Table of Computations for the Method of Hooke and Jeeves with Discrete Steps

Method of Hooke and Jeeves using line searches. The numbers denote the order in which points are generated.

Thank you all for your attention