Optimization of thermal processes


1 Optimization of thermal processes
Lecture 8
Maciej Marek
Czestochowa University of Technology, Institute of Thermal Machinery
Optimization of thermal processes 2007/2008

2 Overview of the lecture
- Indirect search (descent) methods
- Steepest descent (Cauchy) method: the concept of the method, an elementary example, a practical example (optimal design of a three-stage compressor), possible problems with the steepest descent method
- Conjugate directions methods
- Davidon-Fletcher-Powell method

3 General types of iterative methods
- Direct search methods (discussed in the previous lecture): only the value of the objective function is required (Gauss-Seidel and Powell's methods are good examples).
- Indirect (descent) methods: methods of this type require not only the value of the objective function but also the values of its derivatives. Thus, we need the gradient of the function,
  \nabla f = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\,\mathbf{u}_i,
  where \mathbf{u}_i is the unit vector of the i-th axis.
[Figure: 1D case, descent direction leading toward the minimum]

4 Indirect search (descent) methods
2D case. The gradient is
\nabla f = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2} \right).
The gradient points in the direction of steepest ascent, so -\nabla f is the steepest descent direction.
[Figure: contours with the peak and the minimum, descent directions marked]

5 Indirect search (descent) methods
The gradient is always perpendicular to the isocontours of the objective function. The step length may be constant (this is the idea of the simple gradient method) or may be found with some one-dimensional optimization technique. All descent methods make use of the gradient vector, but in some of them the gradient is only one of the components needed to find the search direction.
[Figure: isocontours with the gradient and the descent direction leading to the optimum]

6 Calculation of the gradient vector
To find the gradient we need to calculate the partial derivatives of the objective function. But this may lead to certain problems:
- the function is differentiable, but the calculation of the components of the gradient is either impractical or impossible
- the partial derivatives can be calculated, but it requires a lot of computational time
- the gradient is not defined at all points
In the first (or the second) case we can use e.g. the forward finite-difference formula
\frac{\partial f}{\partial x_i}(\mathbf{X}) \approx \frac{f(\mathbf{X} + \Delta x_i \mathbf{u}_i) - f(\mathbf{X})}{\Delta x_i},
where \mathbf{u}_i is the vector whose i-th component has the value 1 and all other components are zero (the unit vector of the i-th axis), and \Delta x_i is a small scalar quantity (choose it carefully!).
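A minimal numerical sketch of this forward-difference formula (the function name gradient_forward and the default step dx are illustrative, not from the lecture):

```python
import numpy as np

def gradient_forward(f, x, dx=1e-6):
    """Forward-difference gradient: df/dx_i ~ (f(x + dx*u_i) - f(x)) / dx,
    where u_i is the unit vector of the i-th axis."""
    x = np.asarray(x, dtype=float)
    f0 = f(x)                      # one evaluation shared by all components
    grad = np.empty_like(x)
    for i in range(x.size):
        u = np.zeros_like(x)       # unit vector of the i-th axis
        u[i] = 1.0
        grad[i] = (f(x + dx * u) - f0) / dx
    return grad
```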

7 Calculation of the gradient vector
The scalar quantity \Delta x_i (the grid step) cannot be:
- too large: the truncation error of the finite-difference formula may be large
- too small: the numerical round-off error may be unacceptable
We can use the central finite-difference scheme
\frac{\partial f}{\partial x_i}(\mathbf{X}) \approx \frac{f(\mathbf{X} + \Delta x_i \mathbf{u}_i) - f(\mathbf{X} - \Delta x_i \mathbf{u}_i)}{2\,\Delta x_i},
which is more accurate but requires an additional function evaluation. In the third case (when the gradient is not defined at all points), we usually have to resort to direct search methods.
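The central scheme can be sketched the same way (again an illustrative helper, not the lecture's own code):

```python
import numpy as np

def gradient_central(f, x, dx=1e-5):
    """Central-difference gradient: df/dx_i ~ (f(x+dx*u_i) - f(x-dx*u_i)) / (2*dx).
    Second-order truncation error, but two evaluations per component."""
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(x.size):
        u = np.zeros_like(x)
        u[i] = 1.0
        grad[i] = (f(x + dx * u) - f(x - dx * u)) / (2.0 * dx)
    return grad
```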

8 Steepest descent (Cauchy) method
1. Start with an arbitrary initial point X1. Set the iteration number to i = 1.
2. Find the search direction S_i as the steepest descent direction: S_i = -\nabla f(X_i).
3. Determine the optimal step length \lambda_i^* in the direction S_i and set X_{i+1} = X_i + \lambda_i^* S_i.
4. Test the new point X_{i+1} for optimality. If X_{i+1} is the optimum, stop the process. Otherwise, go to step 5.
5. Set the new iteration number i = i + 1 and go to step 2.
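A compact sketch of this loop; the lecture does not prescribe a particular one-dimensional optimizer for step 3, so SciPy's minimize_scalar stands in here, and the name steepest_descent and the tolerance eps are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad, x1, eps=1e-6, max_iter=100):
    """Cauchy's steepest descent, following the steps on the slide."""
    x = np.asarray(x1, dtype=float)
    for i in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:     # optimality test (step 4)
            break
        s = -g                           # steepest descent direction (step 2)
        # optimal step length along s (step 3); any 1-D minimizer will do
        lam = minimize_scalar(lambda t: f(x + t * s)).x
        x = x + lam * s
    return x
```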

9 Steepest descent method (example)
Minimize f(X), starting from the point X_1.
Iteration 1. Compute the gradient \nabla f and evaluate it at the starting point. The first search direction is then S_1 = -\nabla f(X_1).

10 Steepest descent method (example)
Now we must minimize f(X_1 + \lambda_1 S_1) to find the step length. Using the necessary condition df/d\lambda_1 = 0 we obtain \lambda_1^* and the next point X_2 = X_1 + \lambda_1^* S_1. Is it the optimum? Let's calculate the gradient at this point: it is non-zero, so we did not reach the optimum (non-zero slope).

11 Steepest descent method (example)
Iteration 2. New search direction: S_2 = -\nabla f(X_2). Optimize f(X_2 + \lambda_2 S_2) for the step length using the necessary condition df/d\lambda_2 = 0, which gives the next point X_3. The gradient at the next point is again non-zero, so we did not reach the optimum (non-zero slope). Proceed to the next iteration.

12 Steepest descent method (example)
Iteration 3. New search direction: S_3 = -\nabla f(X_3). Optimize for the step length using the necessary condition, obtaining the next point X_4. The gradient at the next point is still non-zero, so we did not reach the optimum (non-zero slope). This process should be continued until the optimum X^* is found.
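The concrete function and numbers of this worked example were figures in the original slides and did not survive transcription. As a hypothetical stand-in, the classic textbook quadratic f(x1, x2) = x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2 started at (0, 0) behaves as described (several iterations, each ending with a non-zero gradient), reusing the steepest_descent sketch above:

```python
import numpy as np

# Hypothetical stand-in for the lost example (NOT the slide's own data).
f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
grad = lambda x: np.array([1 + 4*x[0] + 2*x[1], -1 + 2*x[0] + 2*x[1]])

x_opt = steepest_descent(f, grad, x1=[0.0, 0.0])
print(x_opt)   # approaches the analytic minimum (-1.0, 1.5)
```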

13 When to stop? (convergence criteria)
- When the change in the function value in two consecutive iterations is small:
  \left| \frac{f(X_{i+1}) - f(X_i)}{f(X_i)} \right| \le \varepsilon_1
- When the partial derivatives (components of the gradient) of f are small:
  \left| \frac{\partial f}{\partial x_i} \right| \le \varepsilon_2, \quad i = 1, \ldots, n
- When the change in the design vector in two consecutive iterations is small:
  \| X_{i+1} - X_i \| \le \varepsilon_3
Near the optimum point the gradient should not differ much from zero.
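A small helper combining the three tests; the name converged and the tolerance arguments eps1, eps2, eps3 are illustrative assumptions:

```python
import numpy as np

def converged(f, x_old, x_new, grad_new, eps1=1e-6, eps2=1e-6, eps3=1e-6):
    """The three stopping tests from the slide, any one of which may end the run."""
    f_old, f_new = f(x_old), f(x_new)
    small_df = abs(f_new - f_old) <= eps1 * max(abs(f_old), 1e-30)  # relative change in f
    small_grad = np.all(np.abs(grad_new) <= eps2)                   # gradient components
    small_dx = np.linalg.norm(np.asarray(x_new) - np.asarray(x_old)) <= eps3  # change in X
    return small_df or small_grad or small_dx
```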

14 Optimum design of three-stage compressor (steepest descent method)
Objective: find the values of the interstage pressures to minimize the work input. The heat exchangers reduce the temperature of the gas back to T after each compression stage. Let's use the steepest descent method.
[Figure: schematic of the three compressors and interstage heat exchangers, with the initial pressure, the interstage pressures, the final pressure, the work input and the gas constant labelled]

15 Optimum design of three-stage compressor (steepest descent method)
It's convenient to use a suitably scaled form of the work input as the objective function. The gradient and its components follow by differentiating it with respect to the interstage pressures. Let's choose the starting point (initial guess) X_1.

16 Optimum design of three-stage compressor (steepest descent method)
Using these values we calculate the first search direction S_1 = -\nabla f(X_1), with the derivatives evaluated at the point X_1. The next point we find from the formula X_2 = X_1 + \lambda_1 S_1. Now we must find the optimum step length \lambda_1 along the direction S_1 to minimize the expression f(X_1 + \lambda_1 S_1).

17 Optimum design of three-stage compressor (steepest descent method)
This means that we are looking for the minimum of a single-variable function of \lambda_1. Using the necessary condition df/d\lambda_1 = 0 we find \lambda_1^*. The next point is then X_2 = X_1 + \lambda_1^* S_1.

18 Optimum design of three-stage compressor (steepest descent method)
We evaluate the objective function at this point; the first iteration has ended. Now we can start the second iteration: calculate the new search direction S_2 = -\nabla f(X_2), minimize the expression f(X_2 + \lambda_2 S_2) to get the new step length \lambda_2^* and the next point X_3, and so on, until the solution is reached.
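The slide's numerical data and formulas were contained in figures. The sketch below assumes the standard work-input expression for multistage compression with perfect intercooling, proportional to (p2/p1)^a + (p3/p2)^a + (p4/p3)^a - 3 with a = (n-1)/n, and placeholder values for p1, p4 and n; SciPy's general-purpose minimizer stands in for the hand-derived steepest descent iterations. The analytic optimum has equal stage pressure ratios:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data; the slide's numbers were images and are lost.
p1, p4 = 1.0, 64.0          # assumed initial and final pressures
n = 1.4                     # assumed polytropic exponent
a = (n - 1.0) / n

def work(p):                # scaled work input; constants only rescale it
    p2, p3 = p
    return (p2/p1)**a + (p3/p2)**a + (p4/p3)**a - 3.0

res = minimize(work, x0=[10.0, 30.0], method='BFGS')  # any descent method works
print(res.x)   # analytic optimum: p2 = (p1**2*p4)**(1/3), p3 = (p1*p4**2)**(1/3)
```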

19 Steepest descent method (possible difficulty)
If the minimum lies in a long, narrow valley, the steepest descent method may converge rather slowly. The problem stems from the fact that the gradient is only a local property of the objective function. A more clever choice of search directions is possible; it is based on the concept of conjugate directions.
[Figure: contours of the objective function forming a long narrow valley around the optimum]

20 Conjugate directions
A set of n vectors (directions) S_1, \ldots, S_n is said to be conjugate (more accurately, A-conjugate) if
S_i^T A S_j = 0 for all i \neq j.
Theorem. If a quadratic function Q is minimized sequentially, once along each direction of a set of n mutually conjugate directions, the minimum of the function Q will be found at or before the n-th step, irrespective of the starting point.
Remark: Powell's method is an example of a conjugate directions method.
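A quick numerical check of the definition; the matrix A and the vectors below are arbitrary illustrations, with the second direction built by a Gram-Schmidt-like step:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])            # symmetric positive definite

s1 = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
s2 = v - (s1 @ A @ v) / (s1 @ A @ s1) * s1   # make s2 A-conjugate to s1

print(s1 @ A @ s2)                    # ~0: the pair is A-conjugate
```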

21 Conjugate directions (example)
For instance, suppose the matrix A of the quadratic Q is proportional to the identity (circular contours of Q). Then the conjugacy condition S_1^T A S_2 = 0 reduces to the orthogonality condition S_1^T S_2 = 0, so the conjugate directions are in this case simply perpendicular. Convergence is reached after two iterations.
[Figure: circular contours of Q with two perpendicular search directions]
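A demonstration of the two-iteration convergence claim for n = 2, reusing A, s1 and s2 from the previous snippet (here with a general, non-diagonal A): two exact line searches along the conjugate pair land on the minimum of the quadratic from an arbitrary start:

```python
import numpy as np

b = np.array([1.0, 2.0])
Q = lambda x: 0.5 * x @ A @ x - b @ x     # quadratic to minimize

x = np.array([5.0, -7.0])                 # arbitrary starting point
for s in (s1, s2):                        # conjugate directions from above
    g = A @ x - b                         # gradient of Q at x
    lam = -(g @ s) / (s @ A @ s)          # exact line search for a quadratic
    x = x + lam * s

print(x, np.linalg.solve(A, b))           # equal: minimum reached in 2 steps
```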

22 Conjugate directions (quadratic convergence)
Thus, for quadratic functions a conjugate directions method converges after n steps (at most), where n is the number of design variables. That is really fast, but what about other functions? Fortunately, a general nonlinear function can be approximated reasonably well by a quadratic function near its minimum (see the Taylor expansion below), so a conjugate directions method is expected to speed up the convergence even for general nonlinear objective functions.
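The approximation invoked here is the second-order Taylor expansion about the minimum X^*, where [H] denotes the Hessian and the linear term drops out because the gradient vanishes at the minimum:

```latex
f(\mathbf{X}) \;\approx\; f(\mathbf{X}^*)
  + \tfrac{1}{2}\,(\mathbf{X}-\mathbf{X}^*)^{\mathsf{T}}
    \,[H](\mathbf{X}^*)\,(\mathbf{X}-\mathbf{X}^*)
```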

23 Davidon-Fletcher-Powell method
1. Start with an initial point X1 and an n x n positive definite symmetric matrix [B1] to approximate the inverse of the Hessian matrix of f. Usually, [B1] is taken as the identity matrix [I]. Set the iteration number to i = 1.
2. Compute the gradient of the function, \nabla f_i, at the point X_i, and set the search direction S_i = -[B_i] \nabla f_i.
3. Find the optimal step length \lambda_i^* in the direction S_i and set X_{i+1} = X_i + \lambda_i^* S_i.
4. Test the new point X_{i+1} for optimality. If X_{i+1} is optimal, terminate the iterative process. Otherwise, go to step 5.

24 Davidon-Fletcher-Powell method
5. Update the matrix [B_i] as
[B_{i+1}] = [B_i] + [M_i] + [N_i],
where
[M_i] = \lambda_i^* \frac{S_i S_i^T}{S_i^T Q_i}, \quad
[N_i] = -\frac{([B_i] Q_i)([B_i] Q_i)^T}{Q_i^T [B_i] Q_i}, \quad
Q_i = \nabla f(X_{i+1}) - \nabla f(X_i).
6. Set the new iteration number to i = i + 1 and go to step 2.
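A sketch of the full DFP loop under the steps above; the rank-two update below is the standard DFP formula (assumed here, since the slide's own expression was an image), SciPy's minimize_scalar again stands in for the 1-D search, and the name dfp and tolerance eps are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dfp(f, grad, x1, eps=1e-6, max_iter=100):
    """Davidon-Fletcher-Powell; [B] approximates the inverse Hessian."""
    x = np.asarray(x1, dtype=float)
    B = np.eye(x.size)                       # [B1] = identity (step 1)
    g = grad(x)
    for i in range(max_iter):
        if np.linalg.norm(g) <= eps:         # optimality test (step 4)
            break
        s = -B @ g                           # search direction (step 2)
        lam = minimize_scalar(lambda t: f(x + t * s)).x  # step length (step 3)
        dx = lam * s
        x_new = x + dx
        g_new = grad(x_new)
        dg = g_new - g
        # standard DFP rank-two update of the inverse-Hessian approximation (step 5)
        B = (B + np.outer(dx, dx) / (dx @ dg)
               - np.outer(B @ dg, B @ dg) / (dg @ B @ dg))
        x, g = x_new, g_new
    return x
```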

25 Thank you for your attention

