
9-1 Performance Optimization

9-2 Basic Optimization Algorithm

$x_{k+1} = x_k + \alpha_k p_k$, or equivalently $\Delta x_k = (x_{k+1} - x_k) = \alpha_k p_k$

$p_k$ - search direction, $\alpha_k$ - learning rate
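As a minimal sketch of this update rule (in Python/NumPy, with an assumed constant learning rate and a caller-supplied direction function, neither of which the slide specifies):

```python
import numpy as np

def optimize(x0, direction, alpha=0.01, steps=100):
    """Generic iteration x_{k+1} = x_k + alpha_k p_k, here with a constant
    learning rate alpha and a caller-supplied search-direction rule."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        p = direction(x)    # p_k: search direction at the current guess
        x = x + alpha * p   # take the step
    return x
```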

9-3 Steepest Descent

Choose the next step so that the function decreases: $F(x_{k+1}) < F(x_k)$. For small changes in $x$ we can approximate $F(x)$ by a first-order expansion: $F(x_{k+1}) = F(x_k + \Delta x_k) \approx F(x_k) + g_k^T \Delta x_k$, where $g_k \equiv \nabla F(x)\big|_{x = x_k}$. If we want the function to decrease, we need $g_k^T \Delta x_k = \alpha_k g_k^T p_k < 0$. We can maximize the decrease by choosing $p_k = -g_k$, which gives $x_{k+1} = x_k - \alpha_k g_k$.
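A hedged illustration in Python/NumPy; the test function $F(x) = x_1^2 + x_2^2$, the step size, and the iteration count are assumptions for the demo, not the slides' example:

```python
import numpy as np

def steepest_descent(grad, x0, alpha=0.1, steps=100):
    """Steepest descent: choose p_k = -g_k, so x_{k+1} = x_k - alpha * g_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - alpha * grad(x)
    return x

# Assumed test function F(x) = x1^2 + x2^2, whose gradient is 2x.
x_min = steepest_descent(lambda x: 2.0 * x, np.array([1.0, -1.0]))
print(x_min)  # close to the minimizer [0, 0]
```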

9-4 Example

9-5 Plot

9-6 Stable Learning Rates (Quadratic)

For the quadratic $F(x) = \frac{1}{2} x^T A x + d^T x + c$, steepest descent with a constant learning rate iterates $x_{k+1} = x_k - \alpha (A x_k + d) = [I - \alpha A] x_k - \alpha d$. Stability is determined by the eigenvalues of this matrix. The eigenvalues of $[I - \alpha A]$ are $(1 - \alpha \lambda_i)$, where $\lambda_i$ is an eigenvalue of $A$. Stability requirement: $|1 - \alpha \lambda_i| < 1$, which gives $\alpha < 2 / \lambda_{\max}$.
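A quick numerical check of the bound, using an assumed example Hessian (not the one from the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # assumed example Hessian (eigenvalues 1 and 3)
lam = np.linalg.eigvalsh(A)     # eigenvalues of the symmetric matrix A
alpha_max = 2.0 / lam.max()     # stability bound: alpha < 2 / lambda_max
print(alpha_max)                # 0.666... for this A
```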

9-7 Example

9-8 Minimizing Along a Line

Choose $\alpha_k$ to minimize $F(x_k + \alpha_k p_k)$. Setting the derivative with respect to $\alpha_k$ to zero gives $\alpha_k = -\frac{g_k^T p_k}{p_k^T A_k p_k}$, where $A_k$ is the Hessian evaluated at the old guess: $A_k \equiv \nabla^2 F(x)\big|_{x = x_k}$.
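As a small helper, assuming a quadratic with a known Hessian `A` (the names are illustrative):

```python
import numpy as np

def exact_line_step(A, g, p):
    """alpha_k = -(g_k^T p_k) / (p_k^T A_k p_k): the exact minimizer of
    F(x_k + alpha p_k) when F is quadratic with Hessian A."""
    return -(g @ p) / (p @ A @ p)
```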

9-9 Example

9-10 Plot

Successive steps are orthogonal: at the line minimum, $\frac{d}{d\alpha_k} F(x_k + \alpha_k p_k) = \nabla F(x)^T\big|_{x = x_{k+1}}\, p_k = g_{k+1}^T p_k = 0$.
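A numerical spot check of this orthogonality, with an assumed quadratic $F(x) = \frac{1}{2} x^T A x$:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # assumed Hessian of F = 1/2 x^T A x
x = np.array([1.0, -0.5])
g = A @ x                       # gradient of F at x_k
p = -g                          # steepest-descent direction
alpha = -(g @ p) / (p @ A @ p)  # exact line-search step
g_next = A @ (x + alpha * p)    # gradient at the new point x_{k+1}
print(g_next @ p)               # ~0: the new gradient is orthogonal to p_k
```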

9-11 Newton's Method

Start from the second-order approximation $F(x_{k+1}) = F(x_k + \Delta x_k) \approx F(x_k) + g_k^T \Delta x_k + \frac{1}{2} \Delta x_k^T A_k \Delta x_k$. Take the gradient of this second-order approximation and set it equal to zero to find the stationary point: $g_k + A_k \Delta x_k = 0$, so $\Delta x_k = -A_k^{-1} g_k$ and $x_{k+1} = x_k - A_k^{-1} g_k$.
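A minimal sketch of one Newton iteration; the `grad` and `hess` callables are assumed interfaces, and solving the linear system instead of inverting $A_k$ is a common implementation choice rather than something the slide prescribes:

```python
import numpy as np

def newton_step(grad, hess, x):
    """One Newton iteration: x_{k+1} = x_k - A_k^{-1} g_k, computed by
    solving A_k dx = -g_k rather than forming the inverse."""
    dx = np.linalg.solve(hess(x), -grad(x))
    return x + dx
```

For a quadratic function, a single step lands exactly on the stationary point.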

9-12 Example

9-13 Plot

9-14 Non-Quadratic Example

Stationary Points: [plots of $F(x)$ and its second-order approximation $F_2(x)$]

9-15 Different Initial Conditions

[plots of $F(x)$ and $F_2(x)$ for different starting points]

9-16 Conjugate Vectors

A set of vectors $\{p_k\}$ is mutually conjugate with respect to a positive definite Hessian matrix $A$ if $p_k^T A p_j = 0$ for $k \ne j$. One set of conjugate vectors consists of the eigenvectors of $A$. (The eigenvectors of symmetric matrices are orthogonal.)
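A quick numerical verification, with an assumed symmetric positive definite example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # assumed symmetric positive definite example
_, V = np.linalg.eigh(A)    # columns of V are orthonormal eigenvectors
# V^T A V is diagonal, so v_i^T A v_j = 0 for i != j: the eigenvectors
# are mutually conjugate with respect to A.
print(np.round(V.T @ A @ V, 10))
```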

9-17 For Quadratic Functions

For a quadratic, $\nabla F(x) = A x + d$. The change in the gradient at iteration $k$ is $\Delta g_k = g_{k+1} - g_k = (A x_{k+1} + d) - (A x_k + d) = A\, \Delta x_k$, where $\Delta x_k = x_{k+1} - x_k = \alpha_k p_k$. The conjugacy conditions can therefore be rewritten $\alpha_k p_k^T A p_j = \Delta x_k^T A p_j = \Delta g_k^T p_j = 0$ for $j \ne k$. This does not require knowledge of the Hessian matrix.
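A numerical confirmation of $\Delta g_k = \alpha_k A p_k$; the matrix, direction, and step length below are arbitrary assumed values:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # assumed Hessian of F = 1/2 x^T A x
x0 = np.array([1.0, 0.0])
p = np.array([0.0, 1.0])          # an arbitrary search direction
alpha = 0.5                       # an arbitrary step length
g0 = A @ x0                       # gradient at x_k
g1 = A @ (x0 + alpha * p)         # gradient at x_{k+1}
print(np.allclose(g1 - g0, alpha * (A @ p)))  # True: dg_k = alpha_k A p_k
```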

9-18 Forming Conjugate Directions

Choose the initial search direction as the negative of the gradient: $p_0 = -g_0$. Choose subsequent search directions to be conjugate: $p_k = -g_k + \beta_k p_{k-1}$, where $\beta_k = \frac{\Delta g_{k-1}^T g_k}{\Delta g_{k-1}^T p_{k-1}}$ or $\beta_k = \frac{g_k^T g_k}{g_{k-1}^T g_{k-1}}$ or $\beta_k = \frac{\Delta g_{k-1}^T g_k}{g_{k-1}^T g_{k-1}}$.
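The three choices of $\beta_k$ as small helpers (commonly attributed to Hestenes-Stiefel, Fletcher-Reeves, and Polak-Ribiere, respectively; the function names are mine):

```python
import numpy as np

def beta_hestenes_stiefel(g_new, g_old, p_old):
    """beta_k = (dg_{k-1}^T g_k) / (dg_{k-1}^T p_{k-1})."""
    dg = g_new - g_old
    return (dg @ g_new) / (dg @ p_old)

def beta_fletcher_reeves(g_new, g_old):
    """beta_k = (g_k^T g_k) / (g_{k-1}^T g_{k-1})."""
    return (g_new @ g_new) / (g_old @ g_old)

def beta_polak_ribiere(g_new, g_old):
    """beta_k = (dg_{k-1}^T g_k) / (g_{k-1}^T g_{k-1})."""
    return ((g_new - g_old) @ g_new) / (g_old @ g_old)
```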

9-19 Conjugate Gradient Algorithm

1. The first search direction is the negative of the gradient: $p_0 = -g_0$.
2. Take a step, selecting the learning rate $\alpha_k$ to minimize the function along the line (for quadratic functions, $\alpha_k = -\frac{g_k^T p_k}{p_k^T A_k p_k}$).
3. Select the next search direction using $p_k = -g_k + \beta_k p_{k-1}$.
4. If the algorithm has not converged, return to step 2.

A quadratic function will be minimized in at most $n$ steps.
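Putting the steps together for a quadratic, a minimal sketch; the test problem at the end is an assumed example, and the Fletcher-Reeves $\beta_k$ is just one of the choices listed above:

```python
import numpy as np

def conjugate_gradient(A, d, x0, tol=1e-10):
    """Minimize the quadratic F(x) = 1/2 x^T A x + d^T x by conjugate
    gradients; for an n x n positive definite A this converges in at
    most n steps."""
    x = np.asarray(x0, dtype=float)
    g = A @ x + d                         # g_0: gradient of F at x_0
    p = -g                                # first direction: steepest descent
    for _ in range(len(x)):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ p) / (p @ A @ p)    # exact minimization along the line
        x = x + alpha * p
        g_new = A @ x + d
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves beta_k
        p = -g_new + beta * p
        g = g_new
    return x

# Assumed test problem; the minimizer solves A x = -d.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
d = np.array([-1.0, 0.0])
print(conjugate_gradient(A, d, np.zeros(2)))  # ~[0.6667, -0.3333]
```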

9-20 Example

9-21 Example

9-22 Plots [two panels: Conjugate Gradient and Steepest Descent]

