Performance Optimization

Basic Optimization Algorithm
Iterate x_{k+1} = x_k + α_k p_k (equivalently Δx_k = x_{k+1} − x_k = α_k p_k),
where p_k is the search direction and α_k is the learning rate (step size).
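As a rough illustration, here is a minimal sketch of this generic iteration in Python with NumPy; the helper name `optimize`, the fixed step size, and the iteration count are illustrative choices, not from the slides.

```python
import numpy as np

def optimize(x0, grad, direction_fn, alpha, num_iters=50):
    """Generic iteration x_{k+1} = x_k + alpha * p_k (constant learning rate here).

    grad(x)         -- gradient g_k of F at x
    direction_fn(g) -- rule that turns the gradient into a search direction p_k
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        g = grad(x)              # g_k
        p = direction_fn(g)      # p_k, the search direction
        x = x + alpha * p        # x_{k+1} = x_k + alpha * p_k
    return x
```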

Steepest Descent
Choose the next step so that the function decreases: F(x_{k+1}) < F(x_k).
For small changes in x we can approximate F(x) by a first-order expansion:
F(x_{k+1}) = F(x_k + Δx_k) ≈ F(x_k) + g_k^T Δx_k, where g_k = ∇F(x)|_{x = x_k}.
If we want the function to decrease, we need g_k^T Δx_k = α_k g_k^T p_k < 0.
We can maximize the decrease by choosing p_k = −g_k, the steepest descent direction.
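For example, steepest descent (p_k = −g_k) on a small quadratic, reusing the `optimize` sketch above; the matrix A, vector d, starting point, and step size are made-up illustration values.

```python
# Quadratic example: F(x) = 1/2 x^T A x + d^T x, so grad F(x) = A x + d.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
d = np.array([-1.0, -1.0])
grad = lambda x: A @ x + d

x_sd = optimize(x0=[2.0, 2.5], grad=grad,
                direction_fn=lambda g: -g,   # steepest descent: p_k = -g_k
                alpha=0.1)
print(x_sd)   # approaches the minimizer, i.e. the solution of A x = -d
```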

Example

Plot

Stable Learning Rates (Quadratic)
For a quadratic F(x) = ½ x^T A x + d^T x + c, steepest descent with a constant learning rate gives
x_{k+1} = x_k − α g_k = x_k − α (A x_k + d) = [I − αA] x_k − αd.
Stability is determined by the eigenvalues of this matrix: the eigenvalues of [I − αA] are (1 − αλ_i), where λ_i is an eigenvalue of A.
Stability requirement: |1 − αλ_i| < 1 for all i, which for positive definite A means α < 2 / λ_max.
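Continuing with the example quadratic above, the stability bound can be checked numerically; this is a small sanity-check sketch, not part of the slides.

```python
lambdas = np.linalg.eigvalsh(A)       # eigenvalues of the Hessian A (here 1 and 3)
alpha_max = 2.0 / lambdas.max()       # stability limit: alpha < 2 / lambda_max

alpha = 0.1
# Steepest descent on the quadratic is stable iff every |1 - alpha*lambda_i| < 1.
assert np.all(np.abs(1.0 - alpha * lambdas) < 1.0)
print(alpha_max)                      # 0.666..., any alpha below this is stable
```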

Example

Minimizing Along a Line
Choose α_k to minimize F(x_k + α_k p_k).
For quadratic functions, setting the derivative with respect to α_k to zero gives
α_k = −(g_k^T p_k) / (p_k^T A_k p_k), where A_k is the Hessian evaluated at x_k.
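A short helper for this exact quadratic line search, with a hypothetical name `exact_alpha`, reusing A and d from the example above:

```python
def exact_alpha(A, g, p):
    """Exact line-search step on a quadratic: alpha_k = -(g_k^T p_k) / (p_k^T A p_k)."""
    return -(g @ p) / (p @ (A @ p))

x = np.array([2.0, 2.5])
g = A @ x + d
p = -g
print(exact_alpha(A, g, p))   # step length that minimizes F along x + alpha * p
```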

Example

Plot
Successive steps are orthogonal: after an exact line minimization, the new gradient is orthogonal to the previous search direction,
∇F(x)^T|_{x = x_{k+1}} p_k = g_{k+1}^T p_k = 0.
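Continuing the line-search snippet above, this orthogonality can be verified directly (illustrative check):

```python
x_next = x + exact_alpha(A, g, p) * p   # exact minimization along p
g_next = A @ x_next + d                 # gradient at the new point
print(g_next @ p)                       # ~0: the new gradient is orthogonal to p_k
```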

Newton’s Method
Expand F(x) about x_k to second order:
F(x_{k+1}) ≈ F(x_k) + g_k^T Δx_k + ½ Δx_k^T A_k Δx_k, where A_k = ∇²F(x)|_{x = x_k}.
Take the gradient of this second-order approximation and set it equal to zero to find the stationary point:
g_k + A_k Δx_k = 0  ⇒  Δx_k = −A_k^{−1} g_k  ⇒  x_{k+1} = x_k − A_k^{−1} g_k.
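A sketch of one Newton step in the same setting (function names are illustrative; it solves A_k Δx = −g_k rather than forming the inverse):

```python
def newton_step(x, grad, hess):
    """One Newton step: x_{k+1} = x_k - A_k^{-1} g_k."""
    g_k = grad(x)
    A_k = hess(x)
    return x - np.linalg.solve(A_k, g_k)

# On a quadratic, a single Newton step lands exactly on the stationary point.
x_star = newton_step(np.array([2.0, 2.5]), grad=grad, hess=lambda x: A)
print(x_star, np.allclose(A @ x_star + d, 0.0))
```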

Example

Plot

Non-Quadratic Example
Stationary points of F(x) and of its second-order approximation F2(x) (contour plots).

Different Initial Conditions
Contour plots of F(x) and F2(x) showing the iterates from different starting points.

Conjugate Vectors
A set of vectors {p_k} is mutually conjugate with respect to a positive definite Hessian matrix A if
p_k^T A p_j = 0 for k ≠ j.
One set of conjugate vectors consists of the eigenvectors of A. (The eigenvectors of symmetric matrices are orthogonal.)
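A quick numerical check that the eigenvectors of the example matrix A are indeed conjugate (and orthogonal):

```python
_, V = np.linalg.eigh(A)          # columns of V are eigenvectors of the symmetric A
print(np.round(V.T @ A @ V, 10))  # off-diagonal entries are 0: p_i^T A p_j = 0 for i != j
print(np.round(V.T @ V, 10))      # identity matrix: the eigenvectors are orthonormal
```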

For Quadratic Functions
∇F(x) = A x + d and ∇²F(x) = A.
The change in the gradient at iteration k is
Δg_k = g_{k+1} − g_k = (A x_{k+1} + d) − (A x_k + d) = A Δx_k, where Δx_k = x_{k+1} − x_k = α_k p_k.
The conjugacy conditions can therefore be rewritten as
α_k p_k^T A p_j = Δx_k^T A p_j = Δg_k^T p_j = 0 for k ≠ j.
This does not require knowledge of the Hessian matrix.
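The key identity Δg_k = A Δx_k can be confirmed numerically on the example quadratic (illustrative check):

```python
x0 = np.array([2.0, 2.5])
x1 = x0 + 0.1 * -(A @ x0 + d)          # any step, Delta x_k = x1 - x0
dg = (A @ x1 + d) - (A @ x0 + d)       # Delta g_k, computable from gradients alone
print(np.allclose(dg, A @ (x1 - x0)))  # True: Delta g_k = A * Delta x_k
```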

Forming Conjugate Directions
Choose the initial search direction as the negative of the gradient: p_0 = −g_0.
Choose subsequent search directions to be conjugate: p_k = −g_k + β_k p_{k−1},
where β_k = (Δg_{k−1}^T g_k) / (Δg_{k−1}^T p_{k−1})  (Hestenes–Stiefel),
or β_k = (g_k^T g_k) / (g_{k−1}^T g_{k−1})  (Fletcher–Reeves),
or β_k = (Δg_{k−1}^T g_k) / (g_{k−1}^T g_{k−1})  (Polak–Ribière).
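The three β_k choices written as small helpers (hypothetical names; on a quadratic with exact line searches they produce the same directions):

```python
def beta_hestenes_stiefel(g_new, g_old, p_old):
    dg = g_new - g_old
    return (dg @ g_new) / (dg @ p_old)

def beta_fletcher_reeves(g_new, g_old, p_old):
    return (g_new @ g_new) / (g_old @ g_old)

def beta_polak_ribiere(g_new, g_old, p_old):
    dg = g_new - g_old
    return (dg @ g_new) / (g_old @ g_old)
```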

Conjugate Gradient Algorithm
1. The first search direction is the negative of the gradient: p_0 = −g_0.
2. Select the learning rate α_k to minimize F along the line x_k + α_k p_k (for quadratic functions, α_k = −g_k^T p_k / (p_k^T A p_k)).
3. Select the next search direction using p_k = −g_k + β_k p_{k−1}.
4. If the algorithm has not converged, return to step 2.
A quadratic function of n variables will be minimized in at most n steps.
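Putting the steps together, a minimal conjugate-gradient sketch for the example quadratic above (Fletcher–Reeves β and the exact quadratic line search; for a general function the step length would come from a numerical line search instead):

```python
def conjugate_gradient(x0, grad, A, num_iters):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    p = -g                                   # step 1: p_0 = -g_0
    for _ in range(num_iters):
        alpha = -(g @ p) / (p @ (A @ p))     # step 2: minimize along the line
        x = x + alpha * p
        g_new = grad(x)
        beta = beta_fletcher_reeves(g_new, g, p)
        p = -g_new + beta * p                # step 3: next (conjugate) direction
        g = g_new                            # step 4: repeat until converged
    return x

print(conjugate_gradient([2.0, 2.5], grad, A, num_iters=2))
# A quadratic in n = 2 variables is minimized in 2 steps: result ~ [1/3, 1/3].
```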

Example

Example

Plots
Trajectories of the conjugate gradient and steepest descent algorithms on the same quadratic (contour plots).