Optimization Techniques M. Fatih Amasyalı

Steepest Descent with Exact Step Size

x_new = x_old - eps * df

eps is calculated by an exact line search:
- z(eps) = x - eps*df
- find the eps where f'(z(eps)) = 0

Steepest Descent: find the minimum point of f(x) = x^2

z(eps) = x - eps*2*x = x*(1 - 2*eps)   (a point along the search direction)
f(z(eps)) = the value of f along the search direction
Find the eps that minimizes f(z(eps)), i.e. solve f'(z(eps)) = 0:
f(z(eps)) = (x*(1 - 2*eps))^2 = x^2*(1 - 2*eps)^2 = x^2*(1 - 4*eps + 4*eps^2)
f'(z(eps)) = x^2*(-4 + 8*eps) = 0  =>  eps = 1/2
x_{n+1} = x_n - eps*df = x_n - (1/2)*2*x_n = x_n - x_n = 0
Wherever you start, the minimum point is found in one iteration!
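The derivation above can be checked in code; the course scripts are MATLAB, but here is a minimal Python sketch (the function name is mine) with f(x) = x^2 and the exact step eps = 1/2 hard-coded:

```python
def steepest_descent_step(x):
    # f(x) = x^2, df = 2*x; the exact line search above gives eps = 1/2
    eps = 0.5
    return x - eps * 2 * x

# one step from an arbitrary starting point lands exactly on the minimizer x* = 0
print(steepest_descent_step(7.3))
```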

Steepest Descent: find the minimum point of f(x) = x^4

z(eps) = x - eps*4*x^3   (a point along the search direction)
If we knew that the minimizer of f(z(eps)) is 0 (in reality, we do not), we could set
eps*4*x^3 = x, so eps = 1/(4*x^2)
x_{n+1} = x_n - eps*4*x_n^3 = x_n - (1/(4*x_n^2))*4*x_n^3 = x_n - x_n = 0
Again, wherever you start, the minimum point is found in one iteration!
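We usually cannot solve for eps in closed form, so the line search can also be done numerically; this Python sketch (a simple ternary search, names mine) recovers the eps = 1/(4*x^2) above for f(x) = x^4:

```python
def line_search_eps(f, x, g, lo=0.0, hi=1.0, iters=100):
    # ternary search for the eps minimizing f(x - eps*g)
    # (assumes the 1-D function is unimodal on [lo, hi])
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(x - m1 * g) < f(x - m2 * g):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

f = lambda x: x**4
x = 2.0
g = 4 * x**3                      # f'(2) = 32
eps = line_search_eps(f, x, g)
print(eps)                        # should match the closed form 1/(4*x^2) = 1/16
```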

Steepest Descent in 2 dimensions: find the iteration equation for the minimum of f(x1,x2) = x1^2 + 3*x2^2

df = [2*x1 ; 6*x2]
Write t for eps; z(t) has 2 dimensions, like x:
z(t) = [x1 ; x2] - t*df = [x1 - 2*x1*t ; x2 - 6*x2*t] = [x1*(1-2*t) ; x2*(1-6*t)]
f(z(t)) = (x1^2)*(1-2*t)^2 + 3*(x2^2)*(1-6*t)^2

Steepest Descent in 2 dimensions

f(z(t)) = (x1^2)*(1-2*t)^2 + 3*(x2^2)*(1-6*t)^2
f'(z(t)) = df(z(t))/dt
         = (x1^2)*2*(1-2*t)*(-2) + 3*(x2^2)*2*(1-6*t)*(-6)
         = -4*(x1^2)*(1-2*t) - 36*(x2^2)*(1-6*t)
         = -4*(x1^2) + 8*t*(x1^2) - 36*(x2^2) + 216*t*(x2^2)
Setting f'(z(t)) = 0:
8*t*(x1^2) + 216*t*(x2^2) = 4*(x1^2) + 36*(x2^2)
t = (4*(x1^2) + 36*(x2^2)) / (8*(x1^2) + 216*(x2^2))
t = ((x1^2) + 9*(x2^2)) / (2*(x1^2) + 54*(x2^2))

Steepest Descent in 2 dimensions

So the iteration equation is
x_{n+1} = x_n - t*[2*x1 ; 6*x2], where t = ((x1^2) + 9*(x2^2)) / (2*(x1^2) + 54*(x2^2))
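The iteration equation can be run as a quick Python sketch (names mine) to confirm it drives x1 and x2 to the minimizer (0, 0):

```python
def steepest_descent_step(x1, x2):
    # gradient of f(x1,x2) = x1^2 + 3*x2^2 and the exact step length t derived above
    t = (x1**2 + 9 * x2**2) / (2 * x1**2 + 54 * x2**2)
    return x1 - t * 2 * x1, x2 - t * 6 * x2

x1, x2 = 4.0, -2.0
for _ in range(30):
    if x1 == 0.0 and x2 == 0.0:   # guard: t is undefined at the minimizer itself
        break
    x1, x2 = steepest_descent_step(x1, x2)
print(x1, x2)
```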

Steepest Descent Examples: f(x) = x1^2 + x2^2

df = [2*x1 ; 2*x2]
z(t) = [x1 ; x2] - t*df = [x1*(1-2*t) ; x2*(1-2*t)]
f(z(t)) = (x1^2)*(1-2*t)^2 + (x2^2)*(1-2*t)^2 = ((1-2*t)^2)*(x1^2 + x2^2)
f'(z(t)) = (x1^2 + x2^2)*(8*t - 4) = 0  =>  t = 1/2
z(t) = [x1 ; x2] - (1/2)*[2*x1 ; 2*x2] = [x1 ; x2] - [x1 ; x2] = [0 ; 0]
Wherever you start, the minimum point is found in one iteration!

Steepest Descent Examples

f(x) = x1^2 + x2^2, t = 1/2
f(x) = x1^2 - x2, t = 0.5 + 1/(8*x1^2)
(steepest_desc_2dim_2.m)
Where are the 3rd, 4th, ... iterations?

With an exact line search, the new search direction is always perpendicular to the previous direction.
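This can be verified numerically for f(x1,x2) = x1^2 + 3*x2^2 with the exact step t derived earlier; a Python sketch (names mine) shows the dot product of consecutive gradients is zero:

```python
def grad(x1, x2):
    # gradient of f(x1,x2) = x1^2 + 3*x2^2
    return 2 * x1, 6 * x2

def exact_step(x1, x2):
    # exact step length for this f, derived earlier in the slides
    t = (x1**2 + 9 * x2**2) / (2 * x1**2 + 54 * x2**2)
    g1, g2 = grad(x1, x2)
    return x1 - t * g1, x2 - t * g2

x = (1.0, 1.0)
g_old = grad(*x)
x = exact_step(*x)
g_new = grad(*x)
dot = g_old[0] * g_new[0] + g_old[1] * g_new[1]
print(dot)   # consecutive search directions are perpendicular
```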

Linear Approximation: assume the function is linear near a point.

Let L be the tangent line to f at x. Then f(x+Δx) ≈ L(x+Δx), and lim (Δx → 0) ( f(x+Δx) - L(x+Δx) ) = 0.
L(x+Δx) = f(x) + dy = f(x) + Δx*f'(x), since f'(x) = dy/dx
New point: domx = x + Δx, so Δx = domx - x and f(domx) ≈ f(x) + (domx - x)*f'(x)

Linear Approximation, Example 1: (1.0002)^50 ≈ ?

f(x) = x^50, f(x+Δx) ≈ f(x) + Δx*f'(x)
domx = 1.0002, x = 1, Δx = 0.0002
f(1.0002) ≈ f(1) + 0.0002*f'(1)
f(1.0002) ≈ 1 + 0.0002 * 50 * 1^49
f(1.0002) ≈ 1 + 0.0002 * 50 = 1.01

Linear Approximation, Example 2: find the linear approximation of f(x) = ln x as x tends to 1.

f(x+Δx) ≈ f(x) + Δx*f'(x); domx = x + Δx, Δx = domx - x, x = 1, f'(x) = 1/x
f(domx) ≈ ln 1 + f'(1)*(domx - 1) = domx - 1
ln x ≈ x - 1, for x close to 1

As x tends to 2: Δx = domx - x, x = 2
f(domx) ≈ ln 2 + f'(2)*(domx - 2) = ln 2 + (domx - 2)/2
ln x ≈ ln 2 + (x - 2)/2, for x close to 2
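Both examples can be checked against the exact values; a Python sketch (the helper name is mine):

```python
import math

def linear_approx(f, df, x, domx):
    # tangent-line approximation: f(domx) ≈ f(x) + (domx - x)*f'(x)
    return f(x) + (domx - x) * df(x)

# Example 1: (1.0002)^50, with f(x) = x^50 around x = 1
approx1 = linear_approx(lambda x: x**50, lambda x: 50 * x**49, 1.0, 1.0002)
exact1 = 1.0002 ** 50

# Example 2: ln x around x = 1, giving ln x ≈ x - 1
approx2 = linear_approx(math.log, lambda x: 1 / x, 1.0, 1.05)
exact2 = math.log(1.05)

print(approx1, exact1)
print(approx2, exact2)
```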

For a better approximation

1st order Taylor (linear approximation): f(x+Δx) ≈ f(x) + Δx*f'(x)
2nd order Taylor (non-linear): f(x+Δx) ≈ f(x) + Δx*f'(x) + (1/2)*f''(x)*Δx^2
...
Nth order Taylor (non-linear): f(x+Δx) ≈ Σ_{i=0..N} f^(i)(x)*Δx^i / i!
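The Nth-order formula is easy to try for f(x) = exp(x), whose derivatives are all exp(x); in this Python sketch (names mine) the approximation error shrinks as the order grows:

```python
import math

def taylor_exp(x, dx, n):
    # n-th order Taylor expansion of exp around x: every derivative of exp is exp,
    # so f(x+dx) ≈ sum_{i=0..n} exp(x) * dx**i / i!
    return sum(math.exp(x) * dx**i / math.factorial(i) for i in range(n + 1))

exact = math.exp(0.25 + 0.5)
errs = [abs(taylor_exp(0.25, 0.5, n) - exact) for n in range(6)]
print(errs[0], errs[-1])   # the error shrinks as the order grows
```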

Function approximation with Taylor series: f(x) = exp(x), N = 0:5, x = 0.25 (approx_taylor.m)
Red: the real f; blues and green: approximations (green is the last, highest-order one).
All approximations pass through (0.25, f(0.25)).

Function approximation with Taylor series: f(x) = exp(x), N = 0:5, x = 1.25 (approx_taylor.m)
Red: the real f; blues and green: approximations (green is the last, highest-order one).
All approximations pass through (1.25, f(1.25)).

Function approximation with Taylor series: f(x) = sin(x), N = 0:15, x = 1.25 (approx_taylor.m)
Red: the real f; blues and green: approximations (green is the last, highest-order one).

Finding a root of f(x) iteratively (find a point x where f(x) = 0): Newton-Raphson, 1st order

The tangent line at x_n gives f'(x_n) = f(x_n) / (x_n - x_{n+1})
x_n - x_{n+1} = f(x_n) / f'(x_n)
x_{n+1} = x_n - f(x_n) / f'(x_n), n = 0, 1, 2, 3, ...
If we require the root correct to 6 decimal places, we stop when the digits of x_{n+1} and x_n agree to the 6th decimal place.

Newton-Raphson, 1st order

Example: find sqrt(2), i.e. the root of x^2 - 2 = 0

x_{n+1} = x_n - f(x_n)/f'(x_n) = x_n - (x_n^2 - 2)/(2*x_n)
x_0 = 1
x_1 = 1.5
x_2 = 1.416667
x_3 = 1.414216
x_4 = 1.414214
If the current improvement (|x_4 - x_3| ≈ 2e-6) is insignificant, we can say sqrt(2) ≈ 1.414214; if not, go on iterating.
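The iteration is a few lines of Python (the function name and the 1e-6 tolerance are mine, matching the stopping rule above):

```python
def newton_sqrt2(x0=1.0, tol=1e-6):
    # Newton-Raphson for f(x) = x^2 - 2: x_{n+1} = x_n - (x_n^2 - 2)/(2*x_n)
    x = x0
    while True:
        x_new = x - (x * x - 2) / (2 * x)
        if abs(x_new - x) < tol:   # stop when the improvement is insignificant
            return x_new
        x = x_new

print(newton_sqrt2())
```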

Taylor Series - Newton-Raphson, 2nd order

1st order Taylor: f(x+Δx) ≈ f(x) + Δx*f'(x)   (1)
According to (1), to find f(x+Δx) = 0: Δx = -f(x)/f'(x)
To find f'(x+Δx) = 0, take the derivative of (1):
f'(x+Δx) ≈ f'(x) + Δx*f''(x), so Δx = -f'(x)/f''(x)
=> Newton-Raphson, 2nd order: x_{n+1} = x_n - f'(x_n) / f''(x_n), where Δx = x_{n+1} - x_n

Newton-Raphson

Generally converges faster, because it uses more information (the 2nd derivative). No explicit step size selection.
1st order: x_new = x_old - f/df   (solves f(x) = 0)
2nd order: x_new = x_old - df/ddf   (solves f'(x) = 0)
instead of x_new = x_old - eps * df (gradient descent)
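A 2nd-order Newton-Raphson sketch in Python (names and the test function f(x) = x^4 + x^2 are my own example, not from the slides):

```python
def newton_minimize(df, ddf, x0, tol=1e-10, max_iter=50):
    # 2nd-order Newton-Raphson: x_new = x_old - df/ddf, which seeks f'(x) = 0
    x = x0
    for _ in range(max_iter):
        step = df(x) / ddf(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

# f(x) = x^4 + x^2 has its minimum at x* = 0; f'(x) = 4x^3 + 2x, f''(x) = 12x^2 + 2
x_star = newton_minimize(lambda x: 4 * x**3 + 2 * x, lambda x: 12 * x**2 + 2, x0=1.0)
print(x_star)
```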

Gradient descent (step size = 0.01, starting point = -1.9): does not converge within 200 iterations.
Newton-Raphson, 2nd order: converges in 20 iterations. (newton_raphson_1.m)

newton_raphson_1.m: 2nd order finds f'(x) = 0. Blue: f, red: f', green: f''.

newton_raphson_1.m: 1st order finds f(x) = 0. Blue: f, red: f', green: f''.

Newton-Raphson, 1st order: the cycle problem

The tangent lines of x^3 - 2x + 2 at 0 and 1 intersect the x-axis at 1 and 0, respectively. What happens if we start at a point in the (0,1) interval? If we start at 0.1, the iteration goes to 1, then to 0, then to 1, ...
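The two-point cycle itself can be reproduced in Python (names mine); starting exactly at 0, the iterates bounce between 0 and 1 forever:

```python
def newton_step(x):
    # one 1st-order Newton-Raphson step for f(x) = x^3 - 2x + 2
    f = x**3 - 2 * x + 2
    df = 3 * x**2 - 2
    return x - f / df

x = 0.0
orbit = [x]
for _ in range(6):
    x = newton_step(x)
    orbit.append(x)
print(orbit)   # cycles 0 -> 1 -> 0 -> 1 -> ... instead of converging to the root
```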

Newton-Raphson: faster convergence (fewer iterations), but more computation per iteration.

References

Matematik Dünyası, MD 2014-II, Determinantlar
Advanced Engineering Mathematics, Erwin Kreyszig, 10th Edition, John Wiley & Sons
ng/NonLinearEquationsMatlab.pdf