
6.5 Taylor Series Linearization




1 6.5 Taylor Series Linearization
- writing out the Taylor series expansion
- simplification using the b and α notation
- expansion for the Gaussian peak example
- iterative solution of the matrix least squares
- errors in the coefficients are given by the α⁻¹ variance/covariance matrix
- tolerance toward initial guesses
- improving performance with a second-order expansion

2 The Approach
1. Linearize the function locally by a Taylor series expansion about a set of guess values for the coefficients.
2. Use the linear least-squares matrix method to calculate b and α, and solve for the coefficient corrections.
3. Use the refined values of a as the new guess values for another Taylor series expansion.
4. Iterate to convergence.

3 Linearization of Coefficients
Consider a function non-linear in its coefficients, y = f(x,a), where a is a vector of coefficients a0, a1, ..., an. The function can be linearized by a first-order Taylor series expansion about the true coefficient values. Let the true values be denoted m0, m1, ..., mn and represented by the vector m:

y ≈ f(x,m) + Σr (∂f/∂ar)|a=m (ar − mr)

Simplify this expression by using the following variable changes:

Δy = y − f(x,m),   dr = ar − mr,   xr = (∂f/∂ar)|a=m

The linear approximation can now be written in a compact form:

Δy ≈ Σr xr dr
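As a numeric sanity check on the expansion, the sketch below uses a hypothetical model f = a0·exp(a1·x) (not the deck's Gaussian) and shows that the first-order prediction Σr xr dr tracks the exact Δy when a is close to the expansion point m:

```python
import numpy as np

# hypothetical model nonlinear in its coefficients: f = a0 * exp(a1 * x)
f = lambda x, a: a[0] * np.exp(a[1] * x)

m = np.array([2.0, -1.5])   # expansion point (guesses for the true values)
a = np.array([2.1, -1.4])   # coefficients near m
x = 0.5

# x_r = partial derivative of f with respect to each coefficient, evaluated at m
X = np.array([np.exp(m[1] * x),              # df/da0
              m[0] * x * np.exp(m[1] * x)])  # df/da1

d = a - m                    # the d_r = a_r - m_r variables
dy_linear = X @ d            # first-order Taylor prediction of Delta-y
dy_exact = f(x, a) - f(x, m) # exact Delta-y
# the two agree to first order in (a - m)
```

Because the neglected terms are second order in d, halving the offset a − m roughly quarters the discrepancy between the two values.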

4 We have seen this problem before!
This is exactly the linear least-squares problem, with Δy in place of y, the partial derivatives xr in place of the basis functions, and the corrections dr in place of the coefficients.

5 Iterative Matrix Solution
By defining

br = Σi Δyi xr,i   and   αrs = Σi xr,i xs,i,

it is possible to recover the matrix formalism of linear least squares, where α⁻¹ is the variance/covariance matrix and the corrections are given by d = α⁻¹b. If the parameter values at the minimum in chi-square were substituted for m, the vector d would contain all zeros. In practice the parameters are estimated and used to construct an initial guess at the m vector, and the expansion is performed around the guesses. The least-squares solution then yields a non-zero value for d. These two vectors are then used to construct a refined set of guesses, m' = m + d. The new vector m' is used as a new set of guesses and the matrix calculation is repeated. The process stops when all dr ≈ 0.
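The iteration above can be sketched in a few lines. The loop itself follows the slide's recipe (solve αd = b, set m' = m + d, stop when all dr ≈ 0); the exponential model and noiseless data at the bottom are illustrative assumptions, not the deck's example:

```python
import numpy as np

def taylor_series_fit(f, jac, x, y, guess, tol=1e-8, max_iter=50):
    """Iteratively refine guesses m' = m + d, where d solves alpha d = b."""
    m = np.asarray(guess, dtype=float)
    for _ in range(max_iter):
        dy = y - f(x, m)             # Delta-y about the current guesses
        X = jac(x, m)                # column r holds x_r = df/da_r at m
        alpha = X.T @ X              # curvature matrix
        b = X.T @ dy
        d = np.linalg.solve(alpha, b)
        m = m + d                    # refined guesses m'
        if np.all(np.abs(d) < tol):  # stop when all d_r ~ 0
            return m
    return m

# illustrative (non-Gaussian) model: y = a0 * exp(a1 * x)
f = lambda x, a: a[0] * np.exp(a[1] * x)
jac = lambda x, a: np.column_stack([np.exp(a[1] * x),
                                    a[0] * x * np.exp(a[1] * x)])
x = np.linspace(0.0, 1.0, 11)
y = f(x, [2.0, -1.5])                # noiseless synthetic data
m_fit = taylor_series_fit(f, jac, x, y, guess=[1.0, -1.0])
```

With noiseless data the residuals vanish at the minimum, so the iteration converges rapidly once the guesses are in the basin of attraction.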

6 Example Gaussian Linearization
Consider the Gaussian example we have been using, where a is a vector containing the two coefficients. The function y(x,a) is used to compute the Δy given on an earlier slide. The calculation of the α matrix and b vector requires the partial derivatives ∂y/∂a0 and ∂y/∂a1, evaluated at the guess values.

7 Iterative Solution
The initial guesses used with the grid and gradient search algorithms were used. They appear in the table below as iteration 1. The computed parameters were then used as guesses for the second computation (iteration 2), and so on.

Iteration      1        2        3        4
g0           2.000    2.137    2.141    2.141
d0          +0.137   +0.004    0.000
g1          51.000   50.942   50.937   50.938
d1          -0.058   -0.005   +0.001

The final coefficients are essentially the same as those obtained with the grid and gradient search algorithms.
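The transcript does not preserve the Gaussian's exact functional form or the data, so the sketch below assumes the common form y = a0·exp(−(x − a1)²/(2σ²)) with an invented fixed width σ = 5 and noiseless synthetic data generated at the slide's final coefficients. Starting from the slide's iteration-1 guesses, a few refinements recover those coefficients:

```python
import numpy as np

SIGMA = 5.0  # assumed fixed peak width; the slides do not give the actual value

def gauss(x, a):
    # assumed Gaussian peak: amplitude a[0], center a[1]
    return a[0] * np.exp(-(x - a[1])**2 / (2 * SIGMA**2))

def gauss_jac(x, a):
    e = np.exp(-(x - a[1])**2 / (2 * SIGMA**2))
    return np.column_stack([e,                                  # dy/da0
                            a[0] * (x - a[1]) / SIGMA**2 * e])  # dy/da1

# synthetic stand-in for the slide's eleven data points
x = np.linspace(40.0, 60.0, 11)
y = gauss(x, [2.141, 50.938])     # noiseless data at the slide's final values

a = np.array([2.0, 51.0])         # iteration-1 guesses from the slide
for _ in range(4):                # four refinements, as in the table
    dy = y - gauss(x, a)
    X = gauss_jac(x, a)
    d = np.linalg.solve(X.T @ X, X.T @ dy)
    a = a + d
```

Because the synthetic data are noiseless, the individual dr values differ from the table's, but the convergence behavior is the same.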

8 Coefficient Errors
The error in the fit is estimated from the residuals computed with the final coefficients a0 and a1 from the previous slide. There were eleven data values. The diagonal elements of the α⁻¹ variance/covariance matrix give the variances of the two coefficients, from which the standard deviations can be calculated. These values are comparable to those obtained by fitting chi-square space to a two-dimensional parabola: sa0 = 0.034, sa1 = .
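To illustrate how the diagonal of α⁻¹ yields the coefficient errors, here is a minimal straight-line example; the data are invented, but the recipe (s² = Σr²/(N − n), covariance = s²·α⁻¹) is the general one:

```python
import numpy as np

# invented data for a straight-line fit y = a0 + a1*x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

X = np.column_stack([np.ones_like(x), x])  # x_r columns: dy/da0 = 1, dy/da1 = x
alpha = X.T @ X                            # curvature matrix
a = np.linalg.solve(alpha, X.T @ y)        # least-squares coefficients

r = y - X @ a                              # residuals at the fitted coefficients
N, n = len(y), len(a)
s2 = r @ r / (N - n)                       # estimated variance of the data
cov = s2 * np.linalg.inv(alpha)            # alpha^-1 variance/covariance matrix
sigma_a = np.sqrt(np.diag(cov))            # standard deviations of a0 and a1
```

The off-diagonal element of `cov` is the covariance between the two coefficients, which is needed when propagating errors through quantities that depend on both.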

9 Tolerance Toward Initial Guesses
The Taylor series linearization does not tolerate arbitrarily bad initial guesses. Consider the example Gaussian with the value of a1 held fixed. Initial values of a0 as small as 0.25 and as large as 5.1 converged to the minimum in chi-square space; values smaller than 0.25 or larger than 5.1 gave iterative values that diverged. These values are shown below. In contrast, the gradient search converged from far smaller initial values of a0, and values as large as 1,000 converged, albeit taking ~1 min.

10 Tolerance Toward Initial Guesses
The curvature of χ²-space dictates whether the Taylor series approach will converge or diverge. Recall that the elements of the matrix α are essentially second derivatives of χ²-space. When those invert in sign, the Taylor series approach leads away from the minimum rather than toward it.

