Fitting a line to N data points – presentation transcript

1 Fitting a line to N data points – 1 If we use the line in its direct form, then a and b are not independent. To make a and b independent, compute the weighted mean of the x_i; then use the line written relative to that mean. Intercept = optimally weighted mean value of the data. Variance of intercept = variance of that weighted mean.
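The slide's equations are not in the transcript. A standard reconstruction, assuming weights w_i = 1/σ_i² and the y = ax + b convention of slide 8 (so b is the intercept), is:

\bar{x} = \frac{\sum_i w_i x_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\sigma_i^2}, \qquad y_i \simeq a\,(x_i - \bar{x}) + b

b = \frac{\sum_i w_i y_i}{\sum_i w_i}, \qquad \operatorname{Var}(b) = \frac{1}{\sum_i w_i}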

2 Fitting a line to N data points – 2 Slope = optimally weighted mean value, computed with its own optimal weights. Hence we get the optimal slope and its variance.
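A minimal Python sketch of the two-step fit described in slides 1–2, assuming weights 1/σ_i² and the y = ax + b convention of slide 8; the function and variable names are illustrative, not from the slides.

import numpy as np

def fit_line_weighted(x, y, sigma):
    """Weighted straight-line fit y = a*x + b using the centering trick.

    Measuring x from its weighted mean decouples slope and intercept,
    so each becomes an optimally weighted mean with a simple variance.
    """
    w = 1.0 / sigma**2                          # optimal weights
    xbar = np.sum(w * x) / np.sum(w)            # weighted mean of x
    dx = x - xbar                               # shifted coordinate

    b = np.sum(w * y) / np.sum(w)               # intercept: value of the line at x = xbar
    var_b = 1.0 / np.sum(w)                     # variance of the intercept

    a = np.sum(w * dx * y) / np.sum(w * dx**2)  # slope
    var_a = 1.0 / np.sum(w * dx**2)             # variance of the slope

    return a, var_a, b, var_b, xbar

Here b is the value of the line at x = x̄; shifting the intercept back to x = 0 gives b - a*x̄, whose variance picks up an extra x̄²·Var(a) term.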

3 Linear regression If fitting a straight line, minimize χ², the weighted sum of squared residuals. To minimize, set the derivatives of χ² with respect to the two parameters to zero. Note that these are a pair of simultaneous linear equations -- the “normal equations”.
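A plausible reconstruction of the slide's equations, assuming the model y = ax + b and measurement errors σ_i on the y_i:

\chi^2 = \sum_{i=1}^{N} \frac{(y_i - a x_i - b)^2}{\sigma_i^2}

\frac{\partial\chi^2}{\partial a} = -2\sum_i \frac{x_i (y_i - a x_i - b)}{\sigma_i^2} = 0, \qquad \frac{\partial\chi^2}{\partial b} = -2\sum_i \frac{y_i - a x_i - b}{\sigma_i^2} = 0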

4 The Normal Equations Solve as simultaneous linear equations in matrix form – the “normal equations” – written compactly in vector-matrix notation. Solve using standard matrix-inversion methods (see Press et al. for an implementation). Note that the matrix M is diagonal if the x_i are measured about their weighted mean; in this case we have chosen an orthogonal basis.
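The matrix form itself is not transcribed; for the straight line it is presumably the following (a sketch, with w_i = 1/σ_i²):

\begin{pmatrix} \sum_i w_i x_i^2 & \sum_i w_i x_i \\ \sum_i w_i x_i & \sum_i w_i \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} \sum_i w_i x_i y_i \\ \sum_i w_i y_i \end{pmatrix}, \qquad \mathbf{M}\,\mathbf{a} = \mathbf{v}

M is diagonal when \sum_i w_i x_i = 0, i.e. when x is measured from its weighted mean.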

5 General linear regression Suppose you wish to fit your data points y_i with the sum of several scaled functions of the x_i. Example: fitting a polynomial. Goodness of fit to data x_i, y_i, σ_i is measured by χ². To minimise χ², for each parameter index k we set one derivative to zero, giving one equation per parameter.
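A sketch of the standard equations behind this slide, using the P_k, a_k notation that appears in slide 10:

y(x) = \sum_{k=1}^{K} a_k P_k(x), \qquad \text{e.g. } P_k(x) = x^{k-1} \text{ for a polynomial}

\chi^2 = \sum_{i=1}^{N} \frac{1}{\sigma_i^2}\Bigl(y_i - \sum_{k=1}^{K} a_k P_k(x_i)\Bigr)^2

\frac{\partial\chi^2}{\partial a_k} = -2\sum_i \frac{P_k(x_i)}{\sigma_i^2}\Bigl(y_i - \sum_j a_j P_j(x_i)\Bigr) = 0 \quad (k = 1, \dots, K)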

6 Normal equations The normal equations are constructed as before, one per parameter, or equivalently written in matrix form.
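A minimal Python sketch of building and solving these normal equations for a general linear model; the function name and arguments are illustrative, not from the slides.

import numpy as np

def general_linear_fit(x, y, sigma, basis_funcs):
    """Fit y(x) = sum_k a_k * P_k(x) by weighted least squares.

    Builds M_kj = sum_i P_k(x_i) P_j(x_i) / sigma_i^2 and
    v_k = sum_i y_i P_k(x_i) / sigma_i^2, then solves M a = v.
    """
    A = np.array([f(x) for f in basis_funcs]).T   # design matrix, shape (N, K)
    w = 1.0 / sigma**2
    M = A.T @ (A * w[:, None])                    # normal-equations matrix
    v = A.T @ (w * y)                             # right-hand side
    a = np.linalg.solve(M, v)                     # best-fit scale factors a_k
    return a, M

# Example: quadratic polynomial, P_k(x) = x**k for k = 0, 1, 2
# a, M = general_linear_fit(x, y, sigma, [lambda x: x**0, lambda x: x**1, lambda x: x**2])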

7 Uncertainties of the answers We want to know the uncertainties of the best-fit values of the parameters a_j. For a one-parameter fit we’ve seen that the variance is the reciprocal of the summed weights. By analogy, for a multi-parameter fit the covariance of any pair of parameters comes from the inverse of the corresponding matrix. Hence we get a local quadratic approximation to the χ² surface using the Hessian matrix H.
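A sketch of the relations this slide presumably shows, adopting the convention (implied by slide 8) that H equals the normal-equations matrix M rather than the raw second-derivative matrix of χ², which would carry an extra factor of 2:

\operatorname{Var}(a) = \frac{1}{\sum_i 1/\sigma_i^2} \quad \text{(one-parameter fit)}

\operatorname{Cov}(a_j, a_k) = \bigl[\mathbf{H}^{-1}\bigr]_{jk}, \qquad H_{jk} = \sum_i \frac{P_j(x_i)\,P_k(x_i)}{\sigma_i^2}

\Delta\chi^2 \approx \sum_{j,k} H_{jk}\,\delta a_j\,\delta a_k \quad \text{near the minimum}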

8 The Hessian matrix Defined from the second derivatives of χ² with respect to the parameters. It’s the same matrix M we derived from the normal equations! Example: y = ax + b.
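A short Python sketch of the y = ax + b example: build the Hessian / normal-equations matrix for slope a and intercept b, and read the parameter uncertainties off its inverse (the function name is illustrative).

import numpy as np

def line_hessian_and_errors(x, sigma):
    """Hessian (normal-equations) matrix for y = a*x + b and parameter errors.

    H = [[sum(x^2/sigma^2), sum(x/sigma^2)],
         [sum(x/sigma^2),   sum(1/sigma^2)]]
    and the parameter covariance matrix is the inverse of H.
    """
    w = 1.0 / sigma**2
    H = np.array([[np.sum(w * x**2), np.sum(w * x)],
                  [np.sum(w * x),    np.sum(w)]])
    cov = np.linalg.inv(H)
    sigma_a, sigma_b = np.sqrt(np.diag(cov))   # 1-sigma uncertainties on slope and intercept
    return H, cov, sigma_a, sigma_b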

9 Principal axes of χ² ellipsoid The eigenvectors of H define the principal axes of the χ² ellipsoid. H is diagonalised by replacing the coordinates x_i with their offsets from the weighted mean of the x_i. This makes the off-diagonal element vanish and so orthogonalises the parameters. [Figure: χ² ellipses in the (a, b) plane.]
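The replacement is presumably the same centering used in slide 1; a sketch for the straight-line case:

x_i' = x_i - \bar{x}, \qquad \bar{x} = \frac{\sum_i x_i/\sigma_i^2}{\sum_i 1/\sigma_i^2} \;\Rightarrow\; \sum_i \frac{x_i'}{\sigma_i^2} = 0, \qquad \mathbf{H} = \begin{pmatrix} \sum_i x_i'^2/\sigma_i^2 & 0 \\ 0 & \sum_i 1/\sigma_i^2 \end{pmatrix}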

10 Principal axes for general linear models In the general linear case we fit K functions P_k with scale factors a_k. The Hessian matrix has one element for each pair of basis functions, and the normal equations take the same matrix form as before. This gives K-dimensional ellipsoidal surfaces of constant χ² whose principal axes are eigenvectors of the Hessian matrix H. Use standard matrix methods to find the linear combinations of the x_i, y_i that diagonalise H.
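A brief Python sketch of extracting the principal axes from H, continuing the hypothetical helpers above:

import numpy as np

def principal_axes(H):
    """Principal axes of the constant-chi^2 ellipsoids defined by the Hessian H.

    H is symmetric, so its eigenvectors are orthogonal; the eigenvalues give
    the inverse variances along those principal-axis directions.
    """
    eigenvalues, eigenvectors = np.linalg.eigh(H)   # eigh is for symmetric matrices
    axis_sigmas = 1.0 / np.sqrt(eigenvalues)        # 1-sigma extent along each principal axis
    return eigenvectors, axis_sigmas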

