Chapter 14: General Linear Least Squares and Nonlinear Regression


1 Chapter 14 General Linear Least Squares and Nonlinear Regression

2 Linear least-squares fit: y = -20.5717 + 3.6005x, error Sr = 4201.3, correlation r = 0.4434
x = [-2.5 3.0 1.7 -4.9 0.6 -0.5 4.0 -2.2 -4.3 -0.2];
y = [-20.1 -21.8 -6.0 -65.4 0.2 0.6 -41.3 -15.4 -56.1 0.5];

3 The straight-line fit has a large error and poor correlation; it is preferable to fit a parabola.

4 Polynomial Regression
- Quadratic least squares
- y = f(x) = a0 + a1*x + a2*x^2
- Minimize the total squared error Sr = sum over i of [ y_i - (a0 + a1*x_i + a2*x_i^2) ]^2

5 Quadratic Least Squares
- Setting the partial derivatives of Sr with respect to a0, a1, a2 to zero gives a symmetric 3x3 system of normal equations
- Use Cholesky decomposition to solve the symmetric matrix, or use the MATLAB backslash operator: z = A\r
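A minimal MATLAB sketch of this step, with an assumed helper name quad_ls (the course's Quadratic_LS routine may be organized differently): it builds the symmetric normal-equation matrix from the data and solves it with the backslash operator.

function z = quad_ls(x, y)
% Quadratic least squares: fit y = a0 + a1*x + a2*x^2 via the normal equations.
x = x(:); y = y(:);
n = length(x);
A = [n          sum(x)     sum(x.^2);
     sum(x)     sum(x.^2)  sum(x.^3);
     sum(x.^2)  sum(x.^3)  sum(x.^4)];    % symmetric coefficient matrix
r = [sum(y); sum(x.*y); sum((x.^2).*y)];  % right-hand side
z = A \ r;                                % z = [a0; a1; a2]

With the example2 data, z = quad_ls(x,y) should reproduce approximately [0.2668; 0.7200; -2.7231], matching the listing on slide 8.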

6 Standard error for 2nd-order polynomial regression: Syx = sqrt( Sr / (n - 3) ), where n is the number of observations and a 2nd-order polynomial has 3 coefficients (start off with n degrees of freedom, use up m+1 for an m-th-order polynomial).


8 » [x,y]=example2
» z=Quadratic_LS(x,y)

      x          y       a0+a1*x+a2*x^2   y-a0-a1*x-a2*x^2
  -2.5000   -20.1000       -18.5529          -1.5471
   3.0000   -21.8000       -22.0814           0.2814
   1.7000    -6.0000        -6.3791           0.3791
  -4.9000   -65.4000       -68.6439           3.2439
   0.6000     0.2000        -0.2816           0.4816
  -0.5000     0.6000        -0.7740           1.3740
   4.0000   -41.3000       -40.4233          -0.8767
  -2.2000   -15.4000       -14.4973          -0.9027
  -4.3000   -56.1000       -53.1802          -2.9198
  -0.2000     0.5000         0.0138           0.4862

err = 25.6043
Syx = 1.9125      (standard error of the estimate)
r = 0.9975        (correlation coefficient)
z = 0.2668  0.7200  -2.7231

y = 0.2668 + 0.7200x - 2.7231x^2

function [x,y] = example2
x = [-2.5 3.0 1.7 -4.9 0.6 -0.5 4.0 -2.2 -4.3 -0.2];
y = [-20.1 -21.8 -6.0 -65.4 0.2 0.6 -41.3 -15.4 -56.1 0.5];

9 Quadratic least squares: y = 0.2668 + 0.7200x - 2.7231x^2, error Sr = 25.6043, correlation r = 0.9975

10 Cubic Least Squares
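A minimal sketch of the corresponding cubic fit, using an assumed helper name cubic_ls rather than the course's Cubic_LS routine; it forms the design matrix with columns 1, x, x^2, x^3 and solves the 4x4 normal equations.

function z = cubic_ls(x, y)
% Cubic least squares: fit y = a0 + a1*x + a2*x^2 + a3*x^3.
x = x(:); y = y(:);
Z = [ones(size(x))  x  x.^2  x.^3];   % design matrix, one column per basis term
z = (Z' * Z) \ (Z' * y);              % normal equations; z = [a0; a1; a2; a3]

For the example2 data this should give roughly [0.6513; 1.5946; -2.8078; -0.0608], as in the listing on slide 12.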


12 » [x,y]=example2;
» z=Cubic_LS(x,y)

      x          y       p(x)=a0+a1*x+a2*x^2+a3*x^3    y-p(x)
  -2.5000   -20.1000          -19.9347                -0.1653
   3.0000   -21.8000          -21.4751                -0.3249
   1.7000    -6.0000           -5.0508                -0.9492
  -4.9000   -65.4000          -67.4300                 2.0300
   0.6000     0.2000            0.5842                -0.3842
  -0.5000     0.6000           -0.8404                 1.4404
   4.0000   -41.3000          -41.7828                 0.4828
  -2.2000   -15.4000          -15.7997                 0.3997
  -4.3000   -56.1000          -53.2914                -2.8086
  -0.2000     0.5000            0.2206                 0.2794

err = 15.7361
Syx = 1.6195
r = 0.9985        (correlation coefficient)
z = 0.6513  1.5946  -2.8078  -0.0608

y = 0.6513 + 1.5946x - 2.8078x^2 - 0.0608x^3

13 Comparison of the linear, quadratic, and cubic least-squares fits:
» [x,y]=example2;
» z1=Linear_LS(x,y); z1
z1 = -20.5717  3.6005
» z2=Quadratic_LS(x,y); z2
z2 = 0.2668  0.7200  -2.7231
» z3=Cubic_LS(x,y); z3
z3 = 0.6513  1.5946  -2.8078  -0.0608
» x1=min(x); x2=max(x); xx=x1:(x2-x1)/100:x2;
» yy1=z1(1)+z1(2)*xx;
» yy2=z2(1)+z2(2)*xx+z2(3)*xx.^2;
» yy3=z3(1)+z3(2)*xx+z3(3)*xx.^2+z3(4)*xx.^3;
» H=plot(x,y,'r*',xx,yy1,'g',xx,yy2,'b',xx,yy3,'m');
» xlabel('x'); ylabel('y');
» set(H,'LineWidth',3,'MarkerSize',12);
» print -djpeg075 regres4.jpg

14 Linear least squares: y = -20.5717 + 3.6005x
Quadratic: y = 0.2668 + 0.7200x - 2.7231x^2
Cubic: y = 0.6513 + 1.5946x - 2.8078x^2 - 0.0608x^3

15 Standard error for polynomial regression: Syx = sqrt( Sr / (n - (m + 1)) ), where n is the number of observations and m is the order of the polynomial (start off with n degrees of freedom, use up m+1 for an m-th-order polynomial).
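A small sketch that computes this standard error and the correlation coefficient for any of the fits above; the helper name fit_quality is an assumption, and z is taken to hold the coefficients a0..am in ascending order as in the listings.

function [Syx, r] = fit_quality(x, y, z)
% Standard error of the estimate and correlation coefficient for an
% m-th-order polynomial fit with coefficients z = [a0; a1; ...; am].
x = x(:); y = y(:); z = z(:);
n  = length(y);
m  = length(z) - 1;
yp = polyval(flipud(z), x);        % a0 + a1*x + ... + am*x^m at the data points
Sr = sum((y - yp).^2);             % sum of squared residuals
St = sum((y - mean(y)).^2);        % total spread about the mean
Syx = sqrt(Sr / (n - (m + 1)));    % standard error of the estimate
r   = sqrt((St - Sr) / St);        % correlation coefficient

For the quadratic fit of slide 8 this should return roughly Syx = 1.9125 and r = 0.9975.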

16 Multiple Linear Regression
- Dependence on more than one variable
- e.g. dependence of runoff volume on soil type and land cover
- or dependence of aerodynamic drag on automobile shape and speed

17
- With two independent variables, the model y = a0 + a1*x1 + a2*x2 describes a surface
- Find the best-fit “plane” to the data

18 Multiple Linear Regression
- Much like polynomial regression
- Sum of squared residuals: Sr = sum over i of ( y_i - a0 - a1*x1_i - a2*x2_i )^2

19
- Rearrange the equations into the normal equations:
  [ n       sum(x1)      sum(x2)     ] [a0]   [ sum(y)    ]
  [ sum(x1) sum(x1^2)    sum(x1*x2)  ] [a1] = [ sum(x1*y) ]
  [ sum(x2) sum(x1*x2)   sum(x2^2)   ] [a2]   [ sum(x2*y) ]
- Very similar to polynomial regression

20 Multiple Linear Regression
- Once again, solve by any matrix method
- Cholesky decomposition is appropriate: the coefficient matrix is symmetric and positive definite
- Very useful for fitting the power equation y = a0 * x1^a1 * x2^a2, which becomes linear after taking logarithms; see the sketch below
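A minimal sketch of two-variable multiple linear regression, with an assumed helper name multi_lin (the course's Multi_Linear routine also prints the residual table and fit statistics):

function z = multi_lin(x1, x2, y)
% Fit y = a0 + a1*x1 + a2*x2 by solving the 3x3 normal equations.
x1 = x1(:); x2 = x2(:); y = y(:);
n = length(y);
A = [n         sum(x1)      sum(x2);
     sum(x1)   sum(x1.^2)   sum(x1.*x2);
     sum(x2)   sum(x1.*x2)  sum(x2.^2)];
r = [sum(y); sum(x1.*y); sum(x2.*y)];
z = A \ r;    % z = [a0; a1; a2]

With the concrete data of slide 22 this should give approximately z = [3358; 60.5; -1827.8]. For the power equation, fitting log(y) against log(x1) and log(x2) with the same routine recovers log(a0), a1, and a2.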

21 Example: Strength of concrete depends on cure time and cement/water ratio (or water content W/C)

22 » x1=[2 4 5 16 3 7 8 27 14 20];
» x2=[0.42 0.55 0.7 0.53 0.61 0.67 0.55 0.66 0.42 0.58];
» y=[2770 2639 2519 3450 2315 2545 2613 3694 3414 3634];
» H=plot3(x1,x2,y,'ro'); grid on; set(H,'LineWidth',5);
» H1=xlabel('Cure Time (days)'); set(H1,'FontSize',12)
» H2=ylabel('Water Content'); set(H2,'FontSize',12)
» H3=zlabel('Strength (psi)'); set(H3,'FontSize',12)

23 Hand Calculations

24 Solve by Cholesky decomposition with forward and back substitutions.
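A brief sketch of that step in MATLAB, assuming A and r are the normal-equation matrix and right-hand side built as in the multi_lin sketch above (chol is the built-in Cholesky factorization):

U = chol(A);    % factor A = U'*U with U upper triangular
w = U' \ r;     % forward substitution: solve U'*w = r
z = U \ w;      % back substitution:    solve U*z = w, giving [a0; a1; a2]

The backslash operator performs the triangular substitutions directly because U and U' are triangular.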


26 » [x1,x2,y]=concrete;
» z=Multi_Linear(x1,x2,y)

  x1     x2      y      a0+a1*x1+a2*x2   y-a0-a1*x1-a2*x2
   2    0.42   2770        2711.3             58.652
   4    0.55   2639        2594.7             44.267
   5    0.70   2519        2381.1            137.94
  16    0.53   3450        3357.3             92.72
   3    0.61   2315        2424.6           -109.57
   7    0.67   2545        2556.9            -11.895
   8    0.55   2613        2836.7           -223.73
  27    0.66   3694        3785.2            -91.158
  14    0.42   3414        3437.3            -23.339
  20    0.58   3634        3507.9            126.11

Syx = 130.92
r = 0.97553                      (correlation coefficient)
z = 3358  60.499  -1827.8        (a0, a1, a2)

function [x1,x2,y] = concrete
x1=[2 4 5 16 3 7 8 27 14 20];
x2=[0.42 0.55 0.7 0.53 0.61 0.67 0.55 0.66 0.42 0.58];
y=[2770 2639 2519 3450 2315 2545 2613 3694 3414 3634];

27 Multiple Linear Regression

28 » xx=0:0.02:1; yy=0:0.02:1; [x,y]=meshgrid(xx,yy);
» z=2*x+3*y+2;
» surfc(x,y,z); grid on
» axis([0 1 0 1 0 7])
» xlabel('x1'); ylabel('x2'); zlabel('y')

29 General Linear Least Squares
- Simple linear, polynomial, and multiple linear regressions are special cases of the general linear least squares model y = a0*z0 + a1*z1 + ... + am*zm + e
- Examples: polynomial regression uses the basis functions z0 = 1, z1 = x, ..., zm = x^m; multiple linear regression uses z0 = 1, z1 = x1, ..., zm = xm
- The model is linear in the coefficients a_i, but the basis functions z_i may be highly nonlinear

30 General Linear Least Squares
- General equation in matrix form: {Y} = [Z]{A} + {E}
- where {Y} holds the dependent variables, [Z] holds the basis functions z_i evaluated at the observation points, {A} holds the regression coefficients, and {E} holds the residuals

31 General Linear Least Squares
- As usual, take partial derivatives to minimize the square error Sr
- This leads to the normal equations [Z]^T [Z] {A} = [Z]^T {Y}
- Solve this for {A} using Cholesky decomposition, LU decomposition, or the matrix inverse
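A minimal sketch of the general case, where the basis functions are supplied as function handles; the name basis and the quadratic choice below are assumptions used only for illustration, applied here to the example2 data of slide 8.

% Basis functions z_i(x); this particular choice reproduces the quadratic fit.
basis = {@(x) ones(size(x)), @(x) x, @(x) x.^2};
xc = x(:);  yc = y(:);
Z = zeros(length(xc), numel(basis));
for i = 1:numel(basis)
    Z(:, i) = basis{i}(xc);        % column i holds z_i evaluated at the data
end
A = (Z' * Z) \ (Z' * yc);          % normal equations [Z]'[Z]{A} = [Z]'{Y}
% Equivalently, A = Z \ yc solves the same least-squares problem directly.

Swapping in other basis functions (sines, exponentials, products of several variables) gives the other special cases without changing the solution step.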

32 Nonlinear Regression
- Gauss-Newton method: use a Taylor series expansion to linearize the original equation
- y_i = f(x_i; a_1, a_2, ..., a_m) + e_i, a nonlinear function of the parameters a_1, a_2, ..., a_m, where f is a nonlinear function of x
- (x_i, y_i) is one of a set of n observations

33 Nonlinear Regression
- Use a Taylor series for f, and truncate the higher-order terms:
  f(x_i)_{j+1} ≈ f(x_i)_j + (∂f(x_i)_j/∂a_1) Δa_1 + ... + (∂f(x_i)_j/∂a_m) Δa_m
- j = the initial guess, j+1 = the prediction (improved guess)

34 Nonlinear Regression
- Plug the Taylor series into the original equation:
  y_i = f(x_i)_j + (∂f(x_i)_j/∂a_1) Δa_1 + ... + (∂f(x_i)_j/∂a_m) Δa_m + e_i
- or, moving the known term to the left,
  y_i - f(x_i)_j = (∂f(x_i)_j/∂a_1) Δa_1 + ... + (∂f(x_i)_j/∂a_m) Δa_m + e_i

35 Gauss-Newton Method
- Given all n equations
- Set up the matrix equation {D} = [Z_j]{ΔA} + {E}

36 where [Z_j] is the matrix of partial derivatives ∂f(x_i)_j/∂a_k evaluated at the current guess (the Jacobian), {D} is the vector of differences y_i - f(x_i)_j, {ΔA} is the vector of parameter corrections Δa_k, and {E} is the vector of residuals.

37 Gauss-Newton Method
- Using the same least squares approach, minimize the sum of squares of the residuals e
- Get {ΔA} from the normal equations [Z_j]^T [Z_j] {ΔA} = [Z_j]^T {D}
- Now modify a_1, a_2, ..., a_m with {ΔA} and repeat the procedure until convergence is reached
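A minimal sketch of one such iteration loop; the function handles f and dfda, and the helper name gn_sketch, are assumptions and not the course's gauss_newton routine.

function a = gn_sketch(f, dfda, x, y, a, tol, itmax)
% Gauss-Newton iteration for a model f(x, a) with parameter vector a.
%   f(x, a)    : model values at the points x
%   dfda(x, a) : n-by-m Jacobian, column k = df/da_k at the points x
x = x(:); y = y(:); a = a(:);
for iter = 1:itmax
    Zj = dfda(x, a);                % partial derivatives at the current guess
    D  = y - f(x, a);               % differences y_i - f(x_i)_j
    dA = (Zj' * Zj) \ (Zj' * D);    % normal equations for the corrections
    a  = a + dA;                    % update the parameters
    if norm(dA) < tol               % converged when the corrections are small
        return
    end
end

For the mass-spring example below one would pass handles for the damped-sinusoid model and its two partial derivatives, together with the initial guess [2; 3].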

38 Example: damped sinusoidal data; model it with a two-parameter damped sinusoid f(x; a0, a1) (the model formula appears on the slide as an image).
function [x,y] = mass_spring
x = [0.00 0.11 0.18 0.25 0.32 0.44 0.55 0.61 0.68 0.80 ...
     0.92 1.01 1.12 1.22 1.35 1.45 1.60 1.67 1.76 1.83 2.00];
y = [1.03 0.78 0.62 0.22 0.05 -0.20 -0.45 -0.50 -0.45 -0.31 ...
     -0.21 -0.11 0.04 0.12 0.22 0.23 0.18 0.10 0.07 -0.02 -0.10];

39 Gauss-Newton Method


41 » [x,y]=mass_spring;
» a=gauss_newton(x,y)
Enter the initial guesses [a0,a1] = [2,3]
Enter the tolerance tol = 0.0001
Enter the maximum iteration number itmax = 50
n = 21        (21 data points; initial guesses a0 = 2, a1 = 3)

  iter      a0       a1       da0      da1
 1.0000   2.1977   5.0646   0.1977   2.0646
 2.0000   1.0264   3.9349  -1.1713  -1.1296
 3.0000   1.1757   4.3656   0.1494   0.4307
 4.0000   1.1009   4.4054  -0.0748   0.0398
 5.0000   1.1035   4.3969   0.0026  -0.0085
 6.0000   1.1030   4.3973  -0.0005   0.0003
 7.0000   1.1030   4.3972   0.0000   0.0000
Gauss-Newton method has converged
a = 1.1030  4.3972

42 a0 = 1.1030, a1 = 4.3972




