
1 Adjustment theory / least squares adjustment. Tutorial at IWAA2010 / examples. Markus Schlösser, Hamburg, 15.09.2010

2 Markus Schlösser | adjustment theory | 15.09.2010 | Page 2 random numbers
> Computer-generated random numbers
  - are only pseudo-random numbers (prn)
  - mostly only uniformly distributed prn are available (C, Pascal, Excel, …)
  - some packages (Octave, Matlab, etc.) have normally distributed prn ("randn")
> Normally distributed prn can be obtained by (see the sketch below)
  - the Box-Muller method
  - the sum of 12 U(0,1) values (an example of the central limit theorem)
  - ….
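A minimal sketch (Python/NumPy, not part of the slides) of the two methods named above; the seed and sample sizes are arbitrary:

```python
# Two common ways to turn uniform pseudo-random numbers into (approximately)
# normally distributed ones.
import numpy as np

rng = np.random.default_rng(42)  # uniform pseudo-random number source

def box_muller(n):
    """Exact transform: two U(0,1) samples give two N(0,1) samples."""
    u1 = rng.random(n)
    u2 = rng.random(n)
    r = np.sqrt(-2.0 * np.log(u1))
    return r * np.cos(2.0 * np.pi * u2)   # the sine branch gives a second sample

def sum_of_twelve(n):
    """Central-limit approximation: the sum of 12 U(0,1) has mean 6, variance 1."""
    return rng.random((n, 12)).sum(axis=1) - 6.0

print(box_muller(100_000).std(), sum_of_twelve(100_000).std())  # both close to 1
```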

3 Markus Schlösser | adjustment theory | 15.09.2010 | Page 3 random numbers / distributions

4 Markus Schlösser | adjustment theory | 15.09.2010 | Page 4 random numbers / distributions

5 Markus Schlösser | adjustment theory | 15.09.2010 | Page 5 random variables / repeated measurements
Random variable: observations = "real" value (normally unknown) + normally distributed errors
Observations: 3.1538 3.1535 3.1545 3.1524 3.1544 3.1542 3.1540 3.1538 3.1529 3.1545 3.1521 3.1530 3.1532 3.1536 ..
"Real" value    3.1534    (normally not known)
Sigma           0.0010    (theoretical standard deviation)
From 10 measurements:
Mean            3.1538
Median          3.1539
s_single        0.0007    (empirical standard deviation of a single measurement)
s_mean          0.00022   (empirical standard deviation of the mean value)
t(0.975; 9)     2.2622    (quantile of Student's t-distribution, 5% error probability, 9 = 10-1 degrees of freedom)
PV              0.00050
P(mean - PV <= true value <= mean + PV) = 0.95   (confidence interval for the mean value)
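These statistics can be reproduced with a few lines of NumPy/SciPy; a sketch using the first ten observation values listed above:

```python
# Empirical mean, standard deviations and 95% confidence interval for the mean
# of repeated measurements (first 10 observations from page 5).
import numpy as np
from scipy import stats

obs = np.array([3.1538, 3.1535, 3.1545, 3.1524, 3.1544,
                3.1542, 3.1540, 3.1538, 3.1529, 3.1545])
n = obs.size
mean = obs.mean()
s_single = obs.std(ddof=1)            # empirical sigma of a single measurement
s_mean = s_single / np.sqrt(n)        # empirical sigma of the mean
t = stats.t.ppf(0.975, df=n - 1)      # two-sided 95%, n-1 degrees of freedom
pv = t * s_mean                       # half-width of the confidence interval
# reproduces the page-5 values: 3.1538, 0.0007, 0.00022, 0.00050
print(f"mean={mean:.4f}  s_single={s_single:.4f}  s_mean={s_mean:.5f}  PV={pv:.5f}")
```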

6 Markus Schlösser | adjustment theory | 15.09.2010 | Page 6 random variables / repeated measurements
Random variable: observations = "real" value (normally unknown) + normally distributed errors
Observations: 3.1538 3.1535 3.1545 3.1524 3.1544 3.1542 3.1540 3.1538 3.1529 3.1545 3.1521 3.1530 3.1532 3.1536 ..
"Real" value    3.1534    (normally not known)
Sigma           0.0010    (theoretical standard deviation)
From 100 measurements:
Mean            3.1534
Median          3.1534
s_single        0.0010    (empirical standard deviation of a single measurement)
s_mean          0.00010   (empirical standard deviation of the mean value)
t(0.975; 99)    1.9842    (quantile of Student's t-distribution, 5% error probability, 99 = 100-1 degrees of freedom)
PV              0.00020
P(mean - PV <= true value <= mean + PV) = 0.95   (confidence interval for the mean value)

7 Markus Schlösser | adjustment theory | 15.09.2010 | Page 7 random variables / repeated measurements
Random variable: observations = "real" value (normally unknown) + normally distributed errors
Observations: 3.1538 3.1535 3.1545 3.1524 3.1544 3.1542 3.1540 3.1538 3.1529 3.1545 3.1521 3.1530 3.1532 3.1536 ..
"Real" value    3.1534    (normally not known)
Sigma           0.0010    (theoretical standard deviation)
From 1000 measurements:
Mean            3.1534
Median          3.1535
s_single        0.0010    (empirical standard deviation of a single measurement)
s_mean          0.00003   (empirical standard deviation of the mean value)
t(0.975; 999)   1.9623    (quantile of Student's t-distribution, 5% error probability, 999 = 1000-1 degrees of freedom)
PV              0.00006
P(mean - PV <= true value <= mean + PV) = 0.95   (confidence interval for the mean value)

8 Markus Schlösser | adjustment theory | 15.09.2010 | Page 8 random variables / repeated measurements
Random variable: observations = "real" value (normally unknown) + normally distributed errors
Observations: 3.1538 31.535 3.1545 3.1524 3.1544 3.1542 3.1540 3.1538 3.1529 3.1545 3.1521 31.530 3.1532 3.1536 ..   (note the blunders 31.535 and 31.530)
"Real" value    3.1534    (normally not known)
Sigma           0.0010    (theoretical standard deviation)
From 10 measurements:
Mean            5.9920
Median          3.1541
s_single        8.9749    (empirical standard deviation of a single measurement)
s_mean          2.83812   (empirical standard deviation of the mean value)
t(0.975; 9)     2.2622    (quantile of Student's t-distribution, 5% error probability, 9 = 10-1 degrees of freedom)
PV              6.42027
P(mean - PV <= true value <= mean + PV) = 0.95   (confidence interval for the mean value)
blunder: a single gross error ruins the mean and the standard deviations, while the median is hardly affected (see the sketch below)
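A short sketch of the same computation with the gross error included (the first ten values, containing one blunder); it reproduces the distorted mean and standard deviation shown above while the median stays near the undisturbed value:

```python
# Effect of one blunder (31.535 instead of 3.1535) on mean, median and sigma.
import numpy as np

obs = np.array([3.1538, 31.535, 3.1545, 3.1524, 3.1544,
                3.1542, 3.1540, 3.1538, 3.1529, 3.1545])
print("mean  :", obs.mean())        # about 5.99, pulled far away from 3.1534
print("median:", np.median(obs))    # about 3.1541, robust against the blunder
print("s     :", obs.std(ddof=1))   # about 8.97, inflated by the blunder
```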

9 Markus Schlösser | adjustment theory | 15.09.2010 | Page 9 error propagation
> assume we have
  - instrument stand S
  - fixed point F
  - S and F both with known (error-free) coordinates
  - horizontal angle to F and P, distance from S to P
  - instrument accuracy well known from other experiments
> looking for
  - coordinates of P
  - confidence ellipse of P

10 Markus Schlösser | adjustment theory | 15.09.2010 | Page 10 error propagation
Parameters:
  Point   X [m]    Y [m]
  S       10.332   10.642
  F       20.673    2.145
  t_SF = 356.2119 gon
Observations (= X):
  r_SF = 321.6427 gon
  r_SP =  14.9684 gon
  d_SP =  10.2486 m
Unknowns (= Z):
  X_P = 17.631 m
  Y_P = 17.836 m
standard deviations of the observations -> variance / covariance matrix of the observations

11 Markus Schlösser | adjustment theory | 15.09.2010 | Page 11 error propagation
F = [ 0.701952    0.114658   -0.114658 ]
    [ 0.712224   -0.113004    0.113004 ]
F contains the partial derivatives of the function that maps the observations to the unknowns; build them (numerically) as difference quotients.

12 Markus Schlösser | adjustment theory | 15.09.2010 | Page 12 error propagation
Σ_ZZ = [ 0.022076   0.017666 ]
       [ 0.017666   0.022589 ]
Σ_ZZ = covariance matrix of the unknowns; the variances of the coordinates are on the main diagonal.
BUT: this information is incomplete and could even be misleading, better use Helmert's error ellipse:
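A sketch of pages 10 to 12 in Python/NumPy: the Jacobian F is built numerically from difference quotients and the covariance is propagated as Σ_ZZ = F Σ_XX Fᵀ. The mapping phi and the a-priori sigmas (0.2 mm for the distance, 0.3 mgon for each direction) are assumptions, chosen so that the result is consistent with the matrices shown on the slides:

```python
# Numerical Jacobian of phi: (d_SP, r_SP, r_SF) -> (Y_P, X_P), then covariance
# propagation Sigma_ZZ = F * Sigma_XX * F^T.
import numpy as np

GON = np.pi / 200.0                        # 1 gon in radians
XS, YS = 10.332, 10.642                    # stand S (error free)
XF, YF = 20.673, 2.145                     # fixed point F (error free)

def phi(obs):
    d_sp, r_sp, r_sf = obs                 # [m], [gon], [gon]
    t_sf = np.arctan2(YF - YS, XF - XS)    # azimuth S -> F
    t_sp = t_sf + (r_sp - r_sf) * GON      # azimuth S -> P via the orientation
    return np.array([YS + d_sp * np.sin(t_sp),   # Y_P
                     XS + d_sp * np.cos(t_sp)])  # X_P

obs0 = np.array([10.2486, 14.9684, 321.6427])    # d_SP, r_SP, r_SF (page 10)
sigma = np.array([0.0002, 0.0003, 0.0003])       # assumed: 0.2 mm, 0.3 mgon, 0.3 mgon

# difference quotients, column by column: (phi(x + h*e_i) - phi(x)) / h
h = 1e-7
F = np.column_stack([(phi(obs0 + h * e) - phi(obs0)) / h for e in np.eye(3)])

Sigma_XX = np.diag(sigma**2)
Sigma_ZZ = F @ Sigma_XX @ F.T
print(F)                  # close to the F matrix on page 11
print(Sigma_ZZ * 1e6)     # in mm^2; close to the page-12 matrix, which appears to use these units
```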

13 Markus Schlösser | adjustment theory | 15.09.2010 | Page 13 error propagation
Or even better, use a confidence ellipse. That means that with a chosen probability P the target point is inside this confidence ellipse.
P = 0.99 (= 99%), quantile of the χ²-distribution with 2 degrees of freedom
A_0.99 = 0.61 mm
B_0.99 = 0.21 mm
orientation θ = 50 gon
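A sketch of how the Helmert error ellipse and the 99% confidence ellipse follow from the covariance matrix of page 12 (assuming its units are mm²):

```python
# Helmert error ellipse and 99% confidence ellipse from a 2x2 covariance matrix.
import numpy as np
from scipy import stats

Sigma = np.array([[0.022076, 0.017666],
                  [0.017666, 0.022589]])            # mm^2, from page 12

eigval, eigvec = np.linalg.eigh(Sigma)              # eigenvalues in ascending order
a_h, b_h = np.sqrt(eigval[1]), np.sqrt(eigval[0])   # Helmert semi-axes [mm]

scale = np.sqrt(stats.chi2.ppf(0.99, df=2))         # chi^2 quantile, 2 degrees of freedom
a_99, b_99 = scale * a_h, scale * b_h               # 99% confidence semi-axes

theta_gon = 0.5 * np.arctan2(2 * Sigma[0, 1],
                             Sigma[0, 0] - Sigma[1, 1]) * 200 / np.pi
print(a_99, b_99, theta_gon)                        # about 0.61 mm, 0.21 mm, 50 gon
```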

14 Markus Schlösser | adjustment theory | 15.09.2010 | Page 14 network adjustment Example: Adjustment of a 2D-network with angular and distance measurements

15 Markus Schlösser | adjustment theory | 15.09.2010 | Page 15 adjustment theory
(f = redundancy, i.e. the degrees of freedom of the adjustment problem)
> f = 0
  - no adjustment, but error propagation possible
  - no control of measurement
> f > 0
  - adjustment possible
  - measurement is controlled by itself
  - f > 100 typical for large networks
> f < 0
  - scratch your head

16 Markus Schlösser | adjustment theory | 15.09.2010 | Page 16 network adjustment
  Name   X [m]    Y [m]
  S1      5.000    5.000
  S2      5.000   15.000
  S3      5.000   25.000
  N1      0.000    0.000
  N2     10.000    0.000
  N3      0.000   10.000
  N4     10.000   10.000
  N5      0.000   20.000
  N6     10.000   20.000
  N7      0.000   30.000
  N8     10.000   30.000
- small + regular network
- 2D for easier solution and smaller matrices
- 3 instrument stands (S1, S2, S3)
- 8 target points (N1 … N8)
- all points are unknown (no fixed points)
- initial coordinates are arbitrary, they just have to represent the geometry of the network

17 Markus Schlösser | adjustment theory | 15.09.2010 | Page 17 network adjustment - input
- vector of unknowns
- vector of observations
- vector of coarse coordinates
- vector of standard deviations

18 Markus Schlösser | adjustment theory | 15.09.2010 | Page 18 network adjustment

19 Markus Schlösser | adjustment theory | 15.09.2010 | Page 19 network adjustment – design matrix

20 Markus Schlösser | adjustment theory | 15.09.2010 | Page 20 network adjustment
The A-matrix has lots of zero elements. Its columns belong to the network points, the instrument stands and the orientation unknowns (typical rows are sketched below).
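For illustration, the standard textbook partial derivatives that fill one row of A for a distance and for a direction observation (these formulas are not spelled out on the slides); all other coordinate columns of such a row stay zero, which is where the sparsity comes from:

```python
# One design-matrix row for a distance and for a direction observation in a
# 2D network, with respect to the coordinates of the two points involved.
import numpy as np

def distance_row(xi, yi, xj, yj):
    """Partials of an observed distance i->j w.r.t. (xi, yi, xj, yj)."""
    dx, dy = xj - xi, yj - yi
    d = np.hypot(dx, dy)
    return np.array([-dx / d, -dy / d, dx / d, dy / d])

def direction_row(xi, yi, xj, yj):
    """Partials of a direction i->j [rad] w.r.t. (xi, yi, xj, yj, orientation)."""
    dx, dy = xj - xi, yj - yi
    d2 = dx * dx + dy * dy
    return np.array([dy / d2, -dx / d2, -dy / d2, dx / d2, -1.0])

# example: stand S1 -> target N1, coordinates from page 16
print(distance_row(5.0, 5.0, 0.0, 0.0))
print(direction_row(5.0, 5.0, 0.0, 0.0))
```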

21 Markus Schlösser | adjustment theory | 15.09.2010 | Page 21 network adjustment P is a diagonal matrix, because we assume that observations are uncorrelated
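A minimal sketch with assumed a-priori standard deviations (the slide does not list them):

```python
# Diagonal weight matrix P from a-priori standard deviations (sigma_0 = 1),
# correlations between observations neglected.
import numpy as np

sigma_obs = np.array([0.0003, 0.0003, 0.0002, 0.0002])  # assumed: 2 directions [gon], 2 distances [m]
P = np.diag(1.0 / sigma_obs**2)                          # weights p_i = sigma_0^2 / sigma_i^2
print(P)
```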

22 Markus Schlösser | adjustment theory | 15.09.2010 | Page 22 network adjustment
- The normal matrix shows the dependencies between the elements.
- The normal matrix is singular when adjusting networks without fixed points:
  - straightforward inversion of N is not possible
  - the network datum has to be defined
  - add rows and columns to make the matrix regular

23 Markus Schlösser | adjustment theory | 15.09.2010 | Page 23 network adjustment
Datum deficiency for a 2D network with distances: 2 translations + 1 rotation.
Minimizing the total trace means that the datum is put on all point coordinates.
The additional rows and columns act as constraints:
- no shift of the network in x
- no shift of the network in y
- no rotation of the network around z

24 Markus Schlösser | adjustment theory | 15.09.2010 | Page 24 network adjustment
After the addition of G, the normal matrix is regular and thus invertible; a sketch of the bordered system follows below. N⁻¹ is in general fully occupied (dense).
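A sketch of the datum definition on a small invented distance-only network (not the slide's network): the constraint matrix G removes the two translations and the rotation, and the bordered system becomes regular:

```python
# Free-network datum via constraint matrix G and the bordered normal equations
# [[N, G], [G^T, 0]] for a 3-point, 3-distance network.
import numpy as np

# approximate coordinates (x1, y1, x2, y2, x3, y3)
x0 = np.array([0.0, 0.0, 10.0, 0.0, 5.0, 8.0])

def dist_row(x, i, j):
    """Design-matrix row of an observed distance between point i and point j."""
    xi, yi, xj, yj = x[2*i], x[2*i+1], x[2*j], x[2*j+1]
    d = np.hypot(xj - xi, yj - yi)
    row = np.zeros(x.size)
    row[2*i:2*i+2] = [-(xj - xi)/d, -(yj - yi)/d]
    row[2*j:2*j+2] = [ (xj - xi)/d,  (yj - yi)/d]
    return row

A = np.vstack([dist_row(x0, 0, 1), dist_row(x0, 1, 2), dist_row(x0, 0, 2)])
P = np.eye(3)                       # equal weights for simplicity
N = A.T @ P @ A                     # singular: datum defect = 3 (tx, ty, rotation)

# datum constraints: no shift in x, no shift in y, no rotation around z
G = np.column_stack([
    np.tile([1.0, 0.0], 3),                                     # translation x
    np.tile([0.0, 1.0], 3),                                     # translation y
    np.concatenate([[-x0[2*i+1], x0[2*i]] for i in range(3)]),  # rotation
])

bordered = np.block([[N, G], [G.T, np.zeros((3, 3))]])
print(np.linalg.matrix_rank(N), np.linalg.matrix_rank(bordered))  # 3 vs. 9: regular
```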

25 Markus Schlösser | adjustment theory | 15.09.2010 | Page 25 network adjustment

26 Markus Schlösser | adjustment theory | 15.09.2010 | Page 26 network adjustment
adjusted coordinates and orientation unknowns; information about the error ellipses

27 Markus Schlösser | adjustment theory | 15.09.2010 | Page 27 network adjustment

28 Markus Schlösser | adjustment theory | 15.09.2010 | Page 28 network adjustment
Building the covariance matrix of the unknowns (with the empirical s₀²); 2D network, f degrees of freedom, error probability 1 − α.
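A sketch of this step under the usual least-squares notation (the corrections v, the weight matrix P and the cofactor matrix Q_xx are assumed to come from the adjustment above; f is the redundancy):

```python
# Empirical variance factor s0^2 and scaled covariance matrix of the unknowns.
import numpy as np

def covariance_of_unknowns(v, P, Q_xx, f):
    s0_sq = (v @ P @ v) / f          # empirical variance factor s0^2 = v'Pv / f
    Sigma_xx = s0_sq * Q_xx          # covariance matrix of the unknowns
    return s0_sq, Sigma_xx
```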

29 Markus Schlösser | adjustment theory | 15.09.2010 | Page 29 network adjustment error ellipses with P=0.01 error probability for all network points

30 Markus Schlösser | adjustment theory | 15.09.2010 | Page 30 network adjustment confidence ellipses for all network points, relative confidence ellipses between some network points

31 Markus Schlösser | adjustment theory | 15.09.2010 | Page 31 network adjustment Relative confidence ellipses are most useful in accelerator science, because most of the time you are only interested in the relative accuracy between components. For the relative ellipse between N2 and N4, the ellipse parameters are calculated from the relative covariance matrix Σ_rel,N2N4 (see the sketch below).
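A sketch of the relative covariance matrix; the standard formula (not spelled out on the slide) combines the absolute covariance blocks of the two points and their cross-covariance blocks:

```python
# Relative 2x2 covariance of two points, assuming each point occupies two
# consecutive rows/columns (x, y) of the full covariance matrix of the unknowns.
import numpy as np

def relative_covariance(Sigma_xx, i, j):
    sl_i, sl_j = slice(2*i, 2*i + 2), slice(2*j, 2*j + 2)
    return (Sigma_xx[sl_i, sl_i] + Sigma_xx[sl_j, sl_j]
            - Sigma_xx[sl_i, sl_j] - Sigma_xx[sl_j, sl_i])
```

The ellipse parameters then follow from this 2x2 matrix exactly as for an absolute confidence ellipse on page 13.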

32 Markus Schlösser | adjustment theory | 15.09.2010 | Page 32 network adjustment The estimation of s₀² from the corrections v is used as a statistical test to verify that the model parameters are right: the a-priori variances are OK, with P = 0.99.
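A sketch of this global test, assuming an a-priori variance factor σ₀² = 1 and a two-sided χ² acceptance region:

```python
# Global test of the variance factor: f * s0^2 / sigma0^2 is compared with
# chi^2 quantiles; if it falls inside the acceptance region, the a-priori
# variances (and the functional model) are considered OK.
import numpy as np
from scipy import stats

def global_test(v, P, f, alpha=0.01, sigma0_sq=1.0):
    s0_sq = (v @ P @ v) / f                      # empirical variance factor
    test = f * s0_sq / sigma0_sq                 # ~ chi^2_f if the model is right
    lo = stats.chi2.ppf(alpha / 2, df=f)         # two-sided acceptance region
    hi = stats.chi2.ppf(1 - alpha / 2, df=f)
    return s0_sq, lo <= test <= hi               # True: a-priori variances are OK
```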

33 Markus Schlösser | adjustment theory | 15.09.2010 | Page 33 adjustment Example: 2D ellipsoid fit: deviation of position and rotation of an ellipsoidal flange

34 Markus Schlösser | adjustment theory | 15.09.2010 | Page 34 flange adjustment
- known parameters (e.g. from the workshop drawing)
- unknowns with initial values
- observations
- constraints

35 Markus Schlösser | adjustment theory | 15.09.2010 | Page 35 flange adjustment
Since it is not (easily) possible to separate unknowns and observations in the constraints, we use the general adjustment model (a sketch of one iteration step follows below):
- B contains the derivatives of the constraint function with respect to L
- A contains the derivatives of the constraint function with respect to X
- k are the Lagrange multipliers ("Korrelaten")
- x is the vector of unknowns
- w is the misclosure vector, the constraint function evaluated at (L, X_0)
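A sketch of one iteration step of this general model (Gauss-Helmert model), using the standard bordered normal equations; the function name and the sign conventions are assumptions, not taken from the slides:

```python
# One Gauss-Helmert step for constraints linearized as B*v + A*x + w = 0.
import numpy as np

def gauss_helmert_step(A, B, Q_ll, w):
    """A: d(constraints)/dX, B: d(constraints)/dL, Q_ll: cofactor matrix of the
    observations, w: misclosures at (L, X_0). Returns corrections x (unknowns)
    and v (observations)."""
    Q_ww = B @ Q_ll @ B.T                      # cofactors of the misclosures
    n_x = A.shape[1]
    # bordered system: [[Q_ww, A], [A^T, 0]] * [k, x]^T = [-w, 0]^T
    M = np.block([[Q_ww, A], [A.T, np.zeros((n_x, n_x))]])
    rhs = np.concatenate([-w, np.zeros(n_x)])
    sol = np.linalg.solve(M, rhs)
    k, x = sol[:w.size], sol[w.size:]          # Lagrange multipliers and unknowns
    v = Q_ll @ B.T @ k                         # corrections to the observations
    return x, v
```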

36 Markus Schlösser | adjustment theory | 15.09.2010 | Page 36 flange adjustment

37 Markus Schlösser | adjustment theory | 15.09.2010 | Page 37 flange adjustment Result:

38 Markus Schlösser | adjustment theory | 15.09.2010 | Page 38 the end for now may your [vv] always be minimal …

