1. On Difference Variances as Residual Error Measures in Geolocation
Victor S. Reinhardt, Raytheon Space and Airborne Systems, El Segundo, CA, USA
ION National Technical Meeting, January 28-30, 2008, San Diego, California

2. Two Types of Random Error Variances Used in Navigation
- Residual error (R) variances are used in measuring geolocation error: the mean square (MS) of the difference between position or time data x(t_n) and a trajectory estimated from the data.
- Mth-order difference (Δ) variances are used in measuring time & frequency (T&F) error: the MS of the Mth-order difference of the data x(t_n) over interval τ.
- 1st-order difference: Δ(τ)x(t_n) = x(t_n + τ) - x(t_n).
[Figure: residual error of the data x(t_n) about a trajectory, and the first difference Δ(τ)x(t_n) between x(t_n) and x(t_n + τ).]

3. Two Types of Random Error Variances Used in Navigation (cont.)
- MS of Δ(τ)²x(t_n) → Allan variance (of x).
- MS of Δ(τ)³x(t_n) → Hadamard variance (of x).
[Figure: successive first differences Δ(τ)x(t_n) and Δ(τ)x(t_n + τ) combining into the second difference Δ(τ)²x(t_n).]
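The difference operations on the two slides above can be sketched numerically. A minimal NumPy illustration (the data and the scale factors are illustrative; the conventional Allan/Hadamard normalizations, e.g. 1/(2τ²), are omitted here, so the mean squares below are only proportional to those variances):

```python
import numpy as np

def mth_difference(x, m, stride=1):
    """M-th order finite difference Delta(tau)^M x(t_n), with tau = stride * tau0."""
    d = np.asarray(x, dtype=float)
    for _ in range(m):
        d = d[stride:] - d[:-stride]
    return d

rng = np.random.default_rng(0)
x = rng.normal(size=1000)      # white (p = 0) noise samples x(t_n)

d1 = mth_difference(x, 1)      # Delta(tau) x(t_n) = x(t_n + tau) - x(t_n)
d2 = mth_difference(x, 2)      # second difference -> Allan variance of x
d3 = mth_difference(x, 3)      # third difference  -> Hadamard variance of x

# Mean-square difference statistics (up to conventional normalization)
ms2 = np.mean(d2**2)           # proportional to the Allan variance of x
ms3 = np.mean(d3**2)           # proportional to the Hadamard variance of x
```

For unit-variance white noise the expected mean squares are the sums of squared binomial coefficients: 6 for the second difference (1, -2, 1) and 20 for the third (1, -3, 3, -1).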

4. R-Variances Are Not Known for Good Convergence Properties When Negative Power-Law Noise Is Present
- Negative power-law (neg-p) noise: PSD L_x(f) ∝ f^p with p < 0; p is generally -1, -2, -3, -4 for T&F sources.
- Nevertheless, R-variances are the proper residual error measures in geolocation, despite any such convergence problems, because they address the statistical questions being posed.
- Δ-variances are known for good convergence properties when neg-p noise is present, but Δ-variances don't seem to relate to residual geolocation error the way R-variances do.

5. Paper Will Show
- Δ-variances do measure residual geolocation error under certain conditions: mainly when an (M-1)th-order polynomial is used to estimate the trajectory. Δ- and R-variances are equivalent under these conditions.
- R-variances do have good convergence properties for neg-p noise, because the trajectory estimation process highpass (HP) filters the noise in the data. This holds under more general conditions than the equivalence between Δ- and R-variances.

6. Residual Errors in Geolocation Problems
- Have N data samples x(t_n) over interval T = N·τ_0.
- The data contains a (true) causal trajectory x_c(t) that we want to estimate from the data, and also neg-p noise x_p(t).
- Assume we estimate x_c(t) by fitting a model function x_w,M(t,A) to the data through adjustment of M parameters A = (a_0, a_1, …, a_{M-1}).
[Figure: N data samples over T = N·τ_0, the true trajectory x_c(t), the true noise x_p(t), and the model-function estimate x_w,M(t,A).]

7. Observable Residual (R) Error (of Data from Fit)
- x_j(t_n) = x(t_n) - x_w,M(t_n,A).
- Define the point R-variance at x(t_n) as E{x_j(t_n)²}, where E{…} is the ensemble average over the random noise.
- The average R-variance σ_x-j² is the average of E{x_j(t_n)²} over the N samples; the average can be uniformly or non-uniformly weighted (depending on the weighting used in the fit).
[Figure: observable residual (R) error x_j(t_n) between the data x(t_n) and the fit x_w,M(t,A).]
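A minimal numerical sketch of the observable residual error defined above, with an assumed linear trajectory and white noise (both illustrative choices, not from the paper), using a uniformly weighted least-squares polynomial fit for x_w,M(t,A):

```python
import numpy as np

rng = np.random.default_rng(1)
N, tau0 = 200, 1.0
t = np.arange(N) * tau0                 # sample times t_n over T = N * tau0
x_c = 0.5 + 0.02 * t                    # assumed "true" trajectory x_c(t)
x = x_c + rng.normal(scale=0.1, size=N) # data = trajectory + noise x_p(t)

# Fit the model function x_w,M(t, A) -- here a 1st-order polynomial (M = 2
# parameters) -- by uniform-weighted least squares, then form the residuals.
A = np.polyfit(t, x, deg=1)
x_fit = np.polyval(A, t)
x_j = x - x_fit                         # observable residual (R) error x_j(t_n)

r_var = np.mean(x_j**2)                 # sample estimate of the average R-variance
```

With noise of standard deviation 0.1, the sample R-variance lands near 0.01; the residuals of a least-squares fit with an intercept also average to zero by construction.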

8. The True (W) Error (Between Fit Function & Actual Trajectory)
- x_w(t_n) = x_w,M(t_n,A) - x_c(t_n): the true measure of fit accuracy, but not observable from the data.
- Point W-variance: E{x_w(t_n)²}.
- Average W-variance: σ_x-w².
[Figure: true function (W) error x_w(t_n) between the fit x_w,M(t,A) and the actual trajectory x_c(t), alongside the observable residual (R) error x_j(t_n).]

9. Precise Definition of the Mth-Order Δ-Variance for This Paper
- The overlapping arithmetic average of the square of Δ(τ)^M x(t_n) over the data, plus the ensemble average E{…}. (Total or modified averages are not discussed.)
- All orders M are equal for white (p = 0) noise.
- Can show the Mth-order Δ-variance HP filters L_x(f) with a 2Mth-order zero (at f = 0):
  σ_x,1(τ)² → MS time interval error → 2nd-order zero;
  σ_x,2(τ)² → Allan variance of x → 4th-order zero;
  σ_x,3(τ)² → Hadamard variance of x → 6th-order zero.

10. Δ-Variances as Measures of Residual Error in Geolocation
- Can prove for N = M + 1 data points that the R-variance equals the Mth-order Δ-variance with τ = T/M: σ_x-j² = σ_x,M(T/M)², when x_w,M(t,A) is an (M-1)th-order polynomial, a uniformly weighted least-squares fit (LSQF) is used, and σ_x-j² is the "unbiased" MS (sum of squares divided by N - M).
- Well known for the Allan variance of x: it is equivalent to the 3-sample σ_x-j² when time & frequency offset (a 1st-order polynomial in x) are removed.
- The Hadamard variance of x is equivalent to the 4-sample σ_x-j² when time & frequency offset & frequency drift (a 2nd-order polynomial in x) are removed.
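The N = M + 1 equivalence above can be checked numerically for the Allan-variance case (M = 2, three samples, linear fit removed). A sketch, assuming the paper's Δ-variance carries the binomial normalization 1/C(2M, M) (a common convention; 1/6 for M = 2), under which the identity is exact:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
M = 2                                # difference order (Allan-variance case)
N = M + 1                            # 3 data points
t = np.arange(N, dtype=float)        # equally spaced samples, tau0 = 1
x = rng.normal(size=N)

# "Unbiased" R-variance: LSQ fit of an (M-1)th-order polynomial, residual
# sum of squares divided by N - M (= 1 here).
A = np.polyfit(t, x, deg=M - 1)
rss = np.sum((x - np.polyval(A, t)) ** 2)
r_var = rss / (N - M)

# Mth-order difference over tau = T/M: here the single second difference.
d2 = x[2] - 2 * x[1] + x[0]
delta_var = d2**2 / comb(2 * M, M)   # binomial normalization (assumed)

assert np.isclose(r_var, delta_var)
```

The identity follows because the residual of a linear fit to three equally spaced points lies entirely along the vector (1, -2, 1), whose squared norm is the binomial factor 6.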

11. Can Extend the Equivalence to Any N as Follows
- "Biased" σ_x-j ≡ RMS{x_j} ("biased" → sum of squares divided by N).
- Can show RMS{x_j} → constant as N varies (while T remains fixed).
- Thus for the "unbiased" σ_x-j², an exact relationship between σ_x-j²(N) and σ_x,M(T/M)² exists for each p & N, with a bias factor involving (N - M)/N, similar to the Allan-Barnes "bias" functions for the Allan variance.
[Figure: errors vs N (M = 2), starting at N = M — RMS{x_j} and σ_x-w for f^0, f^-2, and f^-4 noise.]

12. Consequences of the Equivalence Between Δ- and R-Variances
- Δ-variances measure residual geolocation error when x_w,M(t,A) is a polynomial & a uniform LSQF is used.
- For non-uniform weighting (Kalman?), σ_x,M(T_eff/M) should also be an estimate of σ_x-j, where T_eff is the correlation time of the non-uniform fit.
- One doesn't have to remove x_w,M(t,A) from the data when using σ_x,M(T_eff/M) to estimate σ_x-j, because Δ(τ)^M x_w,M(t,A) = 0 when x_w,M(t,A) is an (M-1)th (or lower) order polynomial.
- Explains the sensitivity of the Allan variance to causal frequency drift & the insensitivity of the Hadamard variance to such drift.
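The annihilation property invoked above — the Mth-order difference of any (M-1)th or lower order polynomial is identically zero — can be confirmed in a few lines (the polynomial coefficients are illustrative):

```python
import numpy as np

def mth_difference(x, m):
    """M-th order finite difference of equally spaced samples."""
    d = np.asarray(x, dtype=float)
    for _ in range(m):
        d = d[1:] - d[:-1]
    return d

t = np.arange(20, dtype=float)
M = 3
poly = 2.0 - 0.3 * t + 0.05 * t**2   # (M-1)th = 2nd-order polynomial (drift model)

# Delta(tau)^M applied to an (M-1)th-order polynomial vanishes identically,
# which is why the 3rd-difference (Hadamard) statistic is blind to a
# quadratic in x (frequency drift), while the 2nd-difference (Allan)
# statistic is not.
assert np.allclose(mth_difference(poly, M), 0.0)
assert not np.allclose(mth_difference(poly, M - 1), 0.0)
```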

13. HP Filtering of Noise in R-Variances
- The paper proves the fitting process HP filters L_x(f) in R-variances; the HP filtering order depends on the complexity of the model function x_w,M(t_n,A) used. R-variances are guaranteed to converge if one is free to choose the model function.
- This is true under very general conditions: the fit solution is linear in the data x(t_n), and the fit is the exact solution when no noise is present & x_w,M(t_n,A) is the correct model for x_c(t_n).
- True even when σ_x,M(τ) is not a measure of σ_x-j; applies to any weighting — LSQF, Kalman, ….

14. Graphical Explanation of HP Filtering of L_x(f) in R-Variances
- For white (p = 0) noise the fit behaves in the classical manner: as N → ∞, x_w,M(t_n,A) → x_c(t_n) & σ_x-w → 0 (again, T is fixed as N is varied).
- But for neg-p noise, as N → ∞, x_w,M(t_n,A) does not → x_c(t_n), because the fit solution necessarily tracks the highly correlated low-frequency (LF) noise components in the data.
[Figure: fit solutions for various p — data x(t_n), true trajectory x_c, fit x_w,M, errors x_w and x_j, for f^0, f^-2, and f^-4 noise over T.]

15. This Tracking Causes HP Filtering of L_x(f) in R-Variances
- With HP knee f_T: f_T ≈ 1/T (uniform weighted fit); f_T ≈ 1/T_eff (non-uniform).
- True for all noise — implicit in fitting theory for correlated noise: one can't distinguish correlated noise from causal behavior.
- Only apparent for neg-p noise because most of the power is at f < f_T, while for white noise the power is uniformly distributed over f.
[Figure: fit solutions for various p, repeated from the previous slide.]
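The tracking behavior described above can be seen in a small simulation: for f^-2 (random-walk) noise, a polynomial fit over the full interval absorbs much of the highly correlated LF power, leaving a residual whose RMS is well below that of the raw noise. A sketch under assumed illustrative parameters (fit order, sample count):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2000
t = np.arange(N, dtype=float)
x_p = np.cumsum(rng.normal(size=N))   # f^-2 (random-walk) noise x_p(t_n)

# Uniform-weighted LSQ polynomial fit over the full interval T: the fit
# solution tracks the correlated low-frequency noise components, so the
# residual x_j is the high-pass-filtered remainder (knee near f_T ~ 1/T).
A = np.polyfit(t, x_p, deg=1)
x_j = x_p - np.polyval(A, t)

rms_raw = np.sqrt(np.mean(x_p**2))    # raw noise RMS (dominated by LF power)
rms_res = np.sqrt(np.mean(x_j**2))    # residual RMS after the fit
```

Since least squares projects the data onto the model space, `rms_res` is necessarily below `rms_raw`; for neg-p noise the reduction is typically large because most of the power sits below the knee f_T.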

16. Spectrally Representing R-Variances
- G_j(t,f) & K_x-j(f) describe the HP filtering due to the fit.
- To understand what G_j(t,f) & K_x-j(f) are, consider the following: the fit solution can be written in terms of a Green's function g_w(t,t') because the fit is assumed linear in x(t_n).

17. Spectrally Representing R-Variances (cont.)
- G_w(t,f) = Fourier transform of g_w(t,t') over t'.
- H_s(f)X_p(f) = Fourier transform of x_p(t).
- Green's function for x_j(t): g_j(t,t') = δ(t - t') - g_w(t,t'); its Fourier transform is G_j(t,f) = e^{jωt} - G_w(t,f).

18. Spectrally Representing R-Variances (cont.)
- K_x-j(f) is the average of |G_j(t,f)|² over t (the data).
- σ_c² & σ_x-c² are modeling-error terms, generated when x_w,M(t_n,A) can't follow x_c(t_n) over T.

19. Spectrally Representing R-Variances (cont.)
The paper proves the following:
- |G_j(t,f)|² & K_x-j(f) ∝ f^{2M} (fT << 1) when x_w,M(t,A) is an (M-1)th-order polynomial.
- |G_j(t,f)|² & K_x-j(f) go at least as f² (fT << 1) for any x_w,M(t,A) with a DC component.
- So R-variances are guaranteed to converge if one is free to choose the model function for estimating the trajectory.

20. K_x-j(f) Calculated for Polynomial x_w,M(t,A) & LSQF
[Figures: K_x-j(f) in dB vs log10(fT), N = 1000. Uniform weighted fit: curves ∝ f^2 (M = 1), f^4 (M = 2), f^6 (M = 3), f^8 (M = 4), f^10 (M = 5), with knee at f = 1/T. Non-uniform weighted fit: same orders M = 1…5, weighting concentrated over T_eff < T, with knee at f = 1/T_eff.]

21. Spectral Equations for True Function & Model Errors
- We note that x_j(t_n) + x_w(t_n) = x_p(t_n), so the noise must be LP filtered in x_w(t_n) because the noise in x_j(t_n) is HP filtered.
- The paper derives spectral equations for the W-variances E{x_w(t_n)²} & σ_x-w² in terms of L_x(f), and for the model-error variances σ_c² & σ_x-c² in terms of the dual-frequency Loève spectrum L_c(f_g,f) of x_c(t).
- Note that H_s(f) appears in all the spectral equations — so what is this H_s(f)?

22. H_s(f) = System Response Function (See Reinhardt, FCS, 2006)
- Models the filtering action of the system on x(t), generated by actual filters in the system & topological structures (such as PLLs); acts on all variables the same way: x_p(t), x_c(t), x_j(t), x_w(t).
- H_s(f) can HP filter as well as LP filter L_x(f): the 2-way ranging H_s(f) generates a 2nd-order zero at f = 0, so H_s(f) helps both R- & W-variances converge.
- Topological H_s(f) for 2-way ranging (transponder, one-way delay τ_d, output x(t) - x(t - τ_d)): |H_s(f)|² = 4 sin²(π f τ_d) ∝ f² (f τ_d << 1).
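The two-way-ranging response quoted on this slide is easy to evaluate and check against its low-frequency limit (the delay value and frequency grid below are illustrative):

```python
import numpy as np

tau_d = 0.01                          # one-way delay (illustrative value)
f = np.logspace(-2, 2, 400)           # frequency grid

# Topological response of 2-way ranging: forming x(t) - x(t - tau_d) in the
# time domain gives |H_s(f)|^2 = 4 sin^2(pi f tau_d), a 2nd-order zero at f = 0.
H2 = 4.0 * np.sin(np.pi * f * tau_d) ** 2

# Low-frequency limit: |H_s(f)|^2 -> (2 pi f tau_d)^2, i.e. proportional to f^2,
# which is the HP filtering that helps the variances converge for neg-p noise.
lowf = f * tau_d < 1e-3
assert np.allclose(H2[lowf], (2 * np.pi * f[lowf] * tau_d) ** 2, rtol=1e-4)
```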

23. Summary of Spectral Properties of R & W Errors with Respect to L_x(f)
- At the knee frequency f_T the fitting process HP filters the observable residual error (R-variances) and LP filters the true function error (W-variances).
- H_s(f) filters both the same way: HP at f_l, LP at f_h.
- As T_eff → ∞ (f_T << f_l) the true function error → 0: if H_s(f) alone can overcome the pole in L_x(f), then the W-variances also converge for neg-p noise — a transition to stationary but correlated statistics.

24. Consequences for GPS Navigation
- Confirms that R-variances measure consistency, not accuracy, for small T_eff.
- Can view control-segment operations as a PLL-like H_s(f) with HP cutoff f_l = 1/T_GPS, where T_GPS is determined by the time constant of the satellite parameter correction loops.
- Can assume the true function errors (W-variances) converge for this H_s(f): the system is tied to known ground sites, and drift of the timescale doesn't affect nav accuracy.
- R-variances measure true accuracy when T_eff >> T_GPS.

25. Final Summary & Conclusions
- Δ-variances can be used for R-variances in some geolocation problems, mainly when the model function is a polynomial.
- R-variances HP filter the noise in the data due to the trajectory estimation process; this is true under very general conditions, and R-variances are guaranteed to converge if one is free to choose the model function.
- R-variances represent true errors for large T when H_s(f) makes the W-variances converge.
Preprints:

