1 Adjustment of Temperature Trends In Landstations After Homogenization (ATTILAH)
Uriah Heat: Unavoidably Remaining Inaccuracies After Homogenization Heedfully
Estimating the adjustments of temperature trends

2 Break and Noise Variance
Homogenization: To homogenize, we consider the difference time series between two neighboring stations. The dominating natural variance cancels out, because it is very similar at both stations. The relative break variance is increased, so we have a realistic chance to detect the breaks.
General procedure: Random combinations of test breaks are inserted. The combination explaining the maximum variance is taken to mark the true breaks.
Technical application: Dynamic programming with a stop criterion.
Dipdoc Seminar –
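The effect of taking the difference series can be illustrated with a small sketch (illustrative only, not the actual homogenization code; the signal amplitudes and the break size of 0.8 are arbitrary assumptions):

```python
import numpy as np

# Two neighboring stations share the same regional climate signal;
# station A additionally carries an inhomogeneity (break) of +0.8
# after month 350. All magnitudes here are assumed for illustration.
rng = np.random.default_rng(42)
m = 600
t = np.arange(m)

climate = 1.5 * np.sin(2 * np.pi * t / 12)  # shared seasonal/climate signal
station_a = climate + 0.3 * rng.standard_normal(m) + np.where(t > 350, 0.8, 0.0)
station_b = climate + 0.3 * rng.standard_normal(m)

diff = station_a - station_b  # common natural signal cancels out

print(f"Var(station A)  = {station_a.var():.2f}")
print(f"Var(difference) = {diff.var():.2f}  <- relative break variance strongly increased")
```

In the single station the break is buried under the natural variance; in the difference series it becomes the dominant contribution, which is why detection operates on station pairs.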

3 Trend bias
If the positive and negative jumps do not cancel each other out, they introduce a trend bias.
Underestimation of the trend bias: It is impossible to isolate the full break signal from the noise. Thus, only a certain part of it can be corrected; a small fraction remains, which has to be corrected after homogenization.

4 Underestimation of jump height
The two fat horizontal lines indicate the true jump heights. Errors occur when the noise randomly (and erroneously) lifts the data above the middle line; then a part of Segment 2 is erroneously assigned to Segment 1.
Correct detection: x1 and x2 are determined as segment averages. x1 is nearly correct, but x2 is too high.
Incorrect detection: x1’ and x2’ are determined as segment averages. x2’ is nearly correct, but x1’ is too low.
In both cases the jump height is underestimated.

5 Obviously, this systematic underestimation depends on the interaction between noise and break variance. To quantify this effect, the statistical properties of both the break and the noise variance have to be known.
Nomenclature:
k: number of test breaks (here: 3)
n: number of true breaks (here: 7)
m: total length (here: 100)
l: test segment length (here: 14, 4, etc.)

6 Statistical Characteristic of the Noise Variance
Beta distributed

7 Example for noise variance
We insert k = 3 random test breaks and check the variance they are able to explain. Since we have pure noise, the test segments’ means are very close to zero. However, there is a small random variation: this is the explained variance.

8 Statistic for Noise Variance
We insert k = 3 test breaks at random positions into a random noise time series and calculate the explained variance. This procedure is repeated 1000 × 1000 times (1000 noise time series, each with 1000 random sets of test break positions).
Relative explained variance: $v = \frac{\text{explained var}}{\text{total var}}$
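This Monte-Carlo experiment can be sketched as follows (a minimal reimplementation under the assumption that the explained variance is the length-weighted variance of the test-segment means):

```python
import numpy as np

# Insert k random test breaks into pure-noise series of length m and
# measure the fraction of variance explained by the segment means.
rng = np.random.default_rng(0)
m, k, trials = 100, 3, 2000

def explained_fraction(x, breaks):
    """Length-weighted variance of segment means, relative to total variance."""
    edges = np.concatenate(([0], breaks, [len(x)]))
    xbar = x.mean()
    expl = sum(len(seg) * (seg.mean() - xbar) ** 2
               for seg in (x[a:b] for a, b in zip(edges[:-1], edges[1:])))
    return expl / ((x - xbar) ** 2).sum()

v = [explained_fraction(rng.standard_normal(m),
                        np.sort(rng.choice(np.arange(1, m), size=k, replace=False)))
     for _ in range(trials)]

print(f"mean explained variance: {np.mean(v):.4f}  (theory k/(m-1) = {k/(m-1):.4f})")
```

For pure noise this is the classical ANOVA result: the expected between-segment share of the total sum of squares of m i.i.d. values split into k+1 groups is k/(m−1), which the simulation reproduces.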

9 Beta distribution
Probability density: $P(v) = \frac{v^{\alpha-1}(1-v)^{\beta-1}}{B(\alpha,\beta)}$, with the Beta function $B(\alpha,\beta) = \frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha+\beta)}$
For noise the shape parameters are: $\alpha = \frac{k}{2}$, $\beta = \frac{m-1-k}{2}$, where k denotes the number of test breaks and m the total length.

10 Behavior of Noise
The mean of a Beta distribution is given by: $\bar v = E(v) = \frac{\alpha}{\alpha+\beta}$
With $\alpha = \frac{k}{2}$ and $\beta = \frac{m-1-k}{2}$, the mean explained variance is: $\bar v = \frac{k}{m-1}$
Maximum explained variance: $v_{max} = 1-\left(1-\frac{k}{m-1}\right)^4$

11 Statistical Characteristic of the Break Variance
1. Heuristic approach 2. Empirical approach 3. Theoretical approach

12 First Approach
For true breaks, constant periods exist. Tested segment averages are the (weighted) means of such (few) constant periods. This is much the same situation as for random scatter, only with less independent data underlying it. Obviously, the number of breaks n plays the same role as the time series length m did before for noise. Thus, the first approximation is: $\alpha = \frac{k}{2}$, $\beta = \frac{n-k}{2}$

13 Second Approach (1/3)
However, this would lead to $\bar v = \frac{k}{n} = 1$ for $k = n$. This is obviously only true when all real breaks are actually matched by the test breaks, which is not the case for random trials. Consider k = 3, n = 7 and count the number of constant platforms in each test segment (in the sketch: 4, 2, 4, 1). Altogether there are 11 “independents”; in general n + k + 1.
Remember: $\alpha = \frac{k}{2}$, $\beta = \frac{n-k}{2}$

14 Second Approach (2/3)
For noise we had: $\bar v = \frac{\alpha}{\alpha+\beta} = \frac{k/2}{(m-1)/2}$, i.e. $\alpha+\beta = \frac{\text{independents}-1}{2}$
Now we have n + k + 1 “independents”, thus: $\alpha+\beta = \frac{n+k}{2}$, with $\alpha = \frac{k}{2}$, $\beta = \frac{n}{2}$

15 Second Approach (3/3)
This would lead to $\bar v = \frac{k}{n+k} = 1/2$ for $k = n$. This is rather reasonable, because for n = k the situation is approximately: each test segment contains one true break, thus two independents, which are then averaged. This reduces the variance by a factor of 2. However, so far we did not take into account that the homogeneous subperiods (HSPs) have different lengths. The effective number of true breaks must therefore be smaller than the nominal one.

16 Effective observation number
If we generate i = 1…N random time series of length j = 1…m, with each element $x_{ij} \sim \mathcal{N}(0,1)$, only a fraction (m−1)/m of the variance can be found within the time series (because a fraction of 1/m is “lost” due to the variance of the time series means). How large is this effect if a step function with n breaks is considered?

17 Sketch of derivation (1/2)
The mean of each time series is: $\bar x = \frac{1}{m}\sum_{i=1}^{n+1} l_i x_i$
The “lost” variance is: $\mathrm{Var}(\bar x) = \langle\bar x\,\bar x\rangle = \frac{1}{m^2}\sum_{i=1}^{n+1}\sum_{j=1}^{n+1} l_i\,l_j\,\langle x_i x_j\rangle$
Since the segment levels are independent, $\langle x_i x_j\rangle = \delta_{ij}$, this reduces to the sum over squared segment lengths: $\mathrm{Var}(\bar x) = \frac{1}{m^2}\sum_{i=1}^{n+1} l_i^2$
Averaged over all possible segmentations, this is the weighted sum over $l^2$: $\mathrm{Var}(\bar x) = \frac{n+1}{m^2\binom{m-1}{n}}\sum_{l=1}^{m-n}\binom{m-l-1}{n-1}\,l^2$
(The same expression holds with k test breaks in place of the n true breaks.)

18 Sketch of derivation (2/2)
The sum of a product of two binomial coefficients is solvable by Vandermonde’s identity: $\sum_{l=0}^{m}\binom{l}{j}\binom{m-l}{n-j} = \binom{m+1}{n+1}$
Writing $l^2 = 2\binom{l}{2}+\binom{l}{1}$, this yields a solution for the $l^2$ sum: $\sum_{l=1}^{m-n}\binom{m-1-l}{n-1}\,l^2 = 2\binom{m}{n+2}+\binom{m}{n+1}$
Inserted into the original expression, we obtain for the “lost” external variance: $\mathrm{Var}(\bar x) = \frac{2m-n}{m\,(n+2)} \approx \frac{2}{n+2}$
(with k test breaks: $\mathrm{Var}(\bar x) = \frac{2m-k}{m\,(k+2)}$, Eq. C21)
The remaining internal variance is then: $1-\mathrm{Var}(\bar x) = \frac{(m+1)\,n}{m\,(n+2)} \approx \frac{n}{n+2}$
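The result $\mathrm{Var}(\bar x) = \frac{2m-n}{m(n+2)}$ can be checked numerically (a sketch assuming uniformly random break positions and i.i.d. standard-normal segment levels, as in the derivation):

```python
import numpy as np

# Monte-Carlo check of the "lost" variance of a step function with n breaks:
# Var of the series mean should approach (2m - n) / (m (n + 2)).
rng = np.random.default_rng(1)
m, n, trials = 100, 7, 20000

means = np.empty(trials)
for t in range(trials):
    breaks = np.sort(rng.choice(np.arange(1, m), size=n, replace=False))
    lengths = np.diff(np.concatenate(([0], breaks, [m])))
    levels = rng.standard_normal(n + 1)   # one constant level per segment
    means[t] = (lengths * levels).sum() / m

theory = (2 * m - n) / (m * (n + 2))
print(f"empirical Var(x̄) = {means.var():.4f}, theory = {theory:.4f}")
```

The two limiting cases confirm the formula: for n = 0 the whole series is one level and the "lost" variance is 1; for n = m−1 every value is independent and it is 1/m.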

19 Third approach
From the previous result, the relative internal (unexplained) variance of a test segment containing i breaks, normalized by that of the entire series with n breaks, is: $1-v = \frac{i}{i+2}\cdot\frac{n+2}{n}$
With $i = \frac{l}{m}\,n$ (breaks spread evenly) and $m = l\,(k+1)$ (equal test segments):
$1-v = \frac{\frac{l}{m}n}{\frac{l}{m}n+2}\cdot\frac{n+2}{n} = \frac{n+2}{n+2\,\frac{m}{l}} = \frac{n+2}{n+2(k+1)} = \frac{n^*}{k+n^*}$, with $n^* = \frac{n}{2}+1$
Thus: $\bar v = \frac{k}{k+n^*}$
Similar to the second approach ($\bar v = \frac{k}{n+k}$), but n counts only half.

20 Statistical Characteristic of the Break Variance
1. Heuristic approach 2. Empirical approach 3. Theoretical approach

21 Empirical Var(k,n)
Empirical test with 1000 random segmentations (fixed k) of 1000 time series (fixed n). Calculate the mean relative explained variance v from these 1,000,000 permutations. Repeat this procedure for all combinations of k = 1, …, 20 and n = 1, …, 20. This yields 20 functions v(k) for the different n.

22 Stepwise Fitting (1/3)
v/(1−v) is proportional to k: $\frac{v}{1-v} = \mathrm{slp}(n)\,k$
The slope is a function of n (the numbered lines do not cross). The slope is certainly not proportional, but rather reciprocal to n (slp(1) is large, slp(20) is small). Thus, it is better to plot 1/slp(n).

23 Stepwise Fitting (2/3)
$k\,\frac{1-v}{v} = \frac{1}{\mathrm{slp}(n)} = \mathrm{const}$
We expect horizontal lines if the reciprocal slope is really independent of k. This is largely confirmed. Averaging over k then gives one value for each n. These seem to be rather linear in n; thus, plot these averages as a function of n.

24 Stepwise Fitting (3/3)
$k\,\frac{1-v}{v} = \frac{1}{\mathrm{slp}(n)} = a\,n+b$, with a = 0.629 and b = 1.855
Solve for v: $v = \frac{k}{k+\frac{1}{\mathrm{slp}(n)}}$ and insert $a\,n+b$: $v = \frac{k}{k+n^*}$, with $n^* = 0.629\,n+1.855$

25 Application of findings
Summarizing the stepwise fitting: $v = \frac{k}{k+n^*}$, with $n^* = 0.629\,n+1.855$
The direct fit in the v/k space yields: $n^* = 0.621\,n+1.928$
The best heuristic approach was: $n^* = 0.5\,n+1$
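The competing formulas for $n^*$ can be compared against a direct simulation (a sketch; the experimental setup, step-function signals with i.i.d. N(0,1) levels and uniform random break positions, is assumed from the description of the empirical test above):

```python
import numpy as np

# Mean explained variance of k random test breaks applied to a pure
# break signal with n true breaks, compared with the heuristic and
# the fitted n* formulas.
rng = np.random.default_rng(7)
m, n, k, trials = 100, 7, 3, 4000

def explained_fraction(x, breaks):
    edges = np.concatenate(([0], breaks, [len(x)]))
    xbar = x.mean()
    expl = sum(len(s) * (s.mean() - xbar) ** 2
               for s in (x[a:b] for a, b in zip(edges[:-1], edges[1:])))
    return expl / ((x - xbar) ** 2).sum()

v = []
for _ in range(trials):
    tb = np.sort(rng.choice(np.arange(1, m), size=n, replace=False))
    lengths = np.diff(np.concatenate(([0], tb, [m])))
    x = np.repeat(rng.standard_normal(n + 1), lengths)   # break signal only
    kb = np.sort(rng.choice(np.arange(1, m), size=k, replace=False))
    v.append(explained_fraction(x, kb))

v_emp = np.mean(v)
print(f"Monte Carlo                        : {v_emp:.3f}")
print(f"heuristic  k/(k + n/2 + 1)         : {k / (k + n/2 + 1):.3f}")
print(f"fitted     k/(k + 0.629 n + 1.855) : {k / (k + 0.629*n + 1.855):.3f}")
```

The heuristic systematically overestimates the mean explained variance because the unequal segment lengths make the effective number of independents larger than n/2 + 1; the fitted formula tracks the simulation.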

26 Method of moments
So far we developed an empirical equation for $\bar v$, the mean explained variance. The same procedure is applied to derive equations for $\alpha$ and $\beta$, the shape parameters describing the distribution of v. These coefficients are determined by the method of moments.
The mean of a Beta distribution is: $\bar v = \frac{\alpha}{\alpha+\beta}$
The variance of a Beta distribution is: $\sigma_v^2 = \frac{\alpha\beta}{(\alpha+\beta)^2\,(\alpha+\beta+1)}$
These can be solved for $\alpha$ and $\beta$: $\alpha = \left(\frac{\bar v\,(1-\bar v)}{\sigma_v^2}-1\right)\bar v$, $\beta = \left(\frac{\bar v\,(1-\bar v)}{\sigma_v^2}-1\right)(1-\bar v)$
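The method-of-moments inversion is a two-line computation; the following sketch round-trips the noise-variance parameters for m = 100, k = 3:

```python
# Method-of-moments inversion for the Beta distribution: given the mean
# and variance of v, recover the shape parameters alpha and beta.
def beta_from_moments(mean, var):
    common = mean * (1.0 - mean) / var - 1.0
    return common * mean, common * (1.0 - mean)

# Round-trip check with the noise parameters alpha = k/2, beta = (m-1-k)/2:
alpha, beta = 1.5, 48.0
mean = alpha / (alpha + beta)
var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
print(beta_from_moments(mean, var))  # recovers (1.5, 48.0) up to rounding
```

The inversion is exact: substituting the Beta mean and variance into `beta_from_moments` returns the original shape parameters.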

27 Empirical values for α and β
Again, 1000 × 1000 permutations for fixed values of n and k are performed and the explained variance is calculated (1,000,000 permutations). From the mean and the variance of the resulting distribution, α and β are determined by the method of moments. This procedure is repeated for all combinations of k = 1…20 and n = 1…20, and the result is plotted against k. α is strongly dependent on k, obviously converging to α = k/2 for large n. β is strongly dependent on n, obviously converging to β = n*/2 for large k.

28 Alpha/k and Beta/n*
$\bar v = \frac{\alpha}{\alpha+\beta} = \frac{k}{k+n^*} = \frac{fk}{fk+fn^*}$
Thus $\alpha = f\,k$ and $\beta = f\,n^*$, with a common factor $f = \frac{\alpha}{k} = \frac{\beta}{n^*}$

29 1/f
Since f is neither a linear function of k nor of n, it is more promising to depict the reciprocal $\frac{k}{\alpha}$. 1/f is rather linear in k, with a slope reciprocal to n and an intercept of 2: $\frac{k}{\alpha} = \frac{n^*}{\beta} = \frac{1}{f} = \frac{1}{c}\,k+2$

30 Determination of c
For a more detailed determination of c, we solve $\frac{1}{f} = \frac{n^*}{\beta} = \frac{1}{c}\,k+2$ for c: $c = \frac{\beta\,k}{n^*-2\beta}$ and plot the result against k. The fit yields: $c = n+3+0.03\,(n-1)\,k$

31 Resulting fits for Alpha and Beta
$\alpha = \frac{c}{k+2c}\,k$, $\beta = \frac{c}{k+2c}\,n^*$, with $c = n+3+0.03\,(n-1)\,k$

32 Conclusion
The explained noise variance is Beta distributed with: $\alpha = \frac{k}{2}$, $\beta = \frac{m-1-k}{2}$
The explained break variance is Beta distributed with: $\alpha = \frac{c}{k+2c}\,k$, $\beta = \frac{c}{k+2c}\,n^*$, with $c = n+3+0.03\,(n-1)\,k$ and $n^* = 0.63\,(n+3)$
Dipdoc Seminar –
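The conclusion can be collected into a small helper (a sketch; $n^*$ is taken here as $0.63\,(n+3)$, matching the stepwise fit $0.629\,n + 1.855$, which is an assumption about the intended reading of the constants):

```python
# Shape parameters of the Beta distributions from the conclusion slide.
def noise_beta_params(k, m):
    """Explained noise variance: alpha = k/2, beta = (m-1-k)/2."""
    return k / 2, (m - 1 - k) / 2

def break_beta_params(k, n):
    """Explained break variance with the empirical fits for c and n*."""
    n_star = 0.63 * (n + 3)            # assumed reading of the fitted n*
    c = n + 3 + 0.03 * (n - 1) * k
    return c * k / (k + 2 * c), c * n_star / (k + 2 * c)

a, b = break_beta_params(k=3, n=7)
print(a, b, a / (a + b))  # the mean a/(a+b) equals k/(k + n*) by construction
```

Because the common factor c/(k+2c) cancels in the mean, these parameters reproduce exactly the fitted mean explained variance k/(k+n*), while c additionally controls the width of the distribution.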

