
1 Empirical Localization of Observation Impact in Ensemble Filters. Jeff Anderson, IMAGe/DAReS. Thanks to Lili Lei, Tim Hoar, Kevin Raeder, Nancy Collins, Glen Romine, Chris Snyder, Doug Nychka. 5th EnKF Workshop, 23 May 2012.

2 Definition of Localization: For an observation y and a state variable x, the increments for the N ensemble samples of x are Δx_n = α β Δy_n, n = 1, …, N, where β = cov(x, y) / var(y) is a sample regression coefficient and α is a localization. Traditionally 0 ≤ α ≤ 1, but here there is no upper bound.
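As a rough illustration (not from the slides), a minimal NumPy sketch of this localized regression update, assuming the observation-space ensemble increments dy have already been produced by the observation-space part of the filter; the function name and toy values are illustrative:

```python
import numpy as np

def localized_increments(x_ens, y_ens, dy, alpha):
    """Localized regression of observation-space increments dy onto state variable x."""
    beta = np.cov(x_ens, y_ens)[0, 1] / np.var(y_ens, ddof=1)  # sample regression coefficient
    return alpha * beta * dy                                    # increments for the N members

# Toy usage with a 20-member ensemble and illustrative increments
rng = np.random.default_rng(0)
y_ens = rng.normal(size=20)
x_ens = 0.5 * y_ens + 0.1 * rng.normal(size=20)
dy = rng.normal(scale=0.2, size=20)      # observation-space increments (toy values)
dx = localized_increments(x_ens, y_ens, dy, alpha=0.8)
```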

3–4 Empirical Localization: Have output from an OSSE, so the prior ensemble and the truth are known for each state variable, and the truth and prior ensemble can be obtained for any potential observation.

5 Empirical Localization: Estimate a localization for a set of observations and a subset of state variables, e.g., state variables at various horizontal distances from the observations.

6 Empirical Localization: Example: how to localize the impact of temperature observations (4 shown) on a U state variable that is between 600 and 800 km away.

7 Empirical Localization: Given the observational error variance, the expected ensemble mean increment for the state variable can be computed. Plot this against (truth − prior ensemble mean) for the state variable.
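One way to compute that expected mean increment, sketched under the assumption of a standard Kalman-style update of the ensemble mean in which the expected observation value equals the truth (function and variable names are illustrative):

```python
import numpy as np

def expected_mean_increment(x_ens, y_ens, y_truth, obs_err_var):
    """Expected increment of the ensemble mean of x when assimilating an
    observation whose expected value is the truth (Kalman mean update)."""
    cov_xy = np.cov(x_ens, y_ens)[0, 1]        # sample prior covariance of x and y
    var_y = np.var(y_ens, ddof=1)              # sample prior variance of y
    gain = cov_xy / (var_y + obs_err_var)      # regression onto the innovation
    return gain * (y_truth - np.mean(y_ens))   # expected innovation = truth - prior mean

# Each (expected mean increment, truth - prior ensemble mean of x) pair is one
# point on the scatter plot described on this slide.
```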

8–10 Empirical Localization: Do this for all state variables in the subset.

11 Empirical Localization: Find a least squares fit; the slope is the localization α. Least squares minimizes Σ_i (α Δx̄_i − (x_i^truth − x̄_i))², where Δx̄_i is the expected mean increment and x̄_i the prior ensemble mean, which is the same as minimizing the error of the posterior mean, Σ_i (x̄_i + α Δx̄_i − x_i^truth)².
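A hedged sketch of that fit, assuming the expected mean increments and the truth-minus-prior-mean differences have already been collected over the subset (array names are illustrative):

```python
import numpy as np

def empirical_localization(expected_incs, truth_minus_prior_mean):
    """Slope of a least-squares fit through the origin: the alpha that minimizes
    the sum over the subset of (alpha * increment - (truth - prior mean))**2."""
    inc = np.asarray(expected_incs, dtype=float)
    diff = np.asarray(truth_minus_prior_mean, dtype=float)
    return float(np.sum(diff * inc) / np.sum(inc ** 2))

# e.g. alpha = empirical_localization(incs_600_800km, diffs_600_800km)
```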

12 Empirical Localization: Define the set of all pairs (y, x) of potential observations and state variable instances in an OSSE (a state variable instance is defined by type, location, and time). Choose subsets of this set.

13 Empirical Localization: Find the localization α that minimizes the RMS difference between the posterior ensemble mean for x and the true value over this subset. This can be computed from the output of the OSSE. This localization can then be used in a new OSSE for all (y, x) in the subset. Call the set of localization values over all subsets an Empirical Localization Function (ELF).

14 Lorenz-96 40-Variable Examples: Assume all observations are located at model grid points (easier, but not necessary). Define 40 subsets of (y, x) pairs: x is 20 to the left of y, 19 to the left, …, 1 to the left, colocated, 1 to the right, …, 19 to the right.
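A sketch of how such distance subsets and their localizations might be assembled on the cyclic 40-point grid, assuming the per-pair expected mean increments and truth-minus-mean differences are already available from the OSSE (function names and the input tuple layout are illustrative):

```python
import numpy as np

N_GRID = 40  # Lorenz-96 grid points; observations sit on grid points

def signed_offset(state_idx, obs_idx, n=N_GRID):
    """Cyclic offset of a state variable from an observation, in -20..19."""
    return (state_idx - obs_idx + n // 2) % n - n // 2

def elf_by_offset(pairs):
    """pairs: iterable of (obs_idx, state_idx, expected_mean_inc, truth_minus_prior_mean).
    Returns a dict mapping offset -> least-squares localization for that subset."""
    buckets = {}
    for obs_idx, state_idx, inc, diff in pairs:
        buckets.setdefault(signed_offset(state_idx, obs_idx), []).append((inc, diff))
    elf = {}
    for offset, vals in buckets.items():
        incs = np.array([v[0] for v in vals])
        diffs = np.array([v[1] for v in vals])
        elf[offset] = float(np.sum(diffs * incs) / np.sum(incs ** 2))
    return elf
```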

15 Computing ELFs: Start with a climatological ensemble. Do a set of 6000-step OSSEs (only the last 5000 steps are used). The first OSSE has no localization; compute an ELF from each OSSE and use it in the next: No Localization → ELF1 → ELF2 → ELF3 → ELF4 → ELF5.

16 Evaluation Experiments: Start with a climatological ensemble. Do a 110,000-step assimilation and discard the first 10,000 steps. Adaptive inflation with 0.1 inflation standard deviation. Many fixed Gaspari-Cohn localization half-widths are tested for each case, along with the five ELFs (or should it be ELVEs?).
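The Gaspari-Cohn (GC) localization used as the baseline here is the standard fifth-order, compactly supported correlation function of Gaspari and Cohn (1999); a sketch, with the distance units in the usage comment being illustrative rather than taken from the slides:

```python
import numpy as np

def gaspari_cohn(dist, half_width):
    """Gaspari and Cohn (1999) fifth-order compactly supported correlation
    function; reaches zero at twice the half-width."""
    z = np.atleast_1d(np.abs(dist) / half_width).astype(float)
    loc = np.zeros_like(z)
    inner = z <= 1.0
    outer = (z > 1.0) & (z < 2.0)
    zi, zo = z[inner], z[outer]
    loc[inner] = -0.25 * zi**5 + 0.5 * zi**4 + 0.625 * zi**3 - (5.0 / 3.0) * zi**2 + 1.0
    loc[outer] = (zo**5 / 12.0 - 0.5 * zo**4 + 0.625 * zo**3
                  + (5.0 / 3.0) * zo**2 - 5.0 * zo + 4.0 - 2.0 / (3.0 * zo))
    return loc

# Weights at grid-point separations 0..20, expressed as a fraction of the
# cyclic 40-point domain (illustrative units), for half-width 0.2
weights = gaspari_cohn(np.arange(21) / 40.0, half_width=0.2)
```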

17 Case 1: Frequent low-quality obs. Identity observations with error variance 16, assimilated every standard model timestep.

18 Case 1: Frequent low-quality obs. N=20 Gaspari-Cohn (GC) function with the smallest time mean prior RMSE.

19 Case 1: Frequent low-quality obs. N=20: the first ELF is negative at many distances, but the minimum localization is set to 0 when this ELF is used.

20–23 Case 1: Frequent low-quality obs. Subsequent N=20 ELFs are less negative, smoother, and closer to the best GC.

24–26 Case 1: Frequent low-quality obs. The best N=20 GC has half-width 0.2 and a time mean RMSE of about 1.03; the ELFs give RMSE nearly as small as this.

27 Case 1: Frequent low-quality obs. Similar results for a smaller ensemble, N=10; note the larger RMSE and narrower best GC half-width.

28 Case 1: Frequent low-quality obs. Similar results for a larger ensemble, N=40; note the smaller RMSE and wider best GC half-width.

29 Case 1: Frequent low-quality obs. The N=40 ELFs have smaller time mean RMSE than the best GC.

30 Case 1: Frequent low-quality obs. The ELFs are nearly symmetric, so negative distances can be ignored.

31 Case 1: Frequent low-quality obs. The ELF for the smaller ensemble is more compact.

32 Case 1: Frequent low-quality obs. The ELF for the larger ensemble is less compact, consistent with the GC results.

33–35 Case 1: Frequent low-quality obs. ELFs for even bigger ensembles are broader, but noisier at large distances.

36 Case 2: Infrequent high-quality obs. Identity observations with error variance 1, assimilated every 12th standard model timestep.

37 Case 2: Infrequent high-quality obs. For N=10, all ELF cases have smaller RMSE than the best GC.

38 Case 2: Infrequent high-quality obs. For N=20, the first ELF is worse than the best GC; all others are better. The best GC gets wider as ensemble size grows.

39 Case 2: Infrequent high-quality obs. For N=40, all ELFs have smaller RMSE.

40 Case 2: Infrequent high-quality obs. The N=10 ELF is non-Gaussian, with a local minimum in localization at distance 1.

41 Case 2: Infrequent high-quality obs. The N=40 ELF is broader and also has a local minimum at distance 1. A non-Gaussian ELF is needed to possibly do better than GC.

42–43 Case 3: Integral observations. Each observation is the average of a grid point plus its nearest 8 neighbors on each side, a total of 17 points (something like a radiance observation). Error variance 1, assimilated every standard model timestep. Very low information content: 8 of these observations are assimilated for each grid point, a total of 320 observations per assimilation time.
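A sketch of this integral forward operator on the cyclic 40-variable grid (a plain 17-point running mean; the function name is illustrative):

```python
import numpy as np

def integral_obs_operator(state, center, half_points=8):
    """Average of the grid point at `center` plus its nearest `half_points`
    neighbors on each side (17 points total), on a cyclic domain."""
    n = state.size
    indices = (center + np.arange(-half_points, half_points + 1)) % n
    return state[indices].mean()

state = np.random.default_rng(1).normal(size=40)
y = integral_obs_operator(state, center=5)   # one integral observation
```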

44 Case 3: Integral observations. The ELFs are not very Gaussian: no values close to 1, and two peaks at distances of ±7.

45 Case 3: Integral observations. The ELFs are not very Gaussian; the best GC is much larger near the observation location.

46 Case 3: Integral observations. RMSE is a more complicated function of the GC half-width in this case.

47 Case 3: Integral observations. The ELFs all have significantly smaller time mean RMSE than the best GC.

48 Case 4: Frequent low-quality obs., imperfect model. Identity observations with error variance 16, assimilated every standard model timestep. The truth has forcing F=8 (chaotic); the ensemble has forcing F=5 (not chaotic).
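For reference, a minimal sketch of the Lorenz-96 dynamics used in these experiments, with the forcing as a parameter so the F=8 truth and the biased F=5 forecast model can be integrated with the same code (the timestep value is the commonly used one, not stated on the slides):

```python
import numpy as np

def lorenz96_tendency(x, forcing):
    """dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F on a cyclic grid."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step(x, forcing, dt=0.05):
    """One model timestep with classical fourth-order Runge-Kutta."""
    k1 = lorenz96_tendency(x, forcing)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_tendency(x + dt * k3, forcing)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

x_truth = 8.0 + 0.01 * np.random.default_rng(0).normal(size=40)
x_model = x_truth.copy()
for _ in range(100):
    x_truth = step(x_truth, forcing=8.0)  # chaotic truth trajectory
    x_model = step(x_model, forcing=5.0)  # biased, non-chaotic forecast model
```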

49 Case 4: Frequent low-quality obs., imperfect model. These are the localizations for the Case 1 perfect model.

50 Case 4: Frequent low-quality obs., imperfect model. The best GC is more compact for the imperfect model case. The fifth ELF is also more compact, but not as close to the imperfect-model best GC.

51 How long an OSSE does this take? For large localizations, good results are obtained with O(100) OSSE steps. Errors grow much more quickly for small localizations.

52 Conclusions: Estimates of good localization can be obtained for any subset of observations and state variables from an OSSE. If the good localizations are non-Gaussian, the ELFs do better than Gaspari-Cohn. When they are Gaussian, the ELFs can still be cheaper than tuning half-widths. Can this be applied to real geophysical models? How much could real applications be improved? Unclear. Can localization functions be separable in large models: Loc(time diff) × Loc(horizontal dist.) × Loc(vertical dist.) × Loc(obs_type, state_type)?

53 Related Activities (Lili Lei poster): Testing ELFs in the global climate model CAM and in a WRF regional nested configuration. Some results look very similar to earlier sampling error correction methods. Next step: using ELFs in iterated OSSEs.

54 Empirical Localization Without Knowing Truth: Find the localization α that minimizes the RMSE between the posterior ensemble mean for x and the observed value of x over the subset of (y, x) pairs. This can be computed from the output of a real assimilation, and the localization can then be used in a new assimilation for all (y, x) in the subset. BUT, it can only be computed for pairs of OBSERVED quantities. Could it act as a way to calibrate OSSE results?

55 Case 1 without knowing truth: Identity observations with error variance 16, assimilated every standard model timestep. All state variables are observed, so no problem there.

56 Case 1 without knowing truth: Using real observations is much noisier for small localization values, and similar to using the truth for larger localization values. This could be used to calibrate results from an OSSE for real assimilation use.

