Presentation transcript:

1 Some ideas for Ensemble Kalman Filter. Former students and Eugenia Kalnay, UMCP. Acknowledgements: UMD Chaos-Weather Group: Brian Hunt, Istvan Szunyogh, Ed Ott and Jim Yorke, Kayo Ide, and students. Former students: Shu-Chih Yang, Takemasa Miyoshi, Hong Li, Junjie Liu, Chris Danforth. Also: Malaquías Peña, Matteo Corazza, Pablo Grunman, DJ Patil, Ji-Sun Kang, Debra Baker, Steve Greybush, Tamara Singleton, Matt Hoffman, Steve Penny, Elana Fertig.

2 Some ideas for Ensemble Kalman Filter. Basic idea: EnKF and 4D-Var are in a friendly competition. A clean comparison shows that the 4D-Var and EnKF scores are indistinguishable (Buehner et al.). We can take advantage of ideas and methods developed for 4D-Var and easily adapt them to the LETKF (Hunt et al., 2007), a type of EnKF. The EnKF needs no adjoint model, priors, …

3 Prelude: Inter-comparison of 4D-Var and EnKF systems for operational deterministic NWP. WWRP/THORPEX Workshop on 4D-VAR and Ensemble Kalman Filter Inter-comparisons, Buenos Aires, Argentina, November 13, 2008. Project Team: Mark Buehner, Cecilien Charette, Bin He, Peter Houtekamer, Herschel Mitchell. Mark Buehner, Meteorological Research Division, Environment Canada.

4 Single observation experiments: a radiosonde temperature observation at 500 hPa, at the middle of the assimilation window (+0 h). With the same B, the increments from 4D-Var and the EnKF are very similar. (Figure: difference in temporal covariance evolution; contour plots at 500 hPa; contours are the 500 hPa GZ background state at 0 h, ci = 10 m.)

5 Forecast Results – 120 h, NH: EnKF mean analysis vs. 4D-Var with B_NMC, and 4D-Var with B_EnKF vs. 4D-Var with B_NMC. (Figure: standard deviation and bias relative to radiosondes for U, |U|, GZ, T and T-Td.)

6 Forecast Results – 120 h, SH: EnKF mean analysis vs. 4D-Var with B_NMC, and 4D-Var with B_EnKF vs. 4D-Var with B_NMC. (Figure: standard deviation and bias relative to radiosondes for U, |U|, GZ, T and T-Td.)

7 Conclusions from a clean intercomparison of 4D-Var and EnKF (Buehner et al., Canada, 2008):
– When running with the same model and the same observations, the forecast scores of 4D-Var and EnKF are essentially identical (February 2007).
– When B_NMC in 4D-Var is replaced by B_EnKF, there was a 10 hour improvement in the SH.

8 Some ideas to improve LETKF. We can adapt ideas developed within 4D-Var:
– No-cost smoother (Kalnay et al., Tellus 2007)
– Accelerating the spin-up: running in place (Kalnay and Yang, QJRMS, submitted)
– “Outer loop” and nonlinearities (Yang and Kalnay)
– Forecast sensitivity to observations (Liu and Kalnay, QJRMS, 2008)
– Coarse analysis resolution interpolating weights (Yang, Kalnay, Hunt, Bowler, QJ, submitted)
– Low-dimensional model bias correction (Li, Kalnay, Danforth, Miyoshi, MWR, submitted)
– Simultaneous estimation of optimal inflation and observation errors (QJ, in press)

9 Local Ensemble Transform Kalman Filter (Ott et al., 2004; Hunt et al., 2004, 2007):
– Model independent (black box)
– Observations assimilated simultaneously at each grid point
– 100% parallel: very fast
– No adjoint needed
– 4D-LETKF extension
(Schematic: starting from an initial ensemble, the model produces ensemble forecasts; the observation operator maps them to ensemble “observations”, which the LETKF combines with the observations to produce the ensemble analyses.)

10 Localization based on observations: perform data assimilation in a local volume, choosing the observations to be used. The state estimate is updated at the central grid point (red dot).

11 Localization based on observations: perform data assimilation in a local volume, choosing the observations to be used. The state estimate is updated at the central grid point (red dot); all observations (purple diamonds) within the local region are assimilated. The LETKF algorithm can be described in a single slide!
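To make the observation localization concrete, here is a minimal Python sketch of selecting the observations inside a local region around one analysis grid point. The function name, the Euclidean distance and the cutoff radius are illustrative choices, not part of the LETKF papers.

```python
import numpy as np

def select_local_obs(grid_point, obs_locations, obs_values, radius):
    """Return the observations lying within `radius` of one analysis grid point.

    A minimal sketch of observation-space localization: each grid point is
    updated using only the observations (the purple diamonds in the slide)
    that fall inside its local region.
    """
    # Euclidean distance from the grid point to every observation location
    dist = np.linalg.norm(obs_locations - grid_point, axis=1)
    mask = dist <= radius
    return obs_locations[mask], obs_values[mask]

# Toy usage: one grid point and five observations on a 2-D plane
grid_point = np.array([0.0, 0.0])
obs_locations = np.array([[0.5, 0.2], [2.5, 0.0], [0.1, -0.4], [3.0, 3.0], [0.9, 0.9]])
obs_values = np.array([1.2, 0.7, 1.1, 0.3, 0.9])
local_locs, local_vals = select_local_obs(grid_point, obs_locations, obs_values, radius=1.5)
print(local_locs, local_vals)   # only the three nearby observations survive
```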

12 Local Ensemble Transform Kalman Filter (LETKF). Globally, forecast step: advance each member, $x^{b(i)}_n = M\left(x^{a(i)}_{n-1}\right)$. Analysis step: construct the matrix of background perturbations $X^b = \left[x^{b(1)}-\bar{x}^b, \ldots, x^{b(K)}-\bar{x}^b\right]$ and its observation-space counterpart $Y^b = \left[H(x^{b(1)})-\bar{y}^b, \ldots, H(x^{b(K)})-\bar{y}^b\right]$. Locally: choose for each grid point the observations to be used, and compute the local analysis error covariance and perturbations in ensemble space: $\tilde{P}^a = \left[(K-1)I/\rho + (Y^b)^T R^{-1} Y^b\right]^{-1}$ (with $\rho$ the inflation) and $W^a = \left[(K-1)\tilde{P}^a\right]^{1/2}$. Analysis mean in ensemble space: $\bar{w}^a = \tilde{P}^a (Y^b)^T R^{-1} (y^o - \bar{y}^b)$; add $\bar{w}^a$ to each column of $W^a$ to get the analysis ensemble in ensemble space. The new ensemble analyses in model space are the columns of $X^a = \bar{x}^b + X^b(\bar{w}^a + W^a)$. Gathering the grid point analyses forms the new global analyses. Note that the output of the LETKF is a set of analysis weights $\bar{w}^a$ and perturbation weight matrices $W^a$; these weights multiply the ensemble forecasts.
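The ensemble-space update above can be written compactly in code. The following is a minimal NumPy sketch of one local LETKF analysis, following the Hunt et al. (2007) formulas; the interface (arguments, diagonal R, the inflation value rho) is an assumption for illustration, not an operational implementation.

```python
import numpy as np

def letkf_analysis(xb, yb, yo, r_diag, rho=1.1):
    """One local LETKF update in ensemble space (Hunt et al., 2007).

    xb: (n, K) background ensemble in the local volume;
    yb: (p, K) ensemble of 'observations' H(xb);
    yo: (p,) local observations; r_diag: (p,) observation error variances;
    rho: multiplicative covariance inflation (value is an assumption).
    Returns the (n, K) analysis ensemble."""
    n, K = xb.shape
    xb_mean = xb.mean(axis=1, keepdims=True)
    Xb = xb - xb_mean                              # background perturbations
    yb_mean = yb.mean(axis=1, keepdims=True)
    Yb = yb - yb_mean                              # observation-space perturbations
    Rinv = np.diag(1.0 / r_diag)

    # Local analysis error covariance in ensemble space
    Pa_tilde = np.linalg.inv((K - 1) * np.eye(K) / rho + Yb.T @ Rinv @ Yb)

    # Mean analysis weights
    wa_mean = Pa_tilde @ Yb.T @ Rinv @ (yo.reshape(-1, 1) - yb_mean)

    # Perturbation weights: symmetric square root of (K-1) * Pa_tilde
    evals, evecs = np.linalg.eigh((K - 1) * Pa_tilde)
    Wa = evecs @ np.diag(np.sqrt(np.maximum(evals, 0.0))) @ evecs.T

    # Back to model space: the weights multiply the background perturbations
    return xb_mean + Xb @ (wa_mean + Wa)
```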

13 The 4D-LETKF produces an analysis in terms of weights of the ensemble forecast members at the analysis time $t_n$, giving the trajectory that best fits all the observations in the assimilation window. (Figure: schematic of the assimilation window and the analysis time.)

14 No-cost LETKF smoother ($\tilde{x}^a_{n-1} = \bar{x}^a_{n-1} + X^a_{n-1}\,\bar{w}^a_n$): apply at $t_{n-1}$ the same weights found optimal at $t_n$. It works for 3D- or 4D-LETKF. The 4D-LETKF produces an analysis in terms of weights of the ensemble forecast members at the analysis time $t_n$, giving the trajectory that best fits all the observations in the assimilation window.
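A sketch of the no-cost smoother, assuming the mean and perturbation weights ($\bar{w}^a$, $W^a$) computed by the LETKF at $t_n$ are available and are simply re-applied to the ensemble valid at $t_{n-1}$. Applying them to the analysis ensemble at $t_{n-1}$ and the function signature are assumptions made for illustration.

```python
import numpy as np

def no_cost_smoother(xa_prev, wa_mean, Wa):
    """Apply at t_{n-1} the same weights found optimal at t_n.

    xa_prev: (n, K) ensemble at t_{n-1};
    wa_mean: (K, 1) mean weights and Wa: (K, K) perturbation weights from the
    LETKF analysis at t_n.  A minimal sketch; variable names are illustrative."""
    xa_prev_mean = xa_prev.mean(axis=1, keepdims=True)
    Xa_prev = xa_prev - xa_prev_mean
    # The smoothed ensemble at t_{n-1} re-combines the earlier perturbations
    # with the weights that best fit the observations through t_n.
    return xa_prev_mean + Xa_prev @ (wa_mean + Wa)
```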

15 No-cost LETKF smoother test on a QG model: it really works! (Figure: “smoother” reanalysis vs. LETKF analysis; LETKF analysis at time n and smoother analysis at time n-1.) This very simple smoother allows us to go back and forth in time within an assimilation window!

16 “Running in place” to spin up faster (Kalnay and Yang, 2008).
– 4D-Var spins up faster than EnKF because it is a smoother: it keeps iterating until it fits the observations within the assimilation window as well as possible.
– EnKF spins up fast if starting from a “good” initial state, e.g., 3D-Var, but it also needs an ensemble representing the “errors of the day”.
– In a severe storm, where radar observations start with the storm, there is little real time to spin up.
– Caya et al. (2005): “EnKF is eventually better than 4D-Var” (but it is too late to be useful, it misses the storm).
– Jidong Gao (pers. comm. 2007): spin-up is the main obstacle for the use of EnKF for storm prediction.

17 Can we use the data more than once? Hunt et al., 2007: the background term $\tfrac{1}{2}\,(x-\bar{x}^b)^T (P^b)^{-1} (x-\bar{x}^b)$ represents the evolution of the maximum likelihood trajectory given all the observations in the past. After the analysis a similar relationship is valid, with $\bar{x}^a$ and $P^a$ in place of $\bar{x}^b$ and $P^b$. From here one can derive the linear KF equations, and also the rule: “Use the data once and then discard it”.

18 Can we use the data more than once? The rule “Use the data once and then discard it” (Ide et al., 1997) makes sense when the analysis/forecasts are the most likely given all the past data, not when we start from scratch. We propose “running in place” during spin-up. We need:
– 4D-LETKF (Hunt et al., 2004) to use all the observations within an assimilation window at their right time
– a no-cost smoother (Kalnay et al., 2007b)
– an appropriate iterative scheme

19 “Running in Place”. The EnKF is a sequential data assimilation system where, after the new data is used at the analysis time, it should be discarded… but only if the previous analysis and the new background are the most likely states given the past observations. If the system has converged after the initial spin-up, all the information from past observations is already included in the background. During the spin-up we should use the observations repeatedly if we can extract extra information, but we should avoid overfitting the observations.

20 Running in Place algorithm (1). Cold-start the EnKF from any initial ensemble mean and random perturbations at $t_0$, and integrate the initial ensemble to $t_1$. The “running in place” loop, with n = 1, is:

21 Running in Place algorithm (2).
a) Perform a standard EnKF analysis and obtain the analysis weights at $t_n$, saving the mean square observations-minus-forecast (OMF) computed by the EnKF.
b) Apply the no-cost smoother to obtain the smoothed analysis ensemble at $t_{n-1}$ by using the same weights obtained at $t_n$.
c) Perturb the smoothed analysis ensemble with a small amount of random Gaussian perturbations, similar to additive inflation.
d) Integrate the perturbed smoothed ensemble to $t_n$. If the forecast misfit to the observations is smaller than in the previous iteration according to some criterion, go to a) and perform another iteration. If not, let $n \leftarrow n+1$ and proceed to the next assimilation window.

22 Running in Place algorithm (notes). Note on step c): perturbing the smoothed analysis ensemble with a small amount of random Gaussian perturbations, a method similar to additive inflation, has two purposes: 1) avoid reaching the same analysis as before, and 2) encourage the ensemble to explore new unstable directions. Note on step d), the convergence criterion: if the relative reduction of the mean square OMF with respect to the previous iteration exceeds a small threshold $\varepsilon$, do another iteration; otherwise go to the next assimilation window.
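Putting slides 20-22 together, a hedged pseudocode-style sketch of one running-in-place window might look as follows. The helper callables (model integration, EnKF analysis, no-cost smoother, additive perturbation) and the stopping threshold eps are placeholders, not real library calls or tuned values.

```python
def running_in_place(ensemble, observations, forecast_to_tn, enkf_analysis,
                     no_cost_smoother, perturb, eps=0.05, max_iter=10):
    """Sketch of one 'running in place' window (slides 20-22).

    `ensemble` is the analysis ensemble at t_{n-1}.  The callables stand in
    for the model integration to t_n, the EnKF analysis (returning the
    analysis ensemble, the weights and the mean square OMF), the no-cost
    smoother, and the small additive perturbation of step c); they are
    placeholders, not any real library API.  eps and max_iter are
    illustrative settings."""
    prev_omf = None
    analysis = None
    for _ in range(max_iter):
        background = forecast_to_tn(ensemble)                           # integrate t_{n-1} -> t_n
        analysis, weights, omf = enkf_analysis(background, observations)  # step a)
        # stop iterating once the fit to the observations stops improving enough
        if prev_omf is not None and (prev_omf - omf) / prev_omf < eps:
            break
        prev_omf = omf
        smoothed = no_cost_smoother(ensemble, weights)   # step b): reuse weights at t_{n-1}
        ensemble = perturb(smoothed)                     # step c): small Gaussian perturbations
    return analysis                                      # then proceed to the next window
```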

23 Results with a QG model: spin-up depends on the initial perturbations, but RIP works well even with random perturbations. It becomes as fast as 4D-Var (blue). RIP takes only 2-4 iterations.

24 Results with a QG model:
– LETKF spin-up from random perturbations: 141 cycles; with RIP: 46 cycles.
– LETKF spin-up from 3D-Var perturbations: 54 cycles; with RIP: 37 cycles.
– 4D-Var spin-up using the 3D-Var prior: 54 cycles.

25 Nonlinearities and “outer loop”. The main disadvantage of EnKF is that it cannot handle nonlinear (non-Gaussian) perturbations and therefore needs short assimilation windows. It does not have the outer loop that is so important in 3D-Var and 4D-Var (DaSilva, pers. comm. 2006). Lorenz 3-variable model (Kalnay et al. 2007a, Tellus): RMS analysis error of 4D-Var and LETKF compared for a window of 8 steps (linear window) and a window of 25 steps (nonlinear window). Long windows + Pires et al. => 4D-Var clearly wins!

26 “Outer loop” in 4D-Var

27 Comparison of ensemble-based and variational data assimilation schemes in a quasi-geostrophic model. (Figure: analysis error for 3D-Var, Hybrid (3D-Var + 20 BVs), 12-hour 4D-Var, LETKF (40-member ensemble) and 24-hour 4D-Var.) The EnKF does not handle long windows well because the ensemble perturbations become non-Gaussian. 4D-Var simply iterates and produces a more accurate control. We can imitate this with the “outer loop” idea for the LETKF.

28 Nonlinearities and “outer loop”. Outer loop, similar to 4D-Var: use the final weights to correct only the mean initial analysis, keeping the initial perturbations, and repeat the analysis once or twice. This centers the ensemble on a more accurate nonlinear solution. Miyoshi pointed out that Jazwinski (1970) suggested this in a footnote! Lorenz 3-variable model, RMS analysis error: 4D-Var, LETKF and LETKF + outer loop compared for windows of 8 and 25 steps. Running in place further reduces the RMS from 0.48 to 0.39!
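A minimal sketch of the outer-loop idea for the LETKF described above: after each pass over the window, the final mean weights re-center only the initial ensemble mean while the initial perturbations are kept fixed. The callable run_letkf_window and the number of outer iterations are assumptions for illustration.

```python
def letkf_outer_loop(x0_ens, observations, run_letkf_window, n_outer=2):
    """Sketch of the 'outer loop' idea for the LETKF (slide 28).

    After each pass over the assimilation window, the final mean weights are
    used to re-center only the initial ensemble mean; the initial
    perturbations are kept fixed and the analysis is repeated.
    `run_letkf_window` is a placeholder callable that assimilates the window
    and returns the mean weight vector (length K); n_outer is illustrative."""
    x0_mean = x0_ens.mean(axis=1, keepdims=True)
    X0 = x0_ens - x0_mean                      # initial perturbations, kept fixed
    for _ in range(n_outer):
        wa_mean = run_letkf_window(x0_mean + X0, observations)
        # correct only the mean initial condition with the final weights
        x0_mean = x0_mean + X0 @ wa_mean.reshape(-1, 1)
    return x0_mean + X0                        # ensemble re-centered on the improved mean
```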

29 Estimation of observation impact without adjoint in an ensemble Kalman filter. Junjie Liu and Eugenia Kalnay.

30 Background:
– The adjoint method proposed by Langland and Baker (2004) and Zhu and Gelaro (2007) quantifies the reduction in forecast error for each individual observation source.
– The adjoint method detects the observations which make the forecast worse.
– The adjoint method requires an adjoint model, which is difficult to obtain.
(Figure: observation impact by data type, including AIRS shortwave and longwave channels and AMSU-A.)

31 Objective and outline. Objective: propose an ensemble sensitivity method to calculate observation impact without using an adjoint model. Outline:
– Illustrate and derive the ensemble sensitivity method;
– With the Lorenz 40-variable model, compare the ensemble sensitivity method with the adjoint method in the ability to represent the actual error reduction and the ability to detect poor quality observations;
– Summary.

32 Schematic of the observation impact on the reduction of forecast error (adapted from Langland and Baker, 2004). Two forecasts are verified at time t: one started from the analysis at 00 hr and one started from the background, i.e. the forecast from -6 hr. The only difference between them is the assimilation of the observations at 00 hr, so the observation impact is measured by the reduction of forecast error from one to the other.

33 The ensemble sensitivity method. Euclidean cost function: the reduction of forecast error due to the observations, $\Delta e^2 = e_{t|0}^T\,e_{t|0} - e_{t|-6}^T\,e_{t|-6}$, where $e_{t|0}$ and $e_{t|-6}$ are the errors of the forecasts started from the 00 hr analysis and from the -6 hr background, both verified against the same analysis. This cost function is then written as a function of the observation increments.
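A minimal sketch of the Euclidean measure of observation impact on this slide: the difference between the squared error of the forecast started from the 00 hr analysis and that of the forecast started from the -6 hr background, both verified against the same analysis. The plain Euclidean norm and the function name are simplifications; operational studies typically use an energy norm.

```python
import numpy as np

def forecast_error_reduction(xf_from_analysis, xf_from_background, x_verify):
    """Euclidean observation-impact measure (slides 32-33).

    A negative value means the observations assimilated at 00 hr reduced the
    forecast error relative to the forecast started from the background."""
    e_with_obs = np.asarray(xf_from_analysis) - np.asarray(x_verify)       # e_{t|0}
    e_without_obs = np.asarray(xf_from_background) - np.asarray(x_verify)  # e_{t|-6}
    return float(e_with_obs @ e_with_obs - e_without_obs @ e_without_obs)
```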

34 The ensemble sensitivity method (continued). Writing the Euclidean cost function as a function of the observation increments, the sensitivity of the cost function with respect to the assimilated observations can be computed from the ensemble. With this formula we can predict the impact of observations on the forecasts!

35 Ability to detect a poor quality observation. Like the adjoint method, the ensemble sensitivity method can detect the poor quality observation (at the 11th observation location). The ensemble sensitivity method has a stronger signal when the observation has a negative impact on the forecast. (Figure: observation impact from Langland-Baker (red) and from the ensemble sensitivity method (green), for the larger random error case and the biased observation case.)

36 Summary for forecast sensitivity to observations.
– We derived a formula to calculate the observation impact based on the ensemble, without using the adjoint model, which usually is not available.
– Results with the Lorenz 40-variable model show that the ensemble sensitivity method, without using an adjoint model, gives results similar to the adjoint method.
– Like the adjoint method, the ensemble sensitivity method can detect observations that have larger random errors or a bias; under such conditions the ensemble sensitivity method has a stronger and more accurate signal.
– It provides a powerful tool to check the quality of the observations.

37 Coarse analysis with interpolated weights (Yang et al., 2008). In the EnKF the analysis is a weighted average of the forecast ensemble. We performed experiments with a QG model interpolating weights, compared to interpolating analysis increments, using coarse grids with 11%, 4% and 2% of the analysis points (a 1/(3x3) = 11% analysis grid, etc.).

38 Coarse analysis with interpolated weights. The weights vary on very large scales, so they interpolate well. Interpolated weights are obtained even for data-void areas.
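A minimal 1-D sketch of the weight-interpolation idea: compute the K weights at a coarse set of analysis points, interpolate each weight component to the full grid, and combine the background ensemble with the interpolated weights. The 1-D linear interpolation and the function names are illustrative assumptions; a 2-D model grid would need 2-D interpolation.

```python
import numpy as np

def interpolate_weights(coarse_x, coarse_w, fine_x):
    """Interpolate LETKF weight vectors from a coarse analysis grid to the
    full model grid (slides 37-38).  coarse_w has shape (n_coarse, K): one
    K-vector of mean weights per coarse analysis point.  Because the weights
    vary on large scales, simple linear interpolation is enough for this
    1-D sketch."""
    n_fine, K = len(fine_x), coarse_w.shape[1]
    fine_w = np.empty((n_fine, K))
    for k in range(K):                       # interpolate each weight component
        fine_w[:, k] = np.interp(fine_x, coarse_x, coarse_w[:, k])
    return fine_w

def apply_weights(xb, fine_w):
    """Build the analysis mean at every fine grid point from the background
    ensemble xb (n_fine, K) and the interpolated mean weights (n_fine, K)."""
    xb_mean = xb.mean(axis=1, keepdims=True)
    Xb = xb - xb_mean
    # analysis mean at grid point j: xb_mean_j + sum_k Xb[j, k] * w_j[k]
    return xb_mean[:, 0] + np.sum(Xb * fine_w, axis=1)
```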

39 Analysis increments. With increment interpolation, the analysis is acceptable only with 50% analysis coverage. With weight interpolation, there is almost no degradation! The EnKF maintains balance and conservation properties.

40 Impact of coarse analysis on accuracy. With increment interpolation, the analysis degrades. With weight interpolation, there is no degradation; the analysis is actually better!

41 Model error: comparison of methods to correct model bias and inflation. Hong Li, Chris Danforth, Takemasa Miyoshi, and Eugenia Kalnay.

42 Model error: if we assume a perfect model in the EnKF, we underestimate the analysis errors (Li, 2007). (Figure: analysis errors for an imperfect model (observations from the NCEP-NCAR Reanalysis, NNR) compared with the perfect SPEEDY model.)

43 Why is the EnKF vulnerable to model errors? In the theory of the extended Kalman filter, the forecast error is represented by the growth of errors in the initial conditions plus the model errors. However, in the ensemble Kalman filter, the error estimated by the ensemble spread can only represent the first type of error: the ensemble spread is “blind” to model errors. (Figure: schematic with the ensemble mean, members 1 and 2, and the truth under an imperfect model, illustrating the forecast error.)

44 We compared several methods to handle bias and random model errors: the low-dimensional method to correct the bias (Danforth et al., 2007) combined with additive inflation. (Figure: results for the imperfect model compared with the perfect model.)
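In its simplest form, the combination compared on this slide amounts to subtracting an estimated time-mean bias from every forecast member and then adding small random perturbations (additive inflation). The sketch below assumes the bias estimate is already available (e.g., from the low-dimensional method) and the perturbation amplitude is an illustrative tuning parameter.

```python
import numpy as np

def correct_and_inflate(xf_ens, bias_estimate, add_std, rng=None):
    """Minimal sketch: remove an estimated time-mean model bias from every
    forecast member, then apply additive inflation with small random
    perturbations.  xf_ens: (n_state, K) forecast ensemble; bias_estimate:
    (n_state,) bias field; add_std: perturbation amplitude (illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    corrected = xf_ens - bias_estimate[:, None]           # remove the estimated bias
    noise = add_std * rng.standard_normal(xf_ens.shape)   # additive inflation
    return corrected + noise
```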

45 Simultaneous estimation of EnKF inflation and observation errors in the presence of model errors. Hong Li (1,2), Takemasa Miyoshi (3), Eugenia Kalnay (1). (1) University of Maryland; (2) Typhoon Institute of Shanghai; (3) Numerical Prediction Division, JMA. (QJ, in press)

46 Motivation:
– Any data assimilation scheme requires accurate statistics for the observation and background errors. Unfortunately those statistics are not known and are usually tuned or adjusted by gut feeling.
– Ensemble Kalman filters need inflation of the background error covariance, and tuning it is expensive.
– Wang and Bishop (2003) and Miyoshi (2005) proposed a technique to estimate the covariance inflation parameter online. This method works well, but only if the observation errors are known.
– Here we introduce a method to simultaneously estimate observation errors and inflation.
– We test the method for a perfect model, in the presence of random model errors (it works well), and with model bias (not so well).

47 Diagnosis of observation error statistics (Desroziers et al., 2005; Navascues et al., 2006). Desroziers et al. (2005) introduced two new statistical relationships, $\langle d^{o\text{-}a}\,(d^{o\text{-}b})^T\rangle = R$ (OMA*OMB) and $\langle d^{a\text{-}b}\,(d^{o\text{-}b})^T\rangle = HBH^T$ (AMB*OMB), in addition to the well known relationship $\langle d^{o\text{-}b}\,(d^{o\text{-}b})^T\rangle = R + HBH^T$ (OMB*OMB). These relationships are correct only if the R and B statistics are correct and the errors are uncorrelated!

48 Diagnosis of observation error statistics (Desroziers et al., 2005; Navascues et al., 2006). Desroziers et al. (2005) and Navascues et al. (2006) have used these relations, based on the OMA*OMB, AMB*OMB and OMB^2 statistics, in a diagnostic mode from past 3D-Var/4D-Var statistics. Here we use a simple KF to estimate both the observation errors and the inflation online.
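The scalar form of the Desroziers relations used here can be computed directly from archived innovation statistics. The sketch below assumes time series of OMB, OMA and AMB for a single observation type; it only produces the diagnostic estimates, not the online Kalman filter update of R and the inflation.

```python
import numpy as np

def desroziers_diagnostics(omb, oma, amb):
    """Scalar form of the Desroziers et al. (2005) relations (slides 47-48),
    from time series of observation-minus-background (OMB),
    observation-minus-analysis (OMA) and analysis-minus-background (AMB)
    for one observation type:
        <OMA * OMB> ~ sigma_o^2           (observation error variance)
        <AMB * OMB> ~ HBH^T               (background error variance in obs space)
        <OMB * OMB> ~ sigma_o^2 + HBH^T   (the well known innovation relation)
    These hold only when R and B are correct and errors are uncorrelated."""
    sigma_o2 = np.mean(oma * omb)     # estimated observation error variance
    hbht = np.mean(amb * omb)         # estimated background error in obs space
    total = np.mean(omb * omb)        # consistency check: ~ sigma_o2 + hbht
    return sigma_o2, hbht, total
```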

49 SPEEDY model: online estimated observation errors, with the error of each variable started at 2 instead of the correct value of 1. The originally wrongly specified R converges to the correct value of R quickly (in about 5-10 days).

50 Estimation of the inflation. (Figure: estimated inflation when using a perfect R and estimating the inflation adaptively, and when using an initially wrong R but estimating both adaptively.) After R converges, the time dependent inflation factors are quite similar.

51 Tests with the LETKF and an imperfect Lorenz-40 model: random errors added to the model. The method works quite well even with very large random errors!

52 Tests with the LETKF and an imperfect Lorenz-40 model: biases added to the model. The method works well for low biases, but not for large biases: model bias needs to be accounted for by a separate bias correction method, not by multiplicative inflation.

53 Discussion: 4D-Var and EnKF synergy. 4D-Var and EnKF now give similar results in Canada (Buehner)! Simple strategies can capture advantages currently developed within 4D-Var:
– Smoothing and running in place
– A simple outer loop to deal with nonlinearities
– Adjoint sensitivity without an adjoint model
– Coarse resolution analysis without degradation
– Correcting model bias (Baek et al., 2006; Danforth et al., 2007; Li et al., submitted); correcting model bias combined with additive inflation gives the best results
– Simultaneous estimation of optimal inflation and observation errors
We should test the LETKF in an operational set-up…

54 Extra Slides on Low Dim Method

55 Bias removal schemes (low dimensional method): generate a long time series of model forecast minus reanalysis from the training period (Danforth et al., 2007: Estimating and correcting global weather model error, Mon. Wea. Rev.; J. Atmos. Sci., 2007). We collect a large number of estimated errors and estimate from them the time-mean model bias, the diurnal model error, the state-dependent model error, and the forecast error due to errors in the initial conditions. (Figure: schematic of the model forecast from t = 0 to t = 6 hr verified against the NNR.)

56 Low-dimensional method: include bias, diurnal and state-dependent model errors. Having a large number of estimated errors allows us to estimate the global model error beyond the bias.
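A minimal sketch of the diagnosis behind the low-dimensional method: from a long training series of 6 hr forecast minus reanalysis, the time mean gives the model bias, and the leading EOFs of the remaining anomalies capture diurnal and state-dependent error components. The array shapes, function name and SVD-based EOF computation are assumptions for illustration, not the full Danforth et al. (2007) correction scheme.

```python
import numpy as np

def low_dim_bias_estimate(forecast_errors, n_modes=2):
    """Sketch of the low-dimensional error diagnosis (slides 55-56).

    forecast_errors: (n_times, n_state) time series of 6 hr forecast minus
    reanalysis.  Returns the time-mean bias, the leading EOFs of the error
    anomalies, and the fraction of variance they explain."""
    bias = forecast_errors.mean(axis=0)              # time-mean model bias
    anomalies = forecast_errors - bias               # remove the mean error
    # EOFs of the anomalies via SVD; rows of vt are the leading patterns
    _, svals, vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = vt[:n_modes]                              # leading error patterns
    explained = svals[:n_modes] ** 2 / np.sum(svals ** 2)
    return bias, eofs, explained
```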

57 SPEEDY 6 hr model errors against the NNR (diurnal cycle), 1987 Jan 1 - Feb 15. For temperature at lower levels, in addition to the time-independent bias, SPEEDY has diurnal cycle errors because it lacks diurnal radiation forcing. (Figure: error anomalies and the leading EOFs (pc1, pc2) of 925 mb temperature.)