Introduction to Data Assimilation
Peter Jan van Leeuwen (IMAU)

Basic estimation theory
Two estimates of the true temperature $T$:
$T_0 = T + e_0$ and $T_m = T + e_m$,
with unbiased, uncorrelated errors:
$E\{e_0\} = 0$, $E\{e_m\} = 0$, $E\{e_0^2\} = \sigma_0^2$, $E\{e_m^2\} = \sigma_m^2$, $E\{e_0 e_m\} = 0$.
Assume a linear best estimate $T_n = a T_0 + b T_m$, with $T_n = T + e_n$.
Find $a$ and $b$ such that $E\{e_n\} = 0$ and $E\{e_n^2\}$ is minimal:
$b = 1 - a$, $a = \dfrac{\sigma_m^2}{\sigma_0^2 + \sigma_m^2}$.

Basic estimation theory
Solution:
$T_n = \dfrac{\sigma_m^2}{\sigma_0^2 + \sigma_m^2}\, T_0 + \dfrac{\sigma_0^2}{\sigma_0^2 + \sigma_m^2}\, T_m$
and
$\dfrac{1}{\sigma_n^2} = \dfrac{1}{\sigma_0^2} + \dfrac{1}{\sigma_m^2}$
Note: $\sigma_n$ is smaller than both $\sigma_0$ and $\sigma_m$!
This is the Best Linear Unbiased Estimate (BLUE). Just least squares!!!
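The BLUE combination above can be checked numerically; a minimal sketch (the variance values are illustrative):

```python
# Combine two unbiased estimates T0 and Tm of the same truth with the
# BLUE weights a = sm^2/(s0^2+sm^2), b = 1 - a (illustrative variances).
s0_sq, sm_sq = 1.0, 4.0     # error variances of T0 and Tm

a = sm_sq / (s0_sq + sm_sq)  # weight on T0
b = s0_sq / (s0_sq + sm_sq)  # weight on Tm

# Combined error variance: 1/sn^2 = 1/s0^2 + 1/sm^2
sn_sq = 1.0 / (1.0 / s0_sq + 1.0 / sm_sq)

print(a, b)    # 0.8 0.2
print(sn_sq)   # 0.8: smaller than both s0_sq and sm_sq
```

The more accurate estimate ($T_0$ here) gets the larger weight, and the combined variance is always below the smaller of the two input variances.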

Can we generalize this?
- More dimensions
- Nonlinear estimates (why linear?)
- Observations that are not directly modeled
- Biases

The basics: probability density functions
(figure: a pdf $P(u)$ of a velocity $u$ in m/s)

The model pdf: $P[u(x_1), u(x_2), T(x_3), \ldots]$

Observations
- In situ observations: e.g. sparse hydrographic observations, irregular in space and time
- Satellite observations: e.g. of the sea surface

Data assimilation: general formulation
NO INVERSION!!!

Bayes' theorem
Conditional pdf: $P(\psi \mid d) = \dfrac{P(\psi, d)}{P(d)}$. Similarly: $P(d \mid \psi) = \dfrac{P(\psi, d)}{P(\psi)}$.
Combine: $P(\psi \mid d) = \dfrac{P(d \mid \psi)\, P(\psi)}{P(d)}$.
Even better: $P(\psi \mid d) = \dfrac{P(d \mid \psi)\, P(\psi)}{\int P(d \mid \psi)\, P(\psi)\, d\psi}$.
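Bayes' theorem is the whole machinery in miniature; a toy discrete sketch (the two states and their probabilities are made up):

```python
# Bayes' theorem on a two-state problem: p(state | d) ∝ p(d | state) p(state).
# Hypothetical prior and likelihood values, purely for illustration.
prior = {"warm": 0.5, "cold": 0.5}          # p(state)
likelihood = {"warm": 0.8, "cold": 0.2}     # p(d | state)

unnorm = {s: likelihood[s] * prior[s] for s in prior}
evidence = sum(unnorm.values())             # p(d), the normalization
posterior = {s: unnorm[s] / evidence for s in unnorm}

print(posterior)   # {'warm': 0.8, 'cold': 0.2}
```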

Filters and smoothers
- Filter: solve a 3D problem several times
- Smoother: solve a 4D problem once
Note: the model is (highly) nonlinear!

Pdf evolution in time
The model equation (a stochastic evolution equation for the state) induces an evolution of the pdf, governed by Kolmogorov's equation (the Fokker-Planck equation).

(Ensemble) Kalman Filter
Only consider the mean and covariance. At observation times:
- Assume $p(d \mid \psi)$ and $p(\psi)$ are Gaussian, and use Bayes.
- The mean of the product of the two Gaussians is a linear combination of the two means: $E\{\psi \mid d\} = a\, E\{\psi\} + b\, d$.
- But we have seen this before in the first example!

(Ensemble) Kalman Filter II
Kalman filter notation: $m_{new} = m_{old} + K (d - H m_{old})$, with Kalman gain $K = P H^T (H P H^T + R)^{-1}$.
Old solution: $T_n = \dfrac{\sigma_m^2}{\sigma_0^2 + \sigma_m^2}\, T_0 + \dfrac{\sigma_0^2}{\sigma_0^2 + \sigma_m^2}\, T_m$
But now for covariance matrices: $m_{new} = R (P+R)^{-1} m_{old} + P (P+R)^{-1} d$
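The analysis step can be written down directly for a tiny linear-Gaussian problem; all matrix values below are illustrative:

```python
import numpy as np

# Kalman analysis step m_new = m_old + K (d - H m_old) for a toy problem
# with 3 state variables and 2 observations (values are made up).
P = np.diag([1.0, 2.0, 0.5])      # forecast error covariance
R = 0.1 * np.eye(2)               # observation error covariance
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # observation operator: observe variables 1 and 2

m_old = np.zeros(3)               # forecast mean
d = np.array([1.0, -1.0])         # observations

# Kalman gain K = P H^T (H P H^T + R)^(-1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
m_new = m_old + K @ (d - H @ m_old)

print(m_new)   # observed variables pulled toward d; unobserved one unchanged
```

Because R is small relative to P here, the update moves the observed variables most of the way toward the data, while the third (unobserved, uncorrelated) variable is untouched.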

The error covariance: tells us how model variables co-vary.
For example, SSH at point x with SSH at point y:
$P_{SSH,SSH}(x,y) = E\{ (SSH(x) - E\{SSH(x)\})\, (SSH(y) - E\{SSH(y)\}) \}$
Or SSH at point x and SST at point y:
$P_{SSH,SST}(x,y) = E\{ (SSH(x) - E\{SSH(x)\})\, (SST(y) - E\{SST(y)\}) \}$
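In ensemble methods these expectations are approximated by sample averages over the ensemble; a small synthetic sketch (the two fields and their 0.5 coupling are made up):

```python
import numpy as np

# Sample estimate of a cross-covariance from an ensemble: stand-ins for
# SSH at point x and SST at point y, with a known synthetic coupling.
rng = np.random.default_rng(1)
N = 1000                                          # ensemble size
ssh_x = rng.normal(size=N)                        # SSH(x), unit variance
sst_y = 0.5 * ssh_x + 0.1 * rng.normal(size=N)    # SST(y), coupled to SSH(x)

# P_SSH,SST(x,y) = E{ (SSH(x)-E{SSH(x)}) (SST(y)-E{SST(y)}) }
cov = np.mean((ssh_x - ssh_x.mean()) * (sst_y - sst_y.mean()))

print(cov)   # close to 0.5 by construction
```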

Spatial correlation of SSH and SST in the Indian Ocean (Haugen and Evensen, 2002)

Covariances between model variables (Haugen and Evensen, 2002)

Summary on Kalman filters:
- Gaussian pdfs for model and observations
- Propagation of the error covariance P
- If N operations for the state-vector evolution, then N^2 operations for the P evolution...
Problems:
- Nonlinear dynamics, so non-Gaussian statistics
- The evolution equation for P is not closed
- Size of P (> 1,000,000,000,000 entries) ...

Propagation of pdf in time: ensemble or particle methods

Example of the Ensemble Kalman Filter (EnKF)
- MICOM model with 1.3 million model variables
- Observations: altimetry, infrared
- Validated with hydrographic observations

SST (-2K to +2K)

SSH (-10 cm to +10 cm)

RMS difference with XBT-data

Spurious covariances?

Localization in EnKF-like methods
Local updating: restrict the update to local covariances only. In the EnKF this is done by localizing the Kalman gain, via a Schur product (elementwise taper) or a direct cut-off.
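The Schur-product idea can be sketched on a 1-D grid; the exponential covariance and the linear taper below are hypothetical stand-ins:

```python
import numpy as np

# Localization by Schur (elementwise) product: taper sample covariances to
# zero beyond a cut-off distance (toy 1-D grid; taper shape is illustrative).
n = 5
x = np.arange(n, dtype=float)             # grid positions
dist = np.abs(x[:, None] - x[None, :])    # pairwise distances

P = np.exp(-dist / 10.0)                  # raw covariance with long-range tails
L = np.clip(1.0 - dist / 2.0, 0.0, None)  # taper: zero beyond 2 grid points

P_loc = P * L                             # Schur product removes long-range terms

print(P_loc[0, 4])   # 0.0: spurious long-range covariance cut off
print(P_loc[0, 0])   # 1.0: local variance untouched
```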

Ensemble Kalman Smoother (EnKS)
Basic idea: use covariances over time. Efficient implementation:
1) run the EnKF, storing the ensemble at observation times
2) add the influence of the data back in time, using covariances at different times

Nonlinear filters
The probability density function of the thickness of the first layer at day 41 during data assimilation is far from Gaussian: no Kalman filter, no variational methods.

The particle filter (Sequential Importance Resampling, SIR)
Represent the pdf by an ensemble of particles: $p(\psi) = \dfrac{1}{N} \sum_{i=1}^{N} \delta(\psi - \psi_i)$

Particle filter
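The SIR analysis step, weight then resample, fits in a few lines; a toy 1-D sketch (observation value, error, and prior spread are made up):

```python
import numpy as np

# Sequential Importance Resampling sketch: weight particles by the
# observation likelihood, then resample in proportion to the weights.
rng = np.random.default_rng(2)
N = 500
particles = rng.normal(0.0, 2.0, size=N)        # prior ensemble (toy values)

d, r = 1.0, 0.5                                 # observation and its error std
w = np.exp(-0.5 * ((d - particles) / r) ** 2)   # Gaussian likelihood p(d | particle)
w /= w.sum()                                    # normalized importance weights

idx = rng.choice(N, size=N, p=w)                # resample: duplicate likely particles
resampled = particles[idx]

print(resampled.mean())   # pulled from the prior mean 0 toward the observation d
```

No Gaussian assumption on the prior is needed anywhere: the ensemble itself carries the pdf, which is exactly why the particle filter survives where the Kalman filter fails.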

SIR results for a quasi-geostrophic ocean model around South Africa, with 512 members

Smoothers: formulation
Error sources: model error, initial error, observation error, boundary errors, etc. etc.

Smoothers: prior pdf

Smoothers: posterior pdf
Assume all errors are Gaussian; the posterior then factors into model, initial, and observation contributions.

Smoothers in practice: variational methods
Assume Gaussian pdfs for the model errors and the observations. The posterior maximum is then found by minimizing a cost function (penalty function) J, with contributions from the model dynamics, the initial condition, and the model-observation misfit. Find min J from the variational derivative $\delta J = 0$.

Gradient descent methods
(figure: the cost function J as a function of a model variable)
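Gradient descent on a cost function of this type can be sketched in one scalar variable; the quadratic J below, with a background term and an observation misfit, uses purely illustrative values:

```python
# Gradient descent on a toy quadratic cost function
#   J(x) = 0.5*(x - xb)^2 / B + 0.5*(d - x)^2 / R
# (scalar background term plus observation misfit; values are illustrative).
xb, B = 0.0, 1.0      # background value and its error variance
d, R = 2.0, 1.0       # observation and its error variance

x = xb                # first guess
step = 0.1
for _ in range(200):
    grad = (x - xb) / B - (d - x) / R   # dJ/dx
    x -= step * grad

print(x)   # converges to the analytic minimum (R*xb + B*d)/(B + R) = 1.0
```

With equal variances the minimum sits halfway between background and observation, exactly the BLUE answer from the first slides.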

The Euler-Lagrange equations
Forward integrations and backward integrations: a nonlinear two-point boundary-value problem, solved by linearization and iteration.

4D-VAR, strong constraint
Assume model errors are negligible. In practice only a few linear and one or two nonlinear iterations are done... No error estimate (the Hessian is too expensive, and unwanted...).

Example 4D-VAR: GECCO
1952 through 2001 on a 1º global grid with 23 layers in the vertical, using the ECCO/MIT adjoint technology. The model is started from Levitus and NCEP forcing and uses state-of-the-art physics modules (GM, KPP). Control parameters: the initial temperature and salinity fields, and the time-varying surface forcing.

The Mean Ocean Circulation, global
Residual values can reveal inconsistencies in data sets (here the geoid).

MOC at 25N (Bryden et al., 2005)

Error estimates
Local curvature, from the second derivative of J: the Hessian.

Other smoothers
Representers, PSAS, the Ensemble Kalman Smoother, ...
Simulated annealing (Metropolis-Hastings), ...

Relations between model variables
- Covariance gives linear correlations between variables
- The adjoint gives linear correlations between variables along a nonlinear model run (linear sensitivity)
- The pdf gives the full nonlinear relation between variables (nonlinear sensitivity)

Parameter estimation
Bayes: $p(\alpha \mid d) = \dfrac{p(d \mid \alpha)\, p(\alpha)}{p(d)}$
Looks simple, but we don't observe model parameters... We observe model fields, so $p(d \mid \alpha)$ involves an observation operator H, which has to be found from model integrations.
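The idea can be illustrated with a toy calculation: sample candidate parameter values, map each through a model to the observed quantity, and weight by the observation likelihood. The trivial model (3 times the parameter) and all values below are made up:

```python
import numpy as np

# Toy Bayesian parameter estimation: weight prior samples of a parameter
# alpha by how well the modeled field matches an observation d.
rng = np.random.default_rng(3)
N = 2000
alpha = rng.uniform(0.0, 2.0, size=N)     # prior samples of the unknown parameter

def model(a):
    return 3.0 * a                        # hypothetical map: parameter -> observed field

d, r = 3.0, 0.3                           # observation of the field and its error std
w = np.exp(-0.5 * ((d - model(alpha)) / r) ** 2)   # likelihood p(d | alpha)
w /= w.sum()

alpha_hat = np.sum(w * alpha)             # posterior-mean estimate

print(alpha_hat)   # near 1.0, the value for which model(alpha) = d
```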

Example: ecosystem modeling
29 parameters, of which 15 were estimated and 14 were kept fixed.

Estimated parameters from the particle filter (SIR). All other methods that were tried, including 4D-VAR and the EnKF, failed (Losa et al., 2001).

Estimate of the size of the model error (Brasseur et al., 2006)

Why data assimilation?
- Forecasts
- Process studies
- Model improvements: model parameters, parameterizations
- 'Intelligent monitoring'

Conclusions
- Evolution of the pdf with time is the essential ingredient
- Filters: dominated by Kalman-like methods, but moving towards nonlinear methods (SIR etc.)
- Smoothers: dominated by 4D-VAR
- New ideas needed!
