Process-based modelling of vegetations and uncertainty quantification Marcel van Oijen (CEH-Edinburgh) Course Statistics for Environmental Evaluation Glasgow, 2011-09-07
Contents
1. Process-based modelling of vegetations
2. The Bayesian approach
3. Bayesian Calibration (BC) of process-based models
4. Bayesian Model Comparison (BMC)
5. Limitations of BC & BMC
6. On the usage of BC & BMC, now and in the future
7. References, Summary, Discussion
1.1 Ecosystem PBMs simulate biogeochemistry [Diagram: flows of C, N and H2O between atmosphere, tree, soil and subsoil]
1.2 I/O of PBMs. Input: parameters & initial constants (vegetation and soil), atmospheric drivers, management & land use → Model → Output: simulated time series of plant and soil variables
1.3 I/O of empirical models. Two parameters: P1 = intercept, P2 = slope. Input → Model → Output: Y = P1 + P2 * t
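The slide's straight-line formula can be written as a one-line Python function (parameter names are ours, following the slide's equation Y = P1 + P2 * t):

```python
def empirical_model(t, p1, p2):
    """Two-parameter empirical model from the slide: Y = P1 + P2 * t."""
    return p1 + p2 * t

print(empirical_model(3.0, 1.0, 2.0))  # 7.0
```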
1.4 Environmental evaluation: increasing use of PBMs. [Maps: C-sequestration (model output for 1920-2000) and uncertainty of C-sequestration] [Van Oijen & Thomson, 2010]
1.5 Coupled vegetation-climate modelling [White et al., 1998]. Death of Amazonian rain forest?!
1.6 Forest models and uncertainty. [Figure: N-use efficiency, UE (kg C kg⁻¹ N), versus N deposition as predicted by three models: bgc, century and hybrid] [Levy et al., 2004]
1.7 There are many models! Status: 680 models (9.05.2011). Search models (by subject). Result of query: Subject: Forestry, 78 models found
ANIMO: Agricultural NItrogen MOdel
ACRU: Agricultural Catchments Research Unit Model
AMORPHYS: A forest model based on tree morphology and physiology
AREFS: The Automated Regional Ecological Forecast System
BIOMASS: Forest canopy carbon and water balance model
BROOK: BROOK, BROOK2 and BROOK90
BWIN: Program for forest stand analysis and prognosis
CACTOS: California Conifer Timber Output Simulator
CALPRO: The growth model for uneven-aged mixed conifer stands in California
CARDYN: CARbon DYNamics
CARRY: CARRY - contaminant transport model
CRYPTOS: CRYPTOS
CUPID: A comprehensive model of plant-environment interaction
DENIT: DenNit
DRYADES: Dryades
DYNLAYER: Dynamic forest simulator
EFIMOD: Dynamic Model of the "Mixed Stand/Soil" System in European Boreal Forests
EFISCEN: European Forest Information Model (…)
http://ecobas.org/www-server/index.html
1.8 Coupled vegetation-climate modelling [White et al., 1998]. Death of Amazonian rain forest?!
1.9 Amazonia revisited: model uncertainties. 1. Change in precipitation and temperature over Amazonia predicted by 20 GCMs [Galbraith et al., 2010]. 2. Change in rainforest biomass (%) predicted by 3 vegetation models for the most extreme scenario (HadCM3 climate, A1FI). 3. Resulting change in rainforest biomass predicted by 3 vegetation models x 20 GCMs x 4 scenarios.
1.10 Reality check! How reliable are these model studies? Sufficient data for model parameterization? Sufficient data for model input? How plausible are the different models? In every study using systems analysis and simulation, model parameters, inputs and structure are uncertain. How can we deal with these uncertainties optimally?
2.1 Probability Theory. Uncertainties are everywhere: models (environmental inputs, parameters, structure) and data. Uncertainties can be expressed as probability distributions (pdfs). We need methods that: quantify all uncertainties; show how to reduce them; efficiently transfer information: data → models → model application. Calculating with uncertainties (pdfs) = Probability Theory
2.2 The Bayesian approach: reasoning using probability theory
2.3 The Bayesian approach = using Bayes Theorem
2.4 Dealing with uncertainty: medical diagnostics. A flu epidemic occurs: one percent of people is ill. Diagnostic test, 99% reliable. Test result is positive (bad news!). What is P(diseased|test positive)? (a) 0.50 (b) 0.98 (c) 0.99. Given: P(dis) = 0.01, P(pos|hlth) = 0.01, P(pos|dis) = 0.99. Bayes Theorem: P(dis|pos) = P(pos|dis) P(dis) / P(pos) = P(pos|dis) P(dis) / [P(pos|dis) P(dis) + P(pos|hlth) P(hlth)] = (0.99 × 0.01) / (0.99 × 0.01 + 0.01 × 0.99) = 0.50
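The diagnostic-test calculation above can be checked numerically (values taken directly from the slide):

```python
# Bayes Theorem applied to the flu-test example
p_dis = 0.01              # prior: 1% of people are ill
p_pos_given_dis = 0.99    # P(positive | diseased)
p_pos_given_hlth = 0.01   # P(positive | healthy): false-positive rate

# Law of total probability for the evidence P(pos)
p_pos = p_pos_given_dis * p_dis + p_pos_given_hlth * (1 - p_dis)

# Posterior probability of disease given a positive test
p_dis_given_pos = p_pos_given_dis * p_dis / p_pos
print(round(p_dis_given_pos, 2))  # 0.5
```

Despite the "99% reliable" test, the posterior is only 0.50, because the disease is rare: true and false positives are equally common.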
2.5 Proof of Bayes Theorem. Product Rule: P(A&B) = P(B) P(A|B) = P(A) P(B|A). Dividing by P(B) gives Bayes Theorem: P(A|B) = P(A) P(B|A) / P(B)
2.10 What and why? We want to use data and models to explain and predict ecosystem behaviour. Data as well as model inputs, parameters and outputs are uncertain. No prediction is complete without quantifying the uncertainty; no explanation is complete without analysing the uncertainty. Uncertainties can be expressed as probability density functions (pdfs). Probability theory tells us how to work with pdfs: Bayes Theorem (BT) tells us how a pdf changes when new information arrives. BT: prior pdf → posterior pdf. BT: Posterior = Prior × Likelihood / Evidence. BT: P(θ|D) = P(θ) P(D|θ) / P(D). BT: P(θ|D) ∝ P(θ) P(D|θ)
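The proportionality P(θ|D) ∝ P(θ) P(D|θ) can be illustrated with a minimal grid computation (a hypothetical setup, not from the slides: θ is the mean of a Gaussian, one datum D = 1.2 with known measurement error σ = 0.5, and a standard normal prior on θ):

```python
import numpy as np

theta = np.linspace(-4.0, 4.0, 2001)                  # parameter grid
prior = np.exp(-0.5 * theta**2)                       # N(0,1), unnormalised
likelihood = np.exp(-0.5 * ((1.2 - theta) / 0.5)**2)  # P(D|θ), unnormalised
posterior = prior * likelihood                        # BT: prior × likelihood
posterior /= posterior.sum()                          # normalise on the grid
                                                      # (dividing by the evidence)
print(theta[np.argmax(posterior)])  # posterior mode ≈ 0.96
```

The mode agrees with the conjugate-normal result 1.2 / (1 + 0.5²) = 0.96: the posterior sits between prior mean (0) and datum (1.2), weighted by their precisions.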
3. Bayesian Calibration (BC) of process-based models
Bayesian updating of probabilities for process-based models. Model parameterization: P(params) → P(params|data). Model selection: P(models) → P(model|data). Bayes Theorem: prior probability → posterior probability
3.1 Process-based forest models. Input: environmental scenarios, initial values, parameters → Model → Output: soil C, NPP, height
3.14 Continued calibration when new data become available. [Diagram: prior pdf + new data → Bayesian calibration → posterior pdf, which becomes the prior pdf for the next calibration]
3.15 Bayesian projects at CEH-Edinburgh
Selection of forest models (NitroEurope team)
Data assimilation, forest EC data (David Cameron, Mat Williams)
Risk of frost damage in grassland (Stig Morten Thorsen, Anne-Grete Roer, MvO)
Uncertainty in agricultural soil models (Lehuger, Reinds, MvO)
Uncertainty in UK C-sequestration (MvO, Jonty Rougier, Ron Smith, Tommy Brown, Amanda Thomson)
Parameterization and uncertainty quantification of 3-PG model of forest growth & C-stock (Genevieve Patenaude, Ronnie Milne, MvO)
Uncertainty in earth system resilience (Clare Britton & David Cameron)
3.16 BASFOR: forest C-sequestration 1920-2000. Uncertainty due to model parameters only, NOT uncertainty in inputs / upscaling. [Maps: soil N-content, C-sequestration, uncertainty of C-sequestration]
3.18 What kind of measurements would have reduced uncertainty the most?
3.20 Prior & posterior uncertainty: use of height data. [Plots of height and biomass: prior predictive uncertainty vs posterior uncertainty after calibration, shown for three cases: height data from Skogaby, hypothetical height data, and precision height data]
3.22 Summary for BC vs tuning
Model tuning:
1. Define parameter ranges (permitted values)
2. Select parameter values that give model output closest (r², RMSE, …) to data
3. Do the model study with the tuned parameters (i.e. no model output uncertainty)
Bayesian calibration:
1. Define parameter pdfs
2. Define data pdfs (probable measurement errors)
3. Use Bayes Theorem to calculate posterior parameter pdf
4. Do all future model runs with samples from the parameter pdf (i.e. quantify uncertainty of model results)
BC can use data to reduce parameter uncertainty for any process-based model
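Steps 3-4 of the Bayesian calibration recipe above can be sketched with a random-walk Metropolis sampler. This is a minimal illustration (not the BASFOR code), calibrating the two-parameter straight-line model Y = P1 + P2 * t against hypothetical data, with a Gaussian likelihood and a uniform prior box:

```python
import numpy as np

rng = np.random.default_rng(1)
t_obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_obs = np.array([2.1, 2.9, 4.2, 4.8, 6.1])  # hypothetical data, sd = 0.3
sd = 0.3
lo, hi = np.array([-10.0, -10.0]), np.array([10.0, 10.0])  # uniform prior box

def log_post(p):
    """Log-posterior: flat prior inside the box + Gaussian log-likelihood."""
    if np.any(p < lo) or np.any(p > hi):
        return -np.inf                            # outside the prior: probability 0
    resid = y_obs - (p[0] + p[1] * t_obs)
    return -0.5 * np.sum((resid / sd) ** 2)

chain, p = [], np.array([0.0, 0.0])
lp = log_post(p)
for _ in range(20000):
    cand = p + rng.normal(0.0, 0.1, size=2)       # random-walk proposal
    lp_cand = log_post(cand)
    if np.log(rng.uniform()) < lp_cand - lp:      # Metropolis accept/reject
        p, lp = cand, lp_cand
    chain.append(p.copy())

post = np.array(chain[5000:])                     # discard burn-in
print(post.mean(axis=0))                          # posterior mean of (P1, P2)
```

All later model runs would then use samples from `post` rather than a single tuned parameter vector, so that output uncertainty is carried through.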
4.1 Multiple models → structural uncertainty. [Figure: UE (kg C kg⁻¹ N) versus N deposition for three models: bgc, century and hybrid] [Levy et al., 2004]
4.2 Bayesian comparison of two models. Bayes Theorem for model probability: P(M|D) = P(M) P(D|M) / P(D). With P(M1) = P(M2) = ½: P(M2|D) / P(M1|D) = P(D|M2) / P(D|M1). The Bayes Factor P(D|M2) / P(D|M1) quantifies how the data D change the odds of M2 over M1. The integrated likelihood P(D|Mi) can be approximated from the MCMC sample of outputs for model Mi, as the harmonic mean of the likelihoods in the MCMC sample (Kass & Raftery, 1995)
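A minimal sketch of the harmonic-mean estimator mentioned above (Kass & Raftery, 1995), computed stably in log space; the two log-likelihood samples are hypothetical, constructed so model 2 fits every sampled parameter vector e² times better than model 1:

```python
import numpy as np

def integrated_likelihood(log_liks):
    """Log of the harmonic mean of likelihoods from an MCMC sample.
    log(n / sum(1/L_i)) = log n - logsumexp(-log L_i), computed stably."""
    log_liks = np.asarray(log_liks)
    neg = -log_liks
    m = neg.max()
    return -(m + np.log(np.exp(neg - m).sum()) - np.log(len(log_liks)))

rng = np.random.default_rng(0)
logL1 = rng.normal(-35.0, 1.0, 5000)  # hypothetical log-likelihoods, model 1
logL2 = logL1 + 2.0                   # model 2: uniformly e^2 times more likely

log_bayes_factor = integrated_likelihood(logL2) - integrated_likelihood(logL1)
print(np.exp(log_bayes_factor))  # ≈ 7.39 (= e²): odds of M2 over M1
```

The harmonic-mean estimator is simple but known to be unstable in practice (dominated by the smallest likelihoods), which is one reason more robust evidence estimators are often preferred.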
4.5 Bayes Factor for two big forest models. MCMC, 5000 steps. Calculation of P(D|BASFOR) and P(D|BASFOR+). Data Rajec: Emil Klimo. P(D|M1) = 7.2e-16, P(D|M2) = 5.8e-15. Bayes Factor = 7.8, so BASFOR+ is supported by the data
4.6 Summary of BMC: what do we need, what do we do?
What do we need to carry out a BMC?
1. Multiple models: M1, …, Mn
2. For each model, a list of its parameters: θ1, …, θn
3. Data: D
What do we do with the models, parameters and data?
1. We express our uncertainty about the correctness of models, parameter values and data by means of probability distributions.
2. We apply the rules of probability theory to transfer the information from the data to the probability distributions for models and parameters.
3. The result tells us which model is the most plausible, and what its parameter values are likely to be.
5.1 What do BC & BMC tell us about our models? BC tells us about our parameters: what their values probably are. BMC tells us about the structure of our models: which model is more plausible than the others. But… BC does not tell us why the most probable parameter values sometimes look strange, and BMC does not tell us whether the most plausible model could be improved, or how.
5.2 EXAMPLE: BC giving strange posterior pdfs Red lines: Prior pdf. Black histograms: Posterior pdf after using data from Scots pine in Estonia.
5.4 Forest model comparison (NitroEurope). System: spruce forest, Höglwald, Germany. Models: BASFOR, COUP, DAYCENT, Mobile-DNDC. Data: soil water, emissions of N₂O, NO, CO₂ (field data). [Van Oijen et al. 2011]
5.5 Analysis of model-data mismatch before/after BC: logL [Van Oijen et al. 2011]
5.6 Analysis of model-data mismatch before/after BC: MSE. [Bar chart: MSE for N₂O, prior vs posterior, for each model] [Van Oijen et al. 2011]
5.7 Parameters: universal or site-specific? System: forest soils, Europe, 182 sites. Model: VSD. Data: pH, [Ca, Mg, K], [NO₃], [Al]. Single-site calibration turns prior uncertainty into spatial variability. Multi-site calibration removes parameter uncertainty… but NRMSE on validation plots are 20-100% higher than using nearest-neighbour single-site calibration.
6. On the usage of BC & BMC, now and in the future
6.1 Bayes in other disguises
Linear regression using least squares = BC with: model: straight line; prior: uniform; likelihood: Gaussian (iid). Note: realising that LS-regression is a special case of BC opens up possibilities to improve on it, e.g. by having more information in the prior or likelihood (Sivia 2005), for instance spatiotemporal stochastic modelling with spatial correlations included in the prior.
All Maximum Likelihood estimation methods can be seen as limited forms of BC where the prior is ignored (uniform) and only the maximum value of the likelihood is identified (ignoring uncertainty).
Hierarchical modelling = BC, except that uncertainty is ignored.
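The claim that least-squares regression is BC with a uniform prior and iid Gaussian likelihood can be checked numerically: under a flat prior, the posterior mode coincides with the minimum of the sum of squared errors. A minimal sketch with hypothetical data (the MAP estimate is found on a coarse grid purely for illustration):

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.2, 2.8, 5.1, 7.2, 8.9])  # hypothetical observations

# Least squares via the normal equations
X = np.column_stack([np.ones_like(t), t])
beta_ls = np.linalg.solve(X.T @ X, X.T @ y)

# MAP under a flat prior: maximising the Gaussian log-likelihood is the same
# as minimising the sum of squared errors, here over a parameter grid
b0 = np.linspace(0.0, 3.0, 301)   # intercept grid
b1 = np.linspace(0.0, 4.0, 401)   # slope grid
B0, B1 = np.meshgrid(b0, b1, indexing="ij")
sse = ((y - (B0[..., None] + B1[..., None] * t)) ** 2).sum(axis=-1)
i, j = np.unravel_index(sse.argmin(), sse.shape)
print(beta_ls, (b0[i], b1[j]))    # the two estimates agree (to grid resolution)
```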
6.2 Bayes in other disguises (cont.): inverse modelling (e.g. to estimate emission rates from concentrations); geostatistics, e.g. Bayesian kriging; data assimilation (KF, EnKF etc.)
6.3 Trends More use of Bayesian approaches in all areas of environmental science Improvements in computational techniques for BC & BMC of slow process-based models Increasing use of hierarchical models (to represent complex prior pdfs, or to represent spatial relationships) Replacement of informal methods (or methods that only approximate the full probability approach) by BMC
6.4 Improvements in Markov Chain Monte Carlo algorithms
6.5 Hierarchical Bayesian modelling in ecology See also: Ogle, K. and J.J. Barber (2008) "Bayesian data-model integration in plant physiological and ecosystem ecology." Progress in Botany 69:281-311
6.6 Bayesian calibration instead of model spin-up. System: grassland, Oensingen (Switzerland). Model: DAYCENT. Data: soil respiration. [Plot: data with prior and posterior model predictions]
6.7 Bayes & space Van Oijen, Thomson & Ewert (2009)
7.1 Summary of methodology
1. Express all uncertainties probabilistically: assign probability distributions to (1) data, (2) the collection of models, (3) the parameter-set of each individual model.
2. Use the rules of probability theory to transfer the information from the data to the probability distributions for models and parameters. Main tool from probability theory to do this: Bayes Theorem, P(α|D) ∝ P(α) P(D|α), posterior is proportional to prior times likelihood.
α = parameter set → parameterisation (Bayesian Calibration, BC)
α = model set → model evaluation (Bayesian Model Comparison, BMC)
7.2 Bayesian methods: References
Bayes, T. (1763): Bayes Theorem
Metropolis, N. (1953): MCMC
Kass & Raftery (1995): BMC
Green, E.J. / MacFarlane, D.W. / Valentine, H.T. / Strawderman, W.E. (1996, 1998, 1999, 2000): forest models
Jansen, M. (1997): crop models
Jaynes, E.T. (2003): probability theory
Van Oijen et al. (2005, 2008, 2011): complex process-based models, MCMC
Bayesian Calibration (BC) and Bayesian Model Comparison (BMC) of process-based models: Theory, implementation and guidelines. Freely downloadable from http://nora.nerc.ac.uk/6087/
7.4 Discussion: BC & BMC in practice
Practicalities:
1. When new data arrive: MCMC provides a universal method for calculating posterior pdfs. Easy to implement, difficult to fine-tune.
2. Quantifying the prior: not a key issue in environmental science: (1) many data, (2) prior is posterior from previous calibration.
3. Defining the likelihood: a normal pdf for measurement error usually describes our prior state of knowledge adequately (Jaynes).
4. For model development: BC & BMC are not enough! There is still a need for analysis of model-data mismatch.
Overall: uncertainty quantification often shows that our process-based environmental models are not very reliable.
Appendix A: How to do BC
The problem: you have (1) a prior pdf P(θ) for your model's parameters, and (2) new data D. You also know how to calculate the likelihood P(D|θ). How do you now go about using BT to calculate the posterior P(θ|D)?
Methods of using BT to calculate P(θ|D):
1. Analytical. Only works when the prior and likelihood are conjugate (family-related). For example, if prior and likelihood are normal pdfs, then the posterior is normal too.
2. Numerical. Uses sampling. Three main methods:
2.1 MCMC (e.g. Metropolis, Gibbs): sample directly from the posterior. Best for high-dimensional problems.
2.2 Accept-Reject: sample from the prior, then reject some samples using the likelihood. Best for low-dimensional problems.
2.3 Model emulation followed by MCMC or A-R.
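The Accept-Reject method above can be sketched in a few lines. A hypothetical 1-D setup (not from the slides): prior θ ~ N(0,1), one datum D = 1.0 with measurement error σ = 0.5; each prior sample is accepted with probability L(θ)/L_max:

```python
import numpy as np

rng = np.random.default_rng(42)

def likelihood(theta):
    """Unnormalised P(D|θ) for datum D = 1.0, measurement error σ = 0.5."""
    return np.exp(-0.5 * ((1.0 - theta) / 0.5) ** 2)

prior_sample = rng.normal(0.0, 1.0, 200000)   # sample from the prior
L = likelihood(prior_sample)
accept = rng.uniform(size=prior_sample.size) < L / L.max()
posterior_sample = prior_sample[accept]       # retained samples follow the posterior

# Conjugate-normal result for comparison: posterior mean = 1.0 / (1 + 0.5²) = 0.8
print(posterior_sample.mean())  # ≈ 0.8
```

Note the method's limitation: the acceptance rate falls rapidly as the likelihood becomes narrow relative to the prior, or as dimensionality grows, which is why MCMC is preferred for high-dimensional problems.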
Appendix B: Should we try to measure the sensitive parameters?
Yes, because the sensitive parameters are obviously important for prediction?
No, because model parameters: are correlated with each other, which we do not measure; cannot really be measured at all.
So, it may be better to measure output variables, because they: are what we are interested in; are better defined, in models and measurements; help determine parameter correlations if used in Bayesian calibration.
Key question: what data are most informative?
Appendix C: Data have information content, which is additive