Presentation transcript: "Calibration Guidelines"

1 Calibration Guidelines

Model development:
1. Start simple, add complexity carefully
2. Use a broad range of information
3. Be well-posed & be comprehensive
4. Include diverse observation data for ‘best fit’
5. Use prior information carefully
6. Assign weights that reflect ‘observation’ error
7. Encourage convergence by making the model more accurate
8. Consider alternative models

Model testing:
9. Evaluate model fit
10. Evaluate optimal parameter values

Potential new data:
11. Identify new data to improve parameter estimates
12. Identify new data to improve predictions

Prediction uncertainty:
13. Use deterministic methods
14. Use statistical methods

2 Commonly used graph to show model fit: (weighted) observed vs. (weighted) simulated values. This reveals some problems with model fit, but how significant are they, and what are the details? From D’Agnese et al., 1997, 1999. Book fig. 15.3b, p. 362
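The fit summarized by such a graph is usually condensed into a weighted least-squares objective. A minimal sketch (variable names and data are illustrative, not from the slides), with weights taken as one over the observation standard deviation in the spirit of guideline 6:

```python
# Sketch of the weighted least-squares objective commonly used to
# summarize model fit. Names and example values are illustrative.

def weighted_sse(observed, simulated, sd):
    """Sum of squared weighted residuals, with weight w_i = 1/sd_i
    so that noisier observations count less (cf. guideline 6)."""
    return sum(((o - s) / e) ** 2 for o, s, e in zip(observed, simulated, sd))

# Example: one residual of 1.0 with sd 1, plus one perfect match.
print(weighted_sse([1.0, 2.0], [0.0, 2.0], [1.0, 1.0]))  # prints 1.0
```

A single number like this shows whether fit improves between model runs, but, as the slide notes, not where or why the fit is poor.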

3 Recommended graph: weighted residuals vs. weighted simulated values. Weighted residuals should be evenly distributed about zero for all weighted simulated values and should display no trends with the weighted simulated values. Trends or unequal variance are indicators of model bias. From D’Agnese et al., 1997, 1999. Book fig. 15.3a, p. 362
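The "no trend" check above can be quantified with a simple regression of weighted residual against weighted simulated value. A sketch, under the assumption of weights w_i = 1/sd_i (function and variable names are illustrative, not from the book's software):

```python
# Sketch: test whether weighted residuals trend with weighted
# simulated values. A slope well away from zero signals model bias.

def weighted_pairs(observed, simulated, sd):
    """(weighted simulated value, weighted residual) pairs, weight = 1/sd_i."""
    return [(s / e, (o - s) / e) for o, s, e in zip(observed, simulated, sd)]

def trend_slope(pairs):
    """Ordinary least-squares slope of weighted residual vs.
    weighted simulated value."""
    n = len(pairs)
    xbar = sum(x for x, _ in pairs) / n
    ybar = sum(y for _, y in pairs) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in pairs)
    sxx = sum((x - xbar) ** 2 for x, _ in pairs)
    return sxy / sxx

# Simulated values that run 10% low produce a clear positive slope:
biased = weighted_pairs([1.1, 2.2, 3.3, 4.4], [1.0, 2.0, 3.0, 4.0], [1.0] * 4)
print(trend_slope(biased))  # prints 0.1
```

This is only a first-order diagnostic; a visual inspection of the graph, as the slide recommends, also catches unequal variance and nonlinear trends that a single slope misses.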

4 Diagnosing Poor Model Fit

Investigate five aspects of the model and calibration:
– Parameter definition: does it need modification?
– Other aspects of model construction (e.g., boundary conditions, simulated processes): do they need modification?
– Simulated equivalents: do they correctly represent the observed values?
– Observations: are any observations affected by unsimulated processes?
– Weighting: errors in weighting are possible, but use caution when changing it!

5 Additional Model Fit Issue

Test the model’s ability to predict data not included in calibration:
– Commonly called ‘validation’; that term is problematic because we can never prove a model is right.
– If the data are at a future time, this is generally called a ‘post-audit’.
– To be meaningful, the new data must represent stress conditions or model aspects not represented in the data used for calibration.

6 Calibration Guidelines (the 14 guidelines of slide 1, repeated as a section divider; the following slides address Guideline 10, under model testing)

7 Model Testing

Guideline 10: Evaluate optimized parameter values

Tools from the model:
– Parameter estimates
– Confidence intervals on parameters (linear and nonlinear)

Tools from field data:
– Reasonable ranges for parameter values

8 Premise of the Evaluation

‘Best fit’ parameter values are very powerful indicators of model error.

IF: there are adequate data, correctly interpreted,
AND: the model correctly represents the system,
THEN: the estimated parameter values should be reasonable, or their 95% confidence intervals should include reasonable values, and the weighted residuals should be random.

Book fig. 12.1, p. 316
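The "weighted residuals should be random" condition can be probed with a runs count on the residual signs. This is a common diagnostic sketched here under my own assumptions (it is not necessarily the book's exact procedure): too few runs means residuals cluster by sign, too many means they alternate systematically.

```python
# Sketch of a runs count for checking randomness of residual signs.

def count_runs(residuals):
    """Number of consecutive same-sign runs; exact zeros are ignored."""
    signs = [r > 0 for r in residuals if r != 0]
    if not signs:
        return 0
    runs = 1
    for prev, cur in zip(signs, signs[1:]):
        if prev != cur:
            runs += 1
    return runs

def expected_runs(residuals):
    """Expected number of runs for random signs: 2*n1*n2/n + 1."""
    pos = sum(1 for r in residuals if r > 0)
    neg = sum(1 for r in residuals if r < 0)
    n = pos + neg
    return 2.0 * pos * neg / n + 1.0 if n else 0.0
```

An observed runs count far from the expected value is evidence against randomness, which under the premise above points to model error.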

9 Typical figure comparing optimized parameter values and their linear confidence intervals with reasonable ranges. Trick question: why are the linear intervals symmetric, given the log scale of the graph? Book fig. 12.5, p. 324
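A plausible resolution of the trick question (my inference, consistent with standard practice, not stated on the slide): the intervals are computed for log-transformed parameters, so they are symmetric in log space and therefore on a log-scaled axis, even though they are asymmetric in native units. A sketch with illustrative values:

```python
# Sketch: a linear 95% interval on log10(K) is symmetric in log space
# but asymmetric about the estimate after back-transforming to native
# units. The estimate and standard error below are hypothetical.

def ci_from_log10(log10_est, log10_se, z=1.96):
    """Back-transform a symmetric log-space interval to native units."""
    return (10 ** (log10_est - z * log10_se),
            10 ** (log10_est + z * log10_se))

lo, hi = ci_from_log10(0.0, 0.5)  # hypothetical estimate K = 1.0
# The interval extends much farther above the estimate than below it.
```

The back-transformed interval is geometrically symmetric (lo * hi equals the squared estimate), which is exactly what plots as symmetric on a log axis.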

10 Importance of including confidence intervals in the comparison to reasonable values. Barlebo et al., 1998, Grindsted landfill (Denmark) transport model. Book fig. 15.14, p. 372

11 Importance of including confidence intervals in the comparison to reasonable values. Here, the confidence intervals help show that the problem with the unreasonable estimate obtained using heads only is that the observation data provide inadequate information for estimating this parameter value. Barlebo et al., 1998, Grindsted landfill (Denmark) transport model. Book fig. 15.15, p. 373

12 Use Confidence Intervals to Determine Whether Some Parameters Can Be Combined

If the estimates differ but their confidence intervals overlap, this may imply that the true values of the parameters are similar and the two parameters can be combined – that is, the observation data cannot distinguish between the two parameters. If combining them worsens model fit too much, keep them separate.

[Figure: estimates of K1 and K2 with overlapping confidence intervals; K in m/d, axis from 1 to 10]
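The overlap check described above can be sketched directly. This assumes linear 95% intervals of the form estimate ± 1.96 × standard error; the estimates, standard errors, and parameter names below are hypothetical, not from the figure:

```python
# Sketch of the interval-overlap check for deciding whether two
# parameters are candidates for combining. All values are hypothetical.

def linear_ci(estimate, std_error, z=1.96):
    """Linear 95% confidence interval (normal approximation)."""
    return (estimate - z * std_error, estimate + z * std_error)

def overlap(ci_a, ci_b):
    """True when the two intervals share at least one value."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

k1 = linear_ci(2.0, 0.6)  # hypothetical K1 estimate, m/d
k2 = linear_ci(3.0, 0.5)  # hypothetical K2 estimate, m/d
# Overlapping intervals suggest the data cannot distinguish K1 from K2.
print(overlap(k1, k2))  # prints True
```

Per the slide, overlap alone does not settle the question: the combined parameter should be tested, and kept separate if the combined model's fit degrades too much.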

13 Nonlinear Models: Linear Intervals Are Approximate

Conclusions based on linear intervals need to be made knowing that they are approximate. Linear intervals require trivial computation time, so they are usually used. Nonlinear intervals are newer and computationally intensive.

From Christensen and Cooley, 1999. The graph shows that nonlinear and linear confidence intervals can be similar or quite different; it is difficult to know how close they will be without calculating the nonlinear intervals. (Z = log Kz of a confining bed; Y = log T.)

