
1 Uncertainty Analysis and Model “Validation” or Confidence Building

2 Conclusions: Calibrations are non-unique. A good calibration (even if the absolute residual mean, ARM, is 0) does not ensure that the model will make good predictions. Field data are essential in constraining the model so that it can capture the essential features of the system. Modelers need to maintain a healthy skepticism about their results.

3 Conclusions (continued): Head predictions are more robust (consistent among different calibrated models) than transport (particle-tracking) predictions. An uncertainty analysis is needed to accompany calibration results and predictions. Ideally, models should be maintained for the long term and updated to establish confidence in the model. Rather than a single calibration exercise, a continual process of confidence building is needed.

4 Uncertainty in the Calibration involves uncertainty in:
- Parameter values
- The conceptual model, including boundary conditions, zonation, geometry, etc.
- Targets

5 [Figure panels: Zonation; Kriging]

6 Zonation vs. Pilot Points: To use conventional inverse (parameter estimation) models in calibration, you need a reasonably good idea of the zonation (of K, for example). You also need to identify reasonable ranges for the calibration parameters and the weights. (The newer version of PEST with pilot points does not need zonation, because it works with a continuous distribution of parameter values.)

7 Parameter Values: Field data are essential in constraining the model so that it can capture the essential features of the system.

8 Calibration Targets: a target is a calibration value plus an associated error, e.g., 20.24 m ± 0.80 m. [Figure: one target with a relatively large associated error, one with a smaller associated error.] We need to establish model-specific calibration criteria and define targets, including their associated error.
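One way to use targets defined this way is to check whether each simulated value falls within its target's associated error. A minimal sketch of that bookkeeping; the well names and all values except the 20.24 m ± 0.80 m example are hypothetical:

```python
# Hypothetical illustration: compare simulated heads against calibration
# targets expressed as (observed value, associated error).
targets = {
    "well_A": (20.24, 0.80),   # observed head (m), associated error (m) - from the slide
    "well_B": (18.10, 0.30),   # hypothetical second target
}
simulated = {"well_A": 20.95, "well_B": 18.05}   # hypothetical model output (m)

for name, (obs, err) in targets.items():
    residual = simulated[name] - obs
    status = "within" if abs(residual) <= err else "outside"
    print(f"{name}: residual = {residual:+.2f} m ({status} the ±{err:.2f} m target error)")
```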

9 Examples of Sources of Error in a Calibration Target:
- Surveying errors
- Errors in measuring water levels
- Interpolation error
- Transient effects
- Scaling effects
- Unmodeled heterogeneities

10 Importance of Flux Targets: When recharge rate (R) is a calibration parameter, calibrating to fluxes can help in estimating K and/or R. (R was not a calibration parameter in our final project.)

11 [Figure: flow between heads H1 and H2.] q = KI. In this example, flux information helps calibrate K.
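As a worked illustration of why a flux target constrains K: writing the hydraulic gradient I in terms of the heads and inverting Darcy's law gives K directly. The numerical values below are hypothetical, not from the presentation:

```latex
q = K I, \qquad I = \frac{H_1 - H_2}{L}
\;\;\Rightarrow\;\; K = \frac{q\,L}{H_1 - H_2}.
% Hypothetical values: q = 0.05 \,\text{m/d},\; H_1 - H_2 = 2\,\text{m},\; L = 100\,\text{m}
% \Rightarrow I = 0.02 \;\text{and}\; K = 0.05 / 0.02 = 2.5\,\text{m/d}.
```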

12 Here discharge (Q) information helps calibrate R.

13 [Figure: flow between heads H1 and H2.] q = KI. In this example, flux information helps calibrate K.

14 All water discharges to the playa. Calibration to ET merely fine-tunes the discharge rates within the playa area. Calibration to ET does not help calibrate the heads and K values except in the immediate vicinity of the playa. In our example, total recharge is known/assumed to be 7.14 × 10^8 ft³/year, and discharge = recharge.

15 Smith Creek Valley (Thomas et al., 1989) Calibration Objectives (matching targets):
1. Heads within 10 ft of measured heads. Allows for measurement error and interpolation error.
2. Absolute residual mean between measured and simulated heads close to zero (0.22 ft) and a minimal standard deviation (4.5 ft).
3. Head difference between layers 1 and 2 within 2 ft of field values.
4. Distribution of ET and ET rates match field estimates.
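The residual statistics in objective 2 can be computed directly from paired measured and simulated heads. A minimal sketch; the head values below are hypothetical, not the Smith Creek data:

```python
# Hypothetical illustration of the residual statistics used as calibration
# criteria: mean residual, absolute residual mean (ARM), and standard deviation.
measured  = [5012.3, 5008.7, 5015.1, 5010.4]   # measured heads (ft), hypothetical
simulated = [5011.9, 5009.5, 5014.8, 5010.9]   # simulated heads (ft), hypothetical

residuals = [s - m for s, m in zip(simulated, measured)]
n = len(residuals)
mean_res = sum(residuals) / n
arm = sum(abs(r) for r in residuals) / n
std = (sum((r - mean_res) ** 2 for r in residuals) / n) ** 0.5

print(f"mean residual = {mean_res:.2f} ft, ARM = {arm:.2f} ft, std dev = {std:.2f} ft")
```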

16 724 Project Results (includes results from 2006 and 4 other years). A “good” calibration does not guarantee an accurate prediction.

17 Sensitivity analysis is used to analyze uncertainty in the calibration:
- Use an inverse model (automated calibration) to quantify uncertainties and optimize the calibration.
- Perform sensitivity analysis during calibration.
- Sensitivity coefficients (see the sketch below).
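A sensitivity coefficient expresses how much a simulated value (e.g., a head) changes per unit change in a parameter, ∂h/∂p, and is often estimated by perturbing the parameter and rerunning the model. A minimal perturbation sketch; simulate_head is a hypothetical stand-in for a forward model run, not part of any particular code:

```python
# Hypothetical perturbation (finite-difference) estimate of a sensitivity
# coefficient dh/dK at one observation point.
def simulate_head(K):
    """Hypothetical stand-in for a forward model run returning a head (m)."""
    return 20.0 + 5.0 / K   # placeholder relationship, for illustration only

K_base = 2.5                 # base-case hydraulic conductivity (m/d), hypothetical
dK = 0.01 * K_base           # small perturbation (1%)

h_base = simulate_head(K_base)
h_pert = simulate_head(K_base + dK)

sensitivity = (h_pert - h_base) / dK    # dh/dK
scaled = sensitivity * K_base           # scaled sensitivity, K * dh/dK

print(f"dh/dK ≈ {sensitivity:.3f}, scaled sensitivity ≈ {scaled:.3f} m")
```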

18 Steps in Modeling (Zheng and Bennett): sensitivity analysis performed during the calibration, i.e., inside the calibration loop.

19 Uncertainty in the Prediction involves uncertainty in how parameter values (e.g., recharge) or pumping rates will vary in the future. It also reflects uncertainty in the calibration.

20 Ways to quantify uncertainty in the prediction:
- Sensitivity analysis (parameters)
- Scenario analysis (stresses)
- Stochastic simulation

21 Steps in Modeling, Traditional Paradigm (Zheng and Bennett): sensitivity analysis performed after the prediction.

22 New Paradigm for Sensitivity & Scenario Analysis (from J. Doherty, 2007): Multi-model Analysis (MMA); predictions and sensitivity analysis are inside the calibration loop.

23 Ways to quantify uncertainty in the prediction:
- Sensitivity analysis (parameters)
- Scenario analysis (stresses)
- Stochastic simulation

24 Stochastic simulation: MADE site (Feehley and Zheng, 2000, WRR 36(9)). A stochastic modeling option is available in GW Vistas.

25

26 A Monte Carlo analysis considers 100 or more realizations.
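Each realization draws parameter values from assumed distributions, runs the model, and records the prediction; the spread of the resulting predictions expresses the uncertainty. A minimal sketch, assuming K can be summarized by a single log-normally distributed value; predict_drawdown and the distribution moments are hypothetical stand-ins, not from the presentation:

```python
import random

def predict_drawdown(K):
    """Hypothetical stand-in for a forward model predicting drawdown (m)."""
    return 10.0 / K   # placeholder relationship, for illustration only

random.seed(0)
n_realizations = 100          # "100 or more realizations"
predictions = []
for _ in range(n_realizations):
    # Draw K from an assumed log-normal distribution (hypothetical moments).
    K = random.lognormvariate(mu=1.0, sigma=0.5)   # median K ≈ 2.7 m/d
    predictions.append(predict_drawdown(K))

predictions.sort()
mean = sum(predictions) / n_realizations
p05 = predictions[int(0.05 * n_realizations)]
p95 = predictions[int(0.95 * n_realizations)]
print(f"mean drawdown ≈ {mean:.2f} m; 5th-95th percentile range ≈ {p05:.2f} to {p95:.2f} m")
```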

27

28 Zheng & Bennett Fig. 13.2

29 [Figure (Zheng & Bennett Fig. 13.5): hydraulic conductivity; initial concentrations (plume configuration); both.]

30 Reducing Uncertainty (hypothetical example, Z&B Fig. 13.6): hard data only; soft and hard data; with inverse flow modeling; truth.

31 Confidence Building: How do we “validate” a model so that we have confidence that it will make accurate predictions?

32 Modeling Chronology:
- 1960s: Flow models are great!
- 1970s: Contaminant transport models are great!
- 1975: What about uncertainty of flow models?
- 1980s: Contaminant transport models don't work (because of failure to account for heterogeneity).
- 1990s: Are models reliable? Concerns over the reliability of predictions arose from efforts to model geologic repositories for high-level radioactive waste.

33 “The objective of model validation is to determine how well the mathematical representation of the processes describes the actual system behavior in terms of the degree of correlation between model calculations and actual measured data” (NRC, 1990). Hmmm… sounds like calibration. What they really mean is that a valid model will yield an accurate prediction.

34 What constitutes “validation”? (code vs. model) Oreskes et al. (1994), paper in Science:
- Calibration = forced empirical adequacy
- Verification = assertion of truth (possible in a closed system, e.g., testing of codes)
- Validation = establishment of legitimacy (does not contain obvious errors), confirmation, confidence building
NRC study (1990): Model validation is not possible.

35 How to build confidence in a model:
- Calibration (history matching): steady-state calibration(s), transient calibration
- “Verification”: requires an independent set of field data
- Post-audit: requires waiting for the prediction to occur
- Models as interactive management tools (e.g., the AEM model of The Netherlands)

36 HAPPY MODELING!

