Center for Radiative Shock Hydrodynamics Fall 2011 Review: Assessment of predictive capability (Derek Bingham)


1 Center for Radiative Shock Hydrodynamics Fall 2011 Review: Assessment of predictive capability. Derek Bingham

2 CRASH has required innovations to most UQ activities:
o Experiment design
o Screening (identifying the most important inputs)
o Emulator construction
o Prediction
o Calibration/tuning (solving inverse problems)
o Confidence/prediction interval estimation
o Analysis of multiple simulators
We will focus on the framework where we can quantify uncertainties in predictions and the impact of the sources of variability.

3-6 The predictive modeling approach is often called model calibration*, where:
o x: the model or system inputs
o y(x): the system response
o η(x, θ): the simulator response
o θ: the calibration parameters
o ε: the observational error
Gaussian Process models are used (other models are being looked at). The goal is to estimate the unknown calibration parameters and also to make predictions of the physical system.
*Kennedy and O’Hagan (2001); Higdon et al. (2004)
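The equation and symbols on these slides did not survive transcription; the symbols above follow the standard notation of Kennedy and O’Hagan (2001). A minimal sketch of that standard specification, presumably what the slides showed, is

```latex
y(x) = \eta(x, \theta) + \delta(x) + \varepsilon,
\qquad \varepsilon \sim N\!\left(0, \sigma_y^{2}\right),
```

where \(\delta(x)\) is a model discrepancy term (discrepancies appear explicitly on later slides) and \(\theta\) is the true but unknown value of the calibration parameters.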

7 The Gaussian process model specification links the simulations and observations through the covariance of the jointly modeled vector of observations and simulations.
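As a hedged illustration of how one covariance function can link field observations and simulator runs, here is a minimal sketch with a squared-exponential kernel and made-up parameter values; it is not the exact CRASH specification.

```python
import numpy as np

def sq_exp_cov(A, B, lengthscales, variance):
    """Squared-exponential covariance between the rows of A and the rows of B."""
    d = (A[:, None, :] - B[None, :, :]) / lengthscales  # pairwise scaled differences
    return variance * np.exp(-0.5 * np.sum(d ** 2, axis=-1))

rng = np.random.default_rng(0)
theta = np.array([0.5, 0.3])                 # hypothetical true calibration values
X_obs = np.array([[0.1], [0.4], [0.9]])      # field-experiment inputs x
X_sim = rng.random((10, 1))                  # simulator inputs x
T_sim = rng.random((10, 2))                  # simulator calibration inputs t

# Stack (x, theta) for observations and (x, t) for simulations: at the true
# theta, field data and simulator output are modeled by one common GP.
Z = np.vstack([np.hstack([X_obs, np.tile(theta, (len(X_obs), 1))]),
               np.hstack([X_sim, T_sim])])

K = sq_exp_cov(Z, Z, lengthscales=np.array([0.3, 0.5, 0.5]), variance=1.0)

# Observation rows additionally pick up a discrepancy GP and replication error.
n = len(X_obs)
K[:n, :n] += sq_exp_cov(X_obs, X_obs, np.array([0.5]), 0.1)  # discrepancy term
K[:n, :n] += 0.01 * np.eye(n)                                # observational error
```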

8 We have used 2-D CRASH simulations and observations to build and explore the predictive model for shock location and breakout time.
Experiment data:
o 2008 and 2009 experiments
o Experiment variables: Be thickness, Laser energy, Xe fill pressure, Observation time
o Response: Shock location (2008) and shock breakout time (2009)
2-D CRASH simulations:
o 104 simulations, varied over 5 inputs
o Experiment variables: Be thickness, Laser energy, Observation time
o Calibration parameters: Electron flux limiter, Be gamma, Wall opacity

9 We can sample from the joint posterior distribution of the calibration parameters.
[Figure panels: Breakout time calibration; Shock location calibration; Joint calibration]
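A minimal random-walk Metropolis sketch of how such joint posterior draws can be generated; the log-posterior below is a hypothetical stand-in for the actual GP-based posterior.

```python
import numpy as np

def log_post(theta):
    # Hypothetical stand-in: in the real analysis this is the GP-based
    # log-likelihood of observations and simulations plus the log-priors.
    return -0.5 * np.sum(((theta - 0.5) / 0.1) ** 2)

rng = np.random.default_rng(1)
theta = np.full(3, 0.5)   # e.g. electron flux limiter, Be gamma, wall opacity
draws = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal(theta.size)   # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                                        # accept the move
    draws.append(theta)
draws = np.asarray(draws)  # joint posterior sample; marginals come from its columns
```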

10 A look at the posterior marginal distributions of the calibration parameters.

11 The statistical model can be used to evaluate the sensitivity of the codes or the system to the inputs.
[Figure: 2-D CRASH shock breakout time sensitivity plots]
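One common way to produce such sensitivity plots from a fitted emulator is to estimate main effects: sweep one input over a grid while averaging the emulator over the other inputs. A sketch under that assumption; the toy emulator here is hypothetical, standing in for the fitted GP predictive mean.

```python
import numpy as np

def main_effect(emulator, dim, grid, n_inputs, n_mc=500, seed=2):
    """Main effect of input `dim`: the emulator mean at each grid value,
    averaged over the other inputs drawn uniformly on [0, 1]."""
    rng = np.random.default_rng(seed)
    effects = []
    for g in grid:
        X = rng.uniform(size=(n_mc, n_inputs))
        X[:, dim] = g                       # pin the input of interest
        effects.append(emulator(X).mean())  # average over the other inputs
    return np.asarray(effects)

# Hypothetical emulator; a fitted GP mean would be used in practice.
emulator = lambda X: X[:, 0] + np.sin(3.0 * X[:, 1])
grid = np.linspace(0.0, 1.0, 25)
effect = main_effect(emulator, dim=1, grid=grid, n_inputs=4)
```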

12 The statistical model is used to predict shock breakout time, incorporating sources of uncertainty.

13 The statistical model is used to predict shock location, incorporating sources of uncertainty. [Figure; axis units: μs]

14 We developed a new statistical model for combining outputs from multi-fidelity simulators.
o We have simulations from 1-D and 2-D models
o 2-D model runs come at a higher computational cost
o We would like to use all simulations, and the experiments, to make predictions

15 We developed a new statistical model for combining outputs from multi-fidelity simulators.
1-D CRASH simulations:
o 1024 simulations
o Experiment variables: Be thickness, Laser energy, Xe fill pressure, Observation time
o Calibration parameters: Electron flux limiter, Laser energy scale factor
2-D CRASH simulations:
o 104 simulations
o Experiment variables: Be thickness, Laser energy, Xe fill pressure, Observation time
o Calibration parameters: Electron flux limiter, Wall opacity, Be gamma

16 The available shock information comes from models and experiments, where:
o x: the model or system inputs
o y(x): the system response
o η: the simulator response
o θ₁, θ₂: vectors of calibration parameters
1-D simulator: … calibration parameters are adjusted
2-D simulator: … calibration parameters are adjusted
Experiments: … calibration parameters are fixed and unknown
Modeling approach in the spirit of Kennedy and O’Hagan (2000); Kennedy and O’Hagan (2001); Higdon et al. (2004)

17 Calibrate the lower fidelity code to the higher fidelity code: the idea is that the 1-D code does not match the 2-D code for two reasons (see the sketch below).
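The slide's equations were images and are not in the transcript; a sketch of one way to write this hierarchy, in the spirit of Kennedy and O’Hagan (2000), is

```latex
\eta_{2}(x, \theta_2) = \eta_{1}\big(x, \hat{\theta}_{1}\big) + \delta_{1}(x),
\qquad
y(x) = \eta_{2}\big(x, \hat{\theta}_{2}\big) + \delta_{2}(x) + \varepsilon,
```

where \(\eta_1\) and \(\eta_2\) are the 1-D and 2-D simulators: the 1-D code can miss the 2-D code both through its calibrated inputs \(\hat{\theta}_1\) and through the additive discrepancy \(\delta_1(x)\), while \(\delta_2(x)\) carries the remaining discrepancy between the 2-D code and the experiments. The exact CRASH specification may differ.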

18-20 Link the simulator responses and observations through a joint model and discrepancies.

21 Link the simulator responses and observations through a joint model and discrepancies. Comments:
o For deciding what variables belong in the discrepancy, one can ask “what is fixed at this level?”
o The interpretation of the calibration parameters changes somewhat
o Discrepancies are almost guaranteed for this specification

22 Gaussian Process models are used to link the simulator responses and observations through the joint model and discrepancies.

23 The approach is Bayesian, so we need to specify prior distributions (sketched below):
o Inverted-gamma priors for the variance components
o Beta priors for the correlation parameters
o Log-normal priors for the calibration parameters
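A minimal sketch of these prior families using scipy.stats; the hyperparameter values are hypothetical, since the slide names the families but not the shapes used in CRASH.

```python
import numpy as np
from scipy import stats

def log_prior(sigma2, rho, theta):
    """Log prior density; the hyperparameters below are illustrative only."""
    lp = stats.invgamma.logpdf(sigma2, a=2.0, scale=1.0)          # variance component
    lp += np.sum(stats.beta.logpdf(rho, a=1.0, b=0.5))            # correlations in (0, 1)
    lp += np.sum(stats.lognorm.logpdf(theta, s=0.5, scale=1.0))   # positive calibration params
    return lp

print(log_prior(sigma2=1.0, rho=np.array([0.9, 0.7]), theta=np.array([0.06, 1.4])))
```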

24-26 We can illustrate using a simple example, with a low fidelity model, a high fidelity model, and the true model plus replication error.

27 How would this work in practice? Evaluate each computer model at different input settings:
o We evaluated the low fidelity (LF) model 20 times, with inputs (x, t1, tf) chosen according to a Latin hypercube design
o The high fidelity (HF) model was evaluated 5 times, with inputs (x, t2, tf) chosen according to a Latin hypercube design
o The experimental data were generated by evaluating the true model 3 times and adding replication error from a N(0, 0.2)
A sketch of this setup appears below.
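A sketch of this setup with scipy's Latin hypercube sampler; the true model below is a hypothetical stand-in for the example's true response function, and 0.2 is read as the standard deviation of the replication error.

```python
import numpy as np
from scipy.stats import qmc

# 3-dimensional Latin hypercube designs on [0, 1]^3:
# (x, t1, tf) for the LF model and (x, t2, tf) for the HF model.
lf_design = qmc.LatinHypercube(d=3, seed=10).random(n=20)
hf_design = qmc.LatinHypercube(d=3, seed=11).random(n=5)

# Hypothetical true model; replication error taken as N(0, 0.2).
true_model = lambda x: np.sin(2.0 * np.pi * x)
rng = np.random.default_rng(12)
x_field = np.array([0.2, 0.5, 0.8])                       # 3 field evaluations
y_field = true_model(x_field) + rng.normal(0.0, 0.2, 3)   # observed data
```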

28 Observations and response functions at the true value of the calibration parameters.

29 We can construct 95% posterior prediction intervals at the observations.
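Given posterior-predictive draws at each observation location, the 95% intervals are pointwise percentiles; a minimal sketch with simulated draws standing in for the real posterior:

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in for posterior-predictive draws: shape (n_draws, n_observations).
pred_draws = rng.normal(loc=[0.1, 0.4, 0.9], scale=0.05, size=(4000, 3))

lower, upper = np.percentile(pred_draws, [2.5, 97.5], axis=0)
print(np.column_stack([lower, upper]))   # one 95% interval per observation
```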

30 Comparison of predicted response surfaces.

31 New methodology applied to CRASH for breakout time.

32 Observations:
o Able to build a statistical model that appears to predict the observations well
o Prediction error is on the order of the experimental uncertainty
o Care must be taken choosing priors for the variances of the GPs

33 Developing a new statistical model for combining simulations and experiments: an approach to combine outputs from experiments and several different computer models.
o Experiments: the mean function is just one of many possible response functions
o View computer model evaluations as biased versions of this “super-reality”

34 Super-reality model for prediction and calibration (a sketch of the experiment and computer-model equations follows below).
o Each computer model will be calibrated directly to the observations
o Information for estimating an individual unknown calibration parameter comes from the observations and the models with that parameter as an input
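The “Experiments:” and “Computer model:” equations were images and are missing from the transcript; one plausible reading, consistent with the bullets above, is

```latex
y(x) = \zeta(x) + \varepsilon,
\qquad
\eta_{k}(x, \theta_{k}) = \zeta(x) + \delta_{k}(x), \quad k = 1, \dots, K,
```

where \(\zeta(x)\) is the “super-reality” process whose mean function generates the observations, and each computer model \(\eta_k\) is a biased version of it with its own discrepancy \(\delta_k(x)\). This is a hedged reconstruction, not the slide's exact notation.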

35 We have deployed state-of-the-art UQ techniques to leverage the CRASH codes and experiments:
o Use the model calibration framework to perform a variety of tasks, such as exploring the simulation response surfaces, making predictions for experiments, and sensitivity analysis
o Developed a new statistical model for calibration of multi-fidelity computer models with field data
o Can make predictions with associated uncertainty informed by multi-fidelity models
o Developing a model to combine several codes (not necessarily ranked by fidelity) and observations

36 Allocation of the computational budget:
o The goal is to use the available simulations and experiments to evaluate the allocation of the computational budget to computational models
o Since prediction is our goal, we will use the reduction in the integrated mean square error (IMSE), which measures the prediction variance averaged across the input space
o The optimal set of simulations is the one that maximizes the expected reduction in the IMSE
A sketch of the IMSE computation appears below.
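A minimal sketch of the IMSE computation for a zero-mean GP with fixed, made-up hyperparameters: the predictive variance is averaged over a grid, and the criterion for a candidate trial is the drop in that average. In the full calibration framework the reduction is an expectation over the posterior of the model parameters; with hyperparameters fixed, as here, it is deterministic.

```python
import numpy as np

def sq_exp(A, B, ls=0.3):
    """Squared-exponential kernel with unit prior variance."""
    d = (A[:, None, :] - B[None, :, :]) / ls
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1))

def imse(X_design, X_grid, nugget=1e-6):
    """GP predictive variance averaged over X_grid (the IMSE estimate)."""
    K = sq_exp(X_design, X_design) + nugget * np.eye(len(X_design))
    Kdg = sq_exp(X_design, X_grid)
    var = 1.0 - np.sum(Kdg * np.linalg.solve(K, Kdg), axis=0)
    return var.mean()

rng = np.random.default_rng(4)
X = rng.uniform(size=(8, 1))                # current design
grid = np.linspace(0.0, 1.0, 200)[:, None]  # integration grid over the input space
x_new = np.array([[0.5]])                   # one candidate new trial
reduction = imse(X, grid) - imse(np.vstack([X, x_new]), grid)
print(reduction)                            # reduction in IMSE from adding the trial
```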

37 The criterion can be evaluated in the current statistical framework:
o Can compute an estimate of the mean square error at any potential input, conditional on the model parameters
o Would like a new trial to improve the prediction everywhere in the input region
o This criterion is difficult to optimize

38 A quick illustration: CRASH 1-D using shock location.
o Can use the 1-D predictive calibration model to evaluate the value of adding new trials
o Suppose we wish to conduct 10 new field trials: which 10, and what do we expect to gain?

39 Expected reduction in IMSE for up to 10 new experiments.
[Figure: expected reduction in IMSE vs. number of follow-up experiments]

40 We can compare the value of new experiments to that of simulations:
o One new field trial yields an expected reduction in the IMSE of about 5%
o The optimal IMSE design with 200 new 1-D computer trials yields an expected reduction of about 3%
o The value of an experiment is substantially more than that of a computer trial
o Can do the same exercise when there are multiple codes

41 Fin

