
1 Introduction to emulators
Tony O’Hagan, University of Sheffield

2 Emulation
A computer model encodes a function that takes inputs and produces outputs.
An emulator is a statistical approximation of that function:
  It estimates what outputs would be obtained from given inputs
  With a statistical measure of the estimation error
Given enough training data, the estimation error variance can be made small.

3 So what?
A good emulator estimates the code accurately, with small uncertainty, and runs “instantly”.
So we can do UA/SA and all sorts of other things fast and efficiently.
Conceptually, we use model runs to learn about the function, then derive any desired properties of the model.

4 Gaussian process
Simple regression models can be thought of as emulators, but their error estimates are invalid.
We use Gaussian process (GP) emulation instead:
  Nonparametric, so it can fit any function
  Its error measures can be validated
  Analytically tractable, so UA/SA etc. can often be done analytically
  Highly efficient even when there are many inputs
(A minimal sketch follows.)
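To make this concrete, here is a minimal sketch of a GP emulator for a toy one-input model. It uses scikit-learn's GaussianProcessRegressor purely as an illustrative stand-in; the talk does not prescribe a library, and the simulator and all settings here are hypothetical:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical "expensive" simulator: one input, one output.
def simulator(x):
    return np.sin(3 * x) + 0.5 * x

# A handful of training runs at chosen input configurations.
X_train = np.linspace(0.0, 2.0, 5).reshape(-1, 1)
y_train = simulator(X_train).ravel()

# GP emulator: smooth (RBF) covariance, with hyperparameters fitted
# by maximising the marginal likelihood.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)
gp.fit(X_train, y_train)

# Emulator prediction: an estimate plus a statistical error measure.
X_new = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)  # std = estimation error
```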

5 2 code runs
Consider one input and one output.
The emulator estimate interpolates the data.
Emulator uncertainty grows between data points.

6 3 code runs
Adding another point changes the estimate and reduces the uncertainty.

7 5 code runs
And so on.
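A short numerical illustration of slides 5 to 7: refitting the same GP with 2, 3, and then 5 runs and watching the predictive uncertainty between points shrink. The fixed length-scale and the toy function are assumptions for the demo, not from the talk:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    return np.sin(3 * x)  # hypothetical stand-in for the real code

X_test = np.linspace(0.0, 2.0, 100).reshape(-1, 1)
for n_runs in (2, 3, 5):
    X = np.linspace(0.0, 2.0, n_runs).reshape(-1, 1)
    # optimizer=None keeps the kernel fixed so the runs are comparable.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                  optimizer=None)
    gp.fit(X, simulator(X).ravel())
    _, std = gp.predict(X_test, return_std=True)
    # Uncertainty is zero at the runs themselves (interpolation)
    # and grows between them; more runs shrink it everywhere.
    print(f"{n_runs} runs: max predictive sd = {std.max():.3f}")
```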

8 Smoothness
It is the basic assumption of a (homogeneously) smooth, continuous function that gives the GP its computational advantages.
The actual degree of smoothness concerns how rapidly the function “wiggles”:
a rough function responds strongly to quite small changes in its inputs.
We need many more data points to emulate a rough function accurately over a given range.

9 Effect of smoothness
Smoothness determines how fast the uncertainty increases between data points.

10 Estimating smoothness
We can estimate the smoothness from the data.
This is a key Gaussian process parameter, so a robust estimate is needed.
Validate by predicting left-out data points (see the sketch below).
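A sketch of the idea under stated assumptions: the RBF length-scale (the smoothness parameter) is estimated by maximising the marginal likelihood, then validated by leave-one-out prediction. The library, toy data, and seed are all illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(12, 1))
y = np.sin(3 * X).ravel()  # hypothetical model output

# Estimate the smoothness (length-scale) from the data.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              n_restarts_optimizer=10).fit(X, y)
print("estimated length-scale:", gp.kernel_.length_scale)

# Validate by predicting left-out data points: standardised errors
# should mostly lie within about +/- 2.
for i in range(len(X)):
    keep = np.arange(len(X)) != i
    gp_i = GaussianProcessRegressor(kernel=gp.kernel_,
                                    optimizer=None).fit(X[keep], y[keep])
    m, s = gp_i.predict(X[i:i + 1], return_std=True)
    print(f"run {i}: standardised error = {(y[i] - m[0]) / s[0]:+.2f}")
```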

11 Higher dimensions
With 2 inputs, we are fitting a surface through the data; with many inputs, the principles are the same.
Smoothness parameters (usually one per input dimension) are crucial.
Regression (as the mean function) is even more useful.
In many dimensions there is much more “space” between data points, but we also get more smoothness.

12 Automatic screening
Models never respond strongly to all of their inputs; pragmatically, we get a high level of smoothness except in a few dimensions.
By estimating the smoothness, the Gaussian process adjusts automatically, effectively projecting the points down through the smooth dimensions.
200 points in 25 dimensions look sparse, but in 5 dimensions they are pretty good (see the sketch below).
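Slides 11 and 12 together suggest the following sketch: a kernel with one length-scale per input (often called an ARD kernel), fitted to a hypothetical model that only really responds to two of its five inputs. Large fitted length-scales flag the near-inert dimensions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 5))
# Hypothetical model: responds strongly to inputs 0 and 1 only.
y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2 + 0.01 * X[:, 2]

# One smoothness parameter for each input dimension.
kernel = RBF(length_scale=np.ones(5), length_scale_bounds=(1e-2, 1e3))
gp = GaussianProcessRegressor(kernel=kernel,
                              n_restarts_optimizer=5).fit(X, y)

# Very large length-scales mean the surface is almost flat in those
# directions: the GP has effectively projected them out.
print("fitted length-scales:", gp.kernel_.length_scale)
```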

13 Many outputs
A model with many outputs effectively encodes many functions of the inputs, e.g. a time series or spatial grid.
We can emulate multiple outputs in different ways:
  Emulate each output separately
  Build a multivariate emulator
  Treat the output index as another input (sketched below)
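The third option is the easiest to sketch: stack the output index (here, time) alongside the inputs and fit a single GP over the augmented input space. The model and settings are hypothetical:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical model: one input x, output is a time series over t.
times = np.linspace(0.0, 1.0, 10)
xs = np.linspace(0.0, 1.0, 8)

# Treat the output index as another input: train on (x, t) pairs.
Xt = np.array([(x, t) for x in xs for t in times])
y = np.sin(4 * Xt[:, 0]) * np.exp(-Xt[:, 1])  # stand-in outputs

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3, 0.3]))
gp.fit(Xt, y)

# Emulated time series at a new input x = 0.5, with error bars.
X_new = np.array([[0.5, t] for t in times])
mean, std = gp.predict(X_new, return_std=True)
```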

14 Design
We need to choose the input configurations at which to run the model to get training data.
These do not need to be random: the objective is to learn about the function,
so we want well-spaced points that cover the region of interest.
E.g. generate 100 random Latin hypercube (LHC) samples and choose the one
having the largest minimum distance between points (see the sketch below).
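The maximin Latin hypercube recipe on this slide translates directly into code. A sketch using scipy's qmc module; the candidate count is the slide's example value and the design size is an arbitrary choice:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import qmc

def maximin_lhc(n_points, n_dims, n_candidates=100, seed=0):
    """Generate candidate Latin hypercube designs and keep the one
    whose closest pair of points is furthest apart (maximin)."""
    sampler = qmc.LatinHypercube(d=n_dims, seed=seed)
    best, best_score = None, -np.inf
    for _ in range(n_candidates):
        design = sampler.random(n_points)  # points in [0, 1]^d
        score = pdist(design).min()        # smallest pairwise distance
        if score > best_score:
            best, best_score = design, score
    return best

design = maximin_lhc(n_points=50, n_dims=5)  # training inputs
```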

15 Comparison with Monte Carlo
Monte Carlo (and other sampling-based methods):
  need many thousands of model runs
  need new samples for each computation
Bayesian methods using GP emulation:
  usually need only a few hundred runs
  after which all computations reuse the same set of runs
This difference is crucial for large, complex models.
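The run-count contrast shows up even on a toy problem. Here Monte Carlo estimation of the output mean spends 10,000 model runs on one computation, while the emulator spends 200 runs once and then answers from cheap predictions (the simulator and all numbers are illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(X):  # hypothetical expensive model
    return np.sin(3 * X[:, 0]) * X[:, 1]

rng = np.random.default_rng(2)

# Monte Carlo: thousands of direct model runs for one computation.
X_mc = rng.uniform(0.0, 1.0, size=(10_000, 2))
mc_mean = simulator(X_mc).mean()

# Emulation: a few hundred runs, then every computation reuses them.
X_train = rng.uniform(0.0, 1.0, size=(200, 2))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3, 0.3]))
gp.fit(X_train, simulator(X_train))
em_mean = gp.predict(rng.uniform(0.0, 1.0, size=(10_000, 2))).mean()

print(f"MC (10,000 model runs):    {mc_mean:.4f}")
print(f"Emulator (200 model runs): {em_mean:.4f}")
```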

16 Bayesian UA/SA
There is plenty of experience now.
Analytic results exist for normal and uniform input distributions (work on others is in progress).
The big efficiency saving over MC allows more systematic SA:
  Main effect and interaction terms
  Nonlinearity assessment
  “What if” analyses with different input distributions
The GEM-SA software is available (a numerical stand-in is sketched below).
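GEM-SA implements the analytic calculations; as a plain numerical stand-in, the sketch below estimates main-effect variance shares by averaging the cheap emulator mean over all inputs but one. Everything here is an illustrative assumption, not GEM-SA's actual algorithm:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(X):  # hypothetical model: input 2 is inert
    return np.sin(6 * X[:, 0]) + 0.4 * X[:, 1]

rng = np.random.default_rng(3)
X_train = rng.uniform(0.0, 1.0, size=(150, 3))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(3)))
gp.fit(X_train, simulator(X_train))

# Main effect of input i: E[f | x_i] averaged over the other inputs,
# then its variance over x_i, as a share of total output variance.
total_var = gp.predict(rng.uniform(0.0, 1.0, size=(5000, 3))).var()
for i in range(3):
    main = []
    for g in np.linspace(0.0, 1.0, 25):
        Z = rng.uniform(0.0, 1.0, size=(500, 3))
        Z[:, i] = g                        # fix input i at g
        main.append(gp.predict(Z).mean())  # estimate E[f | x_i = g]
    print(f"input {i}: main-effect share = {np.var(main) / total_var:.2f}")
```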

17 Oakley and O’Hagan (2004)
Example with 15 inputs and 250 runs.
MC/FAST needed over 15,360 runs to compute the SA variance components with comparable accuracy.

18 Bayesian calibration
The theory is in Kennedy and O’Hagan (2001), but there is less experience with applications.
It introduces a second GP to describe the discrepancy between the model and reality: the model inadequacy function.
Smoothness is again important.
See Robin Hankin’s presentation; a beta of GEM-Cal is available.
(A toy sketch follows.)
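A toy sketch of the discrepancy idea, with heavy caveats: Kennedy and O'Hagan (2001) also emulate the model and learn the discrepancy GP's hyperparameters, whereas this sketch fixes the discrepancy covariance and runs the model directly, just to show where the model inadequacy function enters the likelihood. All functions and numbers are hypothetical:

```python
import numpy as np
from scipy.stats import multivariate_normal

def eta(x, theta):  # hypothetical model with calibration input theta
    return theta * np.sin(3 * x)

x_obs = np.linspace(0.0, 1.0, 8)
z = 1.4 * np.sin(3 * x_obs) + 0.2 * x_obs  # "field data" incl. discrepancy

# Discrepancy GP (model inadequacy function): fixed RBF covariance
# plus observation noise; its smoothness matters here too.
d = np.abs(x_obs[:, None] - x_obs[None, :])
K = 0.1**2 * np.exp(-0.5 * (d / 0.3) ** 2) + 0.01**2 * np.eye(len(x_obs))

# Grid posterior over theta (flat prior): the likelihood treats
# z - eta(x, theta) as a draw from the discrepancy GP.
thetas = np.linspace(0.5, 2.5, 201)
log_post = np.array([multivariate_normal.logpdf(z - eta(x_obs, t), cov=K)
                     for t in thetas])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean theta:", (thetas * post).sum())
```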

19 Challenges
Dimensionality
Multiple outputs
Dynamic models
Model inadequacy
Data assimilation
Smoothness/discontinuities

20 MUCM
‘Managing Uncertainty in Complex Models’ is a new project to take the technology forward.
It will work with a variety of models to identify a robust toolkit, and provide UML specifications and case studies.
Climate models are provisionally planned for a case study:
GCMs will probably stretch the technology to its limit, so getting convincing results may need a dedicated initiative,
as with CTCD or the flooding proposal (Jim Hall, FREE).

21 Conclusions
Bayesian GP emulation is powerful and efficient, and encompasses all kinds of techniques in one coherent framework.
It is well established for UA/SA, with software available.
The theory is in place for calibration and other techniques; more experience and software are needed.
http://tonyohagan.co.uk
http://mucm.group.sheffield.ac.uk

