



1 Options and generalisations

2 Outline
- Dimensionality
  - Many inputs and/or many outputs
- GP structure
  - Mean and variance functions
  - Prior information
  - Multi-output, dynamic and non-deterministic emulators
- Design
  - Design for building the emulator
  - Design for validation
  - Design for calibration
- Bayes linear emulators
MUCM short course - session 3

3 Dimensionality

4 Many inputs
- This session is about variations and extensions of the methodology illustrated in Session 2
- A brief look at a number of topics – more detail in the toolkit!
- The first topic is dimensionality
  - All serious simulators require more than one input; the norm is anything from a few to thousands
  - All of the basic emulation theory in the toolkit assumes multiple inputs, even for the core problem (e.g. ThreadCoreGP)
  - Large numbers of inputs pose computational problems
  - Dimension reduction techniques have been developed: output typically depends principally on a few inputs

5 Many outputs
- Most simulators also produce multiple outputs
  - For instance, a climate simulator may predict temperature on a grid of pixels, sea level, etc.
- Usually, for any given use of the simulator, we are interested in just one output, so we can emulate that one alone
- But some problems require multi-output emulation
- Again, there are dimension reduction techniques, all described in the toolkit

6 GP structure

7 The GP mean function
- We can use the mean function to say what kind of shape we expect the output to take as a function of the inputs
- Most simulator outputs exhibit some overall trend in response to varying a single input, so we usually specify a linear mean function
  - Slopes (positive or negative) are estimated from the training data
  - The emulator mean then smooths the residuals left after fitting the linear terms
- We can generalise to other kinds of mean function if we have a clear idea of how the simulator will behave
- The better the mean function, the less the GP has to do
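A minimal sketch of the idea, in Python: estimate the slopes of a linear mean function from the training data, leaving only the residuals for the GP to smooth. The toy "simulator" `f` and the design `X` are illustrative, not from the toolkit.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                # stand-in simulator: trend + wiggle
    return 2.0 * x + np.sin(5.0 * x)

X = rng.uniform(0.0, 1.0, size=10)       # 10 training inputs
y = f(X)

H = np.column_stack([np.ones_like(X), X])        # basis h(x) = (1, x)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # estimated intercept and slope
residuals = y - H @ beta                         # what the GP must smooth
```

The residual sum of squares after the linear fit is necessarily no larger than after a constant-only fit, which is the sense in which a good mean function leaves the GP less to do.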

8 Example
- The simulator is the solid line; the dashed line is the linear fit; thin blue lines indicate the fitted residuals
- Without the linear mean function we would have a horizontal (constant) fit, larger residuals, and hence larger emulator uncertainty

9 (figure-only slide)

10 The GP covariance function
- The covariance function determines how 'wiggly' the response is to each input
- There is a lot of flexibility here, but standard covariance functions have a parameter for each input
- These 'correlation length' parameters are also estimated from the training data, but some care is needed
- For predicting the output at an untried x, correlation lengths are important: they determine how much information comes from nearby training points, and hence the emulator's accuracy
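A sketch of a standard squared-exponential covariance with one correlation length per input (parameter names are illustrative). A shorter correlation length makes the response 'wigglier' in that direction: points the same distance apart are less correlated.

```python
import numpy as np

def cov(x1, x2, sigma2=1.0, corr_len=(0.2, 0.5)):
    """k(x1, x2) = sigma2 * exp(-sum_i ((x1_i - x2_i) / delta_i)**2)."""
    d = (np.asarray(x1) - np.asarray(x2)) / np.asarray(corr_len)
    return sigma2 * np.exp(-np.sum(d ** 2))

print(cov((0.0, 0.0), (0.1, 0.0)))   # ~0.78: short length, correlation drops fast
print(cov((0.0, 0.0), (0.0, 0.1)))   # ~0.96: same distance, longer length
```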

11 Prior distributions
- Prior information enters through the form of the mean function, and to a lesser extent the covariance function
- But we can also supply prior information through the prior distributions
  - For slope/regression parameters and correlation lengths, and also the overall variance parameter
- Putting genuine prior information in here generally improves emulator performance, compared with standard 'non-informative' priors

12 Multi-output emulators
- When we need to emulate several simulator outputs, there are a number of available approaches:
  - A single-output GP with added input(s) indexing the outputs – for temperature outputs on a grid, make the grid coordinates 2 additional inputs
  - Independent GPs
  - A multivariate GP
  - Independent GPs for a linear transformation of the outputs, e.g. principal components – a possibility for dimension reduction
- These are all documented in the toolkit
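The principal-components route can be sketched as follows: project the training outputs onto a few leading components and fit one independent emulator per component score. The names and sizes here are illustrative, not a toolkit procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.normal(size=(20, 100))             # 20 runs, 100 outputs (a grid, say)
Y = Y * np.linspace(2.0, 0.1, 100)         # give the outputs decaying scale

Y_mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
k = 3                                      # keep the k leading components
scores = U[:, :k] * s[:k]                  # one score series per component

# ...fit k independent GP emulators to the columns of `scores`, then map
# their predictions back to output space:
recon = Y_mean + scores @ Vt[:k]
```

Truncating to `k` components is where the dimension reduction happens: only `k` emulators are needed instead of 100.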

13 (figure from ThreadVariantMultipleOutputs)

14 Dynamic emulation
- Many simulators predict a process evolving in time
  - At each time-step the simulator updates the system state, often driven by external forcing variables at each time-step
  - Climate models are usually dynamic in this sense
- We are interested in emulating the simulator's time series of outputs
  - The various forms of multi-output emulation can be used
  - Or a dynamic emulator that works by emulating the single time-step, and then iterating the emulator
- Also documented in the toolkit

15 (figure from ThreadVariantDynamic)

16 Stochastic emulation
- Other simulators produce non-deterministic outputs: running a stochastic simulator twice with the same input x produces randomly different outputs
- Different emulation strategies arise depending on what aspect of the output is of interest
  - Interest focuses on the mean: the output has added noise, which we allow for when building the emulator
  - Interest focuses on the risk of exceeding a threshold: either emulate the distribution and derive the risk, or emulate the risk directly
- This is not yet covered in the toolkit
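When interest focuses on the mean, one common device for allowing for the added noise is a 'nugget' term: the noise variance is added to the diagonal of the covariance matrix, so the emulator no longer interpolates individual noisy runs. A minimal sketch, with illustrative numbers (`add_nugget` is not a toolkit function):

```python
import numpy as np

def add_nugget(K, noise_var):
    # add the run-to-run noise variance to each diagonal entry
    return K + noise_var * np.eye(K.shape[0])

K = np.array([[1.0, 0.8],
              [0.8, 1.0]])                 # covariance between two training runs
K_noisy = add_nugget(K, 0.1)               # diagonal 1.0 -> 1.1, off-diagonal unchanged
```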

17 Design

18 Training sample design
- To build an emulator, we use a set of simulator runs
  - Our training data are y1 = f(x1), ..., yn = f(xn), where x1, x2, ..., xn are n different points in the space of possible inputs
  - This set of n points is a design
- Traditional Monte Carlo methods use randomly chosen design points
- One reason why emulation can be better is that we can use a carefully chosen design
  - A good design will provide us with maximum information about the simulator, and hence an emulator that is as good as possible

19 (figure-only slide)

20 Design principles
- Design space: over what range of values for the inputs do we want to build the emulator?
  - This is usually a small part of the possible input space
- Filling the space: we want to place n points in this space
  - It is generally good practice to spread them quite evenly over the design space, because adding a new point very close to an existing point provides little additional information
  - There are several ways to generate space-filling designs

21 Factorial designs
- A factorial design is a grid: choose a set of values for each input, and take all combinations of these
- For example, 3 values for each of 2 inputs gives a 3x3 = 9 point design:
  x x x
  x x x
  x x x
- Disadvantages
  - In higher dimensions it requires too many points: just 3 values for each of 8 inputs means 3^8 = 6561 runs
  - Wasteful when some inputs don't affect the output appreciably: the 3x3 design has just 3 distinct points if one input is inactive
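The grid, and the combinatorial explosion, can be seen directly: a factorial design is just the Cartesian product of the chosen levels for each input.

```python
from itertools import product

levels = [0.0, 0.5, 1.0]                    # 3 values per input
design = list(product(levels, repeat=2))    # all combinations for 2 inputs
print(len(design))                          # 3 x 3 = 9 points
print(3 ** 8)                               # 6561 runs for 8 inputs at 3 levels
```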

22 Latin hypercube designs
- LHC designs use n values for each input, combined randomly across inputs
- Advantages
  - Doesn't necessarily require a large number of points
  - Nothing is lost if some inputs are inactive
- Disadvantages
  - The random pairing may not produce an even spread of points
  - Need to generate many LHC designs and pick the best
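The generate-many-and-pick-the-best step can be sketched as follows: build random Latin hypercubes on [0, 1]^d and keep the one with the largest minimum inter-point distance (a simple 'maximin' criterion). Illustrative code, not a toolkit procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def lhc(n, d):
    # one stratum per input per run, paired randomly across inputs
    cells = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
    return (cells + rng.uniform(size=(n, d))) / n

def min_dist(X):
    # smallest pairwise distance in the design
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist[~np.eye(len(X), dtype=bool)].min()

best = max((lhc(9, 2) for _ in range(50)), key=min_dist)
```

Each column of `best` still takes exactly one value per stratum, so nothing is lost if one input turns out to be inactive: the remaining coordinates still form an n-point stratified sample.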

23 (figure-only slide)

24 Some more choices
- Various formulae and algorithms exist to generate space-filling designs for any number of inputs
  - The Sobol sequence is often used: quick and convenient, but not always good when some inputs are inactive
- Optimal designs maximise/minimise some criterion, e.g. maximum entropy designs; these can be hard to compute
- Hybrid designs try to satisfy two criteria: space-filling, but also having a few points closer together, in order to estimate correlation lengths well

25 Design for validation
- Validation checks the outputs from a sample of new simulator runs against the predictions of an emulator
  - Are the true outputs as close to the emulator mean values as the emulator variances say they should be?
- What would be good inputs to use for validation?
  - Points close to others already used for training: such points test the correlation lengths
  - Points far from any training sample point: these test the mean function
- Together, the training and validation designs should look space-filling except for some points close together, as suggested for hybrid training sample designs
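A minimal sketch of the check described above: standardised prediction errors (true output minus emulator mean, divided by emulator standard deviation) should mostly lie within about ±2. All numbers are made up for illustration.

```python
import numpy as np

y_true  = np.array([1.02, 0.95, 1.31])   # new simulator runs
em_mean = np.array([1.00, 1.00, 1.20])   # emulator predictions at those inputs
em_sd   = np.array([0.05, 0.04, 0.06])   # emulator predictive std devs

z = (y_true - em_mean) / em_sd           # standardised errors
print(z)                                 # here all within +/-2: no alarm
```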

26 Design for calibration
- A very different design problem: how should we design an experiment observing the real system?
- How should we set the values of controllable inputs, with the aim of learning most about uncertain (uncontrollable) inputs and model discrepancy?
- This topic is as yet largely unexplored!

27 (figure from ThreadTopicExperimentalDesign)

28 Bayes linear emulation

29 Bayes linear methods
- The approach in this course is based in the fully Bayesian framework
- But there is an alternative framework – Bayes linear methods
  - Based only on first- and second-order moments: means, variances, covariances
  - Avoids making assumptions about distributions
  - Its predictions are also first- and second-order moments – means, variances, covariances, but no distributions
- The toolkit contains theory and procedures for Bayes linear emulators (ThreadCoreBL etc.)

30 (figure-only slide)

31 Bayes linear emulators
- Much of the mathematics is very similar
- A Bayes linear emulator is not a GP, but it gives the same mean and variance predictions for f(x), for given correlation lengths and mean function parameters (although these are handled differently)
- However, the emulator predictions no longer have distributions, compared with GP emulators
- Advantages: simpler, and may be feasible for more complex problems
- Disadvantages: the absence of distributions limits many of the uses of emulators, so compromises are made
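The moment-based update behind such emulators is the Bayes linear adjustment. Given prior means and a second-order (co)variance specification linking the quantity of interest f(x) to the training data D, the adjusted expectation and variance are

  E_D[f] = E[f] + Cov(f, D) Var(D)^{-1} (D - E[D])
  Var_D[f] = Var(f) - Cov(f, D) Var(D)^{-1} Cov(D, f)

with no distributional assumptions needed. A sketch with illustrative numbers:

```python
import numpy as np

E_f, Var_f = 0.0, 1.0                    # prior moments of f(x)
E_D    = np.array([0.0, 0.0])            # prior means of the data
Var_D  = np.array([[1.0, 0.5],
                   [0.5, 1.0]])          # prior variance of the data
Cov_fD = np.array([0.8, 0.3])            # prior covariance of f(x) with the data
D      = np.array([1.2, 0.4])            # observed training outputs

w = np.linalg.solve(Var_D, Cov_fD)       # Var(D)^{-1} Cov(D, f)
adj_mean = E_f + w @ (D - E_D)           # adjusted expectation
adj_var  = Var_f - w @ Cov_fD            # adjusted variance, <= prior variance
```

Note that `adj_mean` and `adj_var` are all the adjustment delivers: moments, not a predictive distribution, which is exactly the limitation mentioned above.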



