Slide 1: Using Derivative and Integral Information in the Statistical Analysis of Computer Models
Gemma Stephenson, March 2007

www.mucm.group.shef.ac.uk

Slide 2: Outline
- Background: complex models; simulators and emulators
- Building an emulator
- Examples: 1-dimensional and 2-dimensional
- Future work: use of derivatives

Slide 3: Complex Models
- Complex models simulate the behaviour of real-world systems.
- Simulator: a deterministic function y = η(x), with inputs x and outputs y, the predictions of the real-world system being modelled.
- Uncertainty arises in x, in η(.), and in how well the emulator approximates the simulator.

Slide 4: Emulators
- Gaussian process (GP) emulation: a Gaussian process is one in which every finite linear combination of values of the process has a normal distribution.
- An emulator is a statistical approximation of the simulator; its mean is used as an approximation to the simulator.
- The approximation is simpler and quicker to evaluate than the original function.
- Used for uncertainty analysis and sensitivity analysis.

Slide 5: Building an Emulator
- Deterministic function: y = η(x).
- Choose n design points x_1, ..., x_n, giving training data y^T = {y_1 = η(x_1), ..., y_n = η(x_n)}.
- Aim: using these observations, make Bayesian inferences about η(x).
- Prior information about η(.) is represented as a GP; after the training data are incorporated, the posterior distribution is also a GP.

Slide 6: Prior Knowledge
- Mean: E[η(x) | β] = h(x)^T β, where h(x) is a known function of x and β is a vector of unknown coefficients.
- Covariance: Cov(η(x), η(x') | σ²) = σ² c(x, x'), with c(x, x') = exp{−(x − x')^T B (x − x')}, where B is a diagonal matrix of smoothing parameters.
- Weak prior distribution for β and σ²: p(β, σ²) ∝ σ⁻².
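The prior correlation function on this slide is straightforward to write down in code. The sketch below is my own minimal illustration (function name and the test values of B are not from the slides), assuming B = diag(b):

```python
import numpy as np

def sq_exp_corr(x, xp, b):
    """Squared-exponential correlation c(x, x') = exp{-(x - x')^T B (x - x')},
    with B = diag(b) a diagonal matrix of smoothing parameters."""
    d = np.atleast_1d(x) - np.atleast_1d(xp)
    return float(np.exp(-np.sum(np.asarray(b) * d ** 2)))

# Correlation decays with squared distance; larger b -> faster decay.
print(sq_exp_corr(0.0, 1.0, [0.5]))   # exp(-0.5)
print(sq_exp_corr([0.0, 0.0], [1.0, 1.0], [0.5, 0.5]))
```

Points that coincide have correlation 1, and the correlation never reaches zero, which encodes the smoothness assumption discussed on slide 10.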

Slide 7: Posterior Information
- m**(x) is the posterior mean, used to predict the output at new points.
- c**(x, x') is the posterior covariance.

Slide 8: 1-Dimensional Example
- η(x) = 5 + x + cos(x).
- Choose n = 7 design points: x_1 = −6, x_2 = −4, ..., x_6 = 4, x_7 = 6.
- Training data: y^T = {y_1 = η(x_1), ..., y_7 = η(x_7)}.
- Take h(x)^T = (1, x); the emulator mean is then derived.
- The variance is derived by choosing c(x, x') = exp{−0.5 (x − x')²} as the correlation function.
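This 1-dimensional example can be reproduced in a few lines. The sketch below follows standard GP-emulation steps under the slide's choices (linear mean h(x)^T = (1, x), correlation exp{−0.5(x − x')²}, and the generalised-least-squares estimate of β implied by the weak prior); the variable names and implementation details are mine, not the author's:

```python
import numpy as np

def eta(x):
    """Simulator from slide 8."""
    return 5.0 + x + np.cos(x)

# n = 7 design points and training data
X = np.array([-6.0, -4.0, -2.0, 0.0, 2.0, 4.0, 6.0])
y = eta(X)

def corr(a, b):
    """Correlation matrix c(x, x') = exp{-0.5 (x - x')^2}."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

H = np.column_stack([np.ones_like(X), X])   # rows h(x)^T = (1, x)
A = corr(X, X)
Ainv = np.linalg.inv(A)

# Estimate of beta under the weak prior p(beta, sigma^2) ∝ sigma^-2
beta = np.linalg.solve(H.T @ Ainv @ H, H.T @ Ainv @ y)

def m_star(xnew):
    """Posterior (emulator) mean at new inputs."""
    xnew = np.atleast_1d(xnew)
    h = np.column_stack([np.ones_like(xnew), xnew])
    t = corr(xnew, X)
    return h @ beta + t @ Ainv @ (y - H @ beta)

# The posterior mean interpolates the training data exactly
print(np.max(np.abs(m_star(X) - y)))   # essentially 0
```

Between design points the emulator mean reverts smoothly toward the fitted linear trend, which is what the slide's figure illustrates.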

Slide 9: 1-Dimensional Example (figure)

Slide 10: Smoothness
- Assume η(.) is a smooth, continuous function of the inputs.
- Given that we know y at x = i, smoothness implies y stays close to that value for any x close enough to i.
- The parameter b specifies how smooth the function is: it tells us how far a point can be from a design point before the uncertainty becomes appreciable.
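The role of b can be illustrated with a deliberately simplified case that is not on the slides: a unit-variance, known-mean-zero GP conditioned on a single observation at 0, where the posterior variance at x reduces to 1 − c(x, 0)². Larger b makes the correlation decay faster, so the uncertainty becomes appreciable closer to the design point:

```python
import numpy as np

def post_var(x, b):
    """Posterior variance at x for a unit-variance, zero-mean GP conditioned
    on one observation at the design point 0: 1 - c(x, 0)^2,
    with c(x, x') = exp{-b (x - x')^2}."""
    c = np.exp(-b * x ** 2)
    return 1.0 - c ** 2

for b in (0.1, 1.0):
    print(b, post_var(1.0, b))   # uncertainty at distance 1 grows with b
```

At the design point itself the variance is exactly zero; at distance 1 it is already large for b = 1 but still small for b = 0.1.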

Slide 11: 2-Dimensional Example
- x = (x_1, x_2)^T, with η(x) = x_1 + x_2 + sin(x_1 x_2) + 2cos(x_1).
- n = 20 design points chosen using Latin hypercube sampling.
- B is estimated from the training data.
- The emulator mean is used to predict the output at 100 new inputs.
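The design step of this example can be sketched as follows. This is a basic Latin hypercube construction written from scratch (the slides do not say which implementation or input ranges were used; the unit square and the seed are my assumptions):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Basic Latin hypercube on [0, 1]^d: each dimension is split into n
    equal strata, with exactly one point per stratum, and the strata are
    independently permuted in each dimension."""
    perm = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perm + rng.random((n, d))) / n

rng = np.random.default_rng(0)
X = latin_hypercube(20, 2, rng)          # n = 20 design points in 2 dimensions

def eta(x):
    """Simulator from slide 11."""
    return x[:, 0] + x[:, 1] + np.sin(x[:, 0] * x[:, 1]) + 2 * np.cos(x[:, 0])

y = eta(X)                               # training data at the design points
```

The stratification guarantees that each input dimension is covered evenly, which is why Latin hypercube designs are a common default for emulator training data.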

Slide 12: 2-Dimensional Example (figure)

Slide 13: Future Work
- How can derivative (and integral) information help?

Slide 14: Without Derivative Information (figure)

Slide 15: Derivative Information (figure)

Slide 16: Using Derivative Information (figure)
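The reason derivative observations slot naturally into a GP emulator is that derivatives of a GP are themselves jointly Gaussian with the process, with covariances obtained by differentiating the correlation function. For the 1-dimensional squared-exponential correlation c(x, x') = exp{−b(x − x')²}, the needed cross-covariances are standard results (the code below is my illustration, not the author's implementation):

```python
import numpy as np

b = 0.5

def c(x, xp):
    """Prior correlation c(x, x') = exp{-b (x - x')^2}."""
    return np.exp(-b * (x - xp) ** 2)

def c_dx(x, xp):
    """Cov(eta'(x), eta(x')) / sigma^2 = d c / d x = -2 b (x - x') c."""
    return -2 * b * (x - xp) * c(x, xp)

def c_dxdxp(x, xp):
    """Cov(eta'(x), eta'(x')) / sigma^2 = d^2 c / dx dx'
    = (2 b - 4 b^2 (x - x')^2) c."""
    d = x - xp
    return (2 * b - 4 * b ** 2 * d ** 2) * c(x, xp)

# Sanity check: the analytic cross-covariance matches a finite difference
h = 1e-6
fd = (c(1.0 + h, 0.0) - c(1.0 - h, 0.0)) / (2 * h)
print(abs(fd - c_dx(1.0, 0.0)))   # tiny
```

With these covariances, derivative observations can be appended to the training data and conditioned on exactly as in the ordinary emulator update.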

Slide 17: Future Work
- Cost of using derivatives:
  - when they are already available;
  - when we have the capability to produce them.
