Local surrogates To model a complex wavy function we need a lot of data. Modeling a wavy function with high-order polynomials is inherently ill-conditioned.


2 Local surrogates To model a complex wavy function we need a lot of data. Modeling a wavy function with high-order polynomials is inherently ill-conditioned. With a lot of data we normally predict function values using only nearby values. We may fit several local surrogates, as in the figure. For example, if you have the price of gasoline on the first of every month from 2000 through 2009, how many values would you use to estimate the price on June 15, 2007?
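The idea of predicting from only nearby values can be sketched in a few lines. This is a minimal illustration (not part of the original slides): the function name `local_fit` and the choice of k nearest points and polynomial degree are assumptions for the example, not a specific method from the lecture.

```python
import numpy as np

def local_fit(x_data, y_data, x0, k=5, degree=1):
    """Fit a low-order polynomial to the k samples nearest x0
    and evaluate it at x0 (a minimal local-surrogate sketch)."""
    idx = np.argsort(np.abs(x_data - x0))[:k]      # indices of k nearest points
    coeffs = np.polyfit(x_data[idx], y_data[idx], degree)
    return np.polyval(coeffs, x0)

# Example: monthly samples of a smooth trend; predict mid-interval
# using only the few surrounding months.
x = np.arange(120.0)                               # 120 "months"
y = 2.0 + 0.01 * x + 0.2 * np.sin(x / 6.0)
print(local_fit(x, y, 89.5))
```

On exactly linear data the local linear fit reproduces the line regardless of which nearby points are chosen, which is a quick sanity check on the implementation.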

3 Popular local surrogates Moving least squares: weights points near the prediction location more heavily. Radial basis neural network: regression with local functions that decay away from the data points. Kriging: radial basis functions, but with a fitting philosophy based not on the error at the data points but on the correlation between function values at nearby and distant points.

4 Review of Linear Regression

5 Moving least squares

6 Weighted least squares
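The weighted-least-squares step at the heart of moving least squares can be sketched as follows. This is an illustrative Python sketch (not from the slides): the Gaussian weight function, the linear basis, and the names `weighted_lsq` and `mls_predict` are assumptions for the example.

```python
import numpy as np

def weighted_lsq(X, y, w):
    """Solve min_b sum_i w_i (y_i - X_i b)^2 by scaling rows
    with sqrt(w_i) and using ordinary least squares."""
    sw = np.sqrt(w)
    b, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return b

def mls_predict(x_data, y_data, x0, spread=1.0):
    """Moving least squares: refit a (here linear) polynomial at each
    prediction point x0, weighting nearby data more heavily."""
    w = np.exp(-((x_data - x0) / spread) ** 2)     # Gaussian decay weights
    X = np.vander(x_data, 2)                       # columns [x, 1]
    b = weighted_lsq(X, y_data, w)
    return b[0] * x0 + b[1]

x = np.linspace(0, 10, 21)
y = np.sin(x)
print(mls_predict(x, y, 5.0, spread=1.5))          # local linear estimate near x=5
```

Because the polynomial is refit at every prediction point, the surrogate is "moving": the weights, and hence the coefficients, change with x0.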

7 Six-hump camelback function [Figure: function definition, and a fit with moving least squares using quadratic polynomials.]
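The defining equation did not survive the transcript; the standard form of the six-hump camelback test function (stated here for reference, not taken from the slides) is f(x1, x2) = (4 - 2.1 x1^2 + x1^4/3) x1^2 + x1 x2 + (-4 + 4 x2^2) x2^2, with two global minima of about -1.0316 near (±0.0898, ∓0.7126).

```python
import numpy as np

def camelback(x1, x2):
    """Six-hump camelback test function (standard form)."""
    return ((4 - 2.1 * x1**2 + x1**4 / 3) * x1**2
            + x1 * x2
            + (-4 + 4 * x2**2) * x2**2)

print(camelback(0.0898, -0.7126))   # near a global minimum, ≈ -1.0316
```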

8 Effect of number of points and decay rate.

9 Radial basis neural networks [Diagram: a single input x feeds radial basis functions a1, a2, a3, each offset by a bias b; their outputs are combined with weights W1, W2, W3 to give the output ŷ(x). The radial basis function peaks at 1 and falls to 0.5 at ±0.833.]

10 In regression notation
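In regression notation, the RBF network is linear regression on a design matrix of Gaussian basis functions centered at (some of the) data points, plus a bias column. A minimal Python sketch, assuming Gaussian shape functions exp(-((x - c)/spread)^2) and every-other data point as a center (both choices are assumptions for the example):

```python
import numpy as np

def rbf_design(x, centers, spread):
    """Design matrix: one Gaussian basis function per center, plus a bias column."""
    Phi = np.exp(-((x[:, None] - centers[None, :]) / spread) ** 2)
    return np.hstack([Phi, np.ones((len(x), 1))])   # last column = bias

x = np.linspace(0, 1, 9)
y = np.sin(2 * np.pi * x)
centers = x[::2]                     # use every other data point as a center
A = rbf_design(x, centers, spread=0.3)
W, *_ = np.linalg.lstsq(A, y, rcond=None)           # ordinary least squares
resid = A @ W - y
print(np.max(np.abs(resid)))                        # training residual
```

With fewer centers than data points this is ordinary regression with a nonzero residual; using all data points as centers turns it into interpolation, which is what the next slide's default Matlab fit does.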

11 Example Evaluate the function y = x + 0.5sin(5x) at 21 points in the interval [1, 9], fit an RBF to it, and compare the surrogate to the function over the interval [0, 10]. Fitting with the default options in Matlab achieves zero rms error by using all data points as basis functions (neurons). Interpolation is very good, but even mild extrapolation is horrible.
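The same behavior can be reproduced outside Matlab. A sketch using scipy's `RBFInterpolator` with a Gaussian kernel (an analogue of, not the same algorithm as, `newrb`; the shape parameter epsilon = 2 is an assumption): interpolation error at the 21 data points is essentially zero, while extrapolation beyond [1, 9] is far off.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

x = np.linspace(1, 9, 21)
y = x + 0.5 * np.sin(5 * x)

# One Gaussian basis function per data point -> exact interpolation.
rbf = RBFInterpolator(x[:, None], y, kernel="gaussian", epsilon=2.0)

interp_err = np.max(np.abs(rbf(x[:, None]) - y))
extrap_err = abs(float(rbf(np.array([[11.0]]))) - (11 + 0.5 * np.sin(55)))
print(interp_err)    # essentially zero at the data points
print(extrap_err)    # large: the Gaussians decay away from the data
```

Outside the data the Gaussian basis functions die off, so the surrogate cannot continue the rising trend of y = x, which is why mild extrapolation fails so badly.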

12 Accept 0.1 mean squared error net=newrb(x,y,0.1,1,20,1); with the spread set to 1 (11 neurons were used). With about half of the data points used as basis functions, the fit is more like polynomial regression. Interpolation is not as good, but the trend is captured, so extrapolation is not as disastrous. Obviously, if we just wanted to capture the trend, we would have done better with a polynomial.
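Accepting a nonzero training error has a loose analogue in scipy's `smoothing` parameter, which relaxes exact interpolation toward a smoother trend (an analogy, not a reproduction of `newrb`'s error-goal stopping; the value smoothing = 1.0 is an arbitrary choice for illustration):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

x = np.linspace(1, 9, 21)
y = x + 0.5 * np.sin(5 * x)

# smoothing > 0 trades exact interpolation for a smoother fit,
# loosely analogous to accepting a nonzero mean-squared-error goal.
smooth = RBFInterpolator(x[:, None], y, kernel="gaussian",
                         epsilon=2.0, smoothing=1.0)
resid = smooth(x[:, None]) - y
print(np.mean(resid**2))     # nonzero training error by design
```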

13 Too narrow a spread net=newrb(x,y,0.1,0.2,20,1); (17 neurons used). With a spread of 0.2 and the points 0.4 apart (21 points in [1, 9]), the shape functions decay to less than 0.02 at the nearest point. This means that each data point is fitted individually, so we get spikes at the data points. A rule of thumb is that the spread should not be smaller than the distance to the nearest point.
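The 0.02 figure can be checked directly, assuming the Gaussian shape-function form exp(-(d/spread)^2) (the slide's exact normalization of the spread is not shown in the transcript):

```python
import numpy as np

spacing, spread = 0.4, 0.2              # 21 points in [1, 9] -> 0.4 apart
decay = np.exp(-(spacing / spread) ** 2)
print(decay)                            # exp(-4) ≈ 0.018, below 0.02
```

With the basis functions nearly zero at the neighboring points, each weight effectively fits its own data point in isolation, which is exactly the spiking behavior the slide describes.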

