Toolbox example with three surrogates

Data:
clc; clear all;
X = [1.4004 0.0466 2.8028 4.5642 6.1976];
Y = sin(X);
NbVariables = 1;
NbPointsTraining = length(X);
Xplot = linspace(-pi/4, 2.5*pi)';
Yplot = sin(Xplot);
Increasing the bounds for the kriging theta:
UpperBound = 30*Theta0;
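The slide refers to widening the upper bound on the kriging correlation parameter theta. As an illustrative sketch (not the toolbox's actual code), the NumPy snippet below shows what that bound does: theta is chosen by maximizing the concentrated likelihood of an ordinary-kriging model, and the search is confined to [lower, UpperBound], so a too-tight upper bound can clip the optimum. The helpers `neg_concentrated_loglik` and `best_theta` are hypothetical names introduced here for illustration.

```python
import numpy as np

# Training data from the slides
X = np.array([1.4004, 0.0466, 2.8028, 4.5642, 6.1976])
Y = np.sin(X)
n = len(X)

def neg_concentrated_loglik(theta):
    """Negative concentrated log-likelihood of an ordinary-kriging model
    with Gaussian correlation R_ij = exp(-theta * (x_i - x_j)^2)."""
    R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2) + 1e-10 * np.eye(n)  # small nugget for stability
    Rinv = np.linalg.inv(R)
    ones = np.ones(n)
    beta = (ones @ Rinv @ Y) / (ones @ Rinv @ ones)   # constant-trend estimate
    sigma2 = ((Y - beta) @ Rinv @ (Y - beta)) / n     # process-variance estimate
    _, logdet = np.linalg.slogdet(R)
    return 0.5 * (n * np.log(sigma2) + logdet)

def best_theta(upper, lower=1e-3, ngrid=500):
    """Grid search for the maximum-likelihood theta within [lower, upper]."""
    grid = np.linspace(lower, upper, ngrid)
    return grid[np.argmin([neg_concentrated_loglik(t) for t in grid])]

Theta0 = 1.0
print("theta* with UpperBound = Theta0:   ", best_theta(Theta0))
print("theta* with UpperBound = 30*Theta0:", best_theta(30 * Theta0))
```

If the two printed values differ, the original bound was active and widening it changed the fitted model, which is the point of the slide.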
Which surrogate is the best? Many papers have been written comparing surrogates on a single problem or a group of problems in order to claim that a particular surrogate is superior. As we will see, no surrogate is superior for most problems. When authors compare surrogates on test problems, they can often afford a dense grid of test points. When we need to choose a surrogate for a particular problem, the cross-validation error is our best bet. Other error metrics exist that are based on assumptions tied to a given surrogate, but they are not suitable for comparing surrogates of different types.
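To make the cross-validation idea concrete, here is a minimal sketch (in Python/NumPy rather than the MATLAB toolbox, and with polynomial surrogates standing in for the toolbox's surrogate types) of leave-one-out cross validation on the five training points from the earlier slide. The helper `press_rms` is a hypothetical name introduced here; it computes the root-mean-square of the leave-one-out prediction errors, which is the kind of metric used to rank surrogates.

```python
import numpy as np

# Training data from the slides
X = np.array([1.4004, 0.0466, 2.8028, 4.5642, 6.1976])
Y = np.sin(X)

def press_rms(X, Y, degree):
    """Leave-one-out cross-validation RMS error for a polynomial surrogate.
    Each point is held out in turn, the surrogate is refit on the rest,
    and the prediction error at the held-out point is recorded."""
    errs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        coeffs = np.polyfit(X[mask], Y[mask], degree)
        errs.append(np.polyval(coeffs, X[i]) - Y[i])
    return np.sqrt(np.mean(np.square(errs)))

for d in (1, 2, 3):
    print(f"degree {d}: PRESS_rms = {press_rms(X, Y, d):.4f}")
```

The surrogate with the smallest PRESS_rms on this data would be the cross-validation pick, but as the next slides note, that ranking can change with the number of points or the design of experiments.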
Recent study on cross-validation error
F.A.C. Viana, R.T. Haftka, and V. Steffen Jr, "Multiple surrogates: how cross-validation errors can help us to obtain the best predictor," Structural and Multidisciplinary Optimization, Vol. 39 (4), pp. 439-457, 2009.
The study tests a series of problems with 24 surrogates and different designs of experiments.
Conclusions
Cross validation is useful for identifying the top group of surrogates for a given design of experiments. Changing the number of points, or even the design of experiments itself, can change the ranking of the surrogates. For many industrial problems, fitting surrogates and using them for optimization is much cheaper than generating data points. It therefore makes sense to use several surrogates, not just one!