
The Influence of Experimental Designs on the Performance of Surrogate Model Based Costly Global Optimization Solvers


Dedicated to Dr. N. Andrei on the occasion of his 60th birthday

Abstract: When dealing with costly objective functions in optimization, one good alternative is to use a surrogate model approach. A common feature of all such methods is the need for an initial set of points, or “experimental design”, in order to start the algorithm. Since the behavior of the algorithms often depends heavily on this set, the question is how to choose a good experimental design. We investigate this by solving a number of problems using different designs, and compare the outcome with respect to function evaluations and a root mean square error test of the true function versus the surrogate model produced. Each combination of problem and design is solved by three different solvers available in the TOMLAB optimization environment. Results indicate that two of the designs are superior.

Keywords: Black-box, Surrogate model, Costly functions, Latin Hypercube Designs, Experimental Design.

N.-H. Quttineh received an M.Sc. in Optimization from Linköping University in 2004 and has been a graduate student at Mälardalen University since 2005, with the main subject Algorithms for costly global optimization.

K. Holmström received a Ph.D. in Numerical Analysis from Umeå University in 1988 and did industrial research and development work for ABB between 1990 and 1993. Besides his academic career, he has been a consultant in more than 100 industrial and scientific projects. Since 2001 he has been Full Professor in Optimization at Mälardalen University, with main research interests in algorithm and software development for industrial and financial optimization problems, in particular costly global mixed-integer constrained nonlinear optimization. Holmström is CEO of Tomlab Optimization Inc. and creator of TOMLAB, an advanced MATLAB optimization environment distributed since 1998.

Nils-Hassan QUTTINEH, Kenneth HOLMSTRÖM, The Influence of Experimental Designs on the Performance of Surrogate Model Based Costly Global Optimization Solvers, Studies in Informatics and Control, ISSN 1220-1766, vol. 18 (1), pp. 87-95, 2009.

1. Introduction

Global optimization of continuous black-box functions that are costly (computationally expensive, CPU-intensive) to evaluate is a challenging problem. Several approaches based on response surface techniques have been developed over the years. A common feature is that, unlike local optimization methods, every computed function value is saved and utilized.

Problems that are costly to evaluate are commonly found in engineering design, as well as in industrial and financial applications. A function value could be the result of a complex computer program or an advanced simulation, e.g. computational fluid dynamics (CFD), and hence consume anything from a few minutes to many hours of CPU time.

From an application perspective there are often restrictions on the variables besides the lower and upper bounds, such as linear, nonlinear or even integer constraints. The most general problem formulation is as follows:

The Mixed-Integer Costly Global Black-Box Nonconvex Problem

$$
\begin{array}{rl}
\min\limits_{x} & f(x) \\
\text{s.t.} & x_L \le x \le x_U, \\
 & b_L \le A x \le b_U, \\
 & c_L \le c(x) \le c_U, \\
 & x_j \in \mathbb{Z} \ \text{for all } j \in I,
\end{array} \tag{1}
$$

where $f(x) \in \mathbb{R}$ and $x, x_L, x_U \in \mathbb{R}^d$. The matrix $A \in \mathbb{R}^{m_1 \times d}$, with $b_L, b_U \in \mathbb{R}^{m_1}$, defines the $m_1$ linear constraints, and $c(x) \in \mathbb{R}^{m_2}$, with $c_L, c_U \in \mathbb{R}^{m_2}$, defines the $m_2$ nonlinear constraints. The variables $x_I$ are restricted to be integers, where $I$ is an index subset of $\{1, \ldots, d\}$.

Let $\Omega_B \subset \mathbb{R}^d$ be the feasible set defined only by the simple bounds (the box constraints), and let $\Omega \subseteq \Omega_B$ be the feasible set defined by all the constraints in (1).

Almost every Costly Global Optimization (CGO) solver utilizes a surrogate model, or response surface, to approximate the true (costly) function. The RBF algorithm introduced by Powell and Gutmann [2, 9] uses radial basis function interpolation to build an approximating surrogate model. The EGO algorithm by Jones et al. [6] utilizes the DACE framework. By optimizing a less costly utility function, these algorithms determine a new point at which the original objective function is then evaluated. This is repeated until some convergence criterion is fulfilled.
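The core of such a surrogate step, fitting an interpolant through the sampled points, can be sketched as follows. This is a minimal illustration in Python, not the RBF algorithm of Powell and Gutmann (which uses, e.g., cubic basis functions with a polynomial tail); here a plain Gaussian kernel is assumed, and all function and variable names are hypothetical.

```python
import numpy as np

def rbf_surrogate(X, f, eps=1.0):
    """Fit a Gaussian RBF interpolant s(x) = sum_i lam_i * exp(-(eps*||x - x_i||)^2)
    through the sampled points (X, f)."""
    # pairwise distances between all sample points
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * D) ** 2)      # Gaussian kernel matrix
    lam = np.linalg.solve(Phi, f)      # interpolation conditions s(x_i) = f(x_i)

    def s(x):
        d = np.linalg.norm(X - x, axis=-1)
        return np.exp(-(eps * d) ** 2) @ lam

    return s

# usage on a cheap stand-in for a costly function
X = np.array([[0.0], [0.5], [1.0]])
f = np.sin(2 * np.pi * X[:, 0])
s = rbf_surrogate(X, f)
```

The surrogate `s` reproduces the costly function exactly at the sampled points and can be evaluated cheaply anywhere else, which is what makes optimizing a utility function on it affordable.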

6. Conclusion

The N2 setting performed better for all experimental designs, which is a clearly positive result. One could of course try starting with even more points, but since the CGO solvers are constructed so that each new point is chosen carefully, utilizing information from all previously sampled points, this is probably not a good idea.

Finding a feasible experimental design with space-filling capacity is not easy. The algorithm proposed in this paper generates an initial design with feasible points having the structure of a Maximin LHD. To see any real effect of a fully feasible experimental design, one must probably have test problems where a large area of the design space is infeasible. Most of the problems in PC have large feasible areas and thus the effect is not as noticeable. Also, as the number of initial points N is typically a small part of the total number of sampled points, the effect is limited.
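A simple Maximin LHD heuristic, of the kind the designs above are built from, can be sketched as follows: generate many random Latin Hypercube Designs and keep the one whose smallest pairwise distance is largest. This is an illustrative Python sketch with hypothetical names, not the constrained algorithm of the paper (which additionally enforces feasibility of the points).

```python
import numpy as np

def maximin_lhd(n, d, trials=200, rng=None):
    """Among `trials` random Latin Hypercube Designs of n points in [0,1]^d,
    return the one maximizing the minimum pairwise distance (maximin)."""
    rng = np.random.default_rng(rng)
    best, best_score = None, -np.inf
    for _ in range(trials):
        # one random LHD: each column is a permutation of the n cell midpoints
        pts = (np.argsort(rng.random((n, d)), axis=0) + 0.5) / n
        D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        score = D[np.triu_indices(n, 1)].min()  # smallest pairwise distance
        if score > best_score:
            best, best_score = pts, score
    return best
```

Every candidate is a valid LHD by construction (one point in each of the n slices along every coordinate); the maximin criterion then selects the most space-filling among them, following den Hertog et al. [10].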

Sampling the corner points of the bounding box adds tremendous stability to the solvers; one can think of it as pinpointing the corners of the surface, thereby obtaining a more stable description of the boundary. This feature is important, as it tends to make the solvers sample more interior points, which often helps convergence.

The Maximin LHD approach is superior with respect to the RMS error. Combining it with the success of the Corner Point Strategy seemed like a promising idea, but unfortunately did not improve the f% and x% metrics as we had hoped.
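The RMS error metric used above compares the surrogate to the true function on a set of test points; a minimal sketch (hypothetical names, assuming both functions take a point and return a scalar):

```python
import numpy as np

def rms_error(f_true, surrogate, X_test):
    """Root mean square error of the surrogate vs. the true function
    over the rows of X_test."""
    e = np.array([f_true(x) - surrogate(x) for x in X_test])
    return np.sqrt(np.mean(e ** 2))

# usage: a surrogate that is off by a constant 1 has RMS error 1
err = rms_error(lambda x: x[0], lambda x: x[0] + 1.0, np.zeros((5, 1)))
```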

There is no obvious winner, since all the experimental designs perform satisfactorily. But since we consider costly functions, even small differences do matter. The combination of Corner Points and a global solver performs very well compared to the other experimental designs, with robust results for all metrics.


  1. Dolan, E. D., J. J. Moré, and T. S. Munson, Optimality measures for performance profiles, Preprint ANL/MCS-P1155-0504 (2004).
  2. Gutmann, H.-M., A radial basis function method for global optimization, Journal of Global Optimization 19 (2001).
  3. Holmström, K., An adaptive radial basis algorithm (ARBF) for expensive black-box global optimization, Journal of Global Optimization 41 (2008).
  4. Holmström, K. and M. M. Edvall, Chapter 19: The TOMLAB optimization environment, In: Kallrath, J. (ed.), Modeling Languages in Mathematical Optimization, Boston/Dordrecht/London (2004).
  5. Jones, D. R., C. D. Perttunen, and B. E. Stuckman, Lipschitzian optimization without the Lipschitz constant, Journal of Optimization Theory and Applications 79 (1993), No. 1, pp. 157-181.
  6. Jones, D. R., M. Schonlau, and W. J. Welch, Efficient global optimization of expensive black-box functions, Journal of Global Optimization 13 (1998).
  7. Jones, D. R., DIRECT, Encyclopedia of Optimization (2001).
  8. Moré, J. J. and S. M. Wild, Benchmarking derivative-free optimization algorithms, Preprint ANL/MCS-P1471-1207 (2007).
  9. Powell, M. J. D., Recent research at Cambridge on radial basis functions, New Developments in Approximation Theory (1999).
  10. den Hertog, D., E. Stinstra, P. Stehouwer, and A. Vestjens, Constrained maximin designs for computer experiments, Technometrics 45 (2003), No. 4, pp. 340-346.