Single-Objective Bayesian Optimization via Multipoint Constant Liar
Source: R/bayesopt_mpcl.R
mlr_loop_functions_mpcl.Rd
Loop function for single-objective Bayesian Optimization via the multipoint constant liar heuristic. Normally used inside an OptimizerMbo.
In each iteration after the initial design, the surrogate and acquisition function are updated and the
acquisition function is optimized to find a candidate point. Instead of evaluating this candidate, its
objective function value is imputed by applying the liar
function to all previously obtained objective function values.
This is repeated q - 1
times to obtain a total of q
candidates, which are then evaluated in a single batch.
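To make the lie concrete, the following minimal sketch (plain R, not mlr3mbo internals) shows how each pending candidate is assigned an imputed outcome before the next proposal. The helper propose_candidate() is a purely illustrative stand-in for updating the surrogate and optimizing the acquisition function.
# Illustrative sketch of the constant liar imputation (not mlr3mbo internals).
propose_candidate = function(y) runif(1, min = -10, max = 10)  # illustrative stand-in

liar = mean                      # default liar of bayesopt_mpcl()
q = 3L                           # batch size
y_observed = c(4.2, 1.7, 9.3)    # objective values evaluated so far

candidates = numeric(q)
for (i in seq_len(q)) {
  candidates[i] = propose_candidate(y_observed)
  if (i < q) {
    # do not evaluate the candidate; impute ("lie about") its outcome so that
    # the surrogate treats it as already observed in the next proposal step
    y_observed = c(y_observed, liar(y_observed))
  }
}
candidates  # all q candidates are then evaluated on the real objective in one batch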
Usage
bayesopt_mpcl(
  instance,
  surrogate,
  acq_function,
  acq_optimizer,
  init_design_size = NULL,
  q = 2L,
  liar = mean,
  random_interleave_iter = 0L
)
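Besides being passed to an OptimizerMbo (see the Examples below), the loop function can also be called directly on an instance. A minimal sketch, assuming instance, surrogate, acq_function and acq_optimizer have been constructed as in the Examples section:
# direct call of the loop function; batches of q = 3 candidates are proposed
# via the constant liar and evaluated until the instance's terminator fires
bayesopt_mpcl(
  instance = instance,
  surrogate = surrogate,
  acq_function = acq_function,
  acq_optimizer = acq_optimizer,
  q = 3L,
  liar = mean
)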
Arguments
- instance (bbotk::OptimInstanceBatchSingleCrit)
  The bbotk::OptimInstanceBatchSingleCrit to be optimized.
- surrogate (Surrogate)
  The surrogate model to be used, typically a SurrogateLearner.
- acq_function (AcqFunction)
  The AcqFunction to be used as acquisition function.
- acq_optimizer (AcqOptimizer)
  The AcqOptimizer to be used as acquisition function optimizer.
- init_design_size (NULL | integer(1))
  Size of the initial design. If NULL and the bbotk::ArchiveBatch contains no evaluations, 4 * d is used, with d being the dimensionality of the search space. Points are generated via a Sobol sequence.
- q (integer(1))
  Batch size, must be > 1. Default is 2.
- liar (function)
  Any function accepting a numeric vector as input and returning a single numeric output. Default is mean. Other sensible functions include min (or max, depending on the optimization direction). See the sketch after this argument list for how to pass a different liar.
- random_interleave_iter (integer(1))
  Every random_interleave_iter iteration (starting after the initial design), a point is sampled uniformly at random and evaluated (instead of a model based proposal). For example, if random_interleave_iter = 2, random interleaving is performed in the second, fourth, sixth, ... iteration. Default is 0, i.e., no random interleaving is performed at all.
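The batch size, liar, and random interleaving settings are forwarded through the args list of OptimizerMbo, as in the following sketch. The chosen values are illustrative only; acq_function and acq_optimizer are built as in the Examples below, and a surrogate can be added the same way.
library(bbotk)
library(mlr3mbo)

acq_function = acqf("ei")
acq_optimizer = acqo(
  optimizer = opt("random_search", batch_size = 100),
  terminator = trm("evals", n_evals = 100))

optimizer = opt("mbo",
  loop_function = bayesopt_mpcl,
  acq_function = acq_function,
  acq_optimizer = acq_optimizer,
  args = list(
    q = 4L,                       # propose four candidates per batch
    liar = min,                   # lie with the smallest observed value
    random_interleave_iter = 3L   # every third iteration proposes a random point
  ))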
Note
- The acq_function$surrogate, even if already populated, will always be overwritten by the surrogate.
- The acq_optimizer$acq_function, even if already populated, will always be overwritten by the acq_function.
- The surrogate$archive, even if already populated, will always be overwritten by the bbotk::ArchiveBatch of the bbotk::OptimInstanceBatchSingleCrit.
- To make use of parallel evaluations in the case of q > 1, the objective function of the bbotk::OptimInstanceBatchSingleCrit must be implemented accordingly; a sketch is given below.
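To benefit from parallel evaluation with q > 1, the objective must be able to evaluate a whole batch of points at once. A minimal sketch, assuming bbotk::ObjectiveRFunDt (which receives the entire batch as one data.table) and, purely for illustration, future.apply as the parallelization backend:
library(bbotk)
library(paradox)

fun_batch = function(xdt) {
  # xdt contains one row per candidate of the batch; evaluate the rows in parallel
  # (calling future::plan("multisession") beforehand would enable parallel workers)
  y = future.apply::future_sapply(xdt$x, function(x) x^2)
  data.table::data.table(y = y)
}

domain = ps(x = p_dbl(lower = -10, upper = 10))
codomain = ps(y = p_dbl(tags = "minimize"))
objective = ObjectiveRFunDt$new(fun = fun_batch, domain = domain, codomain = codomain)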
References
Ginsbourger, David, Le Riche, Rodolphe, Carraro, Laurent (2008). “A Multi-Points Criterion for Deterministic Parallel Global Optimization Based on Gaussian Processes.”
Wang, Jialei, Clark, Scott C., Liu, Eric, Frazier, Peter I. (2020). “Parallel Bayesian Global Optimization of Expensive Functions.” Operations Research, 68(6), 1850–1865.
Examples
# \donttest{
if (requireNamespace("mlr3learners") &
    requireNamespace("DiceKriging") &
    requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)

  fun = function(xs) {
    list(y = xs$x ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 7))

  surrogate = default_surrogate(instance)
  acq_function = acqf("ei")
  acq_optimizer = acqo(
    optimizer = opt("random_search", batch_size = 100),
    terminator = trm("evals", n_evals = 100))

  optimizer = opt("mbo",
    loop_function = bayesopt_mpcl,
    surrogate = surrogate,
    acq_function = acq_function,
    acq_optimizer = acq_optimizer,
    args = list(q = 3))

  optimizer$optimize(instance)
}
#> x x_domain y
#> <num> <list> <num>
#> 1: 2.053603 <list[1]> 4.217285
# }