OptimizerMbo
OptimizerMbo is an Optimizer class that implements Model Based Optimization (MBO).
The implementation follows a modular layout relying on a loop_function that determines the MBO flavor to be used, e.g.,
bayesopt_ego for sequential single-objective Bayesian Optimization, a Surrogate, an AcqFunction, e.g., mlr_acqfunctions_ei for
Expected Improvement, and an AcqOptimizer.
MBO algorithms are iterative optimization algorithms that make use of a continuously updated surrogate model built for the objective function. The next candidate for evaluation is chosen by optimizing a comparably cheap-to-evaluate acquisition function defined on the surrogate prediction.
Detailed descriptions of different MBO flavors are provided in the documentation of the respective loop_function.
Termination is handled via a bbotk::Terminator that is part of the bbotk::OptimInstanceBatch to be optimized.
Note that in general the Surrogate is updated one final time on all available data after the optimization process has terminated.
However, this is not always possible or meaningful, e.g., when using bayesopt_parego() for multi-objective optimization,
which uses a surrogate that relies on a scalarization of the objectives.
It is therefore recommended to manually inspect the Surrogate after optimization if it is to be used, e.g., for visualization purposes,
to make sure that it has been properly updated on all available data.
If this final update of the Surrogate could not be performed successfully, a warning will be logged.
By specifying a ResultAssigner, one can alter how the final result is determined after optimization, e.g., based only on the evaluations logged in the archive (ResultAssignerArchive) or based on the Surrogate (ResultAssignerSurrogate).
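For example, a result assigner is simply passed at construction time. The following is a minimal sketch, assuming that bbotk and mlr3mbo are attached and that ResultAssignerSurrogate can be constructed without arguments, in which case it falls back to the optimizer's surrogate:

library(bbotk)
library(mlr3mbo)

# determine the final result from the surrogate prediction instead of the raw archive evaluations
optimizer = opt("mbo",
  loop_function = bayesopt_ego,
  result_assigner = ResultAssignerSurrogate$new())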
Archive
The bbotk::ArchiveBatch holds the following additional columns that are specific to MBO algorithms:
acq_function$id
(numeric(1))
The value of the acquisition function.
.already_evaluated
(logical(1))
Whether this point was already evaluated. Depends on the skip_already_evaluated parameter of the AcqOptimizer.
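For illustration, these columns can be looked up in the archive after optimization. A minimal sketch, assuming an already optimized instance and Expected Improvement as the acquisition function, whose id is assumed to be "acq_ei" here:

# acquisition function value and already-evaluated flag for each candidate
instance$archive$data[, c("acq_ei", ".already_evaluated"), with = FALSE]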
Super classes
bbotk::Optimizer
-> bbotk::OptimizerBatch
-> OptimizerMbo
Active bindings
loop_function
(loop_function | NULL)
Loop function determining the MBO flavor.
surrogate
(Surrogate | NULL)
The surrogate.
acq_function
(AcqFunction | NULL)
The acquisition function.
acq_optimizer
(AcqOptimizer | NULL)
The acquisition function optimizer.
args
(named list())
Further arguments passed to the loop_function. For example, random_interleave_iter.
result_assigner
(ResultAssigner | NULL)
The result assigner.
param_classes
(character())
Supported parameter classes that the optimizer can optimize. Determined based on the surrogate and the acq_optimizer. This corresponds to the values given by a paradox::ParamSet's $class field.
properties
(character())
Set of properties of the optimizer. Must be a subset of bbotk_reflections$optimizer_properties. MBO in principle is very flexible and by default we assume that the optimizer has all properties. When fully initialized, properties are determined based on the loop, e.g., the loop_function, and the surrogate.
packages
(character())
Set of required packages. A warning is signaled prior to optimization if at least one of the packages is not installed, but loaded (not attached) later on-demand via requireNamespace(). Required packages are determined based on the acq_function, surrogate and the acq_optimizer.
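For illustration, these bindings can be inspected on a constructed optimizer; the concrete values depend on which components have been set. A minimal sketch:

library(bbotk)
library(mlr3mbo)

optimizer = opt("mbo")
optimizer$properties     # all properties as long as no components restrict them
optimizer$packages       # determined by the acq_function, surrogate and acq_optimizer
optimizer$param_classes  # determined by the surrogate and the acq_optimizer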
Methods
Inherited methods
Method new()
Creates a new instance of this R6 class.
If surrogate
is NULL
and the acq_function$surrogate
field is populated, this Surrogate is used.
Otherwise, default_surrogate(instance)
is used.
If acq_function
is NULL
and the acq_optimizer$acq_function
field is populated, this AcqFunction is used (and therefore its $surrogate
if populated; see above).
Otherwise default_acqfunction(instance)
is used.
If acq_optimizer
is NULL
, default_acqoptimizer(instance)
is used.
Even if already initialized, the surrogate$archive
field will always be overwritten by the bbotk::ArchiveBatch of the current bbotk::OptimInstanceBatch to be optimized.
For more information on default values for loop_function
, surrogate
, acq_function
, acq_optimizer
and result_assigner
, see ?mbo_defaults
.
Usage
OptimizerMbo$new(
loop_function = NULL,
surrogate = NULL,
acq_function = NULL,
acq_optimizer = NULL,
args = NULL,
result_assigner = NULL
)
Arguments
loop_function
(loop_function | NULL)
Loop function determining the MBO flavor.
surrogate
(Surrogate | NULL)
The surrogate.
acq_function
(AcqFunction | NULL)
The acquisition function.
acq_optimizer
(AcqOptimizer | NULL)
The acquisition function optimizer.
args
(named list())
Further arguments passed to the loop_function. For example, random_interleave_iter.
result_assigner
(ResultAssigner | NULL)
The result assigner.
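For illustration of the default mechanism described above, the following minimal sketch constructs an optimizer from an acquisition function only; it assumes an instance as in the Examples section below and that the required surrogate packages are installed. The acquisition function carries its own Surrogate, so no separate surrogate is passed; the missing acq_optimizer and loop_function are filled in from the defaults described in ?mbo_defaults when optimizing.

library(bbotk)
library(mlr3mbo)

# the acquisition function carries its own surrogate
acq_function = acqf("ei")
acq_function$surrogate = default_surrogate(instance)
optimizer = OptimizerMbo$new(acq_function = acq_function)

# remaining components are derived from the instance and the defaults
optimizer$optimize(instance)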
Method reset()
Reset the optimizer.
Sets the following fields to NULL:
loop_function, surrogate, acq_function, acq_optimizer, args, result_assigner
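A minimal usage sketch, assuming a previously configured optimizer:

optimizer$reset()
is.null(optimizer$loop_function)  # TRUE, as are the other fields listed above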
Method optimize()
Performs the optimization and writes the optimization result into the bbotk::OptimInstanceBatch. The optimization result is returned, but the complete optimization path is stored in the bbotk::ArchiveBatch of the bbotk::OptimInstanceBatch.
Examples
# \donttest{
if (requireNamespace("mlr3learners") &&
    requireNamespace("DiceKriging") &&
    requireNamespace("rgenoud")) {
library(bbotk)
library(paradox)
library(mlr3learners)
# single-objective EGO
fun = function(xs) {
list(y = xs$x ^ 2)
}
domain = ps(x = p_dbl(lower = -10, upper = 10))
codomain = ps(y = p_dbl(tags = "minimize"))
objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)
instance = OptimInstanceBatchSingleCrit$new(
objective = objective,
terminator = trm("evals", n_evals = 5))
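# surrogate, Expected Improvement acquisition function and acquisition function optimizer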
surrogate = default_surrogate(instance)
acq_function = acqf("ei")
acq_optimizer = acqo(
optimizer = opt("random_search", batch_size = 100),
terminator = trm("evals", n_evals = 100))
optimizer = opt("mbo",
loop_function = bayesopt_ego,
surrogate = surrogate,
acq_function = acq_function,
acq_optimizer = acq_optimizer)
optimizer$optimize(instance)
# multi-objective ParEGO
fun = function(xs) {
list(y1 = xs$x^2, y2 = (xs$x - 2) ^ 2)
}
domain = ps(x = p_dbl(lower = -10, upper = 10))
codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize"))
objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)
instance = OptimInstanceBatchMultiCrit$new(
objective = objective,
terminator = trm("evals", n_evals = 5))
optimizer = opt("mbo",
loop_function = bayesopt_parego,
surrogate = surrogate,
acq_function = acq_function,
acq_optimizer = acq_optimizer)
optimizer$optimize(instance)
}
#> WARN [17:22:20.207] [bbotk] Task 'surrogate_task' has missing values in column(s) 'y_scal', but learner 'regr.km' does not support this
#> WARN [17:22:20.208] [bbotk] Could not update the surrogate a final time after the optimization process has terminated.
#> x x_domain y1 y2
#> <num> <list> <num> <num>
#> 1: 1.5279770 <list[1]> 2.3347137 0.2228057
#> 2: -0.1195738 <list[1]> 0.0142979 4.4925933
# }