
Optimizer for AcqFunctions that performs the acquisition function optimization. Wraps a bbotk::OptimizerBatch and a bbotk::Terminator.

Parameters

n_candidates

integer(1)
Number of candidate points to propose. This does not affect how the acquisition function itself is computed (e.g., setting n_candidates > 1 will not result in computing the q- or multi-Expected Improvement); instead, the top n_candidates are selected from the bbotk::ArchiveBatch of the acquisition function bbotk::OptimInstanceBatch. Setting n_candidates > 1 is usually not sensible, but it is still supported for experimental reasons. If the acquisition function bbotk::OptimInstanceBatch is multi-criteria (due to using an AcqFunctionMulti), the best candidates are selected via non-dominated sorting. Default is 1.

logging_level

character(1)
Logging level during the acquisition function optimization. Can be "fatal", "error", "warn", "info", "debug" or "trace". Default is "warn", i.e., only warnings are logged.

warmstart

logical(1)
Should the acquisition function optimization be warm-started by evaluating the best point(s) present in the bbotk::Archive of the actual bbotk::OptimInstance (which is contained in the archive of the AcqFunction)? This is sensible when using a population-based acquisition function optimizer, e.g., local search or mutation. Default is FALSE. If the bbotk::OptimInstance is multi-criteria, the best point(s) are selected via non-dominated sorting.

warmstart_size

integer(1) | "all"
Number of best points selected from the bbotk::Archive of the actual bbotk::OptimInstance that are to be used for warm starting. Can either be an integer or "all" to use all available points. Only relevant if warmstart = TRUE. Default is 1.

skip_already_evaluated

logical(1)
It can happen that the candidate(s) resulting from the acquisition function optimization have already been evaluated on the actual bbotk::OptimInstance. Should such candidate proposals be ignored and only candidates that have not yet been evaluated be considered? Default is TRUE.

catch_errors

logical(1)
Should errors during the acquisition function optimization be caught and propagated to the loop_function, which can then handle the failed acquisition function optimization appropriately, e.g., by proposing a randomly sampled point for evaluation? Setting this to FALSE can be helpful for debugging. Default is TRUE.
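
These parameters are hyperparameters stored in the optimizer's param_set (see Active bindings below) and can therefore be adjusted after construction. A minimal sketch, assuming an AcqOptimizer acq_optimizer has already been created, e.g., via acqo() as in the Examples:

# warm-start the acquisition function optimization from the 5 best archive points
acq_optimizer$param_set$values$warmstart = TRUE
acq_optimizer$param_set$values$warmstart_size = 5L
# do not catch errors while debugging
acq_optimizer$param_set$values$catch_errors = FALSE
# log more verbosely during the acquisition function optimization
acq_optimizer$param_set$values$logging_level = "info"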

Public fields

optimizer

(bbotk::OptimizerBatch).

terminator

(bbotk::Terminator).

acq_function

(AcqFunction).

callbacks

(NULL | list of mlr3misc::Callback).

Active bindings

print_id

(character)
Id used when printing.

param_set

(paradox::ParamSet)
Set of hyperparameters.

Methods


Method new()

Creates a new instance of this R6 class.

Usage

AcqOptimizer$new(optimizer, terminator, acq_function = NULL, callbacks = NULL)

Arguments

optimizer

(bbotk::OptimizerBatch).

terminator

(bbotk::Terminator).

acq_function

(NULL | AcqFunction).

callbacks

(NULL | list of mlr3misc::Callback).
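
A minimal construction sketch (equivalent to the acqo() shorthand used in the Examples), assuming bbotk is loaded and an AcqFunction acq_function is available:

library(bbotk)

# random search with a budget of 1000 evaluations as acquisition function optimizer
acq_optimizer = AcqOptimizer$new(
  optimizer = opt("random_search", batch_size = 1000),
  terminator = trm("evals", n_evals = 1000),
  acq_function = acq_function)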


Method format()

Helper for print outputs.

Usage

AcqOptimizer$format()

Returns

(character(1)).


Method print()

Print method.

Usage

AcqOptimizer$print()

Returns

(character()).


Method optimize()

Optimize the acquisition function.

Usage

AcqOptimizer$optimize()

Returns

data.table::data.table() with 1 row per candidate.
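
A minimal usage sketch, assuming an acq_optimizer whose AcqFunction has been updated as in the Examples:

# one row per candidate, with the acquisition function value(s)
# and an .already_evaluated indicator
candidates = acq_optimizer$optimize()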


Method reset()

Reset the acquisition function optimizer.

Currently not used.

Usage

AcqOptimizer$reset()


Method clone()

The objects of this class are cloneable with this method.

Usage

AcqOptimizer$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

if (requireNamespace("mlr3learners") &&
    requireNamespace("DiceKriging") &&
    requireNamespace("rgenoud")) {
  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  # objective: minimize y = x^2 over x in [-10, 10]
  fun = function(xs) {
    list(y = xs$x ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  # evaluate an initial design of four points
  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  # Gaussian process surrogate trained on the archive of the instance
  learner = default_gp()

  surrogate = srlrn(learner, archive = instance$archive)

  # Expected Improvement acquisition function
  acq_function = acqf("ei", surrogate = surrogate)

  acq_function$surrogate$update()
  acq_function$update()

  # optimize the acquisition function via random search
  acq_optimizer = acqo(
    optimizer = opt("random_search", batch_size = 1000),
    terminator = trm("evals", n_evals = 1000),
    acq_function = acq_function)

  acq_optimizer$optimize()
}
#> Loading required namespace: DiceKriging
#> Loading required namespace: rgenoud
#>           x   acq_ei  x_domain .already_evaluated
#>       <num>    <num>    <list>             <lgcl>
#> 1: 1.187665 5.305171 <list[1]>              FALSE
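
Continuing the example, the proposed candidate could then be evaluated on the actual bbotk::OptimInstance; a sketch, where the column selection matches this example's single parameter x:

# evaluate the proposed candidate on the original instance
candidates = acq_optimizer$optimize()
instance$eval_batch(candidates[, "x"])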