Optimizer for AcqFunctions which performs the acquisition function optimization. Wraps a bbotk::Optimizer and a bbotk::Terminator.

## Parameters

`n_candidates`

`integer(1)`

Number of candidate points to propose. Note that this does not affect how the acquisition function itself is calculated (e.g., setting `n_candidates > 1` will not result in computing the q- or multi-Expected Improvement); rather, the top `n_candidates` are selected from the bbotk::Archive of the acquisition function bbotk::OptimInstance. Note that setting `n_candidates > 1` is usually not sensible, but it is still supported for experimental reasons. Default is `1`.

`logging_level`

`character(1)`

Logging level during the acquisition function optimization. Can be `"fatal"`, `"error"`, `"warn"`, `"info"`, `"debug"` or `"trace"`. Default is `"warn"`, i.e., only warnings are logged.

`warmstart`

`logical(1)`

Should the acquisition function optimization be warm-started by evaluating the best point(s) present in the bbotk::Archive of the actual bbotk::OptimInstance? This is sensible when using a population-based acquisition function optimizer, e.g., local search or mutation. Default is `FALSE`.

`warmstart_size`

`integer(1) | "all"`

Number of best points selected from the bbotk::Archive that are used for warm starting. Can also be `"all"` to use all available points. Only relevant if `warmstart = TRUE`. Default is `1`.

`skip_already_evaluated`

`logical(1)`

It can happen that the candidate resulting from the acquisition function optimization was already evaluated in a previous iteration. Should this candidate proposal be ignored and the next best point be selected as a candidate instead? Default is `TRUE`.

`catch_errors`

`logical(1)`

Should errors during the acquisition function optimization be caught and propagated to the `loop_function`, which can then handle the failed acquisition function optimization appropriately by, e.g., proposing a randomly sampled point for evaluation? Default is `TRUE`.
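These settings are plain hyperparameters and can be adjusted after construction via the `param_set` active binding. A minimal sketch, assuming mlr3mbo and bbotk are installed and using the `acqo()`, `opt()` and `trm()` sugar constructors shown in the example below (the choice of `"local_search"` as the inner optimizer is illustrative):

```
library(mlr3mbo)
library(bbotk)

# construct an acquisition function optimizer: local search,
# stopped after 100 evaluations of the acquisition function
acq_optimizer = acqo(
  optimizer = opt("local_search"),
  terminator = trm("evals", n_evals = 100)
)

# warm-start the local search from the 5 best points already in the archive
# and only log errors during the inner optimization
acq_optimizer$param_set$values$warmstart = TRUE
acq_optimizer$param_set$values$warmstart_size = 5
acq_optimizer$param_set$values$logging_level = "error"
```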

## Public fields

`optimizer`

(bbotk::Optimizer).

`terminator`

(bbotk::Terminator).

`acq_function`

(AcqFunction).

## Active bindings

`print_id`

(`character`)
Id used when printing.

`param_set`

(paradox::ParamSet)
Set of hyperparameters.

## Methods

### Method `new()`

Creates a new instance of this R6 class.

#### Usage

`AcqOptimizer$new(optimizer, terminator, acq_function = NULL)`

#### Arguments

`optimizer`

(bbotk::Optimizer).

`terminator`

(bbotk::Terminator).

`acq_function`

(`NULL` | AcqFunction).

### Method `optimize()`

Optimize the acquisition function.

#### Returns

`data.table::data.table()`

with 1 row per optimum and x as columns.

## Examples

```
if (requireNamespace("mlr3learners") &&
    requireNamespace("DiceKriging") &&
    requireNamespace("rgenoud")) {
  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  # objective function to be minimized
  fun = function(xs) {
    list(y = xs$x ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  # evaluate an initial design
  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  # Gaussian process surrogate and Expected Improvement acquisition function
  learner = default_gp()
  surrogate = srlrn(learner, archive = instance$archive)
  acq_function = acqf("ei", surrogate = surrogate)
  acq_function$surrogate$update()
  acq_function$update()

  # optimize the acquisition function via random search
  acq_optimizer = acqo(
    optimizer = opt("random_search", batch_size = 1000),
    terminator = trm("evals", n_evals = 1000),
    acq_function = acq_function)
  acq_optimizer$optimize()
}
#> Loading required namespace: DiceKriging
#> Loading required namespace: rgenoud
#> Loading required package: paradox
#> Loading required package: mlr3
#> x x_domain acq_ei .already_evaluated
#> <num> <list> <num> <lgcl>
#> 1: 1.187665 <list[1]> 5.305171 FALSE
```