Acquisition Function Stochastic Expected Improvement
Source: R/AcqFunctionStochasticEI.R
mlr_acqfunctions_stochastic_ei.Rd
Expected Improvement with epsilon decay.
\(\epsilon\) is updated after each update by the formula epsilon * exp(-rate * (t %% period)), where t is the number of times the acquisition function has been updated.
While this acquisition function usually would be used within an asynchronous optimizer, e.g., OptimizerAsyncMbo, it can in principle also be used in synchronous optimizers, e.g., OptimizerMbo.
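As a plain R sketch of this schedule (not the package's internal implementation; it assumes that when no period is given, the counter t enters the formula directly):

# hypothetical helper reproducing the decay schedule described above
decay_epsilon = function(epsilon_0, rate, t, period = NULL) {
  if (is.null(period)) {
    epsilon_0 * exp(-rate * t)              # no period: plain exponential decay
  } else {
    epsilon_0 * exp(-rate * (t %% period))  # periodic restart of the decay
  }
}
decay_epsilon(0.1, 0.05, t = 0:10)  # decayed epsilon for the first updates, using the defaults of this class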
Dictionary
This AcqFunction can be instantiated via the dictionary mlr_acqfunctions or with the associated sugar function acqf():
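For example, using the key "stochastic_ei" (the key also used in the Examples below):

mlr_acqfunctions$get("stochastic_ei")
acqf("stochastic_ei")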
Parameters
"epsilon"
(numeric(1)
)
\(\epsilon\) value used to determine the amount of exploration. Higher values result in the importance of improvements predicted by the posterior mean decreasing relative to the importance of potential improvements in regions of high predictive uncertainty. Defaults to0.1
."rate"
(numeric(1)
)
Defaults to0.05
."period"
(integer(1)
)
Period of the exponential decay. Defaults toNULL
, i.e., the decay has no period.
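These values can also be set at construction time. A minimal sketch, assuming the usual mlr3 convention that the sugar function forwards extra arguments to the constructor:

acq_function = acqf("stochastic_ei", epsilon = 0.2, rate = 0.1, period = 25L)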
Note
This acquisition function always also returns its current (acq_epsilon) and original (acq_epsilon_0) \(\epsilon\). These values are logged into the bbotk::ArchiveBatch of the bbotk::OptimInstanceBatch of the AcqOptimizer and therefore also into the bbotk::Archive of the actual bbotk::OptimInstance that is to be optimized.
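After an optimization run that used this acquisition function (e.g., via OptimizerMbo), these columns can therefore be inspected in the archive of the optimized instance. A sketch, assuming instance is such an already optimized bbotk::OptimInstance:

# assumption: the epsilon columns are logged alongside the acquisition function values
as.data.table(instance$archive)[, c("acq_ei", "acq_epsilon", "acq_epsilon_0")]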
References
Jones, Donald R., Schonlau, Matthias, Welch, William J. (1998). “Efficient Global Optimization of Expensive Black-Box Functions.” Journal of Global Optimization, 13(4), 455–492.
See also
Other Acquisition Function:
AcqFunction, mlr_acqfunctions, mlr_acqfunctions_aei, mlr_acqfunctions_cb, mlr_acqfunctions_ehvi, mlr_acqfunctions_ehvigh, mlr_acqfunctions_ei, mlr_acqfunctions_eips, mlr_acqfunctions_mean, mlr_acqfunctions_multi, mlr_acqfunctions_pi, mlr_acqfunctions_sd, mlr_acqfunctions_smsego, mlr_acqfunctions_stochastic_cb
Super classes
bbotk::Objective -> mlr3mbo::AcqFunction -> AcqFunctionStochasticEI
Public fields
y_best (numeric(1))
Best objective function value observed so far. In the case of maximization, this already includes the necessary change of sign.
Methods
Method new()
Creates a new instance of this R6 class.
Usage
AcqFunctionStochasticEI$new(
surrogate = NULL,
epsilon = 0.1,
rate = 0.05,
period = NULL
)
Arguments
surrogate (NULL | SurrogateLearner).
epsilon (numeric(1)).
rate (numeric(1)).
period (NULL | integer(1)).
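A direct construction mirroring the Usage block above, using the documented defaults (the surrogate can also be supplied later):

acq_function = AcqFunctionStochasticEI$new(
  surrogate = NULL,
  epsilon = 0.1,
  rate = 0.05,
  period = NULL
)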
Method update()
Update the acquisition function. Sets y_best to the best observed objective function value and decays epsilon.
Method reset()
Reset the acquisition function. Resets the private update counter .t used within the epsilon decay.
Examples
if (requireNamespace("mlr3learners") &
requireNamespace("DiceKriging") &
requireNamespace("rgenoud")) {
library(bbotk)
library(paradox)
library(mlr3learners)
library(data.table)
fun = function(xs) {
list(y = xs$x ^ 2)
}
domain = ps(x = p_dbl(lower = -10, upper = 10))
codomain = ps(y = p_dbl(tags = "minimize"))
objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)
instance = OptimInstanceBatchSingleCrit$new(
objective = objective,
terminator = trm("evals", n_evals = 5))
instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))
learner = default_gp()
surrogate = srlrn(learner, archive = instance$archive)
acq_function = acqf("stochastic_ei", surrogate = surrogate)
acq_function$surrogate$update()
acq_function$update()
acq_function$eval_dt(data.table(x = c(-1, 0, 1)))
}
#> acq_ei acq_epsilon acq_epsilon_0
#> <num> <num> <num>
#> 1: 4.374607 0.1 0.1
#> 2: 4.835292 0.1 0.1
#> 3: 5.262107 0.1 0.1
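Continuing the example, each further call to $update() advances the internal counter and decays acq_epsilon, while $reset() restarts the decay from acq_epsilon_0 (output omitted; the exact values depend on rate and the number of updates):

acq_function$update()
acq_function$eval_dt(data.table(x = c(-1, 0, 1)))  # acq_epsilon is now smaller than acq_epsilon_0
acq_function$reset()  # resets the internal counter, so the decay starts over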