Hi!
I am trying to treat each iteration of BO as a certain time-step. In other words, is it possible to force BO to evaluate the objective function at constrained time-steps? Treating the time-step as an additional input seems a sensible choice, but how should I proceed in GPyOpt after adding this input to my kernel? Is there any need to change the formulation of EI?
Thanks for the help!
Hi Ali,
Something you can do is to use the Expected improvement (EI) per unit of time. In GPyOpt you can do this by setting
cost_withGradients='evaluation_time'
when creating the optimization object. This uses the empirical evaluation times to fit a GP to the log of the cost. The posterior mean, mu_cost(x), is then used to compute the EI per unit of time as:
EI_unittime(x) = EI(x)/mu_cost(x)
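As a rough, plain-Python sketch of this weighting (a hypothetical illustration, not GPyOpt's internal code; the function names here are made up):

```python
import math

def normal_pdf(z):
    # Standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, y_best):
    # EI for minimisation: E[max(y_best - f(x), 0)] under f(x) ~ N(mu, sigma^2)
    z = (y_best - mu) / sigma
    return (y_best - mu) * normal_cdf(z) + sigma * normal_pdf(z)

def ei_per_unit_time(mu, sigma, y_best, mu_cost):
    # Divide EI by the GP posterior mean of the (positive) evaluation cost
    return expected_improvement(mu, sigma, y_best) / mu_cost
```

Doubling the predicted cost halves the acquisition value, so between two points with similar plain EI, the cheaper one to evaluate is preferred.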
We still haven't written a notebook about this because it is a recent feature that we are still testing but you can give it a try. Just let us know if it worked for you.
Hope this helps. Thanks for the interest!
Javier
Thanks Javier!
That sounds great! Since it is a new feature of GPyOpt, could you explain its functionality in a bit more detail? Specifically, what do you mean by:
* if cost_withGradients = 'evaluation time' the evaluation time of the function is used to model a GP whose mean is used as cost.*
I have a dynamic black-box objective function which has to be evaluated every 30 minutes. Is this recent feature appropriate in this case? Do I still need to explicitly include the time in my kernel as an additional input?
By the way, what is the best way to report bugs or typos?
Thanks for the help!
Hi Ali,
cost_withGradients='evaluation_time' is an option that you can pass when creating the original BO object with
GPyOpt.methods.BayesianOptimization(...)
I am working on the documentation now, so hopefully these things will be clearer soon. I also want to write a small notebook to illustrate how to use cost functions. I will keep you posted on this.
Regarding your problem: if I understand correctly, your objective changes over time, so the outcome of an evaluation _now_ at x changes if you evaluate the same location a few iterations later. To decide where to evaluate the function, you somehow need to weight your samples so that recent evaluations are more informative about the current shape of your function than old ones. As you say, you can take that dynamic component into account by using the time in the kernel. This paper may help you (if you don't know it yet!):
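To illustrate the time-as-input idea (a hypothetical sketch under assumed lengthscales, not code from GPyOpt or the paper), one can use a product kernel over location and evaluation time, so that old observations gradually decorrelate:

```python
import math

def se(d2, lengthscale):
    # Squared-exponential correlation for a squared distance d2
    return math.exp(-0.5 * d2 / lengthscale ** 2)

def spatiotemporal_kernel(x_a, t_a, x_b, t_b, ls_x=1.0, ls_t=60.0):
    # Product kernel over (location, time): points far apart in time
    # become decorrelated, so old evaluations are gradually "forgotten".
    d2_x = sum((a - b) ** 2 for a, b in zip(x_a, x_b))
    d2_t = (t_a - t_b) ** 2
    return se(d2_x, ls_x) * se(d2_t, ls_t)

# Same location, evaluated now vs 120 minutes ago (time lengthscale 60 min):
k_now = spatiotemporal_kernel([0.5], 0.0, [0.5], 0.0)
k_old = spatiotemporal_kernel([0.5], 0.0, [0.5], 120.0)
```

With a 60-minute time lengthscale, two evaluations of the same point taken 120 minutes apart retain only exp(-2) ≈ 0.14 of their correlation, which is exactly the down-weighting of stale samples described above.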