Refactor optimizer interface
Our optimizers have the following signature:
def optimize(self, x0: np.ndarray, f: Callable = None, df: Callable = None, f_df: Callable = None)
where:
- f is the objective function
- df is the gradient of the function
- f_df is a callable returning a tuple of the previous two (value and gradient)
This gives rise to ambiguity: what should happen when all three are given? What about when only df and f_df are given? Which one takes priority?
This is a remnant of GPyOpt that was copied over while we were removing GPyOpt as a dependency. We should refactor it out and make the interface less error-prone. The most logical idea seems to be to remove f_df, but this is open to suggestions.
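As a rough sketch of what the f_df-free interface could look like (only an illustration of the proposal, not a settled design; names are placeholders):

```python
from typing import Callable, Optional

import numpy as np


class Optimizer:
    def optimize(self, x0: np.ndarray, f: Callable,
                 df: Optional[Callable] = None):
        """Minimize f starting from x0.

        If df is given, a gradient-based method can be used; otherwise
        fall back to a gradient-free one. Each piece of information has
        exactly one way in, so no priority rules are needed.
        """
        raise NotImplementedError
```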
Is there a reason that the method gets the callables instead of the optimizer object?
Regarding Andrei's comment, we could also have a boolean flag that is True if f also returns a gradient and False if not. That way we only need one callable, either f or f_df.
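For comparison, the boolean variant could look roughly like this (again just a sketch, with made-up names):

```python
from typing import Callable

import numpy as np


class Optimizer:
    def optimize(self, x0: np.ndarray, f: Callable,
                 f_returns_gradient: bool = False):
        """Minimize f starting from x0.

        If f_returns_gradient is False, f(x) returns only the objective
        value; if True, f(x) returns a (value, gradient) tuple. Either
        way there is a single callable, so the ambiguity disappears.
        """
        raise NotImplementedError
```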
@mmahsereci do you mean the acquisition rather than the optimizer object? This is a leftover from GPyOpt, where things like local penalization were implemented in such a way that you were no longer optimizing an acquisition object but just an arbitrary Python function. I'm not sure if we will need to do that in emukit.
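For reference, an object-based variant might look like the sketch below; the Acquisition interface shown here is hypothetical and only meant to illustrate the shape of the idea:

```python
from typing import Tuple

import numpy as np


class Acquisition:
    """Hypothetical acquisition interface (illustration only)."""

    has_gradients: bool = False

    def evaluate(self, x: np.ndarray) -> np.ndarray:
        raise NotImplementedError

    def evaluate_with_gradients(self, x: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
        raise NotImplementedError


class Optimizer:
    def optimize(self, acquisition: Acquisition, x0: np.ndarray):
        """The optimizer can check acquisition.has_gradients itself,
        so callers never juggle f/df/f_df combinations."""
        raise NotImplementedError
```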
I like the idea of passing in either f or f_df, but we will still have to do some magic in the optimizers, as the interfaces to the scipy optimizers are inconsistent: the current L-BFGS optimizer wants f_df, whereas the trust-region optimizer wants f and df separately.
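That adapter "magic" is cheap to write. For instance, a combined f_df callable can be split into separate f and df wrappers with a small cache, so an optimizer that asks for the value and gradient separately at the same point only triggers one evaluation (a sketch, assuming f_df returns a (value, gradient) tuple):

```python
from typing import Callable, Tuple

import numpy as np


def split_f_df(f_df: Callable[[np.ndarray], Tuple[np.ndarray, np.ndarray]]):
    """Turn a combined f_df callable into separate f and df callables.

    The last evaluation is cached, so requesting f(x) and df(x) at the
    same point costs a single call to f_df.
    """
    cache = {"x": None, "result": None}

    def evaluate(x: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
        if cache["x"] is None or not np.array_equal(cache["x"], x):
            cache["x"] = np.array(x, copy=True)
            cache["result"] = f_df(x)
        return cache["result"]

    return (lambda x: evaluate(x)[0]), (lambda x: evaluate(x)[1])
```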
@marpulli Magic inside is perfectly fine if that's what scipy requires us to do; that is just an implementation detail. An inconsistent, easy-to-misuse interface isn't.