Performance & runtime improvements to info-theoretic acquisition functions (1/N)
A series of changes aimed at improving the performance of PES & JES, as well as their multi-objective counterparts.
Motivation
As pointed out by @SebastianAment in this paper, the BoTorch variant of JES, and to an extent PES, is brutally slow and suspiciously ill-performing. To bring them up to their potential, I've added a series of performance improvements:
1. Improvements to get_optimal_samples and optimize_posterior_samples: since sampling optima is integral to the efficiency of these methods, I've added suggested starting points (similar to sample_around_best) to optimize_posterior_samples. A rough sketch follows the plots below.
[Plot: marginal runtime improvement in acquisition optimization; sampling time practically unchanged]
[Plot: substantial performance improvement]
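To illustrate the idea, here is a minimal sketch of candidate seeding for optimizing posterior sample paths, mixing uniform random candidates with perturbations of the best observed points in the spirit of sample_around_best. All names, defaults, and the function itself are hypothetical, not the actual BoTorch signature:

```python
import torch

def seed_candidates(bounds, X_observed, Y_observed, raw_samples=1024, n_best=4, sigma=1e-2):
    """Illustrative candidate seeding for optimizing posterior sample paths.

    Mixes uniform random candidates over the search space with Gaussian
    perturbations around the best observed points (the sample_around_best
    heuristic). Hypothetical helper, not the BoTorch API.
    """
    d = bounds.shape[-1]
    # Uniform candidates over the box defined by bounds (shape 2 x d).
    uniform = bounds[0] + (bounds[1] - bounds[0]) * torch.rand(raw_samples, d)
    # Gaussian perturbations around the n_best incumbents.
    best_idx = Y_observed.squeeze(-1).topk(n_best).indices
    near_best = X_observed[best_idx].repeat(raw_samples // n_best, 1)
    near_best = near_best + sigma * torch.randn_like(near_best)
    near_best = near_best.clamp(bounds[0], bounds[1])
    return torch.cat([uniform, near_best], dim=0)
```

Since posterior sample paths tend to have their optima near previously observed good points, evaluating the paths on this enriched candidate set finds better maximizers with fewer evaluations.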
2. Added an initializer for acquisition function optimization: similar to KG, ES methods have sensible starting points for acquisition function optimization in the form of the sampled optima. This drastically reduces the time of acquisition function optimization, which could occasionally take 30+ seconds when num_restarts was large (>4). See the sketch after item 2b.
Benchmarking incoming.
2b. Multi-objective support for the initializer: by renaming arguments of the multi-objective variants, we get consistency and support for the MO variants as well.
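For context, here is a rough sketch of how the sampled optima can seed acquisition optimization via optimize_acqf's batch_initial_conditions. The toy setup is hypothetical and exact signatures may differ across BoTorch versions:

```python
import torch
from botorch.acquisition.joint_entropy_search import qJointEntropySearch
from botorch.acquisition.utils import get_optimal_samples
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy 2D problem (hypothetical setup, not the benchmark from this PR).
train_X = torch.rand(16, 2, dtype=torch.double)
train_Y = -(train_X - 0.5).pow(2).sum(-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
num_restarts = 8

# The sampled optima serve double duty: they parameterize JES and make
# sensible starting points for optimizing the acquisition function.
optimal_inputs, optimal_outputs = get_optimal_samples(
    model=model, bounds=bounds, num_optima=num_restarts
)
jes = qJointEntropySearch(
    model=model, optimal_inputs=optimal_inputs, optimal_outputs=optimal_outputs
)
candidate, acq_value = optimize_acqf(
    acq_function=jes,
    bounds=bounds,
    q=1,
    num_restarts=num_restarts,
    # Seed the restarts with the sampled optima (num_restarts x q x d)
    # instead of relying solely on random raw samples.
    batch_initial_conditions=optimal_inputs.unsqueeze(-2),
)
```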
3. Enabled gradient-based optimization for PES: the current implementation contains a while-loop that forces the gradients to be computed recursively. This commonly causes NaN gradients, which is why the tutorial recommends "with_grad": False. A single detach() alleviates this issue, enabling gradient-based optimization.
NOTE: this has NOT been ablated, since the non-gradient optimization is extremely computationally demanding.
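A minimal generic-PyTorch sketch of the failure mode and the fix (illustrative, not the PES source itself): a fixed-point while-loop that feeds its own output back in keeps extending the autograd graph with every iteration, while detaching the carried state keeps a single shallow differentiable step:

```python
import torch

def truncated_fixed_point(x, f, max_iter=50, tol=1e-8):
    """Iterate z <- f(x, z) to a fixed point, detaching the carried state.

    Without the detach, the autograd graph grows with every loop iteration
    and backprop through the deep recursion frequently yields NaN gradients.
    Detaching `z` inside the loop and applying f once more afterwards leaves
    a single, shallow differentiable step through the converged value.
    """
    z = torch.zeros_like(x)
    for _ in range(max_iter):
        z_new = f(x, z.detach())  # detach: no gradient through the recursion
        if (z_new - z).abs().max() < tol:
            z = z_new
            break
        z = z_new
    # One final differentiable application so gradients w.r.t. x flow
    # through a single evaluation of f at the (approximate) fixed point.
    return f(x, z.detach())

# Example: solve z = tanh(x + 0.5 * z), a contraction, so iteration converges.
x = torch.randn(4, requires_grad=True)
z_star = truncated_fixed_point(x, lambda x, z: torch.tanh(x + 0.5 * z))
z_star.sum().backward()  # finite gradients, no NaNs from the deep recursion
```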
Test Plan
Unit tests and benchmarking.
Related PRs
First of a couple!
Bonus: while benchmarking, I initially had issues reproducing the LogEI performance. I found that sample_around_best made LogEI worse on Mich5. All experiments are otherwise a repro of the settings used in the LogEI paper.
@sdaulton has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Thanks! It seems like sample_around_best could definitely lead to the AF optimization getting stuck in a local optimum.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 99.99%. Comparing base (78c04e2) to head (d2ee1de).
```
@@           Coverage Diff           @@
##             main    #2748   +/-   ##
=======================================
  Coverage   99.99%   99.99%
=======================================
  Files         203      203
  Lines       18685    18691      +6
=======================================
+ Hits        18684    18690      +6
  Misses          1        1
```
@sdaulton for sure! I currently observe similar things for JES, but I'm not sure whether the found points are actually higher in acquisition function value or not (for either LogEI or JES)
That would be interesting to see