Max Balandat

Results 476 comments of Max Balandat

Not sure why the MacOS tests are failing, but that's unrelated to this PR.

One question here is whether we're ok with solving an LP on each call of `optimize_acqf`, as this isn't super lightweight. But I guess it's rather minimal compared to full...
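For a rough sense of that cost, here is a minimal sketch (using scipy purely as an illustration, not what this PR actually does) that times a small feasibility LP of the kind one might solve to find starting points under linear inequality constraints; the dimensions and constraints are made up:

```python
import time
import numpy as np
from scipy.optimize import linprog

d, m = 16, 8  # hypothetical dimension and number of inequality constraints
rng = np.random.default_rng(0)
A_ub = rng.standard_normal((m, d))
b_ub = rng.random(m) + 1.0  # keep the origin feasible so the LP is solvable

# Minimize a zero objective subject to A_ub @ x <= b_ub and box bounds,
# i.e. just find a feasible point.
t0 = time.monotonic()
res = linprog(c=np.zeros(d), A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * d)
print(f"feasible: {res.success}, wall time: {time.monotonic() - t0:.4f}s")
```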

Coverage gap is unrelated and fixed in #1234

So we do have a high-level description of some model types here: https://botorch.org/docs/models. But that page is not particularly detailed and is also incomplete. We could either update that page, or add...

> are there any cases where we wouldn't want to sample_around_best?

Probably not? I guess that requires some additional benchmarking.

> if it's so commonly useful for AF optimization why...
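For reference, `sample_around_best` is toggled via the `options` dict of `optimize_acqf`. A minimal sketch of what a benchmarking run could toggle (toy model and data, all hyperparameters made up):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy maximization problem for illustration.
train_X = torch.rand(20, 3, dtype=torch.double)
train_Y = -(train_X - 0.5).pow(2).sum(dim=-1, keepdim=True)

model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

acqf = ExpectedImprovement(model=model, best_f=train_Y.max())
bounds = torch.stack([torch.zeros(3), torch.ones(3)]).to(torch.double)

candidate, acq_value = optimize_acqf(
    acqf,
    bounds=bounds,
    q=1,
    num_restarts=10,
    raw_samples=256,
    # Seed part of the raw samples around the incumbent best points.
    options={"sample_around_best": True},
)
```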

Can you provide the data under which you observe this behavior? How different are the predicted outcomes? Minimal differences are always expected, since the optimization of...

I see. We should consider automating the splitting in this case. @takafusui the issue here is that the larger, much higher-dimensional problem for the batched model is a lot harder...
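A rough sketch of what the splitting could look like, assuming independent outcomes that are each fit as a separate `SingleTaskGP` and then wrapped in a `ModelListGP` (toy data for illustration):

```python
import torch
from botorch.models import SingleTaskGP, ModelListGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(30, 4, dtype=torch.double)
train_Y = torch.rand(30, 2, dtype=torch.double)  # two outcomes (made up)

# Fit each outcome as its own single-output model instead of one batched model.
models = []
for i in range(train_Y.shape[-1]):
    m = SingleTaskGP(train_X, train_Y[:, i : i + 1])
    fit_gpytorch_mll(ExactMarginalLogLikelihood(m.likelihood, m))
    models.append(m)

model = ModelListGP(*models)
```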

Yeah, the inconsistency with gpytorch prior shapes has been a long-standing issue, unfortunately: https://github.com/cornellius-gp/gpytorch/issues/1317, https://github.com/cornellius-gp/gpytorch/issues/1318. We really ought to fix this...

If you're using a 200D function (not sure if you are; the code seems inconsistent with that), then it's plausible that there may be quite a bit of...

Other options include:

1. Using KeOps to optimize the kernel operations: https://github.com/cornellius-gp/gpytorch/blob/master/examples/02_Scalable_Exact_GPs/KeOps_GP_Regression.ipynb
2. Using an approximate GP model, e.g. this one: https://github.com/cornellius-gp/gpytorch/blob/master/examples/02_Scalable_Exact_GPs/KISSGP_Regression.ipynb

We don't have either of those models as...
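For option 1, a minimal GPyTorch sketch using a KeOps-backed kernel could look like the following (this assumes pykeops is installed; the data and kernel choice are made up for illustration):

```python
import torch
import gpytorch
from gpytorch.kernels.keops import MaternKernel  # KeOps-backed kernel; needs pykeops


class KeOpsGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(MaternKernel(nu=2.5))

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)


# Toy data just to show the wiring; KeOps pays off at much larger n.
train_x = torch.rand(5000, 10)
train_y = train_x.sin().sum(dim=-1)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = KeOpsGPModel(train_x, train_y, likelihood)
```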