Max Balandat

Results: 476 comments by Max Balandat

Ideally we could do this in a way that doesn't require changing the forward/posterior methods. I haven't dug into this in detail, but is the issue here fundamentally that the...

> The FBGP doesn't have a batch dimension in the inputs, since there's only one set of training data. The length- and outputscales obviously do, however, which gives a batch dimension...
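To make that setup concrete, here is a minimal sketch (hypothetical model and data names, not how the current fully Bayesian models are actually implemented): the training inputs carry no batch dimension, while the hyperparameters do, e.g. because they come from a batch of MCMC samples. Whether exact prediction handles this combination cleanly is what the discussion is about.

```python
import torch
import gpytorch

# Sketch only: one set of training data (no batch dimension on the inputs),
# but batched hyperparameters, e.g. from 16 MCMC samples in a fully Bayesian GP.
train_x = torch.rand(20, 3)            # shape (n, d) -- no batch dimension
train_y = torch.sin(train_x.sum(-1))   # shape (n,)
batch_shape = torch.Size([16])         # hyperparameter batch (hypothetical size)


class BatchedHyperparamGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean(batch_shape=batch_shape)
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(batch_shape=batch_shape, ard_num_dims=3),
            batch_shape=batch_shape,
        )

    def forward(self, x):
        # The batched length-/outputscales broadcast against the unbatched
        # inputs, producing a MultivariateNormal with batch_shape == (16,).
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


likelihood = gpytorch.likelihoods.GaussianLikelihood(batch_shape=batch_shape)
model = BatchedHyperparamGP(train_x, train_y, likelihood)
```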

Started a PR for this in https://github.com/cornellius-gp/gpytorch/pull/2307. @hvarfner, can you check if this works for this use case? I haven't looked super closely through the prediction strategy, but as the...

cc @dme65, who introduced this check, but I believe it was in a context where we could not simply use SLSQP, so making sure that the ICs (initial conditions) satisfied the constraints...
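For context on why such a check matters less when SLSQP is available, a hedged, self-contained scipy sketch (toy objective and constraint, not the BoTorch code path): SLSQP can typically recover from an infeasible starting point, so requiring feasible ICs up front is mainly a safeguard for optimizers that cannot handle constraint violations.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative only: a toy objective with a single inequality constraint.
def objective(x):
    return float(np.sum((x - 0.5) ** 2))

constraints = [{"type": "ineq", "fun": lambda x: 1.0 - np.sum(x)}]  # sum(x) <= 1
bounds = [(0.0, 1.0), (0.0, 1.0)]

# Start from a point that violates the constraint; SLSQP typically still
# converges to a feasible optimum, so a hard feasibility check on the initial
# conditions is not strictly required when this method is used.
x0 = np.array([0.9, 0.9])
res = minimize(objective, x0, method="SLSQP", bounds=bounds, constraints=constraints)
print(res.x, res.success)
```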

> I still find the default SKLearn optimizer performs better at finding optimal hyperparameters for GPs

Do you mean just the optimizer itself? Or do you mean the actual GP...
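For reference, a sketch of what "the default SKLearn optimizer" presumably refers to (toy data, illustrative only): scikit-learn's `GaussianProcessRegressor` maximizes the log marginal likelihood with L-BFGS-B, optionally with random restarts, which is an optimizer choice rather than a different GP model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy data for illustration only.
X = np.random.rand(30, 2)
y = np.sin(X.sum(axis=1))

gpr = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0]),
    optimizer="fmin_l_bfgs_b",   # scikit-learn's default: L-BFGS-B on the log marginal likelihood
    n_restarts_optimizer=5,      # random restarts drawn from the hyperparameter bounds
)
gpr.fit(X, y)
print(gpr.kernel_)  # fitted hyperparameters
```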

Hi @yyexela, great to see you're interested in using derivative-enabled GPs with other acquisition functions. The error you're getting suggests that there may be numerical problems unrelated to the fact...

Also cc @pabloprf re derivative-enabled GPs.
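For readers landing here, a minimal derivative-enabled GP in GPyTorch looks roughly like the sketch below (modeled on the GPyTorch derivative-GP tutorial; names and data are placeholders). The targets stack the function value and the d partial derivatives per point, so the covariance matrix is (d+1) times larger per observation, which is also where ill-conditioning and numerical problems tend to show up.

```python
import torch
import gpytorch

d = 2  # input dimension (placeholder)


class GPWithDerivatives(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMeanGrad()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernelGrad(ard_num_dims=d)
        )

    def forward(self, x):
        # Joint prior over function values and all d partial derivatives.
        return gpytorch.distributions.MultitaskMultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


train_x = torch.rand(10, d)
train_y = torch.randn(10, d + 1)  # column 0: f(x); columns 1..d: df/dx_i (toy values)
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=d + 1)
model = GPWithDerivatives(train_x, train_y, likelihood)
```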

Nice work tracking down this issue! It seems a bit weird to me to fix this by wrapping the covariance in a `DenseLinearOperator` though, especially since it represents the same...
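To illustrate why that wrapping feels redundant, a tiny sketch using the `linear_operator` package (toy tensor): `DenseLinearOperator` just wraps the same dense matrix, so the fix changes which code path gets dispatched to rather than the covariance itself.

```python
import torch
from linear_operator.operators import DenseLinearOperator

covar = torch.eye(5) * 2.0          # some dense covariance matrix (toy example)
covar_op = DenseLinearOperator(covar)

# The wrapped operator represents exactly the same matrix; only its type differs.
print(torch.equal(covar_op.to_dense(), covar))  # True
```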

@yyexela, with the fix landed, is there anything outstanding on this feature request, or can this be closed out?