Max Balandat

476 comments

> What do you think? Would you like me to open a PR for this?

I think `MaxVar` is a bit of a misnomer, since the acquisition function is really...

> Probably adding an argument retain_graph = True will solve the issue.

Hmm, I'm not sure we want to do that; this may just end up masking a bigger issue...
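For context, here is a minimal sketch (my own toy example, not from the original thread) of the situation where `retain_graph=True` comes up: PyTorch frees the autograd graph's intermediate buffers after the first `backward()` call, so a second call on the same graph fails unless the graph is retained.

```
import torch

x = torch.ones(2, requires_grad=True)
y = (x ** 2).sum()

# The first backward() frees the graph's intermediate buffers; calling it
# again on the same graph raises a RuntimeError unless the graph is retained.
y.backward(retain_graph=True)
y.backward()  # succeeds only because the graph was kept alive above
```

Needing `retain_graph=True` is often a symptom of unintentionally backpropagating through the same graph twice, which is presumably the "bigger issue" being alluded to here.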

I stress-tested this (same versions, but on an M1 Mac) by running it 100 times, and I didn't get a single failure. So this is either a super rare...

I don't know if that can easily be done preemptively. What you could try is to use the autograd [anomaly detection mode](https://pytorch.org/docs/stable/autograd.html#anomaly-detection), which makes any backward-pass error include a traceback pointing at the forward operation that produced it:

```
from torch import autograd

with autograd.detect_anomaly():
    ...  # your...
```

What exactly is `new_candidates`? That variable doesn't appear in your code. Do you mean `new_x`? Is `a_tensor` in normalized space? Note that you're calling `new_x = unnormalize(candidates.detach(), bounds=self.standard_bounds)`, but it...
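For reference, here is a minimal round-trip sketch (my own illustration; the `bounds` values are made up) of how BoTorch's `normalize` / `unnormalize` map between the raw space and the unit cube:

```
import torch
from botorch.utils.transforms import normalize, unnormalize

bounds = torch.tensor([[0.0, -5.0], [10.0, 5.0]])  # 2 x d: lower and upper rows

X_raw = torch.tensor([[5.0, 0.0]])
X_unit = normalize(X_raw, bounds=bounds)     # -> tensor([[0.5, 0.5]]), in [0, 1]^d
X_back = unnormalize(X_unit, bounds=bounds)  # back to the raw space
assert torch.allclose(X_raw, X_back)
```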

> Should I divide by the maximum value of each constraint boundary

Yes, I highly recommend performing the optimization with the input domain normalized to the unit cube `[0, 1]^d`. Our...
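As a rough end-to-end sketch of that setup (my own toy example; the model, data, and objective are placeholders, and helper names like `fit_gpytorch_mll` vary a bit across BoTorch versions): fit the model on normalized inputs, optimize the acquisition function over `[0, 1]^d`, and unnormalize the candidate at the end.

```
import torch
from botorch.acquisition import ExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from botorch.utils.transforms import normalize, unnormalize
from gpytorch.mlls import ExactMarginalLogLikelihood

raw_bounds = torch.tensor([[0.0, -5.0], [10.0, 5.0]], dtype=torch.double)  # 2 x d
X_raw = raw_bounds[0] + (raw_bounds[1] - raw_bounds[0]) * torch.rand(8, 2, dtype=torch.double)
Y = -(X_raw**2).sum(dim=-1, keepdim=True)  # toy objective to maximize

# Fit the model on inputs normalized to the unit cube ...
X = normalize(X_raw, bounds=raw_bounds)
model = SingleTaskGP(X, Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# ... optimize the acquisition function over [0, 1]^d ...
unit_bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
acqf = ExpectedImprovement(model, best_f=Y.max())
cand, _ = optimize_acqf(acqf, bounds=unit_bounds, q=1, num_restarts=5, raw_samples=32)

# ... and map the candidate back to the raw problem space at the end.
new_x = unnormalize(cand.detach(), bounds=raw_bounds)
```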

Btw, if all you want to do is solve a multi-objective optimization problem and don't necessarily care about changing details / components of the optimization, I recommend you take a...

Related discussion here: https://github.com/pytorch/botorch/issues/1326 tl;dr is that it's really hard to come up with a generic initializer for generic nonlinear constraints, so I don't think we should support that. From...
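To make that concrete, here is a sketch (my own; the constraint and data are made up, and the exact format of the `nonlinear_inequality_constraints` argument has changed across BoTorch versions) of why the burden falls on the caller: since there is no generic feasible initializer, `optimize_acqf` needs explicit `batch_initial_conditions` when nonlinear constraints are passed.

```
import torch
from botorch.acquisition import ExpectedImprovement
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf

train_X = torch.rand(6, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
acqf = ExpectedImprovement(model, best_f=train_Y.max())

def constraint(x):
    # Feasible iff constraint(x) >= 0; here: x_0 + x_1 <= 1.
    return 1.0 - x.sum(dim=-1)

# There is no generic way to sample feasible points for an arbitrary
# nonlinear constraint, so the caller must supply the starting points:
ic = 0.25 * torch.rand(5, 1, 2, dtype=torch.double)  # num_restarts x q x d, all feasible
candidate, value = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=1,
    num_restarts=5,
    nonlinear_inequality_constraints=[constraint],
    batch_initial_conditions=ic,
)
```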

Thanks for fixing this. One thing that we should keep in mind here is that if the acquisition function values are small (as is often the case later during the...

I am not sure how widely used this is. The main thing I'm worried about is that the default behavior for non-negative but small-in-magnitude acquisition functions may change quite a...