Max Balandat
I think we should be able to easily expose a `transform_observation_data()` method to make this less hacky - not sure why we haven't done this until now. @bletham is there...
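For illustration, here is a minimal sketch of what such a helper could look like. It assumes the (older) per-transform `transform_observation_features` / `transform_observation_data` API on Ax `Transform` objects, and the helper name and looping order just mirror what `ModelBridge` does internally when preparing training data; treat it as a sketch, not the actual Ax API.

```python
def transform_observation_data(model_bridge, observation_data, observation_features):
    """Hypothetical helper: push raw observation data through all of a
    ModelBridge's transforms, in registration order."""
    for t in model_bridge.transforms.values():
        # Features must be transformed first, since some transforms use
        # the (transformed) features when transforming the data.
        observation_features = t.transform_observation_features(observation_features)
        observation_data = t.transform_observation_data(
            observation_data, observation_features
        )
    return observation_data
```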
Thanks for flagging this; it does seem like a bug. I'll take a closer look.
OK, so the issue is that the hit-and-run sampling fallback in Ax (see [here](https://github.com/facebook/Ax/blob/6c55c0fe344e28fdaed80097be570266b6e2e484/ax/models/random/base.py#L148-L165)) currently neither applies the normalization that you mentioned nor, more importantly, does it...
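For context, here is a hedged sketch of drawing points from a constraint polytope directly with BoTorch's `HitAndRunPolytopeSampler` (the sampler this Ax fallback wraps). Note that constructor argument names such as `n_burnin`/`n_thinning` have varied across BoTorch versions, so check the docs for the version you're on.

```python
import torch
from botorch.utils.sampling import HitAndRunPolytopeSampler

# Polytope {x : A @ x <= b}: here the 3-simplex x_i >= 0, sum_i x_i <= 1.
A = torch.cat([-torch.eye(3), torch.ones(1, 3)])
b = torch.cat([torch.zeros(3, 1), torch.ones(1, 1)])

sampler = HitAndRunPolytopeSampler(
    inequality_constraints=(A, b),
    n_burnin=200,   # discard early samples while the chain mixes
    n_thinning=20,  # keep only every 20th sample to reduce autocorrelation
)
samples = sampler.draw(n=128)  # 128 x 3 tensor of points inside the polytope
```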
Thinning (keeping only every `k`-th sample) is a technique for reducing the autocorrelation of samples drawn from an MCMC chain. This is happening here: https://github.com/pytorch/botorch/blob/main/botorch/utils/sampling.py#L828 (this is a poor man's...
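In code, thinning is just strided indexing. A toy sketch (not the BoTorch implementation linked above):

```python
import torch

# A toy correlated "chain": a Gaussian random walk in 3 dimensions.
chain = torch.randn(10_000, 3).cumsum(dim=0)

n_burnin, k = 200, 20
# Drop the burn-in prefix, then keep only every k-th sample.
thinned = chain[n_burnin::k]
```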
This has been fixed in #2492
@trevor-haas thanks for the accolades; happy to hear that you're enjoying Ax. And thanks for contributing to making it better by engaging with the community. As you said, in general...
> Often, the generator provides three nearly identical recommendations. As @trevor-haas described, it will be negligibly different in a real experiment with noise in the inputs. Any best practices to...
@cheeseheist Ah, I see: when you are generating points in sequence, you are not properly accounting for these "pending points". Can you share some code showing exactly how you're using the...
@trevor-haas this approach should work fine. For the "evaluate multiple configurations together to avoid overhead" scenario, it is fine to manually group a set of individual trials together and use...
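For instance, here is a hedged sketch of that grouping, assuming `model_bridge` is a fitted `ModelBridge` and a runner is configured on the experiment:

```python
# Generate several candidates at once and deploy them together as a
# single batch trial to amortize per-evaluation overhead.
generator_run = model_bridge.gen(n=3)
trial = experiment.new_batch_trial(generator_run=generator_run)
trial.run()  # assumes a Runner is attached to the experiment
```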
Actually, I need to correct myself: If you use the bare `ModelBridge`, creating and attaching the trial to the experiment is not enough. You have to manually pass in the...
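Here is a hedged sketch of passing the pending points manually. The status check and the exact shape of `pending_observations` (metric name mapped to a list of `ObservationFeatures`) follow my reading of the `ModelBridge.gen` API and may need adjusting for your Ax version:

```python
from ax.core.base_trial import TrialStatus
from ax.core.observation import ObservationFeatures

# Collect the arms of all running (not-yet-evaluated) trials as pending
# observation features, keyed by metric name as ModelBridge.gen expects.
pending = {
    metric_name: [
        ObservationFeatures.from_arm(arm)
        for trial in experiment.trials.values()
        if trial.status == TrialStatus.RUNNING
        for arm in trial.arms
    ]
    for metric_name in experiment.metrics
}

# Pass the pending points explicitly so the generator does not
# re-suggest (near-)duplicates of configurations already in flight.
generator_run = model_bridge.gen(n=3, pending_observations=pending)
```

Ax also ships a helper, `get_pending_observation_features`, that builds this mapping from an experiment; if it's available in your version, prefer it over hand-rolling the dict above.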