Max Balandat
This is great, thanks for adding this!
> I encountered a case where an experiment had float64 trial data where the data changed values after being saved and reloaded. It looks like the data was converted to...
> as a result some trials from ax_client.get_next_trial() end up being out-of-design

This shouldn't really be happening in the first place. Seems like the candidate generation produces outputs that slightly...
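As a hypothetical illustration of how a precision round-trip can nudge a boundary value out of design (the exact conversion in the report above is truncated, so the float32 cast here is an assumption, not the confirmed cause):

```python
import numpy as np

# Hypothetical sketch: a candidate sitting exactly at an upper bound of 0.3
# in float64 can end up slightly out of bounds after a float32 round-trip,
# because 0.3 is not exactly representable and float32 rounds it upward.
upper_bound = 0.3
candidate = np.float64(0.3)

roundtripped = np.float64(np.float32(candidate))
print(roundtripped > upper_bound)  # True: the value now slightly exceeds the bound
```

The same effect applies to lower bounds (rounding downward), which is one way serialized trial data can come back "out-of-design" even though it was generated in-bounds.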
+1 to understanding what versions of Ax/BoTorch you're currently using. On trunk, the input constructor for `qMaxValueEntropy` (which also subclasses `DiscreteMaxValueBase`) should be a good starting point for `qLowerBoundMaxValueEntropy`...
You should be able to follow the tutorial https://ax.dev/tutorials/factorial but just use a single `ChoiceParameter` instead (where the `values` are the list of choices from your strategy). You can...
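For concreteness, a minimal sketch of what that single-`ChoiceParameter` setup could look like in Service-API style (the parameter name and choice values here are made up for illustration):

```python
# Hypothetical choices standing in for whatever strategies are being compared.
strategies = ["strategy_a", "strategy_b", "strategy_c"]

# Service-API style config for a single ChoiceParameter; a list like this
# is what would be passed as `parameters=` to `ax_client.create_experiment(...)`.
parameters = [
    {
        "name": "strategy",   # hypothetical parameter name
        "type": "choice",
        "value_type": "str",
        "values": strategies,
    }
]
print(parameters[0]["values"])
```

This replaces the multiple factors from the factorial tutorial with one categorical parameter whose values enumerate the choices.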
I don't understand - this is just the synthetic data generating process from the tutorial (which wouldn't apply in your setting with a non-factorial design - you'd either just write...
Hmm yeah this does not seem right; it does appear that the transforms are being modified in some way that they shouldn't be.
Do you have observations for the observation noise? If so, take a look at the [HeteroskedasticSingleTaskGP](https://github.com/pytorch/botorch/blob/master/botorch/models/gp_regression.py#L226) that is implemented in BoTorch. That uses a nested GP model to model the...
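Roughly, the data layout that model expects is `n x d` inputs, `n x 1` outcomes, and `n x 1` observed noise variances; here is a sketch with random stand-in data (the actual model construction is shown only in a comment, since it requires torch tensors and an older BoTorch version where this class is available):

```python
import numpy as np

# Shapes expected by the model: n x d inputs, n x 1 outcomes,
# and n x 1 measured noise variances, one per observation.
n, d = 20, 2
train_X = np.random.rand(n, d)
train_Y = np.sin(train_X.sum(axis=-1, keepdims=True))
train_Yvar = 0.01 + 0.1 * np.random.rand(n, 1)  # observed noise per point

# With torch tensors of these shapes, construction would look like:
#   model = HeteroskedasticSingleTaskGP(train_X, train_Y, train_Yvar)
print(train_X.shape, train_Y.shape, train_Yvar.shape)
```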
Hi @ArnoVel , sorry for the delay here. Let me see if I can check off your questions. Generally, I should say that #250 is quite old and does not...
Hi @IanDelbridge, yes, that's correct, and your solution is as well. It's actually not that hacky at all, all things considered. Ideally, we'd have an automated way of transforming the...