Max Balandat
So the sampling will work if you instantiate your prior with scalar parameters, e.g. `gpytorch.priors.NormalPrior(1, 1)` instead of what you're doing. Fundamentally, this is related to a bug (ambiguity?) in...
Indeed - if you want priors with different (prior) parameters for different parameter dimensions, then this is the way to do it (for now, until we've found a proper solution...
Great question. So the idea behind the model is to use the `Log` `OutcomeTransform` in the noise model: https://github.com/pytorch/botorch/blob/master/botorch/models/gp_regression.py#L394 In general, the outcome transform *should* automatically untransform the predictions back...
Yeah, I'm not suggesting it wasn't, just flagging the issue of a BoTorch model with an outcome transform being used in a context where it may not be proper.
Thanks for checking. Did you try this also with @btlorch's suggested modification (and removal of the Log outcome transform from the noise model)?
> I think the use of input_transform in the noise_model is also problematic since the train_X doesn't get stored as transformed here.

Yeah. https://github.com/cornellius-gp/gpytorch/issues/1652 would help a lot here. Re...
Interesting. Yeah, the fact that you have to do pre-training isn't ideal - jointly training the noise model and the outer model that utilizes the noise model will presumably result...
@yangshun we have a pretty funky setup (see e.g. [this script](https://github.com/facebook/Ax/blob/main/scripts/publish_site.sh)) where we are doing versioning of sphinx docs that we render via docusaurus. We do this by essentially manually...
Can we add a check for this in the unit test?
> I think Ax's best algorithm is ALEBO

"best" in what way? Different algorithms will work well on different problems (ALEBO is targeted at very high-dimensional search spaces). What kinds...