botorch
[Feature Request] Implement output selection in untransform_posterior for outcome transforms.
🚀 Feature Request
Implement output selection in untransform_posterior for outcome transforms.
Motivation
Currently, no outcome transform supports untransforming a posterior when a subset of outputs is selected. This makes multi-output SingleTaskGP or MultiTaskGP models inflexible for cases where each output requires a different outcome transform to properly treat objectives and constraints. For example, a batched SingleTaskGP can represent both objective functions (requiring a Standardize transform) and constraint functions (requiring a Bilog transform).
Pitch
I'm willing to open a PR to fix this issue, but that process could be sped up with some guidance on how to do this properly and on any problems that have prevented it from being implemented so far.
This makes sense. I don't think there is anything particularly "hard" about this; it's just a matter of keeping track of the indices properly and applying the transformations accordingly, ideally without a bunch of additional (and repeated) complexity.
One thing to think about more is how this works in TransformedPosterior. Essentially, the subsetting and transformation would have to happen in there, and there are a couple of options to consider: e.g., making the transform functions (such as sample_transform) apply correctly under subsetting, or keeping some kind of mapping from output indices to transform functions on TransformedPosterior so that the standard functions can be applied to a subset of the outputs.
We'll probably want some kind of wrapper that allows us to apply vectorized functions to a subset of tensor indices so that we don't have to copy-paste a lot of code.
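A minimal sketch of such a wrapper (the helper name and signature here are hypothetical, not part of BoTorch):

```python
import torch


def apply_to_output_indices(fn, Y: torch.Tensor, idcs: list) -> torch.Tensor:
    """Hypothetical helper: apply a vectorized transform `fn` only to the
    outputs of `Y` (slices along the last dimension) selected by `idcs`,
    leaving the remaining outputs untouched."""
    out = Y.clone()
    out[..., idcs] = fn(out[..., idcs])
    return out


# Example: exponentiate only output 1 of a `3 x 4 x 2` tensor of zeros.
Y = torch.zeros(3, 4, 2)
Z = apply_to_output_indices(torch.exp, Y, [1])
```

Each outcome transform's functions (mean_transform, variance_transform, etc.) could then be routed through such a helper with the indices tracked by the posterior, rather than duplicating the subsetting logic in every transform.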
I'd have to think a bit more through this, but this is what comes to mind immediately.
It occurs to me that I won't be able to work on this until mid June, so I'll take a stab at it then unless someone else gets to it first.
Using ModelListGP with a different outcome transform for each sub-model enables this functionality. While it's not as fast as using a SingleTaskGP with multiple outputs (assuming the input training set is the same for each output), it's sufficient for my use case (for now). This probably won't get fixed for a while, until it becomes really necessary.
Thanks so much for the update!