botorch
[Feature Request] Make PosteriorTransform API (optionally) take features X
🚀 Feature Request
Allow the PosteriorTransform.evaluate() and PosteriorTransform.forward() (https://github.com/pytorch/botorch/blob/242ddd9efc158c1e00c0d0296151891f8922dbc7/botorch/acquisition/objective.py#L48-L70) to also accept the features X in addition to the values Y or posterior, respectively.
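A minimal sketch of what the proposed signatures could look like, with a toy feature-dependent transform to show why X is needed. This is illustrative only — the class body and the `ScaleByFirstFeature` example are assumptions, not the actual botorch API (which operates on tensors and `Posterior` objects):

```python
from abc import ABC, abstractmethod


class PosteriorTransform(ABC):
    # Hypothetical signatures with an optional X, per this request;
    # not the actual botorch API.
    @abstractmethod
    def evaluate(self, Y, X=None):
        """Evaluate the transform on outcomes Y, optionally given the
        inputs X that produced them."""

    @abstractmethod
    def forward(self, posterior, X=None):
        """Transform a posterior, optionally given the inputs X at
        which it was computed."""


class ScaleByFirstFeature(PosteriorTransform):
    """Toy transform that needs X: scale each outcome by its first feature."""

    def evaluate(self, Y, X=None):
        if X is None:
            raise ValueError("this transform requires X")
        return [y * x[0] for y, x in zip(Y, X)]

    def forward(self, posterior, X=None):
        # for illustration, treat `posterior` as a list of posterior means
        return self.evaluate(posterior, X=X)
```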
Motivation
See #1545
Additional context
This blocks on the input transform refactor: until that is done, the X we have at the point where we call the posterior_transform tends to be the transformed X.
Hi! Coming here from #1545, after seeing @Balandat's comment.
Would it really require so many changes throughout the codebase if we were to have all OutcomeTransforms optionally take X as well as Y and Yvar? This would require going through each GP class and changing
train_Y, train_Yvar = outcome_transform(train_Y, train_Yvar)
to
train_Y, train_Yvar = outcome_transform(train_Y, train_Yvar, train_X)
as well as changing all 5 occurrences of
self.outcome_transform.untransform_posterior(posterior)
to
self.outcome_transform.untransform_posterior(posterior, X)
and, as a prerequisite, adding the extra X parameter to the method signatures in OutcomeTransform.
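The changes listed above could be sketched as follows. The names mirror botorch, but the classes here are toy stand-ins and `SubtractFirstFeature` is an invented example of an X-aware transform — not the real implementation, which works on tensors:

```python
class OutcomeTransform:
    # Hypothetical base class with the proposed optional X argument;
    # not the actual botorch OutcomeTransform.
    def __call__(self, Y, Yvar=None, X=None):
        return Y, Yvar  # identity by default; subclasses may use X

    def untransform_posterior(self, posterior, X=None):
        return posterior


class SubtractFirstFeature(OutcomeTransform):
    """Toy X-aware transform: subtract each input's first feature from Y."""

    def __call__(self, Y, Yvar=None, X=None):
        if X is None:
            raise ValueError("SubtractFirstFeature needs X")
        return [y - x[0] for y, x in zip(Y, X)], Yvar


# The model-construction call site then changes from
#   train_Y, train_Yvar = outcome_transform(train_Y, train_Yvar)
# to
#   train_Y, train_Yvar = outcome_transform(train_Y, train_Yvar, train_X)
outcome_transform = SubtractFirstFeature()
train_Y, train_Yvar = outcome_transform([1.0, 2.0], None, [[0.5], [1.0]])
```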
This would indeed require many changes throughout the codebase; however, the change itself seems very straightforward (just copy and paste in all places), or am I missing something important?
This blocks on the input transform refactor: until that is done, the X we have at the point where we call the posterior_transform tends to be the transformed X.
I don't have much context on this anymore, but @saitcakmak probably does since he's been thinking through the input transform refactor (and possibly landed parts of that)?
I haven't touched the input transform refactor diff in quite a while. IIRC, it was most of the way there and just needed cleanup of some remaining models & tests. It'd probably take up to a week of work to bring it to a landable state.
however the change itself seems very straightforward (just copy and paste in all places), or am I missing something important?
One possible gotcha in your solution is that the self.outcome_transform.untransform_posterior(posterior, X) call would have access to input-transformed X. We would have to keep a copy of X before applying the input transforms, so that the outcome transform would get the X from the original space.
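A sketch of that fix, assuming a toy model where the transforms are simple callables (none of these names reflect the actual botorch Model.posterior implementation): keep a reference to the original-space X before applying the input transform, and hand that copy to the outcome transform:

```python
class ToyModel:
    """Illustrative stand-in; not the actual botorch Model."""

    def __init__(self, input_transform, outcome_transform):
        self.input_transform = input_transform
        self.outcome_transform = outcome_transform

    def posterior(self, X):
        X_orig = X  # keep X in the original (untransformed) space
        X_tf = [self.input_transform(x) for x in X]
        post = [sum(x) for x in X_tf]  # stand-in for the GP posterior
        # pass the original-space X, not the input-transformed one
        return self.outcome_transform(post, X_orig)


model = ToyModel(
    input_transform=lambda x: [xi / 10.0 for xi in x],  # e.g. normalize
    outcome_transform=lambda post, X: [p + x[0] for p, x in zip(post, X)],
)
```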
One possible gotcha in your solution is that the self.outcome_transform.untransform_posterior(posterior, X) call would have access to input-transformed X. We would have to keep a copy of X before applying the input transforms, so that the outcome transform would get the X from the original space.
Thanks for your reply. This is a good point. In my use case this is not a problem, since the transformed X is sufficient. However, it is hard to imagine that this would hold for every use case.