[Bug] Repeated MultitaskMultivariateNormal errors for prediction
🐛 Bug
I have been trying to modify the Batch Independent Multioutput GP example to use a single kernel with shared hyperparameters across the independent tasks (as opposed to a different set of kernel hyperparameters per task/batch dim).
One of the models I have tried is shown below, and it fails when attempting prediction. I'm not sure if this is a bug or if I'm making a mistake and there is a better way of achieving what I want; any help would be much appreciated.
To reproduce
Replace the `BatchIndependentMultitaskGPModel` in the example with:

```python
class SharedKernelMultitaskGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ZeroMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(),
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultitaskMultivariateNormal.from_repeated_mvn(
            gpytorch.distributions.MultivariateNormal(mean_x, covar_x), num_tasks=2
        )
```
```python
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=2)
model = SharedKernelMultitaskGPModel(train_x, train_y, likelihood)

model.eval()
model(test_x)
```
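For reference on the shapes involved: with the example's 100 training points, 51 test points, and 2 tasks, `from_repeated_mvn` backs the distribution with a block-interleaved covariance, and the failing slice (`joint_covar[..., self.num_train:, :]` with `num_train = 200` interleaved training targets) should yield exactly the `torch.Size([102, 302])` that the error message says it expected. A plain-Python sketch of that interleaved layout (the counts are inferred from the tutorial and the error message, not read out of GPyTorch internals):

```python
def interleave(base, num_tasks):
    """Expand an (n x n) base covariance into the (n*t x n*t) block-interleaved
    layout used for independent tasks: entry (t*i + a, t*j + b) is base[i][j]
    when a == b (same task), and 0 otherwise (tasks are independent)."""
    n, t = len(base), num_tasks
    out = [[0.0] * (n * t) for _ in range(n * t)]
    for i in range(n):
        for j in range(n):
            for a in range(t):
                out[t * i + a][t * j + a] = base[i][j]
    return out

# 100 train + 51 test points, 2 tasks -> joint interleaved covariance is 302 x 302
n_total, num_tasks, n_train_targets = 151, 2, 200
base = [[1.0 if i == j else 0.0 for j in range(n_total)] for i in range(n_total)]
joint = interleave(base, num_tasks)

# Analogous to joint_covar[..., self.num_train:, :] in exact_prediction_strategies.py
test_rows = joint[n_train_targets:]
print(len(test_rows), len(test_rows[0]))  # 102 302, matching the "expected" shape
```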
Stack trace/error message
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-21-ed64434da46a> in <module>()
      1 model.eval()
----> 2 model(test_x)

2 frames
/usr/local/lib/python3.7/dist-packages/gpytorch/models/exact_gp.py in __call__(self, *args, **kwargs)
    319         # Make the prediction
    320         with settings._use_eval_tolerance():
--> 321             predictive_mean, predictive_covar = self.prediction_strategy.exact_prediction(full_mean, full_covar)
    322 
    323         # Reshape predictive mean to match the appropriate event shape

/usr/local/lib/python3.7/dist-packages/gpytorch/models/exact_prediction_strategies.py in exact_prediction(self, joint_mean, joint_covar)
    252         # For efficiency - we can make things more efficient
    253         if joint_covar.size(-1) <= settings.max_eager_kernel_size.value():
--> 254             test_covar = joint_covar[..., self.num_train :, :].evaluate()
    255             test_test_covar = test_covar[..., self.num_train :]
    256             test_train_covar = test_covar[..., : self.num_train]

/usr/local/lib/python3.7/dist-packages/gpytorch/lazy/lazy_tensor.py in __getitem__(self, index)
   2216                 raise RuntimeError(
   2217                     "{}.__getitem__ failed! Expected a final shape of size {}, got {}. This is a bug with GPyTorch, "
-> 2218                     "or your custom LazyTensor.".format(self.__class__.__name__, expected_shape, res.shape)
   2219                 )
   2220 

RuntimeError: BlockInterleavedLazyTensor.__getitem__ failed! Expected a final shape of size torch.Size([102, 302]), got torch.Size([0, 0]). This is a bug with GPyTorch, or your custom LazyTensor.
```
Expected Behavior
Should produce predictions at test points.
System information
- GPyTorch Version: 1.6.0
- PyTorch Version: 1.10.0+cu111
- Google Colab Notebook
Additional Information
When doing some basic print debugging, the problematic `joint_covar` lazy tensor appears to be fine going into the slice: its size and contents look correct (e.g. via `print(full_covar.shape)`).