
MultitaskGPModel()-GPU

Open · wangchunlin opened this issue 1 year ago · 1 comment

🐛 Bug

To reproduce

**Code snippet to reproduce**

# Your code goes here
# Please make sure it does not require any external dependencies (other than PyTorch!)
# (We much prefer small snippets rather than links to existing libraries!)

import math
import torch
import gpytorch
from matplotlib import pyplot as plt
import numpy as np
from sklearn.utils import shuffle
import torch.utils.data as Data
from sklearn.metrics import mean_squared_error

sample_lat = np.load('sample_lat_route11_15s.npy', allow_pickle=True)
sample_lon = np.load('sample_lon_route11_15s.npy', allow_pickle=True)
sample_lon = shuffle(sample_lon)
sample_lat = shuffle(sample_lat)

length = len(sample_lon)
num_train = int(length * 0.6)
num_validation = int(length * 0.2)
num_test = int(length * 0.2)
num_feature = 9

# Model training
x_train = sample_lon[0:1000, 0:num_feature]
y_train = sample_lon[0:1000, num_feature:]
x_train = torch.FloatTensor(x_train)
y_train = torch.FloatTensor(y_train)

x_validation = sample_lon[num_train:num_train + num_validation, 0:num_feature]
y_validation = sample_lon[num_train:num_train + num_validation, num_feature:]
x_validation = torch.FloatTensor(x_validation)
y_validation = torch.FloatTensor(y_validation)

x_test = sample_lon[num_train + num_validation:-1, 0:num_feature]
y_test = sample_lon[num_train + num_validation:-1, num_feature:]
x_test = torch.FloatTensor(x_test)
y_test = torch.FloatTensor(y_test)

num_latents = 3
num_tasks = 4

class MultitaskGPModel(gpytorch.models.ApproximateGP):
    def __init__(self):
        # Let's use a different set of inducing points for each latent function
        inducing_points = torch.rand(num_latents, 16, 9)

        # We have to mark the CholeskyVariationalDistribution as batch
        # so that we learn a variational distribution for each task
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(-2), batch_shape=torch.Size([num_latents])
        )

        # We have to wrap the VariationalStrategy in a LMCVariationalStrategy
        # so that the output will be a MultitaskMultivariateNormal rather than a batch output
        variational_strategy = gpytorch.variational.LMCVariationalStrategy(
            gpytorch.variational.VariationalStrategy(
                self, inducing_points, variational_distribution, learn_inducing_locations=True
            ),
            num_tasks=4,
            num_latents=3,
            latent_dim=-1
        )

        super().__init__(variational_strategy)

        # The mean and covariance modules should be marked as batch
        # so we learn a different set of hyperparameters
        self.mean_module = gpytorch.means.ConstantMean(batch_shape=torch.Size([num_latents]))
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(batch_shape=torch.Size([num_latents])),
            batch_shape=torch.Size([num_latents])
        )

    def forward(self, x):
        # The forward function should be written as if we were dealing with each output
        # dimension in batch
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=3)
model = MultitaskGPModel()
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

model.to(device)
likelihood.to(device)
model.train()
likelihood.train()

# Use the Adam optimizer
optimizer = torch.optim.Adam(
    [{'params': model.parameters()}, {'params': likelihood.parameters()}],
    lr=0.001
)

# Loss for GPs - the marginal log likelihood
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=y_train.size(0))

training_iterations=50

torch_dataset = Data.TensorDataset(x_train, y_train)
loader = Data.DataLoader(dataset=torch_dataset, batch_size=60, shuffle=True)

for i in range(training_iterations):
    for batch_x, batch_y in loader:
        batch_x = batch_x.to(device)
        batch_y = batch_y.to(device)
        output = model(batch_x)
        loss = -mll(output, batch_y)
        optimizer.zero_grad()
        loss.backward()
        print('Iter %d/%d - Loss: %.3f' % (i + 1, training_iterations, loss.item()))
        optimizer.step()

**Stack trace/error message**

Traceback (most recent call last):
  File "D:\OneDrive - NTNU\PhD research\Uncertainty trajectories prediction_trajectory_analysis\Pyfiles-journal\GPyTorch-DESKTOP-IR9503H.py", line 103, in <module>
    loss = -mll(output, batch_y)
  File "D:\Anaconda111\lib\site-packages\gpytorch\module.py", line 30, in __call__
    outputs = self.forward(*inputs, **kwargs)
  File "D:\Anaconda111\lib\site-packages\gpytorch\mlls\variational_elbo.py", line 77, in forward
    return super().forward(variational_dist_f, target, **kwargs)
  File "D:\Anaconda111\lib\site-packages\gpytorch\mlls\_approximate_mll.py", line 58, in forward
    log_likelihood = self._log_likelihood_term(approximate_dist_f, target, **kwargs).div(num_batch)
  File "D:\Anaconda111\lib\site-packages\gpytorch\mlls\variational_elbo.py", line 61, in _log_likelihood_term
    return self.likelihood.expected_log_prob(target, variational_dist_f, **kwargs).sum(-1)
  File "D:\Anaconda111\lib\site-packages\gpytorch\likelihoods\gaussian_likelihood.py", line 43, in expected_log_prob
    noise = noise.view(*noise.shape[:-1], *input.event_shape)
RuntimeError: shape '[60, 4]' is invalid for input of size 180

Expected Behavior

System information

Please complete the following information:


Additional context

Add any other context about the problem here.

When I used batched samples to train the model on CUDA, an error occurred: "the model must be trained on input samples."

wangchunlin avatar Nov 13 '22 08:11 wangchunlin

Without knowing what your data looks like, I think there's an issue with your code: your LMCVariationalStrategy is constructed with num_tasks=4, but your MultitaskGaussianLikelihood has num_tasks=3:

likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=3)

Given the error message, it seems like this discrepancy is the culprit: with a batch size of 60, a 3-task likelihood produces 60 × 3 = 180 noise entries, which cannot be reshaped to the [60, 4] target that a 4-task model expects.
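If your data really do have 4 output columns, a minimal sketch of the fix (reusing the MultitaskGPModel and y_train defined in the snippet above) is to make the likelihood's task count match the strategy's:

num_tasks = 4  # must match the num_tasks passed to LMCVariationalStrategy

likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=num_tasks)
model = MultitaskGPModel()

# Quick sanity check before training: the targets, the likelihood, and the
# LMC strategy all have to agree on the number of tasks.
assert y_train.size(-1) == num_tasks, "y_train's last dim must equal num_tasks"

Alternatively, if there really are only 3 tasks, change num_tasks=4 inside LMCVariationalStrategy (and make sure y_train has 3 columns) instead.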

gpleiss avatar Nov 29 '22 02:11 gpleiss