[Docs] Unexpected behavior setting kernel priors
Hi, I'm interested in setting better priors on my GP kernel, but I'm unsure whether I'm setting them correctly. Below are two ways I have tried to set the kernel priors. I expected both to behave the same, but when I fit my GP, I find that covar_module1 works significantly better than covar_module2. Is there a reason for this behavior?
import gpytorch
from gpytorch.kernels import MaternKernel, ScaleKernel

nu = 2.5  # example Matern smoothness value

lengthscale_prior = gpytorch.priors.GammaPrior(3.0, 6.0)
outputscale_prior = gpytorch.priors.GammaPrior(2.0, 0.15)

covar_module1 = ScaleKernel(
    MaternKernel(
        nu=nu,
        ard_num_dims=2,
        lengthscale_prior=lengthscale_prior,
    ),
    outputscale_prior=outputscale_prior,
)
covar_module2 = ScaleKernel(
    MaternKernel(
        nu=nu,
        ard_num_dims=2,
    ),
)
covar_module2.base_kernel.lengthscale_prior = lengthscale_prior
covar_module2.outputscale_prior = outputscale_prior
Expected Behavior
covar_module1 and covar_module2 should perform similarly when used to fit a SingleTaskGP.
System information
GPyTorch: 1.10
PyTorch: 2.0.1
OS: Ubuntu 20.04
Python: 3.10
FYI, this is a repeat of https://github.com/pytorch/botorch/issues/2307.
You can't just assign the attribute for the prior; you need to use the register_prior() method on the GPyTorch module so that GPyTorch knows to include the prior in the MLL computation. In your setup for covar_module2, I don't think that prior is ever being used.
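For example, something along these lines should attach the priors to covar_module2 after construction. This is a sketch: the _set_lengthscale / _set_outputscale setting closures mirror what Kernel.__init__() and ScaleKernel.__init__() do internally, so double-check against your GPyTorch version.

covar_module2.base_kernel.register_prior(
    "lengthscale_prior",
    lengthscale_prior,
    lambda m: m.lengthscale,             # closure: the parameter the prior applies to
    lambda m, v: m._set_lengthscale(v),  # setting closure: used e.g. when sampling from the prior
)
covar_module2.register_prior(
    "outputscale_prior",
    outputscale_prior,
    lambda m: m.outputscale,
    lambda m, v: m._set_outputscale(v),
)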
See e.g. https://github.com/cornellius-gp/gpytorch/blob/master/gpytorch/kernels/kernel.py#L205-L207 for how this is done in the Kernel.__init__() method.
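If you want to confirm which priors will actually enter the MLL, you can iterate over named_priors() on the module; the exact tuple layout may differ slightly across versions, so this sketch only prints the names:

# Registered priors show up here; the plain attribute assignment above does not.
for name, *rest in covar_module2.named_priors():
    print(name)  # e.g. "base_kernel.lengthscale_prior", "outputscale_prior"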