Additional KeOps Kernels
🚀 Feature Request
Hi :wave:, I was very pleased to find that KeOps kernels are already natively integrated into GPyTorch. However, I am wondering why only the RBF and Matérn kernels are available. I guess it's simply because no one has gotten around to implementing the remaining ones? If so, I'd be happy to open a PR adding the Linear and Polynomial kernels, as I need them in one of my projects. If there is another reason they are not there yet, I'd be glad if someone could let me know.
Motivation
Need a memory-efficient approach to handle cases with N > 100,000 data points.
Pitch
Let's add KeOps versions of the Linear and Polynomial kernels. I'm happy to start a PR for this and draft the necessary code.
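For context on why this matters at large N, here is a minimal sketch (plain PyTorch, not GPyTorch's or KeOps' actual implementation) of the memory argument for the linear kernel: a matrix-vector product with K = X Zᵀ can be reassociated so the N × M kernel matrix is never formed.

```python
import torch

def linear_kernel_matvec(X, Z, v):
    # (X @ Z.T) @ v reassociated as X @ (Z.T @ v):
    # only O((N + M) * d) memory instead of O(N * M).
    return X @ (Z.T @ v)

torch.manual_seed(0)
X, Z = torch.randn(1000, 5), torch.randn(800, 5)
v = torch.randn(800)

# Check against the dense computation (feasible only at this small scale).
dense = (X @ Z.T) @ v
assert torch.allclose(linear_kernel_matvec(X, Z, v), dense, atol=1e-4)
```

A KeOps-backed kernel achieves the same effect for non-linear kernels (RBF, Matérn, polynomial, ...) by tiling the reduction on the GPU rather than by algebraic reassociation.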
Sounds great - I don't think there is any reason they are not available other than nobody having gotten to this yet. More than happy to accept a PR.
@Balandat Since gpytorch has started using linear_operator for kernels, is there a benefit to implementing kernels with KeOps? From what I understand, both libraries use symbolic matrices: only the inputs needed to compute the matrices are stored, and the full covariance matrices are never materialized in memory.
I have started using the RBFKernelGrad kernel, but it still materializes the full covariance matrix in memory. Adding derivative information increases the size of the covariance matrix, so I am curious whether the code could be rewritten using BlockLinearOperator or KeOps, so that the full covariance matrix never needs to be allocated.
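To quantify the blow-up: for N training points in d dimensions, a gradient-enhanced kernel like RBFKernelGrad produces an N(d+1) × N(d+1) covariance (function values plus all partial derivatives). A back-of-the-envelope helper (illustrative only, not part of any library):

```python
def grad_cov_bytes(N, d, dtype_bytes=8):
    # Dense storage for an N*(d+1) x N*(d+1) gradient-enhanced covariance.
    side = N * (d + 1)
    return side * side * dtype_bytes

# e.g. N = 10_000 points in d = 3 dims: a 40_000 x 40_000 matrix,
# i.e. 12.8 GB in float64 - already beyond most single-GPU memory.
print(grad_cov_bytes(10_000, 3) / 1e9)  # → 12.8
```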
> From what I understand, both libraries use symbolic matrices where only the inputs used to calculate the matrices are specified, and the whole covariance matrices are never stored into memory.
This is not the case. KeOps works as you describe; the linear_operator library instead uses structured linear algebra routines when structure is available (e.g. Kronecker structure for multitask models, Toeplitz structure for gridded data, etc.)
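As a concrete instance of such a structured routine, here is the standard Kronecker matvec identity in plain PyTorch (a sketch of the idea, not linear_operator's actual code): for row-major flattening, (A ⊗ B) vec(V) = vec(A V Bᵀ), so the nm × nm Kronecker product never needs to be materialized.

```python
import torch

def kron_matvec(A, B, x):
    # Computes (A ⊗ B) @ x without forming the n*m x n*m Kronecker product.
    n, m = A.shape[0], B.shape[0]
    V = x.reshape(n, m)              # row-major unflattening of x
    return (A @ V @ B.T).reshape(-1)

torch.manual_seed(0)
A, B = torch.randn(3, 3), torch.randn(4, 4)
x = torch.randn(12)

# Verify against the explicit Kronecker product (cheap at this toy size).
dense = torch.kron(A, B) @ x
assert torch.allclose(kron_matvec(A, B, x), dense, atol=1e-5)
```

This is the kind of exploitation linear_operator applies when the kernel matrix has known structure; KeOps, by contrast, accelerates generic pointwise kernel reductions that have no such algebraic structure, so the two approaches are complementary.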