Explore kernel matrices as linear operators via KeOps for scalability
Current State
Kernel matrices are currently implemented as (dense) arrays instead of linear operators.
Problem
This does not scale beyond datasets of roughly $n \approx 10{,}000$ points, since storing a dense $n \times n$ kernel matrix requires $\mathcal{O}(n^2)$ memory.
Suggestion
We could potentially scale any PN method that uses kernels to datasets of size $\gg 10{,}000$ if we implement `probnum.kernels` via KeOps.
The KeOps library lets you compute reductions of large arrays whose entries are given by a mathematical formula or a neural network.
It is perfectly suited to the computation of kernel matrix-vector products, K-nearest neighbors queries, N-body interactions, point cloud convolutions and the associated gradients. Crucially, it performs well even when the corresponding kernel or distance matrices do not fit into the RAM or GPU memory.
KeOps kernel operations can be wrapped as SciPy `LinearOperator`s: http://www.kernel-operations.io/keops/_auto_tutorials/backends/plot_scipy.html
To Do
- [ ] Create a small prototype (a kernel class) using KeOps that implements matrix-vector multiplication with a kernel matrix of dimension 100,000 or more
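As a starting point that does not yet depend on KeOps, the prototype's interface could look like the following sketch. The class name `LazyGaussianKernel` and the blocked NumPy matvec are illustrative stand-ins; a KeOps-backed version would replace `_matvec` with a symbolic reduction while keeping the same `LinearOperator` interface:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator
from scipy.spatial.distance import cdist


class LazyGaussianKernel(LinearOperator):
    """Gaussian kernel matrix K(x, x) exposed as a SciPy LinearOperator.

    The full n x n matrix is never stored: matvecs are computed in row
    blocks, so peak memory is O(block_size * n) instead of O(n^2).
    """

    def __init__(self, x, sigma=1.0, block_size=1024):
        self.x = np.ascontiguousarray(x, dtype=np.float64)
        self.sigma = sigma
        self.block_size = block_size
        n = self.x.shape[0]
        super().__init__(dtype=self.x.dtype, shape=(n, n))

    def _matvec(self, v):
        v = np.ravel(v)
        out = np.empty_like(v)
        for start in range(0, self.shape[0], self.block_size):
            stop = min(start + self.block_size, self.shape[0])
            # Dense kernel evaluation for one block of rows only.
            sq = cdist(self.x[start:stop], self.x, "sqeuclidean")
            out[start:stop] = np.exp(-sq / (2 * self.sigma ** 2)) @ v
        return out
```

Usage would be e.g. `LazyGaussianKernel(x) @ v` for `x` with 100,000 rows; swapping the blocked loop for a KeOps `LazyTensor` reduction would then add GPU support and remove the remaining O(block_size * n) intermediate.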