AbstractGPs.jl
Kernel for multidimensional output
I wanted to implement the Gaussian Process Implicit Surface (GPIS) algorithm in Julia using your package but was unable to figure out how I could code a kernel for multi-dimensional outputs.
So far, from what I have read, I understand that I would have to create a custom kernel and provide a "kappa" function to define the covariance matrix. From the docs and the (excellent) examples you provided, I was able to test and implement a 1D version of GPIS, but I am not sure how I could implement a 3D version of the same. I intend to implement an algorithm that roughly mimics this. The input is a vector of 3D vectors (the 3D positions of points in space) and the output for each point is a 4D vector (the point's 3D normal vector plus the signed-distance-field value at that point). So, for 25 points in space, I would have a kernel matrix of size 100x100.
Any help is much appreciated -- I have been struggling with this for over a week now, which is why I am asking here. I apologize if the solution is obvious/trivial (if so, please do point me to the relevant resource). Thank you!
Hi @jakiroshah . Have you had a chance to review the Multi-Output section of the KernelFunctions.jl docs?
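For reference, the multi-output input format from that docs section looks roughly like this. This is a minimal sketch assuming KernelFunctions.jl's MOInput and IndependentMOKernel; the SqExponentialKernel is only a placeholder base kernel, not the GPIS one:

using KernelFunctions

x = [rand(3) for _ in 1:25]        # 25 input points in R^3
x_mo = MOInput(x, 4)               # pairs each point with each of the 4 output dimensions
k = IndependentMOKernel(SqExponentialKernel())
K = kernelmatrix(k, x_mo)          # size(K) == (100, 100)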
Thank you so much for directing me to that section -- I had completely missed it! Your answer addresses all the issues I posted earlier, but I am still stuck on what I am attempting to implement.
I am still unclear about how I can correctly implement a 4x4 kernel block given 3D inputs (for a 4D output). Specifically, I am looking to implement a kernel defined by the following equation, i.e. the covariance between two input vectors x1 and x2 with a hyperparameter R (denoted psi in the original equation):

k(x1, x2) = 2*d^3 - 3*R*d^2 + R^3,  where d = norm(x2 - x1)

So, the full 4x4 kernel block must look like the following (MATLAB-style):
d = norm(x2 - x1);   % Euclidean distance between the inputs
diff1 = x2(1) - x1(1);
diff2 = x2(2) - x1(2);
diff3 = x2(3) - x1(3);
k = zeros(4,4);
% SDF-SDF covariance: the scalar thin-plate kernel itself
k(1,1) = 2*d^3 - 3*R*d^2 + R^3;
% SDF-gradient covariances (first derivatives of k(1,1))
k(1,2) = (6*d - 6*R) * diff1;
k(1,3) = (6*d - 6*R) * diff2;
k(1,4) = (6*d - 6*R) * diff3;
k(2,1) = -k(1,2);
k(3,1) = -k(1,3);
k(4,1) = -k(1,4);
% gradient-gradient covariances (second derivatives)
k(2,2) = -6*diff1*diff1/d + 6*R - 6*d;
k(2,3) = -6*diff2*diff1/d;
k(2,4) = -6*diff3*diff1/d;
k(3,2) = -6*diff1*diff2/d;
k(3,3) = -6*diff2*diff2/d + 6*R - 6*d;
k(3,4) = -6*diff3*diff2/d;
k(4,2) = -6*diff1*diff3/d;
k(4,3) = -6*diff2*diff3/d;
k(4,4) = -6*diff3*diff3/d + 6*R - 6*d;
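For what it's worth, a direct Julia translation of that block could look like the following. This is only a sketch; gpis_block is a hypothetical helper name, and d = norm(x2 - x1) is inferred from the k(1,1) entry. Note that the gradient terms divide by d, so coincident inputs (d == 0) would need special handling:

using LinearAlgebra

# Hypothetical helper: the 4x4 covariance block between points x1 and x2
# for the thin-plate GPIS kernel with hyperparameter R.
function gpis_block(x1::AbstractVector, x2::AbstractVector, R::Real)
    diff = x2 - x1
    d = norm(diff)
    k = zeros(4, 4)
    k[1, 1] = 2d^3 - 3R * d^2 + R^3            # SDF-SDF term
    for i in 1:3
        k[1, i + 1] = (6d - 6R) * diff[i]      # SDF-gradient terms
        k[i + 1, 1] = -k[1, i + 1]
        for j in 1:3
            k[i + 1, j + 1] = -6 * diff[i] * diff[j] / d   # gradient-gradient terms
        end
        k[i + 1, i + 1] += 6R - 6d             # extra term on the diagonal
    end
    return k
end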
From what I understood, the MOKernel still performs 1D computations in the kernel's kappa function (since the input to kappa is a Real and not a Vector). If my assumption is correct, would I have to independently hardcode each of the aforementioned equations in the MOKernel? What is the right way of implementing this?
Again, I apologize if what I am asking is trivial. Thank you for your help with this!
Ah, so none of the kernels have to perform 1D computations in the kappa function; that's just a short-hand for when it's convenient.
I think you're probably better off implementing (::Kernel)(x, y) and kernelmatrix etc. directly. Maybe take a look at the implementation of the IndependentMOKernel and see if that makes sense?
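To make that suggestion concrete, here is a rough sketch of a custom multi-output kernel built on the gpis_block helper above. It assumes KernelFunctions' convention that multi-output inputs are (point, output_index) tuples (which is what MOInput produces); the GPISKernel name is hypothetical:

using KernelFunctions

# Hypothetical kernel type wrapping the hyperparameter R
struct GPISKernel{T<:Real} <: KernelFunctions.MOKernel
    R::T
end

# Evaluate one matrix entry: pick out element (px, py) of the 4x4 block.
function (k::GPISKernel)((x, px)::Tuple{Any,Int}, (y, py)::Tuple{Any,Int})
    return gpis_block(x, y, k.R)[px, py]
end

# The generic kernelmatrix fallback should then work out of the box:
#   xs = MOInput([rand(3) for _ in 1:25], 4)
#   K  = kernelmatrix(GPISKernel(1.0), xs)   # 100x100
# For speed, you could specialize kernelmatrix to compute each 4x4
# block once, along the lines of the IndependentMOKernel implementation.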
edit: sorry for the slow response times! They should be improved over the next couple of weeks.