stanbiryukov
Yes, I was worried it would require a different loss. Your first option sounds reasonable. In practice I wonder how much the GP prior over all would help or potentially make...
Thanks for the direction, @jacobrgardner. This makes sense. I've drafted a more fleshed-out example below. If I train with the default exact GP framework in `Simple_GP_Regression.ipynb`, I mostly...
Here's a better working example where it appears the parameters are being learned with SGD. However, I'm puzzled by what looks like `x2` changing shape in the forward function when...
I realized I can default to the built-in `self.covar_dist` instead of building the distance matrix by hand. Nonetheless, @jacobrgardner, I'm not sure what changes in eval mode to make `x2` a non-conforming shape....
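For context, the hand-rolled computation that `self.covar_dist` replaces can be sketched as a pairwise Euclidean distance matrix. This is a minimal numpy illustration of the shape logic only (the function name `pairwise_dist` and the sizes are my own, not from the issue); GPyTorch's actual `Kernel.covar_dist` also handles batching and other distance modes.

```python
import numpy as np

def pairwise_dist(x1, x2):
    """Pairwise Euclidean distances: x1 is (n, d), x2 is (m, d) -> (n, m).

    This is roughly what a custom kernel would otherwise rebuild by hand,
    which is why deferring to the built-in covar_dist is simpler.
    """
    # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2, clipped for numerical safety
    sq = (x1 ** 2).sum(-1)[:, None] - 2.0 * (x1 @ x2.T) + (x2 ** 2).sum(-1)[None, :]
    return np.sqrt(np.clip(sq, 0.0, None))

x1 = np.random.randn(100, 3)  # e.g. 100 training points
x2 = np.random.randn(51, 3)   # e.g. 51 test points
print(pairwise_dist(x1, x2).shape)  # (100, 51)
```

The key point is that the output shape is (n, m), driven entirely by the two inputs; nothing in the kernel should assume `x2` has the training-set size.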
This must be related to the concatenation here, given my training data has length 100 and my test data length 51: https://github.com/cornellius-gp/gpytorch/blob/90489c5b41ec79f646cae26332ecbad2ce1fac5d/gpytorch/models/exact_gp.py#L297
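The shape change this concatenation causes can be sketched as follows (sizes taken from the comment above; the rest is my assumption about why a size-dependent `forward` breaks in eval mode):

```python
import numpy as np

# In eval mode, ExactGP concatenates the train and test inputs before
# calling the kernel, so the kernel's forward sees the joint set.
train_x = np.random.randn(100, 3)  # 100 training points
test_x = np.random.randn(51, 3)    # 51 test points

full_x = np.concatenate([train_x, test_x], axis=0)
print(full_x.shape)  # (151, 3)

# The kernel is then evaluated on the joint inputs, producing a
# (151, 151) covariance rather than the (100, 100) seen in training.
# Any layer inside the kernel that hard-codes the training size
# (e.g. a fixed-width reshape) will therefore fail here.
```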
@jacobrgardner It seems that embedding a neural network within a `gpytorch.kernels.Kernel` is leading to issues with posterior inference. Are there components that I'm missing in the above example or is...
Hi @jacobrgardner totally understand - thanks for the help as always. I'll see if I can make it to ICML this year. The unsqueeze makes sense so thanks for guiding...
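If I understood the unsqueeze suggestion correctly, it adds a leading batch dimension so the batched ops inside the kernel see a consistent shape. A minimal sketch with numpy's `expand_dims` (the analogue of torch's `unsqueeze`; the sizes are illustrative):

```python
import numpy as np

x = np.random.randn(151, 3)  # joint train+test inputs, shape (n, d)

# torch's x.unsqueeze(0) corresponds to np.expand_dims(x, 0): it adds a
# leading batch dimension, giving (1, n, d), so a feature extractor that
# expects batched input inside the kernel broadcasts correctly.
x_batched = np.expand_dims(x, 0)
print(x_batched.shape)  # (1, 151, 3)
```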
Hi @jacobrgardner - with version 1.0.1 of gpytorch, I'm seeing a lazy evaluation error that I'm having trouble tracing back to the root cause. Any suggestions on how to now...
I was curious if/how those flags were used. Unfortunately, adding that didn't seem to work - I received the exact same error.
@jacobrgardner Sure - here's a colab notebook with a MWE: https://colab.research.google.com/drive/1ymAidoiNmK73pa5xzG4BMnUAxX9wjZDD