AbstractGPs.jl
Deep kernel learning example: performance
It's currently a rather slow notebook.
For example, it seems rather inefficient that we have to recompute `posterior(fx, y_train)` from scratch every time we want to plot. Isn't there some way to get it once, together with the gradients?
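For reference, here is a rough sketch of the shape of the loop in question (the data, `make_kernel`, and the optimiser step are toy stand-ins, not the notebook's actual code): the plotting branch repeats the Cholesky factorisation that `logpdf` already performed inside the gradient call.

```julia
using AbstractGPs, Zygote

# Toy stand-ins; in the real example `make_kernel` wraps a neural-network
# feature map (the deep kernel), and the optimiser is more sophisticated.
x_train = rand(20)
y_train = sin.(2π .* x_train) .+ 0.1 .* randn(20)
θ = [0.0]                                  # log lengthscale, as a toy parameter
make_kernel(θ) = with_lengthscale(SEKernel(), exp(θ[1]))
noise_var = 0.1

for epoch in 1:100
    grads = Zygote.gradient(θ) do θ
        fx = GP(make_kernel(θ))(x_train, noise_var)
        -logpdf(fx, y_train)               # factorises cov(fx) internally
    end
    θ .-= 0.01 .* grads[1]                 # plain gradient step
    if epoch % 25 == 0
        fx = GP(make_kernel(θ))(x_train, noise_var)
        post = posterior(fx, y_train)      # factorises the same matrix again
        # ... plot the marginals of post here ...
    end
end
```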
By this, do you mean the fact that we have to compute both the log marginal likelihood and the posterior each time we want to plot, meaning that we're probably doing roughly double the work we need to?
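If so, one possible direction would be something like the following sketch (a hypothetical helper, not part of the AbstractGPs API): factorise the covariance once and derive both the log marginal likelihood and the quantities the posterior needs from that single factorisation.

```julia
using AbstractGPs, LinearAlgebra, Statistics

# Hypothetical helper: one Cholesky factorisation serves both the log
# marginal likelihood and the posterior's cached quantities.
function lml_and_cache(fx, y)
    δ = y - mean(fx)                       # mean(fx), cov(fx): FiniteGP interface
    C = cholesky(Symmetric(cov(fx)))       # the O(n³) step, done exactly once
    α = C \ δ
    lml = -(length(y) * log(2π) + logdet(C) + dot(δ, α)) / 2
    return lml, C, α                       # C and α determine the posterior
end
```

As far as I can tell, `posterior(fx, y_train)` caches essentially these same `C` and `α` internally, so in principle `logpdf` and `posterior` could share one factorisation; whether that plays nicely with taking gradients through `logpdf` is the open question.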