
Deep kernel learning example: performance

Open · st-- opened this issue 3 years ago · 1 comment

It's currently a rather slow notebook.

For example, it seems rather inefficient that we have to compute posterior(fx, y_train) all over whenever we want to plot... isn't there some way to get it once together with the gradients?

st-- · Mar 28 '22 14:03
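A minimal sketch of the pattern being asked about, assuming a fitted kernel `k`, training data `x_train` / `y_train`, and observation noise `noise_var` (placeholder names, not the notebook's actual variables): the log marginal likelihood and the posterior are computed once, and the posterior is then reused for every plot rather than being rebuilt inside the plotting code.

```julia
using AbstractGPs

f = GP(k)                      # prior GP with the learned (deep) kernel
fx = f(x_train, noise_var)     # finite projection at the training inputs

lml = logpdf(fx, y_train)      # log marginal likelihood (the quantity being optimised)
post = posterior(fx, y_train)  # posterior GP, computed once

# `post` can now be reused for any number of plots without refitting.
x_plot = range(minimum(x_train), maximum(x_train); length=200)
m, v = mean_and_var(post(x_plot, noise_var))
```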

> For example, it seems rather inefficient that we have to compute posterior(fx, y_train) all over whenever we want to plot... isn't there some way to get it once together with the gradients?

By this, do you mean the fact that we have to both compute the log marginal likelihood and the posterior each time that we want to plot, meaning that we're probably doing roughly double the amount of work that we need to?

willtebbutt · Mar 28 '22 14:03
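For concreteness, a hedged sketch of one way to avoid that doubled work (hypothetical kernel parameterisation and optimiser, not the notebook's actual model, and `x_train` / `y_train` assumed in scope): only the log marginal likelihood and its gradient are evaluated inside the training loop, and the posterior, with its own Cholesky factorisation, is formed a single time afterwards for plotting.

```julia
using AbstractGPs, KernelFunctions, Zygote

# Hypothetical unconstrained parameters; the notebook itself uses a neural-network
# feature extractor, but the lml-only training loop illustrates the same idea.
build_kernel(θ) = exp(2 * θ.log_scale) *
                  with_lengthscale(SqExponentialKernel(), exp(θ.log_lengthscale))

# Training objective: negative log marginal likelihood only; no posterior is built here.
loss(θ) = -logpdf(GP(build_kernel(θ))(x_train, exp(2 * θ.log_noise)), y_train)

function train(θ; steps=100, lr=1e-2)
    for _ in 1:steps
        g = Zygote.gradient(loss, θ)[1]
        θ = map((p, dp) -> p - lr * dp, θ, g)  # plain gradient-descent step
    end
    return θ
end

θ = train((log_scale = 0.0, log_lengthscale = 0.0, log_noise = log(0.1)))

# Posterior (and its Cholesky) computed exactly once, after optimisation.
post = posterior(GP(build_kernel(θ))(x_train, exp(2 * θ.log_noise)), y_train)
```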