Currently (v0.6), the `__call__` of a mean function expects a full (N x D) data matrix. This is inconsistent with kernels, whose `__call__` simply evaluates on single points...
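A minimal sketch of the inconsistency, using hypothetical stand-in classes (these are not the library's actual implementations, only an illustration of the two call conventions):

```python
import numpy as np

class Kernel:
    """Evaluates on a pair of single D-dimensional points (the kernel convention)."""
    def __call__(self, x, y):
        # Toy RBF on two individual points -> scalar
        return float(np.exp(-0.5 * np.sum((x - y) ** 2)))

class MeanFunction:
    """As described above, currently expects a full (N x D) data matrix."""
    def __call__(self, X):
        # Zero mean: one value per *row* of the matrix, not per point
        return np.zeros(X.shape[0])

x = np.array([0.0, 1.0])       # single point, D = 2
X = np.stack([x, x + 1.0])     # data matrix, N = 2

k = Kernel()
m = MeanFunction()

print(k(x, x))   # kernel: two single points -> scalar
print(m(X))      # mean function: whole matrix -> length-N vector
# A pointwise mean consistent with the kernel API would instead be m(x) -> scalar.
```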
Deprecates the `approx_lml` defined in this package in favour of `AbstractGPs.approx_log_evidence` (see https://github.com/JuliaGaussianProcesses/AbstractGPs.jl/pull/361).
Include e.g. why we have to pass a jitter to `LatentGP` (see [comment](https://github.com/JuliaGaussianProcesses/ApproximateGPs.jl/pull/59/files#r715694354))
We currently don't have any tests that a non-zero prior mean actually works.
- [X] SparseVariationalApproximation: supported (with a bug fixed in #86)
- [ ] LaplaceApproximation: might or might...
This is a very basic version to demonstrate proof-of-concept. See #92 for missing features.
Basic tasks:
- [ ] EP marginal likelihood approximation (RW5) & tests for optimising hyperparameters
- [ ] more efficient prediction (RW3) instead of piggybacking on `SVGP`
- [ ] ...
Would also be nice to add plots that compare the different approximate posteriors.
https://github.com/JuliaStats/Distributions.jl/pull/1536 proposes different parametrizations of the NegativeBinomial likelihood, particularly the scale-shape parametrization that corresponds to the NBParamMean types introduced in #80. Once that PR is released as part of Distributions.jl,...
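As a reminder of the relationship between parametrizations (these are standard NegativeBinomial identities, not taken from the PR itself): with shape $r$ and success probability $p$, a mean-based parametrization with mean $\mu$ satisfies

```latex
p = \frac{r}{r + \mu}, \qquad
\mathbb{E}[y] = \frac{r(1-p)}{p} = \mu, \qquad
\operatorname{Var}[y] = \frac{r(1-p)}{p^2} = \mu + \frac{\mu^2}{r}.
```

so the mean-based types can be implemented as a thin reparametrization on top of the classical $(r, p)$ constructor.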
Can't we have a unique `(l::AbstractLikelihood)(fs::AbstractVector{`...