Laplace
Example: continual learning on toy data
Simple example showing continual learning with the Laplace approximation on toy data.
I plan to implement the following two functionalities together with #40:

- add an `online` (or `keep_H`) flag to `.fit(..)` that maintains the Hessian approximation and can thus be used for continual learning
- add `.log_prob(theta)` as a method to `BaseLaplace`, which allows computing the regularizer in a CL setting

This should make simple CL approaches very easy to implement, or would we need anything else? Like this, we can just do
```python
la = Laplace(model, ...)  # init to prior
for task in tasks:
    optimize(loss(task) + la.log_prob(theta), theta)
    # optionally interleaved or after training:
    optimize(la.log_marginal_likelihood(), hyperparams)
    la.fit(task, online=True)
```
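To make the loop above concrete, here is a self-contained toy sketch of the two proposed pieces: online accumulation of the Hessian across tasks, and a `log_prob(theta)` regularizer centered at the previous MAP. The class `ToyDiagLaplace`, the helper `train_task`, and the closed-form diagonal Hessian are hypothetical stand-ins for illustration (a diagonal approximation on 1D linear regression with NumPy), not the library's actual implementation:

```python
import numpy as np

class ToyDiagLaplace:
    """Hypothetical diagonal-Laplace stand-in mimicking the proposed
    fit(..., online=True) / log_prob(theta) interface."""

    def __init__(self, dim, prior_precision=1.0):
        self.mean = np.zeros(dim)                       # MAP of past tasks
        self.precision = np.full(dim, prior_precision)  # prior + accumulated Hessian

    def log_prob(self, theta):
        # Gaussian log-density around the running MAP, up to an additive constant.
        d = theta - self.mean
        return -0.5 * float(np.sum(self.precision * d * d))

    def grad_log_prob(self, theta):
        return -self.precision * (theta - self.mean)

    def fit(self, X, theta, online=True):
        # For a linear model with squared loss, the diagonal of the
        # data-term Hessian is sum_i x_i^2; online=True accumulates it.
        h = np.sum(X * X, axis=0)
        self.precision = (self.precision + h) if online else h
        self.mean = theta.copy()

def train_task(X, y, la, theta, lr=0.01, steps=800):
    # Gradient descent on squared error minus the Laplace regularizer.
    for _ in range(steps):
        grad = X.T @ (X @ theta - y) - la.grad_log_prob(theta)
        theta = theta - lr * grad
    return theta

true_w = 2.0
tasks = [np.linspace(-1.0, 1.0, 20).reshape(-1, 1),   # task 1 inputs
         np.linspace(1.0, 2.0, 20).reshape(-1, 1)]    # task 2 inputs

la = ToyDiagLaplace(dim=1)         # initialized to the prior
theta = np.zeros(1)
for X in tasks:
    y = true_w * X[:, 0]
    theta = train_task(X, y, la, theta)
    la.fit(X, theta, online=True)  # keep accumulating the Hessian

print(theta)  # final estimate stays close to true_w = 2.0
```

The regularizer anchors later tasks to earlier solutions in proportion to the accumulated precision, which is exactly the role `la.log_prob(theta)` plays in the loop above.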
Do you see potential problems with this?
No, this looks exactly like what we need for an improved CL interface. I'll look into the details when the PR is ready.