Alex Immer
Currently, [BCELoss](https://pytorch.org/docs/master/generated/torch.nn.BCELoss.html), where the neural network maps a single example to a scalar and a batch to a vector, is not supported if I am not mistaken. Therefore, for...
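Until the scalar-output case is supported, one workaround is to express the same binary classifier with two output logits and `CrossEntropyLoss`, which the library's classification path already handles. A minimal sketch (the networks and data here are placeholders):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Scalar-logit formulation: one output per example, needs BCE-style loss.
scalar_net = nn.Linear(4, 1)
# Two-logit formulation of the same binary task: works with CrossEntropyLoss.
two_logit_net = nn.Linear(4, 2)

X = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))

# BCE formulation (the currently unsupported case); targets must be float.
bce = nn.BCEWithLogitsLoss()(scalar_net(X).squeeze(-1), y.float())

# Equivalent CE formulation; targets stay as integer class indices.
ce = nn.CrossEntropyLoss()(two_logit_net(X), y)
```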
Enables regression and provides a new interface for several methods that need to be synced (corresponds to #44). There are two key quantities that require more investigation and tests: - [...
After neural network training, one can find a more appropriate stationary point of the linearized model or of the last layer, as proposed in Sec. 3.2 [here](https://arxiv.org/pdf/2008.08400.pdf). This can improve the...
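A minimal sketch of the last-layer variant of this refinement: freeze the feature extractor after full training and continue optimizing only the final linear layer toward a stationary point. Model, data, and hyperparameters below are placeholders, not the library's API:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))
X, y = torch.randn(32, 3), torch.randn(32, 1)
loss_fn = nn.MSELoss()

# Freeze everything except the last layer.
for p in model[:-1].parameters():
    p.requires_grad_(False)

init_loss = float(loss_fn(model(X), y))

# Refine only the last layer so it sits (close to) a stationary point
# of the fixed-feature model.
opt = torch.optim.Adam(model[-1].parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

final_loss = float(loss_fn(model(X), y))
```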
Jacobians can also be computed naively for general models, using either `pytorch-functional` or loops. This should be an option if the layers cannot be extended using either backpack...
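The loop-based fallback can be sketched as follows: one backward pass per example and output dimension yields one row of the per-sample Jacobian of the network outputs with respect to the parameters. It works for any model, at the cost of many backward passes:

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(3, 2)
params = [p for p in model.parameters() if p.requires_grad]
n_params = sum(p.numel() for p in params)

X = torch.randn(4, 3)

def jacobian_naive(model, X):
    jacs = []
    for x in X:                      # loop over the batch
        f = model(x)                 # outputs of shape (n_outputs,)
        rows = []
        for k in range(f.shape[0]):  # loop over output dimensions
            grads = torch.autograd.grad(f[k], params, retain_graph=True)
            rows.append(torch.cat([g.reshape(-1) for g in grads]))
        jacs.append(torch.stack(rows))
    return torch.stack(jacs)         # shape (batch, n_outputs, n_params)

J = jacobian_naive(model, X)
```

For a linear model the Jacobian with respect to the weight row is just the input, which makes this easy to sanity-check.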
Parts of the methods and classes implemented in the library are proposed in different papers. Instead of keeping a single reference list in the README, we could therefore add references...
We don't really have any style guide, and this would be the easiest option as it is auto-enforced by running `black laplace` after installing it. To discuss, maybe: - default 100 line width? -...
The current version can be found [here](https://github.com/kazukiosawa/asdfghjkl/tree/0.1). For example, Kazuki Osawa mentioned that the `data_average` parameter now defaults to `True`, but we require `False` for a proper Hessian approximation.
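The reason averaging matters can be seen with a one-parameter example: taking the mean of the per-example losses scales the Hessian by 1/N, whereas the Laplace approximation needs the curvature of the *summed* log-likelihood. A small sketch (unrelated to `asdfghjkl`'s internals):

```python
import torch

torch.manual_seed(0)
y = torch.randn(10)
theta = torch.zeros(1, requires_grad=True)

def hessian(loss):
    # second derivative of a scalar loss wrt the single parameter
    (g,) = torch.autograd.grad(loss, theta, create_graph=True)
    (h,) = torch.autograd.grad(g.sum(), theta)
    return h.item()

loss_sum = ((theta - y) ** 2).sum()    # analogue of data_average=False
loss_mean = ((theta - y) ** 2).mean()  # analogue of data_average=True

h_sum, h_mean = hessian(loss_sum), hessian(loss_mean)
# h_mean equals h_sum / N, i.e. the curvature is shrunk by the batch size.
```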
This would allow implementing priors other than Gaussian, where the attribute `.delta` or `.prior_prec` simply returns the second derivative with respect to the NN parameters and can be passed into the Laplace class....
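A hypothetical sketch of what such a prior interface could look like: each prior exposes the curvature of its negative log density, which for the Gaussian is the constant precision and for, e.g., a Student-t prior depends on the parameter value. Class and attribute names below are illustrative, not the library's API:

```python
import torch

class GaussianPrior:
    def __init__(self, prec):
        self.prec = prec

    def neg_log_prob(self, theta):
        return 0.5 * self.prec * (theta ** 2).sum()

    @property
    def prior_prec(self):
        # second derivative of the negative log density: constant for a Gaussian
        return self.prec

class StudentTPrior:
    def __init__(self, df):
        self.df = df

    def neg_log_prob(self, theta):
        return 0.5 * (self.df + 1) * torch.log1p(theta ** 2 / self.df).sum()

    def prior_prec(self, theta):
        # curvature of the negative log density, per parameter:
        # d^2/dθ^2 [0.5 (ν+1) log(1 + θ²/ν)] = (ν+1)(ν − θ²) / (ν + θ²)²
        return (self.df + 1) * (self.df - theta ** 2) / (self.df + theta ** 2) ** 2

theta = torch.zeros(3)
gauss_prec = GaussianPrior(prec=1.0).prior_prec
t_prec = StudentTPrior(df=3.0).prior_prec(theta)
```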
The method proposed by [Kwon et al](https://openreview.net/pdf?id=Sk_P2Q9sG) should be implemented for the MC predictives.
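For reference, the decomposition from that paper splits the MC predictive covariance of the softmax outputs into an aleatoric and an epistemic part. A sketch for a single input, assuming T Monte Carlo probability samples are already available:

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 100, 3  # MC samples, number of classes

# Stand-in MC softmax samples p_t (in practice these come from weight samples).
logits = rng.normal(size=(T, K))
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)

p_bar = probs.mean(0)
# Aleatoric part: mean_t [ diag(p_t) - p_t p_t^T ]
aleatoric = np.mean([np.diag(p) - np.outer(p, p) for p in probs], axis=0)
# Epistemic part: mean_t [ (p_t - p_bar)(p_t - p_bar)^T ]
centered = probs - p_bar
epistemic = np.einsum('ti,tj->ij', centered, centered) / T

total = aleatoric + epistemic
```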
Probably subclass from the torch criteria and keep module parameters for library-specific functions. Additionally, one could subclass from torch distributions for log probabilities and implement the predictive, etc.
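A hypothetical sketch of this idea: a Gaussian likelihood that subclasses a torch criterion, so training code can call it as a plain loss, while also exposing log probabilities and a predictive via `torch.distributions`. Names and signatures are illustrative, not the library's API:

```python
import torch
import torch.nn as nn
from torch import distributions as dists

class GaussianLikelihood(nn.MSELoss):
    """Usable as a torch criterion, with distribution-based extras."""

    def __init__(self, sigma=1.0):
        super().__init__(reduction='sum')
        self.sigma = sigma

    def log_prob(self, f, y):
        # exact Gaussian log-likelihood via torch.distributions
        return dists.Normal(f, self.sigma).log_prob(y).sum()

    def predictive(self, f_mean, f_var):
        # predictive: function-space variance plus observation noise
        return dists.Normal(f_mean, torch.sqrt(f_var + self.sigma ** 2))

lik = GaussianLikelihood(sigma=0.5)
f, y = torch.zeros(4), torch.zeros(4)
loss = lik(f, y)          # behaves like a plain torch criterion
lp = lik.log_prob(f, y)   # log probability through torch.distributions
pred = lik.predictive(f, torch.ones(4))
```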