Update `MetricWrapper` to work with the update/compute from torchmetrics
Currently, we cannot compute train metrics because doing so requires saving all the preds and targets over the epoch. The problem also affects the val and test sets, but since they're smaller, the effect is less noticeable.
To fix this, `MetricWrapper` should have an `update` method that takes the preds and target and calls the underlying `self.metric.update`, and a `compute` method that no longer takes the preds and target but instead calls `self.metric.compute`.
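A minimal sketch of what that split could look like, assuming the wrapper holds a single torchmetrics `Metric` in `self.metric` (names and signatures here are illustrative, not graphium's actual implementation):

```python
import torch
from torchmetrics import Metric


class MetricWrapper:
    """Hypothetical sketch of the update/compute split, delegating
    accumulation to the underlying torchmetrics ``Metric``."""

    def __init__(self, metric: Metric):
        self.metric = metric

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # Accumulate batch statistics inside the torchmetrics object;
        # nothing is stored on the wrapper itself.
        self.metric.update(preds, target)

    def compute(self) -> torch.Tensor:
        # Aggregate the state accumulated over the whole epoch.
        return self.metric.compute()

    def reset(self) -> None:
        # Clear the accumulated state between epochs.
        self.metric.reset()
```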
- [x] Add the `update` to the `MetricWrapper`
- [x] Modify the `MetricWrapper.compute` to work with the update
- [x] How to deal with missing labels? (see the sketch after this list)
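One common way to handle missing labels is to mask them out before calling `update`. This sketch assumes missing labels are encoded as NaN, which is a frequent convention but not confirmed to be graphium's; the helper name is hypothetical:

```python
import torch


def filter_nan_labels(preds: torch.Tensor, target: torch.Tensor):
    """Drop entries whose label is missing before calling ``update``.

    Assumes missing labels are encoded as NaN (a common convention,
    not necessarily graphium's).
    """
    mask = ~torch.isnan(target)
    return preds[mask], target[mask]


# Hypothetical usage inside MetricWrapper.update:
#   preds, target = filter_nan_labels(preds, target)
#   self.metric.update(preds, target)
```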
Also, all TorchMetrics in `spaces.py` should become their class equivalents rather than functions.
- [x] Change `spaces.py` to use classes rather than functions. Make sure the classes get initialized (see the sketch below).
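For illustration, here is what the functional-to-class switch might look like; the dictionary names and entries are assumptions, not the actual contents of `spaces.py`:

```python
import torchmetrics
from torchmetrics import functional as F

# Old, functional form: the dict values are functions called directly.
METRICS_FUNCTIONAL = {
    "mae": F.mean_absolute_error,
    "mse": F.mean_squared_error,
}

# New, class form: the dict values are Metric classes. Each class must
# be instantiated so every task gets its own stateful object with
# update()/compute().
METRICS_CLASSES = {
    "mae": torchmetrics.MeanAbsoluteError,
    "mse": torchmetrics.MeanSquaredError,
}

metric = METRICS_CLASSES["mae"]()  # initialize the class before use
```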
All metrics were moved to their class versions in PR #511.
Exceptions: AvPR and Auroc. Because they require saving all preds and labels, they can blow up GPU memory, especially with the mean-per-label option, which keeps thousands of vectors and runs into memory leaks. Instead, they are wrapped with `MetricToConcatenatedTorchMetrics`.
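The actual `MetricToConcatenatedTorchMetrics` API isn't shown in this issue; below is a minimal sketch of the general idea, assuming the wrapper moves each batch to CPU and runs the wrapped metric only once, on the concatenated tensors, at compute time:

```python
from functools import partial
from typing import Callable

import torch
from torchmetrics import Metric
from torchmetrics.functional import auroc


class ConcatenatedMetric(Metric):
    """Hypothetical stand-in for MetricToConcatenatedTorchMetrics:
    store each batch off the GPU and evaluate the wrapped metric on
    the full concatenated preds/targets at the end of the epoch."""

    def __init__(
        self,
        metric_fn: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],
    ):
        super().__init__()
        self.metric_fn = metric_fn
        self.add_state("preds", default=[], dist_reduce_fx="cat")
        self.add_state("target", default=[], dist_reduce_fx="cat")

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # Detach and move each batch to CPU before storing it, so GPU
        # memory is not held over the epoch.
        self.preds.append(preds.detach().cpu())
        self.target.append(target.detach().cpu())

    def compute(self) -> torch.Tensor:
        # Concatenate once, at the end of the epoch.
        return self.metric_fn(torch.cat(self.preds), torch.cat(self.target))


# Example: wrap functional AUROC (the ``task`` argument assumes
# torchmetrics >= 0.11).
metric = ConcatenatedMetric(partial(auroc, task="binary"))
```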