
Add metrics and test model tracking callbacks


I want to add support for metrics, and potentially even include some common ones, like FID, mi-FID, and KID, as well as segmentation metrics (for paired translation).

Additionally, by monitoring the losses and metrics, I want to be able to use fastai's built-in callbacks for saving the best model, early stopping, and reducing the LR on plateau.

This shouldn't be too hard to include. A major part of this feature is finding good PyTorch/numpy implementations of some of these metrics and getting them to work.
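For illustration, here's a minimal sketch of how those fastai tracking callbacks might be wired up, assuming `learn` is an existing Learner and that a metric named `fid` shows up in its recorder (both names are assumptions for this sketch, not the final API):

```python
import numpy as np
from fastai.callback.tracker import (
    SaveModelCallback, EarlyStoppingCallback, ReduceLROnPlateau)

# Assumed: `learn` is an existing fastai Learner whose recorder tracks
# a metric named 'fid'; lower is better, hence comp=np.less.
cbs = [
    SaveModelCallback(monitor='fid', comp=np.less, fname='best_model'),
    EarlyStoppingCallback(monitor='fid', comp=np.less, patience=3),
    ReduceLROnPlateau(monitor='fid', comp=np.less, patience=2),
]
learn.fit(10, 2e-4, cbs=cbs)
```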

tmabraham avatar Sep 02 '20 23:09 tmabraham

Do you know of Weights & Biases? Their library is super cool for tracking metrics and visualisations; check this out: https://github.com/neomatrix369/awesome-ai-ml-dl/blob/master/data/about-Weights-and-Biases.md You can find many such examples at the above link.

You get more than metrics and visualisations from W&B.

Happy to help with this one, let me know.

neomatrix369 avatar Sep 03 '20 06:09 neomatrix369

@neomatrix369 Thank you for the suggestion. I am aware of W&B, and in fact fastai has great support for W&B thanks to the work of Boris Dayma. I plan to look into using W&B for tracking my own experiments with image translation models. However, this enhancement issue isn't really focused on that, but rather on making it easy to use metrics with the models, as well as providing a few implementations of common metrics.

I already have some code for getting metrics to work well with these models and I will add it soon. Since the outputs are normalized images, there's a bit of extra code needed to transform them correctly and apply some sort of AverageMetric. Once I add this to the codebase, I will have to see which metrics to add to the library. If you're interested, I'll update this issue at that point, and I would be happy to take contributions for metrics.
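As a rough sketch of that idea (the normalization stats, the `psnr` example metric, and the helper names are all assumptions, not the final API):

```python
import torch
from fastai.learner import AvgMetric

# Assumed: generator outputs live in [-1, 1] after Normalize with
# mean=0.5, std=0.5; undo that before computing any image metric.
def denormalize(t: torch.Tensor) -> torch.Tensor:
    return (t * 0.5 + 0.5).clamp(0, 1)

# Hypothetical example metric computed on denormalized images.
def psnr(pred, targ):
    mse = ((denormalize(pred) - denormalize(targ)) ** 2).mean()
    return 10 * torch.log10(1.0 / mse)

metric = AvgMetric(psnr)  # averaged over batches, weighted by batch size
```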

tmabraham avatar Sep 03 '20 19:09 tmabraham

@tmabraham That's fine if you are already aware of the library, and I forgot it was integrated with fastai and others - so it's all taken care of.

Happy to follow this issue out of curiosity and to learn, but it sounds like you have many angles covered. If anything opens up, do let me know.

neomatrix369 avatar Sep 04 '20 09:09 neomatrix369

I have added FID (https://github.com/tmabraham/UPIT/commit/432f784a4ebcc88f7a5c76d4c4a629d8e484b74f) and tested it. Horse2Zebra FID reached ~91.7 with 10 epochs of training (here), which is close to the 89.7 reported here for a fully trained CycleGAN.
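For reference, the core of a FID computation over two sets of Inception-v3 activations looks roughly like this (a sketch of the standard formula, not the exact code in the linked commit):

```python
import numpy as np
from scipy import linalg

def fid_from_activations(act1: np.ndarray, act2: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two sets of
    Inception-v3 activations, each of shape (n_samples, n_features)."""
    mu1, mu2 = act1.mean(axis=0), act2.mean(axis=0)
    sigma1 = np.cov(act1, rowvar=False)
    sigma2 = np.cov(act2, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean))
```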

tmabraham avatar Feb 10 '21 00:02 tmabraham

It seems like SaveModelCallback works fine; it's just that its default of monitoring valid_loss obviously won't work when there is no validation loss.
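Concretely, overriding the monitored value should be enough, something like this (the 'fid' metric name is an assumption):

```python
import numpy as np
from fastai.callback.tracker import SaveModelCallback

# Monitor a tracked metric instead of the missing valid_loss;
# 'fid' is assumed to be a metric name the recorder knows about.
cb = SaveModelCallback(monitor='fid', comp=np.less)
```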

Apart from FID, I plan to implement the following metrics before closing this issue:

  • [ ] KID (a rough sketch follows this list)
  • [ ] Inception Score
  • [ ] LPIPS
  • [ ] Segmentation metrics
  • [ ] Comparison to paired data (e.g. MSE or MAE between the output and the input's paired ground truth)
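As an example of what's still to do, here is a minimal KID sketch (the unbiased MMD² estimate with the cubic polynomial kernel over Inception activations; the function name and interface are assumptions):

```python
import numpy as np

def kid_from_activations(act1: np.ndarray, act2: np.ndarray) -> float:
    """Unbiased MMD^2 with the polynomial kernel k(x, y) = (x.y/d + 1)^3,
    where act1 and act2 are (n_samples, d) Inception activations."""
    n, m = len(act1), len(act2)
    d = act1.shape[1]
    k = lambda a, b: (a @ b.T / d + 1) ** 3
    kxx, kyy, kxy = k(act1, act1), k(act2, act2), k(act1, act2)
    # Drop the diagonal terms for the unbiased within-set estimates.
    sum_xx = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    sum_yy = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return float(sum_xx + sum_yy - 2 * kxy.mean())
```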

tmabraham avatar Feb 13 '21 06:02 tmabraham