
Report all metrics at end of benchmark

Open guarin opened this issue 2 years ago • 1 comments

We should print all metrics at the end of a benchmark again to make them easier to extract from the logs.

TODO:

  • [ ] Collect metrics over a whole benchmark (online, knn, linear, and finetune eval)
  • [ ] Print metrics at end of benchmark script
  • [ ] Optional: Also print the metrics as a markdown table
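The optional markdown-table step could look something like the following sketch. The eval names and metric keys (`knn`, `linear`, `val_top1`, `val_top5`) and the numeric values are purely illustrative, not taken from the codebase:

```python
# Sketch: aggregate metrics per evaluation and print them as a markdown table.
# All names and values below are illustrative placeholders.

def print_metrics_table(metrics: dict[str, dict[str, float]]) -> None:
    """Print aggregated metrics as a markdown table, one row per eval."""
    # Union of all metric names across evals, used as table columns.
    columns = sorted({key for m in metrics.values() for key in m})
    print("| eval | " + " | ".join(columns) + " |")
    print("|" + "---|" * (len(columns) + 1))
    for name, m in metrics.items():
        row = " | ".join(f"{m[c]:.4f}" if c in m else "-" for c in columns)
        print(f"| {name} | {row} |")

# Example usage with placeholder values:
metrics = {
    "knn": {"val_top1": 0.651, "val_top5": 0.861},
    "linear": {"val_top1": 0.712, "val_top5": 0.903},
}
print_metrics_table(metrics)
```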

guarin avatar Jul 19 '23 09:07 guarin

@guarin I'm planning to take this up. It seems like a good first issue and I'd gain some understanding of the codebase. Could you assign it to me and also, maybe, point out a few entry points into the codebase to speed up my understanding? Thanks.

EricLiclair avatar Oct 19 '24 10:10 EricLiclair

Hi @EricLiclair, the idea is that the main.py script in the benchmarks should aggregate all evaluation metrics and print them as a table at the end. The relevant function calls are here: https://github.com/lightly-ai/lightly/blob/5ac38984d13f4053220fbc8c4b8a6eddf13131fc/benchmarks/imagenet/resnet50/main.py#L128-L170

So the knn_eval, linear_eval, and finetune_eval functions should each return their metrics instead of just printing them. See for example here: https://github.com/lightly-ai/lightly/blob/5ac38984d13f4053220fbc8c4b8a6eddf13131fc/benchmarks/imagenet/resnet50/knn_eval.py#L92-L93

guarin avatar Oct 21 '24 06:10 guarin

Hi @guarin, I've added a draft PR here (#1706) for imagenet/vitb16; requesting comments on the approach.

Based on your comments, I'll update it (if needed) and make similar changes for imagenet/resnet50. (Or let me know if I should open a separate PR for imagenet/resnet50.)

EricLiclair avatar Oct 22 '24 19:10 EricLiclair