
[Feature] Log metrics in test mode

Open mmeendez8 opened this issue 1 year ago • 8 comments

What is the feature?

I just noticed that when running mmseg with a pretrained model and a test set to evaluate its performance, the final metrics are not logged to my vis backend (MLflow).

I was exploring the source and noticed that the LoggerHook class is the one in charge of dumping metrics during training and eval.

I was wondering whether there is any reason why runner.visualizer.add_scalars() is not called in after_test_epoch.
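
For reference, here is a condensed paraphrase of the asymmetry I mean, based on my reading of LoggerHook (an illustrative sketch, not the verbatim mmengine source; the class name and simplified bodies are mine):

```python
# Simplified paraphrase of LoggerHook's val/test asymmetry
# (illustrative sketch, not the actual mmengine source).
from mmengine.hooks import Hook


class SimplifiedLoggerHook(Hook):
    def after_val_epoch(self, runner, metrics=None):
        tag, log_str = runner.log_processor.get_log_after_epoch(
            runner, len(runner.val_dataloader), 'val')
        runner.logger.info(log_str)
        # Validation metrics are forwarded to every configured
        # vis backend (MLflow, TensorBoard, ...).
        runner.visualizer.add_scalars(tag, step=runner.iter)

    def after_test_epoch(self, runner, metrics=None):
        tag, log_str = runner.log_processor.get_log_after_epoch(
            runner, len(runner.test_dataloader), 'test')
        runner.logger.info(log_str)
        # No visualizer.add_scalars(...) call here, so test metrics
        # never reach the vis backends.
```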

Any other context?

No response

mmeendez8 avatar Jan 28 '24 10:01 mmeendez8

@HAOCHENYE any news? Just trying to figure out whether I should patch this locally or send a PR here.

mmeendez8 avatar Feb 08 '24 08:02 mmeendez8

@HAOCHENYE would it be possible to get some feedback on this?

mmeendez8 avatar Mar 14 '24 08:03 mmeendez8

I also need an update on this!

fsbarros98 avatar Apr 19 '24 08:04 fsbarros98

Sorry for the late response. The reason for not calling add_scalars in after_test_epoch is that the test set typically does not have the ground truth, and we usually only calculate various metrics and statistics on the validation set.

HAOCHENYE avatar Apr 22 '24 03:04 HAOCHENYE

That is a somewhat valid point, but especially if I'm running a test.py script, I would expect the test metrics to be logged.

fsbarros98 avatar Apr 22 '24 07:04 fsbarros98

I see... So is your plan to assume that the test set does not have the ground truth, or should we find a way to extract and log it when the ground truth is present?

mmeendez8 avatar Apr 22 '24 08:04 mmeendez8

If ground truth is not present, do we even have metrics? I'm using this for MMagic, so in generation we also don't have ground truth, but we mainly compute metrics by comparing test features (generated samples) with train features... I believe that whenever metrics are calculated during testing, they should also be added to the visualizer.

fsbarros98 avatar Apr 22 '24 08:04 fsbarros98

Visualizer is a globally accessible object: you can get the current visualizer anywhere with visualizer = Visualizer.get_current_instance() and then call an interface such as visualizer.add_scalar() to record the information you want. You can implement this in a custom hook, or anywhere else you like (maybe model.xxx, metric.xxx ...).
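
A minimal sketch of that suggestion, assuming the standard Hook/HOOKS registration APIs; the hook name LogTestMetricsHook, the scalar filtering, and logging at step runner.iter are hypothetical choices for illustration, not an official mmengine component:

```python
# Hypothetical custom hook that forwards test metrics to the
# configured vis backends (e.g. MLflow).
from mmengine.hooks import Hook
from mmengine.registry import HOOKS
from mmengine.visualization import Visualizer


@HOOKS.register_module()
class LogTestMetricsHook(Hook):
    def after_test_epoch(self, runner, metrics=None):
        if not metrics:
            return
        visualizer = Visualizer.get_current_instance()
        # Keep only scalar values; depending on the evaluator,
        # metrics may also contain non-scalar entries.
        scalars = {k: v for k, v in metrics.items()
                   if isinstance(v, (int, float))}
        visualizer.add_scalars(scalars, step=runner.iter)
```

It can then be enabled from the config, e.g.:

```python
custom_hooks = [dict(type='LogTestMetricsHook')]
```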

HAOCHENYE avatar Apr 23 '24 15:04 HAOCHENYE