MB text logging
Hi, I think it should be possible to log metrics for every minibatch (MB) to a file with the TextLogger, not just the last MB. In the end, it's the user's decision whether to produce a really lengthy log file, and there are use cases for this behavior. If the option is turned off by default, it doesn't bother anyone who doesn't want it.
Say I want to track how long training on each minibatch takes, and I'm particularly interested in outliers in these timings. Then it'd be nice to have logs of every MB, and it takes only a simple script to go through these files. I know this is already possible through WandB and Tensorboard, but it'd be nice to also support it with simple text files.
I already have an implementation that simply implements after_training_iteration differently based on a boolean set at initialization, which defaults to False.
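A minimal sketch of what I mean (the class here is a stand-in, not the actual Avalanche `TextLogger`; the `log_every_iteration` flag name and the metric-dict shape are illustrative assumptions):

```python
import sys

class TextLogger:
    """Simplified stand-in for a text logger that normally reports
    only at epoch level."""

    def __init__(self, file=sys.stdout, log_every_iteration=False):
        self.file = file
        # Off by default, so existing users see no change in output.
        self.log_every_iteration = log_every_iteration

    def after_training_iteration(self, iteration, metrics):
        # Only emit per-minibatch lines when explicitly requested.
        if self.log_every_iteration:
            for name, value in metrics.items():
                print(f"iter {iteration}: {name} = {value:.4f}",
                      file=self.file)

    def after_training_epoch(self, epoch, metrics):
        # Epoch-level logging is unchanged either way.
        for name, value in metrics.items():
            print(f"epoch {epoch}: {name} = {value:.4f}", file=self.file)
```

With the flag off, `after_training_iteration` is a no-op, so the log file stays exactly as it is today.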
Let me know what you think :)
Hi @VerwimpEli , feel free to open a PR about this :smile:
Currently, metrics are logged:
- after train epoch
- after eval exp
- after eval
We might consider adding a pass-through flag to print incoming metrics directly in log_single_metric here: https://github.com/ContinualAI/avalanche/blob/d60eb90c8e71c6450a380b63d5f62b5a28d999d9/avalanche/logging/text_logging.py#L66
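Something along these lines, roughly (again a simplified stand-in, not the real `text_logging.py`; the `pass_through` flag name and the buffering details are assumptions for illustration):

```python
import sys

class TextLogger:
    """Stand-in showing a pass-through flag in log_single_metric."""

    def __init__(self, file=sys.stdout, pass_through=False):
        self.file = file
        self.pass_through = pass_through
        self._buffer = {}  # metrics held until the next phase boundary

    def log_single_metric(self, name, value, x_plot):
        if self.pass_through:
            # Print every incoming metric immediately, regardless of
            # phase (train/eval) or granularity (MB/epoch/experience).
            print(f"{name} [x={x_plot}] = {value}", file=self.file)
        else:
            # Default behavior: buffer, to be flushed after train
            # epoch / eval exp / eval as today.
            self._buffer[name] = value
```

The advantage is that this sits below all phases and granularities, so it covers minibatch timings without special-casing `after_training_iteration`.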
Adding a flag specific to after_training_iteration seems too narrow, because it would apply only to the training phase and to the iteration granularity. In addition, most metrics do not log at the MB level in the eval phase.
What do you think @AntonioCarta?
The problem is that minibatch metrics are very numerous, and the text logger is meant to be a coarse-level view of the metrics. I think it's better to manipulate the metric dictionary, or to use one of the other loggers, for this specific use case.
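For example, filtering the metric dictionary down to just the per-minibatch timings before handing it to a logger could look like this (the `"Time_MB"` key pattern is purely illustrative; actual Avalanche metric names differ):

```python
def filter_minibatch_timings(metric_dict, keyword="Time_MB"):
    """Keep only entries whose name contains the given keyword.

    This lets a user extract the handful of MB-level metrics they
    care about (e.g. timings) without flooding a text log with
    every minibatch metric.
    """
    return {name: value
            for name, value in metric_dict.items()
            if keyword in name}
```

The user keeps full control over what ends up in the file, and the TextLogger itself stays a coarse-grained view.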