mmengine
Update logger_hook.py
Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. If you do not understand some items, don't worry: just open the pull request and ask the maintainers for help. By the way, if you're not familiar with using pre-commit to fix lint issues or with adding unit tests, please refer to Contributing to OpenMMLab.
Motivation
I believe the metrics computed during a test run to evaluate a model are not being logged to the visualizer backends (e.g. MLflow).
Modification
The modification simply calls the visualizer's add_scalars method to log the test metrics.
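A minimal, self-contained sketch of the proposed change, assuming the hook's after_test_epoch receives the computed metrics and the runner exposes a visualizer. The stub classes below are illustrative stand-ins for mmengine's Visualizer and Runner; only the add_scalars call mirrors the actual API.

```python
class RecorderVisualizer:
    """Stub visualizer that records whatever is logged via add_scalars."""

    def __init__(self):
        self.logged = []

    def add_scalars(self, scalar_dict, step=0):
        # A real backend (e.g. MLflowVisBackend) would push these to MLflow.
        self.logged.append((dict(scalar_dict), step))


class LoggerHookSketch:
    """Illustrative hook: after_test_epoch forwards metrics to the visualizer."""

    def after_test_epoch(self, runner, metrics=None):
        if metrics:
            # The proposed one-line fix: hand the test metrics to the
            # visualizer instead of only emitting them as log text.
            runner.visualizer.add_scalars(metrics, step=runner.iter)


class DummyRunner:
    def __init__(self):
        self.visualizer = RecorderVisualizer()
        self.iter = 100


runner = DummyRunner()
LoggerHookSketch().after_test_epoch(runner, metrics={"accuracy": 0.93})
print(runner.visualizer.logged)
```

With this in place, any configured visualizer backend receives the test metrics as scalars instead of them appearing only in the text log.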
Use cases (Optional)
It would be desirable to visualize the test metrics on MLflow, for instance.
Checklist
- Pre-commit or other linting tools are used to fix the potential lint issues.
- The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
- If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDetection or MMPretrain.
- The documentation has been modified accordingly, like docstring or example tutorials.
Thanks for your contribution! Please sign the CLA first 😆. This modification could behave oddly if you use the TensorBoardVisbackend. I think it would be better to add a parameter to LoggerHook to control whether the visualizer is used to write data during testing.
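The maintainer's suggestion could be sketched as a constructor flag on the hook. Note that the flag name `enable_test_visualizer` is invented here for illustration, and the stub runner/visualizer classes stand in for mmengine's real ones; this is an assumption-laden sketch, not the actual implementation.

```python
class StubVisualizer:
    """Records scalars, standing in for mmengine's Visualizer."""

    def __init__(self):
        self.logged = []

    def add_scalars(self, scalar_dict, step=0):
        self.logged.append((dict(scalar_dict), step))


class StubRunner:
    def __init__(self):
        self.visualizer = StubVisualizer()
        self.iter = 50


class LoggerHookSketch:
    def __init__(self, enable_test_visualizer=True):
        # Hypothetical flag: when False, backends (e.g. TensorBoard) never
        # receive test metrics, avoiding unwanted scalar plots for test runs.
        self.enable_test_visualizer = enable_test_visualizer

    def after_test_epoch(self, runner, metrics=None):
        if metrics and self.enable_test_visualizer:
            runner.visualizer.add_scalars(metrics, step=runner.iter)


on = StubRunner()
LoggerHookSketch(enable_test_visualizer=True).after_test_epoch(on, {"mAP": 0.41})
off = StubRunner()
LoggerHookSketch(enable_test_visualizer=False).after_test_epoch(off, {"mAP": 0.41})
print(on.visualizer.logged, off.visualizer.logged)
```

Defaulting the flag to True would keep the new behavior for users who want it, while letting TensorBoard users opt out.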
Could you elaborate on what would be problematic for TensorBoardVisbackend? I'm not sure I follow; it also implements the VisBackend interface and provides the add_scalars method.
Looking forward to this update.