mmsegmentation
[Fix] Fix save best checkpoint bug when separate_eval is True
Motivation
When `save_best='mIoU'` (or another metric) is set in `cfg.evaluation` and `separate_eval=True`, the result metric keys change to `0_mIoU`, `1_mIoU`, ..., which causes an error when saving the best checkpoint.
Modification
- Calculate the average metric across datasets if the test datasets share the same dataset type, e.g. `results['mIoU'] = mean(results['0_mIoU'], results['1_mIoU'], ...)`
- Disable `save_best` and raise a warning if the test datasets have different dataset types
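The merging step above can be sketched as follows. This is a minimal illustration, not the actual code in `mmseg/apis/train.py`; the helper name `average_separate_eval_metrics` and its signature are hypothetical.

```python
from collections import defaultdict


def average_separate_eval_metrics(results, dataset_types):
    """Average per-dataset keys like '0_mIoU', '1_mIoU' into 'mIoU'.

    Hypothetical helper: `results` maps prefixed metric names to values,
    `dataset_types` lists the type of each separately evaluated dataset.
    Returns None when dataset types differ, mirroring the PR's behavior
    of disabling save_best and warning in that case.
    """
    if len(set(dataset_types)) != 1:
        # Mixed dataset types: averaging metrics is not meaningful.
        return None
    merged = defaultdict(list)
    for key, value in results.items():
        prefix, _, metric = key.partition('_')
        if prefix.isdigit():  # keys produced by separate_eval, e.g. '0_mIoU'
            merged[metric].append(value)
    averaged = dict(results)
    for metric, values in merged.items():
        # Un-prefixed key that save_best='mIoU' can look up.
        averaged[metric] = sum(values) / len(values)
    return averaged


# Example: two splits of the same dataset type evaluated separately.
results = {'0_mIoU': 0.40, '1_mIoU': 0.50}
print(average_separate_eval_metrics(results, ['ADE20KDataset'] * 2))
# → {'0_mIoU': 0.4, '1_mIoU': 0.5, 'mIoU': 0.45}
```

With the averaged `mIoU` key restored, the checkpoint hook can compare runs as it does in the non-separate case.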
Codecov Report
Attention: Patch coverage is 14.28571% with 6 lines in your changes missing coverage. Please review.
Project coverage is 90.25%. Comparing base (3432ea9) to head (5f94f91). Report is 162 commits behind head on master.
| Files | Patch % | Lines |
|---|---|---|
| mmseg/apis/train.py | 14.28% | 6 Missing :warning: |
Additional details and impacted files
```
@@            Coverage Diff             @@
##           master    #1461      +/-   ##
==========================================
- Coverage   90.31%   90.25%   -0.07%
==========================================
  Files         139      139
  Lines        8303     8309       +6
  Branches     1395     1397       +2
==========================================
  Hits         7499     7499
- Misses        567      573       +6
  Partials      237      237
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests | 90.25% <14.28%> (-0.07%) :arrow_down: | |
Flags with carried forward coverage won't be shown.