[Feature] Export test report from test result for analyzing confusion matrix and other metrics
Motivation
Classification models are often analyzed with a confusion matrix and per-class precision / recall / F1 scores. The existing analysis scripts cannot report these metrics for each class separately, so this PR adds a script that exports per-class metrics by reusing the existing evaluation functions.
Modification
Export a test report from the test results for analyzing the confusion matrix and other metrics of each class.
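For reference, the sketch below shows one way per-class metrics can be derived from predictions and ground-truth labels with plain NumPy. The function name `per_class_report` is illustrative only and is not the actual script added by this PR.

```python
# Minimal sketch (not the PR's script): derive a confusion matrix and
# per-class precision / recall / F1 from predictions and ground truth.
import numpy as np

def per_class_report(pred_labels, gt_labels, num_classes):
    """Return the confusion matrix and per-class P / R / F1 scores."""
    # confusion[i, j] counts samples of true class i predicted as class j.
    confusion = np.zeros((num_classes, num_classes), dtype=np.int64)
    for gt, pred in zip(gt_labels, pred_labels):
        confusion[gt, pred] += 1

    tp = np.diag(confusion).astype(np.float64)
    fp = confusion.sum(axis=0) - tp   # predicted as class c, but wrong
    fn = confusion.sum(axis=1) - tp   # true class c, but missed

    precision = tp / np.maximum(tp + fp, 1e-12)
    recall = tp / np.maximum(tp + fn, 1e-12)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return confusion, precision, recall, f1
```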
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.
:white_check_mark: mzr1996
:x: dwSun
dwSun seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.
Thank you for your contribution, please sign the CLA so we can review your PR.
Codecov Report
Merging #773 (f6cb826) into dev (702c196) will increase coverage by 0.29%. The diff coverage is n/a.
:exclamation: Current head f6cb826 differs from pull request most recent head ad91c72. Consider uploading reports for the commit ad91c72 to get more accurate results.
```diff
@@            Coverage Diff             @@
##              dev     #773      +/-   ##
==========================================
+ Coverage   86.68%   86.98%   +0.29%
==========================================
  Files         128      127       -1
  Lines        8255     8068     -187
  Branches     1422     1389      -33
==========================================
- Hits         7156     7018     -138
+ Misses        885      845      -40
+ Partials      214      205       -9
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests | 86.89% <ø> (+0.27%) | :arrow_up: |

Flags with carried forward coverage won't be shown. Click here to find out more.
| Impacted Files | Coverage Δ | |
|---|---|---|
| mmcls/apis/test.py | 23.93% <0.00%> (ø) | |
| mmcls/utils/logger.py | 100.00% <0.00%> (ø) | |
| mmcls/datasets/imagenet.py | 100.00% <0.00%> (ø) | |
| mmcls/core/optimizers/lamb.py | 80.30% <0.00%> (ø) | |
| mmcls/models/backbones/tnt.py | 99.08% <0.00%> (ø) | |
| mmcls/models/utils/helpers.py | 100.00% <0.00%> (ø) | |
| mmcls/models/utils/__init__.py | 100.00% <0.00%> (ø) | |
| mmcls/models/heads/deit_head.py | 97.36% <0.00%> (ø) | |
| mmcls/datasets/dataset_wrappers.py | 71.83% <0.00%> (ø) | |
| mmcls/models/backbones/__init__.py | 100.00% <0.00%> (ø) | |
| ... and 10 more | | |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 58b21ee...ad91c72. Read the comment docs.
I notice that your output is an Excel file, which is not a general-purpose format; users may find it hard to read on Linux or without Office installed.
We recommend using the rich library to print the report instead, and adding an optional argument to export the results to a general file format such as '.csv'.
Use CSV as the report file format.
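As a rough illustration of that change (not the exact code in this PR), the per-class report could be written to a plain '.csv' file that is readable on any OS; the function name `write_report_csv` and the column layout are assumptions for this sketch.

```python
# Hedged sketch: dump per-class precision / recall / F1 to a CSV report.
import csv

def write_report_csv(path, class_names, precision, recall, f1):
    """Write one row per class with its P / R / F1 scores."""
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['class', 'precision', 'recall', 'f1'])
        for name, p, r, s in zip(class_names, precision, recall, f1):
            writer.writerow([name, f'{p:.4f}', f'{r:.4f}', f'{s:.4f}'])
```

A CSV report keeps the output dependency-free and easy to open in spreadsheets or parse with pandas, while a rich table could still be printed to the console for quick inspection.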
I'm sorry to inform you that the previous master (dev) branch has been abandoned, and therefore this pull request (PR), which is based on the master (dev) branch, will be closed.
We have integrated the previous mmcls and mmselfsup into a new repo named mmpretrain, and you are welcome to use it.