[Doc] Refine algorithm readme with model performance table
Motivation
Update the algorithm README files with model performance tables and page links for a better user experience.
Modification
BC-breaking (Optional)
Use cases (Optional)
Checklist
Before PR:
- [ ] I have read and followed the workflow indicated in CONTRIBUTING.md to create this PR.
- [ ] Pre-commit or other linting tools indicated in CONTRIBUTING.md have been used to fix potential lint issues.
- [ ] Bug fixes are covered by unit tests, and the case that caused the bug has been added to the unit tests.
- [ ] New functionality is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
- [ ] The documentation has been modified accordingly, including docstrings and example tutorials.
After PR:
- [ ] The CLA has been signed by all committers in this PR.
Do you plan to use this page as an alternative to the current algorithm docs? I suggest adding it to the docs instead of placing it under the config folder. Besides, maybe an accuracy-sorted version is also necessary.
This page is expected to serve as an index for users navigating the configs folder on GitHub. We can modify the doc compiling script to automatically include these pages in the model zoo document page, or even generate a figure from the table content. That can be a future plan.
As for the sorting, one concern is that the performance comparison may not be completely fair (network scale, input size, ...), so the sorted results could be misleading in some cases. Maybe sort them by publishing year?
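For illustration, here is a minimal sketch of what such a collection step could look like. The paths, output file, and helper names below are assumptions for the sake of the example, not the actual doc-compiling script:

```python
# Hypothetical sketch: collect the markdown tables from each algorithm
# README under the configs folder into a single model zoo index page.
# The layout configs/<algo>/README.md and the output path are assumed.
from pathlib import Path

CONFIG_ROOT = Path('configs')                   # assumed algorithm folder layout
OUT_FILE = Path('docs/en/model_zoo_index.md')   # hypothetical output page


def extract_tables(readme: Path) -> str:
    """Keep only the markdown table lines of an algorithm README."""
    lines = readme.read_text(encoding='utf-8').splitlines()
    return '\n'.join(ln for ln in lines if ln.lstrip().startswith('|'))


def build_index() -> str:
    """Concatenate one section per algorithm that has a performance table."""
    sections = []
    for readme in sorted(CONFIG_ROOT.glob('*/README.md')):
        tables = extract_tables(readme)
        if tables:
            sections.append(f'## {readme.parent.name}\n\n{tables}\n')
    return '\n'.join(sections)


if __name__ == '__main__':
    OUT_FILE.parent.mkdir(parents=True, exist_ok=True)
    OUT_FILE.write_text(build_index(), encoding='utf-8')
```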
I understand your concern. But popular repos like timm and Papers with Code also maintain such lists, which are not completely fair either.
So this kind of ranking is just for convenience, and, you know, users are always wondering "which one is the best model in your repo".
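For concreteness, a toy sketch of ordering index entries by publishing year (as suggested above) rather than by metric. The entries and numbers below are made-up placeholders, not real benchmark results:

```python
# Toy example: rank index entries by publishing year rather than by
# metric, since raw accuracy comparisons across different network
# scales and input sizes can be misleading. All values are placeholders.
entries = [
    {'algorithm': 'algo_a', 'year': 2019, 'ap': 0.75},
    {'algorithm': 'algo_b', 'year': 2018, 'ap': 0.72},
    {'algorithm': 'algo_c', 'year': 2021, 'ap': 0.70},
]

# Newest first; ties broken alphabetically, never by the metric itself.
for e in sorted(entries, key=lambda x: (-x['year'], x['algorithm'])):
    print(f"{e['year']}  {e['algorithm']:<10}  AP {e['ap']:.2f}")
```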
Codecov Report
Base: 77.73% // Head: 77.77% // Increases project coverage by +0.04% :tada:
Coverage data is based on head (43a98fb) compared to base (4693395). Patch has no changes to coverable lines.
Additional details and impacted files
@@           Coverage Diff            @@
##           dev-1.x    #1627   +/-  ##
===========================================
+ Coverage    77.73%   77.77%   +0.04%
===========================================
  Files          204      204
  Lines        11716    11716
  Branches      1956     1956
===========================================
+ Hits          9107     9112       +5
+ Misses        2213     2206       -7
- Partials       396      398       +2
| Flag | Coverage Δ | |
|---|---|---|
| unittests | 77.77% <ø> (+0.04%) | :arrow_up: |
Flags with carried forward coverage won't be shown.
| Impacted Files | Coverage Δ | |
|---|---|---|
| mmpose/datasets/transforms/common_transforms.py | 83.47% <0.00%> (+1.42%) | :arrow_up: |
:umbrella: View full report at Codecov.