Add test coverage monitoring
Getting typing-extensions to 100% test coverage seems like a realistic and useful goal. Reaching it would ensure we have thorough tests and would give us more confidence when adding new code.
Contributions towards this goal are welcome. We should start by running a coverage tool to get a sense of how much coverage we're currently missing, then add tests to cover the missing bits, and finally set up CI to enforce 100% coverage. One complication is that lots of the code is version-specific: coverage should ideally be combined across all supported Python versions, so a branch only reachable on one version still counts as covered.
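As a starting point, a minimal coverage configuration might look like this (a sketch only; the exact options would need to match our test setup). `parallel = true` makes each run write its own data file so that runs on different Python versions can later be merged with `coverage combine`:

```toml
# pyproject.toml (sketch)
[tool.coverage.run]
parallel = true   # one data file per run, so per-version runs can be combined
branch = true     # measure branch coverage, not just line coverage

[tool.coverage.report]
fail_under = 100  # make `coverage report` exit non-zero below 100%
```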
The covdefaults coverage plugin might be useful for us. It lets you exclude a version-specific branch from coverage measurement unless you're running a Python version that can actually reach it: https://github.com/asottile/covdefaults?tab=readme-ov-file#version-specific--pragma-no-cover
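For illustration, here's a hedged sketch of how those version-specific pragmas would be used (the function here is made up for the example, not taken from typing-extensions; see the covdefaults README for the exact pragma syntax). With the plugin enabled, each branch only needs to be exercised on the Python versions where its pragma says it can run:

```python
import sys

# Illustrative only: covdefaults-style pragmas on a version-specific branch.
if sys.version_info >= (3, 11):  # pragma: >=3.11 cover
    def has_builtin_exception_groups() -> bool:
        # ExceptionGroup is a builtin on 3.11+
        return True
else:  # pragma: <3.11 cover
    def has_builtin_exception_groups() -> bool:
        return False
```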
I recently discovered that there's some kind of Codecov.io integration set up for the `python` GitHub organization: https://app.codecov.io/gh/python. Maybe we could use it for the typing_extensions coverage workflow?
Some nice perks:
- Online dashboard for coverage analytics (very helpful for browsing which lines are missing coverage)
- Supports generating PR comments (example: python/bedevere#670 (comment))
- Supports combining coverage reports from different matrix runs (which would simplify the workflow)
- Less sketchy as a dependency? (cf. #669 (comment))
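Roughly, the CI side could look something like this (a sketch, not a working workflow; job names and versions are placeholders, and the upload step would need whatever token setup the org's Codecov integration requires). Codecov merges the reports uploaded from the different matrix runs on its side:

```yaml
# .github/workflows/coverage.yml (sketch)
jobs:
  coverage:
    strategy:
      matrix:
        python-version: ["3.9", "3.13"]  # placeholder version list
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install coverage pytest
      - run: coverage run -m pytest
      - run: coverage xml
      - uses: codecov/codecov-action@v4  # reports from all matrix runs get merged on Codecov
```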