[DNM] Generate SBOMs for repaired libraries
(Draft, do not merge) This is my initial pull request adding automatic SBOM generation, based on package manager information, for libraries that are repaired into wheels. I've tested locally by building wheels from various projects and operating systems; I'll work on getting those cases pulled into the test suite.
The majority of the "logic" comes from the "whichprovides" project, which gets bundled into auditwheel as a single file, so you can review that project in its entirety within this pull request. If you have comments about "whichprovides", feel free to leave them here and I'll get them addressed in the upstream project.
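For reviewers unfamiliar with the approach: conceptually, each repaired shared library is mapped back to the distro package that provides it by querying the platform's package manager (e.g. `dpkg -S` on Debian-based images, `rpm -qf` on RPM-based manylinux images). This is a minimal illustrative sketch of parsing `dpkg -S`-style output, not the actual "whichprovides" API (the helper name `parse_dpkg_s` is hypothetical):

```python
def parse_dpkg_s(line: str) -> tuple[str, str]:
    """Split one line of `dpkg -S` output, e.g.
    'libssl3:amd64: /usr/lib/x86_64-linux-gnu/libssl.so.3',
    into (package_name, file_path). The ':arch' suffix is dropped."""
    pkg, _, path = line.partition(": ")
    return pkg.split(":")[0], path


# Example: resolve the owning package for a bundled shared library.
package, path = parse_dpkg_s(
    "libssl3:amd64: /usr/lib/x86_64-linux-gnu/libssl.so.3"
)
```

The real implementation additionally has to recover the package version and map it to a PURL for the SBOM document; that logic lives upstream in "whichprovides".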
Closes #541 Closes #398
Codecov Report
:white_check_mark: All modified and coverable lines are covered by tests.
:white_check_mark: Project coverage is 93.00%. Comparing base (909b1bd) to head (822bfad).
:warning: Report is 1 commit behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #577 +/- ##
==========================================
+ Coverage 92.83% 93.00% +0.16%
==========================================
Files 21 22 +1
Lines 1773 1815 +42
Branches 333 338 +5
==========================================
+ Hits 1646 1688 +42
Misses 77 77
Partials 50 50
:umbrella: View full report in Codecov by Sentry.
Taking this pull request out of draft to begin inviting reviews. I still need to add some integration tests to show that the SBOM documents get generated for many different operating systems offered by manylinux images :)
cc @lkollar, @mayeut, and @captn3m0, who expressed interest in the linked issues.
Okay @mayeut and @auvipy this pull request is now ready to be reviewed.
- Codecov is showing 92.92% coverage on the website compared to this pull request. Not sure what I can do to unstick that?
- ~~3.9 was failing, but I am not sure why? The new test I've added does the same setup as `test_numpy`, but that one isn't failing for the same reason, maybe you have seen an issue like this before?~~
- I've addressed the Copilot review comments.
Let me know what else I can do to get this PR ready :)
Thanks for addressing feedback @mayeut, I believe there isn't anything left for me to do on this PR :)
Thanks @sethmlarson
Thank you for the reviews @mayeut! :pray: