Add CVSSv4 support to Dependency-Track
Description
This PR adds support for CVSSv4 scores to Dependency-Track. Scores are stored in the database, returned as part of the relevant HTTP API resources following existing conventions, can be updated via the REST API, and are processed by the parsers for most vulnerability sources. CVSSv4 scores are also preferred over CVSSv3 and CVSSv2 scores when determining the severity of a vulnerability.
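For illustration, here is a minimal sketch of that preference order. The class and method names (`SeverityExample`, `severityFromScore`, `deriveSeverity`) are hypothetical stand-ins rather than the actual Dependency-Track code, and the CVSSv2 mapping is simplified (CVSSv2's own rating scale has no Critical bucket):

```java
import java.math.BigDecimal;

// Hypothetical sketch: prefer CVSSv4 over CVSSv3 over CVSSv2 when
// deriving a vulnerability's severity, as described above.
public final class SeverityExample {

    enum Severity { CRITICAL, HIGH, MEDIUM, LOW, UNASSIGNED }

    // Map a CVSS base score to a severity bucket using the
    // CVSSv3/v4 qualitative rating scale.
    static Severity severityFromScore(BigDecimal score) {
        if (score == null) return Severity.UNASSIGNED;
        final double s = score.doubleValue();
        if (s >= 9.0) return Severity.CRITICAL;
        if (s >= 7.0) return Severity.HIGH;
        if (s >= 4.0) return Severity.MEDIUM;
        if (s > 0.0)  return Severity.LOW;
        return Severity.UNASSIGNED;
    }

    // A CVSSv4 score, when present, wins over CVSSv3, which wins over CVSSv2.
    static Severity deriveSeverity(BigDecimal cvssV4, BigDecimal cvssV3, BigDecimal cvssV2) {
        if (cvssV4 != null) return severityFromScore(cvssV4);
        if (cvssV3 != null) return severityFromScore(cvssV3);
        return severityFromScore(cvssV2);
    }

    public static void main(String[] args) {
        // The CVSSv4 score of 8.7 takes precedence over the CVSSv3 score of 9.1.
        System.out.println(deriveSeverity(new BigDecimal("8.7"), new BigDecimal("9.1"), null)); // HIGH
    }
}
```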
Addressed Issue
fixes #4707
Additional Details
CVSSv4 handling not implemented in certain parsers
The following parsers have not had CVSSv4 handling added by this PR, since the APIs they are based on are commercial offerings that I do not have access to:
- Snyk
- VulnDB
Only backend work completed for now
This PR addresses only the server / API side of implementing CVSSv4 support. I am planning to work on frontend support next, but wanted to put my work on this repository out there first. I hope to get some feedback on it so I don't head in the wrong direction with any frontend changes.
Trivy protobufs updated
To be able to process CVSSv4 scores supplied by Trivy, I have updated the protobuf files stored in this repo to the state of their release/v0.67 branch. I have kept the customizations to the headers of the protobuf files unchanged, though. For the remainder of each protobuf file, I chose easier future copy/paste-ability over keeping the diff small. I hope that's okay.
Checklist
- [X] I have read and understand the contributing guidelines
- [ ] This PR fixes a defect, and I have provided tests to verify that the fix is effective
- [X] This PR implements an enhancement, and I have provided tests to verify that it works as intended
- [X] This PR introduces changes to the database model, and I have added corresponding update logic
- [ ] This PR introduces new or alters existing behavior, and I have updated the documentation accordingly
- NOTE: This is likely an open to-do item and I'd love some feedback on what needs documenting.
:white_check_mark: Snyk checks have passed. No issues have been found so far.
| Status | Scanner | Critical | High | Medium | Low | Total (0) |
|---|---|---|---|---|---|---|
| :white_check_mark: | Open Source Security | 0 | 0 | 0 | 0 | 0 issues |
Coverage summary from Codacy
See diff coverage on Codacy
| Coverage variation | Diff coverage |
|---|---|
| :white_check_mark: -0.02% (target: -1.00%) | :white_check_mark: 77.78% (target: 70.00%) |
Coverage variation details
| | Coverable lines | Covered lines | Coverage |
|---|---|---|---|
| Common ancestor commit (07316d9b82e4c8e8fa68d2103540154665d2b820) | 24077 | 19476 | 80.89% |
| Head commit (e983b9833da41794fa60787cc3e572bc40a5e567) | 24176 (+99) | 19552 (+76) | 80.87% (-0.02%) |
Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>
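For this PR, that works out to 19552 / 24176 = 80.87% (head) minus 19476 / 24077 = 80.89% (common ancestor), giving the -0.02% shown above.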
Diff coverage details
| | Coverable lines | Covered lines | Diff coverage |
|---|---|---|---|
| Pull request (#5456) | 135 | 105 | 77.78% |
Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%
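Here that is 105 / 135 × 100% ≈ 77.78%, as shown in the table above.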
See your quality gate settings
Change summary preferences
NotificationPublisherResourceTest.testNotificationRuleTest failed.
Well, that's new. I haven't seen that test fail in my local test runs yet. I'll have a look to see if I can reproduce it... any info on whether it's just a bit brittle?
EDIT: From what I can tell, it seems we just found a particularly busy Actions runner. The test takes between 5 and 6 seconds to run on my local machine (AMD Ryzen 9 9900X, 64 GB RAM). I could increase the timeout for this test to 15 seconds to make it less likely to recur. Not sure if that's desirable, though.
> I could increase the timeout for this test to 15 seconds to make it less likely to recur. Not sure if that's desirable, though.
IMHO you can do that as long as it's not excessive. I have done so myself in other PRs to mitigate some flaky tests that were clearly failing because of low resources in the CI environment.
> IMHO you can do that as long as it's not excessive. I have done so myself in other PRs to mitigate some flaky tests that were clearly failing because of low resources in the CI environment.
Alright, thank you. In that case: I've extended the timeout to 20s to hopefully alleviate the flakiness.
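For reference, one common way to express such a per-test timeout is shown below. This is a sketch assuming a JUnit 5 style `@Timeout`; the project's actual test setup may use a different mechanism, and the test body is elided:

```java
import java.util.concurrent.TimeUnit;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Timeout;

class NotificationPublisherResourceTimeoutSketch {

    // Allow up to 20 seconds before the test is considered hung,
    // to tolerate occasionally slow CI runners.
    @Test
    @Timeout(value = 20, unit = TimeUnit.SECONDS)
    void testNotificationRuleTest() {
        // ... test body elided ...
    }
}
```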