## New IgnoreSuppressedFindings feature

### Description
Added a new toggle, "Ignore suppressed findings", to the scheduled alert creation menu, along with the implementation and related tests. The logic in `NotificationRule` and `ScheduledNotificationDispatchTask` follows the pattern of the existing "Skip publish if unchanged" toggle, adapted for the new behavior. The option is propagated via the JSON for `NewVulnerabilitySummary`/`NewPolicyViolationSummary` and subsequently handled in the e-mail template, among other places. The UI toggle is provided in a separate PR in the frontend repository. A slightly different e-mail template is used when suppressed findings are ignored. Added one test to verify that the correct template is selected and another to check that the dispatch task behaves correctly, and updated the docs to reflect the new functionality.
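As a rough illustration of the dispatch-side behavior: when the toggle is enabled, suppressed findings are dropped before the summary is built. The names below (`Finding`, `applyRule`, the `suppressed` flag) are hypothetical simplifications, not the actual types from `ScheduledNotificationDispatchTask`:

```java
import java.util.List;

// Hypothetical, simplified finding type; the real model is richer.
record Finding(String vulnId, boolean suppressed) {}

public class SuppressedFindingsFilterSketch {

    // Sketch of the rule: when "Ignore suppressed findings" is enabled,
    // filter out suppressed findings before building the summary payload.
    static List<Finding> applyRule(List<Finding> findings, boolean ignoreSuppressedFindings) {
        if (!ignoreSuppressedFindings) {
            return findings;
        }
        return findings.stream()
                .filter(f -> !f.suppressed())
                .toList();
    }

    public static void main(String[] args) {
        List<Finding> findings = List.of(
                new Finding("CVE-2024-0001", false),
                new Finding("CVE-2024-0002", true));

        System.out.println(applyRule(findings, false).size()); // prints 2
        System.out.println(applyRule(findings, true).size());  // prints 1
    }
}
```

The filtering happens at dispatch time rather than in the query, mirroring how the "Skip publish if unchanged" toggle post-processes results before publishing.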
### Addressed Issue
Addresses issue #5488.
### Additional Details
Used GitHub Copilot and ChatGPT to understand the codebase and debug, and to suggest approaches that fit the existing patterns.
### Checklist
- [x] I have read and understand the contributing guidelines
- [ ] This PR fixes a defect, and I have provided tests to verify that the fix is effective
- [x] This PR implements an enhancement, and I have provided tests to verify that it works as intended
- [ ] This PR introduces changes to the database model, and I have added corresponding update logic
- [x] This PR introduces new or alters existing behavior, and I have updated the documentation accordingly
:white_check_mark: Snyk checks have passed. No issues have been found so far.
| Status | Scanner | Total (0) | | | | |
|---|---|---|---|---|---|---|
| :white_check_mark: | Open Source Security | 0 | 0 | 0 | 0 | 0 issues |
Coverage summary from Codacy
See diff coverage on Codacy
| Coverage variation | Diff coverage |
|---|---|
| Report missing for 72dfb1a55073e4095dd7cc15814b5a38cb29e54a[^1] | :white_check_mark: 100.00% (target: 70.00%) |
Coverage variation details
| Coverable lines | Covered lines | Coverage | |
|---|---|---|---|
| Common ancestor commit (72dfb1a55073e4095dd7cc15814b5a38cb29e54a) | Report Missing | Report Missing | Report Missing |
| Head commit (8927c5d33b656e200691cfa389f4ee58509e6837) | 24120 | 19519 | 80.92% |
Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>
Diff coverage details
| Coverable lines | Covered lines | Diff coverage | |
|---|---|---|---|
| Pull request (#5489) | 50 | 50 | 100.00% |
Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%
See your quality gate settings · Change summary preferences
[^1]: Codacy didn't receive coverage data for the commit, or there was an error processing the received data. Check your integration for errors and validate that your coverage setup is correct.
@nscuro Please consider this PR for release 4.13.6.
My review wasn't that deep, especially since I'm not familiar with the areas these changes interact with. You mentioned using AI; that's fine IMHO for getting basic coverage, but please make sure you understand the changes, especially in areas you're not familiar with.
I appreciate your review and am happy to include most of the suggestions you made. Nevertheless, I want to clarify that I am using AI to code faster and more efficiently, but not to let the AI do my job. Since you reviewed my implementation, I think you can agree that the implementation is logical, correct and in line with the existing codebase.
To be clear, I am using AI myself to code at times, so I'm not against it per se. All I wanted to say is: be careful and check its outputs closely. But this should not turn into a discussion about AI (although I believe you used AI to augment/refine your response, which is totally fine).
All in all, these changes look fine now. But since I'm not a maintainer and I'm not familiar with these areas, a maintainer should take a look next; the PR seems ready to stand the true test ;)