external-provisioner
test: add trivy vulnerability scanner github action
What type of PR is this? /kind failing-test
What this PR does / why we need it: test: add trivy vulnerability scanner github action
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
none
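For reviewers who haven't seen this kind of action before, here is a minimal sketch of what such a workflow could look like; the file name, trigger branches, severities, and action inputs below are illustrative assumptions, not necessarily what this PR adds:

```yaml
# .github/workflows/trivy.yaml -- illustrative name and content
name: trivy
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Scan the checked-out sources (including go.mod) for known vulnerabilities.
      - name: Run Trivy in filesystem mode
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          severity: 'HIGH,CRITICAL'
          exit-code: '1'   # report findings as a failed check
```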
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: andyzhangx
To complete the pull request process, please assign jsafrane after the PR has been reviewed.
You can assign the PR to them by writing /assign @jsafrane
in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve
in a comment
Approvers can cancel approval by writing /approve cancel
in a comment
/assign @msau42
Same concern as in other PRs which add GitHub actions in specific repos: we should have a consistent policy for all Kubernetes-CSI repos.
agreed, maybe they could be symlinks from .github/workflows/<name>.yaml to release-tools/workflows/<name>.yaml
I agree, such symlinks might work (haven't tried it). But before we dive into implementation details I would like to have a discussion about how we use these additional checks. For example, why only check the master branch? Isn't it even more important to check supported release branches? And the elephant in the room: what do we do if such a check fails? Who signs up to deal with it?
@pohly @mauriciopoppe
I agree, such symlinks might work (haven't tried it)
i have tested that, and it seems that GitHub does not resolve symlinks, so the action fails to run...
why only check the master branch? Isn't it even more important to check supported release branches?
it is more important to check PRs.
in my version for nfs-subdir-external-provisioner (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/pull/211) it tests both master and PRs.
what do we do if such a check fails? Who signs up to deal with it?
the SECURITY_CONTACTS
(or any other volunteer) can be in charge of creating an issue with the CVE details. they can fix it themselves or tag the issue with the help wanted and kind/failing-test labels.
there is also a question of whether we should fail the test if the CVEs are still 'unfixed'.
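For what it's worth, the Trivy action exposes an `ignore-unfixed` input that skips findings without an upstream fix; a hedged sketch of how that knob could be used (same illustrative assumptions as the workflow sketch above):

```yaml
      - name: Run Trivy, skipping CVEs that have no fix yet
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          ignore-unfixed: true   # do not fail on vulnerabilities without an available fix
          exit-code: '1'
```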
it is more important to check PRs.
Why? Because a PR adds a new dependency which is vulnerable? We don't add much new code, so this seems unlikely.
A much more common situation will be that a new vulnerability is detected in an existing dependency, and that then affects both the master branch and all release branches.
the SECURITY_CONTACTS
That's for actual, serious vulnerabilities, not for ongoing triaging of scan results.
(or any other volunteer)
So you volunteer? :stuck_out_tongue_closed_eyes:
can be in charge of creating an issue with the CVE details
Let me be rather blunt here: I know companies care about metrics like "zero known CVEs", but in practice the ones that are found are often not applicable. If a company cares about zero CVEs found by this or that scanner, then they should scan regularly themselves and submit fixes or provide an engineer who does that work upstream.
Just my two cents...
What I've seen is that most of the vulnerability reports come from using a base image that is not distroless (e.g. if it's any of the builder images in https://github.com/kubernetes/release/blob/master/images/build/debian-base/variants.yaml), as @pohly says it usually doesn't come from new go code or by adding new dependencies.
Isn't it even more important to check supported release branches?
Yes, I think we could have periodic jobs that use the image scanner on master and previous releases.
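A scheduled workflow with a branch matrix is one possible shape for that; the cron cadence and branch names below are placeholders, not a proposal for specific values:

```yaml
on:
  schedule:
    - cron: '0 6 * * 1'   # weekly; cadence is a placeholder
jobs:
  scan:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # substitute whatever release branches are still supported
        branch: [master, release-x.y]
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ matrix.branch }}
      - uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
```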
What I've seen is that most of the vulnerability reports come from using a base image that is not distroless
That mirrors my experience, and furthermore it's those vulnerabilities that don't actually affect the sidecar container apps, because the sidecars don't trigger the conditions for the CVE.
@mauriciopoppe This vuln scanner gh action would scan both the image and the go binary, so even with a distroless base image there could still be vulnerability issues. Not sure how to get this change into csi release-tools.
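To illustrate what "scan both" could look like, here is a sketch with one filesystem scan of the Go sources and one scan of the built image; the build command and image tag are assumptions, not taken from this PR or the repo's Makefile:

```yaml
      # Scan the Go sources / module graph.
      - uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
      # Build and scan the container image; even with a distroless base the
      # image still contains the Go binary, whose dependencies Trivy inspects.
      - run: docker build -t csi-provisioner:ci .   # build command and tag are illustrative
      - uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'image'
          image-ref: 'csi-provisioner:ci'
```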
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Is there still interest in this? We at the EBS CSI Driver are also impacted by sidecar vulns.
The current GitHub Action form is too disruptive, since it blocks unrelated PRs from merging. If there were a more out-of-band way to scan and open bug reports, that would be preferable.
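One hedged sketch of such an out-of-band setup: a scheduled scan that never fails a check and instead files an issue when it finds something. The reporting step below uses the gh CLI and a crude non-empty-file check, both of which are assumptions about how this could be wired up rather than an existing workflow:

```yaml
on:
  schedule:
    - cron: '0 6 * * *'   # runs independently of PRs, so nothing is blocked
jobs:
  scan:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write
    steps:
      - uses: actions/checkout@v3
      - uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'table'
          output: 'trivy-report.txt'
          exit-code: '0'   # never fail the job; report instead
      - name: Open an issue when the report is non-empty
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          # crude check; a real setup would parse the report for actual findings
          if [ -s trivy-report.txt ]; then
            gh issue create --title "Periodic Trivy scan findings" --body-file trivy-report.txt
          fi
```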
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I saw a comment on #sig-testing that folks are moving away from trivy because it generates too many false positives. The main problem is that it doesn't consider which code is actually used by a Go program - any vulnerability in a dependency is assumed to apply.
So far, I have not found it useful for the sidecars.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.