external-provisioner

test: add trivy vulnerability scanner github action

andyzhangx opened this pull request 2 years ago • 14 comments

What type of PR is this?

/kind failing-test

What this PR does / why we need it:

test: add trivy vulnerability scanner github action

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

none

andyzhangx · May 18 '22 12:05

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by: andyzhangx. To complete the pull request process, please assign jsafrane after the PR has been reviewed. You can assign the PR to them by writing /assign @jsafrane in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

k8s-ci-robot · May 18 '22 12:05

/assign @msau42

andyzhangx · May 18 '22 12:05

Same concern as in other PRs which add GitHub actions in specific repos: we should have a consistent policy for all Kubernetes-CSI repos.

pohly · May 18 '22 13:05

Agreed, maybe they could be symlinks from .github/workflows/&lt;workflow&gt;.yaml to release-tools/workflows/&lt;workflow&gt;.yaml.

mauriciopoppe · May 18 '22 22:05

I agree, such symlinks might work (haven't tried it). But before we dive into implementation details I would like to have a discussion about how we use these additional checks. For example, why only check the master branch? Isn't it even more important to check supported release branches? And the elephant in the room: what do we do if such a check fails? Who signs up to deal with it?

pohly · May 19 '22 08:05

@pohly @mauriciopoppe

I agree, such symlinks might work (haven't tried it)

I have tested that, and it seems that GitHub does not resolve symlinks, so the action will fail to run.

why only check the master branch? Isn't it even more important to check supported release branches?

It is more important to check PRs. My version for nfs-subdir-external-provisioner (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/pull/211) tests both master and PRs.

what do we do if such a check fails? Who signs up to deal with it?

The SECURITY_CONTACTS (or any other volunteer) can be in charge of creating an issue with the CVE details. They can fix it themselves or tag the issue with the help wanted and kind/failing-test labels.

There is also the question of whether we should fail the test if the CVEs are still 'unfixed'.
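
For illustration, a minimal workflow sketch along those lines (untested; the file name, the pinned action version, and the inputs are assumptions to be checked against the trivy-action docs, not part of this PR):

```yaml
# Hypothetical .github/workflows/trivy.yaml; name and versions are
# illustrative. Runs on PRs and on pushes to master, as in the
# nfs-subdir-external-provisioner version.
name: trivy
on:
  push:
    branches: [master]
  pull_request:

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Trivy against the repository
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: fs
          scan-ref: .
          exit-code: "1"        # fail the check when vulnerabilities are found
          ignore-unfixed: true  # one possible answer to the 'unfixed' question:
                                # skip findings that have no upstream fix yet
```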

yonatankahana · Jun 07 '22 15:06

it is more important to check PRs.

Why? Because a PR adds a new dependency which is vulnerable? We don't add much new code, so this seems unlikely.

A much more common situation will be that a new vulnerability is detected in an existing dependency, and that then affects both the master branch and all release branches.

pohly · Jun 07 '22 16:06

the SECURITY_CONTACTS

That's for actual, serious vulnerabilities, not for ongoing triaging of scan results.

(or any other volunteer)

So you volunteer? :stuck_out_tongue_closed_eyes:

can be in charge of creating an issue with the CVE details

Let me be rather blunt here: I know companies care about metrics like "zero known CVEs", but in practice the ones that are found are often not applicable. If a company cares about zero CVEs found by this or that scanner, then they should scan regularly themselves and submit fixes or provide an engineer who does that work upstream.

Just my two cents...

pohly · Jun 07 '22 16:06

What I've seen is that most of the vulnerability reports come from using a base image that is not distroless (e.g. any of the builder images in https://github.com/kubernetes/release/blob/master/images/build/debian-base/variants.yaml). As @pohly says, they usually don't come from new Go code or from adding new dependencies.

Isn't it even more important to check supported release branches?

Yes, I think we could have periodic jobs that use the image scanner on master and previous releases.
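
A hedged sketch of what such a periodic job could look like; the schedule and the release branch names are made up for illustration:

```yaml
# Hypothetical periodic scan across master and supported release branches.
name: trivy-periodic
on:
  schedule:
    - cron: "0 4 * * 1"   # weekly, Monday 04:00 UTC; illustrative

jobs:
  scan:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        branch: [master, release-3.2, release-3.1]   # placeholder branch names
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ matrix.branch }}
      - uses: aquasecurity/trivy-action@master
        with:
          scan-type: fs
          scan-ref: .
```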

mauriciopoppe · Jun 07 '22 17:06

What I've seen is that most of the vulnerability reports come from using a base image that is not distroless

That mirrors my experience; furthermore, it's exactly those vulnerabilities that do not affect the sidecar container apps, because the apps never trigger the conditions for the CVE.

pohly · Jun 07 '22 17:06

What I've seen is that most of the vulnerability reports come from using a base image that is not distroless (e.g. if it's any of the builder images in https://github.com/kubernetes/release/blob/master/images/build/debian-base/variants.yaml), as @pohly says it usually doesn't come from new go code or by adding new dependencies.

Isn't it even more important to check supported release branches?

Yes, I think we could have periodic jobs that use the image scanner on master and previous releases.

@mauriciopoppe This vulnerability scanner GitHub action would scan both the image and the Go binary, so even with a distroless base image there could still be vulnerability findings. I'm not sure how to move this change into csi-release-tools.
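
For example, an image scan might look roughly like this (the image reference is a placeholder, and the inputs should be double-checked against the trivy-action documentation):

```yaml
# Sketch: scan a built container image. Trivy inspects both the base image's
# OS packages and the Go module information embedded in the binary, which is
# why a distroless base image can still surface Go dependency CVEs.
name: trivy-image
on:
  push:
    branches: [master]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: aquasecurity/trivy-action@master
        with:
          scan-type: image
          image-ref: gcr.io/example/csi-provisioner:canary  # placeholder ref
          vuln-type: os,library     # OS packages plus language dependencies
          severity: CRITICAL,HIGH
          exit-code: "1"
```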

andyzhangx · Jun 08 '22 11:06

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Sep 06 '22 11:09

/remove-lifecycle stale

Is there still interest in this? We at the EBS CSI Driver are also impacted by sidecar vulns.

ConnorJC3 · Sep 23 '22 15:09

The current GitHub action form is too disruptive: it blocks unrelated PRs from merging. If there were a more out-of-band way to scan and open bug reports, that would be preferable.
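
One possible shape for that, sketched under the assumption that a scheduled workflow which files issues (rather than a PR check) is acceptable; all names here are illustrative:

```yaml
# Sketch of a non-blocking variant: scan on a schedule rather than on PRs,
# and open a bug report when the scan fails instead of failing a PR check.
name: trivy-report
on:
  schedule:
    - cron: "0 4 * * 1"   # illustrative schedule

permissions:
  issues: write   # needed so the workflow can create issues

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - id: trivy
        uses: aquasecurity/trivy-action@master
        continue-on-error: true   # record the result instead of failing the job
        with:
          scan-type: fs
          scan-ref: .
          exit-code: "1"
      - if: steps.trivy.outcome == 'failure'
        uses: actions/github-script@v6
        with:
          script: |
            // File a bug report that points at the failed run.
            await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: "Trivy scan found vulnerabilities",
              body: `See ${context.serverUrl}/${context.repo.owner}/` +
                    `${context.repo.repo}/actions/runs/${context.runId}`,
            });
```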

msau42 · Sep 23 '22 16:09

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Dec 22 '22 17:12

/remove-lifecycle stale

mauriciopoppe · Jan 04 '23 21:01

I saw a comment on #sig-testing that folks are moving away from trivy because it generates too many false positives. The main problem is that it doesn't consider which code is actually used by a Go program - any vulnerability in a dependency is assumed to apply.

So far, I have not found it useful for the sidecars.
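
If a repo adopts the scanner despite that, one way to keep the noise down (a sketch; this is exactly the ongoing triage work questioned earlier) is to record "not applicable" verdicts in a .trivyignore file so they stop resurfacing:

```yaml
# Sketch: a step for the workflow above. The .trivyignore file lists one CVE
# ID per line (e.g. CVE-2022-12345, a placeholder), ideally preceded by a
# comment explaining why the finding does not affect the sidecar.
- uses: aquasecurity/trivy-action@master
  with:
    scan-type: fs
    scan-ref: .
    trivyignores: .trivyignore   # assumed input name; verify against the docs
```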

pohly · Jan 04 '23 21:01

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Apr 04 '23 22:04

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · May 04 '23 23:05

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot · Jun 03 '23 23:06

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Jun 03 '23 23:06