[Umbrella] Artifact Vulnerability Scanning and Triage Policy
Goal: Implement tool-agnostic automated scanning that identifies vulnerabilities in Kubernetes-related artifacts, followed by a documented private triage process to resolve identified vulnerabilities, with a programmatic way for Kubernetes users to consume this vulnerability information.
Background
Over the years, community members across the Security Response Committee (formerly the Product Security Committee), SIG Release, SIG Architecture, SIG Security, and SIG Auth have contributed to several standalone efforts related to vulnerability management for https://github.com/kubernetes/kubernetes. We have made tremendous progress, but there are still some opportunities to improve :-)
Scope
This issue acts as a single place to find the current, in-progress, and planned work that falls under the overall theme of vulnerability management for Kubernetes artifacts. In-scope artifacts include, but are not limited to, build-time dependencies and container images. Adding any missing issues or related work as a comment is encouraged :-)
Artifact Vulnerability Scanning
Build-time Dependencies
- [x] Implement automated scanning with Prow and TestGrid for k/k HEAD (main branch) (https://github.com/kubernetes/kubernetes/issues/101528)
- [X] Parsing improvements (https://github.com/kubernetes/test-infra/pull/22756)
- [X] Ensure the scan fails when a vulnerability is found (https://github.com/kubernetes/test-infra/pull/22833); see the fail-on-findings sketch after this list
- [x] https://github.com/kubernetes/test-infra/issues/23112
- [ ] #95
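The fail-on-findings behavior above amounts to letting the scanner's non-zero exit status fail the CI job. As a minimal sketch (not the actual Prow job configuration), the following Go wrapper runs govulncheck, the scanner backed by the Go vulnerability database discussed in this thread, against a module and fails when it reports findings. It assumes govulncheck is installed on the PATH and that the wrapper runs from the module root.

```go
// vulnscan.go: minimal CI gate that fails when govulncheck reports findings.
// Sketch only; assumes govulncheck (golang.org/x/vuln/cmd/govulncheck) is on PATH.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Scan every package in the current module, as a Prow job step might.
	cmd := exec.Command("govulncheck", "./...")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		// govulncheck exits non-zero when it finds vulnerabilities, so
		// propagating the error is enough to fail the CI job.
		fmt.Fprintf(os.Stderr, "vulnerability scan failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("no known vulnerabilities found")
}
```

In a real Prow presubmit the scanner invocation would simply be the job's command; the wrapper only makes the exit-code contract explicit.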
Container Images
- [x] https://github.com/kubernetes/sig-security/issues/4
- [x] Explore and identify scanners that can detect vulnerabilities in distroless++ images
- [x] Explore using the SBOM to programmatically get a list of images in each Kubernetes release (https://github.com/kubernetes/release/pull/2095); see the image-listing sketch after this list
- [x] Implement automated scanning with Prow and TestGrid for k/k HEAD
- [x] Ensure the scan fails when a vulnerability is found
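Once release SBOMs exist, listing images programmatically reduces to pulling package entries out of the SPDX document. The sketch below is hedged: it assumes the release SBOM is served in SPDX tag-value format at an sbom.k8s.io URL and that container images appear as PackageName entries containing a registry.k8s.io path; the exact URL layout and naming conventions should be verified against the SBOM work linked above.

```go
// sbom-images.go: list container images referenced in a Kubernetes release SBOM.
// Hedged sketch: the URL layout and PackageName conventions are assumptions,
// not a confirmed interface; check the published SBOMs before relying on this.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"os"
	"strings"
)

func main() {
	version := "v1.28.0" // hypothetical release; substitute the one you care about
	url := fmt.Sprintf("https://sbom.k8s.io/%s/release", version)

	resp, err := http.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "fetching SBOM:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// SPDX tag-value documents are line oriented; image packages are assumed
	// to show up as "PackageName: registry.k8s.io/<image>".
	scanner := bufio.NewScanner(resp.Body)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if name, ok := strings.CutPrefix(line, "PackageName:"); ok {
			name = strings.TrimSpace(name)
			if strings.Contains(name, "registry.k8s.io/") {
				fmt.Println(name)
			}
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "reading SBOM:", err)
		os.Exit(1)
	}
}
```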
Ongoing Maintenance
- https://github.com/kubernetes/test-infra/pull/27309
- https://github.com/kubernetes/test-infra/pull/26777
- https://github.com/kubernetes/test-infra/pull/24857
- https://github.com/kubernetes/test-infra/pull/24446
Triage Policy Definition and Implementation
- [ ] Solicit feedback from the SRC and SIG Security co-chairs on the Triage and Resolution policy (https://github.com/kubernetes/community/pull/5853)
- [X] Create a new group for private triage (https://github.com/kubernetes/k8s.io/pull/2342)
- [x] Drive an end-to-end triage of an identified vulnerability to resolution
- [ ] Update the triage and resolution policy based on the end-to-end experience
- [ ] https://github.com/kubernetes/sig-security/issues/1
- [ ] Define and measure mean time to triage and the false-positive rate for each identified vulnerability; see the metric sketch after this list
- [ ] Create a rotating triage role for taking action on identified vulnerabilities
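For the measurement item, mean time to triage is the average of (triaged minus reported) across findings, and the false-positive rate is the share of findings dismissed as not applicable to Kubernetes. The sketch below is purely illustrative; the Finding type and sample data are hypothetical, not an existing SIG Security tool.

```go
// triage-metrics.go: illustrative computation of mean time to triage (MTTT)
// and false-positive rate. The Finding type and sample data are hypothetical.
package main

import (
	"fmt"
	"time"
)

type Finding struct {
	ID            string
	ReportedAt    time.Time
	TriagedAt     time.Time
	FalsePositive bool // triaged as not applicable to Kubernetes
}

// metrics returns the mean time to triage and the false-positive rate.
func metrics(findings []Finding) (mttt time.Duration, fpRate float64) {
	if len(findings) == 0 {
		return 0, 0
	}
	var total time.Duration
	var falsePositives int
	for _, f := range findings {
		total += f.TriagedAt.Sub(f.ReportedAt)
		if f.FalsePositive {
			falsePositives++
		}
	}
	return total / time.Duration(len(findings)),
		float64(falsePositives) / float64(len(findings))
}

func main() {
	day := 24 * time.Hour
	now := time.Now()
	sample := []Finding{
		{ID: "CVE-A", ReportedAt: now.Add(-10 * day), TriagedAt: now.Add(-8 * day)},
		{ID: "CVE-B", ReportedAt: now.Add(-6 * day), TriagedAt: now.Add(-5 * day), FalsePositive: true},
	}
	mttt, fp := metrics(sample)
	fmt.Printf("mean time to triage: %s, false-positive rate: %.0f%%\n", mttt, fp*100)
}
```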
Related Issues and PRs
- Original issue to track Kubernetes build-time dependencies: https://github.com/kubernetes/community/issues/2992
- CVE RSS feed broken: https://github.com/kubernetes/website/issues/29142
- Request for base image patching: https://github.com/kubernetes/release/issues/1833
- Examples of CVE fixes:
  - Build-time dependency bumps: https://github.com/kubernetes/kubernetes/issues/117094
  - Debian base image bumps: https://github.com/kubernetes/kubernetes/pull/102302
  - Distroless base image bumps: https://github.com/kubernetes/kubernetes/pull/100566
  - Build-time dependency bumps: https://github.com/kubernetes/kubernetes/issues/100401
- CNCF TAG Security discussion: https://github.com/cncf/tag-security/issues/170
/sig security release architecture auth /area config testing code-organization dependency release-eng release-eng/security /committee product-security /kind feature
@PushkarJ: The label(s) area/config, area/testing, area/release-eng, area/release-eng/security
cannot be applied, because the repository doesn't have them.
cc @kubernetes/sig-security-leads / @tabbysable / @IanColdwater, @dims, @navidshaikh, @puerco, @justaugustus
/assign @PushkarJ
In addition to the SBOM item, you can find the images we produce in the SBOM to close this one: "Identify a list of container images managed by github.com/kubernetes/release".
I can help with that!
/cc
I'd love to help with "Evaluate go vuln-db tool as an additional scanning tool" and any of the Container Images tasks.
I remember us talking about automation for updating k8s with the latest debian-[base, iptables] images. Are we tracking that here? I'd love to help with that.
/transfer sig-security
@PushkarJ: The label(s) area/config, area/testing, area/code-organization, area/release-eng, area/release-eng/security
cannot be applied, because the repository doesn't have them.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale