
Upgrade Autoscaler Components to use Debian 12 Distroless

Open jhawkins1 opened this issue 1 year ago • 1 comments

Which component are you using?: Cluster Autoscaler and VPA

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:

Debian 12 Distroless images are now available, and the latest releases of Kubernetes and many Kubernetes-related projects have already moved to Debian 12, so we request that the Autoscaler components move to Debian 12 Distroless as well. Another benefit of moving to Debian 12 is that there is a population of current and future operating system vulnerabilities (CVEs) that Debian is addressing only in Debian 12, not in Debian 11.

This aligns the OS with other Kubernetes-related projects, reduces the fan-out of multiple OSes (or OS versions) across components, and makes it possible to obtain OS patches for vulnerabilities that Debian has decided to fix only in its latest stable release.

Describe the solution you'd like.: Upgrade Autoscaler components to use Debian 12 Distroless.
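For whoever picks this up, the change is typically a one-line base-image swap in each component's Dockerfile. A minimal sketch, assuming a statically linked binary on the distroless `static` variant (the actual autoscaler Dockerfiles, image variant, and tags may differ):

```dockerfile
# Hypothetical sketch: bump the distroless base from Debian 11 to Debian 12.
# Before (Debian 11 based):
#   FROM gcr.io/distroless/static-debian11:nonroot
# After (Debian 12 based):
FROM gcr.io/distroless/static-debian12:nonroot

# The statically linked binary is copied in unchanged; only the base layer
# (and therefore the OS CVE surface) changes.
COPY cluster-autoscaler /cluster-autoscaler
ENTRYPOINT ["/cluster-autoscaler"]
```

If the component links against glibc rather than being fully static, the `base-debian12` or `cc-debian12` distroless variants would be the corresponding upgrade targets instead.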

Describe any alternative solutions you've considered.: n/a

Additional context.: n/a

jhawkins1 avatar May 14 '24 20:05 jhawkins1

Following up on the feature request for upgrading the Autoscaler components, specifically Cluster Autoscaler and VPA, to Debian 12 Distroless. This upgrade is crucial for ensuring these components align with the broader Kubernetes ecosystem, which has largely transitioned to Debian 12. The primary motivation for this request is the enhanced security posture Debian 12 offers, particularly regarding the handling of operating system vulnerabilities (CVEs) that are not being addressed in Debian 11.

The benefits of this migration include improved alignment with Kubernetes-related projects, reduced complexity in managing multiple OS versions, and enhanced security through access to OS patches for vulnerabilities addressed exclusively in the latest stable release.

Could you please provide an update on the status of this request? Specifically, it would be helpful to know if there is a targeted release date or version number by which this upgrade is expected to be completed.

kady1711 avatar Jun 21 '24 14:06 kady1711

/area cluster-autoscaler
/area vertical-pod-autoscaler

adrianmoisey avatar Jul 08 '24 18:07 adrianmoisey

/remove-area vertical-pod-autoscaler

adrianmoisey avatar Sep 24 '24 08:09 adrianmoisey

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Dec 23 '24 08:12 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jan 22 '25 09:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Feb 21 '25 09:02 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Feb 21 '25 09:02 k8s-ci-robot