PV topology-aware scheduling even for Multi-AZ AWS ASG
Which component are you using?: EKS 1.21 and the corresponding Cluster Autoscaler (CAS)
AWS and CAS documentation currently recommend using Availability Zone-bounded Auto Scaling Groups (ASGs) when persistent storage is used in the form of EBS-backed PersistentVolumes (PVs).
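For context, that recommendation amounts to one node group (ASG) per AZ rather than a single multi-AZ ASG. A minimal, hedged eksctl sketch of such a setup (cluster name, region and sizes are placeholders) looks roughly like this:

```yaml
# Hypothetical eksctl config: one managed node group per AZ, so the
# autoscaler can map each group to exactly one zone even at size 0.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster        # placeholder
  region: eu-central-1      # placeholder
managedNodeGroups:
  - name: ng-eu-central-1a
    availabilityZones: ["eu-central-1a"]
    minSize: 0
    maxSize: 5
    desiredCapacity: 0
  - name: ng-eu-central-1b
    availabilityZones: ["eu-central-1b"]
    minSize: 0
    maxSize: 5
    desiredCapacity: 0
```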
Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:
Imagine you have a multi-AZ ASG scaled to 0 and deployments whose pods use already provisioned PVs (bound to PVCs). After a scale-up from 0 it happens quite often that CAS provisions a node in the wrong AZ, i.e. the node is created in a different zone than the PV, which leaves the pods "Pending".
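As an illustration (all names are hypothetical), a workload of the following shape reproduces the problem when its PVC is already bound to an EBS volume in one zone and the multi-AZ ASG brings up the new node in a different zone; the pod then stays Pending because of the volume's node affinity:

```yaml
# Hypothetical deployment; the PVC "zonal-app-data" is assumed to be
# already bound to an EBS-backed PV that lives in a single AZ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zonal-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zonal-app
  template:
    metadata:
      labels:
        app: zonal-app
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: zonal-app-data   # pre-bound PVC
```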
Describe the solution you'd like.:
Topology-aware volume scheduling has been available since Kubernetes v1.12. When a PVC is bound to a PV, the PV's topology information is available and could drive a proper scheduling decision, so that a node is brought up in the same Availability Zone (AZ) as the PV.
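For illustration, the topology information a dynamically provisioned EBS PV carries looks roughly like this (name, volume ID and zone are placeholders; depending on the CSI driver version the key may be topology.kubernetes.io/zone instead):

```yaml
# Sketch of the node affinity recorded on an EBS-backed PV; the
# scheduler can only satisfy it if a node exists, or is brought up,
# in that zone.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example                       # placeholder
spec:
  capacity:
    storage: 20Gi
  accessModes: ["ReadWriteOnce"]
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # placeholder EBS volume ID
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.ebs.csi.aws.com/zone
              operator: In
              values: ["eu-central-1a"]
```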
Describe any alternative solutions you've considered.:
AWS Karpenter already solves this problem - see: https://karpenter.sh/v0.7.3/tasks/scheduling/#persistent-volume-topology https://github.com/aws/karpenter/blob/main/pkg/controllers/selection/volumetopology.go
Additional context.:
/area provider/aws
Hey @youwalther65, the reason Karpenter is able to handle this nicely whilst the CA currently can't is their different operating models: Karpenter creates nodes directly, whereas the CA manipulates ASGs. That means we're restricted to setting the desired capacity of the ASG and letting the ASG decide which AZ to launch new instances in, so for a multi-AZ ASG that decision is left to AWS' backend systems.
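Given that model, the commonly used mitigation is the one the docs recommend: run one AZ-bounded ASG per zone and let the CA balance across them, so a scale-up always lands in a known zone. A hedged sketch of the relevant autoscaler flags (image tag and cluster tag are placeholders) is:

```yaml
# Fragment of the cluster-autoscaler container spec; with one ASG per
# AZ the autoscaler knows which zone a scale-up will land in, and
# --balance-similar-node-groups keeps the per-AZ groups evenly sized.
containers:
  - name: cluster-autoscaler
    image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.2   # placeholder tag
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --balance-similar-node-groups
      - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/demo-cluster
```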
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.