sig-storage-local-static-provisioner
Allow autoscaling based on local static provisioned volumes
Is your feature request related to a problem? Please describe.
To summarise: in an autoscaling scenario, the cluster autoscaler cannot scale up based on volumes provisioned by the local static provisioner, because the use of provisioner: "kubernetes.io/no-provisioner" prevents autoscaling from being triggered, due to how the scheduler volume binder works.
I described the full problem in the autoscaler repository, but after thinking about it, I believe it makes more sense to solve it on the provisioner side.
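For reference, this is the kind of StorageClass in play: the usual local-storage class with delayed binding that the local static provisioner docs recommend, and which the scheduler binder treats as "only bind to PVs that already exist":

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
# No dynamic provisioning: PVs are pre-created by the local static provisioner.
provisioner: kubernetes.io/no-provisioner
# Binding is delayed until a pod using the claim is scheduled.
volumeBindingMode: WaitForFirstConsumer
```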
Describe the solution you'd like in detail
First of all, I have no insight into how CSI drivers work, but one solution to this issue could be that, optionally, the local provisioner deploys a simple CSI driver that somehow allows the PVCs to be bound to the PVs created by the static provisioner. My guess is that the driver would have to "inform" the PVC of the name of the newly created PV, but this is just a guess.
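A rough sketch of what "informing the PVC" could mean in practice (an illustration only, not how such a driver would necessarily work): Kubernetes already allows a claim to be pre-bound to a specific PV via spec.volumeName, so a component that knows the name of a PV created by the static provisioner could patch the pending claim accordingly. The claim and volume names below are hypothetical.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-myapp-0              # hypothetical claim name
spec:
  storageClassName: local-storage
  # Pre-binding: point the claim at the PV the static provisioner created.
  volumeName: local-pv-abcd1234   # hypothetical PV name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```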
Describe alternatives you've considered
Introducing another keyword instead of provisioner: "kubernetes.io/no-provisioner"
that could indicate to the scheduler binder that the volume will exist, even though it does not exist yet (as happens with dynamic provisioners). This would first need to be changed in the Kubernetes scheduler binder, and then adopted by the local static provisioner.
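Purely as an illustration of this alternative, a StorageClass might look like the sketch below. The keyword is hypothetical and does not exist in Kubernetes today; the idea is that the binder (and therefore the cluster autoscaler) would treat the class as "a matching PV will appear once a suitable node exists" instead of "only bind to PVs that already exist".

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
# Hypothetical keyword (does not exist today): signals that PVs for this class
# are created out-of-band by the local static provisioner once a node is up,
# so the scheduler binder / autoscaler should not treat the claim as unsatisfiable.
provisioner: kubernetes.io/deferred-static-provisioner
volumeBindingMode: WaitForFirstConsumer
```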
Additional context
Full context in the mentioned comment and some previous comments in that thread: https://github.com/kubernetes/autoscaler/issues/1658#issuecomment-1036205889
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten