In-place-update-pod-resources should align with Pod Scheduling Readiness
Enhancement Description
- One-line enhancement description (can be used as a release note): 1287-in-place-update-pod-resources
- Kubernetes Enhancement Proposal: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1287-in-place-update-pod-resources
- Discussion Link:
- Primary contact (assignee):
- Responsible SIGs:
- Enhancement target (which target equals to which milestone):
- Alpha release target (x.y):
- Beta release target (x.y):
- Stable release target (x.y):
- [ ] Alpha
  - [ ] KEP (k/enhancements) update PR(s):
  - [ ] Code (k/k) update PR(s):
  - [ ] Docs (k/website) update PR(s):
The Pod Scheduling Readiness feature empowers users to implement their own custom resource quotas. In-place-update-pod-resources should align with Pod Scheduling Readiness, so that users can define and apply their own ResourceQuota implementations.
We need the ability to add a scaling readiness gate that acts like a finalizer or scheduling gate: a resize would not take effect until the gate is removed. Users could then remove the gate dynamically from their own controller, after validating the newly allocated resources.
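For context, Pod Scheduling Readiness works by listing gates in `spec.schedulingGates`; the scheduler will not bind the pod until every gate is removed. Below is a minimal client-go sketch of a gated pod. The gate name is a made-up example, and a resize-specific gate (what this issue asks for) does not exist yet:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createGatedPod creates a pod that stays in the SchedulingGated state until
// all scheduling gates are removed, much like a finalizer blocks deletion.
// "example.com/quota-check" is a hypothetical gate name for illustration.
func createGatedPod(ctx context.Context, client kubernetes.Interface) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "gated-pod", Namespace: "default"},
		Spec: corev1.PodSpec{
			SchedulingGates: []corev1.PodSchedulingGate{
				{Name: "example.com/quota-check"},
			},
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.k8s.io/pause:3.9",
			}},
		},
	}
	return client.CoreV1().Pods(pod.Namespace).Create(ctx, pod, metav1.CreateOptions{})
}
```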
/cc @Huang-Wei @SergeyKanzhelev @liggitt
Why is there a separate issue for this? Shouldn't this be folded into the in-place KEP and issue (1287)?
Sorry, I'm new to this repo. I will comment in the issue you sent.
Hmm, currently in-place-vpa doesn't support custom resources yet. This requires a more concrete plan and use cases, and I think that's a beta item. Can we consider the alignment in the beta phase?
@Jeffwan I'd like to clarify that this problem isn't about custom resources. Rather, it concerns a recently introduced Kubernetes feature called "Pod Scheduling Readiness". You can find detailed information about this feature here: https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/3521-pod-scheduling-readiness.
In essence, the concept of a scheduling gate is somewhat similar to that of a finalizer. When a pod has a scheduling gate, it won't be scheduled until the gate is removed, just as a pod with a finalizer won't be deleted until the finalizer is removed.
One significant motivation behind introducing this feature is to empower users to implement their own ResourceQuota, as outlined in User Story 1 in the link provided above.
I initiated this issue because I believe there should be an option for a scaling readiness gate with in-place-vpa. Without it, implementing ResourceQuota in scenarios where in-place-vpa is allowed may not be feasible.
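To make the idea concrete, here is a rough, hypothetical sketch of the controller side: once the user's own quota logic approves the pod's (newly requested) resources, the controller strips the gate so the pod can proceed. The gate name and the quotaAllows helper are assumptions for illustration, not an existing API (same imports as the sketch above):

```go
// quotaAllows is a placeholder for a user-defined ResourceQuota decision,
// e.g. checking the pod's newly requested resources against a budget.
func quotaAllows(pod *corev1.Pod) bool { return true }

// removeSchedulingGate drops gateName from the pod's scheduling gates and
// updates the pod; with no gates left, the scheduler (or, under this
// proposal, an in-place resize) is free to proceed.
func removeSchedulingGate(ctx context.Context, client kubernetes.Interface,
	pod *corev1.Pod, gateName string) error {
	if !quotaAllows(pod) {
		return nil // keep the gate; the pod stays gated until quota frees up
	}
	kept := pod.Spec.SchedulingGates[:0]
	for _, g := range pod.Spec.SchedulingGates {
		if g.Name != gateName {
			kept = append(kept, g)
		}
	}
	pod.Spec.SchedulingGates = kept
	_, err := client.CoreV1().Pods(pod.Namespace).Update(ctx, pod, metav1.UpdateOptions{})
	return err
}
```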
@Barakmor1 Thanks for the details. I misunderstood the scope and I will take a look at the KEP and come back to you on the potential change
/sig node
/sig scheduling
/cc @ahg-g
/cc @mrunalp
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.