descheduler
Should strategy names be shortened?
The naming pattern for the original few strategies was RemovePodsViolating..., and this pattern has since drifted in some strategies (RemoveDuplicates, PodLifeTime, LowNodeUtilization), which raises the question: is the "RemovePods" prefix necessary?
It seems implied that each strategy will be removing pods that violate some constraint, so for ease of readability and configuration it might be worth removing the prefix from the existing strategies (possibly still supporting the old names for compatibility).
It may not be a big enough issue to justify changing the old strategy names, but we should consider setting a precedent for whether this is necessary for future strategies. Open to any feedback on this!
/kind cleanup
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
This can be targeted for the v2alpha version bump.
- RemoveDuplicates ==> Duplicates
- LowNodeUtilization
- HighNodeUtilization
- RemovePodsViolatingInterPodAntiAffinity ==> PodsViolatingInterPodAntiAffinity
- RemovePodsViolatingNodeAffinity ==> PodsViolatingNodeAffinity
- RemovePodsViolatingNodeTaints ==> PodsViolatingNodeTaints
- RemovePodsViolatingTopologySpreadConstraint ==> PodsViolatingTopologySpreadConstraint
- RemovePodsHavingTooManyRestarts ==> PodsHavingTooManyRestarts
- PodLifeTime
Agree that the prefix Remove is redundant, since it is already implied that the strategy is going to remove pods.
@hanumanthan I would go a step further and drop all the redundant prefixes, not just Remove, e.g.
RemovePodsViolatingInterPodAntiAffinity => InterPodAntiAffinity
RemovePodsViolatingNodeAffinity => NodeAffinity
... etc
The question is whether we actually want to do this. I don't think it needs to be strictly tied to an API version, as we could easily just code a translation layer in the config parsing logic. But it will eventually be a breaking change when we remove that layer.
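As a rough illustration only (not the project's actual code), such a translation layer could be a small lookup applied while the policy is parsed; the legacyStrategyNames map and resolveStrategyName helper below are hypothetical, and the shortened names are just the ones suggested above:

```go
package config

// legacyStrategyNames maps current (long) strategy names to the proposed
// shortened names. Hypothetical sketch: the real mapping would be filled in
// once the new names are agreed on.
var legacyStrategyNames = map[string]string{
	"RemoveDuplicates":                        "Duplicates",
	"RemovePodsViolatingInterPodAntiAffinity": "InterPodAntiAffinity",
	"RemovePodsViolatingNodeAffinity":         "NodeAffinity",
	// ... and so on for the remaining strategies.
}

// resolveStrategyName returns the canonical (short) name for a strategy,
// accepting either the old or the new spelling so that existing policy
// files keep working until the compatibility layer is removed.
func resolveStrategyName(name string) string {
	if short, ok := legacyStrategyNames[name]; ok {
		return short
	}
	return name
}
```

With something like this in place, old policy files would keep working transparently, and the breaking change is deferred to whenever the alias map is deleted.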
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
From RemovePodsViolatingNodeTaints:
Summarizing:
- the descheduler never removes a pod, it only evicts one => the Remove keyword can be dropped
- the descheduler always evicts pods only => the Pods keyword is redundant
- taints are always present on nodes only => should we drop the Node keyword?
That would turn the name into ViolatingTaints. Checking the naming under https://github.com/kubernetes/kubernetes/tree/master/pkg/scheduler/framework/plugins and https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/pkg, many plugins are named after a feature, e.g. nodeaffinity, podtopologyspread, coscheduling, crossnodepreemption, podstate. Given we are checking against certain features, we might follow the same pattern, e.g. Taints, NodeAffinity, TopologySpreadConstraint (given all plugins are evicting pods "violating" some constraint, we can leave out the Violating keyword as well). Turning all plugin names into:
- duplicates
- failedpods
- nodeaffinity (https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
- taints (https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)
- interpodantiaffinity (https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
- podlifetime
- toomanyrestarts
- topologyspreadconstraint (https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/)
- nodeutilization
Plugins such as nodeaffinity, taints, interpodantiaffinity and topologyspreadconstraint operate on known features. In the remaining cases it's up to us to choose the naming.
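Purely for reference, the lowercase names above could be collected in one place so the policy parser and docs share a single source of truth; the package and constant identifiers below are illustrative, not the descheduler's actual layout:

```go
package names

// Proposed lowercase plugin names (illustrative sketch; the real
// registration mechanism and package structure may differ).
const (
	Duplicates               = "duplicates"
	FailedPods               = "failedpods"
	NodeAffinity             = "nodeaffinity"
	Taints                   = "taints"
	InterPodAntiAffinity     = "interpodantiaffinity"
	PodLifeTime              = "podlifetime"
	TooManyRestarts          = "toomanyrestarts"
	TopologySpreadConstraint = "topologyspreadconstraint"
	NodeUtilization          = "nodeutilization"
)
```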
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.