scheduler-plugins
Coscheduling: Condition of PodGroup Failed Phase
I found that a PodGroup enters the Failed phase when it meets the conditions below:
- There are Pods in the Failed phase
- The number of Pods in the Running/Succeeded/Failed phases is greater than minMember
https://github.com/kubernetes-sigs/scheduler-plugins/blob/84d3e79188fcbf11f0e7bfdf1a261116ef7f12c9/pkg/controller/podgroup.go#L259-L262
I am wondering whether we should add a condition: the number of Pods in the Scheduled/Running/Succeeded phases is smaller than minMember.
This way, a PodGroup is only marked Failed when the number of Pods that can still reach the Succeeded phase is smaller than minMember.
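For illustration, a rough sketch of the check I have in mind; all identifiers below (shouldMarkFailed, scheduled, running, succeeded, failed, minMember) are made up for the example and are not the actual names used in pkg/controller/podgroup.go:

```go
// shouldMarkFailed sketches the Failed-phase check with the proposed extra
// condition; the names are illustrative, not the controller's identifiers.
func shouldMarkFailed(scheduled, running, succeeded, failed, minMember int) bool {
	return failed != 0 && // some Pods have failed (current check)
		running+succeeded+failed >= minMember && // Running/Succeeded/Failed Pods reach the quorum (current check)
		scheduled+running+succeeded < minMember // proposed extra condition
}
```

With the extra condition, a PodGroup that has failed Pods but already has minMember Pods in the Scheduled/Running/Succeeded phases would not be marked Failed.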
> The number of Pods in the Scheduled/Running/Succeeded phases is smaller than minMember. This way, a PodGroup is only marked Failed when the number of Pods that can still reach the Succeeded phase is smaller than minMember.
This is somewhat captured in the previous if block, but we don't compose the logic in an if...else... manner, so yes, it's possible that failed != 0 but the running/completed Pods have already met the quorum.
@denkensk I think we should polish the logic a bit: consider the conditions thoroughly, and compose an if...else...-like flow; otherwise, the current logic is a bit crappy - the latter if can overwrite the phase set previously, which doesn't read well and isn't maintainable.
> This is somewhat captured in the previous if block, but we don't compose the logic in an if...else... manner, so yes, it's possible that failed != 0 but the running/completed Pods have already met the quorum.
Yes, to be specific, if failed != 0 but the running/completed Pods have already met the quorum, the expected status of the PodGroup is Running, while the actual status of the PodGroup will be set to Failed:
- first, it is captured in the previous if block, and the PodGroup's status is set to Running;
- later, it is captured in this block, and the PodGroup's status is set to Failed.
I think we can simply add a condition to the judgement of the Failed phase, or refactor the whole state machine into an if...else... flow. If we decide to polish the logic, I am willing to have a try.
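For example, the phase decision could be sketched as a single switch (an if...else... chain reads the same way), where cases are evaluated top to bottom and only the first match applies, so a later check can never overwrite an earlier decision. All names below are illustrative, and the real controller tracks more phases and inputs:

```go
type Phase string

const (
	PodGroupPending  Phase = "Pending"
	PodGroupRunning  Phase = "Running"
	PodGroupFailed   Phase = "Failed"
	PodGroupFinished Phase = "Finished"
)

// computePhase decides the phase in one pass; only the first matching case
// applies, so nothing set earlier can be overwritten by a later check.
func computePhase(scheduled, running, succeeded, failed, minMember int) Phase {
	switch {
	case succeeded >= minMember:
		return PodGroupFinished
	case failed != 0 && scheduled+running+succeeded < minMember:
		// Not enough Pods are left that could still succeed.
		return PodGroupFailed
	case running+succeeded >= minMember:
		return PodGroupRunning
	default:
		return PodGroupPending // remaining transitions omitted for brevity
	}
}
```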
Also, I am wondering whether the current Scheduling phase is actually needed. I think maybe we can combine PreScheduling and Scheduling into one phase.
A Scheduled Pod is a Pod that has already been bound, which implies that the number of Pods that can be scheduled (bound or not bound) has reached minMember. So the current definition of the Scheduling phase is: there is at least one bound Pod, while the total number of bound Pods is smaller than minMember.
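In other words, a minimal sketch of these two current conditions, with bound as the number of already-bound Pods in the group (the names are mine, not the controller's):

```go
// schedulingOrScheduled sketches the two phase conditions described above;
// bound is the number of Pods in the PodGroup already bound to a node.
func schedulingOrScheduled(bound, minMember int) string {
	if bound >= minMember {
		return "Scheduled"
	}
	if bound >= 1 { // at least one bound Pod, but fewer than minMember
		return "Scheduling"
	}
	return "" // earlier phases (Pending/PreScheduling) omitted here
}
```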
> I think we should polish the logic a bit: consider the conditions thoroughly, and compose an if...else...-like flow; otherwise, the current logic is a bit crappy - the latter if can overwrite the phase set previously, which doesn't read well and isn't maintainable.
+1 It's better to change it to an if...else... flow. The current implementation is hard to read.
> Also, I am wondering whether the current Scheduling phase is actually needed. I think maybe we can combine PreScheduling and Scheduling into one phase.
I think it is better to merge Scheduling and PreScheduling together. We should try to avoid updating the status of the pod group in the scheduler.
> I think it is better to merge Scheduling and PreScheduling together. We should try to avoid updating the status of the pod group in the scheduler.
I suppose the merged phase's name would be Scheduling, and its conditions would be (see the sketch after this list):
- the current phase is Pending;
- the number of created Pods >= minMember;
- the number of Scheduled Pods < minMember (this condition is implied by the condition of the Scheduled phase).
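A rough sketch of that condition (hypothetical names again; phase is the PodGroup's current phase, created and scheduled are Pod counts):

```go
// mergedSchedulingCondition sketches when the proposed merged Scheduling
// phase would apply; all identifiers are illustrative.
func mergedSchedulingCondition(phase string, created, scheduled, minMember int) bool {
	return phase == "Pending" && // the current phase is Pending
		created >= minMember && // enough Pods of the group have been created
		scheduled < minMember // not yet enough bound Pods to be Scheduled
}
```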
@denkensk could you please give some suggestions?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, lifecycle/stale is applied
> - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
> - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
> You can:
> - Reopen this issue with /reopen
> - Mark this issue as fresh with /remove-lifecycle rotten
> - Offer to help out with Issue Triage
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.