cache assigned pod count
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR speeds up the Coscheduling plugin's counting of Pods that have already been assumed, by caching the assigned Pod count per PodGroup instead of recomputing it.
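For illustration only, here is a minimal sketch of the kind of cache described above: a per-PodGroup counter of assumed Pods, updated as Pods are assumed and forgotten, so Permit does not have to list and filter Pods on every call. The names `assumedPodCache`, `Assume`, `Forget`, and `Assumed` are hypothetical, not this PR's actual API.

```go
// Hypothetical sketch: cache the number of assumed Pods per PodGroup so that
// Permit does not have to list and filter Pods on every call.
// Names (assumedPodCache, Assume, Forget, Assumed) are illustrative only.
package coscheduling

import "sync"

type assumedPodCache struct {
	mu    sync.RWMutex
	count map[string]int // key: <namespace>/<podGroupName>
}

func newAssumedPodCache() *assumedPodCache {
	return &assumedPodCache{count: map[string]int{}}
}

// Assume records that a Pod of the given PodGroup has been assumed.
func (c *assumedPodCache) Assume(pgKey string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.count[pgKey]++
}

// Forget reverses Assume, e.g. when a Pod is rejected or unreserved.
func (c *assumedPodCache) Forget(pgKey string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.count[pgKey] > 0 {
		c.count[pgKey]--
	}
}

// Assumed returns the cached count; Permit can compare it against MinMember.
func (c *assumedPodCache) Assumed(pgKey string) int {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.count[pgKey]
}
```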
Which issue(s) this PR fixes:
Fix #707
Special notes for your reviewer:
Does this PR introduce a user-facing change?
NONE. This is a performance enhancement. Users do not need to do anything to use it.
Hi @KunWuLuan. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Deploy Preview for kubernetes-sigs-scheduler-plugins canceled.
| Name | Link |
|---|---|
| Latest commit | 1c63722a8288800760a4c35c27724c7fb2faa161 |
| Latest deploy log | https://app.netlify.com/sites/kubernetes-sigs-scheduler-plugins/deploys/6618a37f99df4600082a280c |
/ok-to-test
Could you help fix the CI failures?
@Huang-Wei Hi, I have fixed the CI failures. Please take a look when you have time, thanks.
I forgot one thing about the cache's consistency during one scheduling cycle - we will need to:
- snapshot the pg->podNames map at the beginning of the scheduling cycle (PreFilter), so that we can treat it as the source of truth during the whole scheduling cycle
- support preemption (see the sketch below):
  - implement the Clone() function
  - for each PodAddition dry run, if the pod is hit, add it
  - for each PodDeletion dry run, if the pod is hit, remove it
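For reference, a minimal sketch of what such a snapshot could look like, assuming the pg->podNames set is stored in CycleState and mutated during the preemption dry runs via the plugin's PreFilterExtensions. The type `pgSnapshotState` and its `addPod`/`removePod` helpers are illustrative, not the plugin's actual code.

```go
package coscheduling

import (
	"k8s.io/apimachinery/pkg/util/sets"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// pgSnapshotState is a hypothetical CycleState entry holding the
// pg -> podNames snapshot taken in PreFilter.
type pgSnapshotState struct {
	podNames map[string]sets.Set[string] // pod group key -> assumed pod names
}

// Clone lets preemption dry runs mutate a copy instead of the snapshot
// taken at the beginning of the scheduling cycle.
func (s *pgSnapshotState) Clone() framework.StateData {
	out := &pgSnapshotState{podNames: make(map[string]sets.Set[string], len(s.podNames))}
	for pg, names := range s.podNames {
		out.podNames[pg] = names.Clone()
	}
	return out
}

// addPod / removePod would be called from the plugin's PreFilterExtensions
// (AddPod / RemovePod) during preemption dry runs.
func (s *pgSnapshotState) addPod(pgKey, podName string) {
	if s.podNames[pgKey] == nil {
		s.podNames[pgKey] = sets.New[string]()
	}
	s.podNames[pgKey].Insert(podName)
}

func (s *pgSnapshotState) removePod(pgKey, podName string) {
	if names, ok := s.podNames[pgKey]; ok {
		names.Delete(podName)
	}
}
```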
We only check the number of assigned Pods in Permit, so I think there is no inconsistency during one scheduling cycle.
And PostFilter will not check the Permit plugin, so the implementation of PodAddition and PodDeletion will have no effect on preemption, right?
What we can do is return framework.Unschedulable if the PodDeletion would make a pod group rejected, but I think that is not enough for preemption in coscheduling.
I think supporting preemption for coscheduling is complicated; maybe we should handle it in another issue, where we can determine the expected behavior for preemption of coscheduling. WDYT? #581
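To make the argument concrete, a hedged sketch of a Permit check that only consults the cached assumed count, assuming a counter like the `assumedPodCache` sketched earlier. The `Coscheduling` struct, the `podGroupOf` helper, the label key lookup, the hard-coded MinMember, and the timeout value are all placeholders, not the plugin's real code.

```go
package coscheduling

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// Coscheduling is a stripped-down stand-in for the real plugin struct;
// cache is the hypothetical assumed-pod counter sketched earlier.
type Coscheduling struct {
	cache *assumedPodCache
}

// podGroupOf is a hypothetical helper returning the pod group key and its
// MinMember; the real plugin reads these from Pod labels and the PodGroup CR.
// The MinMember value here is a placeholder.
func (cs *Coscheduling) podGroupOf(pod *corev1.Pod) (string, int) {
	return pod.Namespace + "/" + pod.Labels["scheduling.x-k8s.io/pod-group"], 3
}

// Permit compares the cached assumed count against MinMember, which is the
// only place this PR's consistency argument relies on.
func (cs *Coscheduling) Permit(ctx context.Context, state *framework.CycleState, pod *corev1.Pod, nodeName string) (*framework.Status, time.Duration) {
	pgKey, minMember := cs.podGroupOf(pod)
	// +1 accounts for the Pod currently being permitted.
	if cs.cache.Assumed(pgKey)+1 < minMember {
		// Not enough assumed Pods yet: hold this Pod in the wait pool.
		return framework.NewStatus(framework.Wait, "waiting for pod group quorum"), 30 * time.Second
	}
	// Quorum reached (the real plugin also wakes up the waiting Pods).
	return framework.NewStatus(framework.Success), 0
}
```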
> And PostFilter will not check the Permit plugin, so the implementation of PodAddition and PodDeletion will have no effect on preemption, right?

Yes, the current preemption skeleton code assumes each plugin only uses PreFilter to pre-calculate state. But for coscheduling, PreFilter can fail early (upon inadequate quorum).
I think the scheduler framework should open up a hook for out-of-tree plugins to choose whether or not to run PreFilter as part of preemption; otherwise, an out-of-tree plugin has to rewrite the PostFilter implementation to hack around that part.

> I think supporting preemption for coscheduling is complicated; maybe we should handle it in another issue, where we can determine the expected behavior for preemption of coscheduling. WDYT?

Let's consolidate all the cases and use a new PR to try to tackle it. Thanks.
@KunWuLuan are you OK with postponing this PR's merge until after I cut the release for v0.28, so that we have more time for soak testing?
And could you add a release note to highlight that it's a performance enhancement?
> @KunWuLuan are you OK with postponing this PR's merge until after I cut the release for v0.28, so that we have more time for soak testing?
> And could you add a release note to highlight that it's a performance enhancement?
Ok, no problem.
> And PostFilter will not check the Permit plugin, so the implementation of PodAddition and PodDeletion will have no effect on preemption, right?
>
> Yes, the current preemption skeleton code assumes each plugin only uses PreFilter to pre-calculate state. But for coscheduling, PreFilter can fail early (upon inadequate quorum).
> I think the scheduler framework should open up a hook for out-of-tree plugins to choose whether or not to run PreFilter as part of preemption; otherwise, an out-of-tree plugin has to rewrite the PostFilter implementation to hack around that part.
>
> I think supporting preemption for coscheduling is complicated; maybe we should handle it in another issue, where we can determine the expected behavior for preemption of coscheduling. WDYT?
>
> Let's consolidate all the cases and use a new PR to try to tackle it. Thanks.
OK. I will try to design a preemption framework in PostFilter, and if an implementation in PostFilter is enough, I will create a new PR to track the KEP. Otherwise I will open a discussion in kubernetes/scheduling-sigs.
/cc
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@KunWuLuan could you resolve the conflicts? It should be good to merge afterwards.
/label tide/merge-method-squash
Deploy Preview for kubernetes-sigs-scheduler-plugins canceled.
| Name | Link |
|---|---|
| Latest commit | edd5da8aaf7be8037c083025bd14791c63f4e192 |
| Latest deploy log | https://app.netlify.com/sites/kubernetes-sigs-scheduler-plugins/deploys/6784fd8ce61d9c00086b9e79 |
@Huang-Wei Hi, I have resolved the conflicts and made the tests pass. Please take a look when you have time. Thanks.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: Huang-Wei, KunWuLuan
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~OWNERS~~ [Huang-Wei]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment