kube-scheduler-simulator
New feature idea: kube-scheduler-simulator-operator
What I want to introduce
A custom resource Simulator and a custom controller to manage it. The controller will be included in the simulator backend binary/container image.
What I'd like to propose is not to create the Simulator CRD on the real kube-apiserver, but to create it on the kube-apiserver included in our simulator. (This could be a point for discussion, though.) All users need to do to use kube-scheduler-simulator-operator is deploy kube-scheduler-simulator in their cluster with the permission to create Pods. (So they don't need to deploy the controller or create the CRD by themselves!)
Yeah, this feature requires the simulator to be deployed on Kubernetes, since the controller needs to start simulators as new containers. I believe this feature brings new and great possibilities to this project.
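To make the idea a bit more concrete, here is a minimal sketch of what the Simulator custom resource types could look like in Go, in the usual apimachinery/kubebuilder style. The package name, field names, and their meanings are my assumptions for illustration; they are not an actual API of kube-scheduler-simulator.

```go
// Package v1alpha1 is a hypothetical API group for the operator idea above.
// All names here are assumptions for illustration, not the real project API.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// SimulatorSpec describes one simulator instance the controller should start
// as a new container in the cluster.
type SimulatorSpec struct {
	// Image is the simulator backend image to run.
	Image string `json:"image,omitempty"`
	// SchedulerConfig is the scheduler configuration this instance starts
	// with, so each Simulator can use a different scheduler.
	SchedulerConfig string `json:"schedulerConfig,omitempty"`
}

// SimulatorStatus reports the observed state of the simulator instance.
type SimulatorStatus struct {
	// Phase is a coarse indicator such as "Running" or "Failed".
	Phase string `json:"phase,omitempty"`
}

// Simulator is the custom resource the controller reconciles. Per the
// proposal, its CRD would live on the kube-apiserver inside the simulator,
// not on the real cluster's kube-apiserver.
type Simulator struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   SimulatorSpec   `json:"spec,omitempty"`
	Status SimulatorStatus `json:"status,omitempty"`
}

// SimulatorList is a list of Simulator resources.
type SimulatorList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Simulator `json:"items"`
}
```

With types like these, simulators could be created, listed, and deleted from the outside with kubectl or any kube-apiserver client pointed at the simulator's kube-apiserver, which is the workflow described above.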
Why I want to introduce it
With this, we can easily create/use a simulator as a sandbox for simulation. This feature makes it easier to handle multiple simulators and to create or modify simulators from the outside via kubectl or any other kube-apiserver client.
#140 proposes adding the scenario-based simulation feature to kube-scheduler-simulator, and it will be one of the use cases of kube-scheduler-simulator-operator. For more details about scenario-based simulation, please see https://bit.ly/scenario-based-scheduler-simulation.
We might add SimulatorName to Scenario's spec so that the Scenario is run on that simulator. One big advantage of this is that we can use a different scheduler without re-deploying the simulator. (Currently, scenarios created in a simulator run on that same simulator, which means all scenarios run on a single simulator.) A rough sketch of this field follows below.
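Under the same assumptions, a SimulatorName field on Scenario's spec could look like this; Scenario, ScenarioSpec, and the field name are hypothetical and not the API proposed in #140.

```go
// A hypothetical Scenario type carrying a SimulatorName field; these names
// are assumptions for illustration, not the API proposed in #140.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ScenarioSpec describes a single scenario run.
type ScenarioSpec struct {
	// SimulatorName selects the Simulator instance this scenario runs on.
	// Pointing scenarios at different Simulators lets them use different
	// schedulers without re-deploying a simulator.
	SimulatorName string `json:"simulatorName"`
}

// Scenario is the custom resource that drives one scenario-based simulation.
type Scenario struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ScenarioSpec `json:"spec,omitempty"`
}
```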
I'll write the detailed proposal later.
/kind feature
/assign
/priority important-longterm
+1 for the SimulatorName for each different scenario!
/triage accepted
Not to mark as stale.
/area simulator
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle frozen
We've got a KEP for this already. I'll close this and will open other issues for the development of this feature.
/close
@sanposhiho: Closing this issue.
In response to this:
We've got a KEP for this already. I'll close this and will open other issues for the development of this feature.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.