Add two new options, --max-requests-inflight and --max-mutating-requests-inflight, to the control plane's default options
What would you like to be added:
APF (API Priority and Fairness) is a mechanism to control the behavior of the Kubernetes API server in an overload situation and is a key task for cluster administrators. The kube-apiserver has some controls available (i.e. the --max-requests-inflight and --max-mutating-requests-inflight command-line flags) to limit the amount of outstanding work that will be accepted, preventing a flood of inbound requests from overloading and potentially crashing the API server.
So it would be great if we could customize these parameters in Kubespray. There is also the --enable-priority-and-fairness flag, which enables or disables the APF feature; maybe we could add another option for that as well.
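For context, a minimal sketch of how these flags can be expressed through a kubeadm ClusterConfiguration; the values shown (400 and 200) are the kube-apiserver built-in defaults:

```yaml
# Sketch only: the APF-related flags expressed as kubeadm apiServer extraArgs.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    max-requests-inflight: "400"          # upstream default
    max-mutating-requests-inflight: "200" # upstream default
    enable-priority-and-fairness: "true"  # APF is on by default since 1.20
```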
Why is this needed:
To get more context about the feature: https://kubernetes.io/docs/concepts/cluster-administration/flow-control/
I can volunteer to do this!
Note that you can use kube_kubeadm_apiserver_extra_args to pass additional configuration to kubeadm, and thus to the apiserver. Is that insufficient?
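For example, a minimal sketch of overriding these flags with that existing variable; the flag names are the real kube-apiserver flags, while the values and the inventory path are illustrative:

```yaml
# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml (typical kubespray layout)
kube_kubeadm_apiserver_extra_args:
  max-requests-inflight: "800"           # illustrative; upstream default is 400
  max-mutating-requests-inflight: "400"  # illustrative; upstream default is 200
  enable-priority-and-fairness: "true"
```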
I agree, but I thought we could create another group of variables that can be configured together for the APF feature. For example, if you enable APF, we could provide reasonable default values for these flags. Also, APF is enabled by default since 1.20, so for someone who wants to disable it, we could expose a simple on/off option without them needing to know what is going on behind the scenes (see the sketch below).
Does that make sense?
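To illustrate, the grouped variables could look something like this; all variable names below are hypothetical and do not exist in kubespray today:

```yaml
# Hypothetical sketch of the proposed variable group; none of these names exist yet.
kube_apiserver_enable_priority_and_fairness: true   # maps to --enable-priority-and-fairness
kube_apiserver_max_requests_inflight: 400           # maps to --max-requests-inflight
kube_apiserver_max_mutating_requests_inflight: 200  # maps to --max-mutating-requests-inflight
```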
if you enable APF, we could provide reasonable default values for these flags. Also, APF is enabled by default since 1.20 ...
I think the defaults bundled in the apiserver are reasonable enough, don't you?
so for someone who wants to disable it, we could expose a simple on/off option without them needing to know what is going on behind the scenes.
I don't see a use case where you would disable APF without having at least some knowledge of how it works. If you're trying to debug the apiserver not responding fast enough to some requests, you certainly need to know something about APF.
I think the defaults bundled in the apiserver are reasonable enough, don't you?
Great point though, I agree!
If you're trying to debug the apiserver not responding fast enough to some requests, you certainly need to know something about APF.
I agree! But we can still streamline passing these variables to kube-apiserver by removing the need to know the flags.
I agree! But we can still streamline passing these variables to kube-apiserver by removing the need to know the flags.
I don't see the benefit. Knowing the flags or knowing the kubespray variables is pretty much equivalent, and that would be one more thing we need to document, plus it grows the template.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.