Document feature gate lookup
This is a Feature Request
What would you like to be added
- Per https://github.com/kubernetes/kubernetes/issues/87869#issuecomment-1465634605, you can find out which feature gates are enabled for the API server using an HTTP request (first sketch after this list).
- The other option is to look at mirror pods and DaemonSets to see what command-line options and configuration files are being used (second sketch below).
- Where config files are mounted in from the control plane nodes, you may need to look there.
- This is what you'd do to see feature gates for kube-proxy (for example): there's no API providing that in core Kubernetes, and I don't think the metrics for kube-proxy are as easy to access.
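For the API server route, here's a minimal client-go sketch, assuming a kubeconfig at the default location and a cluster new enough (roughly v1.26+) to expose the kubernetes_feature_enabled metric:

```go
package main

import (
	"context"
	"fmt"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GET /metrics on the API server; recent kube-apiservers report one
	// kubernetes_feature_enabled sample per feature gate, with its stage.
	raw, err := clientset.RESTClient().Get().AbsPath("/metrics").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(raw), "\n") {
		if strings.HasPrefix(line, "kubernetes_feature_enabled") {
			fmt.Println(line)
		}
	}
}
```

It's the programmatic equivalent of `kubectl get --raw /metrics | grep kubernetes_feature_enabled`, and it only needs RBAC access to the /metrics non-resource URL, not access to the control plane nodes.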
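And a sketch of the DaemonSet route; the namespace and DaemonSet name below assume a kubeadm-style cluster, other installers may differ:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Assumption: kube-proxy runs as a DaemonSet named "kube-proxy" in
	// kube-system, as in kubeadm clusters; other installers differ.
	ds, err := clientset.AppsV1().DaemonSets("kube-system").Get(
		context.TODO(), "kube-proxy", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range ds.Spec.Template.Spec.Containers {
		// A --feature-gates=... flag would show up here; gates set via a
		// mounted config file still require reading that file separately.
		fmt.Printf("%s: command=%v args=%v\n", c.Name, c.Command, c.Args)
	}
}
```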
Why is this needed
People working with Kubernetes want to know what features their cluster provides. It can be hard to work out which feature gates are enabled, especially if you don't have superuser access to the control plane.
Comments
/language en
/kind feature
/triage accepted
BTW, I am thinking about adding a JSON or table feed for https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates-removed/ and https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/. Then users could easily check whether a feature gate is enabled in a given Kubernetes version. (A gate may be removed after it reaches GA.) A sketch of what an entry might look like follows below.
This request is similar to the official CVE feed: https://kubernetes.io/docs/reference/issues-security/official-cve-feed/.
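To make the feed idea concrete, here's a rough sketch of one possible entry shape and how a client might consume it; the schema and the gate data are entirely hypothetical, nothing like it exists yet:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// featureGateEntry is a hypothetical record shape for the proposed feed;
// no such schema exists today, the field names are illustrative only.
type featureGateEntry struct {
	Name    string `json:"name"`
	Stage   string `json:"stage"`           // Alpha, Beta, GA, Deprecated
	Default bool   `json:"default"`         // enabled by default at this stage?
	Since   string `json:"since"`           // release the gate first appeared in
	Until   string `json:"until,omitempty"` // release the gate was removed in, if any
}

func main() {
	// Fake payload standing in for what the feed might serve; the gate
	// name and version numbers below are made up for illustration.
	payload := []byte(`[{"name":"ExampleGate","stage":"GA","default":true,"since":"1.20","until":"1.26"}]`)

	var entries []featureGateEntry
	if err := json.Unmarshal(payload, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Printf("%s: %s (default=%v, since %s, removed %s)\n",
			e.Name, e.Stage, e.Default, e.Since, e.Until)
	}
}
```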
@pacoxu, you might like to reopen https://github.com/kubernetes/website/issues/25645 as a related task
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.