kube-state-metrics
Support regular expressions in the --namespaces and --namespaces-denylist command line arguments
What would you like to be added:
Support regular expressions in the new --namespaces-denylist and --namespaces flags.
Why is this needed: Users with a large number of dynamically created namespaces currently cannot exclude them from kube-state-metrics, because the namespace names are not known at the time kube-state-metrics is deployed.
Describe the solution you'd like
Ability to use standard regex patterns, similar to the way the --metric-allowlist and --metric-denylist flags work.
Additional context
Thanks for creating the issue. Would https://github.com/kubernetes/kube-state-metrics/pull/1656 solve your problem?
Well, it depends on whether the label value needs to be fully qualified as well or whether a regex can be used.
I still think being able to accept a regex in the namespace CLI arguments would be beneficial.
@stevejr, currently #1656 only contains the parsing of the syntax to allow for more complex arguments to the --namespaces option. Right now the PR only transforms the syntax into a data structure (a map in this case) which we can later use. The eventual implementation, i.e. the way the data is used, is open for discussion.
I think using a regular expression for this is the appropriate choice and more in line with the existing --metric-allowlist and --metric-denylist options (which you yourself mentioned as well).
@fpetkovski, maybe close this issue in favor of the existing #1631, in which there has already been some discussion about using wildcards (not the same as regular expressions, so I think this should be discussed a bit more)?
@Serializator Thanks for chiming in with the implementation of #1656. Extending that work would definitely be a good way to add support for regex matchers. Let's keep this issue open until we have a concrete decision so that others can add their feedback as well.
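To make the "extend #1656 with regex matchers" idea concrete, here is a minimal sketch, assuming a comma-separated flag value; it is not the code from that PR, and the function name `parsePatterns` is invented for this example. It shows how the map produced while parsing could later carry compiled regex matchers rather than plain namespace names:

```go
// Hypothetical sketch only: not the code from #1656.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// parsePatterns turns a raw flag value such as "kube-system,^ci-.*" into a map
// keyed by the original expression, with the compiled regexp as the value.
func parsePatterns(raw string) (map[string]*regexp.Regexp, error) {
	out := make(map[string]*regexp.Regexp)
	for _, expr := range strings.Split(raw, ",") {
		expr = strings.TrimSpace(expr)
		if expr == "" {
			continue
		}
		re, err := regexp.Compile(expr)
		if err != nil {
			return nil, fmt.Errorf("invalid namespace pattern %q: %w", expr, err)
		}
		out[expr] = re
	}
	return out, nil
}

func main() {
	patterns, err := parsePatterns("kube-system, ^ci-.*")
	if err != nil {
		panic(err)
	}
	for expr, re := range patterns {
		fmt.Printf("%q matches ci-123: %v\n", expr, re.MatchString("ci-123"))
	}
}
```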
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.