kube-state-metrics
Ability to configure --metric-allowlist and --metric-denylist after startup
**What would you like to be added**: Ability to configure --metric-allowlist and --metric-denylist after startup. Today, we can only configure these as CLI arguments at startup.
**Why is this needed**: We need this functionality because we install kube-state-metrics as a dependent chart, and we would have to redeploy the container whenever these settings change.
**Describe the solution you'd like**: One way to configure this after startup is to have a ConfigMap containing these settings, mounted as a volume in the container. If the ConfigMap changes, a liveness probe could be configured to restart the container so that it picks up the new settings.
This would imply some sort of config file that KSM would watch, or reload based on an HTTP request. I am not sure whether the complexity of maintaining it would be justified. What are your concerns with simply recreating the KSM container?
Thanks @fpetkovski - In our onboarding scenario we have the KSM deployment folded into a Helm chart that deploys all the necessary components. Recreating the container seems like unnecessary overhead for a configuration update that could be done in place, without having to redeploy the entire chart.
+1. There should be an option to provide CLI arguments for kube-state-metrics through a ConfigMap. It would be a lot simpler than redeploying it.
There is a similar discussion here: https://github.com/kubernetes/kube-state-metrics/pull/1710#discussion_r886082800
The open questions with having a config file are:
- How will KSM know when to reload the config?
- Which options should be configurable in the file?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/assign
I don't think this would add any complexity since users would still be able to reconfigure ksm by reloading it. To me this effort would be a great step forward toward hot-reloading of kube-state-metrics.
What we could do first is add logic to watch a config file and restart the kube-state-metrics server with the new configuration, without the overhead of recreating the container. This is fairly simple and can be done via context cancellation.
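A minimal sketch of that pattern, assuming a hypothetical mount path and placeholder `loadConfig`/`runServer` helpers (this is not KSM's actual code): the server runs under a cancellable context, and a file watcher cancels it and starts a fresh instance with the new settings whenever the mounted config changes.

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/fsnotify/fsnotify"
)

// loadConfig is a hypothetical placeholder for parsing the allow/deny lists
// from the mounted ConfigMap file.
func loadConfig(path string) (string, error) {
	b, err := os.ReadFile(path)
	return string(b), err
}

// runServer is a hypothetical placeholder for the metrics server; it blocks
// until ctx is cancelled.
func runServer(ctx context.Context, cfg string) {
	log.Printf("serving with config: %q", cfg)
	<-ctx.Done()
	log.Println("server stopped for reload")
}

func main() {
	// Hypothetical path where the ConfigMap would be mounted. Note that
	// Kubernetes updates mounted ConfigMaps via a symlink swap, so in
	// practice watching the containing directory is more robust than
	// watching the file itself.
	const configPath = "/etc/kube-state-metrics/config.yaml"

	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()
	if err := watcher.Add(configPath); err != nil {
		log.Fatal(err)
	}

	for {
		cfg, err := loadConfig(configPath)
		if err != nil {
			log.Fatal(err)
		}

		ctx, cancel := context.WithCancel(context.Background())
		done := make(chan struct{})
		go func() {
			runServer(ctx, cfg)
			close(done)
		}()

		// Block until the config file changes, then cancel the server's
		// context and loop to start it again with the new settings.
		<-watcher.Events
		cancel()
		<-done
	}
}
```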
Having that logic would open the door to hot-reloading in the future, with the goal of reducing downtime when the KSM configuration is updated. Config changes used to be very rare, but with the new custom metrics configuration I can see them becoming more frequent than they used to be. To do proper hot-reloading we would have to take into account changes to the resources that are watched and update our list of informers/builders on the fly. Also, with metrics being removed by the configuration, we would have to invalidate part of the cache.
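As a hedged illustration of just one slice of that future work, the allow/deny filtering itself could be swapped atomically on reload, so scrapes in flight keep a consistent view even before the harder informer-update and cache-invalidation problems are solved. The types and names below are illustrative, not KSM's actual internals.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// metricFilter is an illustrative stand-in for a compiled allowlist.
type metricFilter struct {
	allow map[string]bool
}

func (f *metricFilter) Allowed(name string) bool { return f.allow[name] }

// current holds the active filter; scrape handlers Load() it, the reload
// path Store()s a fully built replacement (requires Go 1.19+).
var current atomic.Pointer[metricFilter]

// reload would be invoked when the watched config file changes.
func reload(allowed ...string) {
	f := &metricFilter{allow: map[string]bool{}}
	for _, m := range allowed {
		f.allow[m] = true
	}
	current.Store(f) // readers pick up the new filter on their next scrape
}

func main() {
	reload("kube_pod_info")
	fmt.Println(current.Load().Allowed("kube_pod_info"))             // true
	fmt.Println(current.Load().Allowed("kube_node_status_capacity")) // false
}
```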