cluster-proportional-autoscaler
Mount ConfigMap into container instead of fetching it from Apiserver
Kubernetes already provides a ConfigMap mounting feature: the mounted ConfigMap is dynamically refreshed on disk when it changes. It seems unwise to re-implement a mechanism the cluster already provides.
Besides, with the current implementation the autoscaler has to be granted ConfigMap read access to the cluster. That also seems excessive, since only one specific ConfigMap is needed to provide the scaling parameters.
However, one problem with the ConfigMap mounting solution is that the container will not start if the ConfigMap does not exist (mount error), which means we would need an initial process to create it beforehand. This is not desirable either, because we want the autoscaler itself to handle the entire lifecycle of the ConfigMap.
The good news is that folks have already started working on the optional ConfigMap feature (kubernetes/community#175), which is also needed by kube-dns. We should consider rewriting the ConfigMap polling logic here after that feature is implemented.
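For illustration, here is a minimal sketch of what the mount-based approach could look like once optional ConfigMaps are available (names, namespace, image tag and mount path below are placeholders, not taken from any actual manifest): with `configMap.optional: true` the pod can start before the ConfigMap exists, and the kubelet refreshes the mounted files when the ConfigMap changes.

```yaml
# Sketch only: names, namespace, image tag and paths are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dns-autoscaler
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: dns-autoscaler
  template:
    metadata:
      labels:
        k8s-app: dns-autoscaler
    spec:
      containers:
      - name: autoscaler
        image: registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.5  # tag illustrative
        volumeMounts:
        - name: scaling-params
          mountPath: /etc/config      # autoscaler would read the scaling parameters from here instead of polling the apiserver
      volumes:
      - name: scaling-params
        configMap:
          name: dns-autoscaler        # ConfigMap holding the scaling parameters
          optional: true              # pod starts even if the ConfigMap has not been created yet
```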
Optional config maps have landed.
Are you still planning on changing the implementation? Do you have timelines?
@amrmahdi I might not have the bandwidth to do this in the near term, but would love to review and merge if someone else can pick it up :)
kube-dns has implemented similar functionality: https://github.com/kubernetes/dns/blob/master/pkg/dns/config/sync.go
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@dudicoco: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Are there any plans to implement this issue? It would be best practice to create the ConfigMap beforehand.
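For reference, the ConfigMap in question carries the scaling parameters; a minimal example following the linear-mode format documented in the project README might look like the following (name, namespace and parameter values are illustrative only):

```yaml
# Illustrative only: name, namespace and parameter values are placeholders.
kind: ConfigMap
apiVersion: v1
metadata:
  name: dns-autoscaler
  namespace: kube-system
data:
  linear: |-
    {
      "coresPerReplica": 256,
      "nodesPerReplica": 16,
      "preventSinglePointFailure": true,
      "min": 1
    }
```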
Still referenced downstream: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml#L40
Worth fixing? /reopen
@afirth: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
Still referenced downstream: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml#L40
Worth fixing? /reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Definitely worth fixing. If anyone can help send a PR that would speed things up quite a lot :) /reopen
@MrHohn: Reopened this issue.
In response to this:
Definitely worth fixing. If anyone can help send a PR that would speed things up quite a lot :) /reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
I'd be willing to help and take the issue to completion, if this is still required?
@ipochi Thanks for chiming in, yes this is still something we want.
I'd like some help and background context in order to accomplish the task.