Cluster-autoscaler for CAPI creates status configmap in the workload cluster
Which component are you using?: cluster-autoscaler for Cluster API.
What version of the component are you using?:
Component version: v1.27.2
What k8s version are you using (kubectl version)?:
v1.27.4
What environment is this in?: AWS managed by CAPA.
What did you expect to happen?:
The status configmap should be created in the cluster targeted by the client built from the --cloud-config parameter.
What happened instead?:
The status configmap is created in the workload cluster.
How to reproduce it (as minimally and precisely as possible):
Deploy cluster-autoscaler with the "Autoscaler running in management cluster using service account credentials, with separate workload cluster" topology, then check that the status configmap is created in the workload cluster.
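To confirm which cluster actually received the configmap, one option is to query both clusters with client-go. The sketch below is illustrative only: the kubeconfig paths are placeholders, and it assumes the default status configmap name (cluster-autoscaler-status) in the kube-system namespace.

```go
// Illustrative check only: the kubeconfig paths, the kube-system namespace, and
// the default ConfigMap name "cluster-autoscaler-status" are assumptions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hasStatusConfigMap reports whether the cluster reachable through the given
// kubeconfig contains the autoscaler status ConfigMap.
func hasStatusConfigMap(kubeconfigPath string) bool {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	_, err = client.CoreV1().ConfigMaps("kube-system").
		Get(context.TODO(), "cluster-autoscaler-status", metav1.GetOptions{})
	return err == nil
}

func main() {
	// Placeholder kubeconfig locations for the two clusters.
	fmt.Println("management:", hasStatusConfigMap("/path/to/management.kubeconfig"))
	fmt.Println("workload:  ", hasStatusConfigMap("/path/to/workload.kubeconfig"))
}
```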
Anything else we need to know?:
I would like to work on this task.
The client used to write the configmap seems to come from https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/main.go#L392, and is set for AutoscalerOptions at https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/main.go#L457. That said, since --cloud-config is not a Kubernetes kubeconfig for the other cloud providers, I don't believe this change can be made only in the Cluster API provider.
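To make the mechanics concrete, here is a simplified client-go sketch of the pattern (not the autoscaler's actual main.go code): the status configmap is written through the client built from --kubeconfig (or in-cluster config), while for the clusterapi provider --cloud-config happens to be another kubeconfig pointing at the management cluster; for most other providers it is an opaque provider config file, which is why the core autoscaler cannot generically swap in a --cloud-config-derived client. The kubeconfig paths below are placeholders.

```go
// Simplified sketch of the client-construction pattern, not the autoscaler's
// actual main.go code. Paths are placeholders.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// buildClient mirrors the usual client-go pattern: in-cluster config when no
// kubeconfig path is given, otherwise a client for whatever cluster the
// kubeconfig points at.
func buildClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	var cfg *rest.Config
	var err error
	if kubeconfigPath == "" {
		cfg, err = rest.InClusterConfig()
	} else {
		cfg, err = clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	}
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	// --kubeconfig: the workload cluster being autoscaled. Today the status
	// configmap is written through this client, which is why it lands there.
	workloadClient, err := buildClient("/path/to/workload.kubeconfig")
	if err != nil {
		panic(err)
	}

	// --cloud-config: for the clusterapi provider this is another kubeconfig
	// (the management cluster), but for most providers it is an opaque
	// provider-specific config file, so it cannot generically be turned into
	// a Kubernetes client for writing the status configmap.
	managementClient, err := buildClient("/path/to/management.kubeconfig")
	if err != nil {
		panic(err)
	}

	fmt.Println(workloadClient != nil, managementClient != nil)
}
```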
thanks for reporting this, it sounds like this might be really difficult to fix from the capi provider.
perhaps we should start with a docs update so that users know the configmap will be created in the cluster specified by the --kubeconfig parameter?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
i think we still need to deal with this somehow
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
this needs fixing
/remove-lifecycle stale
/help-wanted
/help
@elmiko: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/triage accepted
/lifecycle frozen