
Cluster-autoscaler for CAPI creates the status configmap in the workload cluster

Open jonathanbeber opened this issue 1 year ago • 9 comments

Which component are you using?: cluster-autoscaler for Cluster API.

What version of the component are you using?:

Component version: v1.27.2

What k8s version are you using (kubectl version)?:

v1.27.4

What environment is this in?: AWS managed by CAPA.

What did you expect to happen?:

The status configmap should be created in the cluster reachable through the client resulting from the --cloud-config parameter.

What happened instead?:

The status configmap is created in the workload cluster.

How to reproduce it (as minimally and precisely as possible):

Deploy cluster-autoscaler with the "Autoscaler running in management cluster using service account credentials, with separate workload cluster" topology, and observe that the status configmap is created in the workload cluster.
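For reference, the flags for this topology look roughly like the following; the kubeconfig paths are illustrative, and --cloud-config may be replaced by in-cluster service account credentials when running inside the management cluster:

```shell
# Sketch of the management-cluster topology (paths are hypothetical).
# --cloud-config points the clusterapi provider at the management cluster,
# --kubeconfig points the core autoscaler at the workload cluster.
cluster-autoscaler \
  --cloud-provider=clusterapi \
  --cloud-config=/mnt/management.kubeconfig \
  --kubeconfig=/mnt/workload.kubeconfig
```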

Anything else we need to know?:

I would like to work on this task.

jonathanbeber avatar Sep 26 '23 18:09 jonathanbeber

The client used to write the configmap seems to come from https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/main.go#L392, set in line https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/main.go#L457 for AutoscalerOptions. That being said, as --cloud-config is not a Kubernetes client for other cloud providers, I believe there's no way for this change to live only in the ClusterAPI provider.
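The wiring described above can be modeled in a few lines: the status writer receives the primary client built from --kubeconfig, not the cloud-provider client built from --cloud-config, so the configmap lands in whichever cluster --kubeconfig targets. This is a hypothetical sketch; the names are illustrative, not the actual autoscaler API.

```python
# Illustrative model of the client wiring in main.go (not real autoscaler code).
from dataclasses import dataclass

@dataclass
class Client:
    cluster: str  # which cluster this client talks to

def build_autoscaler_options(kubeconfig_cluster, cloud_config_cluster):
    # The primary client comes from --kubeconfig ...
    kube_client = Client(kubeconfig_cluster)
    # ... while --cloud-config only configures the cloud provider's client.
    cloud_client = Client(cloud_config_cluster)
    return {"KubeClient": kube_client, "CloudClient": cloud_client}

def write_status_configmap(options):
    # The status writer is handed KubeClient, so the configmap is created
    # in the cluster that --kubeconfig points at (the workload cluster here).
    return options["KubeClient"].cluster

opts = build_autoscaler_options("workload", "management")
print(write_status_configmap(opts))  # → workload
```

This matches the observed behavior: with --kubeconfig pointing at the workload cluster, the status configmap ends up there regardless of what --cloud-config targets.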

jonathanbeber avatar Sep 26 '23 18:09 jonathanbeber

thanks for reporting this, it sounds like this might be really difficult to fix from the capi provider.

perhaps we should start with a docs update so that users know the configmap will be created in the cluster specified by the --kubeconfig parameter?
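A docs update along those lines could include a quick way for users to verify where the configmap lives; assuming the default configmap name and namespace, something like:

```shell
# Verify where the status configmap was created (hypothetical kubeconfig path;
# assumes the default name "cluster-autoscaler-status" in kube-system).
kubectl --kubeconfig workload.kubeconfig -n kube-system \
  get configmap cluster-autoscaler-status
```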

elmiko avatar Dec 18 '23 19:12 elmiko

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 17 '24 20:03 k8s-triage-robot

i think we still need to deal with this somehow

/remove-lifecycle stale

elmiko avatar Mar 19 '24 14:03 elmiko

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 19 '24 14:06 k8s-triage-robot

this needs fixing

/remove-lifecycle stale

/help-wanted

elmiko avatar Jun 20 '24 15:06 elmiko

/help

elmiko avatar Jun 20 '24 15:06 elmiko

@elmiko: This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jun 20 '24 15:06 k8s-ci-robot

/triage accepted

/lifecycle frozen

Shubham82 avatar Jun 21 '24 09:06 Shubham82