
Refactor the AWS & EKS control-plane controllers to split the kubeconfig secret into two for Cluster Autoscaler

Open cnmcavoy opened this issue 2 years ago • 6 comments

/kind feature

Describe the solution you'd like
Currently, the AWSControlPlane and AWSManagedControlPlane controllers create and manage the cluster kubeconfig secrets ("%s-kubeconfig" in the cluster's namespace). The contents of this secret are a single key-value pair, with the value being the cluster's kubeconfig in the shape of:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t...
    server: https://XXXXX.us-east-2.eks.amazonaws.com
  name: cluster-name
contexts:
- context:
    cluster: cluster-3
    user: cluster-3-capi-admin
  name: cluster-3-capi-admin@cluster-3
current-context: cluster-3-capi-admin@cluster-3
kind: Config
preferences: {}
users:
- name: cluster-3-capi-admin
  user:
    token: k8s-aws-v1...
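
For reference, a minimal sketch of the secret object itself, assuming the Cluster API convention of a single data key named value (the name, namespace, and type shown are illustrative, contents abbreviated):

apiVersion: v1
kind: Secret
metadata:
  name: cluster-3-kubeconfig
  namespace: default
type: cluster.x-k8s.io/secret
stringData:
  value: |
    apiVersion: v1
    kind: Config
    # ... clusters, contexts, and users as shown above ...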

In the case of EKS clusters, the bearer token embedded in the token field expires quickly, and the CAPA controller regularly updates the secret to keep it fresh. However, this shape of config is hard for other controllers, like the Cluster Autoscaler, to consume, and leads to bugs like https://github.com/kubernetes/autoscaler/issues/4784 which cannot be easily resolved by those controllers. If they reload the entire config, the user or other critical content may change; if they reload only the token within it, they are disregarding the rest of the contents and making unsafe assumptions.

I have discussed this problem with the folks over in SIG API Machinery, and it was suggested to split this secret into two key-value pairs. The first would be the kubeconfig, in this shape:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t...
    server: https://XXXXX.us-east-2.eks.amazonaws.com
  name: cluster-name
contexts:
- context:
    cluster: cluster-3
    user: cluster-3-capi-admin
  name: cluster-3-capi-admin@cluster-3
current-context: cluster-3-capi-admin@cluster-3
kind: Config
preferences: {}
users:
- name: cluster-3-capi-admin
  user:
    tokenFile: ./bearer-token

And the second key-value pair, "bearer-token", with the contents:

k8s-aws-v1...
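
Put together, the proposed secret would then carry both keys, sketched roughly as follows (assuming the existing value key keeps the kubeconfig and bearer-token is added next to it; names and contents abbreviated):

apiVersion: v1
kind: Secret
metadata:
  name: cluster-3-kubeconfig
  namespace: default
type: cluster.x-k8s.io/secret
stringData:
  value: |
    apiVersion: v1
    kind: Config
    # ... clusters and contexts as above ...
    users:
    - name: cluster-3-capi-admin
      user:
        tokenFile: ./bearer-token
  bearer-token: k8s-aws-v1...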

Additional context: https://github.com/kubernetes/autoscaler/pull/5951#discussion_r1378786610

This will allow controllers like the Cluster Autoscaler to mount the secret contents at a path, which will populate two files. The existing API machinery should then be able to handle reloading of the second bearer-token file without complications.
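
As a rough illustration of that consumption pattern, a Cluster Autoscaler pod could mount the whole secret into one directory; the container spec, mount path, image tag, and --kubeconfig flag usage below are assumptions, not a confirmed configuration:

# Snippet of a pod template consuming the split secret as a normal volume mount.
spec:
  containers:
    - name: cluster-autoscaler
      image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
      args:
        # The kubeconfig lands at /etc/workload-kubeconfig/value; its
        # tokenFile: ./bearer-token entry resolves to the sibling file
        # /etc/workload-kubeconfig/bearer-token in the same directory.
        - --kubeconfig=/etc/workload-kubeconfig/value
      volumeMounts:
        - name: cluster-kubeconfig
          mountPath: /etc/workload-kubeconfig
          readOnly: true
  volumes:
    - name: cluster-kubeconfig
      secret:
        secretName: cluster-3-kubeconfig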

Anything else you would like to add:
Changing the kubeconfig value to depend on the second key existing will break any users that consume only the single kubeconfig key-value explicitly via a subPath volume mount. Normal volume mounts should work, as both files will exist relative to each other.
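
For contrast, a hypothetical subPath mount that projects only the kubeconfig key is the pattern that would break, since ./bearer-token would not exist next to the mounted file (paths and names are illustrative):

volumeMounts:
  - name: cluster-kubeconfig
    # Only the "value" key is projected into the container, so the
    # relative ./bearer-token reference in the kubeconfig cannot resolve.
    mountPath: /etc/workload-kubeconfig/value
    subPath: value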

cnmcavoy · Nov 01 '23 20:11

This issue is currently awaiting triage.

If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Nov 01 '23 20:11

/assign

cnmcavoy · Nov 01 '23 20:11

\o that's great. There is the same need and the same issue for FluxCD.

mloiseleur · Dec 19 '23 09:12

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Mar 18 '24 09:03

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Apr 17 '24 10:04

/remove-lifecycle rotten

cnmcavoy · Apr 17 '24 14:04

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jul 16 '24 15:07