Upgrading an EKS-A cluster with the original cluster config without the SSH keys removes the SSH keys
What happened: Running the upgrade command with the original cluster config, in which no SSH keys are specified and the keys were auto-generated at create time, removes the SSH keys from the cluster nodes.
Describing the KubeadmControlPlane shows that the SSH authorized keys are empty:
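For reference, output like the one below can be retrieved with a describe command along these lines (the object name and the eksa-system namespace are assumptions based on this cluster config):

```sh
kubectl describe kubeadmcontrolplane abhinav-workload -n eksa-system
```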
```
Users:
  Name:                  ec2-user
  Ssh Authorized Keys:
  Sudo:                  ALL=(ALL) NOPASSWD:ALL
```
What you expected to happen:
Since the SSH keys were auto-generated during the create cluster command, it is expected that the same keys would be used for upgrades as well. Instead, running the upgrade with the original cluster config removes the SSH keys from the cluster nodes. This is the machine config used for both create and upgrade:
```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
  name: abhinav-workload
spec:
  datastore: "WorkloadDatastore"
  diskGiB: 25
  folder: "abhinav-workload"
  memoryMiB: 8192
  numCPUs: 2
  osFamily: bottlerocket
  resourcePool: "Compute-ResourcePool"
  users:
    - name: ec2-user
```
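As a possible workaround (an assumption on my part, not confirmed behavior), the auto-generated public key can be copied from the spec EKS-A writes out at create time (typically `<cluster-name>/<cluster-name>-eks-a-cluster.yaml`) back into this config before upgrading, so the key is specified explicitly:

```yaml
  users:
    - name: ec2-user
      sshAuthorizedKeys:
        - "ssh-rsa AAAA..."   # the public key EKS-A generated at create time
```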
How to reproduce it (as minimally and precisely as possible):
- Create a cluster without specifying the SSH keys
- The EKS-A CLI will generate an SSH key and configure it on the cluster nodes
- Run the upgrade command using the same cluster config (commands sketched below)
- The auto-generated SSH key is removed from the cluster nodes
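The sequence looks roughly like this (the config filename is an assumption; these are the standard EKS-A create and upgrade invocations):

```sh
# Create the cluster from a config that omits sshAuthorizedKeys;
# the CLI auto-generates a key pair and installs the public key on the nodes
eksctl anywhere create cluster -f abhinav-workload.yaml

# Upgrade using the same, unmodified config;
# after this, the auto-generated key is gone from the nodes
eksctl anywhere upgrade cluster -f abhinav-workload.yaml
```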
Anything else we need to know?:
Environment:
- EKS Anywhere Release: v0.11.1
- EKS Distro Release: v1.23
- OS: Bottlerocket