eks-anywhere
Bare Metal: When removing worker node groups `kubectl get nodes` output includes removed node(s)
Summary
When following a specific sequence of tasks that removes a worker node group from a bare metal cluster, the associated node(s) still appear in kubectl get nodes output with scheduling disabled.
NAME         STATUS                        ROLES           AGE   VERSION
eksa-dev01   Ready                         control-plane   43m   v1.27.4-eks-cedffd4
eksa-dev02   Ready                         <none>          37m   v1.27.4-eks-cedffd4
eksa-dev03   NotReady,SchedulingDisabled   <none>          11m   v1.27.4-eks-cedffd4
The issue occurs only with the reproduction steps below. It does not occur when the cluster is created with N worker node groups and is upgraded to N-1 groups.
Reproduce
- Create a bare metal cluster with 1 worker node group.
- Add a new worker node group, reusing the existing machine config, and upgrade the cluster.
- Remove the newly added worker node group and upgrade the cluster.
- Observe the node still appearing in kubectl get nodes output.
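For reference, the second step amounts to adding a second worker node group to the cluster spec that reuses the existing machine config. A minimal sketch of the relevant fragment, assuming the Tinkerbell (bare metal) provider; the cluster, group, and machine config names here are hypothetical:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: eksa-dev                # hypothetical cluster name
spec:
  workerNodeGroupConfigurations:
  - name: md-0                  # original worker node group
    count: 1
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: eksa-dev-worker     # existing machine config, reused below
  - name: md-1                  # newly added group; removing it again triggers the issue
    count: 1
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: eksa-dev-worker     # same machine config as md-0
```

Each spec change is applied with eksctl anywhere upgrade cluster -f cluster.yaml; deleting the md-1 entry and upgrading again leaves the stale node shown above.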
Environment: Using EKS-A mainline @ https://github.com/aws/eks-anywhere/commit/95457b086b695ef73f13408120ef55c4854e3bf6