
Node UID mismatch in MachinePool nodeRef after Kubernetes upgrade

Open · jayesh-srivastava opened this issue 5 months ago

What steps did you take and what happened?

When a Kubernetes upgrade is performed on a managed cluster, the replacement nodes come up with new UIDs. However, the MachinePool controller has an early-return condition that only validates the count of NodeRefs and doesn't check whether the referenced UIDs are still valid. As a result, MachinePools retain stale NodeRef UIDs after an upgrade, causing UID mismatches that persist until manual intervention. A sketch of the pattern follows below.
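For illustration, here is a minimal Go sketch of the early-return pattern described above. This is not the actual cluster-api controller source: the function names and the `nodeRefsStillValid` helper are assumptions for illustration, built on the real MachinePool status fields (`Status.Replicas`, `Status.ReadyReplicas`, `Status.NodeRefs`).

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	expv1 "sigs.k8s.io/cluster-api/exp/api/v1beta1"
)

// reconcileNodeRefs sketches the buggy early return: once the NodeRef count
// matches the ready replica count, the reconciler stops, so the stored
// NodeRefs are never re-validated against the Nodes that actually exist
// after an upgrade.
func reconcileNodeRefs(ctx context.Context, workloadClient client.Client, mp *expv1.MachinePool) (ctrl.Result, error) {
	if mp.Status.Replicas == mp.Status.ReadyReplicas &&
		len(mp.Status.NodeRefs) == int(mp.Status.ReadyReplicas) {
		// Counts match, but the UIDs inside mp.Status.NodeRefs may already
		// point at Nodes that were replaced during the upgrade.
		return ctrl.Result{}, nil
	}
	// ... otherwise, resolve NodeRefs from the provider IDs of the current Nodes ...
	return ctrl.Result{}, nil
}

// nodeRefsStillValid is a hypothetical helper showing the kind of check the
// early return skips: before returning, confirm each referenced Node still
// exists in the workload cluster with the same UID.
func nodeRefsStillValid(ctx context.Context, workloadClient client.Client, refs []corev1.ObjectReference) bool {
	for _, ref := range refs {
		node := &corev1.Node{}
		if err := workloadClient.Get(ctx, types.NamespacedName{Name: ref.Name}, node); err != nil {
			return false // Node is gone, e.g. replaced during the upgrade.
		}
		if node.UID != ref.UID {
			return false // Node was recreated and now has a new UID.
		}
	}
	return true
}
```

With a check like `nodeRefsStillValid` guarding the early return, the controller would fall through and re-resolve NodeRefs whenever an upgrade replaces the underlying nodes.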

What did you expect to happen?

Expected the MachinePool nodeRef to contain correct Node UIDs even after Kubernetes upgrade.
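To make the mismatch observable, here is a diagnostic sketch that compares the NodeRef UIDs stored on the MachinePool in the management cluster with the live Node UIDs in the workload cluster. All names and kubeconfig paths are placeholders, and error handling is elided for brevity:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/controller-runtime/pkg/client"

	expv1 "sigs.k8s.io/cluster-api/exp/api/v1beta1"
)

func main() {
	scheme := runtime.NewScheme()
	_ = corev1.AddToScheme(scheme)
	_ = expv1.AddToScheme(scheme)

	// Placeholder kubeconfig paths: one for the management cluster (holds
	// the MachinePool), one for the workload cluster (holds the Nodes).
	mgmtCfg, _ := clientcmd.BuildConfigFromFlags("", "mgmt.kubeconfig")
	wlCfg, _ := clientcmd.BuildConfigFromFlags("", "workload.kubeconfig")
	mgmt, _ := client.New(mgmtCfg, client.Options{Scheme: scheme})
	workload, _ := client.New(wlCfg, client.Options{Scheme: scheme})

	// Placeholder namespace/name for the MachinePool under test.
	mp := &expv1.MachinePool{}
	_ = mgmt.Get(context.TODO(), types.NamespacedName{Namespace: "default", Name: "my-machinepool"}, mp)

	for _, ref := range mp.Status.NodeRefs {
		node := &corev1.Node{}
		err := workload.Get(context.TODO(), types.NamespacedName{Name: ref.Name}, node)
		switch {
		case err != nil:
			fmt.Printf("nodeRef %s: Node not found (likely replaced): %v\n", ref.Name, err)
		case node.UID != ref.UID:
			fmt.Printf("nodeRef %s: UID mismatch: ref=%s live=%s\n", ref.Name, ref.UID, node.UID)
		default:
			fmt.Printf("nodeRef %s: OK\n", ref.Name)
		}
	}
}
```

After an upgrade on an affected cluster, the `UID mismatch` branch fires for every replaced node until the NodeRefs are refreshed manually.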

Cluster API version

Cluster API - v1.9.4

Kubernetes version

Kubernetes version - v1.30 & v1.31

Anything else you would like to add?

Created an AKS cluster using Cluster API Provider Azure (CAPZ) v1.18.0.

Label(s) to be applied

/kind bug

jayesh-srivastava · Jun 23 '25

This issue is currently awaiting triage.

If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot · Jun 23 '25

/assign

jayesh-srivastava · Jun 23 '25

/assign @jayesh-srivastava

cc @richardcase @mboersma

sbueringer · Jun 25 '25

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Oct 23 '25

/remove-lifecycle stale

richardcase · Nov 10 '25

cc @richardcase / @mboersma :-)

chrischdi · Nov 12 '25