Annotations are getting deleted on infrastructure machines
What steps did you take and what happened?
We're not entirely sure how to reproduce this. We upgraded to 1.4.0 (last week) and subsequently to 1.4.1 (yesterday), and are now observing infrastructure Machines without annotations. Both our custom annotations (used for IPAM) and CAPI's own annotations (like cluster.x-k8s.io/cloned-from-groupkind) are gone.
This doesn't happen on all machines. It seems to be related to Server-Side Apply (SSA), since all machines that are missing annotations have f:metadata.f:annotations: {} in their managedFields.
Output generated by:

```
k get vspheremachine -o 'custom-columns=name:metadata.name,annotations:metadata.annotations,managedFields:metadata.managedFields[*].fieldsV1.f:metadata.f:annotations'
```
Three machines with different annotations and managed fields (I've removed some annotations so it's readable):

```
xx-1-md-29-b78lx   map[cluster.x-k8s.io/cloned-from-groupkind:VSphereMachineTemplate.infrastructure.cluster.x-k8s.io ipam.schiff.telekom.de/0-InfobloxNetworkView:TDCN ipam.schiff.telekom.de/0-Subnet:<removed>]   map[],map[f:ipam.schiff.telekom.de/0-InfobloxNetworkView:map[]]
yy-1-cp-17-9qgbv   map[cluster.x-k8s.io/cloned-from-groupkind:VSphereMachineTemplate.infrastructure.cluster.x-k8s.io ipam.schiff.telekom.de/0-InfobloxNetworkView:TDCN ipam.schiff.telekom.de/0-Subnet:<removed>]   <none>
xx-1-md-20-4kq5b   <none>   map[]
```
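For what it's worth, extending the same query with the manager name should show which field manager owns each managedFields entry (a sketch; metadata.managedFields[*].manager is part of the standard managedFields schema):

```
k get vspheremachine -o 'custom-columns=name:metadata.name,manager:metadata.managedFields[*].manager,annotations:metadata.managedFields[*].fieldsV1.f:metadata.f:annotations'
```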
What did you expect to happen?
Annotations are not deleted from infrastructure Machines.
Cluster API version
1.4.0 and/or 1.4.1
Kubernetes version
1.23.16
Anything else you would like to add?
No response
Label(s) to be applied
/kind bug
/cc @sbueringer @killianmuldoon
/triage accepted
/assign
Thx for reporting!!
I'll take a look at the code and try to see if I can figure out how this might happen.
@schrej Would it be possible to either run the controller with a high log level or enable audit logging on the API server, so we can see the request body of the patch call the controller is sending?
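For reference, a rough sketch of an audit policy that would capture those patch request bodies (file paths and the resource selection are assumptions, adjust for your mgmt cluster setup):

```
# Illustrative audit policy: record request+response bodies for patches
# to VSphereMachines (assumed path):
cat <<'EOF' | sudo tee /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  verbs: ["patch"]
  resources:
  - group: "infrastructure.cluster.x-k8s.io"
    resources: ["vspheremachines"]
EOF

# Then start the kube-apiserver with (real flags, assumed paths):
#   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
#   --audit-log-path=/var/log/kubernetes/audit.log
```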
@schrej Which Kubernetes version is your mgmt cluster using?
1.23.16
@schrej It's a bit strange that "cluster.x-k8s.io/cloned-from-name" doesn't seem to exist on any of the VSphereMachines
Would it be possible to get the full YAML sections of metadata.annotations and metadata.managedFields for xx-1-md-29-b78lx and xx-1-md-20-4kq5b? (Omitted annotation values are obviously fine.)
An f:metadata.f:annotations: {} entry in managedFields for the "capi-machineset" manager is expected if the MachineSet controller doesn't have any annotations it wants to set based on MachineSet.Spec.Template.Annotations.
It just shouldn't drop any other pre-existing annotations. I agree, though, that it's highly likely this is related to SSA.
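For illustration, here's a minimal, generic SSA sketch of how a manager can drop fields it previously owned (a hypothetical ConfigMap and field manager, not the actual request the MachineSet controller sends):

```
# Manager "demo" applies an object and becomes owner of the annotation:
kubectl apply --server-side --field-manager=demo -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssa-demo
  annotations:
    example.com/keep: "true"
EOF

# A later apply from the same manager that omits the annotation deletes it,
# because SSA removes fields a manager owned but no longer sends:
kubectl apply --server-side --field-manager=demo -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssa-demo
EOF

# The annotation is now gone:
kubectl get configmap ssa-demo -o jsonpath='{.metadata.annotations}'
```

Annotations written by a different manager (e.g. an IPAM controller doing a plain update) should normally survive such an apply, which is why annotations owned by other managers disappearing would point at an ownership mix-up.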
@schrej Can you provide some context around how the ipam.schiff.telekom.de annotations are set? Just to make it easier to reason about how they might be dropped
Note: We'll continue in Slack for now for a quicker turnaround.
Thread: https://kubernetes.slack.com/archives/C8TSNPY4T/p1681381461235689
Quick update: we're actively debugging the issue. It would be great to hear from other folks who also encounter it, since it's currently hard to reproduce.
This issue is currently awaiting triage.
If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
I couldn't reproduce this on my side. We need more information on how to reproduce it before this becomes actionable.
/unassign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".