client-go
apimachinery.ExtractInto generates diff
When creating a pod:
- with server-side apply
- with an empty affinity
// coreac = k8s.io/client-go/applyconfigurations/core/v1, meta = k8s.io/apimachinery/pkg/apis/meta/v1
pod := coreac.Pod(name, ns).
	WithSpec(coreac.PodSpec().
		WithAffinity(coreac.Affinity()), // notice the empty affinity value here
		// other .WithXxx() spec values omitted for brevity
	)

_, err := client.CoreV1().Pods(ns).Apply(ctx, pod, meta.ApplyOptions{FieldManager: smtg})
if err != nil {
	return fmt.Errorf("pod.Apply: %w", err)
}
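For anyone reproducing this, one quick sanity check is to confirm that the field manager actually owns spec.affinity after the initial apply. A minimal sketch, reusing name, ns, client, ctx and smtg from the snippet above (the printout should contain an f:affinity entry for this manager):

// Fetch the live pod and print the managedFields entry for our manager.
live, err := client.CoreV1().Pods(ns).Get(ctx, name, meta.GetOptions{})
if err != nil {
	return fmt.Errorf("client.Get: %w", err)
}
for _, mf := range live.GetManagedFields() {
	if mf.Manager == smtg && mf.FieldsV1 != nil {
		fmt.Printf("fields owned by %q: %s\n", mf.Manager, mf.FieldsV1.Raw)
	}
}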
When trying to later update it by using ExtractPod:
pod, err := w.client.CoreV1().Pods(ns).Get(ctx, podName, meta.GetOptions{})
if err != nil {
	return fmt.Errorf("client.Get: %w", err)
}

// fm is the field manager used for the extraction (presumably the same value as smtg above)
podApplyConfig, err := coreac.ExtractPod(pod, fm)
if err != nil {
	return fmt.Errorf("coreac.ExtractPod: %w", err)
}

// updating something
podApplyConfig.Finalizers = nil

_, err = w.client.CoreV1().Pods(ns).Apply(ctx, podApplyConfig, meta.ApplyOptions{FieldManager: smtg})
if err != nil {
	return fmt.Errorf("client.Apply: %w", err)
}
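A small check that makes the mismatch visible (a sketch reusing podApplyConfig from above): the extracted configuration comes back with no affinity, even though the live object still carries the empty &core.Affinity{}, so re-applying it with the same field manager asks the server to remove the field:

if podApplyConfig.Spec != nil && podApplyConfig.Spec.Affinity == nil {
	// The empty affinity applied earlier is missing from the extracted
	// configuration, so the next Apply is interpreted as a removal.
	fmt.Println("extracted spec has no affinity; re-applying will try to drop it")
}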
The Apply operation fails with the error below. It appears that if the extracted value is empty, the empty affinity is not set on the apply configuration, which leads to issues when patching the pod later:
client.Apply: Pod "pod-name" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`,`spec.initContainers[*].image`,`spec.activeDeadlineSeconds`,`spec.tolerations` (only additions to existing tolerations),`spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative)
core.PodSpec{
... // 15 identical fields
Subdomain: "",
SetHostnameAsFQDN: nil,
- Affinity: &core.Affinity{},
+ Affinity: nil,
SchedulerName: "default-scheduler",
... // 13 identical fields
}
Using k8s.io/client-go v0.28.8
This can be fixed by setting the affinity to nil
when creating the pod (instead of setting the zero-value).
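A minimal sketch of that workaround, assuming a hypothetical buildAffinity() helper that returns nil when there is nothing to configure:

// Only attach the affinity when it is actually non-nil; leaving the field
// out of the apply configuration avoids the spurious removal later.
spec := coreac.PodSpec() // other .WithXxx() values omitted for brevity
if affinity := buildAffinity(); affinity != nil { // buildAffinity is hypothetical
	spec = spec.WithAffinity(affinity)
}
pod := coreac.Pod(name, ns).WithSpec(spec)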
However, I'm wondering if this should be handled by client-go (this seems to be implemented in apimachinery).
What do you think?
I've been keen to learn Go and more about Kubernetes, so I challenged myself to put together a reproduction of this.
How do we make progress on getting an opinion on whether there is anything to fix in client-go? @liggitt you seem active in this project recently - any ideas? Or is this issue raised in the wrong project?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.