Failing to merge two CommonLabels entities
What happened?
I have this entity in one of my 'base' configurations:
```yaml
commonLabels:
- group: agent.k8s.elastic.co
  kind: Agent
  path: "spec/deployment/podTemplate/metadata/labels"
```
and this code in the 'overlay' part:
```yaml
commonLabels:
- path: spec/deployment/podTemplate/metadata/labels
  create: true
  kind: Agent
```
When testing this code, I get this error:
```
E0712 14:29:35.025557 434445 run.go:74] "command failed" err="accumulating resources: accumulation err='accumulating resources from './fleet-server': '/home/username/Projects/apm-anthos-configsync/src/overlays/sandbox-v2/fleet-server' must resolve to a file': recursed merging from path '/home/username/Projects/apm-anthos-configsync/src/overlays/sandbox-v2/fleet-server': failed to merge CommonLabels fieldSpec: conflicting fieldspecs"
```
What did you expect to happen?
It should work flawlessly, but I found a workaround: some of the paths need a leading '/'. So, overlay code that looks like this:
```yaml
commonLabels:
- path: /spec/deployment/podTemplate/metadata/labels
  create: true
  kind: Agent
```
produces no error at all.
How can we reproduce it (as minimally and precisely as possible)?
I suppose it is enough to define an entity like this in the base:
```yaml
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
```
CommonLabels in base:
```yaml
commonLabels:
- group: agent.k8s.elastic.co
  kind: Agent
  path: "spec/deployment/podTemplate/metadata/labels"
```
and CommonLabels in overlay:
```yaml
commonLabels:
- path: spec/deployment/podTemplate/metadata/labels
  create: true
  kind: Agent
```
I can provide more detailed data if needed.
Expected output
No response
Actual output
No response
Kustomize version
v5.0.1
Operating system
Linux
It looks like the issue is that the base and overlay disagree on whether an absent field path should be created. kustomize cannot merge the two configurations cleanly since it cannot honor both "create" (as configured in the overlay) and "don't create" (as configured in the base).
The leading slash in the overlay works around the check, and it happens to give the behavior you're expecting. Other users might expect the opposite behavior, though. The failure looks to be kustomize's way of forcing the user to make an explicit decision about whether field creation should be honored.
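Seen side by side, the two fieldspecs disagree on that implicit setting (a sketch using the paths from the report; when `create` is omitted it defaults to false):

```yaml
# base fieldspec: create omitted, i.e. "don't create the field if absent"
commonLabels:
- group: agent.k8s.elastic.co
  kind: Agent
  path: "spec/deployment/podTemplate/metadata/labels"

# overlay fieldspec: same kind and path, but "create the field if absent"
commonLabels:
- path: spec/deployment/podTemplate/metadata/labels
  kind: Agent
  create: true
```

Making both sides state the same `create` value would be the explicit decision that resolves the conflict.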
Hi there, @gpu-pug!
I tried reproducing the issue described, but it seems something is missing. In the input you described, the `commonLabels` field is an array, but the `commonLabels` field of the Kustomization type is a `map[string]string`. Could you please provide some extra detail about how your `kustomization.yaml` file is structured, or perhaps a minimal example that reproduces the issue?
Thanks in advance!
/triage needs-information
@stormqueen1990, here's what I used for investigation.
```
$ kustomize-v5.0.1 build overlay
Error: merging config <skipping a lot of content here> failed to merge CommonLabels fieldSpec: conflicting fieldspecs
$
```
Note that v5.4.3 has the same behavior. As I implied earlier, if the base `kustomizeconfig.yaml` is updated to include `create: true`, then things work cleanly.
`base/kustomization.yaml`

```yaml
resources:
- resources.yaml
configurations:
- kustomizeconfig.yaml
commonLabels:
  base-key: base-value
```
`base/kustomizeconfig.yaml`

```yaml
commonLabels:
- kind: Agent
  group: agent.k8s.elastic.co
  path: "spec/deployment/podTemplate/metadata/labels"
```
`base/resources.yaml`

```yaml
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
spec:
  version: 8.13.2
  kibanaRef:
    name: kibana
  elasticsearchRefs:
  - name: elasticsearch
  mode: fleet
  fleetServerEnabled: true
  policyID: eck-fleet-server
  deployment:
    replicas: 1
    podTemplate:
      spec:
        serviceAccountName: fleet-server
        automountServiceAccountToken: true
        securityContext:
          runAsUser: 0
```
`overlay/kustomization.yaml`

```yaml
resources:
- ../base
configurations:
- kustomizeconfig.yaml
commonLabels:
  overlay-key: overlay-value
```
`overlay/kustomizeconfig.yaml`

```yaml
commonLabels:
- kind: Agent
  path: "spec/deployment/podTemplate/metadata/labels"
  create: true
```
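For completeness, a sketch of the `base/kustomizeconfig.yaml` change mentioned above; the only difference from the file listed earlier is the added `create: true` line, which brings the base in line with the overlay:

```yaml
commonLabels:
- kind: Agent
  group: agent.k8s.elastic.co
  path: "spec/deployment/podTemplate/metadata/labels"
  create: true
```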
Hi @ephesused, thanks for the info! I agree that this behaviour is still reproducible, and I agree with your earlier explanation of why it is happening.
This doesn't seem to be a bug, but it would be helpful to get a more descriptive error message, and perhaps not print the entire configuration.
/remove-kind bug
/kind cleanup
/kind documentation
/triage accepted
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with `/triage accepted` (org members only)
- Close this issue with `/close`
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten