kustomize complains that "behavior must be merge or replace" despite dynamic name produced by configMapGenerator
When using configMapGenerator in an overlay with behavior: create, I run into the following error, although the name of the generated configMap, once appended with the "-abcde12345" suffix, would not result in a collision:
$ kustomize build overlay-attempt
Error: merging from generator &{0xc001e95110 <nil>}: id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1",
Kind:"ConfigMap", isClusterScoped:false}, Name:"cm", Namespace:""} exists; behavior must be merge or replace
This seems to happen because the configMap base name collides with a resource existing in the base kustomization, which seems unexpected because the configmap defined in my overlay has a dynamic name (it does not use options.disableNameSuffixHash: true).
It seems to me that, perhaps, "behavior must be merge or replace" should only be enforced when disableNameSuffixHash: true is used.
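For illustration, here is a hypothetical variant of the overlay-attempt shown below (my own sketch, not something I actually use) where I assume the check is genuinely needed: with the hash suffix disabled, the generated ConfigMap really would be named "cm" and collide with the "cm" from the base:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
configMapGenerator:
- name: cm
  behavior: create
  options:
    disableNameSuffixHash: true # hypothetical: with the suffix disabled, "cm" really would collide with the base's "cm"
  literals:
  - bar=43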
Files that can reproduce the issue
$ tree
.
├── base
│   ├── kustomization.yaml
│   └── manifest.yaml
├── overlay-attempt
│   └── kustomization.yaml
└── overlay-working
    └── kustomization.yaml
Base:
$ cat base/manifest.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm
data:
  foo: 42
---
apiVersion: a.b.c/v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
  - name: a
    configMap:
      name: cm
$ cat base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- manifest.yaml
Overlay definition not behaving as hoped:
$ cat overlay-attempt/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
configMapGenerator:
- name: cm
  behavior: create
  literals:
  - bar=43
patches:
- target:
    kind: Pod
    name: x
  patch: |
    - op: add
      path: /spec/volumes/-
      value:
        name: b
        configMap:
          name: cm
An overlay that does nearly the same thing and works:
$ cat overlay-working/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
configMapGenerator:
- name: cm-X
  behavior: create
  literals:
  - bar=43
patches:
- target:
    kind: Pod
    name: x
  patch: |
    - op: add
      path: /spec/volumes/-
      value:
        name: b
        configMap:
          name: cm-X
Expected output
I would expect kustomize build overlay-attempt to succeed and give the following:
apiVersion: v1
data:
  foo: 42
kind: ConfigMap
metadata:
  name: cm
---
apiVersion: v1
data:
  bar: "43"
kind: ConfigMap
metadata:
  name: cm-86kfb9ch5m
---
apiVersion: a.b.c/v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
  - configMap:
      name: cm
    name: a
  - configMap:
      name: cm-86kfb9ch5m
    name: b
Note that kustomize build overlay-working works as expected:
apiVersion: v1
data:
  foo: 42
kind: ConfigMap
metadata:
  name: cm
---
apiVersion: v1
data:
  bar: "43"
kind: ConfigMap
metadata:
  name: cm-X-86kfb9ch5m
---
apiVersion: a.b.c/v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
  - configMap:
      name: cm
    name: a
  - configMap:
      name: cm-X-86kfb9ch5m
    name: b
See below: this approach (using "cm-X" instead of "cm" in the overlay) does not work well in my actual use case, where I need to avoid using a different configmap base name in the overlay.
Actual output
$ kustomize build overlay-attempt
Error: merging from generator &{0xc001e95110 <nil>}: id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1", Kind:"ConfigMap", isClusterScoped:false}, Name:"cm", Namespace:""} exists; behavior must be merge or replace
Given that "cm" is only the base name of my generated configmap, not its final name, this error message seems like a bug to me.
Kustomize version
$ kustomize version
{Version:kustomize/v4.5.7 GitCommit:56d82a8378dfc8dc3b3b1085e5a6e67b82966bd7 BuildDate:2022-08-02T16:35:54Z GoOs:linux GoArch:amd64}
Platform
Linux amd64
Additional context
This example reflects what I need to do, except that my actual use case is not with multiple volumes inside a Pod definition. I picked this illustration to make a simplest as possible example.
I can somehow live with the workaround used in "overlay-working" where we make sure that the configmap base name defined in the overlay is different than the one used in the base.
However, I would like to have multiple levels of inheritance (see the sketch below):
- overlay 1 on top of base
- overlay 2 on top of overlay 1
- overlay 3 on top of overlay 2
- etc.
... and I would like to generate overlay 2 and overlay 3 in a context where I don't want to have to pick a different name at each layer.
Note that "behavior: merge" would also not work in my actual use case, where I need a distinct configmap at each layer (a common configmap with distinct keys under "data" does not work for me).
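To make this concrete, here is a rough sketch of the pattern I would like to be able to use at each layer (directory names and literals here are hypothetical, for illustration only); every layer would reuse the same generator base name "cm" while producing its own distinct, hash-suffixed ConfigMap. Today this is exactly the pattern that triggers the error:
$ cat overlay2/kustomization.yaml   # hypothetical layer on top of overlay 1
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../overlay1
configMapGenerator:
- name: cm
  behavior: create
  literals:
  - layer2-specific=value # hypothetical literal; each layer would add its own keys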
@tmmorin: This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
I realize I need to provide more information to explain this better.
First of all, one may wonder why configMapGenerator[0].name is cm, just like the name of the ConfigMap in the base's manifest.yaml. Indeed, it does not have to be, and using a configMapGenerator[0].name that does not match any manifest produced by the base solves the issue in this specific example.
However, the same error arises even if the configmap in the base is defined with a configMapGenerator rather than as a plain manifest.
Here are the files that can be used to observe this:
$ cat base/manifest.yaml
apiVersion: a.b.c/v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
  - name: a
    configMap:
      name: cm
$ cat base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- manifest.yaml
configMapGenerator:
- name: cm
  behavior: create
  literals:
  - foo=42
$ kustomize build base
---
apiVersion: v1
data:
  foo: "42"
kind: ConfigMap
metadata:
  name: cm-7kkbgdfk7d
---
apiVersion: a.b.c/v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
  - configMap:
      name: cm-7kkbgdfk7d
    name: a
(this works as expected, no issue or surprise at this point)
However, the overlay fails (using the same overlay-attempt as the one in the description of this issue):
$ kustomize build overlay-attempt
Error: merging from generator &{0xc001f37a00 <nil>}: id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1", Kind:"ConfigMap", isClusterScoped:false}, Name:"cm", Namespace:""} exists; behavior must be merge or replace
Thanks for the detailed explanation @tmmorin. I understand the confusion. I think this is actually the nice resource reference feature kustomize has.
How kustomize understands the case
Kustomize refers to its ConfigMaps by their original ID; when a ConfigMap is changed, it updates every place that ConfigMap is referenced accordingly (e.g. the Pod spec.volumes[].configMap field in your example). See the two examples below.
Why it fails
Two ConfigMaps have the same original ID "cm", so kustomize cannot distinguish which one is being referred to. What's more, because kustomize has the base/overlay model, even if "cm" were not referenced at all in your case (say, no Pod with spec.volumes[].configMap.name=cm), your kustomize directory could still be treated as the base of other overlays, so kustomize fails as soon as it discovers a duplicate original ID.
Two examples
Example 1: Update the Pod volumes referencing a ConfigMap
In overlay/kustomization.yaml
resources:
- ../base
namePrefix: new-
In base/manifest.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm
data:
  foo: 42
---
apiVersion: v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
  - name: a
    configMap:
      name: cm
output
apiVersion: v1
data:
  foo: 42
kind: ConfigMap
metadata:
  name: new-cm # new- prefix
---
apiVersion: v1
kind: Pod
metadata:
  name: new-x # new- prefix
spec:
  volumes:
  - configMap:
      name: new-cm # changed because the referred ConfigMap is changed
    name: a
Example 2: Update the Pod volumes referencing a ConfigMapGenerator
In overlay/kustomization.yaml
resources:
- ../base
configMapGenerator:
- name: cm
  behavior: create
  literals:
  - bar=43
In base/manifest.yaml
apiVersion: v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
  - name: a
    configMap:
      name: cm
output
apiVersion: v1
data:
  bar: "43"
kind: ConfigMap
metadata:
  name: cm-86kfb9ch5m
---
apiVersion: v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
  - configMap:
      name: cm-86kfb9ch5m # changed because it refers to the ConfigMapGenerator
    name: a
/triage accepted
@tmmorin I'll leave this issue open, let me know if you have any questions.
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".