kustomize
ConfigMap with behavior merge does not get its hash appended
What happened?
A ConfigMap generated with behavior: merge inside configMapGenerator does not get its hash suffix appended; it works if the behavior is create.
What did you expect to happen?
The generated ConfigMap should have the hash suffix appended to its metadata.name.
How can we reproduce it (as minimally and precisely as possible)?
Just add this snippet:
configMapGenerator:
- name: the-map
  behavior: merge
  literals:
  - TEST="test"

to the kustomization.yaml file in your examples: https://github.com/kubernetes-sigs/kustomize/blob/401118728a3ae0415caebd6f925182f02ae2c777/examples/helloWorld/kustomization.yaml
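For reference, the resulting kustomization.yaml would then look roughly like this (the commonLabels and resources entries are inferred from the linked helloWorld example, so treat this as a sketch rather than a verbatim copy):

commonLabels:
  app: hello

resources:
- deployment.yaml
- service.yaml
- configMap.yaml

configMapGenerator:
- name: the-map
  behavior: merge
  literals:
  - TEST="test"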
Expected output
apiVersion: v1
data:
  TEST: test
  altGreeting: Good Morning!
  enableRisky: "false"
kind: ConfigMap
metadata:
  labels:
    app: hello
  name: the-map-SOMEHASH
Actual output
apiVersion: v1
data:
  TEST: test
  altGreeting: Good Morning!
  enableRisky: "false"
kind: ConfigMap
metadata:
  labels:
    app: hello
  name: the-map
Kustomize version
5.0.3
Operating system
Linux
@matejc Thank you for filing the issue.
While going through your example, we would like to understand more about how you are using behavior: merge. It seems that you are using it in the base configuration in your example, but our understanding from the docs is that behavior: merge should only be used in an overlay, to merge a ConfigMap with a ConfigMap from a base.
Could you provide more information about whether you are using behavior: merge in a base or an overlay, and what you are using it for? If you find that it does something unexpected when in an overlay, could you provide a complete example showing all of your base and overlay files?
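For reference, the documented layering pattern would look roughly like this (the file and directory names here are illustrative, not taken from this issue):

# base/kustomization.yaml
configMapGenerator:
- name: the-map
  behavior: create
  literals:
  - altGreeting=Good Morning!
  - enableRisky="false"

# overlay/kustomization.yaml
resources:
- ../base

configMapGenerator:
- name: the-map
  behavior: merge
  literals:
  - TEST="test"

With this layout, the overlay's merge entry is expected to be combined with the ConfigMap generated in the base, and the merged ConfigMap is expected to receive a hash suffix.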
/triage needs-information
@natasha41575 True, in this issue's example I am trying to merge a ConfigMap in the base... But I can reproduce it even with this:
resources:
- https://github.com/kubernetes-sigs/kustomize//examples/helloWorld?ref=401118728a3ae0415caebd6f925182f02ae2c777
configMapGenerator:
- name: the-map
  behavior: merge
  literals:
  - TEST="test"
And I get this (note that the ConfigMap named the-map does not have a hash suffix):
apiVersion: v1
data:
  TEST: test
  altGreeting: Good Morning!
  enableRisky: "false"
kind: ConfigMap
metadata:
  labels:
    app: hello
  name: the-map
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello
  name: the-service
spec:
  ports:
  - port: 8666
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello
    deployment: hello
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello
  name: the-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
      deployment: hello
  template:
    metadata:
      labels:
        app: hello
        deployment: hello
    spec:
      containers:
      - command:
        - /hello
        - --port=8080
        - --enableRiskyFeature=$(ENABLE_RISKY)
        env:
        - name: ALT_GREETING
          valueFrom:
            configMapKeyRef:
              key: altGreeting
              name: the-map
        - name: ENABLE_RISKY
          valueFrom:
            configMapKeyRef:
              key: enableRisky
              name: the-map
        image: monopole/hello:1
        name: the-container
        ports:
        - containerPort: 8080
I tested this with:
❯ kustomize version
5.1.0
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.