local-only configMap causing `may not add resource with an already registered id` error.
What happened?
Due to the processing order change in #4895, using components with replacements causes errors in 5.0.0+ that did not occur in 4.4.x and 4.5.x.
The use case is to include a component that alters DNS for blue/green deployments, which lets me change the DNS set-identifier and weight in a single place for every kustomization that includes the component.
What did you expect to happen?
Kustomize builds the deployment YAML without error.
How can we reproduce it (as minimally and precisely as possible)?
# deployment/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../ingress1
- ../ingress2

# ingress1/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ingress.yaml
components:
- ../components

# ingress1/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress1
spec:
  defaultBackend:
    service:
      name: svc1
      port:
        number: 80

# ingress2/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ingress.yaml
components:
- ../components

# ingress2/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress2
spec:
  defaultBackend:
    service:
      name: svc2
      port:
        number: 80

# components/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
configMapGenerator:
- literals:
  - deployment=blue
  - weight=100
  name: deployment
  options:
    disableNameSuffixHash: true
    annotations:
      config.kubernetes.io/local-config: "true"
replacements:
- source:
    kind: ConfigMap
    name: deployment
    fieldPath: data.deployment
  targets:
  - select:
      kind: Ingress
    fieldPaths:
    - metadata.annotations.[external-dns.alpha.kubernetes.io/set-identifier]
    options:
      create: true
- source:
    kind: ConfigMap
    name: deployment
    fieldPath: data.weight
  targets:
  - select:
      kind: Ingress
    fieldPaths:
    - metadata.annotations.[external-dns.alpha.kubernetes.io/aws-weight]
    options:
      create: true
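The outputs below come from building the top-level overlay, i.e. something like `kustomize build deployment` (assuming the files are laid out in sibling directories as indicated by the comments above).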
Expected output
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/aws-weight: "100"
    external-dns.alpha.kubernetes.io/set-identifier: blue
  name: ingress1
spec:
  defaultBackend:
    service:
      name: svc1
      port:
        number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/aws-weight: "100"
    external-dns.alpha.kubernetes.io/set-identifier: blue
  name: ingress2
spec:
  defaultBackend:
    service:
      name: svc2
      port:
        number: 80
Actual output
Error: accumulating resources: accumulation err='accumulating resources from '../ingress2': '/private/tmp/ingress2' must resolve to a file': recursed merging from path '/private/tmp/ingress2': may not add resource with an already registered id: ConfigMap.v1.[noGrp]/deployment.[noNs]
Kustomize version
5.0.1
Operating system
MacOS
Hi @mleklund, thank you for documenting your setup so clearly! I was able to reproduce the issue.
We'd like to understand your use case better before making a decision on the change in #4895. Would moving the `../components` reference from ingress1/kustomization.yaml and ingress2/kustomization.yaml to deployment/kustomization.yaml work for you? A sketch of that layout is below.
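Roughly something like this (a sketch only, reusing the directory names from your reproduction):

```yaml
# deployment/kustomization.yaml (suggested layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../ingress1
- ../ingress2
components:
- ../components
```

ingress1/kustomization.yaml and ingress2/kustomization.yaml would then drop their `components:` entries, so the component's ConfigMap is generated only once and its replacements run against both Ingresses after they have been accumulated.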
/kind regression /triage under-consideration
Hi @mleklund, I've discussed this issue with the team and the consensus is that it's not a regression. The justification is that the behavior this issue relies on was never documented and was only a side effect of the code. In other words, the documentation never specified that local-only resources should be local to their own kustomization rather than to the entire build process.
Having said that, we'd like to try our best to resolve your issue. Here are my thoughts on workarounds:
- As suggested in the previous comment, it seems like you want to run the same `Component` on both `Ingress`es. Could you call this `Component` from the overlay Kustomization instead?
- I don't think you're meant to generate resources in `Component`s. See the proposal. The idea was to run a set of operations on the same resources in the overlay. Could you break up your current `Component` so that the `ConfigMap`s are like "global variables" in your overlay that `replacements` in the `Component` can reference?
- Have you considered using `patches` instead of `replacements`? I think they're ideal for your use case, given that you're referencing a constant instead of an existing resource field. Is the pain point with `patches` that they don't support features like regex that `replacements` do? A rough sketch of the `patches` approach follows this list.
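For the `patches` idea, here is a sketch of what the component could look like. Note that the blue/green values are hard-coded in the patch rather than read from a generated ConfigMap, and the `metadata.name` in the patch body is a placeholder because the `target` selector chooses the resources:

```yaml
# components/kustomization.yaml (sketch using patches instead of replacements)
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- target:
    kind: Ingress
  patch: |-
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: not-used  # ignored for matching; the target selector above applies
      annotations:
        external-dns.alpha.kubernetes.io/set-identifier: blue
        external-dns.alpha.kubernetes.io/aws-weight: "100"
```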
Please share your thoughts.
/kind support /triage not-an-issue
/remove-kind bug /remove-kind regression /remove-triage under-consideration
/triage needs-information /remove-triage not-an-issue
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.