nameSuffix improperly applied to the base Deployment's persistentVolumeClaim/claimName
What happened?
When a base Deployment and an overlay Deployment are built together from a parent kustomization.yaml, the overlay's nameSuffix is applied to persistentVolumeClaim/claimName in both the base resource and the overlay resource.
Example configuration is below. I've observed this behavior with kustomize 5.4.1 and 5.6.0 on macOS, and in my cluster running on Linux in the kustomize-controller of Flux v2.2.2 (I'm not certain which kustomize version is embedded there).
What did you expect to happen?
The overlay Deployment should be the only resource whose persistentVolumeClaim/claimName gets patched with the nameSuffix.
How can we reproduce it (as minimally and precisely as possible)?
# myapp/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- pvc.yaml
# myapp/deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: mynamespace
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    spec:
      containers:
      - name: myapp
        image: lscr.io/linuxserver/myapp:4.7.5
        volumeMounts:
        - name: myapp-configs
          mountPath: /config
      volumes:
      - name: myapp-configs
        persistentVolumeClaim:
          claimName: myapp-configs
# myapp/pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-configs
  namespace: mynamespace
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
# myapp-variant/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../myapp
labels:
- pairs:
    app: myapp-variant
  includeSelectors: true
nameSuffix: -variant
# parent kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- myapp
- myapp-variant
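For reference, the directory layout these files assume looks like this (I'm placing the parent kustomization.yaml in the directory that kustomize build . is run from; the relative paths match the file headers above):
.
├── kustomization.yaml       # parent kustomization.yaml
├── myapp
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   └── pvc.yaml
└── myapp-variant
    └── kustomization.yaml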
Expected output
When built from the parent kustomization.yaml:
$ kustomize build .
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-configs
  namespace: mynamespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: longhorn
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: myapp-variant
  name: myapp-configs-variant
  namespace: mynamespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: longhorn
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
  namespace: mynamespace
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    spec:
      containers:
      - image: lscr.io/linuxserver/myapp:4.7.5
        name: myapp
        volumeMounts:
        - mountPath: /config
          name: myapp-configs
      volumes:
      - name: myapp-configs
        persistentVolumeClaim:
          claimName: myapp-configs <-------- this should remain as defined in the base config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-variant
  name: myapp-variant
  namespace: mynamespace
spec:
  selector:
    matchLabels:
      app: myapp-variant
  template:
    metadata:
      labels:
        app: myapp-variant
    spec:
      containers:
      - image: lscr.io/linuxserver/myapp:4.7.5
        name: myapp
        volumeMounts:
        - mountPath: /config
          name: myapp-configs
      volumes:
      - name: myapp-configs
        persistentVolumeClaim:
          claimName: myapp-configs-variant
Actual output
Correct output when building each app independently:
$ kustomize build myapp
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-configs
  namespace: mynamespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: longhorn
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
  namespace: mynamespace
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    spec:
      containers:
      - image: lscr.io/linuxserver/myapp:4.7.5
        name: myapp
        volumeMounts:
        - mountPath: /config
          name: myapp-configs
      volumes:
      - name: myapp-configs
        persistentVolumeClaim:
          claimName: myapp-configs
$ kustomize build myapp-variant
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: myapp-variant
  name: myapp-configs-variant
  namespace: mynamespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: longhorn
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-variant
  name: myapp-variant
  namespace: mynamespace
spec:
  selector:
    matchLabels:
      app: myapp-variant
  template:
    metadata:
      labels:
        app: myapp-variant
    spec:
      containers:
      - image: lscr.io/linuxserver/myapp:4.7.5
        name: myapp
        volumeMounts:
        - mountPath: /config
          name: myapp-configs
      volumes:
      - name: myapp-configs
        persistentVolumeClaim:
          claimName: myapp-configs-variant
BUT, the output is incorrect in one spot when building from the parent kustomization.yaml:
$ kustomize build .
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-configs
  namespace: mynamespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: longhorn
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: myapp-variant
  name: myapp-configs-variant
  namespace: mynamespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: longhorn
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
  namespace: mynamespace
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    spec:
      containers:
      - image: lscr.io/linuxserver/myapp:4.7.5
        name: myapp
        volumeMounts:
        - mountPath: /config
          name: myapp-configs
      volumes:
      - name: myapp-configs
        persistentVolumeClaim:
          claimName: myapp-configs-variant <-------- why is nameSuffix added here?
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-variant
  name: myapp-variant
  namespace: mynamespace
spec:
  selector:
    matchLabels:
      app: myapp-variant
  template:
    metadata:
      labels:
        app: myapp-variant
    spec:
      containers:
      - image: lscr.io/linuxserver/myapp:4.7.5
        name: myapp
        volumeMounts:
        - mountPath: /config
          name: myapp-configs
      volumes:
      - name: myapp-configs
        persistentVolumeClaim:
          claimName: myapp-configs-variant
Kustomize version
5.6.0
Operating system
macOS
This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Probably a bug. As a workaround, you need to set a non-empty nameSuffix in myapp.
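For example, something like this in myapp/kustomization.yaml should work (a sketch only; -base is an illustrative value, and note it also renames the base resources themselves, e.g. myapp becomes myapp-base):
# myapp/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
nameSuffix: -base   # illustrative; per the workaround, any non-empty suffix should do
resources:
- deployment.yaml
- pvc.yaml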
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Probably a bug. As a workaround, you need to set a non-empty nameSuffix in myapp.
I totally agree that this is a bug! That's exactly why I went to the trouble of opening the issue.
I appreciate the idea for the workaround. Doing so has ramifications that I prefer to avoid, but nevertheless it could help keep my configuration DRY as originally intended.
Hi, can I work on this issue?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@conlon: Reopened this issue.
Hi, can I work on this issue?
@adoramshoval go for it, please! submit a PR, no one is stopping you :)
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".