StatefulSet volumeClaimTemplates labels not properly merged after patch
What happened?
I encountered a case where label pairs are not properly merged after applying a patch.
What did you expect to happen?
I expected the source label pairs to still be present in volumeClaimTemplates[].metadata.labels.
How can we reproduce it (as minimally and precisely as possible)?
# source/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - sts.yaml
labels:
  - pairs:
      app.kubernetes.io/name: vmcluster
      app.kubernetes.io/stack: victoriametrics
    includeSelectors: true
    includeTemplates: true
# source/sts.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/component: vmstorage
  name: vmstorage
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: vmstorage
    spec:
      containers:
        - name: vmstorage
          image: quay.io/victoriametrics/vmstorage:v1.128.0-cluster
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        labels:
          app.kubernetes.io/component: vmstorage
        name: vmstorage-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: gp3
        volumeMode: Filesystem
# kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - source
labels:
  - pairs:
      app.kubernetes.io/instance: vmcluster-internal
    includeSelectors: true
    includeTemplates: true
patches:
  - path: patch.yaml
# patch.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/component: vmstorage
  name: vmstorage
spec:
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        labels:
          app.kubernetes.io/component: vmstorage
        name: vmstorage-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 600Gi
        storageClassName: gp3
        volumeMode: Filesystem
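With the files laid out as implied by the listings above (kustomization.yaml and patch.yaml in the working directory, sts.yaml and its kustomization under source/), the outputs below come from building the top-level kustomization; a minimal sketch of the command, assuming that layout:

# run from the directory containing the top-level kustomization.yaml
kustomize build .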
Expected output
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/component: vmstorage
    app.kubernetes.io/instance: vmcluster-internal
    app.kubernetes.io/name: vmcluster
    app.kubernetes.io/stack: victoriametrics
  name: vmstorage
spec:
  ...
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        labels:
          app.kubernetes.io/component: vmstorage
          app.kubernetes.io/instance: vmcluster-internal
          app.kubernetes.io/name: vmcluster
          app.kubernetes.io/stack: victoriametrics
        name: vmstorage-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 600Gi
        storageClassName: gp3
        volumeMode: Filesystem
Actual output
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/component: vmstorage
    app.kubernetes.io/instance: vmcluster-internal
    app.kubernetes.io/name: vmcluster
    app.kubernetes.io/stack: victoriametrics
  name: vmstorage
spec:
  ...
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        labels:
          app.kubernetes.io/component: vmstorage
          app.kubernetes.io/instance: vmcluster-internal
        name: vmstorage-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 600Gi
        storageClassName: gp3
        volumeMode: Filesystem
Kustomize version
v5.7.1
Operating system
macOS