kustomize does not remove items from an array using strategic merge
Describe the bug
Deleting items from an array using a strategic merge patch does not work as expected in my base/overlay scenario. Instead of removing the items from the list, the patch replaces the list content.
Please see my example below. The outputs are from the kustomize build ./001 command.
Files that can reproduce the issue
Base layer - directory ./000
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- prometheus-rules.yaml
prometheus-rules.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app.kubernetes.io/name: kube-prometheus
    app.kubernetes.io/part-of: kube-prometheus
    prometheus: k8s
    role: alert-rules
  name: kubernetes-monitoring-rules
  namespace: monitoring
spec:
  groups:
  - name: kubernetes-system-scheduler
    rules:
    - alert: KubeSchedulerDown
      annotations:
        description: KubeScheduler has disappeared from Prometheus target discovery.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeschedulerdown
        summary: Target disappeared from Prometheus target discovery.
      expr: |
        absent(up{job="kube-scheduler"} == 1)
      for: 15m
      labels:
        severity: critical
  - name: kubernetes-system-controller-manager
    rules:
    - alert: KubeControllerManagerDown
      annotations:
        description: KubeControllerManager has disappeared from Prometheus target discovery.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubecontrollermanagerdown
        summary: Target disappeared from Prometheus target discovery.
      expr: |
        absent(up{job="kube-controller-manager"} == 1)
      for: 15m
      labels:
        severity: critical
  - name: kubernetes-system-kube-proxy
    rules:
    - alert: KubeProxyDown
      annotations:
        description: KubeProxy has disappeared from Prometheus target discovery.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeproxydown
        summary: Target disappeared from Prometheus target discovery.
      expr: |
        absent(up{job="kube-proxy"} == 1)
      for: 15m
      labels:
        severity: critical
Overlay - directory ./001
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../000
patchesStrategicMerge:
- prometheus-rules-merge.yaml
prometheus-rules-merge.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kubernetes-monitoring-rules
  namespace: monitoring
spec:
  groups:
  - $patch: delete
    name: kubernetes-system-scheduler
  - $patch: delete
    name: kubernetes-system-controller-manager
Expected output
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app.kubernetes.io/name: kube-prometheus
    app.kubernetes.io/part-of: kube-prometheus
    prometheus: k8s
    role: alert-rules
  name: kubernetes-monitoring-rules
  namespace: monitoring
spec:
  groups:
  - name: kubernetes-system-kube-proxy
    rules:
    - alert: KubeProxyDown
      annotations:
        description: KubeProxy has disappeared from Prometheus target discovery.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeproxydown
        summary: Target disappeared from Prometheus target discovery.
      expr: |
        absent(up{job="kube-proxy"} == 1)
      for: 15m
      labels:
        severity: critical
Actual output
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app.kubernetes.io/name: kube-prometheus
    app.kubernetes.io/part-of: kube-prometheus
    prometheus: k8s
    role: alert-rules
  name: kubernetes-monitoring-rules
  namespace: monitoring
spec:
  groups:
  - $patch: delete
    name: kubernetes-system-scheduler
  - $patch: delete
    name: kubernetes-system-controller-manager
Kustomize version
{Version:kustomize/v4.5.4 GitCommit:cf3a452ddd6f83945d39d582243b8592ec627ae3 BuildDate:2022-03-28T23:06:20Z GoOs:darwin GoArch:amd64}
Platform
macOS
Additional context
@oliver-goetz: This issue is currently awaiting triage.
SIG CLI takes the lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label. The triage/accepted label can be added by org members by writing /triage accepted in a comment.
I stumbled upon exactly the same issue when trying to patch away (exactly the same) elements in a list.
There is really no easy and reliable way of removing elements from lists with kustomize. What I ended up doing was patching via a reference to an index, e.g. /spec/groups/5 (see the sketch below), which will break as soon as the upstream project reorders the elements.
What would be useful is support for field expressions in delete patches, e.g. /spec/groups/.[name=kubernetes-system-controller-manager]
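A minimal sketch of that index-based workaround, assuming the two unwanted groups sit at indices 0 and 1 of spec.groups in the base (the indices are an assumption; the inline-patch style is just one way to write it):
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../000
patches:
- target:
    group: monitoring.coreos.com
    version: v1
    kind: PrometheusRule
    name: kubernetes-monitoring-rules
  patch: |-
    # JSON6902 ops apply in order: after the first remove, the remaining
    # elements shift down, so index 0 is removed twice (assumed ordering).
    - op: remove
      path: /spec/groups/0
    - op: remove
      path: /spec/groups/0
As noted above, this is fragile: it removes the wrong groups (or errors out) as soon as the base inserts or reorders entries.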
Is there any workaround for this (without using the index number, which is not reliable)? I have exactly the same issue (with the same Prometheus rules I want to remove).
I suspect the problem here is that you're trying to patch a custom resource without providing the schema for it (via the openapi field). By default, SMPs replace arrays. If you provide a schema with the extensions for spec.groups specified and that doesn't fix it, please reopen.
/triage duplicate of https://github.com/kubernetes-sigs/kustomize/issues/4175, https://github.com/kubernetes-sigs/kustomize/issues/4514 and others
/kind support
/remove-kind bug
/triage resolved
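A sketch of what providing that schema could look like in the ./001 overlay, assuming a hand-written schema file named prometheusrule-schema.json (the file name and the abridged definitions are illustrative, not an official schema for this CRD):
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../000
patchesStrategicMerge:
- prometheus-rules-merge.yaml
openapi:
  # hypothetical file name; point this at your own schema file
  path: prometheusrule-schema.json
prometheusrule-schema.json (abridged, illustrative)
{
  "definitions": {
    "com.coreos.monitoring.v1.PrometheusRule": {
      "properties": {
        "spec": {
          "type": "object",
          "properties": {
            "groups": {
              "type": "array",
              "items": {"type": "object"},
              "x-kubernetes-patch-merge-key": "name",
              "x-kubernetes-patch-strategy": "merge"
            }
          }
        }
      },
      "x-kubernetes-group-version-kind": [
        {"group": "monitoring.coreos.com", "kind": "PrometheusRule", "version": "v1"}
      ]
    }
  }
}
With x-kubernetes-patch-merge-key: name and x-kubernetes-patch-strategy: merge declared on spec.groups, the strategic merge patch can match list entries by name, so the $patch: delete entries should remove the two groups instead of replacing the whole list.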
@KnVerey: The label(s) triage/of, triage/https://github.com/kubernetes-sigs/kustomize/issues/4175,, triage/https://github.com/kubernetes-sigs/kustomize/issues/4514, triage/and, triage/others cannot be applied, because the repository doesn't have them.