"$patch: delete" not working as expected
Describe the bug
While trying to remove an element from a list with $patch: delete, the output has all of the original elements removed and the patch text itself added as the only element.
Files that can reproduce the issue
Expected output
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app.kubernetes.io/name: kube-prometheus
    app.kubernetes.io/part-of: kube-prometheus
  name: kubernetes-monitoring-rules
  namespace: monitoring
spec:
  groups:
  - name: kube-apiserver-burnrate.rules
    rules:
    - expr: up
  - name: kube-apiserver-histogram.rules
    rules:
    - expr: up
Actual output
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app.kubernetes.io/name: kube-prometheus
    app.kubernetes.io/part-of: kube-prometheus
  name: kubernetes-monitoring-rules
  namespace: monitoring
spec:
  groups:
  - $patch: delete
    name: kubernetes-system-kube-proxy
Kustomize version
{Version:kustomize/v4.2.0 GitCommit:d53a2ad45d04b0264bcee9e19879437d851cb778 BuildDate:2021-06-30T22:49:26Z GoOs:darwin GoArch:amd64}
I have tried upgrading and downgrading with the same result (I'd prefer to stick with v4.2.0, as that is the current version in our CI/CD).
Platform
macOS
@cten: This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/triage needs-information
Can you please provide the input files (i.e. your Kustomization and patch) as well?
resource.yaml:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app.kubernetes.io/name: kube-prometheus
    app.kubernetes.io/part-of: kube-prometheus
  name: kubernetes-monitoring-rules
  namespace: monitoring
spec:
  groups:
  - name: kubernetes-system-kube-proxy
    rules:
    - alert: KubeProxyDown
  - name: kube-apiserver-burnrate.rules
    rules:
    - expr: up
  - name: kube-apiserver-histogram.rules
    rules:
    - expr: up
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
resources:
- resource.yaml
patchesStrategicMerge:
- |-
  apiVersion: monitoring.coreos.com/v1
  kind: PrometheusRule
  metadata:
    name: kubernetes-monitoring-rules
    namespace: monitoring
  spec:
    groups:
    - name: kubernetes-system-kube-proxy
      $patch: delete
Output:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app.kubernetes.io/name: kube-prometheus
    app.kubernetes.io/part-of: kube-prometheus
  name: kubernetes-monitoring-rules
  namespace: monitoring
spec:
  groups:
  - $patch: delete
    name: kubernetes-system-kube-proxy
Expected output:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app.kubernetes.io/name: kube-prometheus
    app.kubernetes.io/part-of: kube-prometheus
  name: kubernetes-monitoring-rules
  namespace: monitoring
spec:
  groups:
  - name: kube-apiserver-burnrate.rules
    rules:
    - expr: up
  - name: kube-apiserver-histogram.rules
    rules:
    - expr: up
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I also found this issue whilst trying to $patch: delete an OpenShift Route from a namespace deployment that doesn't need it. Using kubectl Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1-5-g76a04fc", GitCommit:"d1ffb3c4b74fa8a478f63be1df429479182ee3b0", GitTreeState:"clean", BuildDate:"2021-08-16T01:34:13Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
Input
We use a base file for some reason
Base kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- reader-admin-service.yaml
- reader-admin-route.yaml
reader-admin-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: reader-admin
  name: reader-admin
spec:
  ports:
  - name: http
    port: 8090
    protocol: TCP
    targetPort: 8090
  selector:
    app: reader-admin
  type: ClusterIP
reader-admin-route.yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: reader-admin
  name: reader-admin
spec:
  port:
    targetPort: http
  to:
    kind: Service
    name: reader-admin
    weight: 100
Namespace kustomization
This is the kustomization file specific to the admin reader's routes and services in our build-test namespace.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../../../base/overlays/reader
patchesStrategicMerge:
- "strategic_merge/delete-reader-admin.yaml"
delete-reader-admin.yaml
$patch: delete
apiVersion: v1
kind: Service
metadata:
  name: reader-admin
---
$patch: delete
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: reader-admin
Output
With patchesStrategicMerge
{}
---
$patch: delete
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: reader-admin
  name: reader-admin
spec:
  port:
    targetPort: http
  to:
    kind: Service
    name: reader-admin
    weight: 100
Without patchesStrategicMerge
apiVersion: v1
kind: Service
metadata:
  labels:
    app: reader-admin
  name: reader-admin
spec:
  ports:
  - name: http
    port: 8090
    protocol: TCP
    targetPort: 8090
  selector:
    app: reader-admin
  type: ClusterIP
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: reader-admin
  name: reader-admin
spec:
  port:
    targetPort: http
  to:
    kind: Service
    name: reader-admin
    weight: 100
Also, this is a reduced version of our original files, since I'm not allowed to share every aspect of our deployment, but I hope it illustrates that this works on default Kubernetes resources but not on other resources.
P.S. Just noticed some warnings/errors when applying our complete manifest for the namespace with the $patch: delete patches:
unable to decode "install_manifest.yaml": Object 'Kind' is missing in '{"metadata":{"namespace":"build-test"}}'
It seems like it still applies the namespace to the emptied {} resources:
[...]
---
metadata:
  namespace: build-test
---
[...]
In @cten's case, I think the issue is that the example targets a list in a CRD type for which no openapi schema is provided. By default, lists are atomically replaced, not merged, so in the absence of a schema indicating otherwise, Kustomize will replace the entire groups list regardless of what's inside it (it never even sees the $patch key). Please try providing the appropriate schema via the openapi field.
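For reference, here's a minimal sketch of what that could look like, built on the files above. The schema file name prometheusrule-schema.json and its abridged contents are illustrative, not taken from this thread; a complete schema can be generated from a live cluster that has the CRD installed (recent kustomize versions have kustomize openapi fetch for this).

# kustomization.yaml (sketch: the kustomization above, plus an openapi field)
apiVersion: kustomize.config.k8s.io/v1beta1
resources:
- resource.yaml
openapi:
  path: prometheusrule-schema.json
patchesStrategicMerge:
- |-
  apiVersion: monitoring.coreos.com/v1
  kind: PrometheusRule
  metadata:
    name: kubernetes-monitoring-rules
    namespace: monitoring
  spec:
    groups:
    - name: kubernetes-system-kube-proxy
      $patch: delete

And an abridged prometheusrule-schema.json, assuming the same definitions format as the Kubernetes swagger.json:

{
  "definitions": {
    "v1.PrometheusRule": {
      "properties": {
        "spec": {
          "type": "object",
          "properties": {
            "groups": {
              "type": "array",
              "items": {"type": "object"},
              "x-kubernetes-patch-merge-key": "name",
              "x-kubernetes-patch-strategy": "merge"
            }
          }
        }
      },
      "x-kubernetes-group-version-kind": [
        {"group": "monitoring.coreos.com", "kind": "PrometheusRule", "version": "v1"}
      ]
    }
  }
}

The x-kubernetes-patch-merge-key: name and x-kubernetes-patch-strategy: merge extensions are what tell the strategic-merge code to merge the groups list element by element, keyed on name, at which point $patch: delete can target just the matching element.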
For example, the same thing works fine for lists in a built-in type, for which Kustomize has the openapi built in:
# resources.yaml
---
apiVersion: v1
kind: PodTemplate
metadata:
  name: foo
template:
  spec:
    containers:
    - name: foo
    - name: bar
# kustomization.yaml
resources:
- resources.yaml
patchesStrategicMerge:
- |-
  apiVersion: v1
  kind: PodTemplate
  metadata:
    name: foo
  template:
    spec:
      containers:
      - name: foo
        $patch: delete
      - name: bar
# result
---
apiVersion: v1
kind: PodTemplate
metadata:
  name: foo
template:
  spec:
    containers:
    - name: bar
In @agreedSkiing's case, the problem is that the directive attempts to target an entire resource/document rather than a field/element. That is not supported, nor is resource removal in general; see our Eschewed Features.
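For what it's worth, the usual workaround under that policy is to restructure the layout so the optional resource is opt-in rather than patched away, for example with a kustomize component (available since v3.7). All file paths in this sketch are hypothetical:

# base/kustomization.yaml: only what every namespace needs
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- reader-admin-service.yaml

# components/route/kustomization.yaml: the Route, packaged as an opt-in component
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
- reader-admin-route.yaml

# overlays/build-test/kustomization.yaml: omits the component, so no Route is emitted
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base

# overlays/with-route/kustomization.yaml: opts in to the Route
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
components:
- ../../components/route

This keeps every overlay purely additive, which is the model kustomize is designed around, instead of relying on removal directives.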
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Feel free to reopen if my assessment that the problem is a missing openapi schema is incorrect.
/kind support
/triage resolved
/close