kustomize
patches replace op doesn't work as patchesJson6902 with path: /metadata/namespace
What happened?
We are using patchesJson6902, but it is deprecated and will be removed in the future, so we decided to switch to patches. We have the following code:
patchesJson6902:
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: new_value
    target:
      kind: KafkaTopic
      version: v1beta2
      name: .*
It works correctly and replaces all the namespaces we need with new_value. For example, here we get this result:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  namespace: new_value
We tried to replace it with patches using kustomize edit fix and got:
patches:
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: new_value
    target:
      kind: KafkaTopic
      name: .*
      version: v1beta2
When we build this, all namespaces stay the same. But if we try to replace, for example, path: /kind, it works fine. Here we get this result:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  namespace: old_value
What did you expect to happen?
We expect the same behavior for patches and patchesJson6902, so in the example above we should get:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  namespace: new_value
How can we reproduce it (as minimally and precisely as possible)?
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - resources.yaml
patches:
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: new_value
    target:
      kind: ConfigMap
      name: .*
# resources.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-object
  namespace: test-namespace
data:
  placeholder: data
Expected output
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-object
  namespace: new_value
data:
  placeholder: data
Actual output
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-object
  namespace: test-namespace
data:
  placeholder: data
Kustomize version
5.0.1
Operating system
macOS
Hello @pufffikk!
I'm not able to reproduce this using the file specifications provided above, on macOS with kustomize version 5.0.1.

Can you provide me more information about your specific setup? Did you run this with just the files used in your reproduction and still get the namespace unchanged?
/triage not-reproducible
Hello, I am facing likely the same issue.
I have namespace: test_namespace in the kustomization.yaml, and I see that patchesJson6902 changes the namespace to new_value, while patches has no effect.
What could be the best approach to preserve the namespace for some resources?
Fabio.
Same here: I ran kustomize edit fix and now the namespace replacement no longer works. Running on Linux, inside a devcontainer in VS Code.
Updated kustomization.yaml to reproduce the error: if the kustomization file contains namespace: test-namespace, the patch doesn't change the namespace in the output. The other files can stay the same.
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test-namespace
resources:
  - resources.yaml
patches:
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: new_value
    target:
      kind: ConfigMap
      name: .*
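For what it's worth, a minimal sketch of the workaround described further down in this thread, applied to this reproduction (untested; it assumes the resources.yaml from the earlier comment): drop the kustomization-level namespace field and set every namespace via patches instead.

```yaml
# kustomization.yaml -- sketch only
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# namespace: test-namespace   <- removed; the namespace transformer would
#                                otherwise overwrite the patched value
resources:
  - resources.yaml
patches:
  # default namespace for everything
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: test-namespace
    target:
      name: .*
  # exception: this ConfigMap should land elsewhere (later patches win)
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: new_value
    target:
      kind: ConfigMap
      name: .*
```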
Hello @cailynse, please have a look at the last comment.
Same here
$ kustomize version
v5.0.3
# kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: velero
resources:
  - release.yaml
patches:
  - target:
      group: ""
      version: v2beta1
      kind: HelmRelease
      name: velero
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: flux-system
The namespace is overridden by the Kustomization, and patches is not working as expected.
$ kustomize build .
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: velero
  namespace: velero
spec:
  chart:
    spec: ...
Using patchesJson6902 instead of patches, it works as expected:
$ kustomize build .
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: velero
  namespace: flux-system
spec:
  chart:
    spec: ...
Same here: kustomize version v5.1.0.
When using patchesJson6902, one is able to overwrite the namespace: definition in a resulting resource.
Using patches instead does not work (the namespace: definition persists).
What is actually intended? Personally, I would say that overwriting the namespace in patches is odd, since specifying namespace: ... in a kustomization.yaml expresses the intention to deploy resources to a specific namespace. Using patchesJson6902 for this is a hack, at the very least.
I get the same results with more resources: patchesJson6902 works, and kustomize edit fix does not convert it to a working configuration.
We also see exactly the same behaviour, where patchesJson6902 uses the namespace from the op: replace value (velero):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test-namespace
patchesJson6902:
  # places backup schedule into velero namespace
  - target:
      kind: Schedule
      name: argocd-scheduled
      version: v1
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: velero
output:
apiVersion: velero.io/v1
kind: Schedule
metadata:
  labels:
    app: argocd
    app.kubernetes.io/instance: argocd
    argocd.argoproj.io/instance: argocd
  name: argocd-scheduled
  namespace: velero
spec:
  schedule: 5 1 * * *
  template:
    defaultVolumesToRestic: true
    hooks: {}
    includeClusterResources: true
    includedNamespaces:
      - argocd
    storageLocation: xyz
    ttl: 120h0m0s
If we change patchesJson6902 to patches, we get the namespace that is defined in the kustomization.yaml (namespace: test-namespace):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test-namespace
patches:
  # places backup schedule into velero namespace
  - target:
      kind: Schedule
      name: argocd-scheduled
      version: v1
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: velero
output:
apiVersion: velero.io/v1
kind: Schedule
metadata:
  labels:
    app: argocd
    app.kubernetes.io/instance: argocd
    argocd.argoproj.io/instance: argocd
  name: argocd-scheduled
  namespace: argocd
spec:
  schedule: 5 1 * * *
  template:
    defaultVolumesToRestic: true
    hooks: {}
    includeClusterResources: true
    includedNamespaces:
      - argocd
    storageLocation: xyz
    ttl: 120h0m0s
To me it looks like the order of operations with patches is different from patchesJson6902. It seems that, once all patches have been applied, the resource is patched again with the namespace definition from the main kustomization.yaml. If we remove namespace: test-namespace, we get the correct namespace: velero.
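A sketch of the kustomization with that removal applied (unverified; resources other than the Schedule would then keep whatever namespace their manifests declare):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# namespace: test-namespace   <- removed, so the namespace transformer
#                                cannot re-apply it after the patch runs
patches:
  - target:
      kind: Schedule
      name: argocd-scheduled
      version: v1
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: velero
```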
Is this behavior intended @cailynse ?
Hello @cailynse, the same issue here.
$ kustomize version
v5.2.1
To reproduce
$ tree kustomize
kustomize
├── base
│   └── cadvisor
│       └── kustomization.yaml
└── overlays
    └── prod
        └── cadvisor
            └── kustomization.yaml
Where base/cadvisor/kustomization.yaml is:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://github.com/google/cadvisor/deploy/kubernetes/base?ref=v0.48.1
And overlays/prod/cadvisor/kustomization.yaml is:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: metrics-system
resources:
  - ../../../base/cadvisor
patches:
  - patch: |-
      $patch: delete
      apiVersion: v1
      kind: Namespace
      metadata:
        name: cadvisor
        labels:
          app: cadvisor
  - target:
      group: apps
      version: v1
      kind: DaemonSet
      name: cadvisor
      labelSelector: app=cadvisor
    patch: |-
      - op: replace
        path: /metadata/name
        value: "cadvisor-2"
      - op: replace
        path: /spec/template/spec/containers/0/name
        value: "cadvisor-2"
      - op: replace
        path: /spec/template/spec/serviceAccountName
        value: "cadvisor-2"
As a result, metadata.name, spec.template.spec.containers[0].name, and spec.template.spec.serviceAccountName are not changed.
If I change my overlay to the form below, it works as expected.
patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: DaemonSet
      name: cadvisor
      labelSelector: app=cadvisor
    patch: |-
      - op: replace
        path: /metadata/name
        value: "cadvisor-2"
      - op: replace
        path: /spec/template/spec/containers/0/name
        value: "cadvisor-2"
      - op: replace
        path: /spec/template/spec/serviceAccountName
        value: "cadvisor-2"
kustomize build overlays/prod/cadvisor
# Warning: 'patchesJson6902' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor-2
  namespace: metrics-system
spec:
  ...
  containers:
    - image: gcr.io/cadvisor/cadvisor:v0.45.0
      name: cadvisor-2
  ...
  serviceAccountName: cadvisor-2
/triage not-reproducible
It is reproducible. Any updates on this, @cailynse?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Same issue on my side, any update?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Not sure if anyone else is or will be in the same situation as us, but we hit this issue without realising that we had to update the patch target name, because we use namePrefix in the kustomization file. With patchesJson6902, the replacement worked with a target name containing the namePrefix; with patches it no longer did.
This is how our kustomization.yaml file changed:
Before:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
...
namePrefix: example-
patchesJson6902:
  - target:
      kind: PersistentVolumeClaim
      name: example-files-claim
      version: v1
    path: replace-and-add-pvc-data.yaml
After:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
...
namePrefix: example-
patches:
  - path: replace-and-add-pvc-data.yaml
    target:
      kind: PersistentVolumeClaim
      name: files-claim
      version: v1
As pointed out by husira, to change the namespace only for specific targets (and without using patchesJson6902), we need to remove namespace from the kustomization and apply individual patches for the namespace(s) we need.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
...
namespace: default # <- remove this row
...
patches:
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: "default"
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: "custom"
    target:
      kind: ServiceAccount # my custom need
The first patch applies the default namespace everywhere, the second one only to the resources I need. It works for me at least. ;-)
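Putting the idea together, a self-contained illustrative sketch (all file and resource names here are hypothetical, not taken from this thread):

```yaml
# kustomization.yaml -- illustrative sketch only
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - sa.yaml   # a hypothetical ServiceAccount manifest
  - cm.yaml   # a hypothetical ConfigMap manifest
patches:
  # everything gets the former default namespace...
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: default
    target:
      name: .*
  # ...then ServiceAccounts are moved to a custom one (later patches win)
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: custom
    target:
      kind: ServiceAccount
      name: .*
```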