kustomize
Add an extra kind of patch application condition to work around JSON patch limitations
Is your feature request related to a problem? Please describe:
Hi,
I have the following file structure:
__ base
|  |_ deployments
|  |  |_ d1.yaml
|  |  \_ d2.yaml
|  \_ globalPatches
|     |_ p1.patch
|     \_ p2.patch
\_ clusters
   |_ c1
   |  |_ component1
   |  |  \_ kustomization.yaml
   |  \_ component2
   |     \_ kustomization.yaml
   |_ c2
   |  |_ component1
   |  |  \_ kustomization.yaml
   |  \_ component2
   |     \_ kustomization.yaml
   \_ global_kustomization.yaml
My trouble is related to the global_kustomization.yaml file, in which I'm trying to apply the global patches found in /base/globalPatches to every cluster.
Notice that those patch files are written in YAML, but are treated as regular JSON Patch "scripts".
Both of these patch files add extra entries at path: /spec/template/spec/containers/0/env/- in the kind: Deployment resources.
Here is an example of such a patch file:
- op: add
  path: /spec/template/spec/containers/0/env/-
  value:
    name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
And here is how they are applied in the global_kustomization.yaml file:
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ...
patches:
  - target:
      kind: Deployment
      version: v1
    path: ../base/globalPatches/p1.patch
  - target:
      kind: Deployment
      version: v1
    path: ../base/globalPatches/p2.patch
...
Finally, notice that:
- d1.yaml already has an array defined at the path /spec/template/spec/containers/0/env
- whereas d2.yaml doesn't
My trouble is that, due to a JSON Patch limitation:
- the path: /spec/template/spec/containers/0/env/- formulation works fine on d1.yaml, because the array already exists and the extra environment variable definition can simply be appended to it,
- but the same formulation does not work on d2.yaml, because there is no array available in the first place.
I've tried to work around this by adding an extra initial patch file, in charge of adding the missing array as an empty one.
But without success: in that case it works as expected for d2.yaml, but it also erases the environment variables initially defined in d1.yaml, replacing them with an empty array...
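For reference, such an initializing patch would look roughly like the sketch below (reconstructed from the description above, not the actual file). Because the JSON Patch add operation replaces the value whenever the target path already exists, it wipes out the env entries that d1.yaml already defines:
# Sketch of the attempted init patch: create the env list as an empty array.
# On d2.yaml this adds the missing list; on d1.yaml it replaces the existing one.
- op: add
  path: /spec/template/spec/containers/0/env
  value: []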
Do you know a workaround for this kind of trouble when using JSON Patch with Kustomize?
Describe the solution you'd like:
If there is no solution right now, I'd like to be able to add a condition in the patches: sections.
Such a condition would be fulfilled if and only if the path it designates does not exist in the resource to be patched.
This way, it would be possible to conditionally add an initial empty array whenever it's missing, before applying the other patch files. Such as:
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ...
patches:
  - target:
      kind: Deployment
      version: v1
    missingPathEntry: /spec/template/spec/containers/0/env
    path: ../base/globalPatches/init.patch
  - target:
      kind: Deployment
      version: v1
    path: ../base/globalPatches/p1.patch
  - target:
      kind: Deployment
      version: v1
    path: ../base/globalPatches/p2.patch
...
Describe alternatives you've considered:
Being allowed to use an alternative technology to JSON Patch that doesn't have similar limitations.
@Mumeii: This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
We recently discussed a similar issue at the kustomize bug scrub this morning: https://github.com/kubernetes-sigs/kustomize/issues/4685, which requests that we add an option to the JSON patch field to create the fieldpath if it is missing. In that issue, we recommended the user switch to using a strategic merge patch, as that has the ability to create missing elements. We'd like to know if this solution would also work for you.
/triage needs-information
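For anyone trying the recommendation above on the original Deployment/env case, a strategic merge patch could look roughly like the sketch below. The container name app-container is hypothetical (the real container names in d1.yaml/d2.yaml aren't shown in this issue), and it must match the actual container name, otherwise the merge would add a new container instead. Containers and env entries merge by name, so the variable is appended when env already exists and the list is created when it's missing. When the patch is applied through a patches: entry with a target selector, the metadata.name here is only a placeholder.
# Sketch only: replace "app-container" with the real container name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: not-used
spec:
  template:
    spec:
      containers:
        - name: app-container
          env:
            - name: K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName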
I just hit the same issue and using a strategic merge patch won't work in my case, as it can't merge the arrays due to the lack of x-kubernetes-patch-merge-key.
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - release.yaml
  - release2.yaml
patches:
  - patch: |
      apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      metadata:
        name: not-used
      spec:
        postRenderers: []
    target:
      kind: HelmRelease
  - patch: |
      - op: add
        path: /spec/postRenderers
        value: []
      - op: add
        path: /spec/postRenderers/-
        value:
          kustomize:
            patchesStrategicMerge:
              - apiVersion: external-secrets.io/v1beta1
                kind: ExternalSecret
                spec:
                  secretStoreRef:
                    name: foo-backend
                    kind: ClusterSecretStore
    target:
      kind: HelmRelease
release.yaml:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  releaseName: podinfo
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
release2.yaml:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo2
  namespace: default
spec:
  releaseName: podinfo2
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
  postRenderers:
    - kustomize:
        patchesStrategicMerge:
          - apiVersion: external-secrets.io/v1beta1
            kind: ExternalSecret
            spec:
              secretStoreRef:
                kind: ClusterSecretStore
                name: bar-backend
If I run kubectl kustomize I get:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
  postRenderers:
    - kustomize:
        patchesStrategicMerge:
          - apiVersion: external-secrets.io/v1beta1
            kind: ExternalSecret
            spec:
              secretStoreRef:
                kind: ClusterSecretStore
                name: foo-backend
  releaseName: podinfo
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo2
  namespace: default
spec:
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
  postRenderers:
    - kustomize:
        patchesStrategicMerge:
          - apiVersion: external-secrets.io/v1beta1
            kind: ExternalSecret
            spec:
              secretStoreRef:
                kind: ClusterSecretStore
                name: foo-backend
  releaseName: podinfo2
As can be seen, the postRenderers for podinfo2 isn't merged. Please ignore the fact that the two patches would conflict; this is just an example.
If I remove this JSON patch:
- op: add
  path: /spec/postRenderers
  value: []
and the strategic merge patch:
- patch: |
    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: not-used
    spec:
      postRenderers: []
  target:
    kind: HelmRelease
then kubectl kustomize fails with:
error: add operation does not apply: doc is missing path: "/spec/postRenderers/-": missing value
If I add either of them back (the JSON patch or the strategic merge patch), it just replaces postRenderers instead of merging it.
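One possible workaround, not proposed anywhere in this thread and sketched here with the names from the example above: apply the list-initializing JSON patch only to the resources known to lack postRenderers, by narrowing its target with a name selector, and keep the append patch targeted at every HelmRelease. This trades the conditional behavior requested in this issue for an explicit, per-resource list that has to be maintained by hand.
patches:
  # Sketch: create the empty list only where it's missing ("podinfo" has no postRenderers).
  - patch: |
      - op: add
        path: /spec/postRenderers
        value: []
    target:
      kind: HelmRelease
      name: podinfo
  # Append the entry to every HelmRelease, now that the list exists everywhere.
  - patch: |
      - op: add
        path: /spec/postRenderers/-
        value:
          kustomize:
            patchesStrategicMerge:
              - apiVersion: external-secrets.io/v1beta1
                kind: ExternalSecret
                spec:
                  secretStoreRef:
                    name: foo-backend
                    kind: ClusterSecretStore
    target:
      kind: HelmRelease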
Hi @natasha41575,
Sorry for this late reply.
It's been a long time since I created this ticket, so I don't remember all the attempts I made before opening it, but I'm sure I tried the strategic-merge way several times before falling back on the JSON Patch way.
Unfortunately, either I was missing something at the time that would have made it work, or it's not compatible with what I'm trying to achieve.
Can you show me a way to do it with that kind of patch?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".