
Create documentation for migrating from deprecated (removed in 5.0.0) `patchesStrategicMerge` to `patches`

karlschriek opened this issue on Feb 17 '23 · 20 comments

Eschewed features

  • [X] This issue is not requesting templating, unstructured edits, build-time side-effects from args or env vars, or any other eschewed feature.

What would you like to have added?

Create documentation that explains how to transition from using patchesStrategicMerge and patchesJson6902 to using patches

Why is this needed?

We use patchesStrategicMerge extensively. I have read several issues saying that patches is a superset of patchesStrategicMerge and patchesJson6902, but I have yet to come across a document that explains how to achieve the exact same outcome using the patches directive. Since the old fields have been deprecated, some documentation on how to migrate would be useful.

Can you accomplish the motivating task without this feature, and if so, how?

Yes, if someone could tell me here in this issue how to use patches to do what I previously did with patchesStrategicMerge.

What other solutions have you considered?

None.

Anything else we should know?

No response

Feature ownership

  • [ ] I am interested in contributing this feature myself! 🎉

karlschriek avatar Feb 17 '23 08:02 karlschriek

I fully support this since I ran into this issue today: https://github.com/kubernetes-sigs/kustomize/issues/3481#issuecomment-1434407293

lblazewski avatar Feb 17 '23 10:02 lblazewski

I have noticed the following changes.

  1. patchesStrategicMerge allowed multiple patches to exist in a single patch file, separated by ---; patches doesn't
  2. regardless of whether the Kind, Group, metadata.name etc. already exist in the patch, they have to be specified in the target
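To illustrate point 1: a multi-document patch file has to be split into one file per document before it can be referenced from patches. A sketch, using hypothetical file names:

```yaml
# patchesStrategicMerge accepted one file containing several patches
# separated by `---`; with `patches`, each document must live in its
# own file and be listed separately:
patches:
  - path: patch-deployment.yaml  # hypothetical: first document of the old file
  - path: patch-service.yaml     # hypothetical: second document of the old file
```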

mgazza avatar Feb 23 '23 09:02 mgazza

@mgazza What you've described is a separate issue, and is captured by https://github.com/kubernetes-sigs/kustomize/issues/5049, which we have decided to accept.

natasha41575 avatar Feb 23 '23 16:02 natasha41575

Create documentation for migrating from deprecated (removed in 5.0.0)

Just so we are on the same page: the patchesStrategicMerge and patchesJson6902 fields are deprecated in v5, not removed. They will never be removed from the Kustomization v1beta1 type, but at some point, we will create a Kustomization v1 type that will no longer include them. After that (likely years away), we will eventually stop supporting v1beta1. We've announced the deprecation early so that folks will start with and migrate to the newer fields, and report any shortcomings we need to address in them, such as #5049 .

Ideally, the migration path is simple: you run kustomize edit fix, and your Kustomization is updated for you (#5040 is an outstanding issue with that / an alternative to #5049 ). The edit fix command is already mentioned in the docs: see the end of the deprecation notice here for example. Is there a particular conversion that is not working for you, or that you are wanting to do manually and unsure how? Generally speaking, you need to add either the patch: or the path: key before each existing value, as appropriate based on the content.
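For illustration, the manual conversion typically looks like the following (file and resource names here are hypothetical placeholders):

```yaml
# Before (deprecated v1beta1 fields):
patchesStrategicMerge:
  - deployment-patch.yaml
patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: my-app
    path: json-patch.yaml

# After (unified `patches` field):
patches:
  - path: deployment-patch.yaml  # strategic merge patch: just add the `path:` key
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: my-app
    path: json-patch.yaml        # JSON 6902 patch: keeps its target
```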

/triage needs-information /remove-kind feature /kind documentation

KnVerey avatar Mar 15 '23 22:03 KnVerey

I have noticed the following changes.

  1. patchesStrategicMerge allowed multiple patches to exist in a single patch file separated by ---, patches doesn't @mgazza

Correct, and when using kustomize edit fix as suggested by the latest v5.0.1, it breaks the kustomize build because, as noted, patchesStrategicMerge allowed multiple patches in one file.

I guess this is tracked here #5049

asaf400 avatar Apr 27 '23 16:04 asaf400

I looked for a while for how to do the migration; in my scenario it's simple, just from

patchesStrategicMerge:
- patch-flux_kustomization.yaml

to

patches:
- path: patch-flux_kustomization.yaml

you can find more on the official website:

https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patches/

TOHUHM avatar Aug 21 '23 02:08 TOHUHM

kustomize edit fix does not work for me. First, I don't have kustomize installed and need to use kubectl kustomize edit fix instead, which gives the error message

error: specify one path to kustomization.yaml

I tried it with kubectl kustomize edit fix overlays/myoverlay and also with pointing directly at the kustomization.yaml - it always gives the same error. Also, kubectl kustomize edit fix --help seems to output the help for kubectl kustomize instead. So the official error message, stating I should use kustomize edit fix, was very much useless for me.
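A possible workaround, assuming the standalone kustomize binary can be installed: kubectl bundles only kustomize's build functionality (which is why it expects a single path), while the edit subcommands exist only in the standalone binary and operate on the kustomization.yaml in the current working directory:

```shell
# requires the standalone kustomize binary; the edit commands take no path
# argument and act on the kustomization.yaml in the current directory
cd overlays/myoverlay
kustomize edit fix
```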

schlichtanders avatar Oct 09 '23 07:10 schlichtanders

kustomize edit fix does not work for me. First, I don't have kustomize installed and need to use kubectl kustomize edit fix instead, which gives the error message

error: specify one path to kustomization.yaml

I tried it with kubectl kustomize edit fix overlays/myoverlay and also with pointing directly at the kustomization.yaml - it always gives the same error. Also, kubectl kustomize edit fix --help seems to output the help for kubectl kustomize instead. So the official error message, stating I should use kustomize edit fix, was very much useless for me.

I'm experiencing exactly the same behavior.

cyberslot avatar Oct 15 '23 09:10 cyberslot

@schlichtanders @cyberslot you mention using kustomize bundled with kubectl (kubectl kustomize) - can you ensure you use kubectl version 1.27+ ? Older versions of kubectl bundled v4 version of the kustomize. You can check with kubectl version.

sbocinec avatar Oct 17 '23 10:10 sbocinec

@sbocinec Personally, I've tried both ways without success.

k version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.1-gke.1066000

kustomize version
v5.1.1

cyberslot avatar Oct 17 '23 11:10 cyberslot

Same for me,

[sam@sam-redhat-laptop copypvc]$ kubectl kustomize edit fix overlays/example/kustomization.yaml 
error: specify one path to kustomization.yaml
[sam@sam-redhat-laptop copypvc]$ kubectl kustomize edit fix overlays/example
error: specify one path to kustomization.yaml
[sam@sam-redhat-laptop copypvc]$ kubectl version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3

sfxworks avatar Oct 20 '23 20:10 sfxworks

Same for me,

[sam@sam-redhat-laptop copypvc]$ kubectl kustomize edit fix overlays/example/kustomization.yaml 
error: specify one path to kustomization.yaml
[sam@sam-redhat-laptop copypvc]$ kubectl kustomize edit fix overlays/example
error: specify one path to kustomization.yaml
[sam@sam-redhat-laptop copypvc]$ kubectl version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3

Same issue with same versions

Skoucail avatar Oct 31 '23 15:10 Skoucail

How would one now handle semantically "empty" kustomization.yaml files with >v5.0.0 such as the following:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

Building a base that references this file yields an error:

deployment/base/resource-quotas': kustomization.yaml is empty

The file and folder hierarchy is still needed for legacy reasons.

mihaigalos avatar Nov 21 '23 09:11 mihaigalos

How would one now handle semantically "empty" kustomization.yaml files with >v5.0.0 such as the following:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

Building a base that references this file yields an error:

deployment/base/resource-quotas': kustomization.yaml is empty

The file and folder hierarchy is still needed for legacy reasons.

Have you tried turning this into a component? You could add a dummy image mapping or something like that

mgazza avatar Nov 21 '23 21:11 mgazza

~~I used an empty.yaml and referenced it in resources in kustomization.yaml.~~ I just used an empty list like so: resources: []
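For reference, the minimal non-empty placeholder file then looks like:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: []  # empty list keeps the file semantically non-empty for kustomize v5+
```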

mihaigalos avatar Nov 21 '23 21:11 mihaigalos

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 21 '24 15:02 k8s-triage-robot

/remove-lifecycle stale

robin-wayve avatar Feb 27 '24 16:02 robin-wayve

same issue :/

surajkrishan avatar Mar 01 '24 08:03 surajkrishan