SMP with target does not respect patch GVK
Describe the bug
When using a Strategic-Merge Patch (SMP) in a kustomization, the GVK from the patch is not used to limit the target, e.g. when targeting via labelSelector. At the same time, Kustomize does not accept an SMP without a GVK in the patch.
This makes SMPs with target inconvenient to use, as it seems you have to duplicate the GVK in target to limit the patch to the desired GVK.
Files that can reproduce the issue
Using this patch:
```yaml
patches:
  - patch: |-
      apiVersion: v1
      kind: Service
      metadata:
        name: any
        annotations:
          my-annotation: blubb
    target:
      labelSelector: my-label=foo
```
not only Services, but every resource matching the label selector gets the new annotation, regardless of kind.
See this reproducer repo: https://github.com/ciis0/kustomize-4710
Expected output
see reproducer
Actual output
see reproducer
Kustomize version
discovered with
{Version:kustomize/v4.5.4 GitCommit:cf3a452ddd6f83945d39d582243b8592ec627ae3 BuildDate:2022-03-28T23:12:45Z GoOs:linux GoArch:amd64}
verified to still be present in
{Version:kustomize/v4.5.5 GitCommit:daa3e5e2c2d3a4b8c94021a7384bfb06734bcd26 BuildDate:2022-05-20T20:25:40Z GoOs:linux GoArch:amd64}
Platform
Linux
Additional context
@ciis0: This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
this seems to be the relevant place in code:
https://github.com/kubernetes-sigs/kustomize/blob/0c6e827ab81051a95bd061bd1cc917f70335ce86/api/internal/builtins/PatchTransformer.go#L88-L101
maybe a patchGVK: true option could be added to the patch or target selector to copy the GVK from the SMP to the target selector?
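A rough sketch of what that could look like (patchGVK is not an existing Kustomize field, just the proposed name):

```yaml
patches:
  - patch: |-
      apiVersion: v1
      kind: Service
      metadata:
        name: any
        annotations:
          my-annotation: blubb
    target:
      labelSelector: my-label=foo
      patchGVK: true  # hypothetical: copy apiVersion/kind from the patch into this selector
```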
I believe this is behaving as intended: either you specify the target field, in which case it is used to identify targets and the patch's identifier (group, version, kind, name, namespace) is ignored, or you don't, in which case the patch's own identifier is used for targeting.
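In other words, a sketch of the two mutually exclusive forms, reusing the Service example from above:

```yaml
# Form 1: no target — the patch's own apiVersion/kind/name identify the resource
patches:
  - patch: |-
      apiVersion: v1
      kind: Service
      metadata:
        name: my-service
        annotations:
          my-annotation: blubb

# Form 2: target present — only the target fields select resources;
# the patch's GVK and name are ignored for selection
patches:
  - patch: |-
      apiVersion: v1
      kind: Service
      metadata:
        name: any
        annotations:
          my-annotation: blubb
    target:
      kind: Service
      labelSelector: my-label=foo
```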
/triage not-an-issue /kind support
Ok, I see.
The docs are a little confusing here, because they explicitly say only that name is ignored when target is set.
Thus I always have to write my SMPs with labelSelectors like this?
```yaml
patches:
  - patch: |-
      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: any
        annotations:
          haproxy.router.openshift.io/timeout: 5m
    target:
      group: route.openshift.io
      version: v1
      kind: Route
      labelSelector: label=value
```
Not happy about that duplication of the GVK in patch and target... ^^
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
> maybe a patchGVK: true option could be added to the patch or target selector to copy the GVK from the SMP to the target selector?
would you accept a PR for that?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".