Allow custom merge keys?
Eschewed features
- [X] This issue is not requesting templating, unstructured edits, build-time side-effects from args or env vars, or any other eschewed feature.
What would you like to have added?
An option to add `x-kubernetes-patch-merge-key` metadata to existing OpenAPI resources (e.g. `io.k8s.api.core.v1.ConfigMapVolumeSource`),
maybe something like:
```yaml
# kustomization.yaml
configurations:
- merge-keys.yml
```

```yaml
# merge-keys.yml
mergeKeys:
- resource: io.k8s.api.core.v1.ConfigMapVolumeSource
  property: items
  key: key
  strategy: merge
```
Why is this needed?
Some locations in the Kubernetes OpenAPI spec are missing merge keys; some intentionally, some likely due to bugs:
A StatefulSet's volumeClaimTemplates are missing merge keys, so you cannot use a strategic merge patch (SMP) for setting or overwriting the storageClass of a volumeClaimTemplate. From the Kubernetes PoV this is intentional, as patching an STS volumeClaimTemplate is illegal; you need to recreate the STS for that, so k8s does not really have a motivation to fix it. :) (https://github.com/kubernetes/kubernetes/issues/74819, https://github.com/kubernetes-sigs/kustomize/issues/819)
ConfigMapVolumeSource's items are missing merge keys, so you cannot use an SMP for appending a key/path pair to an existing configMap volume. This is likely a bug.
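For context, this is roughly what those extensions look like for a list that does support SMP merging. The excerpt below is an abridged sketch of the fetched definitions, rendered as YAML for readability; the exact values are best confirmed against `kustomize openapi fetch` output.

```yaml
# Abridged sketch of the upstream definitions (not a verbatim dump).
definitions:
  # PodSpec.volumes carries the patch extensions, so an SMP merges list entries by "name":
  io.k8s.api.core.v1.PodSpec:
    properties:
      volumes:
        type: array
        items:
          $ref: '#/definitions/io.k8s.api.core.v1.Volume'
        x-kubernetes-patch-merge-key: name
        x-kubernetes-patch-strategy: merge,retainKeys
  # ...whereas ConfigMapVolumeSource.items has no such extensions, so an SMP
  # replaces the whole list instead of merging into it:
  io.k8s.api.core.v1.ConfigMapVolumeSource:
    properties:
      items:
        type: array
        items:
          $ref: '#/definitions/io.k8s.api.core.v1.KeyToPath'
```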
Can you accomplish the motivating task without this feature, and if so, how?
Not really.
What other solutions have you considered?
JSON patches: as both volumeClaimTemplates and configMap volume items are lists of maps, one would need to rely on list order (see the sketch below).
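Concretely, the JSON 6902 workaround has to address the list entry by index, which is exactly the brittleness being described. A sketch against the StatefulSet example below, where the `/volumes/0/` index is the fragile part:

```yaml
# kustomization.yaml — JSON 6902 workaround sketch; breaks if the volume order changes
patches:
- target:
    group: apps
    version: v1
    kind: StatefulSet
    name: ss
  patch: |-
    - op: add
      path: /spec/template/spec/volumes/0/configMap/items/-
      value:
        key: ll
        path: ll
```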
Anything else we should know?
An example:
```yaml
# sts.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  selector:
    matchLabels:
      ss: ss
  template:
    metadata:
      labels:
        ss: ss
    spec:
      volumes:
      - name: config-volume
        configMap:
          name: config
          items:
          - key: foo
            path: foo
          - key: bar
            path: bar
      containers:
      - name: foo
        image: foo
  serviceName: sts
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
```
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- sts.yml
patches:
- patch: |-
    kind: StatefulSet
    apiVersion: apps/v1
    metadata:
      name: ss
    spec:
      template:
        spec:
          volumes:
          - name: config-volume
            configMap:
              items:
              - key: ll
                path: ll
      volumeClaimTemplates:
      - apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: pvc
        spec:
          storageClassName: thin
```
With

```console
$ kustomize version
v5.0.1
$ kustomize build .
```

this results in:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  selector:
    matchLabels:
      ss: ss
  serviceName: sts
  template:
    metadata:
      labels:
        ss: ss
    spec:
      containers:
      - image: foo
        name: foo
      volumes:
      - configMap:
          items:
          - key: ll
            path: ll
          name: config
        name: config-volume
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc
    spec:
      storageClassName: thin
```
while my expected output would be:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  selector:
    matchLabels:
      ss: ss
  serviceName: sts
  template:
    metadata:
      labels:
        ss: ss
    spec:
      containers:
      - image: foo
        name: foo
      volumes:
      - configMap:
          items:
          - key: foo
            path: foo
          - key: bar
            path: bar
          - key: ll
            path: ll
          name: config
        name: config-volume
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: thin
```
Feature ownership
- [X] I am interested in contributing this feature myself! 🎉
I think allowing custom merge keys easily is definitely something we want to support, but we will have to look at it holistically with https://github.com/kubernetes-sigs/kustomize/issues/3944 and https://github.com/kubernetes-sigs/kustomize/issues/3945.
We are currently working on a roadmap for the rest of the year, and those two issues are among the higher priorities. I will add a note that the design for those issues should address this one as well.
/triage accepted
/priority important long-term
/assign
@natasha41575: The label(s) priority/important, priority/long-term cannot be applied, because the repository doesn't have them.
FWIW, `replacements` don't seem to have any effect on volumeClaimTemplates either.
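To illustrate, this is the sort of replacement that appears to have no effect; the ConfigMap source object and the index-based fieldPath are illustrative, not taken from the original report:

```yaml
# kustomization.yaml — sketch only; the ConfigMap source is hypothetical
replacements:
- source:
    kind: ConfigMap
    name: storage-config          # hypothetical source object
    fieldPath: data.storageClassName
  targets:
  - select:
      kind: StatefulSet
      name: ss
    fieldPaths:
    - spec.volumeClaimTemplates.0.spec.storageClassName
```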
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with `/triage accepted` (org members only)
- Close this issue with `/close`
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
I had to do this recently for an operator's custom resource, and while exporting a schema with `kustomize openapi fetch` and plumbing it in using `openapi:` worked, the experience had some pitfalls:

- Filtering the output of `kustomize openapi fetch` is a little awkward, as it includes every single schema definition in your entire cluster (read: upwards of 1000 in this case). For posterity, I use a command like this:

  ```sh
  kustomize openapi fetch --format json \
    | jq --arg resource com.example.CustomResource '.definitions |= with_entries(select(.key == $resource))'
  ```

- The resulting JSON, even for the singular resource I was interested in, clocked in at 175K lines and weighed over 7 MB!
- This resource didn't actually define the necessary merge keys, so I had to hand-edit the (175K-line) JSON to add the appropriate properties (see the sketch after this list).
- This is going to be a foot-gun the next time we e.g. upgrade the CRD/resources and patching lists silently reverts to the old behavior... 😰
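For anyone hitting the same thing, the wiring looks roughly like this; the file names are illustrative, and the hand edit itself consists of adding the patch extensions to the relevant list property inside the fetched definitions:

```yaml
# kustomization.yaml — plumbing the hand-edited schema in via the openapi field.
# The hand edit adds, to each list property that should merge:
#   x-kubernetes-patch-merge-key: <key field, e.g. name>
#   x-kubernetes-patch-strategy: merge
openapi:
  path: schema-with-merge-keys.json   # illustrative file name
resources:
- custom-resource.yml                 # illustrative file name
```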
It would be very helpful to be able to minimally configure the relevant paths for merging. Thank you!
Thinking on the hypothetical configuration format proposed above, it might additionally be convenient to be able to specify multiple merge paths for a given resource type (as some resources are going to have a lot of them), in order to reduce duplication/verbosity:
```yaml
# merge-keys.yml
mergeKeys:
- resource: io.k8s.api.apps.v1.Deployment
  paths:
  - path: spec/template/spec/containers
    key: name
  - path: spec/template/spec/initContainers
    key: name
  - path: spec/template/spec/containers/env
    key: name
```

...etc., you get the idea.
/lifecycle stale
/lifecycle rotten
I would also like to be able to use SMPs on ConfigMapVolumeSource's items (should the bug be fixed upstream instead?)
/remove-lifecycle rotten