Support patching a JSON/YAML value inside a ConfigMap
Hi all! I am reopening an old issue with a lot of likes; I think this feature is really needed in a lot of scenarios: https://github.com/kubernetes-sigs/kustomize/issues/680
Problem Statement
Kustomize has the ability to perform powerful patches (JSON Patch, strategic merge patch) on Kubernetes resources (YAML objects with apiVersion, kind, and metadata.name keys). This enables a powerful workflow where manifest consumers are decoupled from manifest producers and keep their changes in separate files.
It is very common for applications to declare their configuration as a JSON or YAML structure inside a ConfigMap key. However, none of kustomize's powerful patching abilities apply to the JSON inside that key, meaning that consumers must restate the entire configuration instead of just patching the values they want.
This is especially frustrating because it hinders upgrades. For every minor change in the producer's config, the consumer must copy the config again and reapply their changes on top. It therefore violates kustomize's principle of keeping consumer changes separate from the producer's files via patches.
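To make this concrete, here is a minimal example (resource names and values are illustrative, not from any real project). A producer ships a ConfigMap whose key holds an embedded YAML document; to change a single field inside that document today, the consumer's patch has to restate the whole string, because patching stops at the string boundary:

# base: ConfigMap shipped by the producer
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.yaml: |
    server:
      port: 8080
      logLevel: info

# overlay: a strategic merge patch can only replace data."config.yaml" wholesale,
# so even a one-line change to logLevel repeats the entire embedded document
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.yaml: |
    server:
      port: 8080
      logLevel: debug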
Proposed Solution
I am opening this issue to discuss how we could apply kustomize's existing powerful patching mechanisms to configmap JSON / YAML values. This would greatly enhance many peoples' workflows, as is evident from the activity in https://github.com/kubernetes-sigs/kustomize/issues/680.
@monopole @shell32-natsu @pwittrock what do you think? This is a long-standing problem, so maybe you already have some thoughts on this. Since we are invested in using kustomize, both at Arrikto and in the Kubeflow community, we'd be happy to put in the work required for such a feature.
cc @KnVerey
Would it be possible to break a JSON document down into keys and values? For example:
// base configuration
{
  "Configuration": {
    "Property1": "Value1",
    "Property2": "Value2"
  },
  "list": ["item0", "item1"]
}
// overlay
{
  "Configuration": {
    "Property1": "OverrideValue"
  }
}
Base Configuration
Configuration:Property1=Value1
Configuration:Property2=Value2
Configuration:list:0=item0
Configuration:list:1=item1
Overlay
Configuration:Property1=OverrideValue
Final Result
Configuration:Property1=OverrideValue
Configuration:Property2=Value2
Then generate a JSON ConfigMap for the configuration based on this?
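For completeness, regenerating the document from the merged keys would yield the following, assuming keys absent from the overlay (including the list) are carried over unchanged:

{
  "Configuration": {
    "Property1": "OverrideValue",
    "Property2": "Value2"
  },
  "list": ["item0", "item1"]
}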
Is there an update to this?
Interested as well
This would be very useful, and kubectl edit already provides an example of this usage: it shows a structured value rather than a string when the value is a valid YAML string.
Patching data within ConfigMaps would remove the last reason for templating.
This would be perfect for using kustomize as a helm post-renderer. Many helm charts create a ConfigMap with a YAML file in it. Today it is not possible to patch that YAML file without replacing the whole file, thereby overwriting the file created by the helm chart. If I understand it correctly, this proposal would enable that.
As @monopole mentioned here, the maintainer team is open to accepting this feature as long as it is explicitly triggered and the edit it performs is fully structured. However, we do not have the bandwidth to implement it ourselves.
@monopole previously requested a full KEP, but we now have a brand-new, lighter-weight process for aligning on Kustomize features that are large and/or particularly contentious before time is spent on implementation: a mini in-repo KEP. If someone following this issue still feels strongly about it and is interested in contributing an implementation, please create an in-repo KEP with your proposal.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
A proposal for this feature has been submitted as a KEP: https://github.com/kubernetes-sigs/kustomize/pull/4558
This is a common need whenever someone has operators whose CRDs don't fully model their configuration. For example, the OpenTelemetry Collector has a resource

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector

but the bulk of its configuration lives in

spec:
  config: |
    ...inline yaml here ...

which cannot easily be modified by Kustomize. The collector does support environment variable substitution, but that is quite limited.
Ideally everyone would just use proper CRDs, but in reality...
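As a concrete illustration (the field layout follows the v1alpha1 CRD; the collector config values here are made up), this is roughly what such a resource looks like. Any kustomize patch today can only replace the whole spec.config string rather than a single value inside it:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc: {}
    exporters:
      logging: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]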
Hi,
Is there any update on this feature request (inline YAML support for ConfigMaps)?
Is there any update on this? This would be a very useful feature for avoiding repeated configuration.
I am still looking for this feature to be available.
Could be helpful for Secrets too.
Can we make this happen?
This would also be very helpful for overriding specific fields in the spec.source.helm.values of ArgoCD's Application CRD, for example when using a shared reusable base of Application manifests.
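For reference, spec.source.helm.values in the Application CRD is a plain string (the chart and values below are illustrative), so an overlay that only wants to change one value currently has to restate the whole block:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
spec:
  project: default
  source:
    repoURL: https://example.com/charts
    chart: example-chart
    targetRevision: 1.0.0
    helm:
      values: |
        replicaCount: 2
        image:
          tag: v1.2.3
  destination:
    server: https://kubernetes.default.svc
    namespace: example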
Is this now covered by https://github.com/kubernetes-sigs/kustomize/issues/4517?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".