[Feature request] Replace values inside structured data with the replacements feature.
Is your feature request related to a problem? Please describe.
It is difficult to overlay values when structured data is embedded inside a Kubernetes YAML field value, for example a YAML- or JSON-formatted value in ConfigMap data, or a JSON-formatted value in annotations (a pattern used by many projects, I think).
Therefore I propose adding options to the replacements feature for replacing values inside structured data.
I think this could serve as an alternative to the vars feature in most use cases.
Example (JSON-formatted value in ConfigMap data)
apiVersion: v1
kind: ConfigMap
metadata:
  name: jsoned-configmap
data:
  config.json: |-
    {"config": {
      "id": "42",
      "hostname": "REPLACE_TARGET"
    }}
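For context, a sketch of why overlaying this today is awkward: a strategic merge patch can only replace the whole config.json string, so the overlay has to restate the entire JSON blob just to change one field. The overlay layout (../base) and the hostname prod.example.com below are made up for illustration.

# Overlay kustomization.yaml (sketch): the whole JSON value must be duplicated
# just to change "hostname".
resources:
- ../base
patches:
- patch: |-
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: jsoned-configmap
    data:
      config.json: |-
        {"config": {
          "id": "42",
          "hostname": "prod.example.com"
        }}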
Example (JSON-formatted value in annotations; an illustrative sketch follows the links below)
- https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#unique_backendconfig_per_service_port
- https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/#listen-ports
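To make the annotation case concrete, here is an illustrative sketch based on the two documents linked above. The resource names, BackendConfig name, and port numbers are made up; only the annotation keys and value formats come from those docs.

apiVersion: v1
kind: Service
metadata:
  name: example-service
  annotations:
    # GKE: JSON value mapping Service ports to BackendConfig names (illustrative values)
    cloud.google.com/backend-config: '{"ports": {"http": "example-backendconfig"}}'
spec:
  selector:
    app: example
  ports:
  - name: http
    port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # AWS Load Balancer Controller: JSON array of listener ports (illustrative values)
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: example-service
      port:
        number: 80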
Describe the solution you'd like
First, add format and formatPath parameters to the replacements options.
Next, parse the value in the Kubernetes YAML field according to format, and locate the target inside it using formatPath.
Finally, execute the replacement at the position identified by formatPath.
(Please see the Proposal config in the "Additional context" section.)
Describe alternatives you've considered
I tried using the delimiter: '"' option on replacements to parse the JSON.
I think I could resolve my problem with this approach, but it is very hard; see the sketch below.
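A sketch of what that workaround looks like against the jsoned-configmap example above, reusing the source-configmap/HOSTNAME source from the proposal config below. The index is counted by hand from the quote characters in the JSON string, so it breaks as soon as the formatting or key order changes.

replacements:
- source:
    kind: ConfigMap
    name: source-configmap     # defined in the proposal config below
    fieldPath: data.HOSTNAME
  targets:
  - select:
      kind: ConfigMap
      name: jsoned-configmap
    fieldPaths:
    - data.config\.json
    options:
      delimiter: '"'
      index: 9                 # the 10th quote-delimited segment is REPLACE_TARGET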
Additional context
Proposal config
source
apiVersion: v1
kind: ConfigMap
metadata:
  name: source-configmap
data:
  HOSTNAME: www.example.com
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: target-configmap
data:
  config.json: |-
    {"config": {
      "id": "42",
      "hostname": "REPLACE_TARGET_HOSTNAME"
    }}
replacement
replacements:
- source:
    kind: ConfigMap
    name: source-configmap
    fieldPath: data.HOSTNAME
  targets:
  - select:
      kind: ConfigMap
      name: target-configmap
    fieldPaths:
    - data.config\.json
    options:
      format: 'json'
      formatPath: '/config/hostname'
expected
apiVersion: v1
kind: ConfigMap
metadata:
  name: source-configmap
data:
  HOSTNAME: www.example.com
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: target-configmap
data:
  config.json: '{"config":{"hostname":"www.example.com","id":"42"}}'
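For completeness, a sketch of how the same proposed options could extend to a YAML-formatted value. This is an extrapolation of the proposal above, not an existing kustomize option, and the target ConfigMap name and key are made up.

replacements:
- source:
    kind: ConfigMap
    name: source-configmap
    fieldPath: data.HOSTNAME
  targets:
  - select:
      kind: ConfigMap
      name: yaml-target-configmap    # hypothetical ConfigMap with a YAML-formatted value
    fieldPaths:
    - data.config\.yaml
    options:
      format: 'yaml'                 # proposed: parse the field value as YAML
      formatPath: '/config/hostname' # proposed: path into the parsed structure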
@koba1t: This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
My implementation is here: https://github.com/kubernetes-sigs/kustomize/pull/4518
Thank you @koba1t for the feature request and the implementation. We discussed this in the kustomize bug scrub, and while we are leaning towards accepting some version of this feature, we noted that a similar issue was filed for patches: https://github.com/kubernetes-sigs/kustomize/issues/3787. That issue has a long discussion of what this could look like for patches, and if we want to do something similar for replacements, we believe they should have a similar UX.
Because this is a major feature, it would need to be submitted as a mini in-repo KEP for further discussion, so that we can be very clear about what we are supporting. In that KEP, we should discuss the UX for both patches and replacements.
Please let us know if you have any questions about the process.
/triage under-consideration
Hi @natasha41575,
I have written a mini in-repo KEP and opened PR https://github.com/kubernetes-sigs/kustomize/pull/4558.
This is my first proposal; could you give me some feedback? I want to improve the proposal document.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/triage accepted
@koba1t I think we are inclined to accept the feature; we just need to take some time to get your KEP through. I'll see if I can find time to look at it again next week.
Sending an update here that the KEP is merged, so please feel free to begin implementation.
Thanks, Natasha!
@koba1t any updates?
This is a very cool feature. It would be really helpful for managing Kuberay configurations. Looking forward to it.
@natasha41575, @koba1t any updates on this? This is much needed to fully embrace replacements.
Hi @undr-rowr, I'm working on this now in the PR below: https://github.com/kubernetes-sigs/kustomize/pull/5679