Replacement filter documentation for array creation
Describe the bug
Using a replacement filter with the create option set to true, one can create an arbitrarily deep chain of maps down to the point where the target value is set. However, the tool cannot create arrays or append to existing arrays.
Files that can reproduce the issue
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resources.yaml
replacements:
- source:
    kind: Pod
    name: pod
    fieldPath: spec.containers.0.image
  targets:
  - select:
      name: deploy1
    fieldPaths:
    - spec.template.spec.containers.0.image
    options:
      create: true
resources.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
  - image: busybox
    name: myapp-container
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy1
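To reproduce, place both files in the same directory and run:
kustomize build .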
Expected output
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
  - image: busybox
    name: myapp-container
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy1
spec:
  template:
    spec:
      containers:
      - image: busybox
Actual output
Error: wrong Node Kind for spec.template.spec.containers expected: SequenceNode was MappingNode: value: {{}}
Kustomize version
Built from master branch
git describe: kustomize/v4.5.2-25-gf67dd5bbb
Platform
Linux
@lack: This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Did some further digging. It turns out that if the intermediate array element is specified with a lookup value like .[name=foo]., the intermediate array is created just fine, with the lookup key and value added to the intermediate map. Using a numerical index like .0., a wildcard .*., or a hyphen .-. is not allowed for intermediate creation.
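For example, with the repro above, a sketch of a fieldPath that does create the intermediate array (reusing the container name myapp-container purely for illustration; any lookup key/value would do):

fieldPaths:
- spec.template.spec.containers.[name=myapp-container].image

With create: true this produces a Deployment like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy1
spec:
  template:
    spec:
      containers:
      - name: myapp-container
        image: busybox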
So maybe this is more of a doc bug? If the docs on the replacements filter had better examples of complex creation, I'd be happy.
For example:
- create a field with an intermediate array entry:
  foo.bar.[key=value].intermediate.leaf
  foo:
    bar:
    - key: value
      intermediate:
        leaf: scalarSourceValue
- append a scalar to an array:
  foo.bar.[=placeholder]
  foo:
    bar:
    - scalarSourceValue
- append a copied structure to an array:
  foo.bar.[=placeholder]
  foo:
    bar:
    - sourceKey1: value1
      sourceKey2: value2
- create a new single-field structure in an array:
  foo.bar.[newkey=].newkey
  foo:
    bar:
    - newkey: scalarSourceValue
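For completeness, here is a sketch of how the first example above could be wired into a full replacement (the Widget kind, the resource names, and the data.value source field are invented for illustration):

replacements:
- source:
    kind: Widget
    name: source-widget
    fieldPath: data.value
  targets:
  - select:
      kind: Widget
      name: target-widget
    fieldPaths:
    - foo.bar.[key=value].intermediate.leaf
    options:
      create: true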
Thank you for all the examples. Let's mark this as a documentation issue for now, and if a use case comes up where targeting by index is required, we can reconsider supporting that specifically.
/kind documentation
/rename Replacement filter documentation for array creation
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.