Documented wildcard matching for replacements is broken
What happened?
I am using a glob pattern on my ServiceMonitor endpoints (a prometheus-operator construct) and I cannot apply port 8080 to all of them -- and I have tons of them. I want to apply the port number (and relabelings, actually) to every endpoint, but the replacements construct, as documented, does not work.
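For context, the replacement I actually want to write against the ServiceMonitors looks roughly like this (a sketch only; the ConfigMap name monitoring-config, the data key port, and the single-key selector are made up for illustration):

replacements:
  - source:
      kind: ConfigMap
      name: monitoring-config   # hypothetical ConfigMap carrying the port value, e.g. port=8080
      fieldPath: data.port
    targets:
      - select:
          kind: ServiceMonitor   # apply to every matched ServiceMonitor
        fieldPaths:
          # intent: set the port on every entry of the endpoints list via the documented wildcard
          - spec.endpoints.*.port
        options:
          create: true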
Here's my example, not using a ServiceMonitor, but just a plain-old Pod with environment variables.
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: mynamespace
resources:
  - podexample.yaml
configMapGenerator:
  - name: example
    literals:
      - username=macetw
replacements:
  - source:
      kind: ConfigMap
      name: example
      fieldPath: data.username
    targets:
      - select:
          kind: Pod
          name: mypod
        fieldPaths:
          - spec.containers.[name=mycontainer].env.*.value
        options:
          create: true
podexample.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespace
spec:
  containers:
    - name: mycontainer
      image: docker.io/alpine:latest
      command:
        - sleep
        - inf
      env:
        - name: VARIABLE1
        - name: VARIABLE2
And my result:
$ kubectl apply -k . --dry-run -o yaml
W1011 14:14:54.782699 640700 helpers.go:660] --dry-run is deprecated and can be replaced with --dry-run=client.
error: wrong Node Kind for spec.containers.env expected: MappingNode was SequenceNode: value: {- name: USER_NAME
I am using version 4.5.4. This feature is included in that release.
https://github.com/kubernetes-sigs/kustomize/pull/4424
This is from the docs, here: https://github.com/kubernetes-sigs/cli-experimental/blob/master/site/content/en/references/kustomize/kustomization/replacements/_index.md#index
Docs say:
This will target every element in the list.
... but it does not work this way.
What did you expect to happen?
I expect the value field to be set on every entry of the env list, using the replacement value from the source. Because options: { create: true } is set, those value fields should be created where they do not yet exist.
How can we reproduce it (as minimally and precisely as possible)?
kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - podexample.yaml
configMapGenerator:
  - name: example
    literals:
      - username=macetw
replacements:
  - source:
      kind: ConfigMap
      name: example
      fieldPath: data.username
    targets:
      - select:
          kind: Pod
          name: mypod
        fieldPaths:
          - spec.containers.[name=mycontainer].env.*.value
        options:
          create: true
podexample.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: docker.io/alpine:latest
      command:
        - sleep
        - inf
      env:
        - name: VARIABLE1
        - name: VARIABLE2
kubectl apply -k . --dry-run -o yaml
Expected output
- apiVersion: v1
  kind: Pod
  metadata:
    name: mypod
  spec:
    containers:
      - command:
          - sleep
          - inf
        env:
          - name: VARIABLE1
            value: macetw
          - name: VARIABLE2
            value: macetw
        image: docker.io/alpine:latest
        name: mycontainer
Actual output
$ kubectl apply -k . --dry-run -o yaml
W1011 14:37:46.838210 644735 helpers.go:660] --dry-run is deprecated and can be replaced with --dry-run=client.
error: wrong Node Kind for spec.containers.env expected: MappingNode was SequenceNode: value: {- name: VARIABLE1
- name: VARIABLE2}
$ echo $?
0
Kustomize version
4.5.4
Operating system
Linux
This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This seems to be fixed in newer versions (5.0.4, for example).
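For anyone verifying the fix, a quick check with the standalone kustomize CLI (assuming a 5.x binary on PATH, run from the directory containing the reproduction files above) should render both value fields as in the expected output:

kustomize version
kustomize build .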
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".