kustomize
transformers: PrefixSuffixTransformer applied partially in referenced resources
Describe the bug
Using a PrefixSuffixTransformer via the transformers field to add a prefix to ConfigMaps/Secrets works correctly when build is invoked directly on the kustomization.yaml that uses it, but only partially when build is invoked on a kustomization.yaml that references, via resources, the kustomization.yaml that uses the transformer.
"Partially" means that the prefix is added to the ConfigMap definition, but not to the references to the ConfigMap in a Deployment.
This means that you can end up with something like this:
apiVersion: v1
data:
  param: value
kind: ConfigMap
metadata:
  name: bar-config-27ch7b2kbt
  namespace: stage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bar
  namespace: stage
spec:
  template:
    spec:
      containers:
      - env:
        - name: SOME_PARAM
          valueFrom:
            configMapKeyRef:
              key: param
              name: config
        image: hashicorp/http-echo:1.1-devel
        name: bar
where the config ConfigMap is named bar-config-27ch7b2kbt when it is defined, but is still referenced as just config in the Deployment.
Files that can reproduce the issue
(I have also attached them to the issue.)
./bar/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bar
spec:
  template:
    spec:
      containers:
      - name: bar
        image: hashicorp/http-echo:1.1-devel
        env:
        - name: "SOME_PARAM"
          valueFrom:
            configMapKeyRef:
              name: config
              key: param
./bar/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: stage
resources:
- deployment.yaml
configMapGenerator:
- name: config
  literals:
  - param=value
transformers:
- kustomizeconfig.yaml
./bar/kustomizeconfig.yaml
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
  name: configSecretsPrefixer
prefix: "bar-"
fieldSpecs:
- kind: ConfigMap
  path: metadata/name
./foo/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  template:
    metadata:
      labels:
        app: foo
        version: 1.0.0
    spec:
      containers:
      - name: foo
        image: hashicorp/http-echo
        env:
        - name: "A_PARAM"
          valueFrom:
            configMapKeyRef:
              name: config
              key: param
./foo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: stage
resources:
- deployment.yaml
configMapGenerator:
- name: config
  literals:
  - param=somevalue
transformers:
- kustomizeconfig.yaml
./foo/kustomizeconfig.yaml
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
  name: configSecretsPrefixer
prefix: "foo-"
fieldSpecs:
- kind: ConfigMap
  path: metadata/name
./kustomization.yaml
resources:
- bar
- foo
If you run kustomize build . you can see the bug; if you run kustomize build foo or kustomize build bar you don't see any bug.
Expected output
apiVersion: v1
data:
  param: value
kind: ConfigMap
metadata:
  name: bar-config-27ch7b2kbt
  namespace: stage
---
apiVersion: v1
data:
  param: somevalue
kind: ConfigMap
metadata:
  name: foo-config-ffm984ghct
  namespace: stage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bar
  namespace: stage
spec:
  template:
    spec:
      containers:
      - env:
        - name: SOME_PARAM
          valueFrom:
            configMapKeyRef:
              key: param
              name: bar-config-27ch7b2kbt
        image: hashicorp/http-echo:1.1-devel
        name: bar
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  namespace: stage
spec:
  template:
    metadata:
      labels:
        app: foo
        version: 1.0.0
    spec:
      containers:
      - env:
        - name: A_PARAM
          valueFrom:
            configMapKeyRef:
              key: param
              name: foo-config-ffm984ghct
        image: hashicorp/http-echo
        name: foo
Actual output
apiVersion: v1
data:
  param: value
kind: ConfigMap
metadata:
  name: bar-config-27ch7b2kbt
  namespace: stage
---
apiVersion: v1
data:
  param: somevalue
kind: ConfigMap
metadata:
  name: foo-config-ffm984ghct
  namespace: stage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bar
  namespace: stage
spec:
  template:
    spec:
      containers:
      - env:
        - name: SOME_PARAM
          valueFrom:
            configMapKeyRef:
              key: param
              name: config
        image: hashicorp/http-echo:1.1-devel
        name: bar
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  namespace: stage
spec:
  template:
    metadata:
      labels:
        app: foo
        version: 1.0.0
    spec:
      containers:
      - env:
        - name: A_PARAM
          valueFrom:
            configMapKeyRef:
              key: param
              name: config
        image: hashicorp/http-echo
        name: foo
Kustomize version
{Version:kustomize/v4.0.5 GitCommit:9e8e7a7fe99ec9fbf801463e8607928322fc5245 BuildDate:2021-03-08T20:53:03Z GoOs:windows GoArch:amd64}
Platform
Windows
/assign @Serializator
In (nameref.Filter).selectReferral the candidates are filtered ("sieved") based on a set of criteria.
https://github.com/kubernetes-sigs/kustomize/blob/master/api/filters/nameref/nameref.go#L308-L333
When performing e.g. kustomize build bar/, there is only one possible candidate left, which is returned on line 317.
When performing kustomize build ., there are two possible candidates left, which are basically equal. nameref.prefixSuffixEquals is then used, which results in no candidates being left and thus a return of nil at line 324.
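To make that sieve concrete, here is a loose, hypothetical Go sketch of the behaviour described above. It is not the real nameref code: candidate, recordedPrefixes and this simplified selectReferral are invented names, and the sketch resolves the ambiguity by ending up with several indistinguishable candidates, whereas the real prefixSuffixEquals check ends up with none; either way nothing is selected and the reference stays "config".

```go
// Hypothetical illustration only -- not the actual kustomize filter.
package main

import "fmt"

// candidate stands in for a generated ConfigMap that a name reference might
// point at. recordedPrefixes models prefixes kustomize tracks when they come
// from namePrefix; a prefix applied by a PrefixSuffixTransformer listed under
// `transformers:` rewrites metadata/name but is not tracked this way.
type candidate struct {
	currentName      string
	originalName     string
	recordedPrefixes []string
}

// selectReferral sketches the two-stage sieve: keep candidates whose original
// name matches the reference; if exactly one is left, use it (the
// `kustomize build bar/` case). Otherwise compare recorded prefixes against
// the referrer's; if that still does not single out exactly one candidate,
// return nil and leave the reference untouched.
func selectReferral(ref string, referrerPrefixes []string, all []candidate) *candidate {
	var byName []candidate
	for _, c := range all {
		if c.originalName == ref {
			byName = append(byName, c)
		}
	}
	if len(byName) == 1 {
		return &byName[0]
	}
	var byFix []candidate
	for _, c := range byName {
		if equal(c.recordedPrefixes, referrerPrefixes) {
			byFix = append(byFix, c)
		}
	}
	if len(byFix) == 1 {
		return &byFix[0]
	}
	return nil // zero or several indistinguishable candidates: do nothing
}

func equal(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func main() {
	bar := candidate{currentName: "bar-config-27ch7b2kbt", originalName: "config"}
	foo := candidate{currentName: "foo-config-ffm984ghct", originalName: "config"}

	// kustomize build bar/ : a single candidate survives the name sieve.
	fmt.Println(selectReferral("config", nil, []candidate{bar}).currentName)

	// kustomize build . : two candidates share the original name "config" and
	// neither records the "bar-"/"foo-" prefix, so the sieve cannot pick one.
	fmt.Println(selectReferral("config", nil, []candidate{bar, foo})) // <nil>
}
```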
@natasha41575, I feel like there needs to be a filter / comparison of the origin as well (to check from which Kustomize manifest the generated resource originated and if that matches), or am I missing something? Trying to figure out the best approach to solve the issue.
@Serializator I haven’t been able to look at this issue in much depth, but using origin will not work; the origin is only tracked if the build option is set. In most cases the origin will be nil.
@natasha41575, I noticed it being nil, thank you for the explanation as to why!
/triage accepted
Any update on this one? Thanks
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale