Secret hash is not taken into account by spec.template.spec.volumes.azureFile.secretName
When a Deployment's `volumes` section uses an `azureFile` definition, the `secretName` reference inside it does not take into account the secret hash suffix generated by Kustomize.
Files that can reproduce the issue
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
secretGenerator:
- name: secret-credentials
envs:
- secret-credentials.env
secret-credentials.env
azurestorageaccountname=storage-account-name
azurestorageaccountkey=storage-account-key
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: test-deployment
template:
metadata:
labels:
app.kubernetes.io/name: test-deployment
spec:
containers:
- name: test-deployment
image: busybox
imagePullPolicy: IfNotPresent
envFrom:
- secretRef:
name: secret-credentials
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: azure-share
mountPath: /test-share
volumes:
- name: azure-share
azureFile:
secretName: secret-credentials
shareName: test-share
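Running `kustomize build .` from the directory containing these three files is enough to reproduce the behaviour shown in the Actual output section below.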
Expected output
The secret referenced in both the `envFrom` and `secretName` properties is expected to include the hash suffix created by the secret generator:
apiVersion: v1
data:
azurestorageaccountkey: c3RvcmFnZS1hY2NvdW50LWtleQ==
azurestorageaccountname: c3RvcmFnZS1hY2NvdW50LW5hbWU=
kind: Secret
metadata:
name: secret-credentials-8b6bkhgkkb
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
spec:
[...]
template:
[...]
spec:
containers:
- name: test-deployment
envFrom:
- secretRef:
name: secret-credentials-8b6bkhgkkb
[...]
volumes:
- azureFile:
secretName: secret-credentials-8b6bkhgkkb
[...]
Actual output
Here follows the whole output of the `kustomize build` command. Note that `spec.template.spec.volumes.azureFile.secretName` does not reference the actual secret name because it is missing the generated hash suffix:
apiVersion: v1
data:
azurestorageaccountkey: c3RvcmFnZS1hY2NvdW50LWtleQ==
azurestorageaccountname: c3RvcmFnZS1hY2NvdW50LW5hbWU=
kind: Secret
metadata:
name: secret-credentials-8b6bkhgkkb
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: test-deployment
template:
metadata:
labels:
app.kubernetes.io/name: test-deployment
spec:
containers:
- envFrom:
- secretRef:
name: secret-credentials-8b6bkhgkkb
image: busybox
imagePullPolicy: IfNotPresent
name: test-deployment
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- mountPath: /test-share
name: azure-share
volumes:
- azureFile:
secretName: secret-credentials
shareName: test-share
name: azure-share
Kustomize version
Reproduced with both kustomize v4.0.5 and v4.5.2:
{Version:kustomize/v4.0.5 GitCommit:9e8e7a7fe99ec9fbf801463e8607928322fc5245 BuildDate:2021-03-08T20:53:03Z GoOs:darwin GoArch:amd64}
{Version:kustomize/v4.5.2 GitCommit:9091919699baf1c5a5bf71b32ca73a993e98088b BuildDate:2022-02-09T23:26:42Z GoOs:darwin GoArch:amd64}
Platform
Both Linux and macOS
We believe this is happening because the field specs used by the name reference transformer currently do not include `spec.template.spec.volumes.azureFile.secretName`
as a possible location for a Secret name. This is the file that needs to be updated: https://github.com/kubernetes-sigs/kustomize/blob/master/api/konfig/builtinpluginconsts/namereference.go#L134-L271
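As a rough sketch of the missing piece (illustrative only, not a tested patch; the exact set of workload kinds and the placement within the file are assumptions), the existing `kind: Secret` block of that built-in nameReference configuration would need additional field specs along these lines:
nameReference:
- kind: Secret
  version: v1
  fieldSpecs:
  # ...existing entries such as spec/volumes/secret/secretName...
  - path: spec/volumes/azureFile/secretName
    version: v1
    kind: Pod
  - path: spec/template/spec/volumes/azureFile/secretName
    kind: Deployment
Until the built-in configuration is extended upstream, the same field specs can also be supplied from a kustomization through the `configurations` field as a workaround.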
/triage accepted
/kind bug
I am seeing this same problem, but specifically for CRDs.
If I have a secretGenerator, any Deployments or StatefulSets that reference the generated secret will have the reference updated with the name suffix hash. However, if a CRD references that secret, it does not get the reference updated.
It is not possible for Kustomize to handle references in CRDs by default. Please see the `crds` and `configurations` features. (Note: we have a longer-term issue to reconcile these with the `openapi` field: https://github.com/kubernetes-sigs/kustomize/issues/3944 and https://github.com/kubernetes-sigs/kustomize/issues/3945.)
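For anyone hitting this with a custom resource, here is a minimal sketch of the `configurations` approach. The `MyApp` kind and its `spec/credentialsSecretName` path are placeholders; substitute the kind and field path of your own CRD.
name-references.yaml
nameReference:
- kind: Secret
  version: v1
  fieldSpecs:
  # Placeholder custom resource field that holds a Secret name.
  - path: spec/credentialsSecretName
    kind: MyApp
  # The same mechanism also works around the azureFile case from this issue.
  - path: spec/template/spec/volumes/azureFile/secretName
    kind: Deployment
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
secretGenerator:
- name: secret-credentials
  envs:
  - secret-credentials.env
configurations:
- name-references.yaml
With the extra nameReference entries in place, `kustomize build` rewrites those fields to the generated secret-credentials-<hash> name as well.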
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this: /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.