nameReference transformer is not updating fields in all configured resource kinds
What happened?
I'm trying to configure the nameReference transformer, via a custom transformer configuration, so that it updates name references in a handful of additional resource kinds. For testing purposes I've configured the nameSuffix transformer to simply append -FOOBAR to all of the resource names.
What did you expect to happen?
I would expect all of the configured references to be updated, but only the one referencing the secret name seems to take effect.
I can't see what I'm doing wrong, and there is no error output or verbosity knob that would help narrow it down. The expected and actual outputs below show the difference.
How can we reproduce it (as minimally and precisely as possible)?
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resources.yaml
secretGenerator:
- name: test
  files:
  - test.pem
  namespace: cert-manager
configurations:
- kustomizeconfig.yaml
generatorOptions:
  disableNameSuffixHash: true
nameSuffix: -FOOBAR
# resources.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-test-sa
  namespace: external-secrets
---
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: secret-store
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-west-1
      auth:
        jwt:
          serviceAccountRef:
            name: my-test-sa
            namespace: external-secrets
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault
  namespace: cert-manager
spec:
  vault:
    caBundleSecretRef:
      name: test
      key: test.pem
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: cert-manager-vault-approle
  namespace: cert-manager
spec:
  secretStoreRef:
    name: secret-store
    kind: ClusterSecretStore
# kustomizeconfig.yaml
nameReference:
- kind: Secret
  fieldSpecs:
  - kind: ClusterIssuer
    group: cert-manager.io
    path: spec/vault/caBundleSecretRef/name
- kind: ServiceAccount
  fieldSpecs:
  - kind: ClusterSecretStore
    group: external-secrets.io
    path: spec/provider/aws/auth/jwt/serviceAccountRef/name
- kind: ClusterSecretStore
  group: external-secrets.io
  fieldSpecs:
  - kind: ExternalSecret
    group: external-secrets.io
    path: spec/secretStoreRef/name
# test.pem
Cg==
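Running kustomize build . in a directory containing the four files above produces the actual output shown further down. The failure can also be isolated with a narrower setup; the sketch below is a reduction I'd expect to show the same behaviour (same resource and file names as above, with the secretGenerator and the Secret reference that does work removed):
# kustomization.yaml (reduced sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resources.yaml
configurations:
- kustomizeconfig.yaml
nameSuffix: -FOOBAR
# kustomizeconfig.yaml (reduced sketch)
nameReference:
- kind: ServiceAccount
  fieldSpecs:
  - kind: ClusterSecretStore
    group: external-secrets.io
    path: spec/provider/aws/auth/jwt/serviceAccountRef/name
Here resources.yaml would contain only the ServiceAccount and ClusterSecretStore documents from the full reproduction; if serviceAccountRef.name still comes out un-suffixed, the problem is independent of the secretGenerator and of the Secret reference that behaves correctly.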
Expected output
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-test-sa-FOOBAR
  namespace: external-secrets
---
apiVersion: v1
data:
  test.pem: Q2c9PQo=
kind: Secret
metadata:
  name: test-FOOBAR
  namespace: cert-manager
type: Opaque
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-FOOBAR
  namespace: cert-manager
spec:
  vault:
    caBundleSecretRef:
      key: test.pem
      name: test-FOOBAR
---
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: secret-store-FOOBAR
spec:
  provider:
    aws:
      auth:
        jwt:
          serviceAccountRef:
            name: my-test-sa-FOOBAR
            namespace: external-secrets
      region: eu-west-1
      service: SecretsManager
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: cert-manager-vault-approle-FOOBAR
  namespace: cert-manager
spec:
  secretStoreRef:
    kind: ClusterSecretStore
    name: secret-store-FOOBAR
Actual output
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-test-sa-FOOBAR
  namespace: external-secrets
---
apiVersion: v1
data:
  test.pem: Q2c9PQo=
kind: Secret
metadata:
  name: test-FOOBAR
  namespace: cert-manager
type: Opaque
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-FOOBAR
  namespace: cert-manager
spec:
  vault:
    caBundleSecretRef:
      key: test.pem
      name: test-FOOBAR
---
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: secret-store-FOOBAR
spec:
  provider:
    aws:
      auth:
        jwt:
          serviceAccountRef:
            name: my-test-sa
            namespace: external-secrets
      region: eu-west-1
      service: SecretsManager
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: cert-manager-vault-approle-FOOBAR
  namespace: cert-manager
spec:
  secretStoreRef:
    kind: ClusterSecretStore
    name: secret-store
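To spell out the difference: only two fields are left un-suffixed in the actual output, and both are targeted by the custom nameReference entries for ServiceAccount and ClusterSecretStore, while the entry whose referenced kind is Secret is applied as expected:
# fields left un-suffixed in the actual output
# ClusterSecretStore/secret-store-FOOBAR
spec:
  provider:
    aws:
      auth:
        jwt:
          serviceAccountRef:
            name: my-test-sa        # expected: my-test-sa-FOOBAR
# ExternalSecret/cert-manager-vault-approle-FOOBAR
spec:
  secretStoreRef:
    name: secret-store              # expected: secret-store-FOOBAR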
Kustomize version
5.3.0
Operating system
Linux
This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/reopen /lifecycle frozen
@stormqueen1990: Reopened this issue.
In response to this:
/reopen /lifecycle frozen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.