kustomize
buildMetadata should be invoked as a transformer and honored in base kustomizations
What happened?
buildMetadata is not propagated to overlays
What did you expect to happen?
buildMetadata is propagated to overlays
How can we reproduce it (as minimally and precisely as possible)?
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
buildMetadata:
- managedByLabel
- originAnnotations
configMapGenerator:
- name: my-configmap
  literals:
  - key=value
# overlay/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
Then run kustomize build overlay/
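The reproduction can be scripted end to end with a short shell session (a sketch; the final step assumes a kustomize 5.x binary is on the PATH):

```shell
# Recreate the minimal reproduction layout from the issue.
mkdir -p base overlay

cat > base/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
buildMetadata:
- managedByLabel
- originAnnotations
configMapGenerator:
- name: my-configmap
  literals:
  - key=value
EOF

cat > overlay/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
EOF

# Build through the overlay; with kustomize 5.1.0 the origin annotation
# and managed-by label requested in the base's buildMetadata are missing.
if command -v kustomize >/dev/null 2>&1; then
  kustomize build overlay/
fi
```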
Expected output
apiVersion: v1
data:
  key: value
kind: ConfigMap
metadata:
  annotations:
    config.kubernetes.io/origin: |
      configuredIn: ../base/kustomization.yaml
      configuredBy:
        apiVersion: builtin
        kind: ConfigMapGenerator
  labels:
    app.kubernetes.io/managed-by: kustomize-v5.1.0
  name: my-configmap-t757gk2bmf
Actual output
apiVersion: v1
data:
  key: value
kind: ConfigMap
metadata:
  name: my-configmap-t757gk2bmf
Kustomize version
5.1.0
Operating system
Linux
/assign
/kind documentation
I believe that buildMetadata is currently in alpha, and for alpha we only honor it when it is specified in the top-level kustomization.
To get it out of alpha and fully support the feature, the buildMetadata field should ideally be honored for resources in base kustomizations as well. In this model, buildMetadata would apply only to resources referenced in that kustomization, i.e. a base kustomization's buildMetadata field should not affect resources defined in other bases.
I think buildMetadata would then have to be implemented as a builtin transformer.
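Until such a transformer-based implementation lands, one possible workaround (my reading of the alpha behavior described above, not an officially documented recommendation) is to repeat the buildMetadata field in the top-level overlay kustomization, since that is the only place it is currently honored:

```yaml
# overlay/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
buildMetadata:
- managedByLabel
- originAnnotations
resources:
- ../base
```

Note this applies the metadata to every resource the overlay emits, not only those from this base, which is exactly the scoping problem the transformer approach would fix.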
/retitle buildMetadata should be invoked as a transformer and honored in base kustomizations
/triage accepted
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.