kustomize
Allow disabling of suffix hashes on merged ConfigMaps generated from ConfigMapGenerators
Is your feature request related to a problem? Please describe.
Say there is a shared, out-of-scope ConfigMapGenerator that gets composed into a given cluster's configuration multiple times: several apps' ConfigMapGenerators merge into the shared generator across separate invocations of kustomize build. We'd like some of the resulting ConfigMaps to have a suffix hash appended to the name, and others not to have one.
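For concreteness, here is a minimal sketch of the layout I have in mind; the paths, names, and literal values are all hypothetical:

```yaml
# shared/kustomization.yaml: the shared, out-of-scope generator
configMapGenerator:
- name: example
  literals:
  - LOG_LEVEL=info
```

```yaml
# app-a/kustomization.yaml: keeps the default suffix hash
resources:
- ../shared
configMapGenerator:
- name: example
  behavior: merge
  literals:
  - APP=app-a
```

```yaml
# app-b/kustomization.yaml: should produce a ConfigMap without a suffix hash
resources:
- ../shared
configMapGenerator:
- name: example
  behavior: merge
  literals:
  - APP=app-b
  options:
    disableNameSuffixHash: true
```

Each app is built on its own, e.g. `kustomize build app-a` and `kustomize build app-b`.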
Describe the solution you'd like
I'd like to be able to specify:

```yaml
- name: example
  behavior: merge
  options:
    disableNameSuffixHash: true
```
and for the resulting ConfigMap to not have a suffix hash appended to the specified name.
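For example, reusing the hypothetical app-b layout sketched above, I would expect `kustomize build app-b` to emit something like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example   # no -<hash> suffix appended
data:
  LOG_LEVEL: info
  APP: app-b
```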
Describe alternatives you've considered
A workable solution would be any way to give some of the ConfigMaps generated from the shared ConfigMapGenerator a suffix hash while leaving it off of others.
Additional context
This sounds reasonable to me. It looks like disableNameSuffixHash is effectively ignored in merge scenarios, because if it is true, that means we don't add an internal suffixing annotation to the resource, and the annotation from the base will be merged in. Wdyt @natasha41575?
/triage under-consideration
> It looks like disableNameSuffixHash is effectively ignored in merge scenarios, because if it is true, that means we don't add an internal suffixing annotation to the resource, and the annotation from the base will be merged in.
That sounds like a bug to me; I'm on board with accepting this issue.
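Concretely, taking the hypothetical app-b example from the issue description, my understanding is that `kustomize build app-b` today still emits something like the following (the hash value here is made up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # the suffix hash from the base generator is still applied,
  # even though the merging generator sets disableNameSuffixHash: true
  name: example-7b2k9f8c6d
data:
  LOG_LEVEL: info
  APP: app-b
```

After the fix, that name should come out as plain `example`.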
/triage accepted
Thanks so much for accepting the change request! I can take a look at making a PR for this, if that works for others.
Yes please, we'd appreciate that! Please be sure to include e2e tests; I suggest adding them in api/krusty/generatormergeandreplace_test.go.
/assign @elisshafer
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Got caught up in some other things since making this issue; I'll have a PR out soon.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/triage resolved
/close
@elisshafer: Closing this issue.
In response to this:
> /triage resolved
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.