namePrefix Configuration is broken
What happened?
I have resources to which I don't want the namePrefix applied, e.g. PriorityClass. I want to exclude them via a configuration.
What did you expect to happen?
I expected the prefix to be skipped only for the PriorityClass. Unfortunately, it is then removed for all current and future resources. To reactivate it, you have to explicitly add, one by one, every resource type that should get the namePrefix. Oddly, when you add a configuration entry where the namePrefix should be applied, the rule does not work with a lowercase kind; you have to write the kind in PascalCase, as it is normally spelled.
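For illustration, a per-kind entry that re-enables the prefix would look roughly like the sketch below. This is an assumption based on the description above and the repro that follows, not a verified configuration; the Service entry and its fields are illustrative only.
# kustomize-config-name-prefix.yaml (sketch: explicitly re-adding the prefix for one kind)
namePrefix:
- path: metadata/name
  version: v1
  kind: Service   # PascalCase; a lowercase "service" entry did not match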
How can we reproduce it (as minimally and precisely as possible)?
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: sta-
resources:
- resources.yaml
configurations:
- kustomize-config-name-prefix.yaml
# resources.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: test
value: 1000000000
globalDefault: false
description: "This priority class is for high prio only!"
---
apiVersion: v1
kind: Service
metadata:
  name: my-lbs
spec:
  ports:
  - name: 5678-8080
    port: 5678
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
# kustomize-config-name-prefix.yaml
namePrefix:
- path: metadata/name
  group: scheduling.k8s.io
  version: v1
  kind: priorityclass # lowercase due to https://stackoverflow.com/questions/64218188/selectively-apply-nameprefix-namesuffix-in-kustomize
  skip: true
Expected output
# resources.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: test
value: 1000000000
globalDefault: false
description: "This priority class is for high prio only!"
---
apiVersion: v1
kind: Service
metadata:
  name: sta-my-lbs
spec:
  ports:
  - name: 5678-8080
    port: 5678
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
Actual output
# resources.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: test
value: 1000000000
globalDefault: false
description: "This priority class is for high prio only!"
---
apiVersion: v1
kind: Service
metadata:
  name: my-lbs
spec:
  ports:
  - name: 5678-8080
    port: 5678
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
Kustomize version
v5.1.0
Operating system
None
I tried this with the same kustomization.yaml file, the only difference being these files:
# resources.yaml
apiVersion: v1
kind: Pod
metadata:
  name: prefix-pod-suffix # modified
spec:
  containers:
  - image: image
    name: pod
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: test
value: 1000000000
globalDefault: false
description: "This priority class is for high prio only!"
---
apiVersion: v1
kind: Service
metadata:
  name: my-lbs
spec:
  ports:
  - name: 5678-8080
    port: 5678
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
# kustomize-config-name-prefix.yaml
namePrefix:
- path: metadata/name
  #group: scheduling.k8s.io
  apiVersion: v1
  kind: Pod # lowercase due to https://stackoverflow.com/questions/64218188/selectively-apply-nameprefix-namesuffix-in-kustomize
  skip: true
I got this result:
apiVersion: v1
kind: Service
metadata:
  name: my-lbs
spec:
  ports:
  - name: 5678-8080
    port: 5678
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
---
apiVersion: scheduling.k8s.io/v1
description: This priority class is for high prio only!
globalDefault: false
kind: PriorityClass
metadata:
  name: test
value: 1000000000
---
apiVersion: v1
kind: Pod
metadata:
  name: addition-prefix-pod-suffix
spec:
  containers:
  - image: image
    name: pod
It seems that, first, the kustomization file skips the namePrefix completely and only picks up what is listed in the configurations option, and second, it ignores the skip statement in the prefix configuration, as the Pod example above shows (I tried the same with Service, with the same result).
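For reference, the Service variant mentioned above would look roughly like this sketch (only the kind differs from the Pod entry; the exact file is an assumption based on the description):
# kustomize-config-name-prefix.yaml (sketch: Service variant)
namePrefix:
- path: metadata/name
  apiVersion: v1
  kind: Service
  skip: true   # reportedly ignored in the same way as for Pod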
When trying with a kustomization.yaml file like this:
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: addition-
resources:
- resources.yaml
Resulting output:
apiVersion: v1
kind: Service
metadata:
  name: addition-my-lbs
spec:
  ports:
  - name: 5678-8080
    port: 5678
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
---
apiVersion: scheduling.k8s.io/v1
description: This priority class is for high prio only!
globalDefault: false
kind: PriorityClass
metadata:
  name: addition-test
value: 1000000000
---
apiVersion: v1
kind: Pod
metadata:
  name: addition-prefix-pod-suffix
spec:
  containers:
  - image: image
    name: pod
I can't find any documentation on this skip: true field. It would be helpful for someone to go through the code or past issues and try to reason about what the behavior of skip: true should be. From there, we can figure out the discrepancy between the expected and actual behavior and triage this issue. It seems that the skip field is buggy in general and we need to fix it.
/assign @ncapps
Hi @natasha41575, I was not able to find any references to a skip: true field for configuration overrides. This test demonstrates that we can select which resource types a name prefix, suffix, or both is applied to. Resource types are skipped if they are not listed in the override configuration. There may be more context that I am not aware of, but a skip: true field does not seem necessary.
Example below
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: foo-
namespace: ns1
resources:
- deployment.yaml
- config.yaml
configurations:
- name-prefix-rules.yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
# config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
# name-prefix-rules.yaml
namePrefix:
- path: metadata/name
  apiVersion: v1
  kind: Deployment
Output
This example shows that the foo- prefix was added only to the Deployment, because Deployment was listed in the namePrefix configuration override. A prefix was not added to the ConfigMap because ConfigMap was not listed in the override.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: ns1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-deployment1
  namespace: ns1
Thank you @ncapps for the thorough investigation! I agree that the skip: true field does not seem necessary. Could you leave a comment on https://github.com/kubernetes-sigs/kustomize/issues/3945 reminding us that the skip: true field is not documented or well understood, and that we will need to address that if we do decide to keep the configurations field?
For this issue, it looks like the alternative configuration in https://github.com/kubernetes-sigs/kustomize/issues/5272#issuecomment-1829195608 that Nick suggested might be sufficient for this use case. @sass1997 Could you give it a try and let us know if it resolves your issue?
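For the original repro, that alternative would look roughly like the sketch below: list only the resource types that should receive the prefix in the override and leave PriorityClass out, so it is skipped without any skip field. This is a sketch adapted from Nick's example above, not a verified configuration; the Service entry is an assumption.
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: sta-
resources:
- resources.yaml
configurations:
- kustomize-config-name-prefix.yaml
# kustomize-config-name-prefix.yaml (sketch)
namePrefix:
- path: metadata/name
  apiVersion: v1
  kind: Service   # only listed kinds get the prefix; PriorityClass is omitted, so it keeps its name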
/triage resolved
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.