
namePrefix Configuration is broken

sass1997 opened this issue 1 year ago · 7 comments

What happened?

I have resources to which I don't want the namePrefix added, e.g. PriorityClass. I want to exclude them via a configuration.

What did you expect to happen?

I expected the prefix to be skipped only for the PriorityClass. Unfortunately, it is then removed for all current and future resources. To reactivate it, you have to explicitly re-add, one by one, every namePrefix rule you want to keep. Oddly, in a configuration entry where the namePrefix should be added, a lowercase kind doesn't work; you have to write the kind in Pascal case, as the API expects it.
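For illustration, this is the Pascal-case form that the matching appears to require (a sketch based on my testing; whether skip: true then actually has any effect is exactly what this issue is about):

# kustomize-config-name-prefix.yaml (Pascal-case variant)
namePrefix:
  - path: metadata/name
    group: scheduling.k8s.io
    version: v1
    kind: PriorityClass
    skip: true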

How can we reproduce it (as minimally and precisely as possible)?

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: sta-
resources:
- resources.yaml
configurations:
- kustomize-config-name-prefix.yaml
# resources.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: test
value: 1000000000
globalDefault: false
description: "This priority class is for high prio only!"
---
apiVersion: v1
kind: Service
metadata:
  name: my-lbs
spec:
  ports:
    - name: 5678-8080
      port: 5678
      protocol: TCP
      targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
# kustomize-config-name-prefix.yaml
namePrefix:
  - path: metadata/name
    group: scheduling.k8s.io
    version: v1
    kind: priorityclass # lowercase due to https://stackoverflow.com/questions/64218188/selectively-apply-nameprefix-namesuffix-in-kustomize
    skip: true

Expected output

# resources.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: test
value: 1000000000
globalDefault: false
description: "This priority class is for high prio only!"
---
apiVersion: v1
kind: Service
metadata:
  name: sta-my-lbs
spec:
  ports:
    - name: 5678-8080
      port: 5678
      protocol: TCP
      targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer

Actual output

# resources.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: test
value: 1000000000
globalDefault: false
description: "This priority class is for high prio only!"
---
apiVersion: v1
kind: Service
metadata:
  name: my-lbs
spec:
  ports:
    - name: 5678-8080
      port: 5678
      protocol: TCP
      targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer

Kustomize version

v5.1.0

Operating system

None

sass1997 · Aug 18 '23 12:08

I tried this with the same kustomization.yaml, the only difference being these files:

# resources.yaml
apiVersion: v1
kind: Pod
metadata:
  name: prefix-pod-suffix # modified
spec:
  containers:
  - image: image
    name: pod
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: test
value: 1000000000
globalDefault: false
description: "This priority class is for high prio only!"
---
apiVersion: v1
kind: Service
metadata:
  name: my-lbs
spec:
  ports:
    - name: 5678-8080
      port: 5678
      protocol: TCP
      targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
# kustomize-config-name-prefix.yaml
namePrefix:
  - path: metadata/name
    apiVersion: v1
    kind: Pod
    skip: true

and got this result:

apiVersion: v1
kind: Service
metadata:
  name: my-lbs
spec:
  ports:
  - name: 5678-8080
    port: 5678
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
---
apiVersion: scheduling.k8s.io/v1
description: This priority class is for high prio only!
globalDefault: false
kind: PriorityClass
metadata:
  name: test
value: 1000000000
---
apiVersion: v1
kind: Pod
metadata:
  name: addition-prefix-pod-suffix
spec:
  containers:
  - image: image
    name: pod

Two things seem to be happening: first, the kustomization skips the builtin namePrefix behavior completely and only applies the configurations override; second, it ignores the skip statement in the prefix config, as the Pod example above shows (I tried a Service as well, with the same result).
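If I read this right, supplying any configurations override appears to replace kustomize's builtin fieldspec defaults for that transformer rather than merging with them. A minimal sketch of what I believe the builtin namePrefix default looks like (see api/konfig/builtinpluginconsts in the kustomize repo):

# builtin default (approximate) - matches every resource's metadata/name
namePrefix:
- path: metadata/name

Once an override file defines namePrefix, only the types it lists get the prefix, which would explain why the Service lost its prefix above.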

DiptoChakrabarty · Oct 12 '23 17:10

When trying with a kustomization.yaml like this (no configurations override at all):

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: addition-
resources:
- resources.yaml

Resulting output:

apiVersion: v1
kind: Service
metadata:
  name: addition-my-lbs
spec:
  ports:
  - name: 5678-8080
    port: 5678
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
---
apiVersion: scheduling.k8s.io/v1
description: This priority class is for high prio only!
globalDefault: false
kind: PriorityClass
metadata:
  name: addition-test
value: 1000000000
---
apiVersion: v1
kind: Pod
metadata:
  name: addition-prefix-pod-suffix
spec:
  containers:
  - image: image
    name: pod

DiptoChakrabarty · Oct 12 '23 17:10

I can't find any documentation on this skip: true field. It would be helpful for someone to go through the code or past issues and reason about what the behavior of skip: true should be. From there, we can figure out the discrepancy between the "expected" behavior and the actual behavior and triage this issue. In general, the skip field seems buggy and we need to fix it.

/assign @ncapps

natasha41575 · Nov 22 '23 17:11

Hi @natasha41575, I was not able to find any references to a skip: true field for configurations overrides. This test demonstrates that we can select, by resource type, which names a prefix, a suffix, or both are applied to. Resource types are skipped if they are not listed in the override configuration. There may be more context that I am not aware of, but a skip: true field does not seem necessary.

Example below

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: foo-
namespace: ns1
resources:
  - deployment.yaml
  - config.yaml
configurations:
  - name-prefix-rules.yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
# config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
# name-prefix-rules.yaml
namePrefix:
- path: metadata/name
  apiVersion: v1
  kind: Deployment

Output

This example shows that the foo- prefix was added only to the Deployment, because Deployment is listed in the namePrefix configuration override. No prefix was added to the ConfigMap because ConfigMap is not listed in the override.

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: ns1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-deployment1
  namespace: ns1

ncapps · Nov 28 '23 06:11

Thank you @ncapps for the thorough investigation! Agree that the skip: true field does not seem necessary. Could you leave a comment on https://github.com/kubernetes-sigs/kustomize/issues/3945 reminding us that the skip: true field is not documented or well-understood, and that we will need to address that if we do decide to keep the configurations field?

For this issue, it looks like the alternative configuration in https://github.com/kubernetes-sigs/kustomize/issues/5272#issuecomment-1829195608 that Nick suggested might be sufficient for this use case; a sketch applying it to the original reproduction follows below. @sass1997 Could you give it a try and let us know if this resolves your issue?
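Applied to the original reproduction, the suggested alternative would look roughly like this (a sketch: list only the kinds that should receive the prefix and simply omit PriorityClass, with no skip field at all):

# kustomize-config-name-prefix.yaml (sketch)
namePrefix:
  - path: metadata/name
    apiVersion: v1
    kind: Service

With namePrefix: sta- in the kustomization, the Service should then be renamed to sta-my-lbs while the PriorityClass keeps its original name.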

/triage resolved

natasha41575 · Dec 12 '23 16:12

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Mar 11 '24 17:03

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Apr 10 '24 17:04

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · May 10 '24 17:05

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot · May 10 '24 17:05