kubernetes-ingress
Support a custom name for all resources in Helm
Is your feature request related to a problem? Please describe.
We would like each resource to have a way to be named explicitly, without workarounds. We need most resources named 'nginx-ingress' because this is how an older version of the helm chart worked. There are workarounds in the latest helm chart 1.0.0 to do this, but now we are running into an issue with cluster-wide resources like ClusterRole because we deploy 'nginx-ingress' multiple times per cluster.
An example of such a workaround: https://github.com/nginxinc/kubernetes-ingress/commit/79a3eb913218f5c8d750c6e5ac0829415d371143
Currently we need to do something like this to get most resources named 'nginx-ingress':
```yaml
nginx-ingress:
  fullnameOverride: "nginx-ingress"
  serviceNameOverride: "nginx-ingress"
  controller:
    enableSnippets: true
    name: ""
```
Describe the solution you'd like
I would like each resource to have a name parameter that explicitly names the resource. For each resource this might look like the following, using rbac as an example:
```yaml
controller:
  rbac:
    name: "name here"
```
name shows up in a few places, but its behavior might be different for each resource:
https://github.com/nginxinc/kubernetes-ingress/blob/v3.3.0/deployments/helm-chart/values.yaml#L3
https://github.com/nginxinc/kubernetes-ingress/blob/v3.3.0/deployments/helm-chart/values.yaml#L233
https://github.com/nginxinc/kubernetes-ingress/blob/v3.3.0/deployments/helm-chart/values.yaml#L384
Additionally, there are a few configurations that are not documented: .Values.nameOverride, .Values.fullnameOverride, and .Values.serviceNameOverride.
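For context on what these undocumented values likely do: most Helm charts wire nameOverride and fullnameOverride through a `fullname` helper in `_helpers.tpl`. The snippet below is a sketch of that common convention, not this chart's actual helper, so the exact truncation and precedence may differ here:

```yaml
{{/* Sketch of the conventional Helm fullname helper (assumed, not this chart's exact code).
     fullnameOverride wins outright; otherwise the name is built from the release name
     and nameOverride (falling back to the chart name), truncated to the 63-char DNS limit. */}}
{{- define "nginx-ingress.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name (default .Chart.Name .Values.nameOverride) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
```

Under this convention every release that sets fullnameOverride to the same string gets identical resource names, which is exactly why cluster-scoped resources like ClusterRole collide when the chart is installed more than once per cluster.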
An override for each resource would also be acceptable, as long as it sets the name of the resource explicitly.
Describe alternatives you've considered
Vendor the chart and make the changes needed ourselves, or change how we do our deployments so that this is not an issue. We moved away from vendoring in favor of making the changes we need upstream; vendoring makes our lives harder and makes it less likely we will push our changes upstream. Changing our deployments is not really an option.
Additional context
Hi @coolbry95 thanks for reporting!
Be sure to check out the docs and the Contributing Guidelines while you wait for a human to take a look at this :slightly_smiling_face:
Cheers!
For reference, my fix ended up being this. I do not think this is the proper long-term solution, but it is how I implemented it in a pinch to get us to the latest release.
```yaml
{{- define "nginx-ingress.rbacName" -}}
{{- if .Values.controller.rbac.name -}}
{{- printf "%s" .Values.controller.rbac.name -}}
{{- else -}}
{{- printf "%s" (include "nginx-ingress.fullname" .) -}}
{{- end -}}
{{- end -}}
```
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "nginx-ingress.rbacName" . }}
...
```

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "nginx-ingress.rbacName" . }}
  labels:
    {{- include "nginx-ingress.labels" . | nindent 4 }}
subjects:
- kind: ServiceAccount
  name: {{ include "nginx-ingress.serviceAccountName" . }}
  namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ include "nginx-ingress.rbacName" . }}
  apiGroup: rbac.authorization.k8s.io
```
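With the rbacName helper above in place, each release can pin its cluster-scoped RBAC names via values. This is a hypothetical values fragment consuming the proposed controller.rbac.name (which is not in the upstream chart); the release names shown are made up for illustration:

```yaml
# values for one of several installs in the same cluster;
# controller.rbac.name is the proposed value, not an upstream chart option.
controller:
  rbac:
    name: "nginx-ingress-team-a"   # a second release could use e.g. "nginx-ingress-team-b"
```

Giving each release a distinct rbac name avoids the ClusterRole/ClusterRoleBinding collision described above, while releases that want the legacy behavior can all leave it unset and fall back to nginx-ingress.fullname.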