Ingress servicePort variable definition doesn't respect Kong disabled
What happened?
In templates/networking/ingress.yaml you define a servicePort variable on line 22:
https://github.com/kubernetes/dashboard/blob/master/charts/kubernetes-dashboard/templates/networking/ingress.yaml#L22
Unfortunately, that ternary doesn't respect setups where Kong is not used at all. At minimum it should be guarded by a condition such as {{- if $.Values.kong.enabled }}, or be handled in an even more sophisticated way.
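For illustration, a minimal sketch of such a guard (the $useHttps condition and the fallback value of 443 are hypothetical placeholders, not the chart's actual logic):

```yaml
# Sketch only, not the chart's actual template. $useHttps and the
# fallback port 443 are hypothetical placeholders.
{{- $servicePort := 443 }}
{{- if $.Values.kong.enabled }}
  {{- $servicePort = ternary $.Values.kong.proxy.tls.servicePort $.Values.kong.proxy.http.servicePort $useHttps }}
{{- end }}
```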
Also, default values for $.Values.kong.proxy.tls.servicePort and $.Values.kong.proxy.http.servicePort are missing from values.yaml. If they are not defined - even if Kong is disabled! - helm fails to render the chart with strange errors like this:
❯ helm template --set kong.enabled=false --set app.ingress.enabled=true
Error: template: kubernetes-dashboard/templates/networking/ingress.yaml:22:30: executing "kubernetes-dashboard/templates/networking/ingress.yaml" at <$.Values.kong.proxy.tls.servicePort>: nil pointer evaluating interface {}.servicePort
Use --debug flag to render out invalid YAML
I suggest getting the servicePort definitions from somewhere else in values.yaml instead, e.g. auth.service.port, api.service.port, and so on.
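A rough sketch of that suggestion (assuming a value like api.service.port exists in values.yaml; the exact paths are assumptions, not the chart's confirmed layout):

```yaml
# Sketch: prefer the Kong proxy port when Kong is enabled, otherwise
# fall back to the app's own service port. The values path
# api.service.port is an assumption for illustration.
{{- $servicePort := $.Values.api.service.port }}
{{- if $.Values.kong.enabled }}
  {{- $servicePort = $.Values.kong.proxy.http.servicePort }}
{{- end }}
```

Alternatively, Sprig's dig function could make the Kong lookup nil-safe even when the subtree is absent.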
What did you expect to happen?
The chart of course needs to render without errors, even if I decide to disable Kong in my environment.
How can we reproduce it (as minimally and precisely as possible)?
❯ helm template --set kong.enabled=false --set app.ingress.enabled=true
Error: template: kubernetes-dashboard/templates/networking/ingress.yaml:22:30: executing "kubernetes-dashboard/templates/networking/ingress.yaml" at <$.Values.kong.proxy.tls.servicePort>: nil pointer evaluating interface {}.servicePort
Use --debug flag to render out invalid YAML
Anything else we need to know?
No response
What browsers are you seeing the problem on?
No response
Kubernetes Dashboard version
7.11.1
Kubernetes version
1.30.x
Dev environment
No response
Since the containerPort is already used for the service definitions (see https://github.com/kubernetes/dashboard/blob/master/charts/kubernetes-dashboard/templates/services/api.yaml#L37), even that value could be used instead.
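A sketch of that alternative for the ingress backend, assuming the same value the service template references lives at api.containerPort in values.yaml (both the service name and the values path are assumptions):

```yaml
# Sketch: point the ingress backend at the port the api Service already
# exposes. "kubernetes-dashboard-api" and api.containerPort are assumed
# names for illustration only.
backend:
  service:
    name: kubernetes-dashboard-api
    port:
      number: {{ $.Values.api.containerPort }}
```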
I recently hit a seemingly related problem as well, and started a PR for my fix...
#10086
This is closer to the problem I am having, but shares many of the symptoms.
#9601
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Same issue with 7.13.0.
Same problem here.
+1 Any workaround?