Including an internal-frontend service with auth enabled
NOTE: I have not tested this with mTLS.
What was changed
This adds an internal-frontend portion to the config map when a non-default authorizer is enabled, and removes the publicClient section if it's present.
It starts the internal-frontend deployment using the same parameters as the normal frontend, but with slightly different ports (7236 and 6936).
This pretty much follows the instructions in the release notes for 1.20: https://github.com/temporalio/temporal/releases/tag/v1.20.0
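Concretely, the services section added to the generated server config looks roughly like the sketch below (ports from the description above; the exact template expressions in the chart may differ), while the publicClient block is dropped so the worker routes internal traffic through this service:
services:
  # ... existing frontend/history/matching/worker entries ...
  internal-frontend:
    rpc:
      grpcPort: 7236
      membershipPort: 6936
      bindOnIP: "0.0.0.0"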
Why?
When one enables authorization using the recent JWT authorization support, the worker pod fails repeatedly with an authorization failure because it does not provide a JWT token. By following the above instructions, we bring up a special "internal frontend" that the worker uses for internal communications instead, bypassing this requirement and allowing it to start up properly.
- Closes issue #560
- How was this tested:
  Added the requisite configuration and ensured that the Temporal worker starts successfully.
- Any docs updates needed?
  Added a section to the readme.
I have a suggestion: why decide based on authorization settings? Could we simplify it with settings like
server:
  frontend:
    enabled: false
    # other config for frontend component
  internalFrontend:
    enabled: true
    # other config for internal-frontend component
and do the implementation based on {{ .Values.server.frontend.enabled }} and {{ .Values.server.internalFrontend.enabled }} instead of indirect logic based on authorization settings? In my opinion, that would be a more flexible solution that fits more use cases. For example, in our use case we have a custom frontend build with an authorizer that is independent of the server.config.authorization value, so your solution would not work for us without tweaking the helm chart.
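As a rough sketch of what that gating could look like in the templates (illustrative only, not a concrete implementation from this PR):
{{- if $.Values.server.frontend.enabled }}
# render the regular frontend Deployment/Service/ConfigMap entries here
{{- end }}
{{- if $.Values.server.internalFrontend.enabled }}
# render the internal-frontend Deployment/Service/ConfigMap entries here
{{- end }}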
Thank you
Yeah, I like that approach better. I would still need to know when to remove the publicClient stuff, I can probably add another config value for that.
I'll see about putting those changes in, ideally in the next few days.
I think the publicClient would be required when frontend.enabled == true, assuming that if you're deploying the built-in frontend (not your own), you want it to be accessible by clients.
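As a sketch of that rule (hypothetical placement, reusing the hostPort expression that appears later in this thread), the publicClient block would simply be keyed on the same flag:
{{- if $.Values.server.frontend.enabled }}
publicClient:
  hostPort: "{{ include "temporal.componentname" (list $ "frontend") }}:{{ $.Values.server.frontend.service.port }}"
{{- end }}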
Hi, you read my mind - I have a similar scenario: I want to use service principals and Azure Entra ID to handle user and app auth. Since you already created an MR, I want to highlight some changes I made to make everything work:
The source version was 0.44.0. I made my changes in an internal project to check that it works, before deciding it might be good to raise an MR against the original repo.
Change 1
I had to add an if block in server-configmap.yaml. Only the frontend is used externally, so only that service should have OAuth enabled (otherwise everything fails).
{{- if eq $service "frontend" }}
{{- with $server.config.authorization }}
authorization:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}
Change 2
The repo uses camelCase, so I used internalFrontend in values.yaml and added this to _helpers.tpl:
{{- define "serviceName" -}}
{{- $service := index . 0 -}}
{{- if eq $service "internalFrontend" }}
{{- print "internal-frontend" }}
{{- else }}
{{- print $service }}
{{- end }}
{{- end -}}
Change 3
Usage in server-configmap.yaml:
{{- range $originalService := (list "frontend" "internalFrontend" "history" "matching" "worker") }}
{{ $serviceValues := index $.Values.server $originalService }}
{{ $service := include "serviceName" (list $originalService) }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ include "temporal.fullname" $ }}-config-{{ $service }}"
- serviceName might not be the best name here
- internalFrontend should be added based on the condition in the comment above
- I added internalFrontend to all places where frontend was used (made a loop when needed)
Change 4
Later in server-configmap.yaml I added the missing section (a condition is required) and a condition for publicClient:
{{- if ne $service "frontend"}}
internal-frontend:
  rpc:
    grpcPort: {{ $server.internalFrontend.service.port }}
    httpPort: {{ $server.internalFrontend.service.httpPort }}
    membershipPort: {{ $server.internalFrontend.service.membershipPort }}
    bindOnIP: "0.0.0.0"
{{- end }}
# ...
{{- if eq $service "frontend"}}
publicClient:
  hostPort: "{{ include "temporal.componentname" (list $ "frontend") }}:{{ $server.frontend.service.port }}"
{{- end }}
Change 5
Values.yaml
internalFrontend:
  service:
    annotations: {}
    type: ClusterIP
    port: 7236
    membershipPort: 6936
    httpPort: 7246
  ingress:
    enabled: false
    annotations: {}
    hosts:
      - "/"
    tls: []
  metrics:
    annotations:
      enabled: true
    serviceMonitor: {}
    prometheus: {}
  podAnnotations: {}
  podLabels: {}
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  additionalEnv: []
  containerSecurityContext: {}
  topologySpreadConstraints: []
  podDisruptionBudget: {}
Change 6
Changed the first part to loop over frontend and internalFrontend in server-service.yaml:
{{- if $.Values.server.enabled }}
{{- range $originalService := (list "frontend" "internalFrontend") }}
{{ $serviceValues := index $.Values.server $originalService }}
{{ $service := include "serviceName" (list $originalService) }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "temporal.componentname" (list $ $service) }}
  labels:
    {{- include "temporal.resourceLabels" (list $ $originalService "") | nindent 4 }}
  {{- if $serviceValues.service.annotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" $serviceValues.service.annotations "context" $) | nindent 4 }}
  {{- end }}
spec:
  type: {{ $serviceValues.service.type }}
  ports:
    - port: {{ $serviceValues.service.port }}
      targetPort: rpc
      protocol: TCP
      name: grpc-rpc
      {{- if hasKey $serviceValues.service "nodePort" }}
      nodePort: {{ $serviceValues.service.nodePort }}
      {{- end }}
    - port: {{ $serviceValues.service.httpPort }}
      targetPort: http
      protocol: TCP
      name: http
      # TODO: Allow customizing the node HTTP port
  selector:
    app.kubernetes.io/name: {{ include "temporal.name" $ }}
    app.kubernetes.io/instance: {{ $.Release.Name }}
    app.kubernetes.io/component: {{ $service }}
---
{{- end }}
Change 7
To run admin-tool commands inside the pod, I have to add an Authorization header:
temporal operator namespace create --namespace <namespace> --grpc-meta=Authorization='Bearer <token_from_ui>'
I had a similar issue after enabling the authorizer, and I enabled internal-frontend by adding an extra service endpoint rather than an entirely new pod. I did it that way because I found that the frontend was throwing errors on startup when I specified publicClient alongside services.internal-frontend; the two configurations are mutually exclusive. The way I worked around it only takes a few changes.
For the deployment, pass SERVICES="frontend:internal-frontend" to the frontend container's env:
{{- if and (eq $service "frontend") ($.Values.server.frontend.internal.enabled) }}
- name: SERVICES
  value: "{{ $service }}:internal-frontend"
{{- else }}
- name: SERVICES
  value: {{ $service }}
{{- end }}
Plus the extra port
ports:
  ...
  {{- if and (eq $service "frontend") ($.Values.server.frontend.internal.enabled) }}
  - name: rpc-internal
    containerPort: {{ $serviceValues.internal.service.port }}
    protocol: TCP
  {{- end }}
For the configmap, internal-frontend looks the same as you have
{{- if $server.frontend.internal.enabled }}
internal-frontend:
  rpc:
    grpcPort: {{ $server.frontend.internal.service.port }}
    membershipPort: {{ $server.frontend.internal.service.membershipPort }}
    bindOnIP: "0.0.0.0"
{{- end }}
But publicClient has to be removed when internal is used; this prevents the frontend from throwing startup errors.
{{- if $server.frontend.internal.enabled }}
{{- else }}
publicClient:
  hostPort: "{{ include "temporal.componentname" (list $ "frontend") }}:{{ $server.frontend.service.port }}"
{{- end }}
My values file addition looks like this, with the same defaults as the compose setup this is all based on. Given that this setup reuses the same frontend deployment for internal and "external" traffic, I think it is intuitive to nest the values under frontend.
frontend:
  ...
  # Enables internal-frontend so that the builtin worker can access the frontend while
  # bypassing authorizer and claim mapper.
  # Equivalent to env.USE_INTERNAL_FRONTEND in docker compose config
  internal:
    enabled: false
    service:
      # Evaluated as template
      annotations: {}
      type: ClusterIP
      port: 7236
      membershipPort: 6936
EDIT: I have also confirmed that the above setup works with internode TLS. I haven't checked TLS for client connections, but I suspect it would work the same way.
There appears to be another PR to accomplish this here as well: https://github.com/temporalio/helm-charts/pull/602
I like #602; it's kind of the implementation I had in mind. @dleblanc, would #602 cover your use case as well? If yes, I would prefer to give it a "push" :)
Thanks
Yeah, I just ran it locally and it seems to meet my needs nicely. I'll close this PR in favor of #602.
Looks like https://github.com/temporalio/helm-charts/pull/602 didn't resolve the issue completely.
If I enable JWT auth in the helm chart, it's applied to both the frontend and the internal-frontend :(
Probably this scenario: https://github.com/temporalio/helm-charts/pull/602#discussion_r1820542381
There should be a condition in the configmap to skip the auth section for internalFrontend.
Exactly, but not only the auth section. We also need the ability to set a different set of TLS options. The idea is to enable TLS client auth for the internal frontend and JWT auth for the external frontend. This is a very common scenario where you need different sets of options for the external and internal frontend servers.
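For the auth part, the missing condition might look something like the following sketch, written against the configmap loop shown earlier in this thread ($originalService and $server as in those snippets; this is not code from #602 itself):
{{- if ne $originalService "internalFrontend" }}
{{- with $server.config.authorization }}
authorization:
  {{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}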