0 worker nodes despite having pods running
I am deploying Trino in a Kubernetes environment. The coordinator is working and accessible over HTTPS, but the web UI shows 0 worker nodes even though the worker pod is created and running successfully. Coordinator's config.properties:
config.properties: |
coordinator=true
{{- if gt (int .Values.server.workers) 0 }}
node-scheduler.include-coordinator=false
{{- else }}
node-scheduler.include-coordinator=true
{{- end }}
# http-server.http.port={{ .Values.service.port }}
query.max-memory={{ .Values.server.config.query.maxMemory }}
query.max-memory-per-node={{ .Values.server.config.query.maxMemoryPerNode }}
memory.heap-headroom-per-node={{ .Values.server.config.memory.heapHeadroomPerNode }}
discovery-server.enabled=true
discovery.uri=https://{{ .Values.service.ip }}:{{ .Values.service.port }}
http-server.https.enabled=true
http-server.https.port={{ .Values.service.port }}
internal-communication.https.required=true
http-server.https.keystore.path={{ .Values.server.config.https.keystore.path }}
# needed for Presto ODBC driver
protocol.v1.alternate-header-name=Presto
http-server.authentication.type={{ .Values.server.config.authenticationType }}
http-server.https.keystore.path={{ .Values.server.config.https.keystore.path }}
internal-communication.shared-secret=***********
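Rendered, that coordinator template produces something along these lines (the hostname, port, paths, and authentication type below are placeholders, not values taken from the chart):

# rendered coordinator config.properties - placeholder values only
coordinator=true
node-scheduler.include-coordinator=false
query.max-memory=4GB
query.max-memory-per-node=1GB
memory.heap-headroom-per-node=1GB
discovery-server.enabled=true
discovery.uri=https://trino.example.svc:8443
http-server.https.enabled=true
http-server.https.port=8443
internal-communication.https.required=true
http-server.https.keystore.path=/etc/trino/keystore.jks
# needed for Presto ODBC driver
protocol.v1.alternate-header-name=Presto
http-server.authentication.type=PASSWORD
internal-communication.shared-secret=<shared secret>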
Worker's config.properties:
config.properties: |
coordinator=false
internal-communication.https.required=true
http-server.https.enabled=true
http-server.https.port={{ .Values.service.port }}
query.max-memory={{ .Values.server.config.query.maxMemory }}
query.max-memory-per-node={{ .Values.server.config.query.maxMemoryPerNode }}
memory.heap-headroom-per-node={{ .Values.server.config.memory.heapHeadroomPerNode }}
discovery.uri=https://{{ .Values.service.ip }}:{{ .Values.service.port }}
internal-communication.shared-secret=*************
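For the workers, the key point is that discovery.uri must render to an address that resolves to the coordinator from inside the worker pods (typically the coordinator Service's DNS name), and the shared secret must be identical to the coordinator's. A sketch with the same placeholder values:

# rendered worker config.properties - placeholder values only
coordinator=false
http-server.https.enabled=true
http-server.https.port=8443
internal-communication.https.required=true
query.max-memory=4GB
query.max-memory-per-node=1GB
memory.heap-headroom-per-node=1GB
# must be reachable from the worker pods and match the coordinator's HTTPS port
discovery.uri=https://trino.example.svc:8443
# must be identical to the coordinator's shared secret
internal-communication.shared-secret=<shared secret>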
I can see both the worker and coordinator pods in Running status, yet the active worker count is shown as 0. What is wrong here?
Still facing this issue. Please help.
Is there any update on it?
Check the logs of the workers - I'm guessing the discovery.uri is not reachable from the workers.
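A quick way to check is something like the following (the deployment name, service name, and port are placeholders, and this assumes curl is available in the worker image):

# look for discovery/announcement errors in a worker's log
kubectl logs deploy/trino-worker --tail=200 | grep -iE "discovery|announc"

# from inside a worker pod, hit the coordinator's HTTPS endpoint directly;
# -k skips TLS verification (useful with a self-signed keystore),
# and /v1/info is Trino's standard status endpoint
kubectl exec deploy/trino-worker -- curl -vk https://trino.example.svc:8443/v1/info

If the curl call fails or times out, the workers cannot reach the discovery.uri, which would explain why they never register with the coordinator.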