`TRANSACTOR_URL` for `front` seems to have no effect
Description of the issue
I'm trying to run Huly in Kubernetes, but I can't find the right way to have the frontend connect to the transactor service via anything but ws://localhost:3333. I've used huly-selfhost's Docker Compose setup as a reference.
Your environment
Huly v0.6.230
Steps to reproduce
Set the TRANSACTION_URL for the front container to any value other than ws://localhost:3333, and note how the frontend tries to open a websocket over ws://localhost:3333 regardless.
Expected behaviour
The TRANSACTION_URL environment variable is recognized correctly.
Actual behaviour
No matter what I do (e.g. also change the TRANSACTION_URL environment variable in all of the other containers), the client app tries to open a websocket via ws://localhost:3333. In my case, I set the TRANSACTOR_URL to https://huly.example.com/transactor and expect the frontend to open the websocket using that URL.
Environment variables
```yaml
- apiVersion: v1
  kind: Secret
  metadata:
    name: huly-io-rekoni-env
    namespace: huly-io
  stringData: {}
  type: Opaque
- apiVersion: v1
  kind: Secret
  metadata:
    name: huly-io-transactor-env
    namespace: huly-io
  stringData:
    ACCOUNTS_URL: http://localhost:3000
    ELASTIC_INDEX_NAME: huly-io
    ELASTIC_URL: http://huly-io-elastic:9200
    FRONT_URL: https://huly.example.com/
    LAST_NAME_FIRST: "true"
    METRICS_CONSOLE: "false"
    METRICS_FILE: metrics.txt
    MINIO_ACCESS_KEY: REDACTED
    MINIO_ENDPOINT: REDACTED
    MINIO_SECRET_KEY: REDACTED
    MONGO_URL: mongodb://huly-io-mongodb:27017
    REKONI_URL: http://localhost:4004
    SERVER_CURSOR_MAXTIMEMS: "30000"
    SERVER_PORT: "3333"
    SERVER_PROVIDER: ws
    SERVER_SECRET: secret
  type: Opaque
- apiVersion: v1
  kind: Secret
  metadata:
    name: huly-io-account-env
    namespace: huly-io
  stringData:
    ACCOUNTS_URL: http://localhost:3000
    ENDPOINT_URL: ws://huly.example.com/transactor
    FRONT_URL: https://huly.example.com/
    INIT_WORKSPACE: demo-tracker
    MINIO_ACCESS_KEY: REDACTED
    MINIO_ENDPOINT: REDACTED
    MINIO_SECRET_KEY: REDACTED
    MODEL_ENABLED: "*"
    MONGO_URL: mongodb://huly-io-mongodb:27017
    SERVER_PORT: "3000"
    SERVER_SECRET: secret
    TRANSACTOR_URL: ws://localhost:3333
  type: Opaque
- apiVersion: v1
  kind: Secret
  metadata:
    name: huly-io-collaborator-env
    namespace: huly-io
  stringData:
    ACCOUNTS_URL: http://localhost:3000
    COLLABORATOR_PORT: "3078"
    MINIO_ACCESS_KEY: REDACTED
    MINIO_ENDPOINT: REDACTED
    MINIO_SECRET_KEY: REDACTED
    MONGO_URL: mongodb://huly-io-mongodb:27017
    SECRET: secret
    TRANSACTOR_URL: ws://localhost:3333
    UPLOAD_URL: /files
  type: Opaque
- apiVersion: v1
  kind: Secret
  metadata:
    name: huly-io-front-env
    namespace: huly-io
  stringData:
    ACCOUNTS_URL: https://huly.example.com/accounts
    CALENDAR_URL: http://huly.example.com:8095
    COLLABORATOR_API_URL: http://huly.example.com/collaborator
    COLLABORATOR_URL: ws://huly.example.com/collaborator
    DEFAULT_LANGUAGE: en
    ELASTIC_URL: http://huly-io-elastic:9200
    GMAIL_URL: http://huly.example.com:8088
    LAST_NAME_FIRST: "true"
    MINIO_ACCESS_KEY: REDACTED
    MINIO_ENDPOINT: REDACTED
    MINIO_SECRET_KEY: REDACTED
    MONGO_URL: mongodb://huly-io-mongodb:27017
    REKONI_URL: https://huly.example.com/rekoni
    SERVER_PORT: "8080"
    SERVER_SECRET: secret
    TELEGRAM_URL: http://huly.example.com:8086
    TITLE: Huly Self Hosted
    TRANSACTOR_URL: ws://huly.example.com/transactor
    UPLOAD_URL: /files
  type: Opaque
```
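For context, these Secrets are consumed by the corresponding Deployments. A generic Kubernetes sketch (the image tag is an assumption, not taken from my actual manifests) of wiring the front Secret in via `envFrom`:

```yaml
# Generic sketch: inject every key of the huly-io-front-env Secret
# above as an environment variable into the front container.
# The image tag is an assumption.
containers:
  - name: front
    image: hardcoreeng/front:v0.6.230
    envFrom:
      - secretRef:
          name: huly-io-front-env
```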
I'm having the same issue. Besides this, we're also facing a problem with the websocket protocol (ws vs. wss) as well!
Friendly bump on this one. 🙂
I'm unsure if this is really the same issue, but it feels slightly related. I've used kompose to convert the huly-selfhost repository to K8s resources, and now all Pods start except for the account Pod.
However, again it seemingly has something to do with environment variables.
```
node:internal/validators:424
    throw new ERR_SOCKET_BAD_PORT(name, port, allowZero);
    ^

RangeError [ERR_SOCKET_BAD_PORT]: options.port should be >= 0 and < 65536. Received type number (NaN).
    at validatePort (node:internal/validators:424:11)
    at Server.listen (node:net:2000:5)
    at WBt.listen (/usr/src/app/bundle.js:270:27463)
    at OFt (/usr/src/app/bundle.js:277:455341)
    at Object.<anonymous> (/usr/src/app/bundle.js:277:455961)
    at Module._compile (node:internal/modules/cjs/loader:1376:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1435:10)
    at Module.load (node:internal/modules/cjs/loader:1207:32)
    at Module._load (node:internal/modules/cjs/loader:1023:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:135:12) {
  code: 'ERR_SOCKET_BAD_PORT'
}

Node.js v20.11.1
```
I wouldn't call myself a newbie to Docker, Kubernetes or containerized deployments, but I'm at a loss as to what's causing this, given that the account container from the huly-selfhost repository starts up fine when run with `docker compose up`, and the `SERVER_PORT` environment variable is set there as well as in my K8s Pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kompose.cmd: ./kompose convert -o k8s/ --chart
    kompose.version: 1.26.0 (40646f47)
  creationTimestamp: "2024-05-21T19:59:01Z"
  generateName: account-98649c9dd-
  labels:
    io.kompose.service: account
    pod-template-hash: 98649c9dd
  name: account-98649c9dd-frhbb
  namespace: huly-io
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: account-98649c9dd
    uid: b909e0c0-3c85-4e14-82a1-c9b4a25a0796
  resourceVersion: "37871003"
  uid: 5c8e1a87-7477-4fe1-8331-ce73d4061969
spec:
  containers:
  - env:
    - name: ACCOUNTS_URL
      value: http://localhost:3000
    - name: ENDPOINT_URL
      value: ws://huly.example.com:3333
    - name: FRONT_URL
      value: http://front:8080
    - name: INIT_WORKSPACE
      value: demo-tracker
    - name: MINIO_ACCESS_KEY
      value: minioadmin
    - name: MINIO_ENDPOINT
      value: minio
    - name: MINIO_SECRET_KEY
      value: minioadmin
    - name: MODEL_ENABLED
      value: '*'
    - name: MONGO_URL
      value: mongodb://mongodb:27017
    - name: SERVER_PORT
      value: "3000"
    - name: SERVER_SECRET
      value: secret
    - name: TRANSACTOR_URL
      value: ws://transactor:3333
    image: hardcoreeng/account:v0.6.221
    imagePullPolicy: IfNotPresent
    name: account
    ports:
    - containerPort: 3000
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-hsp2g
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node1
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-hsp2g
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
```
As far as the original issue goes, I figured out that `ENDPOINT_URL` is what the browser client uses, as that is what the accounts API returns (`POST account:3000/`), e.g.:
```json
{
  "result": {
    "endpoint": "ws://example.com/transactor",
    "email": "[email protected]",
    "token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJjb25maXJtZWQiOnRydWUsImVtYWlsIjoiY3VudEBjdW50LmN1bnQiLCJ3b3Jrc3BhY2UiOiIiLCJwcm9kdWN0SWQiOiIifQ.XXEIpO6whEtXYoRfXEZnHQYju2kN2gRjCzahQTn40kU"
  }
}
```
However, the `/transactor` path is ignored by the client when constructing the URL, hence the transactor must have its own dedicated hostname. Although I will attempt to match on `location ~ ^/eyJ` to see if that works as well.
Hey @NiklasRosenstein, sorry for the delay and thanks for reporting the issue. We are using Kubernetes for Huly deployment and planning to share our deployment configuration as an example in the huly-selfhost repo. But indeed, we are using dedicated hostnames for services, and I suppose that a scenario with dedicated paths on the same hostname might not work. I'm planning to test this and get back to you with a solution.
@NiklasRosenstein answering your questions:
> No matter what I do (e.g. also change the TRANSACTION_URL environment variable in all of the other containers), the client app tries to open a websocket via ws://localhost:3333. In my case, I set the TRANSACTOR_URL to https://huly.example.com/transactor and expect the frontend to open the websocket using that URL.
Indeed, the TRANSACTOR_URL env variable is not used by the front container. It used to be, and is still required to be provided, but the actual transactor URL is the one returned by the account container when the user logs in.
If you initially used the wrong env variable name, the browser might have stored login information with the wrong transactor URL in local storage. I recommend clearing it and logging in again.
> However, the /transactor path is ignored by the client when constructing the URL, hence the transactor must have its dedicated hostname. Although I will attempt to match on `location ~ ^/eyJ` to see if that works as well.
There seems to be a problem in our code that causes this. This scenario was not expected and had never been tested. I have added a fix for it. Until it lands, the transactor should be exposed on a dedicated domain or port.
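A dedicated hostname for the transactor can be wired up along these lines (a sketch only; the hostname, service name, and ingress class are assumptions):

```yaml
# Sketch: expose the transactor websocket service on its own
# hostname instead of a sub-path. All names here are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: huly-io-transactor
  namespace: huly-io
spec:
  ingressClassName: nginx
  rules:
    - host: transactor.huly.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: huly-io-ws
                port:
                  number: 3333
```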
> I'm unsure if this is really the same issue, but it feels slightly related. I've used kompose to convert the huly-selfhost repository to K8s resources, and all Pods start except for the account Pod now.
This is because of the missing ACCOUNT_PORT env variable. It is not marked as required, but actually is.
Hi @aonnikov, thanks for the detailed response!
It took me a while to figure out which variables must contain the internal vs. the external URL. I couldn't discern a pattern, and the huly-selfhost example doesn't help distinguish them either. Are there maybe some docs that I've missed?
> There seems to be a problem in our code, that caused this. This scenario was not expected and has never been tested. I added a fix for this. Until then, the transactor should be exposed on a dedicated domain or port.
I've gotten a step closer to workspace creation with the method I mentioned above. Once the fix you mentioned is merged, I will gladly move the transactor to a sub-path. Thanks!
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.org/websocket-services: huly-io-ws
  name: huly-io-transactor
  namespace: huly-io
spec:
  ingressClassName: nginx-private
  rules:
  - host: huly.rosenstein.app
    http:
      paths:
      - backend:
          service:
            name: huly-io-ws
            port:
              number: 3333
        path: /(eyJ.*)|(api/.*)
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - huly.rosenstein.app
    secretName: huly-io-tls
```
> This is because of missing ACCOUNT_PORT env variable. It is not marked as required but actually is required.
Are you sure it's not SERVER_PORT? I don't see ACCOUNT_PORT in huly-selfhost, but SERVER_PORT is set there (and also in my kompose-generated version). Anyway, this is a separate issue that I no longer face with my more hand-crafted Kubernetes manifests 🤷‍♂️ If I get the chance, I'll try this again.
> Are you sure it's not SERVER_PORT?

Yes, it is ACCOUNT_PORT. The port is not listed in the account's environment variables and therefore not picked up by kompose. It took me some time to figure out what was going on, because this environment variable was not required in the Docker configuration, but the service did not start without it when deployed to Kubernetes.
> It took me a while to figure out what variables must have the internal vs. the external URL. I couldn't discern a pattern that would allow me to tell, and the huly-selfhost example doesn't help to distinguish this either. Are there maybe some docs that I've missed?
Yeah, that's the problem: we don't follow any naming convention that would allow distinguishing internal from external URLs. It is something we can try to improve.
I've just pushed a sample Kubernetes deployment configuration to the huly-selfhost repository. It was generated with kompose, cleaned up, and restructured a bit. All the required configuration was moved to a ConfigMap and a Secret, but I tried to keep them as small as possible.
My Kubernetes configuration assumes that each service sits on a dedicated hostname, but it should not be a problem to use dedicated paths instead. It may require some changes in the Ingress configuration to rewrite paths, but it should work fine now.
Please check it out: https://github.com/hcengineering/huly-selfhost/tree/main/kube
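If you do go with dedicated paths, an ingress-nginx rewrite along these lines should work (a sketch only; the hostname, service name, and port are assumptions):

```yaml
# Sketch: route /transactor/... on the shared hostname to the
# transactor service, stripping the /transactor prefix via the
# ingress-nginx rewrite-target annotation. Names are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: huly-io-transactor-path
  namespace: huly-io
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: huly.example.com
      http:
        paths:
          - path: /transactor(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: huly-io-ws
                port:
                  number: 3333
```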