cote
K3S Kubernetes Cluster: TypeError [ERR_INVALID_ARG_TYPE]: The "options.port" property must be one of type number or string. Received null
I'm running 6+ NodeJS programs, each with a combination of requesters, responders, publishers and subscribers. These programs are running on a k3s kubernetes cluster. I use redis for service discovery.
To make this work, I need to expose ports when building the Docker images. At the moment I expose ports 8000-9999, but with all programs running, I get:
TypeError [ERR_INVALID_ARG_TYPE]: The "options.port" property must be one of type number or string. Received null
at new NodeError (node:internal/errors:329:5)
at lookupAndConnect (node:net:989:13)
at Socket.connect (node:net:968:5)
at ReqSocket.Socket.connect (/usr/src/app/node_modules/@dashersw/axon/lib/sockets/sock.js:297:8)
at Requester.onAdded (/usr/src/app/node_modules/cote/src/components/requester.js:94:19)
at Discovery.
This can happen to any pod in the cluster, so I suspect it's related to port usage. Is it possible to assign specific ports to a requester, responder, etc., or is it possible to use Redis for the communication itself, and not only for discovery?
I made some progress with debugging. The NodeJS programs that have this issue both contain this Requester, which is causing the error:
```js
const cote = require('cote')({ redis: { host: 'redis' } });

const configRequester = new cote.Requester({
    name: 'Requester User Configurations',
    key: 'user.processor',
});
```
The NodeJS program that has the responder for this contains the following:
```js
const cote = require('cote')({ redis: { host: 'redis' } });

const configPublisher = new cote.Publisher({
    name: 'Publisher to user configuration changes',
    key: 'user.publisher',
});

const configResponder = new cote.Responder({
    name: 'Responder to User Config Requests',
    key: 'user.processor',
});
```
If I remove the Publisher and keep only the Responder, the issue is gone. What's wrong with the definition of the Publisher?
Hey! Any update? I have the same issue.
Just a small piece of the debug logs (sorry, I actually have no idea what's going on there...):
2021-08-26T00:21:16.889Z axon:sock client connect
2021-08-26T00:21:16.889Z axon:sock client add socket 10
2021-08-26T00:21:17.973Z axon:sock client remove socket 10
2021-08-26T00:21:25.460Z axon:sock client connect attempt builder-4.default.svc.cluster.local:8001
2021-08-26T00:21:25.504Z axon:sock client connect attempt builder-4.default.svc.cluster.local:8000
2021-08-26T00:21:25.516Z axon:sock client connect attempt builder-4.default.svc.cluster.local:65535
2021-08-26T00:21:25.531Z axon:sock client connect attempt builder-4.default.svc.cluster.local:65535
2021-08-26T00:21:25.548Z axon:sock client connect attempt builder-4.default.svc.cluster.local:null
node:net:989
throw new ERR_INVALID_ARG_TYPE('options.port',
^
TypeError [ERR_INVALID_ARG_TYPE]: The "options.port" property must be one of type number or string. Received null
From what I recall, my issue was with the Publisher. The broadcasts key is mandatory. Without it I got the same error as you.
```js
const randomPublisher = new cote.Publisher({
    name: 'Random Publisher',
    // namespace: 'rnd',
    // key: 'a certain key',
    broadcasts: ['randomUpdate'],
});
```
I still have issues with the Publisher, though. It seems to eat up all memory when there is no listener, eventually causing an OOM error. The way I read the docs, it should drop any messages if there is no listener.
I have that error even in an app that has no Publishers :(
We experienced similar issues when we upgraded to Kubernetes v1.20.2 (provider: Digital Ocean) using cote v1.0.2. We use cote with Redis. We "fixed" it by setting the port manually, because we detected that cote wasn't able to find a free port to use and was trying to set it to 65535.
Sample code:
```js
const responder = new cote.Responder({
    name: "TEST1",
    key: "TEST1",
    port: 8000,
});
```
We couldn't find any other solution. I hope it helps.
We are now facing random timeout issues after upgrading to Kubernetes v1.20.9. ...
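One way to make the explicit-port workaround above manageable across a cluster is to derive the port from an environment variable, so each component binds a known port that matches the range exposed in the Docker image. A minimal sketch; `COTE_PORT` and the fallback of 8000 are assumptions for illustration, not cote configuration:

```javascript
// Sketch: pick a stable port for a cote component from the environment,
// falling back to a base port. COTE_PORT is a hypothetical variable name.
function cotePort(env, base = 8000) {
    const n = Number(env.COTE_PORT);
    return Number.isInteger(n) && n >= 1 && n <= 65535 ? n : base;
}

// Usage with a Responder (mirrors the workaround above):
// const responder = new cote.Responder({
//     name: 'TEST1',
//     key: 'TEST1',
//     port: cotePort(process.env),
// });
```

Setting the variable per Deployment keeps each pod's port unique and predictable, avoiding cote's free-port scan entirely.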
Thank you! This really helps.
Might sound unrelated, but I was getting this error due to a malfunctioning Redis server.
Validate the Redis server status in your system; it may fix the problem.
EDIT: Another observation: even when ports are configured and open, cote seems to get confused. A system reboot does not solve this, but a full shutdown and then start seems to fix it.