"Error: timeout reached: only 0 responses received out of 1" while scaling up a high-traffic system with multiple nodes.
I encountered an error "Error: timeout reached: only 0 responses received out of 1" while scaling up a high-traffic system with multiple nodes.
My system runs 2 identical instances on DigitalOcean, and it sometimes crashes with this error:
```
/workspace/node_modules/@socket.io/redis-streams-adapter/dist/cluster-adapter.js:348
            reject(new Error(`timeout reached: only ${storedRequest.current} responses received out of ${storedRequest.expected}`));

Error: timeout reached: only 0 responses received out of 1
    at listOnTimeout (node:internal/timers:559:17)
    at processTimers (node:internal/timers:502:7)
```
My Redis setup is:

```js
const redisClient = createClient({ url: config.REDIS_ADDRESS });

const initRedis = () => {
  redisClient.on('error', err => console.log(err));
  return redisClient.connect().then(() => console.log('connected redis success'));
};
```
My adapter setup is:
```js
Io.io = new Server(httpServer, {
  adapter: createAdapter(redisClient, {
    maxLen: 500_000,
    readCount: 5000,
    heartbeatInterval: 5_000,
    heartbeatTimeout: 120_000,
  }),
  cors: {
    origin: config.CORS_WEBSOCKET,
  },
  connectionStateRecovery: {
    maxDisconnectionDuration: config.MAX_DISCONNECTION_DURATION,
    skipMiddlewares: true,
  },
}).use(checkTokenAuth);
```
My setup with Express is:
```js
const server = http.createServer(app);
const io = new Io();

Promise.all([db.init(), initRedis()]).then(async () => {
  server.listen(config.PORT, () => {
    logger.info(`HttpNetwork is running at port: ${config.PORT}`);
  });
  io.startSocket(server, () => console.log(`socket is running at port: ${config.PORT}`));
});
```
I had a similar error when making a call to fetchSockets().
```
Error: timeout reached: only 0 responses received out of 1
    at Timeout._onTimeout (node_modules/@socket.io/redis-streams-adapter/dist/cluster-adapter.js:348:28)
    at listOnTimeout (node:internal/timers:564:17)
    at process.processTimers (node:internal/timers:507:7)
```
The other instances show an `uncaughtException: Maximum call stack size exceeded` error in the `hasBinary` function of `util.js`.
My solution was to check for sockets following @darrachequesne's recommendation (https://github.com/socketio/socket.io/issues/4183#issuecomment-982181865) and not to call `fetchSockets()`.
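For anyone landing here: the gist of that approach, as I understand it, is to avoid the cluster-wide round trip when the local node already has the information you need. A rough sketch under that assumption (the helper name is illustrative, not taken from the linked comment):

```js
// Illustrative only: collect the sockets of a room that are connected to
// *this* node, without asking the other nodes via the adapter.
function localSocketsInRoom(io, roomId) {
  const matches = [];
  for (const [, socket] of io.of('/').sockets) {
    if (socket.rooms.has(roomId)) {
      matches.push(socket);
    }
  }
  return matches;
}

// Broadcasting still reaches every node through the adapter, so in many
// cases you never need the remote socket objects at all:
// io.to(roomId).emit('someEvent', payload);
```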
I've also been getting a similar error with `const recipientsList = await io.in(exSocket.appGuid).fetchSockets();`:

```
timeout reached: only 3 responses received out of 7
```
I have the same problem in version 0.1.0
When calling `let sockets = await io.fetchSockets()` with 4 CPUs using the Node.js cluster module, I get a lot of:
```
timeout reached: only 0 responses received out of 4
...
timeout reached: only 1 responses received out of 3

/root/daddy/node_modules/@socket.io/redis-streams-adapter/dist/cluster-adapter.js:348
            reject(new Error(`timeout reached: only ${storedRequest.current} responses received out of ${storedRequest.expected}`));
            ^

Error: timeout reached: only 1 responses received out of 3
    at Timeout._onTimeout (/root/daddy/node_modules/@socket.io/redis-streams-adapter/dist/cluster-adapter.js:348:28)
    at listOnTimeout (node:internal/timers:573:17)
    at process.processTimers (node:internal/timers:514:7)

Node.js v20.5.1
worker 1035162 died
```
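Side note on that output: the timeout surfaces as a rejected promise, and when nothing catches it the whole worker goes down (hence `worker 1035162 died`). This does not fix the missing responses, but a guarded wrapper, sketched here as a hypothetical helper, at least keeps the process alive while you investigate:

```js
// Hypothetical helper: swallow the adapter timeout instead of letting the
// unhandled rejection kill the worker.
async function safeFetchSockets(io, room) {
  try {
    return await (room ? io.in(room) : io).fetchSockets();
  } catch (err) {
    console.error('fetchSockets failed:', err.message);
    return []; // callers must handle the "no data" case themselves
  }
}
```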
@hysapp @OnlySekai Hello, did you manage to resolve this bug? I hit it with the cluster adapter too, with multiple nodes...
No. I implemented it myself so I could save and retrieve the list of online users; the streams adapter was not optimal at all and had bugs, so I used the sharded adapter instead.
Can you tell me how you did it?
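I can't speak for @hysapp's actual implementation, but a minimal sketch of the idea (keep the online-user list yourself instead of deriving it from `fetchSockets()`) might look like this, assuming a node-redis v4 client and that your auth middleware stores a user id on `socket.data`:

```js
// Sketch: track online users in a plain Redis set, independent of the adapter.
const ONLINE_KEY = 'online-users';

io.on('connection', socket => {
  const userId = socket.data.userId; // assumed to be set by the auth middleware
  redisClient.sAdd(ONLINE_KEY, userId).catch(console.error);

  socket.on('disconnect', () => {
    // Naive version: a user with several tabs/devices would need a per-user
    // connection counter instead of a plain set.
    redisClient.sRem(ONLINE_KEY, userId).catch(console.error);
  });
});

// Read the list from anywhere, no cross-node request involved:
const getOnlineUsers = () => redisClient.sMembers(ONLINE_KEY);
```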
https://github.com/socketio/socket.io-adapter/commit/0e23ff0cc671e3186510f7cfb8a4c1147457296f (included in version 0.2.0) should reduce the number of those errors.
@hysapp do you remember what kind of bugs you encountered?
I think this is not a bug; rather, the Node.js event loop is blocked, so the process never gets a chance to answer the adapter's request before the timeout fires.
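If the blocked-event-loop theory is right, it is easy to check with Node's built-in monitor before blaming the adapter:

```js
// Rough event-loop health check using Node's built-in perf_hooks.
const { monitorEventLoopDelay } = require('node:perf_hooks');

const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

setInterval(() => {
  // Values are in nanoseconds; sustained p99 in the hundreds of milliseconds
  // means the loop stalls long enough to miss adapter requests.
  console.log('event loop delay p99 (ms):', histogram.percentile(99) / 1e6);
  histogram.reset();
}, 10_000);
```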
I found another variant of this error; can someone help?

```
/data/geelevel-signaling/node_modules/socket.io-adapter/dist/cluster-adapter.js:600
            reject(new Error(`timeout reached: missing ${storedRequest.missingUids.size} responses`));
            ^

Error: timeout reached: missing 1 responses
    at Timeout._onTimeout (/data/geelevel-signaling/node_modules/socket.io-adapter/dist/cluster-adapter.js:600:28)
    at listOnTimeout (node:internal/timers:581:17)
    at process.processTimers (node:internal/timers:519:7)

Node.js v22.5.1
```
I tried @socket.io/redis-adapter and did not hit this bug; it only shows up with redis-streams-adapter.
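For reference, the classic pub/sub adapter is a fairly small swap at the Server level; a sketch, reusing `config.REDIS_ADDRESS` and `httpServer` from the setup earlier in the thread (the streams-adapter options such as `maxLen`/`readCount` do not carry over):

```js
const { createClient } = require('redis');
const { createAdapter } = require('@socket.io/redis-adapter');
const { Server } = require('socket.io');

// Pub/sub requires two connections: one for publishing, one for subscribing.
const pubClient = createClient({ url: config.REDIS_ADDRESS });
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  const io = new Server(httpServer, {
    adapter: createAdapter(pubClient, subClient),
  });
});
```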
@darrachequesne
I am getting this error when using the redis-streams-adapter... I have a pretty repeatable setup where I can cause crashes:
When does it happen/Repro:
It happens for me when I have non-persistent servers that spin up, send messages, and then shut down. When a remaining (or persistent) server then calls `.fetchSockets()`, the adapter times out.
For me this is demonstrated with Google Cloud Functions that send Socket.IO messages and connect via redis-streams-adapter. The main API, which is persistent (hosted on App Engine), calls `fetchSockets()` shortly afterwards, times out, and throws:
```
Error: timeout reached: missing 3 responses
    at Timeout._onTimeout (/workspace/node_modules/socket.io-adapter/dist/cluster-adapter.js:600:28)
    at listOnTimeout (node:internal/timers:581:17)
    at process.processTimers (node:internal/timers:519:7)
```
Possible underlying causes
It seems to be related to servers that registered with the adapter but are no longer available because they have exited.
Perhaps those servers need to shut down more gracefully (see the sketch at the end)?
In my case, since the Cloud Function instances are only meant to emit messages, ideally the Socket.IO emitter would be used here, but I understand that it does not work with redis-streams-adapter yet.
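On the graceful-shutdown hypothesis above: I have not verified that this makes the streams adapter forget the departing node, but cleanly closing the Socket.IO server before a short-lived instance exits seems worth trying:

```js
// Untested mitigation: close the Socket.IO server (and its adapter) before
// the short-lived instance exits, so remaining nodes are less likely to wait
// on a peer that is already gone.
const shutdown = () => {
  io.close(() => {
    process.exit(0);
  });
};

process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);
```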