[BUG][NOWEB] - Docker container exits abnormally
I'm not sure whether it's a server problem or a WAHA problem. The container doesn't use many resources, but it exits abnormally. One container runs about 40 sessions.
Aug 07 04:00:12 tcdocker1 dockerd[1275]: time="2024-08-07T04:00:12.239175800+08:00" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=8a8e224eadf95ca49deb7a98b3619a4ab6df8cda62005c9d491e8bb6c05cf9c3
Aug 07 04:00:12 tcdocker1 dockerd[1275]: time="2024-08-07T04:00:12.410455300+08:00" level=info msg="ignoring event" container=8a8e224eadf95ca49deb7a98b3619a4ab6df8cda62005c9d491e8bb6c05cf9c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 08 04:00:12 tcdocker1 dockerd[1275]: time="2024-08-08T04:00:12.115160514+08:00" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=8a8e224eadf95ca49deb7a98b3619a4ab6df8cda62005c9d491e8bb6c05cf9c3
Aug 08 04:00:12 tcdocker1 dockerd[1275]: time="2024-08-08T04:00:12.333365258+08:00" level=info msg="ignoring event" container=8a8e224eadf95ca49deb7a98b3619a4ab6df8cda62005c9d491e8bb6c05cf9c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 09 04:00:11 tcdocker1 dockerd[1275]: time="2024-08-09T04:00:11.751188820+08:00" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=8a8e224eadf95ca49deb7a98b3619a4ab6df8cda62005c9d491e8bb6c05cf9c3
Aug 09 04:00:12 tcdocker1 dockerd[1275]: time="2024-08-09T04:00:12.038993288+08:00" level=info msg="ignoring event" container=8a8e224eadf95ca49deb7a98b3619a4ab6df8cda62005c9d491e8bb6c05cf9c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 09 06:01:05 tcdocker1 dockerd[1275]: time="2024-08-09T06:01:05.857103238+08:00" level=info msg="ignoring event" container=8a8e224eadf95ca49deb7a98b3619a4ab6df8cda62005c9d491e8bb6c05cf9c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
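The "failed to exit within 10s of signal 15 - using the force" lines mean dockerd sent SIGTERM and then force-killed the container after the default 10-second stop grace period, during what looks like a scheduled 04:00 restart. If the engine legitimately needs longer to close ~40 sessions cleanly, the grace period can be extended. A minimal docker-compose sketch (the service name is an assumption; equivalent to `docker run --stop-timeout 60`):

```yaml
services:
  waha:
    image: devlikeapro/waha-plus
    # Give the engine more time to shut down cleanly before SIGKILL
    # (Docker's default stop grace period is 10s)
    stop_grace_period: 60s
    restart: unless-stopped
```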
The WEBJS engine uses more resources but is more stable; however, I only created 9 sessions with it.
node:events:496
      throw er; // Unhandled 'error' event
      ^

Error: WebSocket was closed before the connection was established
    at WebSocket.close (/app/node_modules/@adiwajshing/baileys/node_modules/ws/lib/websocket.js:292:7)
    at WebSocketClient.close (/app/node_modules/@adiwajshing/baileys/lib/Socket/Client/web-socket-client.js:53:21)
    at Object.end (/app/node_modules/@adiwajshing/baileys/lib/Socket/socket.js:263:20)
    at /app/dist/core/engines/noweb/session.noweb.core.js:160:70
    at Timeout._onTimeout (/app/dist/utils/SinglePeriodicJobRunner.js:30:13)
    at listOnTimeout (node:internal/timers:573:17)
    at process.processTimers (node:internal/timers:514:7)
Emitted 'error' event on WebSocketClient instance at:
    at WebSocket.
Node.js v20.12.2
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Version 2024.8.2
Likely some unhandled error... We'll double-check that, thank you!
Do the sessions work fine after the container auto-restarts?
The sessions are normal, but this error also occurs when I use version 2024.7.7.
@HuangDaHui Are you using the original Docker image?
Yes, I pull it with these commands:
docker login -u devlikeapro -p dckr_pat_G_xxxxxx
docker pull devlikeapro/waha-plus
docker logout
I have tried every version; for now I have temporarily stopped the sessions that are not logged in.
Thank you! How often does it happen, approximately? Every 30 minutes, every day, or a few times a day? That will help us understand how quickly we should deploy the fix.
A weird thing is happening: it looks like our auto-restart (we restart the connection to WA every 30 minutes, like the original client does) overlaps with the socket setup. We'll adjust the behavior a bit to see if it helps 🤞
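The overlap described above (a scheduled reconnect firing while a socket is still being set up) can be avoided by having the periodic runner skip a tick while the previous run is still in flight. A minimal sketch of that pattern (the class name is illustrative, not WAHA's actual `SinglePeriodicJobRunner` implementation):

```javascript
// Periodic runner that never lets two runs of the job overlap:
// if a restart is still in progress when the timer fires, the
// tick is skipped instead of tearing down a half-built socket.
class NonOverlappingJobRunner {
  constructor(job, intervalMs) {
    this.job = job;
    this.intervalMs = intervalMs;
    this.running = false;
    this.skipped = 0;
  }

  start() {
    this.timer = setInterval(() => this.tick(), this.intervalMs);
  }

  async tick() {
    if (this.running) {
      this.skipped += 1; // previous run still in flight -- skip this tick
      return;
    }
    this.running = true;
    try {
      await this.job();
    } finally {
      this.running = false;
    }
  }

  stop() {
    clearInterval(this.timer);
  }
}
```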
It's not fixed. It had been running stably for a long time, and the problem suddenly appeared today. It happens about once every 30 minutes to an hour, and I'm starting to suspect it's a problem with my server.
Not sure if too many sessions in one container will affect it, since I have about 40 sessions per container
Now each container only has 5 sessions. I will check again to see if this error occurs.
TY, let's keep the issue open so you get a notification when it's fixed!
Not sure if too many sessions in one container will affect it
40 is not much :)
Will this problem occur if I keep requesting [sessions]/chats/{chatId}/messages?downloadMedia=true&limit=100? I'm implementing a chat-history synchronization feature. Will downloading attachments make the responses very slow, and could repeated requests cause the Docker container to exit abnormally?
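For history sync, fetching media inline is what tends to make responses slow; paging with a modest `limit` and `downloadMedia=false` (downloading media separately only for messages that need it) keeps each request cheap. A sketch of building the paged request URLs (the `/api/{session}/...` path shape and the `offset` parameter are assumptions about the API, so check the docs for your version):

```javascript
// Build paged URLs for the chat-messages endpoint, keeping media
// download off so each page stays small and fast.
function buildMessagesUrl(baseUrl, session, chatId,
    { limit = 100, offset = 0, downloadMedia = false } = {}) {
  const url = new URL(`${baseUrl}/api/${session}/chats/${encodeURIComponent(chatId)}/messages`);
  url.searchParams.set("limit", String(limit));
  url.searchParams.set("offset", String(offset));
  url.searchParams.set("downloadMedia", String(downloadMedia));
  return url.toString();
}

// First two pages of 100 messages each (chat id is illustrative):
const page1 = buildMessagesUrl("http://localhost:3000", "default", "123@c.us");
const page2 = buildMessagesUrl("http://localhost:3000", "default", "123@c.us", { offset: 100 });
```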
Hi! Has this happened again since then? The logs right before it happened would be helpful 🙏
Hi! It's not happening at the moment. What logs should I provide that would be helpful? I checked the Docker container's logs and found nothing special, and I don't know how to query other logs.
Will this problem occur if you keep requesting this interface?
Shouldn't be a problem for NOWEB.
Then I really don’t know what else could cause this situation.
Maybe it was some rare, random circumstance... We've added a few fixes in 2024.9.1 based on the errors you provided; let's see if it helps in this case.
what logs should I provide that would be helpful?
All logs from 10-30 seconds before the container restarted, if possible. Usually, right before a container exits, it logs something about what error happened. If there are no error logs, it could also be a system-level issue, like the OOM killer terminating the container, or a Docker-related issue; you can check the operating system logs as well.
OK, thank you. I'm looking forward to the 2024.9.1 version, and I will try to find the relevant logs.
