
Node High CPU, GUI Unresponsive

pete83uk opened this issue 1 year ago • 6 comments

⚠️ Please verify that this bug has NOT been raised before.

  • [X] I checked and didn't find a similar issue

🛡️ Security Policy

📝 Describe your problem

I'm running 1.17.1 with around 180 monitors on Ubuntu 20.04. Most days the system becomes unresponsive: it either shows no monitors, or the GUI doesn't load at all. In top I can see that node is running, sometimes at 150% CPU.

I have to reboot the server and restart Docker several times before the system comes back online, and I cannot see what is crippling the server like this.
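Before rebooting, a quick snapshot along these lines (the container name is an assumption; substitute whatever docker ps reports) would at least record what node was doing:

    # capture container and host state before the reboot (container name assumed)
    docker ps
    docker stats --no-stream
    docker logs --tail 200 uptime-kuma
    top -b -n 1 | head -n 20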

The server is a VM with 12 GB RAM and 8 vCPU cores, so it should be able to handle the load fine.

Any ideas?

🐻 Uptime-Kuma Version

1.17.1

💻 Operating System and Arch

Ubuntu 20.04 x64

🌐 Browser

All

🐋 Docker Version

20.10.12

🟩 NodeJS Version

No response

pete83uk avatar Aug 23 '22 18:08 pete83uk

We have been having the same issue for months. The only thing that helps in our case is limiting the database to 3 days of history with the auto-clean feature on the settings page.
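For reference, a rough sketch of the equivalent manual purge, assuming the default SQLite layout (/app/data/kuma.db with a heartbeat table) and that the sqlite3 CLI is present in the image; the auto-clean setting performs a purge like this on a schedule:

    # assumed DB path and table name; verify against your install first
    docker exec uptime-kuma sqlite3 /app/data/kuma.db \
      "DELETE FROM heartbeat WHERE time < datetime('now', '-3 days');"
    # reclaim the freed file space afterwards
    docker exec uptime-kuma sqlite3 /app/data/kuma.db "VACUUM;"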

gaby avatar Aug 24 '22 00:08 gaby

Gaby, that is true. For me the problem appeared at around 370 hosts (NVMe SSD), with database retention limited to 1 day.

htzenadmin avatar Sep 02 '22 15:09 htzenadmin

Same issue.

Ubuntu 22.04.1 LTS running Uptime-Kuma 1.18.0 on an AWS t3a.nano, in Docker, alongside a Dockerized nginx reverse proxy.

Only 5 monitors: 4 HTTP monitors and 1 HTTP keyword monitor.

The server freezes every day and requires a restart.

Hourly docker stats logs show a spike in processes and then a shutdown/crash (the logging setup is sketched after the table below).

In the following, the normal PID count is 12. At some point it spikes to 20, then the Docker container "crashes" and drops to 0 PIDs and 0 memory usage.

CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT   MEM %     NET I/O          BLOCK I/O        PIDS
7d5528641f03   uptime_kuma    4.17%     89.59MiB / 446MiB   20.09%    247MB / 9.05MB   44.8GB / 188MB   12
e6e00f243726   uptime_nginx   0.00%     3.461MiB / 446MiB   0.78%     1.8MB / 2.08MB   194MB / 4.1kB    2
CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT   MEM %     NET I/O          BLOCK I/O        PIDS
7d5528641f03   uptime_kuma    56.59%    104.9MiB / 446MiB   23.52%    267MB / 9.64MB   45.8GB / 202MB   20
e6e00f243726   uptime_nginx   0.00%     2.547MiB / 446MiB   0.57%     1.8MB / 2.09MB   204MB / 4.1kB    2
CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT   MEM %     NET I/O          BLOCK I/O       PIDS
7d5528641f03   uptime_kuma    --        -- / --             --        --               --              --
e6e00f243726   uptime_nginx   0.00%     2.363MiB / 446MiB   0.53%     1.89MB / 2.3MB   2.7GB / 4.1kB   2
CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT   MEM %     NET I/O          BLOCK I/O       PIDS
7d5528641f03   uptime_kuma    --        -- / --             --        --               --              --
e6e00f243726   uptime_nginx   0.00%     2.363MiB / 446MiB   0.53%     1.89MB / 2.3MB   2.7GB / 4.1kB   2
CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT   MEM %     NET I/O          BLOCK I/O       PIDS
7d5528641f03   uptime_kuma    --        -- / --             --        --               --              --
e6e00f243726   uptime_nginx   0.00%     2.363MiB / 446MiB   0.53%     1.89MB / 2.3MB   2.7GB / 4.1kB   2
CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT   MEM %     NET I/O          BLOCK I/O       PIDS
7d5528641f03   uptime_kuma    0.00%     0B / 0B             0.00%     0B / 0B          0B / 0B         0
e6e00f243726   uptime_nginx   0.00%     2.363MiB / 446MiB   0.53%     1.89MB / 2.3MB   2.7GB / 4.1kB   2
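
The snapshots above come from a crontab entry along these lines (schedule and log path are assumptions):

    # append an hourly docker stats snapshot to a log file (paths assumed)
    0 * * * * /usr/bin/docker stats --no-stream >> /var/log/docker-stats.log 2>&1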

Does Uptime Kuma provide more detailed logging?

Vigrond avatar Sep 18 '22 18:09 Vigrond

Following up on my last message: I solved my issue.

Ubuntu has automatic updates enabled by default, periodically running apt-check. While updating, this script can consume 200 MB+ of memory, which triggered the OOM killer and froze the system.

Replacing Ubuntu with Amazon Linux 2 (a RHEL flavor) solved my problem, with a lower OS footprint as a bonus.
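For anyone who wants to stay on Ubuntu instead, a sketch of the check and the workaround (these are the stock Ubuntu unit names; verify on your release):

    # confirm the OOM killer actually fired
    dmesg | grep -i 'out of memory'
    # stop the periodic apt machinery that was eating the memory
    sudo systemctl disable --now unattended-upgrades.service apt-daily.timer apt-daily-upgrade.timer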

Vigrond avatar Sep 28 '22 00:09 Vigrond

Same issue. Sometimes kuma spends a lot of CPU time in iowait:

[screenshots: CPU usage and iowait graphs]

Strange messages in the logs:

    at process.<anonymous> (/app/server/server.js:1699:13)
    at process.emit (node:events:527:28)
    at emit (node:internal/process/promises:140:20)
    at processPromiseRejections (node:internal/process/promises:274:27)
    at processTicksAndRejections (node:internal/process/task_queues:97:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:305:26)
    at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
    at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:588:22)
    at async RedBeanNode.getRow (/app/node_modules/redbean-node/dist/redbean-node.js:574:22)
    at async Function.calcUptime (/app/server/model/monitor.js:729:22)
    at async Function.sendUptime (/app/server/model/monitor.js:792:24)
    at async Function.sendStats (/app/server/model/monitor.js:671:13) {

A docker restart heals the problem for a few days...
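
Until the root cause is found, a crude cron watchdog along these lines (container name and port are assumptions; 3001 is the default) automates that restart:

    # restart the container whenever the UI stops answering (name/port assumed)
    */5 * * * * curl -fsS -m 10 http://localhost:3001/ > /dev/null || /usr/bin/docker restart uptime-kuma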

kinnalru avatar Sep 28 '22 12:09 kinnalru

We are clearing up our old issues and your ticket has been open for 3 months with no activity. Remove stale label or comment or this will be closed in 2 days.

github-actions[bot] avatar Dec 27 '22 18:12 github-actions[bot]

This issue was closed because it has been stalled for 2 days with no activity.

github-actions[bot] avatar Dec 29 '22 18:12 github-actions[bot]