
Uptime Kuma keeps giving the following error

Open iamdempa opened this issue 2 years ago • 9 comments

⚠️ Please verify that this bug has NOT been raised before.

  • [X] I checked and didn't find similar issue

🛡️ Security Policy

📝 Describe your problem

This is the error I am getting

```
at Timeout.safeBeat [as _onTimeout] (/app/server/model/monitor.js:532:25)
2022-06-13T13:02:25.881Z [MONITOR] ERROR: Please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:305:26)
    at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
    at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.findOne (/app/node_modules/redbean-node/dist/redbean-node.js:515:19)
    at async Function.sendCertInfo (/app/server/model/monitor.js:676:23)
    at async Function.sendStats (/app/server/model/monitor.js:641:13) { sql: undefined, bindings: undefined }
    at process. (/app/server/server.js:1696:13)
    at process.emit (node:events:390:28)
    at emit (node:internal/process/promises:136:22)
    at processPromiseRejections (node:internal/process/promises:242:25)
    at processTicksAndRejections (node:internal/process/task_queues:97:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
[the identical KnexTimeoutError trace repeats two more times]
2022-06-13T13:02:36.496Z [MANAGE] INFO: Resume Monitor: 42 User ID: 1
2022-06-13T13:02:36.501Z [MONITOR] INFO: Added Monitor: undefined User ID: 1
2022-06-13T13:02:36.990Z [MONITOR] INFO: Monitor #42 'Test': Successful Response: 472 ms | Interval: 60 seconds | Type: http
2022-06-13T13:02:37.366Z [MONITOR] INFO: Monitor #27 'TF Serve Bert Head': Successful Response: 31 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.156Z [MONITOR] INFO: Monitor #28 'Triton Server GPU': Successful Response: 24 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.200Z [MONITOR] INFO: Monitor #30 'Alexanndria Green': Successful Response: 52 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.251Z [MONITOR] INFO: Monitor #32 'Autocomplete Green': Successful Response: 31 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.289Z [MONITOR] INFO: Monitor #34 'Bot Brain Green': Successful Response: 62 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.367Z [MONITOR] INFO: Monitor #36 'Bot Model de Green': Successful Response: 50 ms | Interval: 20 seconds | Type: http
```
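The `KnexTimeoutError` above means every pooled database connection was still checked out when a query had waited the full acquisition timeout. A minimal sketch of the standard Knex options involved (illustrative only: Uptime Kuma configures its own Knex instance internally, and the database path below is hypothetical):

```javascript
// Illustrative only: the standard Knex options behind the error above.
// "The pool is probably full" fires when all `pool.max` connections are
// busy and a query has waited `acquireConnectionTimeout` ms for one.
const knexConfig = {
    client: "sqlite3",
    connection: { filename: "./data/kuma.db" }, // hypothetical path
    useNullAsDefault: true,
    pool: { min: 1, max: 1 },        // SQLite is effectively single-writer
    acquireConnectionTimeout: 60000, // Knex default: 60 000 ms
};
// On slow storage, each query holds its connection longer, so the pool
// drains even at a modest monitor count.
console.log(`max=${knexConfig.pool.max} timeout=${knexConfig.acquireConnectionTimeout}ms`);
```

This is why the thread below turns to disk speed and pod resources: slow I/O stretches query time until the pool starves, rather than the pool size itself being misconfigured.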

🐻 Uptime-Kuma Version

1.16.1

💻 Operating System and Arch

Kubernetes

🌐 Browser

Chrome

🐋 Docker Version

No response

🟩 NodeJS Version

No response

Can someone help me, please? I can't log in and the UI is freezing. @louislam

iamdempa avatar Jun 13 '22 13:06 iamdempa

This is usually due to a disk read/write issue. What is your disk type?

louislam avatar Jun 13 '22 13:06 louislam

I am using an AWS EBS volume. It is running as a pod in Kubernetes, and it was working fine for a week.

iamdempa avatar Jun 13 '22 13:06 iamdempa

Can you help me fix this please?

iamdempa avatar Jun 13 '22 13:06 iamdempa

An EBS volume should work.

Please make sure the load average of your system is low, e.g. with the `top` command.

louislam avatar Jun 13 '22 13:06 louislam
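A quick way to run the check suggested above (standard Linux tools; inside a pod you may need `kubectl exec` into the container first):

```shell
# Check host load. A 1-minute load average well above the CPU count
# points to the CPU/disk contention that can trigger the Knex pool
# timeout seen in this issue.
uptime             # last three fields: 1/5/15-minute load averages
nproc              # number of CPUs to compare the load against
cat /proc/loadavg  # Linux: load averages plus run-queue counts
```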

Hi @louislam, this is the current usage of the pod (from the `kubectl top pods` command). Can you take a look and let me know whether this is normal consumption?

[screenshot: `kubectl top pods` output]

iamdempa avatar Jun 13 '22 13:06 iamdempa

Can this happen when I add more monitors? I have around 25.

iamdempa avatar Jun 13 '22 13:06 iamdempa

The spec seems too small. 24m ≈ 0.024 CPU and 62Mi ≈ 65 MB of memory, right?

daeho-ro avatar Jun 13 '22 14:06 daeho-ro

Yes, you are right. That is the amount the pod is actually consuming, not the specs I have configured; it is the real-time CPU and memory usage of Uptime Kuma.

iamdempa avatar Jun 13 '22 14:06 iamdempa
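If the pod really is being throttled at roughly 0.024 CPU, raising the resource requests/limits in the pod spec is the usual remedy. A hedged sketch of the relevant fragment (the container name and all numbers are hypothetical, not values recommended in this thread):

```yaml
# Illustrative only: give the Uptime Kuma container more headroom.
# Tune the values to your cluster; these are placeholders.
spec:
  containers:
    - name: uptime-kuma      # hypothetical container name
      resources:
        requests:
          cpu: 250m          # 0.25 CPU, well above the observed 24m
          memory: 256Mi
        limits:
          cpu: "1"
          memory: 512Mi
```

Without a `requests` entry, the scheduler may place the pod on a crowded node, and a low CPU limit throttles it, which lengthens SQLite query times and can starve the Knex pool as seen above.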

We are clearing up our old issues and your ticket has been open for 3 months with no activity. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] avatar Sep 12 '22 00:09 github-actions[bot]

This issue was closed because it has been stalled for 7 days with no activity.

github-actions[bot] avatar Sep 19 '22 00:09 github-actions[bot]