Uptime Kuma keeps giving the following error
⚠️ Please verify that this bug has NOT been raised before.
- [X] I checked and didn't find a similar issue
🛡️ Security Policy
- [X] I agree to have read this project's Security Policy
📝 Describe your problem
This is the error I am getting:
    at Timeout.safeBeat [as _onTimeout] (/app/server/model/monitor.js:532:25)
2022-06-13T13:02:25.881Z [MONITOR] ERROR: Please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:305:26)
    at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
    at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.findOne (/app/node_modules/redbean-node/dist/redbean-node.js:515:19)
    at async Function.sendCertInfo (/app/server/model/monitor.js:676:23)
    at async Function.sendStats (/app/server/model/monitor.js:641:13) { sql: undefined, bindings: undefined }
    at process.<anonymous> (/app/server/server.js:1696:13)
    at process.emit (node:events:390:28)
    at emit (node:internal/process/promises:136:22)
    at processPromiseRejections (node:internal/process/promises:242:25)
    at processTicksAndRejections (node:internal/process/task_queues:97:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues

[the same KnexTimeoutError trace is printed twice more, verbatim]

2022-06-13T13:02:36.496Z [MANAGE] INFO: Resume Monitor: 42 User ID: 1
2022-06-13T13:02:36.501Z [MONITOR] INFO: Added Monitor: undefined User ID: 1
2022-06-13T13:02:36.990Z [MONITOR] INFO: Monitor #42 'Test': Successful Response: 472 ms | Interval: 60 seconds | Type: http
2022-06-13T13:02:37.366Z [MONITOR] INFO: Monitor #27 'TF Serve Bert Head': Successful Response: 31 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.156Z [MONITOR] INFO: Monitor #28 'Triton Server GPU': Successful Response: 24 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.200Z [MONITOR] INFO: Monitor #30 'Alexanndria Green': Successful Response: 52 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.251Z [MONITOR] INFO: Monitor #32 'Autocomplete Green': Successful Response: 31 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.289Z [MONITOR] INFO: Monitor #34 'Bot Brain Green': Successful Response: 62 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.367Z [MONITOR] INFO: Monitor #36 'Bot Model de Green': Successful Response: 50 ms | Interval: 20 seconds | Type: http
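For anyone landing here with the same KnexTimeoutError: Knex throws it when a query cannot check a connection out of the pool within the acquire timeout, which with SQLite usually means slow disk I/O is keeping every pooled connection busy. Below is a minimal sketch of the knobs involved, assuming a standalone script and a hypothetical kuma.db file; it is not Uptime Kuma's actual configuration.

    // Minimal Knex pool sketch for the sqlite3 client. Illustrative only:
    // the file name and pool numbers are assumptions, not Uptime Kuma's config.
    const knex = require("knex")({
        client: "sqlite3",
        connection: { filename: "./kuma.db" },  // hypothetical database file
        useNullAsDefault: true,                  // required by the sqlite3 client
        pool: { min: 1, max: 10 },               // error fires when all `max` connections stay checked out
        acquireConnectionTimeout: 60000,         // ms to wait for a free connection before KnexTimeoutError
    });

    // Every query borrows a connection from the pool; slow disk I/O returns
    // them late, so later queries time out while waiting for a free one.
    knex.raw("SELECT 1")
        .then(() => console.log("db reachable"))
        .finally(() => knex.destroy());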
🐻 Uptime-Kuma Version
1.16.1
💻 Operating System and Arch
Kubernetes
🌐 Browser
Chrome
🐋 Docker Version
No response
🟩 NodeJS Version
No response
Can someone help me, please? I can't log in and the UI is freezing. @louislam
This is usually due to a disk read/write issue. What is your disk type?
I am using an AWS EBS volume, running as a pod in Kubernetes. It was working fine for a week.
Can you help me fix this please?
An EBS volume should work fine. Please make sure the load average of your system is low, using the top command.
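If the container image does not include top, the same numbers can be read with Node's built-in os module, e.g. via kubectl exec into the pod (a hypothetical one-off check, not part of Uptime Kuma). Note that inside a container, loadavg reflects the whole host, not just the pod's cgroup limits:

    // One-off load check with Node's built-in os module (illustrative).
    // Run inside the pod, e.g. kubectl exec <pod> -- node /tmp/load.js
    const os = require("os");
    const [one, five, fifteen] = os.loadavg();  // 1/5/15-minute load averages
    console.log(`load average: ${one.toFixed(2)} ${five.toFixed(2)} ${fifteen.toFixed(2)}`);
    console.log(`cpu cores visible: ${os.cpus().length}`);  // load should stay well below this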
Hi @louislam, this is the current pod usage (real-time utilization reported by the kubectl top pods command). Can you analyse this and let me know whether this is normal consumption? Can this happen when I add more monitors? I have around 25.
The spec seems too small. 24m ~ 0.024 cpu and 62Mi ~ 62MB memory, right?
Yes, you are right. That is the amount of memory the pod is consuming, not the specs I have configured. It is the real-time CPU and memory usage of Uptime Kuma.
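For reference on the units above: kubectl top reports CPU in millicores (m, thousandths of a core) and memory in mebibytes (Mi, 2^20 bytes), so 62Mi is closer to 65 MB in decimal megabytes. Plain arithmetic:

    // Converting the kubectl top figures quoted above (plain arithmetic).
    const cpuCores = 24 / 1000;                // 24m  -> 0.024 of one core
    const memBytes = 62 * 2 ** 20;             // 62Mi -> 65,011,712 bytes
    console.log(cpuCores);                     // 0.024
    console.log((memBytes / 1e6).toFixed(1));  // "65.0" decimal MB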
We are clearing up our old issues and your ticket has been open for 3 months with no activity. Remove the stale label or comment, or this will be closed in 7 days.
This issue was closed because it has been stalled for 7 days with no activity.