Deleting Monitor - Doesn’t Delete - DB Connection Errors
⚠️ Please verify that this bug has NOT been raised before.
- [X] I checked and didn't find a similar issue
🛡️ Security Policy
- [X] I have read this project's Security Policy
Description
Is anyone else seeing issues with the latest version where deleting an existing monitor does not delete it, but instead puts the DB into a state where it no longer accepts connections?
Then I see errors like:
```
Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
```
or other DB timeout errors.
The only way to recover is to recreate the Docker container; after that, everything comes back fine.
👟 Reproduction steps
Delete an existing monitor!
👀 Expected behavior
The monitor should simply be deleted, without causing DB connection and pooling issues.
😓 Actual Behavior
The monitor is not deleted. The UI sits waiting, error messages show DB connection issues, and all monitors then start failing.
🐻 Uptime-Kuma Version
1.20
💻 Operating System and Arch
Docker - Raspberry Pi ARM64
🌐 Browser
Cheers me
🐋 Docker Version
No response
🟩 NodeJS Version
No response
📝 Relevant log output
```
Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
```
Yes, this is a known issue, usually caused by deleting a large amount of heartbeat data. Unfortunately, Uptime Kuma uses SQLite, which allows only one connection at the moment.
I think it will be fixed in 2.0.0 with MariaDB/MySQL support: https://github.com/louislam/uptime-kuma/pull/2720
Is there a workaround until 2.0 is released? Maybe stopping the container, connecting into it, and manually removing the data? Could you provide the necessary DELETE statements?
edit: I had a look at the DB schema and there is a fair number of tables. I don't know how well I can trust SQLite with the relation cascades. Let's say I run DELETE FROM monitor WHERE id=1; will it delete everything that's needed?
> DELETE FROM monitor WHERE id=1; will it delete everything that's needed?
It should be OK. Of course, always back up first.
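For example, with the container stopped, the sqlite3 CLI can take a consistent copy (the paths here are assumptions; adjust them to your data volume):

```
sqlite3 /app/data/kuma.db ".backup /app/data/kuma-backup.db"
```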
Unfortunately it did not:
```
sqlite> select count(*) from heartbeat where monitor_id=3;
90834
sqlite> delete from monitor where id=3;
sqlite> select count(*) from heartbeat where monitor_id=3;
90834
```
For posterity: if you are using the SQLite CLI, foreign keys are not enforced by default. You have to enable enforcement with:
```sql
PRAGMA foreign_keys = ON;
```
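Putting it together, a manual cleanup session would look something like this (a sketch: it assumes the schema's ON DELETE CASCADE constraints and reuses monitor id 3 from the example above; back up the file first):

```sql
PRAGMA foreign_keys = ON;          -- off by default; required in every new session
BEGIN TRANSACTION;
DELETE FROM monitor WHERE id = 3;  -- should now cascade to heartbeat and related rows
COMMIT;
SELECT count(*) FROM heartbeat WHERE monitor_id = 3;  -- expect 0 this time
```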
I encountered a similar issue today when trying to delete a monitor. Nothing happens when I click delete and confirm; it just keeps loading, and nothing works after that. I have to restart the Docker container each time.
@Jiriteach
This will be resolved after the 1.23 release given that https://github.com/louislam/uptime-kuma/pull/2800 and https://github.com/louislam/uptime-kuma/pull/3380 were merged.
=> Could you close this issue? (We forgot to do so in the above PRs.)
This is still occurring for me on 1.23.16
It just happened to me today too, also on 1.23.16
```
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:312:26)
    at runNextTicks (node:internal/process/task_queues:60:5)
    at listOnTimeout (node:internal/timers:538:9)
    at process.processTimers (node:internal/timers:512:7)
    at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:287:28)
    at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:572:22)
    at async RedBeanNode.getRow (/app/node_modules/redbean-node/dist/redbean-node.js:558:22)
    at async RedBeanNode.getCell (/app/node_modules/redbean-node/dist/redbean-node.js:593:19)
    at async Settings.get (/app/server/settings.js:54:21) {
  sql: 'SELECT `value` FROM setting WHERE `key` = ? limit ?',
  bindings: [ 'primaryBaseURL', 1 ]
}
    at process.unexpectedErrorHandler (/app/server/server.js:1905:13)
    at process.emit (node:events:517:28)
    at emit (node:internal/process/promises:149:20)
    at processPromiseRejections (node:internal/process/promises:283:27)
    at processTicksAndRejections (node:internal/process/task_queues:96:32)
    at runNextTicks (node:internal/process/task_queues:64:3)
    at listOnTimeout (node:internal/timers:538:9)
    at process.processTimers (node:internal/timers:512:7)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:312:26)
    at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:287:28)
    at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:572:22)
    at async RedBeanNode.getRow (/app/node_modules/redbean-node/dist/redbean-node.js:558:22)
    at async RedBeanNode.getCell (/app/node_modules/redbean-node/dist/redbean-node.js:593:19)
    at async Settings.get (/app/server/settings.js:54:21)
    at async exports.setting (/app/server/util-server.js:610:12)
    at async Namespace.<anonymous> (/app/server/server.js:1622:13) {
  sql: 'SELECT `value` FROM setting WHERE `key` = ? limit ?',
  bindings: [ 'disableAuth', 1 ]
}
    at process.unexpectedErrorHandler (/app/server/server.js:1905:13)
    at process.emit (node:events:517:28)
    at emit (node:internal/process/promises:149:20)
    at processPromiseRejections (node:internal/process/promises:283:27)
    at process.processTicksAndRejections (node:internal/process/task_queues:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
```
This doesn't just happen when deleting monitors; I didn't delete any monitors and still ran into these errors. They caused all monitors to disappear in the web UI (#5810) because the web UI's queries kept timing out.
I got things working again, but I don't know whether that's the result of anything I tried (e.g., ANALYZE etc.) or just time.
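By "ANALYZE etc." I mean standard SQLite maintenance commands along these lines, run from the sqlite3 CLI against the data file (whether any of them actually helped is exactly what I can't tell):

```sql
ANALYZE;                          -- refresh the statistics used by the query planner
PRAGMA wal_checkpoint(TRUNCATE);  -- fold the WAL file back into the main database
VACUUM;                           -- rebuild the database file and reclaim free pages
```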
I have ~85 monitors. Most of them are "less important" and fire every 5 minutes; some are "more important" and fire every 1 minute. I was keeping 6 months of history, which gave me ~9,000,000 rows in the heartbeat table. I just changed that to 3 months; I'll find out tomorrow whether that helped. (The web UI is working, but sluggish.)
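As a rough sanity check of that figure (the exact 75/10 split is my assumption): 75 monitors at 5-minute intervals produce 288 heartbeats/day each, and 10 monitors at 1-minute intervals produce 1,440/day each, so 75 × 288 + 10 × 1,440 ≈ 36,000 rows/day, or about 6.5 million rows over 180 days. That's the right ballpark, and halving the retention should roughly halve the table.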
> Unfortunately, Uptime Kuma uses SQLite, which allows only one connection at the moment.
I'm not sure why Uptime Kuma limits itself to one SQLite connection. That's especially odd given that Uptime Kuma sets up WAL mode; the whole point of WAL mode is to allow reads while there's a writer, and vice versa. (More specifically: "WAL provides more concurrency as readers do not block writers and a writer does not block readers. Reading and writing can proceed concurrently.")
I'm experimenting with this Dockerfile:
```dockerfile
FROM louislam/uptime-kuma:1
RUN sed -i 's/min: 1/min: 2/g' /app/server/database.js
RUN sed -i 's/max: 1/max: 4/g' /app/server/database.js
```
It changes the SQLite pool's min from 1 to 2 and its max from 1 to 4.
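Expressed directly in code, my reading of what that patched pool amounts to is something like the sketch below (an illustration, not Uptime Kuma's actual database.js; since PRAGMAs apply per connection, the pool's afterCreate hook is the natural place to set them):

```js
// Sketch: knex + sqlite3 with a pool of more than one connection.
// Assumes the knex and sqlite3 packages; the file path is an example.
const knex = require("knex")({
    client: "sqlite3",
    connection: { filename: "/app/data/kuma.db" },
    useNullAsDefault: true,
    pool: {
        min: 2,
        max: 4,
        // Runs once for every connection the pool opens: WAL lets readers
        // proceed alongside a single writer; foreign_keys enables cascades.
        afterCreate: (conn, done) => {
            conn.run("PRAGMA journal_mode = WAL;", (err) => {
                if (err) return done(err, conn);
                conn.run("PRAGMA foreign_keys = ON;", (err2) => done(err2, conn));
            });
        },
    },
});
```

Even with WAL, SQLite still allows only one writer at a time, so a larger pool mainly keeps reads from queuing behind a long-running delete.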
> ~9,000,000 rows in the heartbeat table.
We have fixed that in v2.0. Consider upgrading to the beta.