
[Error: PRAGMA journal_mode = WAL - SQLITE_BUSY: database is locked]

Open igorpalant opened this issue 2 years ago • 7 comments

⚠️ Please verify that this bug has NOT been raised before.

  • [X] I checked and didn't find similar issue

🛡️ Security Policy

📝 Describe your problem

I am trying to host Uptime Kuma in Azure Container Instances. I have created an Azure File Storage share and I can see that it is properly mounted into the container. Here is the log as Uptime Kuma starts (note: it works fine if I do not mount the Azure File share, but then data is not persisted, so I am sure it is a storage issue. When I look into Azure File Storage, I see the db and log files created successfully, so it is not a permissions problem):

==> Performing startup jobs and maintenance tasks
==> Starting application with user 0 group 0
Welcome to Uptime Kuma
Node Env: production
Importing Node libraries
Importing 3rd-party libraries
Importing this project modules
Prepare Notification Providers
Version: 1.11.1
Creating express and socket.io instance
Server Type: HTTP
Data Dir: ./data/
Connecting to the Database
Trace: [Error: PRAGMA journal_mode = WAL - SQLITE_BUSY: database is locked] { errno: 5, code: 'SQLITE_BUSY' }
    at process. (/app/server/server.js:1542:13)
    at process.emit (events.js:400:28)
    at processPromiseRejections (internal/process/promises.js:245:33)
    at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
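For context, SQLite's WAL mode relies on a shared-memory sidecar file (`kuma.db-shm`) that SMB/CIFS and many NFS mounts cannot map coherently, which is why this PRAGMA is where the failure shows up. A minimal way to see what mode a database file is in, assuming the `sqlite3` CLI is available (a scratch file stands in here for `./data/kuma.db` on the mounted share):

```shell
# Scratch file standing in for ./data/kuma.db; point these at the real
# database on the mounted volume to reproduce.
db="$(mktemp -d)/kuma.db"
# This is the statement that fails with SQLITE_BUSY on CIFS/NFS mounts,
# because WAL needs shared-memory locks the network filesystem can't provide.
sqlite3 "$db" 'PRAGMA journal_mode=WAL;'
# WAL is persisted in the database file itself, so a fresh connection reports it:
sqlite3 "$db" 'PRAGMA journal_mode;'
```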

🐻 Uptime-Kuma Version

1.11.1

💻 Operating System and Arch

na

🌐 Browser

na

🐋 Docker Version

Azure Container Instance

🟩 NodeJS Version

No response

igorpalant avatar Dec 25 '21 23:12 igorpalant

Already mentioned in the readme: ⚠️ Please use a local volume only. Other types such as NFS are not supported.

louislam avatar Dec 26 '21 02:12 louislam

Thank you very much. I am curious what prevents such support and is there a potential workaround?

igorpalant avatar Dec 26 '21 03:12 igorpalant

In the meantime, I was facing the same error (in a Kubernetes setup with NFS as the storage option). Not a definitive fix, but as a workaround you could do this in your Uptime Kuma data folder:

  1. Copy the current kuma.db file to a new file: $ cp kuma.db kuma.db-new
  2. Rename old kuma.db file (but keep it there for now, just in case): $ mv kuma.db{,.old}
  3. Rename new file as the original one: $ mv kuma.db-new kuma.db
  4. Restart Uptime Kuma (Docker container, Kubernetes pod, whatever)
  5. If it works, it is now safe to delete the kuma.db.old one

Additionally, if it keeps failing, take a look at the new file's permissions, i.e. try (temporarily!) $ chmod 777 kuma.db and see if it works after that, then adjust the permissions properly (whenever possible, don't leave it at 777).
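The steps above, combined into one sketch — a scratch directory stands in for the real data folder, and Uptime Kuma should be stopped before doing this for real:

```shell
# Demo of the copy/rename workaround; run the same cp/mv steps inside
# Uptime Kuma's data folder with the app stopped.
demo="$(mktemp -d)" && cd "$demo"
echo 'placeholder' > kuma.db     # stands in for the real SQLite file

cp kuma.db kuma.db-new           # 1. copy to a new file (fresh inode on the share)
mv kuma.db kuma.db.old           # 2. keep the original around as a fallback
mv kuma.db-new kuma.db           # 3. promote the copy to the original name
ls                               # kuma.db  kuma.db.old
# 4. restart Uptime Kuma; 5. once it starts cleanly: rm kuma.db.old
```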

mtnezm avatar Jan 01 '22 01:01 mtnezm

Just tried to implement Uptime Kuma in Azure ACI with a CIFS-backed volume and faced the same error. A similar issue was described here https://github.com/grafana/grafana/issues/20549#issuecomment-557108788 but it seems Uptime Kuma is already using WAL journalling?

mateuszdrab avatar Jul 12 '22 23:07 mateuszdrab

I have to ask (mainly out of ignorance): how would the container know? Isn't this one of the main purposes of containers, to be able to restore a fresh image while keeping your data intact (stored elsewhere)? Aren't containers essentially virtualization at the application layer (meaning the container wouldn't even know this is stored off-instance; it's just a mount point as far as the app is concerned), right?

Has anyone come up with a workaround for this? I want to do the same thing (run it on centralized storage, not local). In a really ideal world, I could store the database in a central place and have multiple boxes hit it, but for now I'll settle for simply being able to keep the database somewhere that survives redeployments.

Thanks.

kristiandg avatar Jul 17 '22 22:07 kristiandg

I resolved the first error by converting the database manually after it was created:

  • downloaded the database from my Azure file share volume (SMB/CIFS)
  • ran this command: sqlite3 kuma.db 'PRAGMA journal_mode=wal;'
  • after that the database was running fine. I'm now getting this error though:
2022-09-21T12:24:17.087Z [DB] INFO: Data Dir: ./data/
2022-09-21T12:24:17.087Z [SERVER] INFO: Connecting to the Database
2022-09-21T12:24:17.244Z [DB] INFO: SQLite config:
[ { journal_mode: 'wal' } ]
[ { cache_size: -12000 } ]
2022-09-21T12:24:17.248Z [DB] INFO: SQLite Version: 3.38.3
2022-09-21T12:24:17.249Z [SERVER] INFO: Connected
2022-09-21T12:24:17.252Z [DB] INFO: Your database version: 0
2022-09-21T12:24:17.252Z [DB] INFO: Latest database version: 10
2022-09-21T12:24:17.252Z [DB] INFO: Database patch is needed
2022-09-21T12:24:17.252Z [DB] INFO: Backing up the database
Error: EACCES: permission denied, copyfile './data/kuma.db-shm' -> './data/kuma.db-shm.bak0'
    at Object.copyFileSync (node:fs:2817:3)
    at Function.backup (/app/server/database.js:453:20)
    at Function.patch (/app/server/database.js:188:22)
    at async initDatabase (/app/server/server.js:1588:5)
    at async /app/server/server.js:155:5 {
  errno: -13,
  syscall: 'copyfile',
  code: 'EACCES',
  path: './data/kuma.db-shm',
  dest: './data/kuma.db-shm.bak0'
}
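This EACCES is the database patcher failing to copy the `kuma.db-shm` WAL sidecar for its backup. A hedged fix sketch, shown on a scratch directory standing in for `./data` (chown to the UID the container runs as is the cleaner option where the mount honours ownership):

```shell
# Scratch directory standing in for ./data on the mounted share.
d="$(mktemp -d)"
touch "$d/kuma.db" "$d/kuma.db-shm" "$d/kuma.db-wal"   # files WAL mode creates

# Make the database and its sidecars writable by the container user;
# alternatively: chown <container-uid> "$d"/kuma.db*
chmod 664 "$d"/kuma.db*
ls -l "$d"
```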

pimjansen avatar Sep 21 '22 12:09 pimjansen

We are clearing up our old issues and your ticket has been open for 3 months with no activity. Remove stale label or comment or this will be closed in 2 days.

github-actions[bot] avatar Dec 20 '22 18:12 github-actions[bot]

This issue was closed because it has been stalled for 2 days with no activity.

github-actions[bot] avatar Dec 22 '22 18:12 github-actions[bot]

Okay, so here is my workaround for anyone coming from Google to solve this: you cannot create the database on a network share like CIFS or NFS. What you can do is create the whole data folder locally (run an instance of uptime-kuma with a simple docker run command, for example with a local -v volume). Once it's done and you have verified that it's working, you can copy it to your central network storage and spin up a new container using that volume instead. I've been using this for a while and have had no issues so far.
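That flow can be sketched as below; the docker step is shown as a comment, and scratch directories stand in for the local folder and the network share (all paths illustrative):

```shell
# 1. Seed the data folder with a throwaway local run, e.g.:
#      docker run -d -p 3001:3001 -v "$PWD/kuma-data:/app/data" \
#        --name uptime-kuma louislam/uptime-kuma:1
#    Verify at http://localhost:3001, then stop and remove the container.

local_dir="$(mktemp -d)"   # stands in for the locally seeded ./kuma-data
share_dir="$(mktemp -d)"   # stands in for the mounted network share
touch "$local_dir/kuma.db" "$local_dir/kuma.db-wal"   # files the seed run creates

# 2. Copy everything to the share, preserving attributes, then point the
#    new container's -v volume at the share directory instead.
cp -a "$local_dir/." "$share_dir/"
ls "$share_dir"
```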

miberecz avatar Jan 27 '23 11:01 miberecz

Thanks @miberecz this worked for me on Azure Kubernetes (AKS) too. Here's the process I went through that should work for most cloud providers:

  1. Start up a local container and mount the volume somewhere safe; the final folder should be data, as tar passes that along later when we copy:

docker run -d --restart=always \
  -p 3001:3001 \
  -v ~/data/:/app/data \
  --name uptime-kuma \
  louislam/uptime-kuma:latest

  2. Visit http://localhost:3001 and set your initial username and password

  3. Create your namespace and PVC

  4. Mount an "innocuous pod" to it, like nginx or echo. If you try mounting uptime-kuma it will create a db and lock it; you don't want that up first. The pod should have a similar mount as uptime-kuma, something like this:

    spec:
      volumes:
        - name: data-mount
          persistentVolumeClaim:
            claimName: uptime-data-claim0
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - mountPath: '/app/data'
              name: data-mount

  5. Copy your files over using your innocuous pod mounted to the PVC:

cd ~ && tar cf - ./data | kubectl exec -i -n YOUR_NAMESPACE YOUR_POD_NAME -- tar xf - -C /app/ --warning=no-unknown-keyword

  6. Remove your innocuous pod

  7. Start up your uptime-kuma! 🎉
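The "create your namespace and PVC" step above is provider-specific; a hypothetical manifest matching the claimName used in the pod spec might look like this (the namespace, access mode, and size here are assumptions, not values from the thread):

```yaml
# Hypothetical PVC matching claimName "uptime-data-claim0" from the pod spec above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-data-claim0
  namespace: uptime       # assumption: whatever namespace you created
spec:
  accessModes:
    - ReadWriteOnce       # assumption: single node mounts the volume
  resources:
    requests:
      storage: 1Gi        # assumption: adjust to your retention needs
```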

kdubb1337 avatar Mar 16 '23 21:03 kdubb1337

It's good to see there's a workaround, but honestly it would probably be better to use a proper SQL backend for additional performance and stability. Hopefully support for that can be added at some point.

mateuszdrab avatar Mar 17 '23 00:03 mateuszdrab