Database errors when loading repo from NFS Share
Version
0.17.6
Operating System
Windows
Distribution Method
msi (Windows)
Describe the issue
When attempting to open a project on an NFS share (an SMB share was going too slow), I am getting errors from the app relating to the database files on the share. I have verified that the files do exist after the app attempts to create them. I have tried multiple things to disable locking, such as mounting with nolock and turning asynchronous operations on/off on the NFS server, to no avail.
My linux NFS export is:
/home/techmage/repos/ 192.168.1.2(rw,sync,all_squash,anonuid=1000,anongid=1000,nohide)
My Windows mount command is:
mount -o nolock \\192.168.1.3\home\techmage\repos Z:
When selecting the network mount as the repo and waiting for a bit, I get the following "file in use" error:
command: set_project_active
params: {"id":"e5beb465-3060-4717-8d2a-2c37b5a4b953"})
Failed to rename Z:\wyvern\web\.git\gitbutler\but.sqlite to Z:\wyvern\web\.git\gitbutler\but.sqlite.maybe-broken-01 - application may fail to startup: The process cannot access the file because it is being used by another process. (os error 32)
After proceeding anyways to open the project, the "rules" section loads for a bit then spits out this error:
command: list_workspace_rules
params: {"projectId":"e5beb465-3060-4717-8d2a-2c37b5a4b953"})
database is locked
I have verified that the files are being created and written to, and they have the "correct" user/group IDs and permissions assigned to them.
When I remove the project, the folder is deleted and things are correctly cleaned up.
How to reproduce (Optional)
- Set up an NFS share (I did so on Linux, though other OSs might exhibit the issue as well)
- Set up and configure Windows' built-in "Client for NFS"
- Mount the NFS share
- Add a new local repository pointing to a git repo on the NFS share
Expected behavior (Optional)
No errors, the app works as if it were working with local files.
Relevant log output (Optional)
command: set_project_active
params: {"id":"e5beb465-3060-4717-8d2a-2c37b5a4b953"})
Failed to rename Z:\wyvern\web\.git\gitbutler\but.sqlite to Z:\wyvern\web\.git\gitbutler\but.sqlite.maybe-broken-01 - application may fail to startup: The process cannot access the file because it is being used by another process. (os error 32)
command: list_workspace_rules
params: {"projectId":"e5beb465-3060-4717-8d2a-2c37b5a4b953"})
database is locked
Thanks a lot for reporting and gathering all the extra information, it's much appreciated!
This is very unexpected, as I naively assumed SQLite would just work under all conditions.
And given it's SQLite, there isn't much we can do except figure out whether there are patterns to avoid that we have control over. And due to the usage of diesel (an ORM), controlling the database directly is non-obvious.
From what Qwen3 comes up with, it looks like a few things can be tried when mounting the NFS share, and maybe there is more to try as well.
In any case, fixing this will require some effort (and probably more research as well to get more real-world experiences, written by humans preferably).
From https://sqlite.org/lockingv3.html:
SQLite uses POSIX advisory locks to implement locking on Unix. On Windows it uses the LockFile(), LockFileEx(), and UnlockFile() system calls. SQLite assumes that these system calls all work as advertised. If that is not the case, then database corruption can result. One should note that POSIX advisory locking is known to be buggy or even unimplemented on many NFS implementations (including recent versions of Mac OS X) and that there are reports of locking problems for network filesystems under Windows. Your best defense is to not use SQLite for files on a network filesystem.
So it seems nolock isn't what we'd want on NFS; maybe allowing locks and hoping for a correct implementation is all we can do?
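For what it's worth, "database is locked" is just `SQLITE_BUSY` surfacing: any second writer that cannot obtain the lock within its busy-timeout gets exactly this error, which a purely local two-connection sketch reproduces (no NFS involved — NFS only makes the window much more likely to be hit):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "but.sqlite")

# First connection takes a write lock and holds it.
holder = sqlite3.connect(path, isolation_level=None)
holder.execute("CREATE TABLE t (x)")
holder.execute("BEGIN IMMEDIATE")

# Second connection with a short busy-timeout fails the same way the app does.
contender = sqlite3.connect(path, isolation_level=None, timeout=0.1)
try:
    contender.execute("BEGIN IMMEDIATE")
    error = None
except sqlite3.OperationalError as e:
    error = str(e)

holder.execute("COMMIT")
print(error)  # database is locked
```

On a correctly-locking filesystem the second writer simply retries and succeeds; the NFS failure mode is that the lock state itself can't be trusted.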
CC @krlvi
Qwen3 thinks
SQLite can work on NFS (Network File System), but it's known to have performance and reliability issues due to NFS's lack of strong file locking and consistency guarantees. Here’s how to make SQLite work better and more reliably on NFS:
✅ SQLite on NFS: Quick Tips
- Use `WAL` journal mode: `PRAGMA journal_mode=WAL;`
- Set `synchronous=NORMAL` (or `OFF` if loss is acceptable): `PRAGMA synchronous=NORMAL;`
- Mount NFS with: `noatime,nodiratime,hard,intr,actimeo=30`
- Avoid concurrent writes — only one writer allowed.
- Use local copy + sync (e.g., `rsync`) instead of direct NFS access.
- Add retry logic for `database is locked` errors.

❌ Avoid `DELETE` journal mode; `noatime` alone isn't enough, and multiple writers = failure.
💡 Best for production: Use a local SQLite file + sync to NFS.
SQLite on NFS is possible but fragile — design for failure.
✅ 1. Use journal_mode=WAL (Recommended)
SQLite’s Write-Ahead Logging (WAL) mode improves concurrency and reduces locking conflicts, which is critical on NFS.
PRAGMA journal_mode=WAL;
- Why it helps: WAL allows multiple readers to proceed without blocking, and reduces the need for exclusive locks.
- NFS benefit: Less contention and fewer deadlocks.
⚠️ Ensure your NFS client supports `O_DIRECT` or similar for better performance (see below).
✅ 2. Avoid journal_mode=DELETE (Default)
The default DELETE mode uses exclusive file locks, which are problematic on NFS due to delayed lock propagation.
❌ Avoid: `journal_mode=DELETE` on NFS.
✅ 3. Use synchronous=NORMAL or synchronous=OFF (with caution)
Default is FULL, which syncs to disk on every write — very slow on NFS.
PRAGMA synchronous=NORMAL; -- or OFF (for non-critical data)
- `NORMAL`: Syncs only on commit, not every write (much faster).
- `OFF`: No syncs (fastest), but risk of data loss on crash.

🔥 Use `NORMAL` for most cases. Use `OFF` only if you can tolerate losing up to ~1 second of data.
✅ 4. Mount NFS with noatime and nodiratime
Prevent unnecessary metadata updates that slow down file access.
mount -o rw,relatime,nodiratime,noatime,nolock,hard,intr,soft,tcp,actimeo=30 server:/path /local/mount
- `noatime`: Don’t update access time on read.
- `nodiratime`: Same for directories.
- `actimeo=30`: Adjust access time cache (30 seconds).
- `hard`: Retry on failure (avoid hangs).
- `intr`: Allow interrupting blocked calls (e.g., Ctrl+C).
- `soft` is risky; avoid unless you want immediate failure.

⚠️ `nolock` disables NFS lockd, which may cause issues — use only if you have WAL and don’t rely on file locking.
✅ 5. Use O_DIRECT (if supported)
Avoid caching via the OS page cache, which can cause inconsistent views on NFS.
Set PRAGMA mmap_size and use O_DIRECT via sqlite3_open_v2().
```c
sqlite3 *db;
/* SQLITE_OPEN_URI is required for the "file:" URI to be interpreted as one. */
int rc = sqlite3_open_v2("file:/path/to/db?mode=rw", &db,
                         SQLITE_OPEN_READWRITE | SQLITE_OPEN_URI, NULL);
```
- Use `file:` URI with `mode=rw` to enable `O_DIRECT` on some systems.
- Requires kernel support and careful tuning.

🔍 Check `sqlite3_db_cachemsize()` and `sqlite3_db_pagecount()` to monitor cache usage.
✅ 6. Avoid Concurrent Access from Multiple Clients
SQLite is not designed for multi-client simultaneous access.
- Only one process should write to the database at a time.
- If multiple clients must access, use a single writer, and readers can use WAL.
❌ Never have multiple writers on the same SQLite file via NFS.
Use a single server process to handle writes, and use read-only connections for others.
✅ 7. Use a Local Cache (Best Practice)
Instead of mounting the SQLite file directly over NFS, consider:
- Sync the DB file to a local disk (e.g., via `rsync` or a script).
- Work on the local copy.
- Push changes back to NFS only when needed.
This avoids all NFS locking and consistency issues.
✅ Use `rsync` + cron or `inotify` to keep local and remote in sync.
✅ 8. Monitor and Tune for Consistency
- Check for `SQLITE_BUSY` or `SQLITE_LOCKED` errors (common on NFS).
- Add retry logic in your app:
```python
import time
import sqlite3

for attempt in range(5):
    try:
        # uri=True is needed for the "file:" URI to be honored.
        conn = sqlite3.connect("file:/nfs/db.sqlite?mode=rw", uri=True,
                               check_same_thread=False)
        break
    except sqlite3.OperationalError as e:
        if "database is locked" in str(e):
            time.sleep(0.1)
            continue
        raise
```
✅ 9. Consider Alternatives
If you need high concurrency or reliability over NFS, consider:
- PostgreSQL with `pg_nfs` or `pg_filedump` (better for networked databases).
- Distributed databases like CockroachDB, etcd, or even a simple API layer.
🔚 Summary: Best Practices for SQLite on NFS
| Setting | Recommendation |
|---|---|
| `journal_mode` | `WAL` ✅ |
| `synchronous` | `NORMAL` ✅ (or `OFF` if acceptable) |
| NFS mount options | `noatime`, `nodiratime`, `hard`, `intr` ✅ |
| Concurrency | One writer only ✅ |
| File locking | Avoid — use WAL + `O_DIRECT` |
| Cache | Use local copy or sync via `rsync` |
| Alternatives | Consider PostgreSQL or local caching |
📌 Final Advice
SQLite on NFS is fragile — it can work, but only if you design carefully. Use WAL, disable unnecessary syncs, avoid concurrent writes, and prefer local caching.
For production use, prefer a local SQLite copy + sync over direct NFS access.
One potential way to mitigate this would be to detect this situation and persist the sqlite file somewhere other than <reporoot>/.git/gitbutler. If we don't trust Windows symlinks, we could create a but.sqlite.link.txt file that points to where the sqlite file actually lives. WDYT?
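The pointer-file idea could look roughly like the sketch below. To be clear, `but.sqlite.link.txt` and its format (a single line naming the real storage directory) are assumptions from this discussion, not anything the app implements:

```python
import os

def resolve_db_dir(gitbutler_dir):
    """Return the directory that actually holds but.sqlite.

    If the hypothetical pointer file but.sqlite.link.txt exists, its
    single line names the real storage directory (e.g. on a local disk);
    otherwise the default .git/gitbutler location is used.
    """
    link = os.path.join(gitbutler_dir, "but.sqlite.link.txt")
    if os.path.isfile(link):
        with open(link, encoding="utf-8") as f:
            return f.read().strip()
    return gitbutler_dir
```

The nice property is that the pointer file itself is tiny and write-once, so it never hits the NFS locking path the database does.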
That's a great idea and I think it will work! It seems the app already thought the DB was broken, so that would be a good way to detect this case and change a project setting (ideally via repository-local Git config) to change the path to the sqlite database. I'd go with configuration over anything else, as it gives power to the user.
And maybe… instead of deducing anything, we should provide this information prominently so users can configure it if they run into this.
I suspect that this is a rare scenario so for now we could have it set up as an override that is set only when/if a user experiences this issue.
Perhaps it's also fair to skip the UI and just suggest editing the projects.json file (the db error could link to docs with instructions)
I agree, a fix could be minimal, and am thinking in the direction of adding some notes to the migration-error that was mentioned here.
Regarding the location of the configuration: to me, projects.json isn't a place I'd want to use at all, as it's much more reasonable to keep project-specific configuration with the project. And with that constraint, one basically has to use repo-local .git/config entries, which is very OK to do to my mind.
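A repo-local override along these lines might look like the following. Note the key name `gitbutler.dbPath` is purely hypothetical here; nothing reads it yet:

```shell
# Create a throwaway repo to demonstrate (stand-in for the NFS-mounted one).
repo=$(mktemp -d)
git -C "$repo" init --quiet

# Point the database at fast local storage instead of the network mount.
# Repo-local config keeps the override with the project it belongs to.
git -C "$repo" config gitbutler.dbPath "$HOME/.local/share/gitbutler/wyvern-web"

# At startup the app would read it back like this and fall back to
# .git/gitbutler when the key is unset.
git -C "$repo" config --get gitbutler.dbPath
```

Because `.git/config` is never committed or shared, the override stays machine-specific, which fits the "only set it when a user hits this" approach.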
Wow, a lot of things were poppin' while I was asleep, lol.
I'm willing to help test changes. I started to look in to where the logic might lie yesterday, but couldn't figure it out within the limited time I was looking. For now, I'll figure out how to build the project so that when the time comes, I can build and test locally then report back here.
Thanks a lot! I will let you know once there is something to fix.
Please note that we also have Nightly builds so there should be no need to go through the trouble of building yourself.
@krlvi It just came to my mind that maybe… we should make the .git/gitbutler directory configurable. That way, we'd have a chance to side-step all issues related to IO into a network share. It's notable that even though we might have everything in the database one day, there still is the cross-application project.lock to warn about concurrent use.
❯ l .git/gitbutler
.rw-r--r--@ 176Ki byron staff 10 Nov 11:30 but.sqlite
.rw-r--r--@ 32Ki byron staff 10 Nov 11:30 but.sqlite-shm
.rw-r--r--@ 0 byron staff 10 Nov 11:30 but.sqlite-wal
.rw-------@ 104 byron staff 5 Nov 05:14 edit_mode_metadata.toml
.rw-------@ 129 byron staff 10 Nov 10:24 operations-log.toml
.rw-r--r--@ 0 byron staff 10 Nov 11:30 project.lock
.rw-------@ 8.1Ki byron staff 10 Nov 10:24 virtual_branches.toml
Maybe one day project.lock will also go away though, because we fully trust that everything works perfectly even in the light of concurrent mutations, and… I don't think this is even possible with a filesystem.
This makes me believe that project.lock is vital to help protect against it.
Agreed that .git/config is likely a better place for this data - let's do that :). And also agreed that we might as well make it about the entire gitbutler folder.
Let's make it configurable, but without a UI for it for the time being. If you have an exact idea for this, Byron, feel free to do it, but I am also very happy to help with the implementation here. Let me know.
Great to hear! Please feel free to pick this up when you have a moment (and assign yourself); I will pick it up if it's not taken by then, maybe by the end of the week(end).