Slow / hanging startup on WSL
Version
0.14.6
Operating System
Linux
Distribution Method
AppImage (Linux)
Describe the issue
I've been using GitButler on WSL for a while now, running it in Linux and rendering via X windows/WSLg, and this issue has been persistent since the mid v0.13 releases.
Essentially, on startup it will open the window, render a blank screen, and hang for ~5-10 minutes, then eventually get through whatever is going on and work fine. While it's hanging, things like update notifications will still pop up, but they can't be interacted with.
The repo in question is big, but not massive:
❯ git ls-files | wc -l
10855
I've pulled a log with debug output from the last instance and put a line break in at the point where it hangs; you can see from the timestamps that this one took ~7 minutes.
I'd welcome any thoughts or ideas here. I've seen a few issues on the topic of performance and have tried various fixes. At the moment I'm still able to use GitButler, and so far it's worth it despite the slow startup time. But it does irk sometimes, especially because if it goes down for any reason and I need to push a quick change, I can't interact with the repo via the command line either without GitButler opening up so that I can 'unload' it.
How to reproduce (Optional)
No response
Expected behavior (Optional)
No response
Relevant log output (Optional)
2025-01-23T20:59:17.387452Z INFO get_user: crates/gitbutler-tauri/src/users.rs:10: close time.busy=888µs time.idle=52.7µs
2025-01-23T20:59:17.392664Z DEBUG resolve: /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-util-0.1.9/src/client/legacy/connect/dns.rs:124: resolving host="app.gitbutler.com" host=app.gitbutler.com
2025-01-23T20:59:17.418046Z DEBUG resolve: /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-util-0.1.9/src/client/legacy/connect/dns.rs:121: close time.busy=25.4ms time.idle=223µs host=app.gitbutler.com
2025-01-23T20:59:17.418310Z DEBUG /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-util-0.1.9/src/client/legacy/connect/http.rs:646: connecting to 52.201.161.45:443
2025-01-23T20:59:17.471046Z INFO list_projects: crates/gitbutler-tauri/src/projects.rs:46: close time.busy=35.4µs time.idle=41.4µs
2025-01-23T20:59:17.520869Z DEBUG /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-util-0.1.9/src/client/legacy/connect/http.rs:649: connected to 52.201.161.45:443
2025-01-23T20:59:17.547647Z INFO get_project: crates/gitbutler-tauri/src/projects.rs:32: close time.busy=4.58ms time.idle=44.4µs id=8bdfdf16-006a-4ca8-b21a-c348e398d84f no_validation=None
2025-01-23T20:59:17.851149Z DEBUG /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-util-0.1.9/src/client/legacy/pool.rs:396: pooling idle connection for ("https", app.gitbutler.com)
<==== hangs here ====>
2025-01-23T21:06:27.941804Z DEBUG set_project_active:set_project_to_window: crates/gitbutler-tauri/src/window.rs:167: Maintaining 1 Windows id=8bdfdf16-006a-4ca8-b21a-c348e398d84f window="main"
2025-01-23T21:06:27.941815Z DEBUG set_project_active:set_project_to_window:file monitor: crates/gitbutler-watcher/src/file_monitor.rs:116: file watcher started project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f id=8bdfdf16-006a-4ca8-b21a-c348e398d84f window="main" project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f
2025-01-23T21:06:27.972889Z INFO git_head: crates/gitbutler-tauri/src/commands.rs:56: close time.busy=236µs time.idle=42.2µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f
2025-01-23T21:06:27.974522Z INFO get_base_branch_data:get_base_branch_data: crates/gitbutler-branch-actions/src/actions.rs:140: close time.busy=4.10ms time.idle=43.7µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f
2025-01-23T21:06:27.974577Z INFO get_base_branch_data: crates/gitbutler-tauri/src/virtual_branches.rs:138: close time.busy=4.22ms time.idle=38.5µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f
2025-01-23T21:06:27.986129Z INFO list_branches: crates/gitbutler-tauri/src/virtual_branches.rs:433: close time.busy=18.9ms time.idle=45.2µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f filter=Some(BranchListingFilter { local: None, applied: None })
2025-01-23T21:06:27.993336Z INFO operating_mode: crates/gitbutler-tauri/src/modes.rs:16: close time.busy=297µs time.idle=45.1µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f
2025-01-23T21:06:28.019279Z INFO get_branch_listing_details: crates/gitbutler-tauri/src/virtual_branches.rs:445: close time.busy=7.44ms time.idle=67.1µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f branch_names=[]
2025-01-23T21:06:28.110980Z DEBUG list_virtual_branches:get_workspace_head: crates/gitbutler-branch-actions/src/integration.rs:34: close time.busy=15.7ms time.idle=45.3µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f
2025-01-23T21:06:28.140316Z DEBUG list_virtual_branches:workdir:ignore_large_files_in_diffs: crates/gitbutler-command-context/src/repository_ext.rs:15: close time.busy=23.7ms time.idle=58.8µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f commit_oid=12b0962e0b3c1a8d741800c92c6d9ff87c90be61 limit_in_bytes=50000000
2025-01-23T21:06:28.198481Z DEBUG crates/gitbutler-watcher/vendor/debouncer/src/lib.rs:610: Flushed 4 events
2025-01-23T21:06:28.198649Z INFO set_project_active:set_project_to_window:file monitor:handle debounced events: crates/gitbutler-watcher/src/file_monitor.rs:119: close time.busy=70.1µs time.idle=40.9µs id=8bdfdf16-006a-4ca8-b21a-c348e398d84f window="main" project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f fs_events=4 fs_events=4 ignored=0 ignored=0 git_noop=1 git_noop=1 git=1 git=1 project=0 project=0 git_dedup=1 git_dedup=1
2025-01-23T21:06:28.198957Z INFO handle: crates/gitbutler-watcher/src/handler.rs:54: close time.busy=71.8µs time.idle=35.1µs event=GitFileChange(8bdfdf16-006a-4ca8-b21a-c348e398d84f, FETCH_HEAD)
2025-01-23T21:06:28.203512Z DEBUG list_virtual_branches:workdir: crates/gitbutler-diff/src/diff.rs:139: close time.busy=92.5ms time.idle=44.7µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f commit_oid=12b0962e0b3c1a8d741800c92c6d9ff87c90be61
2025-01-23T21:06:28.242871Z DEBUG list_virtual_branches:list_virtual_branches_cached:get_applied_status_cached: crates/gitbutler-branch-actions/src/status.rs:47: close time.busy=39.1ms time.idle=42.3µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f
2025-01-23T21:06:28.246847Z DEBUG list_virtual_branches:list_virtual_branches_cached:handle branches: crates/gitbutler-branch-actions/src/virtual.rs:346: close time.busy=3.90ms time.idle=47.0µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f num_branches=3
2025-01-23T21:06:28.247048Z DEBUG list_virtual_branches:list_virtual_branches_cached: crates/gitbutler-branch-actions/src/virtual.rs:319: close time.busy=43.5ms time.idle=39.9µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f
2025-01-23T21:06:28.250567Z INFO list_virtual_branches: crates/gitbutler-tauri/src/virtual_branches.rs:48: close time.busy=156ms time.idle=46.6µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f
2025-01-23T21:06:28.283916Z INFO list_branches: crates/gitbutler-tauri/src/virtual_branches.rs:433: close time.busy=9.04ms time.idle=52.6µs project_id=8bdfdf16-006a-4ca8-b21a-c348e398d84f filter=Some(BranchListingFilter { local: None, applied: None })
Thanks so much for reporting; it's super helpful to learn how GitButler is used so we are aware and can make it better!
We are working on version 0.15, which will definitely address these issues, even though it's hard to say exactly how that will translate into performance. I hope you will be able to try it when it is released in the coming months, and also that we will be able to test on WSL.
Just an update on this: I updated to 0.14.21 today and tried out the v3 UI, and whatever is behind the 'hanging' is substantially worse there, to the point that it no longer happens just at startup but throughout many other interactions with the app as well.
I looked at the logs and saw that list_projects is maybe related? It took ~11 minutes:
2025-05-13T13:56:11.847203Z INFO list_projects: crates/gitbutler-tauri/src/projects.rs:51: close time.busy=707s time.idle=86.6µs
2025-05-13T13:56:11.848116Z INFO list_projects: crates/gitbutler-tauri/src/projects.rs:51: close time.busy=707s time.idle=29.9µs
~~Follow-up: I wiped out any share/cache state and started from a 'clean install', and it seems to have improved things.~~
Blegh, never mind, it froze again. I've got the tail running and will see if another long command appears.
Yeah, the same one again, weird:
2025-05-13T14:30:23.368166Z INFO list_branches: crates/gitbutler-tauri/src/virtual_branches.rs:486: close time.busy=12.7ms time.idle=64.2µs project_id=486ba205-09c3-427c-9e72-7161acdbc826 filter=None
2025-05-13T14:41:30.123864Z INFO list_projects: crates/gitbutler-tauri/src/projects.rs:51: close time.busy=669s time.idle=56.3µs
2025-05-13T14:41:30.123912Z INFO list_projects: crates/gitbutler-tauri/src/projects.rs:51: close time.busy=669s time.idle=27.0µs
2025-05-13T14:41:30.124054Z INFO list_projects: crates/gitbutler-tauri/src/projects.rs:51: close time.busy=669s time.idle=74.1µs
This is very unfortunate, but also very interesting as it now seems to be pinpointed to that particular function.
https://github.com/gitbutlerapp/gitbutler/blob/8d65bff2881088438b080b4d80e5678147a3648f/crates/gitbutler-tauri/src/projects.rs#L50-L69
Ultimately, the function reads one JSON file and lists its contents.
It gets called about four times when starting up the application, at least in nightly, but it's the same number for both V2 and V3. For now I have no idea what could be causing this, but since the function takes between 20 and 100 microseconds on my machine, it feels like the 707 seconds are related to some kind of timeout or slowdown.
Let me check in nightly and see if it's any better.
I agree it seems particularly odd. It may actually be unrelated, a red herring, and just an artifact of something else being slow in the meantime, but it's also oddly consistent that this particular call is the one with the long timings.
The change I see linked above, that'll be available in the next nightly?
It's the latest code, and it didn't change recently, so it will definitely be in nightly, but it's the same in stable. I also don't know what could be done other than trying to profile the program with other tools to see where exactly it is hanging.
Sorry, I meant the PR here: https://github.com/gitbutlerapp/gitbutler/pull/8544
I guess once that lands in a build I could try it, and if the hang does happen I'd see it blow up with that new error? (If indeed that's what's happening.)
Even though the PR is related, I am sure it's not what's happening here, as otherwise the function would never finish. I was merely reminded of that loop and thought it would be best to defuse it, just in case.
Indeed, right now it's unclear what makes what comes down to reading a single file so slow. Maybe strace could help shed some light? I'd expect it to hang on a system call.
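For example, a capture could look like this (the AppImage path and output location are illustrative placeholders, not from this thread):

```shell
# strace sketch: -f follows all threads, -tt adds microsecond timestamps,
# -o writes the trace to a file instead of stderr. Shown here tracing a
# trivial command; for GitButler you'd point it at however you launch the
# AppImage, or attach with -p to the already-hung process, e.g.:
#   strace -f -tt -o /tmp/gitbutler.strace ./GitButler.AppImage
#   strace -f -tt -p <pid> -o /tmp/gitbutler.strace
strace -f -tt -o /tmp/demo.strace true
```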
Ahh okay, I'll see if I can take a look with some kind of profiling.
I'm running strace, but this time it's taking longer than expected to unhang. What I am seeing while tailing the trace is just lots and lots of hits to individual files, in particular basically every file in the monorepo's collection of node_modules.
@Byron is there any option or way to tell it to exclude certain file/folder patterns from the watcher? Or, I guess more generally, does it respect .gitignore (and in particular, the nested .gitignores that would be present in a monorepo)?
And if I yeet all the node_modules folders and rerun GitButler, I get a pretty reliable startup without hanging.
That's great news, we know so much more now!
What's strange is that the log clearly showed list_projects as the culprit, even though that most definitely doesn't trigger something like a status of all files, and even if it did, ignored files would definitely not be touched.
I created a profile of my own that associates CPU and callstack information with file activity; once it's done 'modeling', I can hopefully share more about where this is coming from.
So I could never get it to unhang while tracing, but I grabbed what I hope is a representative snippet; let me know if there's something else I could try to look for here.
993282 18:56:58.110697 getdents64(63, <unfinished ...>
993086 18:56:58.110721 poll([{fd=35, events=POLLIN}, {fd=47, events=POLLIN}], 2, -1 <unfinished ...>
993282 18:56:58.110751 <... getdents64 resumed>0x7ef4775aa2e0 /* 3 entries */, 32768) = 80
993282 18:56:58.110824 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/is-bigint/test/index.js", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=2252, ...}) = 0
993282 18:56:58.110993 getdents64(63, 0x7ef4775aa2e0 /* 0 entries */, 32768) = 0
993282 18:56:58.111145 close(63) = 0
993282 18:56:58.111293 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/is-bigint/README.md", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=1623, ...}) = 0
993282 18:56:58.111453 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/is-bigint/.nycrc", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=139, ...}) = 0
993282 18:56:58.111614 openat(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/is-bigint/.github", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 63
993282 18:56:58.111767 fstat(63, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
993282 18:56:58.111904 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/is-bigint/.github", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFDIR|0755, stx_size=4096, ...}) = 0
993282 18:56:58.112058 inotify_add_watch(53, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/is-bigint/.github", IN_MODIFY|IN_ATTRIB|IN_CLOSE_WRITE|IN_MOVED_FROM|IN_MOVED_TO|IN_CREATE|IN_DELETE) = 3080
993282 18:56:58.112233 getdents64(63, 0x7ef4775aa2e0 /* 3 entries */, 32768) = 80
993282 18:56:58.112386 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/is-bigint/.github/FUNDING.yml", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=580, ...}) = 0
993282 18:56:58.112551 getdents64(63, 0x7ef4775aa2e0 /* 0 entries */, 32768) = 0
993282 18:56:58.112698 close(63) = 0
993282 18:56:58.112895 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/is-bigint/CHANGELOG.md", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=6674, ...}) = 0
993282 18:56:58.113056 getdents64(60, 0x7ef477588a40 /* 0 entries */, 32768) = 0
993282 18:56:58.113200 close(60) = 0
993282 18:56:58.113342 getdents64(56, 0x7ef47742e180 /* 0 entries */, 32768) = 0
993282 18:56:58.113508 close(56) = 0
993282 18:56:58.113685 getdents64(51, 0x7ef47755f240 /* 0 entries */, 32768) = 0
993282 18:56:58.113851 close(51) = 0
993282 18:56:58.113994 openat(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 51
993282 18:56:58.114152 fstat(51, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
993282 18:56:58.114313 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFDIR|0755, stx_size=4096, ...}) = 0
993282 18:56:58.114494 inotify_add_watch(53, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]", IN_MODIFY|IN_ATTRIB|IN_CLOSE_WRITE|IN_MOVED_FROM|IN_MOVED_TO|IN_CREATE|IN_DELETE) = 3775
993282 18:56:58.114644 getdents64(51, 0x7ef47755f240 /* 3 entries */, 32768) = 80
993282 18:56:58.114794 openat(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 56
993282 18:56:58.114945 fstat(56, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
993282 18:56:58.115072 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFDIR|0755, stx_size=4096, ...}) = 0
993282 18:56:58.115218 inotify_add_watch(53, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules", IN_MODIFY|IN_ATTRIB|IN_CLOSE_WRITE|IN_MOVED_FROM|IN_MOVED_TO|IN_CREATE|IN_DELETE) = 3776
993282 18:56:58.115342 getdents64(56, 0x7ef47742e180 /* 4 entries */, 32768) = 112
993282 18:56:58.115921 openat(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 60
993282 18:56:58.116072 fstat(60, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
993282 18:56:58.116206 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFDIR|0755, stx_size=4096, ...}) = 0
993282 18:56:58.116355 inotify_add_watch(53, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5", IN_MODIFY|IN_ATTRIB|IN_CLOSE_WRITE|IN_MOVED_FROM|IN_MOVED_TO|IN_CREATE|IN_DELETE) = 3224
993282 18:56:58.116491 getdents64(60, 0x7ef477588a40 /* 7 entries */, 32768) = 192
993282 18:56:58.116636 openat(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5/lib", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 63
993282 18:56:58.116776 fstat(63, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
993282 18:56:58.116908 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5/lib", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFDIR|0755, stx_size=4096, ...}) = 0
993282 18:56:58.117047 inotify_add_watch(53, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5/lib", IN_MODIFY|IN_ATTRIB|IN_CLOSE_WRITE|IN_MOVED_FROM|IN_MOVED_TO|IN_CREATE|IN_DELETE) = 3225
993282 18:56:58.117190 getdents64(63, 0x7ef4775b38b0 /* 10 entries */, 32768) = 304
993282 18:56:58.117332 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5/lib/util.js", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=989, ...}) = 0
993282 18:56:58.117476 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5/lib/unicode.js", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=15481, ...}) = 0
993282 18:56:58.117621 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5/lib/register.js", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=423, ...}) = 0
993282 18:56:58.117769 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5/lib/index.js", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=418, ...}) = 0
993282 18:56:58.117908 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5/lib/parse.js", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=13463, ...}) = 0
993282 18:56:58.118048 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5/lib/stringify.js", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=6146, ...}) = 0
993282 18:56:58.118195 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5/lib/require.js", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=119, ...}) = 0
993282 18:56:58.118338 statx(AT_FDCWD, "/home/jared/dev/big-monorepo/node_modules/@shared-utils/postgres/node_modules/@shared-utils/instrumentation/node_modules/@shared-utils/test-utils/node_modules/@shared-utils/type-utils/node_modules/@shared-utils/eslint-config/node_modules/.pnpm/[email protected]/node_modules/json5/lib/cli.js", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0755, stx_size=2220, ...}) = 0
993282 18:56:58.118490 getdents64(63, 0x7ef4775b38b0 /* 0 entries */, 32768) = 0
There is certainly a lot of optimisation potential: a lot of files are read repeatedly, and there is a high reliance on filesystem caching working and the filesystem being fast.
@krlvi brought up that the filesystem watcher may be forced into a polling-based mechanism if the filesystem doesn't support native filesystem events. That might explain why node_modules is traversed over and over again.
Assuming this worst case, could you check how many files there effectively are?
find . | wc -l
We'd probably have to find configurable alternatives to the filesystem watcher then, which could be as simple as a poll-based mechanism (i.e. run a git status every second and check for reference changes on HEAD, which probably already covers a lot of ground). That would be more wasteful over time, but would start working much more quickly.
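A minimal sketch of such a poll-based fallback, assuming only stock git on PATH (illustrative only, not GitButler code):

```shell
#!/bin/sh
# Illustrative poll-based change detection; not GitButler's actual watcher.

# One cheap snapshot of repository state: current HEAD (covers reference
# changes) plus porcelain status (covers worktree/index changes).
snapshot() {
    git -C "$1" rev-parse HEAD 2>/dev/null
    git -C "$1" status --porcelain 2>/dev/null
}

# Intended usage, polling once per second:
#   prev=$(snapshot /path/to/repo)
#   while sleep 1; do
#       cur=$(snapshot /path/to/repo)
#       [ "$cur" = "$prev" ] || { echo "repo changed"; prev=$cur; }
#   done
```

Since `git status` honors .gitignore itself, each poll would skip ignored trees like node_modules rather than enumerating them file by file.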
Alternatively, maybe there could be filesystem watcher machinery that takes .gitignore into consideration.
In any case, it's something to tackle in V3 once the dust has settled on a couple of other construction sites.
find . | wc -l
736492
> Alternatively, maybe there could be filesystem watcher machinery that takes .gitignore into consideration.
Yeah, this is where my head was at too. It makes me wonder how VSCode approaches it, but from https://github.com/microsoft/vscode/issues/102829 it seems like VSCode still watches files that are gitignored.
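As a point of comparison with the raw `find` count above, stock git can already enumerate exactly the set a gitignore-aware watcher would need; a small illustrative helper:

```shell
#!/bin/sh
# Count the files a gitignore-aware watcher would actually need to track:
# -c = tracked (cached) files, -o = untracked files, and --exclude-standard
# honors .gitignore (including nested ones), .git/info/exclude, and
# core.excludesFile, so ignored trees drop out of the count.
count_watchable() {
    git -C "$1" ls-files -co --exclude-standard | wc -l
}
```

On this repo that set should be on the order of the ~10k tracked files rather than the 736492 entries `find` reports.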
Yes, that's quite a lot of files, which would certainly take at least a couple of seconds to traverse. It seems that the filesystem exposed to WSL is slow, or goes through a translation layer. Is my assumption correct that WSL accesses data on a Windows filesystem?
Also, it would be interesting to know whether VSCode manages to watch this directory without a hitch, even without using files.watcherExclude.
With VSCode (and friends; I use Cursor mostly these days) I don't notice any slowness, and their internal git tooling seems to work fine. I haven't had to use watcher-exclude settings. I do have to bump the inotify watcher limits, but my machine has plenty of RAM, so I just set that sky high and things are fine.
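For reference, the limit bump mentioned above is typically done via sysctl; a persistent example (values illustrative, not a recommendation from this thread) might look like:

```
# /etc/sysctl.d/99-inotify.conf -- example values; apply with `sudo sysctl --system`
fs.inotify.max_user_watches = 1048576
fs.inotify.max_user_instances = 1024
```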
> Is my assumption correct that WSL accesses data on a Windows filesystem?
I'm not an expert in the inner workings of WSL by any means, but this repo is entirely on the Linux filesystem, and I'm running GitButler from Linux as well, not from Windows, so there shouldn't be any crossing of the boundary between the WSL subsystem and Windows itself here.
Thank you!
That gives me hope that the inotify-based file watching can indeed be made to work, and that it's maybe a shortcoming in the notify crate, or in our usage of it.
Unfortunately, as I kept plugging away at this last night, I wound up seeing similar odd freezing/hanging behaviors even when working with the repo without packages installed, which is a file count more like ~18k.
I could trigger it pretty reliably in that scenario by going to remove a project, which would then lead to the app freezing and sometimes the screen going white, but it's definitely unclear which layer is misbehaving there.
Strace excerpt; this is from a chunk of time during which the screen was white, in case it's useful, but it seems like just a pretty innocuous event loop:
3103976 09:03:26.462389 clock_nanosleep(CLOCK_REALTIME, 0, {tv_sec=0, tv_nsec=16000000}, <unfinished ...>
3103896 09:03:26.462416 read(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.462547 write(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.462707 sendmsg(33, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="\34\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", iov_len=24}, {iov_base="\0\0M\7\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\32\0\0\0<\0\0\0", iov_len=28}], msg_iovlen=2, msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 52
3103896 09:03:26.462894 poll([{fd=33, events=POLLIN}, {fd=45, events=POLLIN}], 2, -1) = 1 ([{fd=45, revents=POLLIN}])
3103896 09:03:26.463073 read(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.463247 poll([{fd=33, events=POLLIN}, {fd=45, events=POLLIN}], 2, -1 <unfinished ...>
3103976 09:03:26.478651 <... clock_nanosleep resumed>0x7fda3893d580) = 0
3103976 09:03:26.478775 write(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.478993 <... poll resumed>) = 1 ([{fd=45, revents=POLLIN}])
3103976 09:03:26.479056 clock_nanosleep(CLOCK_REALTIME, 0, {tv_sec=0, tv_nsec=16000000}, <unfinished ...>
3103896 09:03:26.479088 read(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.479246 write(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.479459 sendmsg(33, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="\34\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", iov_len=24}, {iov_base="\0\0M\7\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\33\0\0\0<\0\0\0", iov_len=28}], msg_iovlen=2, msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 52
3103896 09:03:26.479702 poll([{fd=33, events=POLLIN}, {fd=45, events=POLLIN}], 2, -1) = 1 ([{fd=45, revents=POLLIN}])
3103896 09:03:26.479852 read(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.480049 poll([{fd=33, events=POLLIN}, {fd=45, events=POLLIN}], 2, -1 <unfinished ...>
3103976 09:03:26.495255 <... clock_nanosleep resumed>0x7fda3893d580) = 0
3103976 09:03:26.495441 write(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.495662 <... poll resumed>) = 1 ([{fd=45, revents=POLLIN}])
3103976 09:03:26.495730 clock_nanosleep(CLOCK_REALTIME, 0, {tv_sec=0, tv_nsec=16000000}, <unfinished ...>
3103896 09:03:26.495768 read(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.496051 write(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.496251 sendmsg(33, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="\34\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", iov_len=24}, {iov_base="\0\0M\7\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\34\0\0\0<\0\0\0", iov_len=28}], msg_iovlen=2, msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 52
3103896 09:03:26.496496 poll([{fd=33, events=POLLIN}, {fd=45, events=POLLIN}], 2, -1) = 1 ([{fd=45, revents=POLLIN}])
3103896 09:03:26.496661 read(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.496842 poll([{fd=33, events=POLLIN}, {fd=45, events=POLLIN}], 2, -1 <unfinished ...>
3128654 09:03:26.508731 <... futex resumed>) = -1 ETIMEDOUT (Connection timed out)
3128654 09:03:26.508843 futex(0x7fda7c00ea00, FUTEX_WAKE_PRIVATE, 1) = 0
3128654 09:03:26.509036 futex(0x7fda7c00ea54, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1747206206, tv_nsec=608997120}, FUTEX_BITSET_MATCH_ANY <unfinished ...>
3103976 09:03:26.511946 <... clock_nanosleep resumed>0x7fda3893d580) = 0
3103976 09:03:26.512071 write(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.512219 <... poll resumed>) = 1 ([{fd=45, revents=POLLIN}])
3103976 09:03:26.512251 clock_nanosleep(CLOCK_REALTIME, 0, {tv_sec=0, tv_nsec=16000000}, <unfinished ...>
3103896 09:03:26.512278 read(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.512494 write(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.512679 sendmsg(33, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="\34\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", iov_len=24}, {iov_base="\0\0M\7\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\35\0\0\0<\0\0\0", iov_len=28}], msg_iovlen=2, msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 52
3103896 09:03:26.512905 poll([{fd=33, events=POLLIN}, {fd=45, events=POLLIN}], 2, -1) = 1 ([{fd=45, revents=POLLIN}])
3103896 09:03:26.513100 read(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.513283 poll([{fd=33, events=POLLIN}, {fd=45, events=POLLIN}], 2, -1 <unfinished ...>
3103976 09:03:26.528439 <... clock_nanosleep resumed>0x7fda3893d580) = 0
3103976 09:03:26.528616 write(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.528772 <... poll resumed>) = 1 ([{fd=45, revents=POLLIN}])
3103976 09:03:26.528813 clock_nanosleep(CLOCK_REALTIME, 0, {tv_sec=0, tv_nsec=16000000}, <unfinished ...>
3103896 09:03:26.528835 read(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.528991 write(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.529188 sendmsg(33, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="\34\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", iov_len=24}, {iov_base="\0\0M\7\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\36\0\0\0<\0\0\0", iov_len=28}], msg_iovlen=2, msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 52
3103896 09:03:26.529405 poll([{fd=33, events=POLLIN}, {fd=45, events=POLLIN}], 2, -1) = 1 ([{fd=45, revents=POLLIN}])
3103896 09:03:26.529576 read(45, "\1\0\0\0\0\0\0\0", 8) = 8
3103896 09:03:26.529750 poll([{fd=33, events=POLLIN}, {fd=45, events=POLLIN}], 2, -1 <detached ...>
I imagine the most robust version of this will eventually look very much like how VS Code approaches its WSL integration, with GitButler running in Windows and connecting to a 'remote' backend running in the Linux environment, but I guess that's even beyond the ominous v3 😆
I do still think that being able to filter the watched files reliably would help mitigate that startup hang, but it's probably not the only quirk going on
Thanks so much for sharing, and for giving GitButler a serious shot!
This strace log indeed looks like something else, though it's unclear to me what's behind fds 33 and 45, and why it would continuously read and write this byte pattern.
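One way to answer the fd question on a live process is via procfs (a general Linux sketch, not GitButler-specific; the PID and fd numbers would come from the strace output above):

```shell
# Resolve what a file descriptor points to. Using the current process's
# stdin as a runnable stand-in; for the trace above you'd substitute the
# GitButler PID, e.g. readlink /proc/3103896/fd/45
readlink /proc/self/fd/0
# A pipe:[...] or anon_inode:[eventfd] target would explain the repeated
# 8-byte wakeup reads/writes seen in the trace.
```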
It's good to know that this only happens when removing a project; that would bring down a file watcher, which may once again contribute to this issue.
This makes me feel more strongly about offering a simple, poll-based mechanism as an alternative, just to see whether it makes the app run stably.
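A minimal sketch of what such a poll-based fallback could look like (hypothetical, shell-level illustration only; the real mechanism would live in the Rust backend): snapshot `git status --porcelain` on an interval and refresh whenever the snapshot changes.

```shell
# Hypothetical poll-based change detection: compare git status snapshots
# instead of relying on inotify events.
snapshot() { git -C "$1" status --porcelain; }

# Succeeds (exit 0) when the repo's status differs from a prior snapshot.
repo_changed() {
    [ "$(snapshot "$1")" != "$2" ]
}

# A watcher loop would then be roughly:
#   prev=$(snapshot "$repo")
#   while sleep 2; do
#       if repo_changed "$repo" "$prev"; then prev=$(snapshot "$repo"); fi
#   done
```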
> I imagine the most robust version of this working will eventually be something that looks very much like how vscode approaches their wsl integration, with GitButler running in windows and connecting to a 'remote' backend running in the linux environment, but I guess that's even beyond the ominous v3 😆
The frontend, if run on Windows, might indeed run better as the webview might be more modern than what's available on Linux, but that would be the only difference. The backend would still reside on Linux, with all the issues this seems to be causing right now.
But running the backend on another host is definitely something we'd want to achieve, and can, as the application architecture is already set up with a clear separation of backend and frontend.
Yeah, no problem. GitButler is such a game-changer that I still try to use it as my daily driver, even with this, so I'm happy to be your guinea pig, try nightlies, etc. for any ideas you have here.
Can you share exactly which Linux distribution you are using? In theory, a VM should be able to reproduce this case; after all, WSL is just a virtualized Linux with a local, Linux-specific filesystem. At least that's the theory.
In parallel I am setting up a native Windows machine with WSL, and it would probably be good to use the right linux distribution there as well to increase the chances of a reproduction.
I'm using Debian 13 at the moment, but when I originally observed the behavior I was on Ubuntu (I think 22 or 24, whichever one Butler actually works on); no noticeable differences between them.
I wiped and rebuilt the comp a bit ago and used that as a chance to try out Debian 13 in case it helped
I haven't had a chance to try GitButler on a 'pure' Linux installation yet; it's on my list.
Oh, and I'm using GitButler on Debian 13 via the AppImage installation.
> In parallel I am setting up a native Windows machine with WSL, and it would probably be good to use the right linux distribution there as well to increase the chances of a reproduction.
I'd probably recommend just starting with Ubuntu, b/c Debian stable doesn't have the right glibc yet, which is why I had to bump to trixie
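For reference, a quick way to check a distro's glibc version before trying the AppImage (a generic check; no specific minimum version is implied here):

```shell
# Print the system's glibc version. Debian stable's was too old for the
# AppImage in my case; trixie's works.
ldd --version | head -n 1
```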
there's a commonly referenced community guide here https://gist.github.com/KroniK907/24c6edafa2b34af773dc9466dd7e4e00
but for the AppImage-based installation, a chunk of that isn't strictly necessary (i.e. everything from extracting the AppImage on down is mainly about getting the app launchable from an icon in Windows-land)
once I have the actual dependencies, my install script is just:
```shell
wget https://app.gitbutler.com/downloads/release/linux/x86_64/gz -O GitButler.AppImage.tar.gz
tar xzf GitButler.AppImage.tar.gz
chmod +x GitButler.AppImage
mkdir -p ~/.local/bin
mv GitButler.AppImage ~/.local/bin/
```
and I just start it from the command line in WSL with nohup
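For completeness, the nohup launch looks roughly like this (path matches the install script above; redirects are just one way to keep the terminal clean):

```shell
# Launch the AppImage detached from the terminal so closing the WSL
# shell doesn't kill it; discard output instead of writing nohup.out.
nohup "$HOME/.local/bin/GitButler.AppImage" >/dev/null 2>&1 &
```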
Thanks again! WSL is now working and waiting for input (on a freshly installed system). Hopefully I will get to reproducing this with a debug version sometime soon.
A new flavor of this has emerged: I've started to see the UI not hang, but file system changes are no longer reflected. I guess it's giving up on watching certain files (sometimes a save to one file will trigger an update, but saving another won't).
Any ideas?
tbf I'd be open to even only refreshing on demand, via a button that maybe runs git status, if that's something that could be wedged in
> Any ideas?
Unfortunately, no. If it really is only partially watching, then this might be related to the underlying crate no longer failing when it runs out of handles. But I wouldn't know anything about that.
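One cheap thing to rule out on the WSL side (a general Linux check, not something the logs confirm) is whether inotify limits are being exhausted; a ~10k-file repo plus editors can hit low defaults:

```shell
# Current inotify limits; low values here can cause partial watching.
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances
# To raise the watch limit temporarily (524288 is a commonly suggested
# value, not a GitButler-specific recommendation):
#   sudo sysctl fs.inotify.max_user_watches=524288
```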
You should be able to see everything that's happening by running…

```shell
cargo run --bin but-testing -- -C /path/to/repo watch
```
…after applying this patch:
```diff
diff --git a/crates/but-testing/src/main.rs b/crates/but-testing/src/main.rs
index 93167c10a..15cbbb7fd 100644
--- a/crates/but-testing/src/main.rs
+++ b/crates/but-testing/src/main.rs
@@ -169,7 +169,7 @@ mod trace {
         tracing_forest::ForestLayer::from(
             tracing_forest::printer::PrettyPrinter::new().writer(std::io::stderr),
         )
-        .with_filter(LevelFilter::DEBUG),
+        .with_filter(LevelFilter::TRACE),
     )
     .init();
     Ok(())
```
> tbf I'd be open to even only refreshing on demand, via a button that maybe runs git status, if that's something that could be wedged in
It's possible to press Ctrl + R to reload the frontend; that should refresh everything.