Too many open file descriptors
Describe the bug
I am running two separate instances of Nextcloud with Docker using Orbstack.
In the past two days, I've been running into "Too many files open on system (23)" errors, and a quick check using lsof suggests that /Applications/OrbStack.app/Contents/Frameworks/OrbStack has roughly 8700 open file descriptors, with around 1800 duplicates.
Many of the descriptors reference the two Nextcloud data mounts and persist even after shutting down the Nextcloud containers, which suggests a leak.
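For anyone who wants to reproduce these counts, here is a rough sketch of tallying a process's open descriptors and duplicate paths from lsof output. The `-c OrbStack` name filter and the last-column assumption for the path are guesses that may need adjusting on your machine:

```shell
#!/bin/sh
# Given `lsof` output on stdin, count paths (the NAME column, last field)
# that appear more than once -- the "duplicate" descriptors described above.
count_dup_paths() {
    # NR > 1 skips lsof's header row; $NF is the path column.
    awk 'NR > 1 { print $NF }' | sort | uniq -d | wc -l | tr -d ' '
}

# Typical use against the OrbStack helper (the -c name filter is an
# assumption; adjust it to match your process listing):
#   lsof -c OrbStack | wc -l               # total open descriptors
#   lsof -c OrbStack | count_dup_paths     # distinct paths opened twice or more
```

Note that `lsof` reports one row per descriptor, so a path opened several times legitimately (e.g. by multiple threads) also shows up here; the pipeline only flags candidates for a leak, it doesn't prove one.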
To Reproduce
No response
Expected behavior
No response
Diagnostic report (REQUIRED)
OrbStack info: Version: 1.6.1 Commit: bfaddc6839de8b00b7aff767dbd673ac6ad4259e (v1.6.1)
System info: macOS: 14.5 (23F79) CPU: arm64, 8 cores CPU model: Apple M1 Model: Macmini9,1 Memory: 8 GiB
Full report: https://orbstack.dev/_admin/diag/orbstack-diagreport_2024-06-06T13-47-59.749208Z.zip
Screenshots and additional context (optional)
No response
Important to note: downgrading to OrbStack v1.5.1 solved the issue, so it might have something to do with the v1.6.1 upgrade.
Can you send the full open fd/path list to [email protected]?
(please do this next time you see the "too many open file descriptors" error, not before)
I had saved the list of open file descriptors spawned by the OrbStack helper while I was troubleshooting this, and I have emailed it to [email protected].
At that time, OrbStack Helper had around 8780 open file descriptors, with 1800 duplicated entries. While using v1.6.1, the number of open descriptors would climb back up to this level even after OrbStack and system restarts.
Can you get a new list of file descriptors and a diagnostic report after using v1.5.1 for a while, i.e. maybe 2x the amount of time it would take for v1.6.1 to start erroring?
I've been running my Docker containers for the last 3 days using OrbStack v1.5.1 and haven't encountered the file descriptor issue yet, whereas the issue would pop up immediately after starting OrbStack v1.6.1.
Checking the number of open file descriptors now:
- 9186 open descriptors for the process /System/Library/Frameworks/Virtualization.framework/Versions/A/XPCServices/com.apple.Virtualization.VirtualMachine.xpc/Contents/MacOS/com.apple.Virtualization.VirtualMachine
- 1671 open descriptors for the process /Applications/OrbStack.app/Contents/Frameworks/OrbStack Helper.app/Contents/MacOS/OrbStack Helper
I sent an email to [email protected] with the diagnostics file and the open file descriptors for Orbstack v1.5.1, which has been running for approximately 3 days so far.
Any updates on this? Has it been fixed in v1.6.2?
I discovered the bug yesterday and was on 1.6.2. I had to revert to 1.5.1 as well.
This only affects external storage devices formatted as FAT, NTFS, etc. It should be fixed by some upcoming changes.
v1.6.4 still has this problem
I use a Mac mini with three SMB-mounted Synology servers: terabytes of music that my music manager was never able to scan, no matter the OrbStack version, because of the "too many open files" error, and now I get the same error on my television show manager as well. Allocating more resources didn't seem to solve it. I have this issue too, using 8 out of 16 GB of RAM and running 4 containers.
How can we manually increase the ulimit? I'm assuming this is the setting causing the error.
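For reference, on macOS the relevant caps are kernel sysctls rather than the shell's `ulimit`. A sketch of inspecting them (the sysctl and launchctl names are standard macOS knobs; the values in the comments are examples from later in this thread, not recommendations):

```shell
#!/bin/sh
# Inspect the macOS open-file limits. Guarded so this is a no-op on
# other systems; on macOS it prints the current caps.
if [ "$(uname)" = "Darwin" ]; then
    sysctl kern.maxfiles          # system-wide cap on open files
    sysctl kern.maxfilesperproc   # per-process cap
    launchctl limit maxfiles      # soft/hard limits inherited by launchd jobs
fi

# To raise the caps until the next reboot (example values only):
#   sudo sysctl kern.maxfiles=2000000
#   sudo sysctl kern.maxfilesperproc=2000000
```

If a descriptor leak is the root cause, raising the caps only delays the error rather than fixing it.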
is it resolved in 1.7.0 canary?
v1.7.0 still has this issue
Chiming in that OrbStack Helper seems to hold on to files after a scan; this began around the same time as the original poster's report.
I have an smb mount containing just over 8,000 files, and a docker container that scans for files and file changes. Stopping containers, force-quitting Orbstack Helper, and launching Orbstack again gives me a fresh start and Synology confirms that the files are no longer being accessed (using smbstatus over ssh).
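A hypothetical way to script that NAS-side check: pull the locked-files listing over ssh and count entries under the share. The host, user, and share path below are placeholders, and `smbstatus` flags can differ between Samba/DSM versions:

```shell
#!/bin/sh
# Count lines in smbstatus output that reference a given share path.
# Feed it `smbstatus --locked` output, e.g. fetched over ssh from the Mac.
count_open_on_share() {
    grep -c "$1"
}

# Placeholder host/user/share; requires ssh access to the NAS:
#   ssh admin@synology 'sudo smbstatus --locked' | count_open_on_share /volume1/data
```

Running this before and after force-quitting OrbStack Helper makes the "files released after restart" observation above repeatable.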
Edit: just wanted to add that stopping the containers does not fix the issue. Reverting to 1.5.1 worked.
Edit 2: switched from SMB to AFP and it seems to have solved my problems. 1.5.1 was causing other problems with SMB connections on Sequoia, I think.
Edit 3: I take it back; it's still haunting me when Plex does a refresh that touches nearly every file.
OrbStack 1.7.5
I am running Frigate with recordings saved on a Synology over SMB. The mount currently contains over 8200 files, and the Frigate container started to spam the following errors:
```
2024-10-12 01:39:20.521928812 [2024-10-12 01:39:20] frigate.record.maintainer ERROR : Error occurred when attempting to maintain recording cache
2024-10-12 01:39:20.521944729 [2024-10-12 01:39:20] frigate.record.maintainer ERROR : [Errno 23] Too many open files in system: '/media/frigate/recordings/2024-10-11/22'
2024-10-12 01:39:25.441911873 [2024-10-12 01:39:25] frigate.record.maintainer WARNING : Unable to keep up with recording segments in cache for back. Keeping the 6 most recent segments out of 7 and discarding the rest...
```
Edit: Switched from SMB to AFP, will update how it goes.
Edit 2: Has been stable since switched from SMB to AFP.
> Edit: Switched from SMB to AFP, will update how it goes.

How did it go for you? I think I'm heading for a week-long test of "vanilla Docker" to see what happens. I'm lucky to get an hour of "it just works" now.
v1.8.1 still has this issue
Any updates on this? It has been preventing me from upgrading beyond v1.5.1.
Is this being addressed at least? It still persists on v1.9.2. Thanks!
After moving all my Docker projects over, everything worked as before (and I was extremely impressed with the performance vs Docker Desktop), except for the read/write to my Synology NAS. Reverting to 1.5.1 made everything work again; hoping this gets fixed.
https://github.com/immich-app/immich/discussions/12552
Currently affecting Immich containers as well, it seems.
Still present in 1.9.5
@kdrag0n > This only affects external storage devices formatted as FAT, NTFS, etc. It should be fixed by some upcoming changes.
Any ETA on this? Your update was from June 28, 2024, and as of today, Feb 5th, 2025, the issue is still present in the latest version, 1.9.5.
Here are some recent logs:
OrbStack info: Version: 1.9.5 Commit: 8276a76896800e4a12df9d35065c7458488ac010 (v1.9.5)
System info: macOS: 15.3 (24D60) CPU: arm64, 14 cores CPU model: Apple M4 Pro Model: Mac16,11 Memory: 64 GiB
Full report: https://orbstack.dev/_admin/diag/orbstack-diagreport_2025-02-05T21-51-30.630823Z.zip
And as others have stated, rolling back to 1.5.1 fixed the issue.
Some context: I store my Docker config files on a NAS, shared via SMB and mapped under /Volumes. Moving my configs to my Mac and updating my containers to use the local copies allowed me to update my running Docker containers like normal.
v1.10.0 still has this issue
Still having this issue. Hope it gets solved.
I posted about this issue on discord as well and the developer responded with: "sorry no progress on this, not many people use external storage devices with OrbStack so it's not that high priority"
However, they also mentioned a potential workaround: setting `sudo sysctl kern.maxfilesperproc=2000000` on the system.
It hasn't worked for me, but I haven't done a complete clean reinstall of Orbstack yet. Maybe this works for others affected?
Version 1.10.2 (19048) still has this issue
Yeah, I had to switch to Rancher because of this. All the other major alternatives (Rancher, Colima, Podman, etc.) that I've tried don't have this issue.
I think it's also time to give up hope that this will be fixed and to migrate elsewhere. Missing out on updated features, including correct memory management/release, means using an older version of OrbStack is no longer feasible.
I believe Podman would be the closest match? Anyone have experience with the most resource-lightweight of the available options?
1.10.3 has this issue