linux: Startup memory usage is very large
Check for existing issues
- [X] Completed
Describe the bug / provide steps to reproduce it
Environment
deepin v23
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
If applicable, add mockups / screenshots to help explain / present your vision of the feature
No response
If applicable, attach your ~/Library/Logs/Zed/Zed.log file to this issue.
No response
Thanks for letting us know. What were you doing, or what project did you open, that led to this problem?
It is a C language project with about 30,000 files, and the problem occurred while searching for a function name.
Perhaps this problem is related to https://github.com/zed-industries/zed/issues/9744
I can confirm. I've just built the latest version on master, and with a single 13-byte text file open, Zed uses 1 GiB of RAM.
This increases with every new window I open (2 windows: 3.0GiB, 3 windows: 4.0GiB, 4 windows: 6.0GiB), but it doesn't seem to depend on the number of files open or the size of the project in a single window.
Closing a window decreases the memory usage by the same amount it added when opening.
The issue occurs in both debug and release builds.
I use Fedora 39, KDE and Wayland on an x86_64 CPU.
I ran Zed under Valgrind's DHAT and obtained the attached trace: dhat.out.131305.json
It seems like Zed makes a single 1.46 GB allocation in ash::device::create_descriptor_pool. Perhaps blade-graphics' choice of 60000 for ROUGH_SET_COUNT isn't sane?
The precise size of this allocation seems to be suspiciously close to a multiple of 60000: 1569600640/60000 = 26160.01066666666666666666 (i.e. a 640-byte remainder)
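For anyone wanting to reproduce the arithmetic, here is a minimal sketch; the numbers come from the DHAT trace above, and the per-set size is only implied by the division, not read from the source:

```rust
// Back-of-the-envelope check of the DHAT figure above; not Zed or blade-graphics code.
fn main() {
    let allocation_bytes: u64 = 1_569_600_640; // single allocation reported by DHAT
    let rough_set_count: u64 = 60_000;         // blade-graphics' ROUGH_SET_COUNT
    let per_set = allocation_bytes / rough_set_count;   // ~26_160 bytes per descriptor set
    let remainder = allocation_bytes % rough_set_count; // 640 bytes left over
    println!("{per_set} bytes per set, {remainder} bytes of remainder");
}
```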
Confirmed, decreasing the 60000 there to 500 cuts memory usage (RSS) from ~1728 MB to ~180 MB without any noticeable adverse effects.
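For reference, the experiment boils down to lowering a single constant; a rough sketch, assuming the constant keeps its name in blade-graphics' Vulkan backend (the exact file and surrounding code may differ):

```rust
// Not a verbatim patch: only the value of the constant changes.
// Each descriptor pool then reserves space for 500 sets instead of 60_000,
// which is what cut RSS from ~1728 MB to ~180 MB in the test above.
const ROUGH_SET_COUNT: u32 = 500; // was 60_000
```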
@nt8r Thank you so much for this great finding; I was wondering why Zed was using so much memory with seemingly nothing going on! :+1: @kvark This might be of interest to you. Any idea whether this makes sense and could be adjusted without any negative side effects? :slightly_smiling_face:
I was hoping that "gpu-descriptor" would eventually let me use it with this specific workflow, but that hasn't happened yet. Filed https://github.com/zakarumych/gpu-descriptor/issues/42 to track it. In the meantime, we can at least add a progressive scheme where the first pool has 10 sets, the second has 100, the third has 1,000, and so on.
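A minimal sketch of what such a progressive scheme could look like; the types and method names here are illustrative and are not the actual blade-graphics implementation:

```rust
// Illustrative sketch of progressive descriptor-pool sizing, as described above.
struct Pool {
    max_sets: u32,
    used_sets: u32,
}

struct DescriptorPools {
    pools: Vec<Pool>,
}

impl DescriptorPools {
    const FIRST_POOL_SETS: u32 = 10;
    const GROWTH_FACTOR: u32 = 10;

    fn new() -> Self {
        Self { pools: Vec::new() }
    }

    /// Size of the next pool to create: 10, 100, 1_000, ... sets.
    fn next_pool_size(&self) -> u32 {
        Self::FIRST_POOL_SETS * Self::GROWTH_FACTOR.pow(self.pools.len() as u32)
    }

    /// Reserve one descriptor set, creating a new (bigger) pool when the last one is full.
    /// Returns (pool index, set index within that pool).
    fn allocate_set(&mut self) -> (usize, u32) {
        if self.pools.last().map_or(true, |p| p.used_sets == p.max_sets) {
            let max_sets = self.next_pool_size();
            // A real backend would call vkCreateDescriptorPool with `max_sets` here.
            self.pools.push(Pool { max_sets, used_sets: 0 });
        }
        let pool_index = self.pools.len() - 1;
        let pool = self.pools.last_mut().unwrap();
        let set_index = pool.used_sets;
        pool.used_sets += 1;
        (pool_index, set_index)
    }
}

fn main() {
    let mut pools = DescriptorPools::new();
    for _ in 0..15 {
        pools.allocate_set();
    }
    assert_eq!(pools.pools.len(), 2); // a 10-set pool plus a 100-set pool
}
```

The idea is that a mostly idle window only pays for the small first pool, while descriptor-heavy workloads still reach large pools after a handful of allocations.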
Prototyped a solution here: https://github.com/kvark/blade/pull/118