Agent Panel: prompts take a long time to execute in big codebases (over 50k files)
Summary
I work in a large codebase with over 50k files (excluding files matched by .gitignore). A command takes more than 2 minutes to run, and there is no feedback that anything is loading.
Description
When I type a command, it takes around 2–5 minutes before it starts sending anything to the LLM. Subsequent commands face the same issue.
The reason we have so many files is that we vendor some libraries and internal modules.
I suppose it's because a "tree" or repo map is sent to the LLM. If so, is there a way to generate this data with a "max-depth" limit or something similar?
Suggestions:
- Implement depth limitations (e.g., a "max-depth" parameter) when generating repository data; see the sketch below
- Add visual loading feedback while data is being gathered during command execution (LLM generation already has this)
Actual Behavior: the panel appears frozen and never runs the command.
Expected Behavior: show visual feedback while data is being gathered.
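To make the first suggestion concrete, here is a rough sketch of the kind of depth-capped traversal I have in mind. This is purely hypothetical code, not Zed's actual repo-map implementation; the function name and the cap of 3 levels are made up for illustration:

```rust
use std::fs;
use std::path::Path;

/// Hypothetical sketch: list a worktree for a repo map, but stop
/// descending once `max_depth` is reached, so huge vendored trees
/// are summarized instead of fully enumerated.
fn build_repo_map(dir: &Path, depth: usize, max_depth: usize, out: &mut String) {
    let Ok(entries) = fs::read_dir(dir) else { return };
    for entry in entries.flatten() {
        let path = entry.path();
        out.push_str(&format!("{}{}\n", "  ".repeat(depth), path.display()));
        if path.is_dir() {
            if depth + 1 < max_depth {
                build_repo_map(&path, depth + 1, max_depth, out);
            } else {
                // Past the cap: mark that the subtree exists without walking it.
                out.push_str(&format!("{}[...]\n", "  ".repeat(depth + 1)));
            }
        }
    }
}

fn main() {
    let mut map = String::new();
    // Assumed cap of 3 levels; in practice this would be configurable.
    build_repo_map(Path::new("."), 0, 3, &mut map);
    print!("{map}");
}
```

With a cap like this, a vendored tree thousands of files deep would contribute a handful of lines instead of dominating the map.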
Zed Version and System Specs
Zed: v0.182.0 (Zed Preview)
OS: Linux Wayland pop 22.04
Memory: 7.4 GiB
Architecture: x86_64
GPU: Intel(R) UHD Graphics (CML GT2) || Intel open-source Mesa driver || Mesa 24.0.3-1pop1~1711635559~22.04~7a9f319
Hey, thanks for reporting this. What model are you using and what are you asking the model to do? Is this a new chat where the first message takes 2 minutes to get a response? Or is the LLM taking 2 minutes to respond to specific tasks you're giving it? Any context you can provide will help us reproduce and fix this issue. Thanks!
Thanks for looking into this, really appreciate it.
What model are you using and what are you asking the model to do?
It's true across all models. You can ask them anything, even a 'Hello'.
Is this a new chat where the first message takes 2 minutes to get a response?
Unfortunately, it happens in both cases. It doesn't get much better after the first message.
Or is the LLM taking 2 minutes to respond to specific tasks you're giving it?
It's not the LLM itself; it seems to be gathering the repo map or something similar. You know that red indicator that appears when the LLM is responding? The delay happens right before that. And every subsequent message presents the same behavior.
I didn't find anything about it in the logs; it's just really slow in that specific part. The editor works fine as long as this part of the process is not running.
If it helps, I can record a video, add logs, or build from source. Anything I can do to help identify and fix the issue, just let me know.
@nathabonfim59 are you still experiencing this on the latest version? We use the agent on the Zed codebase daily, which is fairly large, but maybe there is something unique about your setup. If you are still running into issues, please let us know!
We also made some changes in https://github.com/zed-industries/zed/pull/31352 to avoid including large or binary files when creating checkpoints in the agent, which may also help.
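For context, the idea behind that change is roughly the following: when snapshotting files for a checkpoint, skip anything that is very large or looks binary. This is a simplified, hypothetical illustration rather than the actual code from that PR; the 1 MiB cap and the NUL-byte heuristic are assumptions:

```rust
use std::fs;
use std::path::Path;

/// Assumed size cap for files included in a checkpoint (illustrative only).
const MAX_CHECKPOINT_FILE_SIZE: u64 = 1024 * 1024;

/// Hypothetical sketch of the filtering idea: exclude large files and
/// files that appear to be binary from agent checkpoints.
fn should_checkpoint(path: &Path) -> bool {
    let Ok(metadata) = fs::metadata(path) else { return false };
    if metadata.len() > MAX_CHECKPOINT_FILE_SIZE {
        return false; // too large to snapshot
    }
    match fs::read(path) {
        // Crude binary heuristic: a NUL byte in the first 8 KiB.
        Ok(bytes) => !bytes.iter().take(8192).any(|&b| b == 0),
        Err(_) => false,
    }
}
```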
It improved a lot, but it's still an issue in the latest preview. In this particular project, it now takes around 1.5 minutes for a prompt to be registered.
I'm happy to help. Just let me know how.
Zed: v0.188.3 (Zed Preview)
OS: Linux Wayland pop 22.04
Memory: 7.4 GiB
Architecture: x86_64
GPU: Intel(R) UHD Graphics (CML GT2) || Intel open-source Mesa driver || Mesa 24.2.8-1~bpo12+1pop1~1744225826~22.04~b077665
@nathabonfim59 which model provider are you using? Is it local models like Ollama or LM Studio? Zed Pro? Copilot? Your own Anthropic or OpenAI keys?
It's fully solved in v0.189.0. It's now as fast as in any other codebase.
This is the snappiest, fastest, and slickest GUI editor I've used so far, and now I'll be able to use it in bigger projects.
I can't thank you guys enough!
Zed: v0.189.0 (Zed Preview)
OS: Linux Wayland pop 22.04
Memory: 7.4 GiB
Architecture: x86_64
GPU: Intel(R) UHD Graphics (CML GT2) || Intel open-source Mesa driver || Mesa 24.2.8-1~bpo12+1pop1~1744225826~22.04~b077665
@nathabonfim59 which model provider are you using? Is it local models like Ollama or LM Studio? Zed Pro? Copilot? Your own Anthropic or OpenAI keys?
I was using Copilot, Zed, and some Ollama local models.
Amazing! 🎉 I'm going to close the issue, but feel free to reach out if you notice anything else.