Can't create or edit code files automatically like Cursor
Hello, thank you for this project, but I realized the llama3.2 model I'm using through Ollama can't modify my code directly; it can't even write code to files itself, so it's basically working like ChatGPT. Is there a way to make it work like Cursor, i.e. a way for it to write and edit code on its own?
How big is the llama model you're using? It might be a model intelligence thing unfortunately.
@andrewpareles Do you have any suggestions for which Llama model to use?
> How big is the llama model you're using? It might be a model intelligence thing unfortunately.
I don't think it's a model thing, because my model can write code too. I'm talking about the ability to create and edit files directly on my PC the way Cursor does; it seems Void can't do that.
@MadeByKit can you let us know what model and provider you're using?
@agungsb Larger models (32B+) work best. DeepSeek R1 32B is great for Agent mode. Unfortunately, smaller models (8B, etc.) just aren't quite there yet, but they might be in the not-too-distant future.
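If anyone wants to try it locally, something like this should work with Ollama (the tag is from the Ollama library; the 32B build is a fairly large download, roughly 20 GB):

```bash
# Pull the 32B DeepSeek R1 build from the Ollama library
ollama pull deepseek-r1:32b

# Quick sanity check that it loads and responds before pointing Void at it
ollama run deepseek-r1:32b "Write a Python function that reverses a string."
```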
Just had the same issue using Llama 70B on Groq:
https://github.com/user-attachments/assets/dcdf77a5-d028-41c8-aebb-104f57afe316
> @agungsb Larger models (32B+) work best. DeepSeek R1 32B is great for Agent mode. Unfortunately, smaller models (8B, etc.) just aren't quite there yet, but they might be in the not-too-distant future.
This doesn't seem to be true; we are using larger models (70B) and they have these same issues. I tried both Ollama and LM Studio. The local file system is just never touched and no files are edited. At most, the model sits there and talks about what it would do or wants to do, but never does it. Worst case, it errors out on read_file and the file system is never touched.
Which providers and models do work, if Ollama or LM Studio with GGUF models aren't supported?
@duaneking Devstral works and Qwen3 works, even local versions on an M1 Pro with 32 GB.
Try OpenRouter to see if your models are configured properly.
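For example, something like this against OpenRouter's OpenAI-compatible endpoint (the model slug is just an illustration; substitute whichever one you're testing):

```bash
# Quick sanity check through OpenRouter's OpenAI-compatible API
# ($OPENROUTER_API_KEY is a placeholder for your own key)
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek/deepseek-r1",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }'
```

If the model behaves there but not locally, the issue is more likely your local server setup than Void or the model itself.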
Try increasing the context window size.
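Rough sketch of doing that with an Ollama Modelfile (the base tag and the num_ctx value are just examples; pick what fits your RAM):

```bash
# Create a variant of the model with a larger context window; Ollama's
# small default num_ctx can truncate Void's agent/tool-use prompts
cat > Modelfile <<'EOF'
FROM qwen2.5-coder:32b
PARAMETER num_ctx 32768
EOF

ollama create qwen2.5-coder-32k -f Modelfile
# Then select qwen2.5-coder-32k in Void instead of the base tag
```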
The Devstral model that worked for me is here: https://huggingface.co/unsloth/Devstral-Small-2505-GGUF#ollama (I used Ollama).
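Roughly what I ran, if it helps (Ollama can pull GGUFs straight from Hugging Face; the Q4_K_M quant tag is just the one I believe I used):

```bash
# Run the unsloth Devstral GGUF directly from Hugging Face via Ollama
ollama run hf.co/unsloth/Devstral-Small-2505-GGUF:Q4_K_M
# Omit the :Q4_K_M suffix to let Ollama pick the repo's default quant
```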
- Nobody I know is running this on a Mac.
- The system in question has 192 GB of RAM and runs models fine.
- Not a single model has worked for me. The common thread is that they all talk a big game but never actually do anything; every single one just sits there inert.
That would have to be an LLM problem across the board, not a Void problem (unfortunately).
These same models work outside of Void, but never in it.
Same issue here with qwen2.5-coder, i.e. the model Void recommends for self-hosted Llama setups?