
Can't create or edit code files automatically like Cursor

MadeByKit opened this issue 8 months ago · 12 comments

Hello, thank you for this project, but I realized the llama3.2 model I'm using through Ollama can't modify my code directly; it can't even write code itself, so it's basically working like ChatGPT. Is there a way to make it work like Cursor, i.e. a way it can write and edit code itself?

MadeByKit · Apr 27 '25

How big is the Llama model you're using? It might be a model-intelligence thing, unfortunately.

andrewpareles · Apr 28 '25

@andrewpareles Do you have any suggestions for which Llama model to use?

agungsb · Apr 29 '25

> How big is the Llama model you're using? It might be a model-intelligence thing, unfortunately.

I don't think it's a model thing, because my model can write code too. I'm talking about the ability to create and even edit files directly on my PC the way Cursor works; it seems Void can't do that.

MadeByKit · Apr 29 '25

@MadeByKit Can you let us know what model and provider you're using?

andrewpareles · Apr 30 '25

@agungsb Larger models (32B+) work best. DeepSeek R1 32B is great for Agent mode. Unfortunately, smaller models (8B, etc.) just aren't quite there yet, but they might be in the not-too-distant future.

andrewpareles · Apr 30 '25

Just had the same issue using Llama 70B on Groq.

https://github.com/user-attachments/assets/dcdf77a5-d028-41c8-aebb-104f57afe316

LivioGama · May 24 '25

> @agungsb Larger models (32B+) work best. DeepSeek R1 32B is great for Agent mode. Unfortunately, smaller models (8B, etc.) just aren't quite there yet, but they might be in the not-too-distant future.

This doesn't seem to be true; we are using larger models (70B) and they have these same issues. Tried both Ollama and LM Studio. The local file system is just never touched, no files are edited, etc. At most, the model sits there and talks about what it would do or wants to do, but does not do it. Worst case, it errors out on read_file and the file system is never touched.

What providers and models do work, if Ollama or LM Studio with GGUF models are not supported?

duaneking · Jun 12 '25

@duaneking Devstral works, Qwen3 works, even local versions on an M1 Pro with 32 GB.

Try OpenRouter to see whether your models are configured properly.
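Another quick check, outside of any editor, is to hit Ollama's /api/chat endpoint with a tool schema directly and see whether the model emits structured tool calls at all. A minimal diagnostic sketch; the create_file tool and the model name below are made-up examples for illustration, not Void's actual tools:

```python
import json
import requests

# Probe whether a local Ollama model emits structured tool calls.
# The tool schema below is a hypothetical example, not a real Void tool.
payload = {
    "model": "llama3.3:70b",  # substitute the model you're testing
    "stream": False,
    "messages": [
        {"role": "user", "content": "Create a file named hello.txt containing 'hi'."}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "create_file",
            "description": "Create a file on disk with the given contents.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string"},
                    "contents": {"type": "string"},
                },
                "required": ["path", "contents"],
            },
        },
    }],
}

message = requests.post("http://localhost:11434/api/chat", json=payload).json()["message"]

# Agent-capable models answer with tool_calls; weaker ones answer in prose.
if message.get("tool_calls"):
    print("tool calls:", json.dumps(message["tool_calls"], indent=2))
else:
    print("prose only:", message.get("content"))
```

If a model never emits tool_calls here, no editor will be able to make it create or edit files.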

Try increasing the context window size; see the sketch below.
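Ollama defaults to a small context window (2048 tokens on many builds), which can silently truncate a long agent system prompt and the tool definitions, so the model never actually sees the tools. A sketch of raising it per-request via the API's options field; 32768 is just an example value, size num_ctx to what your hardware can hold:

```python
import requests

# Same endpoint as above, but with an enlarged context window so long
# agent prompts and tool schemas are not silently truncated.
payload = {
    "model": "devstral:latest",     # substitute your model
    "stream": False,
    "options": {"num_ctx": 32768},  # example value; scale to your RAM/VRAM
    "messages": [{"role": "user", "content": "ping"}],
}
print(requests.post("http://localhost:11434/api/chat", json=payload).json()["message"]["content"])
```

To make the larger window the model's default (so Void picks it up without any client-side options), a Modelfile containing `PARAMETER num_ctx 32768` followed by `ollama create <new-name> -f Modelfile` should also work.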

The Devstral model that worked for me is here: https://huggingface.co/unsloth/Devstral-Small-2505-GGUF#ollama (I used Ollama).

ro-mak · Jun 13 '25

  1. Nobody I know is running this on a Mac.
  2. The system in question has 192 GB of RAM and runs models fine.
  3. Not a single model has worked for me. The common thread seems to be that they all talk a big game but don't actually do anything. Every single model seems to be impotent.

duaneking · Jul 24 '25

Gotta be an LLM thing across the board, not a Void problem, unfortunately.

ColinRitman · Jul 26 '25

These same models work outside of Void, but never in it.

duaneking · Jul 31 '25

Same issue here with qwen2.5-coder, a.k.a. the model Void recommends for self-hosted Llama?

thenoname-gurl · Aug 08 '25