[Bug] qwen3-coder can't edit file
Version: 1.99.30044
Void Version: 1.4.9
Commit: b5a41840a0ce29fe5a86b2fa07c26b07f92684d2
Date: 2025-06-23T08:11:20.689Z
Electron: 34.5.8
ElectronBuildId: undefined
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
When using qwen3-coder, it can never use the edit-file tool to modify code. It always reports "No changes found", and the error message is `Error: Error: No Search/Replace blocks were received!`
Because of this, qwen3-coder has to read the whole file and use the write-file tool to modify code instead, which seems to use far more tokens.
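For context on what that error likely means: edit tools of this style expect the model's reply to contain fenced search/replace blocks, and raise an error when zero blocks parse out of the reply. The sketch below is a minimal, hypothetical parser for aider-style markers; the exact markers Void expects are an assumption, not taken from its source.

```python
import re

# Hypothetical markers (aider-style); Void's actual format may differ.
BLOCK_RE = re.compile(
    r"<<<<<<< SEARCH\n(.*?)\n=======\n(.*?)\n>>>>>>> REPLACE",
    re.DOTALL,
)

def parse_edit_blocks(model_output: str) -> list[tuple[str, str]]:
    """Return (search, replace) pairs found in the model's reply."""
    return BLOCK_RE.findall(model_output)

reply_with_block = (
    "<<<<<<< SEARCH\n"
    "old line\n"
    "=======\n"
    "new line\n"
    ">>>>>>> REPLACE"
)
reply_without_block = "Here is the full file instead: ..."

assert parse_edit_blocks(reply_with_block) == [("old line", "new line")]
# A reply containing no blocks parses to an empty list -- the likely
# trigger for "No Search/Replace blocks were received!".
assert parse_edit_blocks(reply_without_block) == []
```

Under this reading, the bug would be either the model never emitting the expected markers, or the client parsing with markers the model was not prompted to use.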
I have reproduced the same issue.
Same issue, all models served by Ollama, such as qwen3-code or qwen3, cannot apply edits to files.
I'm adding my voice here, facing the same issue with Qwen3-Coder
I can't get any Ollama model to edit files. Works with Gemini models, but not any served by Ollama.
Looking into it, will have this fixed ASAP. I'm unable to reproduce the issue on Mac (qwen3 models can edit files for me).
What OS are you using?
Also, is it a problem for all Ollama models in general, or just qwen3?
I am using a Mac
VSCode Version: 1.99.30044
Void Version: 1.4.9
Commit: b5a41840a0ce29fe5a86b2fa07c26b07f92684d2
Date: 2025-06-23T08:09:51.384Z
Electron: 34.3.2
ElectronBuildId: undefined
Chromium: 132.0.6834.210
Node.js: 20.18.3
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.6.0
None of the Ollama models I try, whether running on my MacBook or on a beefier server in my homelab, will edit files. Even a simple "Create a .voidrules file for me" chat with the Agent generates no code changes, only code offered for copy/paste.
> Looking into it, will have this fixed ASAP. I'm unable to reproduce the issue on Mac (qwen3 models can edit files for me).
> What OS are you using?
> Also, is it a problem for all Ollama models in general, or just qwen3?
I'm on Windows 11, using LM Studio. I've already tried Qwen3 Coder and Gemma 3; both face the same problem. They can't see the file fully, and can't edit it either.
same problem on my mac
Got the same problem as well. And on top of this, none of the Ollama models get the chat history served to them.
I switched back to Cursor with a free Gemini API key.
Local LLMs don't provide good coding solutions for me. What about you guys? Does any model work for you?
I tried some 14B code models on my homelab server, but saw a comment that they are too small. I rented a server on Vast.ai, and neither devstral:24b nor llama3.1:70b was able to use tools to update files.
I used the following configs:
// llama3.1:70b
{
  "contextWindow": 131072,
  "reservedOutputTokenSpace": 8192,
  "supportsSystemMessage": "system-role",
  "specialToolFormat": "openai-style",
  "supportsFIM": false,
  "reasoningCapabilities": {
    "supportsReasoning": true,
    "canTurnOffReasoning": true,
    "canIOReasoning": true,
    "reasoningReservedOutputTokenSpace": 16384,
    "openSourceThinkTags": ["<think>", "</think>"]
  }
}
// devstral
{
  "contextWindow": 65536,
  "reservedOutputTokenSpace": 4096,
  "supportsSystemMessage": "system-role",
  "specialToolFormat": "anthropic-style",
  "supportsFIM": true,
  "reasoningCapabilities": false
}
}
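One difference worth noting between the two configs is `"specialToolFormat"`. Assuming it selects between the OpenAI and Anthropic tool-call wire formats (an assumption; Void's internals may differ), the two shapes are genuinely different, and a client that only parses one of them will see no tool calls from the other. A sketch of the two shapes and a normalizer:

```python
import json

# Illustrative only -- not Void's actual code. These are the standard
# wire shapes of the two public APIs the config names seem to reference.

# OpenAI-style: the arguments arrive as a JSON-encoded *string*.
openai_call = {
    "type": "function",
    "function": {
        "name": "edit_file",
        "arguments": json.dumps({"path": "main.py", "diff": "..."}),
    },
}

# Anthropic-style: the input arrives as an already-parsed object.
anthropic_call = {
    "type": "tool_use",
    "name": "edit_file",
    "input": {"path": "main.py", "diff": "..."},
}

def extract_args(call: dict) -> dict:
    """Normalize either tool-call shape to a plain dict of arguments."""
    if call.get("type") == "function":
        return json.loads(call["function"]["arguments"])
    if call.get("type") == "tool_use":
        return call["input"]
    raise ValueError("unrecognized tool-call format")

assert extract_args(openai_call) == extract_args(anthropic_call)
```

If a local server (Ollama/LM Studio) emits one shape while the client expects the other, tool calls would silently parse to nothing, which matches the symptoms reported above.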