
[Bug] qwen3-coder can't edit file

Open Sherlock-Holo opened this issue 5 months ago • 11 comments

Version: 1.99.30044
Void Version: 1.4.9
Commit: b5a41840a0ce29fe5a86b2fa07c26b07f92684d2
Date: 2025-06-23T08:11:20.689Z
Electron: 34.5.8
ElectronBuildId: undefined
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0

When using qwen3-coder, it can never use edit file to modify code: it always reports "No changes found", and the error message is Error: Error: No Search/Replace blocks were received!

This forces qwen3-coder to read the whole file and use write file to modify code instead, and that approach seems to use more tokens.

Sherlock-Holo avatar Aug 01 '25 03:08 Sherlock-Holo
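For anyone hitting the same error: the message suggests the edit tool expects the model's reply to contain SEARCH/REPLACE blocks and bails out when none can be parsed. Below is a minimal sketch of that kind of parser, assuming a marker convention like <<<<<<< SEARCH / ======= / >>>>>>> REPLACE; the exact markers Void uses may differ.

// Hypothetical marker format -- not necessarily Void's exact one:
//
//   <<<<<<< SEARCH
//   ...text to find in the file...
//   =======
//   ...text to replace it with...
//   >>>>>>> REPLACE

interface SearchReplaceBlock {
  search: string;   // text the tool should locate in the file
  replace: string;  // text it should be replaced with
}

// Pull every SEARCH/REPLACE block out of a model reply.
function parseBlocks(reply: string): SearchReplaceBlock[] {
  const pattern =
    /<<<<<<< SEARCH\n([\s\S]*?)\n=======\n([\s\S]*?)\n>>>>>>> REPLACE/g;
  const blocks: SearchReplaceBlock[] = [];
  for (const match of reply.matchAll(pattern)) {
    blocks.push({ search: match[1], replace: match[2] });
  }
  return blocks;
}

// If the model answers with plain code instead of marker blocks,
// nothing parses and an error like the one in this issue is the result.
const reply = "Here is the fixed function:\n\nfunc main() {}\n";
if (parseBlocks(reply).length === 0) {
  throw new Error("No Search/Replace blocks were received!");
}

If that is what is happening, it would also explain the fallback described above: once no blocks parse, the model's only remaining path is to rewrite the whole file with write file.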

I have reproduced the same issue.

zh3yu avatar Aug 06 '25 09:08 zh3yu

Same issue: all models served by Ollama, such as qwen3-coder or qwen3, cannot apply edits to files.

ngocthua830 avatar Aug 08 '25 07:08 ngocthua830

I'm adding my voice here, facing the same issue with Qwen3-Coder

hknoener avatar Aug 09 '25 21:08 hknoener

I can't get any Ollama model to edit files. Works with Gemini models, but not any served by Ollama.

dgvigil avatar Aug 15 '25 17:08 dgvigil

Looking into it, will have this fixed ASAP. I'm unable to reproduce the issue on Mac (qwen3 models can edit files for me).

What OS are you using?

Also, is it a problem for all Ollama models in general, or just qwen3?

mathewpareles avatar Aug 21 '25 05:08 mathewpareles

I am using a Mac

VSCode Version: 1.99.30044
Void Version: 1.4.9
Commit: b5a41840a0ce29fe5a86b2fa07c26b07f92684d2
Date: 2025-06-23T08:09:51.384Z
Electron: 34.3.2
ElectronBuildId: undefined
Chromium: 132.0.6834.210
Node.js: 20.18.3
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.6.0

None of the models I try from Ollama, whether running on my MacBook or on a beefier server in my homelab, will edit files. Not even a simple "Create a .voidrules file for me" chat with the Agent generates any code changes; it only offers code to copy/paste.

dgvigil avatar Aug 21 '25 18:08 dgvigil
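One way to narrow this down, independent of Void, is to check whether the Ollama-served model emits tool calls at all through Ollama's OpenAI-compatible endpoint. A rough sketch follows, assuming a default local install at http://localhost:11434 and a pulled qwen3-coder model (adjust both to your setup); the write_file tool here is just a probe, not Void's real tool definition.

// Quick check: does the Ollama-served model return tool_calls at all?
async function checkToolCalls(): Promise<void> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen3-coder", // assumed model name; use whatever you pulled
      messages: [
        { role: "user", content: "Create an empty file named test.txt" },
      ],
      tools: [
        {
          type: "function",
          function: {
            name: "write_file", // hypothetical tool, only to probe tool calling
            description: "Write content to a file",
            parameters: {
              type: "object",
              properties: {
                path: { type: "string" },
                content: { type: "string" },
              },
              required: ["path", "content"],
            },
          },
        },
      ],
    }),
  });
  const data = await res.json();
  const toolCalls = data.choices?.[0]?.message?.tool_calls;
  console.log(toolCalls ?? "no tool_calls returned");
}

checkToolCalls().catch(console.error);

If tool_calls also comes back empty here, the problem is likely on the model/serving side rather than in Void's apply logic.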

> Looking into it, will have this fixed ASAP. I'm unable to reproduce the issue on Mac (qwen3 models can edit files for me).
>
> What OS are you using?
>
> Also, is it a problem for all Ollama models in general, or just qwen3?

I'm on Windows 11, using LM Studio. I've already tried Qwen3 Coder and Gemma 3; both have the same problem. They can't see the file fully, and they can't edit it either.

hknoener avatar Aug 21 '25 20:08 hknoener

Same problem on my Mac.

(screenshot attached)

milon27 avatar Aug 25 '25 04:08 milon27

Got the same problem as well! And on top of this, none of the Ollama models get the chat history served to them.

Pr0fe5s0r avatar Sep 01 '25 17:09 Pr0fe5s0r

I switched back to Cursor with a free Gemini API key.

Local LLMs don't give me good coding results. What about you guys? Does any model work for you?

milon27-pyng avatar Sep 01 '25 17:09 milon27-pyng

I tried some 14B models for code on my homelab server, but saw a comment that they are too small. I rented a server on Vast.ai, and neither devstral:24b nor llama3.1:70b was able to use tools to update files.

I used the following configs:

// llama3.1:70b
{
  "contextWindow": 131072,
  "reservedOutputTokenSpace": 8192,
  "supportsSystemMessage": "system-role",
  "specialToolFormat": "openai-style",
  "supportsFIM": false,
  "reasoningCapabilities": {
    "supportsReasoning": true,
    "canTurnOffReasoning": true,
    "canIOReasoning": true,
    "reasoningReservedOutputTokenSpace": 16384,
    "openSourceThinkTags": [
      "<think>",
      "</think>"
    ]
  }
}
// devstral:24b
{
  "contextWindow": 65536,
  "reservedOutputTokenSpace": 4096,
  "supportsSystemMessage": "system-role",
  "specialToolFormat": "anthropic-style",
  "supportsFIM": true,
  "reasoningCapabilities": false
}

A avatar Nov 10 '25 12:11 A
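For reference, both JSON objects in the comment above fit roughly the shape sketched below. The field names are copied from the configs themselves; the interface names are made up for illustration and are not Void's actual types.

// Hypothetical interfaces matching the fields used in the configs above.
interface ReasoningCapabilities {
  supportsReasoning: boolean;
  canTurnOffReasoning: boolean;
  canIOReasoning: boolean;
  reasoningReservedOutputTokenSpace: number;
  openSourceThinkTags: string[]; // e.g. ["<think>", "</think>"]
}

interface ModelCapabilitySettings {
  contextWindow: number;             // total tokens the model can see
  reservedOutputTokenSpace: number;  // tokens held back for the reply
  supportsSystemMessage: string;     // "system-role" in both configs
  specialToolFormat: string;         // "openai-style" or "anthropic-style"
  supportsFIM: boolean;              // fill-in-the-middle completion
  reasoningCapabilities: ReasoningCapabilities | false;
}

// The llama3.1:70b config from the comment above, typed:
const llama31_70b: ModelCapabilitySettings = {
  contextWindow: 131072,
  reservedOutputTokenSpace: 8192,
  supportsSystemMessage: "system-role",
  specialToolFormat: "openai-style",
  supportsFIM: false,
  reasoningCapabilities: {
    supportsReasoning: true,
    canTurnOffReasoning: true,
    canIOReasoning: true,
    reasoningReservedOutputTokenSpace: 16384,
    openSourceThinkTags: ["<think>", "</think>"],
  },
};
// The devstral config type-checks the same way, with
// reasoningCapabilities: false.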