CodeQwen returns extra white space for code completion
Other models, such as DeepSeek, work without a problem.
Thanks for reporting the issue - I have also observed it and am looking into debugging. It seems that CodeQwen's tokenizer treats `_` as a space, regardless of context.
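For anyone who wants to poke at this directly, here is a minimal repro sketch. It assumes the Hugging Face `transformers` tokenizer and the `Qwen/CodeQwen1.5-7B` checkpoint - both are my assumptions for illustration, not something confirmed in this thread:

```python
# Minimal sketch (assumed setup, not Tabby's code): encode a string with a
# leading space and inspect the pieces. In SentencePiece-style vocabs the
# space is folded into the piece as the underline character "▁" (often
# rendered as "_"), so decoding that token in isolation reintroduces a
# leading space.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/CodeQwen1.5-7B")

ids = tok.encode(" bar", add_special_tokens=False)
print(tok.convert_ids_to_tokens(ids))  # pieces, e.g. ['▁bar']
print(repr(tok.decode(ids)))           # ' bar' - the space comes back
```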
Can confirm this is present in upstream llama.cpp as well.
Cross-posted at: https://github.com/ggerganov/llama.cpp/issues/7050
Yeah, the extra whitespace shows up when serving with vLLM as well, so it's a model issue rather than a serving-framework issue. The only workaround I can think of is to shift the cursor left a few characters, request the completion there, and then reconcile the result against the overlapping substring.
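A rough sketch of that idea, with hypothetical names (`request_completion` is an assumed callable, not Tabby's API): move the request point back a few characters, then strip the re-emitted overlap before showing the suggestion.

```python
def strip_overlap(tail: str, completion: str) -> str:
    """Drop the part of `completion` that re-emits `tail`."""
    # Ideal case: the completion begins by reproducing the shifted-over
    # characters exactly, so the whole tail can be stripped.
    if completion.startswith(tail):
        return completion[len(tail):]
    # Otherwise fall back to the longest suffix of `tail` that matches a
    # prefix of the completion (the "overlapping substrings" check).
    for k in range(len(tail) - 1, 0, -1):
        if completion.startswith(tail[-k:]):
            return completion[k:]
    return completion


def complete_with_shift(prefix: str, request_completion, shift: int = 8) -> str:
    """Request a completion `shift` characters before the real cursor."""
    shift = min(shift, len(prefix))
    cut = len(prefix) - shift
    tail = prefix[cut:]  # characters the model is expected to re-emit
    return strip_overlap(tail, request_completion(prefix[:cut]))
```

Whether this actually avoids the spurious space depends on where the shifted point lands, so treat it as a starting point rather than a fix.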
Observed that CodeQwen does not always return extra whitespace, and sometimes the leading whitespace is meaningful. Therefore, trimming the leading whitespace may not be a viable approach.
We've confirmed this is actually an issue with Qwen's vocab - it should be fixed in the next CodeQwen release, based on my communication with the Qwen team.