
Chat Template not supporting Qwen_QwQ-32B on second question

Open ilgrank opened this issue 1 year ago • 3 comments

Hi. I just downloaded Qwen_QwQ-32B and it worked almost perfectly in GPT4All 3.10. The template seems to handle the first question well, except for a stray </think> displayed at the end of the first reasoning block:

So I'll draft code step by step, making sure to explain variables and logic.
</think>

... but then, for any subsequent question, I always get:

Error: Failed to parse chat template: Unknown method: split at row 23, column 43:
    {%- elif message.role == "assistant" and not message.tool_calls %}
        {%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
                                          ^
        {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
 at row 23, column 43:
    {%- elif message.role == "assistant" and not message.tool_calls %}
        {%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
                                          ^
        {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
 at row 23, column 65:
    {%- elif message.role == "assistant" and not message.tool_calls %}
        {%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
                                                                ^
        {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
 at row 23, column 9:
    {%- elif message.role == "assistant" and not message.tool_calls %}
        {%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
        ^
        {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
 at row 22, column 71:
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" and not message.tool_calls %}
                                                                      ^
        {%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
 at row 20, column 5:
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
    ^
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
 at row 19, column 31:
{%- endif %}
{%- for message in messages %}
                              ^
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
 at row 19, column 1:
{%- endif %}
{%- for message in messages %}
^
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
 at row 1, column 1:
{%- if tools %}
^
    {{- '<|im_start|>system\n' }}
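
For context on the error above: GPT4All's built-in template engine apparently does not implement the string split method that Qwen's template relies on. The failing line is meant to drop the reasoning block from earlier assistant turns before re-feeding them to the model. A small Python sketch of that intended behavior (illustrative only, not GPT4All code):

```python
def strip_reasoning(content: str) -> str:
    """Keep only the text after the last closing </think> tag,
    mirroring the template's split('</think>')[-1].lstrip('\n')."""
    return content.split('</think>')[-1].lstrip('\n')

print(strip_reasoning("Let me think...\n</think>\nFinal answer."))  # -> Final answer.
```

When the assistant message contains no </think> tag at all, split returns the whole string unchanged, so the line is a no-op for non-reasoning replies.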

The default template probably needs a tweak to support QwQ-32B?

ilgrank avatar Mar 07 '25 03:03 ilgrank

Can confirm:

  • The opening <think> tag is apparently not displayed, while the closing </think> tag is
  • The above error message appears upon entering the second prompt

brankoradovanovic-mcom avatar Mar 12 '25 16:03 brankoradovanovic-mcom

About the opening <think> tag, from the HF model card:

Ensure the model starts with "<think>\n" to prevent generating empty thinking content, which can degrade output quality. If you use apply_chat_template and set add_generation_prompt=True, this is already automatically implemented, but it may cause the response to lack the <think> tag at the beginning. This is normal behavior.

brankoradovanovic-mcom avatar Mar 18 '25 16:03 brankoradovanovic-mcom

Hey, I had the same problem. It's not a perfect solution, but here is a workaround that works fine for me: just remove the .split('</think>')[-1].lstrip('\n') part from the template line. With that change, GPT4All 3.10 works for me without any formatting errors or similar issues.
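
Concretely, the assistant branch of the template would then look like this (a sketch, assuming you edit the chat template in the model's settings in GPT4All):

```jinja
{%- elif message.role == "assistant" and not message.tool_calls %}
    {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
```

Note the tradeoff: with the split removed, earlier assistant replies are fed back into the context including their reasoning text, which consumes more context window, but it avoids the parse error entirely.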

timwirt avatar May 28 '25 12:05 timwirt