
Specifying message template in model config

Open avoroshilov opened this issue 5 months ago • 1 comment

Validations

  • [x] I believe this is a way to improve. I'll try to join the Continue Discord for questions
  • [x] I'm not able to find an open issue that requests the same enhancement

Problem

Hello! I've been trying to configure Continue for use with my custom LLM server, and I was puzzled to see that I was receiving a single user message whose content was the whole context formatted with ChatML template framing. After digging for a bit, I discovered this code: https://github.com/continuedev/continue/blob/5f4a9e5189e87a5acef00165129448093edd0c1f/core/llm/autodetect.ts#L214

That code infers the template framing from the model name. I was able to work around the issue by spoofing the model name to contain "claude", so that the template is set to "none" and I receive the list of messages as it should be.
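
For context, the inference roughly amounts to substring checks on the model name. The sketch below is my simplified paraphrase, not the actual autodetect.ts code; guessTemplateType and the specific substrings are illustrative:

// Rough sketch of the name-based inference (simplified paraphrase,
// not the real autodetect.ts code; names and substrings are illustrative).
type TemplateType = "none" | "chatml" | "llama2";

function guessTemplateType(modelName: string): TemplateType {
  const lower = modelName.toLowerCase();
  // Models whose APIs accept role-tagged messages directly get "none",
  // i.e. the raw messages array is sent as-is.
  if (lower.includes("claude")) {
    return "none";
  }
  // Unrecognized names fall back to a text template such as ChatML,
  // which flattens the whole conversation into one formatted string.
  return "chatml";
}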

I tried configuring the various providers that the docs show accepting an apiBase URL (vLLM, llama.cpp, and others), but I always got this pre-formatted single string.
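
Concretely, the difference between what my server receives and what it expects looks roughly like this (illustrative payloads I reconstructed for an OpenAI-compatible chat completions endpoint, not captured traffic):

// What arrives when the template autodetect kicks in: ONE user message
// whose content is already ChatML-framed.
const flattened = {
  messages: [
    {
      role: "user",
      content:
        "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n" +
        "<|im_start|>user\nHello<|im_end|>\n" +
        "<|im_start|>assistant\n",
    },
  ],
};

// What the server expects instead, so it can apply its own chat template:
const raw = {
  messages: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Hello" },
  ],
};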

Solution

Can you please add a message_template parameter to the model config YAML, so that we can explicitly set it to "none" and receive the raw list of messages with roles/content? Frankly, it seems like this should be the default behavior, since servers usually apply the chat template to the messages themselves.

For example:

models:
  - name: my_custom_model
    provider: vllm
    model: my_model # instead of writing something like "claude-fake" to trigger template autodetect to "none"
    message_template: none
    apiBase: http://localhost:8000/v1
    roles:
      - chat
      - edit
      - apply
      - autocomplete
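
On the implementation side, I imagine the explicit option would simply take precedence over the name-based guess. A minimal sketch, reusing guessTemplateType from the sketch above; the messageTemplate field is the hypothetical addition this issue proposes, not an existing option:

// Hypothetical sketch of honoring the proposed config option.
// `messageTemplate` does not exist today; it is what this issue asks for.
function resolveTemplateType(
  modelName: string,
  options: { messageTemplate?: TemplateType },
): TemplateType {
  // An explicit setting in the model config wins over autodetection.
  if (options.messageTemplate !== undefined) {
    return options.messageTemplate;
  }
  return guessTemplateType(modelName);
}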

avoroshilov · Apr 27 '25 14:04