The 'systemMessage' is not applied to built-in commands
Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the Continue Discord for questions
- [X] I'm not able to find an open issue that reports the same bug
- [X] I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS: macOS 14.4.1
- Continue: v0.9
- IDE: VSCode 1.87.2
Description
The `systemMessage` property is not being applied to the built-in commands (`/edit`, `/comment`, etc.) when using models with the `openai` provider. This issue assumes that including the `systemMessage` in those cases is desired/expected.
The `systemMessage` is being included with general queries in the sidebar.
In my particular case I'm pointing the `openai` provider at a vLLM server rather than at OpenAI directly, but the issue should be the same regardless.
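For reference, a minimal `config.json` along these lines reproduces the setup (the model name, `apiBase` URL, and message text are placeholders, not my actual values):

```json
{
  "models": [
    {
      "title": "vLLM (OpenAI-compatible)",
      "provider": "openai",
      "model": "my-served-model",
      "apiBase": "http://localhost:8000/v1",
      "systemMessage": "Always answer concisely and in English."
    }
  ]
}
```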
To reproduce
There is nothing directly in the Continue output window that will show the issue. For each of the "Look at the messages sent" steps in the write-up below, you can either:
- run the extension in a debugger with a breakpoint set at this point in the OpenAI LLM class (`core/llm/llms/OpenAI.ts`) and examine `body.messages` (or use a temporary log line; see the sketch below), or
- look at the messages received by the LLM server (possible in my case, since I can read the vLLM logs).
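If a debugger isn't convenient, a temporary log line dropped in at the same spot works too. A sketch (the helper and its placement are hypothetical; the point is just to print `body.messages` right before the request goes out):

```ts
// Hypothetical debugging helper: call it with the request body right before
// the request is sent in core/llm/llms/OpenAI.ts to dump the outgoing messages.
function logOutgoingMessages(body: { messages: { role: string; content: unknown }[] }): void {
  console.log("outgoing messages:", JSON.stringify(body.messages, null, 2));
}
```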
To reproduce:
- Set up the configuration to have a model with `provider` set to `openai`
- Include a `systemMessage` for that model
- Open the Continue output window
- Ask a question using the sidebar
- Look at the messages sent to see if the `systemMessage` is present (working)
- Select some code
- Invoke the `/comment` command
- Look at the messages sent to see if the `systemMessage` is present (not working)
- Select some code
- Invoke the `/edit` command
- Look at the messages sent to see if the `systemMessage` is present (not working; see the example below)
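Concretely, the check at each "Look at the messages sent" step is whether the configured system message shows up as the first entry of the outgoing messages array. Roughly what I observe (contents shortened for illustration):

```ts
// What the sidebar request contains, and what I'd expect /edit and /comment
// to send as well:
const expectedMessages = [
  { role: "system", content: "<the configured systemMessage>" },
  { role: "user", content: "<the prompt built by the command>" },
];

// What /edit and /comment actually send: no system message at all.
const actualMessages = [
  { role: "user", content: "<the prompt built by the command>" },
];
```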
Log output
No response
I've tested a change for this locally using the following:
```diff
diff --git a/core/llm/llms/OpenAI.ts b/core/llm/llms/OpenAI.ts
index da31df62..dacd16c8 100644
--- a/core/llm/llms/OpenAI.ts
+++ b/core/llm/llms/OpenAI.ts
@@ -122,8 +122,12 @@ class OpenAI extends BaseLLM {
     prompt: string,
     options: CompletionOptions,
   ): AsyncGenerator<string> {
+    const messages: ChatMessage[] = [{ role: "user", content: prompt }];
+    if (this.systemMessage && this.systemMessage.trim().length !== 0) {
+      messages.unshift({ role: "system", content: this.systemMessage });
+    }
     for await (const chunk of this._streamChat(
-      [{ role: "user", content: prompt }],
+      messages,
       options,
     )) {
       yield stripImages(chunk.content);
```
This changes the behavior to what I'd expect for my use case. However, I don't know whether a similar change should apply in the `_complete` function as well (I haven't tracked down which code pathway(s) use `_complete` instead of `_streamComplete`).
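If the same behaviour turns out to be wanted in `_complete`, one option might be to pull the message construction out into a shared helper so both pathways build the conversation identically. A rough sketch (the helper name and the standalone form are mine, not existing code; the real `ChatMessage` type lives in core):

```ts
// Minimal local type for the sketch; the real ChatMessage type is richer.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical helper: build the message list for a plain completion prompt,
// prepending the configured system message when one is set.
function buildMessages(systemMessage: string | undefined, prompt: string): ChatMessage[] {
  const messages: ChatMessage[] = [{ role: "user", content: prompt }];
  if (systemMessage && systemMessage.trim().length !== 0) {
    messages.unshift({ role: "system", content: systemMessage });
  }
  return messages;
}
```

Both `_streamComplete` and `_complete` could then call something like `buildMessages(this.systemMessage, prompt)` instead of constructing the message array inline.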