
The 'systemMessage' is not applied to built-in commands

Open · ajalexander opened this issue 1 year ago · 1 comment


Relevant environment info

- OS: macOS 14.4.1
- Continue: v0.9
- IDE: VSCode 1.87.2

Description

The systemMessage property is not being applied to the built-in commands (/edit, /comment, etc.) when using models with the openai provider. This issue assumes that including the systemMessage in those cases is desired/expected.

The systemMessage is being included with general queries in the sidebar.

In my particular case, I'm using a VLLM server rather than OpenAI directly, but the issue should be the same regardless.
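For reference, the model entry in my config.json looks roughly like this (the model name, apiBase, and system message text below are placeholders, not my exact values):

```json
{
  "models": [
    {
      "title": "vLLM (OpenAI-compatible)",
      "provider": "openai",
      "model": "my-model",
      "apiBase": "http://localhost:8000/v1",
      "systemMessage": "You are a concise, careful coding assistant."
    }
  ]
}
```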

To reproduce

There is nothing directly in the Continue output window that will show the issue. For each of the "Look at the messages sent" steps in the write-up below, you can either:

  • Run the extension in a debugger with a breakpoint set at the point in the OpenAI LLM class (core/llm/llms/OpenAI.ts) where the request body is built, and examine body.messages.
  • Look at the messages received by the LLM server (possible in my case since I can look at the VLLM logs).

To reproduce:

  1. Set up the configuration to have a model with provider set to openai
  2. Include a systemMessage for that model
  3. Open the Continue output window
  4. Ask a question using the sidebar
  5. Look at the messages sent to see if the systemMessage is present (working; see the example payloads after this list)
  6. Select some code
  7. Invoke the /comment command
  8. Look at the messages sent to see if the systemMessage is present (not working)
  9. Select some code
  10. Invoke the /edit command
  11. Look at the messages sent to see if the systemMessage is present (not working)
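To make the "Look at the messages sent" steps concrete, the difference I'm seeing in the request body is roughly the following (the prompt text that the sidebar and the built-in commands actually build is much longer; these are illustrative placeholders, not verbatim payloads).

Sidebar question (step 5), systemMessage present:

```json
[
  { "role": "system", "content": "<my systemMessage>" },
  { "role": "user", "content": "<my question>" }
]
```

/comment or /edit (steps 8 and 11), systemMessage missing:

```json
[
  { "role": "user", "content": "<prompt built by the built-in command>" }
]
```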

Log output

No response

ajalexander · Apr 04 '24 19:04

I've tested a change for this locally using the following:

diff --git a/core/llm/llms/OpenAI.ts b/core/llm/llms/OpenAI.ts
index da31df62..dacd16c8 100644
--- a/core/llm/llms/OpenAI.ts
+++ b/core/llm/llms/OpenAI.ts
@@ -122,8 +122,12 @@ class OpenAI extends BaseLLM {
     prompt: string,
     options: CompletionOptions,
   ): AsyncGenerator<string> {
+    const messages: ChatMessage[] = [{ role: "user", content: prompt }];
+    if (this.systemMessage && this.systemMessage.trim().length !== 0) {
+      messages.unshift({ role: "system", content: this.systemMessage });
+    }
     for await (const chunk of this._streamChat(
-      [{ role: "user", content: prompt }],
+      messages,
       options,
     )) {
       yield stripImages(chunk.content);

This changes the behavior to what I'd expect for my use case. However, I don't know whether a similar change should apply in the _complete function as well (I haven't tracked down which code pathway(s) use _complete instead of _streamComplete).
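If _complete turns out to need the same fix, one way to avoid duplicating the guard would be a small helper that both methods call. This is only a sketch, not code from the repo; the ChatMessage type is declared inline here so the snippet stands alone, whereas the real change would import the existing type:

```typescript
// Minimal stand-in for Continue's ChatMessage type, declared here only so the
// sketch is self-contained.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Build the messages array for a prompt-style call, prepending the configured
// systemMessage (if any) so built-in commands include it as well.
function buildCompletionMessages(
  prompt: string,
  systemMessage?: string,
): ChatMessage[] {
  const messages: ChatMessage[] = [{ role: "user", content: prompt }];
  if (systemMessage && systemMessage.trim().length !== 0) {
    messages.unshift({ role: "system", content: systemMessage });
  }
  return messages;
}
```

Then _streamComplete (and _complete, if it builds messages the same way) could pass buildCompletionMessages(prompt, this.systemMessage) to _streamChat instead of constructing the array inline.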

ajalexander · Apr 04 '24 20:04