
Assistant: switching between AI models while remaining in the same chat can cause conflicts between the models

Open · rodrigosf672 opened this issue 8 months ago · 1 comment

System details:

Positron and OS details:

Positron Version: 2025.06.0 (Universal) build 39
Code - OSS Version: 1.99.0
Commit: 55dad0b8cac13849ee986eb3f46c75cfcaa599e0
Date: 2025-05-08T03:32:55.440Z
Electron: 34.3.2
Chromium: 132.0.6834.210
Node.js: 20.18.3
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.4.0

Interpreter details:

Python 3.13.0

Describe the issue:

When the user switches between AI models, conflicts can emerge. Additionally, the user is able to switch models "on the fly", while the answer to the previous prompt is still loading. In AI tools in general, switching between models often triggers the creation of a new chat instead of continuing in the same one (e.g., Gemini, ChatGPT). This is arguably good practice to prevent models from conflicting with each other.

Steps to reproduce the issue:

In addition to the ability to switch models "on the fly" while an answer is still being generated, the main issue to observe here is the conflict between models after switching, caused by the previous context left behind by the other model.

  1. Select a Gemini model. Ask "tell me which model you are." The Assistant responds: "I am a large language model, trained by Google."
  2. Select a Claude (Anthropic) model. Ask "tell me which model you are." The Assistant responds: "I apologize for my previous incorrect response. I am Claude, an AI assistant created by Anthropic. I aim to be direct and honest about my identity. I apologize for the earlier confusion where I incorrectly stated I was created by Google."
  3. Select the Gemini model again. Ask "tell me which model you are." The Assistant responds: "I am Claude, an AI assistant created by Anthropic," and keeps answering that way despite the model being Gemini. This is likely happening because the Gemini model picks up the previous context in the same chat and does not want to "lie to the user."
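
A minimal sketch of the likely mechanism (an illustration only, not Positron's actual code; the `ChatMessage` type and `provider.complete` interface are hypothetical): the full chat transcript, including turns produced by the previously selected model, is sent to whichever model is currently selected, so the new model tends to stay consistent with the earlier assistant turns rather than with its own identity.

```typescript
// Hypothetical sketch: the same chat history, including replies produced by
// the previously selected model, is sent to the currently selected provider.
interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

const history: ChatMessage[] = [
  { role: 'user', content: 'tell me which model you are' },
  // Reply produced earlier by the Gemini model:
  { role: 'assistant', content: 'I am a large language model, trained by Google.' },
  { role: 'user', content: 'tell me which model you are' },
  // Reply produced by Claude after the switch:
  { role: 'assistant', content: 'I am Claude, an AI assistant created by Anthropic.' },
];

// When the user switches back to Gemini and asks again, the request still
// carries the full transcript above, so the model tends to answer in a way
// that is consistent with the prior assistant turns.
async function askCurrentModel(
  provider: { complete(msgs: ChatMessage[]): Promise<string> },
  prompt: string,
): Promise<string> {
  history.push({ role: 'user', content: prompt });
  const reply = await provider.complete(history); // previous model's answers are part of the prompt
  history.push({ role: 'assistant', content: reply });
  return reply;
}
```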

Expected or desired behavior:

Although this might seem like a small issue, other problems may occur due to conflicts between models; this is just one example that demonstrates the conflict. In general, the expectation would be that when the user selects a new model, a new chat is created, as happens in Gemini, ChatGPT, and other AI tools.
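
A small sketch of the requested behavior (hypothetical class and method names, not Positron's API): start a fresh chat session whenever the selected model changes, so the new model never sees turns produced by the previous one.

```typescript
// Hypothetical sketch of the desired behavior: switching models starts a new chat.
interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

class ChatSession {
  private history: ChatMessage[] = [];

  constructor(private modelId: string) {}

  addUserTurn(content: string): void {
    this.history.push({ role: 'user', content });
  }

  switchModel(newModelId: string): ChatSession {
    if (newModelId === this.modelId) {
      return this;
    }
    // Behavior requested in the issue: create a fresh session so the new
    // model never receives context produced by the previous one.
    return new ChatSession(newModelId);
  }
}

// Usage: switching from Gemini to Claude yields a session with empty history.
let session = new ChatSession('gemini');
session.addUserTurn('tell me which model you are');
session = session.switchModel('claude'); // fresh session, no Gemini context
```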

Screenshot: (image attached)

rodrigosf672 · May 08 '25 15:05

@testlabauto

rodrigosf672 · May 08 '25 17:05