
Assistant Panel: <error interacting with language model response contained no choices>

Open dustookk opened this issue 9 months ago • 9 comments

Summary

After updating Zed from v0.180.4 to v0.181.5, chatting with an OpenAI model in the Assistant Panel fails with the error "Response contained no choices".

Description

Steps to trigger the problem:

  1. update Zed from v0.180.4 to v0.181.5
  2. open assistant panel and select a openai model like gpt-4o
  3. start to chat with assistant

An error pops out: Response contained no choices

[Image: screenshot of the error message]

Additional notes:

  1. Everything works fine if I switch to a GitHub Copilot model. Through packet-capture software I observed that the HTTP responses are the same, so I believe the logic in copilot_chat.rs is simply more robust.
  2. Everything works fine in v0.180.4; the error appears starting in v0.181.5.

I suspect it was introduced by PR #28051.

@maxdeviant Can you please check it out? Thanks!

Zed Version and System Specs

Zed: v0.181.5 (Zed)
OS: macOS 15.4.0
Memory: 16 GiB
Architecture: aarch64

dustookk avatar Apr 16 '25 09:04 dustookk

I encountered the same problem.

lekai63 avatar Apr 16 '25 11:04 lekai63


+1 to this. I also get this error.

System Info

Zed Version: Zed 0.183.10 7736c850ae12764710bcf938e63a494f334710d4
OS: Debian 12 bookworm

0xmzk avatar Apr 24 '25 08:04 0xmzk

Another +1. I have also updated Zed on my other computer and encountered the same issue. Not only is there an error when using the AI assistant, but the keybindings on both my computers have also changed. I am using VSCode defaults; previously ctrl+enter would submit the chat in the AI assistant, whereas now it opens the inline prompt UI. This is the same across both my computers.

➜  ~ zed --version
Zed 0.183.10 7736c850ae12764710bcf938e63a494f334710d4

0xmzk avatar Apr 24 '25 10:04 0xmzk

Another +1.

❯ zed --version
Zed 0.183.11 – /Applications/Zed.app

But I can see the full response content normally; it still shows "Response contained no choices" anyway.

FinleyGe avatar Apr 27 '25 07:04 FinleyGe

+1

zed --version
Zed 0.185.10 b5c6567924293e22b19089ea3fbe931472b6dba7  – .local/zed.app/libexec/zed-editor

With a custom OpenAI-like provider on Manjaro, using Llama-4-Maverick-17B-128E.

Paniceres avatar May 07 '25 22:05 Paniceres

+1

With a custom OpenAI-compatible API provider, with any model.

Zed Version and System Specs

Zed: v0.185.16 (Zed)
OS: Linux X11 ubuntu 24.04
Memory: 31.1 GiB
Architecture: x86_64
GPU: Intel(R) Xe Graphics (TGL GT2) || Intel open-source Mesa driver || Mesa 24.2.8-1ubuntu1~24.04.1

diegonix avatar May 13 '25 10:05 diegonix

I was successfully using Zed and Azure OpenAI with the proxy from https://github.com/zed-industries/zed/issues/4321#issuecomment-2759774507

Then one of the updates last week (12th-16th May) broke that, and I just get the "Response contained no choices" message.

Macos 15.4.1 w/ Zed 0.186.9

d5ve avatar May 19 '25 02:05 d5ve

Same, with a custom OpenAI-compatible provider and any model. I also get this message in the Agent panel on any prompt.

❯ zed --version
Zed 0.188.5 17079151fefcb672152c79c03e4efdf7d4d27270  – $HOME/.local/zed.app/libexec/zed-editor

dmarjenburgh avatar Jun 01 '25 14:06 dmarjenburgh

I'm experiencing the same issue when using OpenAI models hosted on an OpenWebUI server (https://openwebui.com/). From what I understand, Zed uses the OpenAI streaming Chat Completions API. The error arises in the map_event function, which Zed uses to convert streamed chat-completion objects from OpenAI into its own typed LanguageModelCompletionEvent.

This function fails to handle empty choices because, typically, the choices field of a response chunk is only empty when usage stats were requested, and Zed does not request them when creating the chat completion request (source). Please refer to the documentation for more details on choices in response chunks.

Unfortunately, when working with OpenAI models through OpenWebUI, administrators can configure the models to include usage stats in every chat completion response (source). Therefore, Zed may encounter empty choices even without requesting usage reports itself. Here are some logs I added in map_event to capture the stream of chat-completion objects:

...
2025-06-16T18:44:36+02:00 ERROR [language_models::provider::open_ai] the event: ResponseStreamEvent { created: 1750092270, model: "o1-2024-12-17", choices: [ChoiceDelta { index: 0, delta: ResponseMessageDelta { role: None, content: Some("."), tool_calls: None }, finish_reason: None }], usage: None }
2025-06-16T18:44:36+02:00 ERROR [language_models::provider::open_ai] the event: ResponseStreamEvent { created: 1750092270, model: "o1-2024-12-17", choices: [ChoiceDelta { index: 0, delta: ResponseMessageDelta { role: None, content: None, tool_calls: None }, finish_reason: Some("stop") }], usage: None }
2025-06-16T18:44:36+02:00 ERROR [language_models::provider::open_ai] the event: ResponseStreamEvent { created: 1750092270, model: "o1-2024-12-17", choices: [], usage: Some(Usage { prompt_tokens: 2991, completion_tokens: 362, total_tokens: 3353 }) }

I have proposed a potential workaround in my pull request #32823. It modifies the function to handle these usage-only chunks of response data consistently. I have tested this change for my use case, and it resolves the issue.
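The idea behind the workaround can be sketched roughly as follows. This is a minimal standalone illustration, not Zed's actual code: the types (StreamChunk, Choice, CompletionEvent, etc.) are hypothetical stand-ins for Zed's real ResponseStreamEvent and LanguageModelCompletionEvent. The key change is that a chunk with an empty choices array but a populated usage field is treated as a usage report rather than an error.

```rust
// Hypothetical stand-ins for the real streamed-response types.
#[derive(Debug, PartialEq)]
struct Usage {
    prompt_tokens: u32,
    completion_tokens: u32,
}

struct Choice {
    content: Option<String>,
}

struct StreamChunk {
    choices: Vec<Choice>,
    usage: Option<Usage>,
}

#[derive(Debug, PartialEq)]
enum CompletionEvent {
    Text(String),
    UsageUpdate(Usage),
}

fn map_event(chunk: StreamChunk) -> Result<Vec<CompletionEvent>, String> {
    // Previously: an empty `choices` array was an immediate error.
    // Now: if the chunk carries only usage stats, report those instead.
    if chunk.choices.is_empty() {
        return match chunk.usage {
            Some(usage) => Ok(vec![CompletionEvent::UsageUpdate(usage)]),
            None => Err("Response contained no choices".to_string()),
        };
    }
    // Normal path: turn each content delta into a text event.
    let mut events = Vec::new();
    for choice in chunk.choices {
        if let Some(text) = choice.content {
            events.push(CompletionEvent::Text(text));
        }
    }
    Ok(events)
}

fn main() {
    // A usage-only chunk (like the last log line above) no longer errors.
    let chunk = StreamChunk {
        choices: vec![],
        usage: Some(Usage {
            prompt_tokens: 2991,
            completion_tokens: 362,
        }),
    };
    println!("{:?}", map_event(chunk).unwrap());
}
```

With this shape, providers that always append a final usage chunk (as OpenWebUI can be configured to do) stream cleanly, while a chunk that has neither choices nor usage still surfaces the original error.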

timtimjnvr avatar Jun 16 '25 22:06 timtimjnvr

The bug is fixed (tested with 0.193).

dustookk avatar Jul 03 '25 07:07 dustookk