
Non-interactive handling of stream errors

restlessronin opened this issue 2 months ago · 0 comments

For example, sending an invalid request currently generates the following log output:

iex(21)> chat_stream = openai |> ChatCompletion.create(chat_req, stream: true)
%{
  status: 400,
  headers: [
    {"date", "Tue, 09 Apr 2024 08:58:57 GMT"},
    {"server", "uvicorn"},
    {"content-length", "147"},
    {"content-type", "application/json"}
  ],
  body_stream: #Function<52.53678557/2 in Stream.resource/3>,
  task_pid: #PID<0.2014.0>
}
iex(22)> chat_stream.body_stream |> Stream.flat_map(& &1) |> Enum.each(fn x -> IO.puts(inspect(x)) end)
2024-04-09 09:58:58.669 [warning] pid=<0.789.0> application=openai_ex module=OpenaiEx.HttpSse function=next_sse/1 line=66
[message: "Unexpected value in sse 'acc' after ':done' event received", value: "{\"object\":\"error\",\"message\":\"Conversation roles must alternate user/assistant/user/assistant/...\",\"type\":\"BadRequestError\",\"param\":null,\"code\":400}"]

I guess this was the purpose of your Logger.warning call in the first place, but the problem is that we are logging a perfectly valid response payload instead of adding it to the request state.
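One way to address this (a rough sketch only; the function and variable names below are my assumptions, not the actual OpenaiEx internals) would be to try decoding the leftover SSE accumulator as JSON when the stream closes, and emit it as a final event instead of logging it:

```elixir
# Hypothetical sketch: if the trailing `acc` parses as JSON (e.g. an error
# body on a 4xx response), surface it to the consumer; otherwise keep the
# existing warning for genuinely unexpected leftover bytes.
defp flush_sse_acc(acc, state) do
  case Jason.decode(acc) do
    {:ok, decoded} ->
      # emit the decoded payload as a final stream element
      {[%{data: decoded}], state}

    {:error, _} ->
      Logger.warning(
        message: "Unexpected value in sse 'acc' after ':done' event received",
        value: acc
      )

      {[], state}
  end
end
```

That way the caller sees the error payload through the normal `body_stream` path rather than only in the log.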

To see what is going on, you can check this curl request:

.../sn-ml-prompts❯ curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "CodeLlama-34b-Instruct-hf",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},{"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the world series in 2020?"}
        ],
       "temperature":0,
       "frequency_penalty": 0,
       "presence_penalty":0.1
    }' -v
*   Trying localhost:8000...
* Connected to localhost (localhost) port 8000
> POST /v1/chat/completions HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/8.4.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 385
>
< HTTP/1.1 400 Bad Request
< date: Tue, 09 Apr 2024 08:54:09 GMT
< server: uvicorn
< content-length: 147
< content-type: application/json
<
* Connection #0 to host localhost left intact
{"object":"error","message":"Conversation roles must alternate user/assistant/user/assistant/...","type":"BadRequestError","param":null,"code":400}%

So OpenaiEx returns the following, without the actual error content:

%{
  status: 400,
  headers: [
    {"date", "Tue, 09 Apr 2024 08:36:01 GMT"},
    {"server", "uvicorn"},
    {"content-length", "147"},
    {"content-type", "application/json"}
  ],
  body_stream: #Function<52.53678557/2 in Stream.resource/3>,
  task_pid: #PID<0.2001.0>
}

Ideally, this would not be logged as a warning; instead, the error event would be added to the body_stream so that we can consume it when status != 200, or it would be returned in the struct, as in the following example:

%{
  status: 400,
  error: %{
    "message" => "Conversation roles must alternate user/assistant/user/assistant/...",
    "type" => "BadRequestError",
    "param" => nil
  },
  headers: [
    {"date", "Tue, 09 Apr 2024 08:36:01 GMT"},
    {"server", "uvicorn"},
    {"content-length", "147"},
    {"content-type", "application/json"}
  ],
  body_stream: #Function<52.53678557/2 in Stream.resource/3>,
  task_pid: #PID<0.2001.0>
}
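With either shape, the caller could branch on the status. A sketch of what consumption might look like if the `error` field were returned as proposed (illustrative only; this API does not exist yet):

```elixir
# Hypothetical caller-side handling, assuming the proposed `error` field:
case ChatCompletion.create(openai, chat_req, stream: true) do
  %{status: 200, body_stream: stream} ->
    # happy path: stream the completion chunks as today
    stream |> Stream.flat_map(& &1) |> Enum.each(&IO.inspect/1)

  %{status: status, error: error} ->
    # non-200: the decoded error payload is available directly
    IO.puts("Request failed (#{status}): #{error["message"]}")
end
```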

Originally posted by @aramallo in https://github.com/restlessronin/openai_ex/issues/83#issuecomment-2044502289

restlessronin · Apr 12 '24 06:04