
Intermittent Jason.DecodeError while streaming output

bfolkens opened this issue 11 months ago

During periods of high volume, and particularly with some of the gpt-3.5 series models, OpenAI will occasionally split events across multiple chunks. The current approach of splitting each chunk on "\n" assumes every chunk contains only complete events, which is not always the case.

** (Jason.DecodeError) unexpected end of input at position 18
    (jason 1.4.0) lib/jason.ex:92: Jason.decode!/2
    (elixir 1.15.6) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
    (elixir 1.15.6) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
    (openai 0.6.1) lib/openai/stream.ex:57: anonymous fn/1 in OpenAI.Stream.new/1
    (elixir 1.15.6) lib/stream.ex:1626: Stream.do_resource/5
    (elixir 1.15.6) lib/stream.ex:690: Stream.run/1
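To illustrate the failure mode: when an SSE event straddles two HTTP chunks, decoding each chunk's lines directly hits a truncated JSON payload. A common remedy is to carry an unterminated trailing line over to the next chunk. This is a minimal hypothetical sketch of that idea (module and function names are illustrative, not the library's actual API), which only emits payloads from lines terminated by "\n":

```elixir
defmodule SSEBuffer do
  # Hypothetical sketch: accumulate bytes across HTTP chunks and only emit
  # lines that were terminated by "\n". The trailing partial line is kept in
  # the buffer for the next chunk, so a JSON decoder such as Jason.decode!/1
  # is never handed a truncated payload.
  #
  # Returns {complete_data_payloads, leftover_buffer}.
  def split_chunk(buffer, chunk) do
    parts = String.split(buffer <> chunk, "\n")
    # Everything except the last element is a complete line; the last element
    # is either "" (chunk ended on a newline) or a partial line to carry over.
    {complete, [rest]} = Enum.split(parts, -1)

    payloads =
      complete
      |> Enum.filter(&String.starts_with?(&1, "data: "))
      |> Enum.map(&String.replace_prefix(&1, "data: ", ""))
      |> Enum.reject(&(&1 == "[DONE]"))

    {payloads, rest}
  end
end

# Usage: the first chunk ends mid-event, so nothing is emitted yet;
# the second chunk completes it.
{[], buf} = SSEBuffer.split_chunk("", ~s(data: {"id":))
{[~s({"id":"1"})], ""} = SSEBuffer.split_chunk(buf, ~s("1"}\n))
```

Each emitted payload is a complete JSON string and can then be decoded safely; only the leftover buffer needs to survive between chunks.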

bfolkens avatar Mar 27 '24 01:03 bfolkens

Same problem here when using streaming with the Assistants API. For me it seems to happen once the assistant instructions text reaches a certain size. For now I reduced the instruction size and the problem no longer occurs.

JoaoSetas avatar Apr 18 '24 22:04 JoaoSetas

I'm having a similar issue with the latest models (GPT-4 and GPT-4o) while using the Assistants API. I get the following error when I try to stream the response of the threads_create_and_run function:

[error] Unexpected message: {#Reference<0.3374531620.451411971.196574>, :stream_next}

thiagomajesk avatar May 17 '24 17:05 thiagomajesk

@JoaoSetas , @thiagomajesk - have you tried with PR #61 ? I'm using that patch successfully in production at high volume and the problem no longer occurs.

bfolkens avatar May 19 '24 17:05 bfolkens

Hi, @bfolkens! I'm not sure whether this is yet another issue on top of that one or a separate issue, but the problem persists even with your branch. Check out this code:

OpenAI.threads_create_and_run(
  assistant_id: @id,
  model: "gpt-4o",
  stream: true,
  thread: %{messages: [%{role: "user", content: "Hi"}]}
)
|> Enum.to_list()

The same problem happens with both the Assistants beta API v1 and v2.

thiagomajesk avatar May 20 '24 12:05 thiagomajesk

@thiagomajesk sorry for the delay - it seems that the "Unexpected message ... :stream_next" issue you mentioned above is a separate one. Also, I looked at the openai.ex source, and threads_create_and_run/1 uses the same underlying function calls as completion, so PR #61 should cover the threads API as well.

Additionally, the error looks like it might be generated outside of this library. Are you able to locate the code path in your application that is generating that log message?

bfolkens avatar Jun 25 '24 14:06 bfolkens

I am also experiencing this issue intermittently and it's quite frustrating. I'm looking forward to the fix being merged!

stuartjohnpage avatar Jul 03 '24 20:07 stuartjohnpage

Jumping in here to thank @mgallo for all the hard work done implementing the OpenAI client, thank youuu 🙇 and @bfolkens for the patch that fixes the described issue, lovely ❤️

nickgnd avatar Jul 17 '24 09:07 nickgnd

Hey everyone, sorry I haven't been able to maintain the project as it deserves; life and work have been super busy. Big thanks to @bfolkens (again) for your contribution! I'll review the PR and, if it's all OK, publish the fix in the next few hours. I'll post here when it's released.

mgallo avatar Jul 17 '24 12:07 mgallo

The new patch (v0.6.2) has been released! 🎉 If you notice anything not working properly, please keep us posted. Thanks @bfolkens for your efforts!

mgallo avatar Jul 18 '24 09:07 mgallo