
Feedback/Notification on experimental_onFunctionCall Execution on Server

Open pivanov opened this issue 1 year ago • 2 comments

Feature Description

A mechanism by which I can tell that an OpenAI function call has been captured on the server. It would be great if this were bi-directional: server => client, client => server.

Use Case

I want to know (on the client) that my OpenAI function call has been captured on the server.

Additional context

Hey guys,

My question is similar to this one: Optionally pass server-side function calls back to the client https://github.com/vercel/ai/issues/404

experimental_StreamData is great, but it's sent after the function is complete, right?

If my function takes, say, 10 seconds to execute, I want to know that it has been captured so I can show a loading component in the meantime.

Maybe I've missed something? Or is this more of a feature request? :)

Hope this will help!
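To illustrate the request independently of the SDK: the server would announce the function call before running it, and the client would flip a loading flag when it sees that event. This is a minimal sketch of the desired pattern — the event names and shapes below are hypothetical, not part of the ai package's API.

```typescript
// Hypothetical event shapes for a server => client notification stream.
type StreamEvent =
  | { type: 'function-call-started'; name: string }
  | { type: 'function-result'; name: string; result: unknown }
  | { type: 'text'; delta: string };

// Stand-in for the slow server-side function (~10 s in the real scenario).
async function slowFunction(): Promise<unknown> {
  return { temperature: 21 };
}

// Server side: announce the call *before* running the slow function,
// so the client can react immediately instead of waiting for the result.
async function* serverStream(): AsyncGenerator<StreamEvent> {
  yield { type: 'function-call-started', name: 'getWeather' };
  const result = await slowFunction();
  yield { type: 'function-result', name: 'getWeather', result };
  yield { type: 'text', delta: 'The weather is sunny.' };
}

// Client side: show a spinner on 'function-call-started',
// hide it once the result arrives.
async function consume(): Promise<string[]> {
  const log: string[] = [];
  for await (const ev of serverStream()) {
    if (ev.type === 'function-call-started') log.push('show spinner');
    if (ev.type === 'function-result') log.push('hide spinner');
    if (ev.type === 'text') log.push(ev.delta);
  }
  return log;
}
```

With today's experimental_StreamData, the first event cannot be delivered until the stream response is returned, which is exactly the gap being described.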

pivanov avatar Oct 31 '23 06:10 pivanov

Second this! Without this, the assistant will get tripped up quite easily.

For instance, I had this situation:

  1. There was an error during a function call
  2. The nested completion call in the server-side experimental_onFunctionCall callback got that error response and wrote a natural language response like "Sorry, there was an error"
  3. Only the "Sorry, there was an error" message got put into the message history
  4. The 'main' assistant only saw the "Sorry, there was an error" message, and not the details of the failed call/response
  5. Thus, without context, it thought the "Sorry, there was an error" message was not valid, and was not able to improve its call based on the given function call error response.

I have been trying to use experimental_StreamData to put the function call and response messages into the message object, but haven't been successful yet. For instance, I can merge the message and data objects in the component tree like so:

{[...messages, ...(data ?? [])].map((message, i) => (
  <div key={i}>...</div>
))}

but this still does not give the assistant access to it as part of the message history. Using setMessages in the client-side onFinish callback does not seem to work, and a useEffect dependent on the data object also does not work, because data changes with every streamed token.

I know there's support for a data attribute to the message object on the roadmap, but I am not sure that is the best way? Is there not a way we can simply send the function call and result messages back to the client, i.e. somehow insert it into the message history server-side?
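One way to frame the problem in my steps above: the failed function call and its error response both need to land in the message history that the next completion sees. Ignoring SDK specifics, the bookkeeping would look roughly like this sketch — the helper name is hypothetical, and the message shapes follow the OpenAI chat function-calling format:

```typescript
// Message shapes per the OpenAI chat function-calling format.
type Message =
  | { role: 'user' | 'system'; content: string }
  | { role: 'assistant'; content: string }
  | { role: 'assistant'; content: null; function_call: { name: string; arguments: string } }
  | { role: 'function'; name: string; content: string };

// Hypothetical helper: fold a function call and its result (which may be an
// error payload) into the history, so the model sees the full exchange and
// can adjust its next call instead of only seeing "Sorry, there was an error".
function appendFunctionExchange(
  history: Message[],
  call: { name: string; arguments: string },
  result: string, // e.g. '{"error": "timeout"}'
): Message[] {
  return [
    ...history,
    { role: 'assistant', content: null, function_call: call },
    { role: 'function', name: call.name, content: result },
  ];
}
```

If the SDK inserted these two messages into the client-visible history server-side, step 5 in my list above would not occur, because the assistant would see the failed call's details in context.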

verrannt avatar Nov 16 '23 11:11 verrannt

I understand the demand; it would work like the code interpreter, which brings back the output of the function call in addition to the response. There is something along these lines by @lgrammel in examples/next-openai. I haven't had time to study the implementation, but I believe there is something similar there.

tgonzales avatar Nov 16 '23 12:11 tgonzales