Have you verified whether the abort signal is actually functioning in the edge runtime?

wsxiaoys opened this issue 2 years ago • 25 comments

Based on my experiments so far, it appears that the AbortController doesn't function properly on Vercel's hosted edge runtime. This observation aligns with the issue discussed in detail on the following GitHub issue: https://github.com/vercel/next.js/issues/50364.
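
For context, here's a minimal sketch of the kind of route I'm testing with (hypothetical code of my own, not from the linked issue):

```ts
// Hypothetical pages-directory edge route: if cancellation worked, the
// abort listener would fire when the client disconnects mid-stream.
export const config = { runtime: "edge" };

export default async function handler(req: Request): Promise<Response> {
  req.signal.addEventListener("abort", () => {
    // Never logged on Vercel's hosted edge runtime in my tests.
    console.log("client aborted");
  });

  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    async pull(controller) {
      // Emit one chunk per second; only cancellation ends the stream.
      await new Promise((resolve) => setTimeout(resolve, 1000));
      controller.enqueue(encoder.encode("tick\n"));
    },
  });

  return new Response(stream);
}
```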

Update: I have done some testing using the ai-chatbot project deployed to Vercel with some logging. Tokens are not generated on the server, in the background, after cancellation.

wsxiaoys avatar Jun 16 '23 03:06 wsxiaoys


I am coming to the same conclusion.

I was very excited to see the release of this library today, particularly this documentation https://sdk.vercel.ai/docs/concepts/backpressure-and-cancellation.

Last month I was asked to add cancellation to a product I am working on for a client. I attempted to get this working with edge functions but could not receive the cancel callback on the server. I decided to see if I could get it working at all and was able to implement a version using Deno since it has a similar runtime API.

At this point, I sent a support request to Vercel, and about two weeks later it was confirmed that the upstream provider (Cloudflare or AWS, I assume) did not support the abort signal and that Vercel was aware of this limitation.

I moved our OpenAI streaming endpoints to a Fastify server running Node 20 on an AWS ECS cluster. This setup has allowed me to handle the cancellations properly. I can cancel my request to OpenAI, which saves me tokens and writes whatever tokens were received to the db with a "cancel" type.
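
The shape of that endpoint is roughly the following (a simplified sketch of my own, not the production code; route name and details are illustrative):

```ts
import Fastify from "fastify";
import { Readable } from "node:stream";

const app = Fastify();

app.post("/completions", async (request, reply) => {
  const controller = new AbortController();

  // Node emits 'close' on the raw request when the client disconnects,
  // which lets us abort the upstream request to OpenAI.
  request.raw.on("close", () => {
    if (!reply.raw.writableEnded) controller.abort();
  });

  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      stream: true,
      messages: [{ role: "user", content: "Write a blog post about React" }],
    }),
    signal: controller.signal,
  });

  // Bridge the web stream to a Node stream for Fastify (the cast papers
  // over the DOM vs node:stream/web typing mismatch).
  return reply.send(Readable.fromWeb(upstream.body as any));
});

app.listen({ port: 3000 });
```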

I did not expect the edge function to receive the AbortController signal, even though the examples use this as the cancellation mechanism. The example of using the pull callback in a ReadableStream was something I hadn't tried previously, so I decided to apply the approach to our codebase.

I found that the AIStream doesn't use the pull callback but instead has a TransformStream. I looked for pull and found it in the Hugging Face example. I converted my request.body to an iterator using a similar approach. This also doesn't seem to handle the cancellation.
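
For reference, this is roughly the pull-based shape I tried (my reconstruction, loosely following the Hugging Face example):

```ts
// `upstreamResponse` is the streaming fetch to the model API.
function streamFromUpstream(upstreamResponse: Response): ReadableStream<Uint8Array> {
  const reader = upstreamResponse.body!.getReader();

  return new ReadableStream<Uint8Array>({
    // pull only runs when the consumer asks for data, which is what gives
    // you back-pressure; once the client cancels, pull stops being called.
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) controller.close();
      else controller.enqueue(value);
    },
    // cancel should propagate the teardown to the upstream request.
    async cancel(reason) {
      await reader.cancel(reason);
    },
  });
}
```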

I am using the pages directory in our app, assuming that the examples only use the new app directory to highlight it rather than it being a requirement.

jensen avatar Jun 16 '23 06:06 jensen

> I moved our OpenAI streaming endpoints to a Fastify server running Node 20 on an AWS ECS cluster. This setup has allowed me to handle the cancellations properly. I can cancel my request to OpenAI, which saves me tokens and writes whatever tokens were received to the db with a "cancel" type.

@jensen were you able to confirm this behavior? Based on my experiments, it seems that sending a cancel signal to OpenAI does not actually reduce the token usage of the request...

related: https://github.com/openai/openai-node/issues/134

wsxiaoys avatar Jun 16 '23 06:06 wsxiaoys

There are a lot of moving pieces, but I have successfully cancelled the request using both Deno and Fastify with Node 20.

I have my own OpenAI API account with no traffic. I use it to confirm that a cancelled long completion uses the number of tokens I calculate with tiktoken, rather than the number it would have used had it finished. My prompt would be "Write a blog post about React". When I cancel it after a few sentences, the usage on the OpenAI dashboard matches, after a delay of roughly 10 minutes.
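
For concreteness, the comparison is roughly this (a sketch with illustrative strings; note that chat models also add a few tokens of per-message overhead):

```ts
import { encoding_for_model } from "tiktoken";

const enc = encoding_for_model("gpt-3.5-turbo");

const prompt = "Write a blog post about React";
const receivedBeforeCancel = "React has changed how we build interfaces."; // whatever streamed in

// Token counts to compare against the OpenAI usage dashboard.
const promptTokens = enc.encode(prompt).length;
const completionTokens = enc.encode(receivedBeforeCancel).length;
console.log({ promptTokens, completionTokens });

enc.free(); // tiktoken encoders hold WASM memory and must be freed
```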

jensen avatar Jun 16 '23 06:06 jensen

> There are a lot of moving pieces, but I have successfully cancelled the request using both Deno and Fastify with Node 20.
>
> I have my own OpenAI API account with no traffic. I use it to confirm that a cancelled long completion uses the number of tokens I calculate with tiktoken, rather than the number it would have used had it finished. My prompt would be "Write a blog post about React". When I cancel it after a few sentences, the usage on the OpenAI dashboard matches.

It's good to know that cancellation is indeed possible with OpenAI's API. Now the next steps lie on the Vercel / Next.js side...

wsxiaoys avatar Jun 16 '23 06:06 wsxiaoys

It looks like this was released today. https://github.com/vercel-labs/ai-chatbot

It has a stop-generating button (https://chat.vercel.ai/), which makes sense since the SDK exposes a stop API through the hooks. I guess my next step is to clone this, set up some logging, and deploy it to Vercel to double-check how it behaves on the server. Perhaps the development environment isn't a good one to test this in; it hasn't been in the past.

jensen avatar Jun 17 '23 05:06 jensen

> It looks like this was released today. https://github.com/vercel-labs/ai-chatbot
>
> It has a stop-generating button (https://chat.vercel.ai/), which makes sense since the SDK exposes a stop API through the hooks. I guess my next step is to clone this, set up some logging, and deploy it to Vercel to double-check how it behaves on the server.

Thank you for taking the time to verify this. However, based on the source code, I predict that it won't work, because the stop button simply calls AbortController.abort().
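
For reference, the mechanism boils down to roughly this on the client (a simplified sketch, not the hooks' actual code):

```ts
// abort() tears down the fetch on the client; whether the server notices
// depends on the runtime propagating the disconnect.
const controller = new AbortController();
const decoder = new TextDecoder();

async function startGeneration(messages: unknown[]): Promise<void> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
    signal: controller.signal,
  });

  const reader = res.body!.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(decoder.decode(value, { stream: true }));
  }
}

// Wired to the "stop generating" button:
function stopGeneration(): void {
  controller.abort();
}
```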

I'm eagerly anticipating the result.

wsxiaoys avatar Jun 17 '23 06:06 wsxiaoys

It seems like it stops new tokens from being displayed on the screen, but token production continues between the Vercel Edge and OpenAI, so you'll be charged for the full amount and incur rate limiting while many generations are still running in the background. High-priority fix, imo.

enricoros avatar Jun 19 '23 00:06 enricoros

I was able to spend some time testing this tonight. I deployed a version of the ai-chatbot application with some additional logging. I am confident the cancellation works as intended when using the edge runtime. Tokens are not generated on the server, in the background, after cancellation.

This is excellent news.

I haven't done as much testing as I want to for our production application, but my next step will be to test my streaming changes using our staging environment.

I likely won't get to this in the next few days since we have already shipped our cancellation feature using AWS. I will still want to move these endpoints back to Vercel in July if I can.

jensen avatar Jun 19 '23 07:06 jensen

Seems related:

https://github.com/vercel/edge-runtime/pull/396 https://github.com/vercel/next.js/pull/51330

Hey @jridgewell, it seems what you're working on is related to the issue discussed here?

wsxiaoys avatar Jun 19 '23 19:06 wsxiaoys

Hi! Yes, I'm working on getting proper streaming cancellation and back-pressure into Next.js. If you're using Next as your dev/production server, it's not currently possible to end the stream. Once https://github.com/vercel/next.js/pull/51330 is merged and released, this should be fixed.

jridgewell avatar Jun 20 '23 21:06 jridgewell

Hi all! We've merged https://github.com/vercel/next.js/pull/51594, which implements cancellation only. We'll work on getting back-pressure support after verifying its impact on Next's general streaming performance (Next is mainly for streaming React components, and we need to make sure that's not taking a hit). I don't think it's going to be an issue, but we just need some time to verify.

jridgewell avatar Jun 22 '23 01:06 jridgewell

@jridgewell Anything special devs need to do for using cancellation? And what is back-pressure, and what do we need to do to accommodate it?

I work on one of the popular open-source GPT UIs (https://github.com/enricoros/big-agi), and I'm sure devs like us appreciate your fix.

enricoros avatar Jun 22 '23 17:06 enricoros

> Anything special devs need to do for using cancellation?

The Next.js team hasn't released a new version yet (I'll ping them to see if they can do a canary release), but once they have, users just need to `npm install next@latest` (or `next@canary`).

> And what is back-pressure, and what do we need to do to accommodate it?

Back-pressure is explained in https://sdk.vercel.ai/docs/concepts/backpressure-and-cancellation. Essentially, it's the ability for the server to pause the stream because the client doesn't need more data yet. Next.js hasn't added support for it, but when they do, everyone will need to update their next dependency again.
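
To make that concrete, here is the contrast from that doc in miniature (hypothetical chunk source):

```ts
// A stand-in for the upstream source (e.g. tokens from a model API).
async function* upstreamChunks() {
  for (let i = 0; i < 1000; i++) {
    yield `chunk ${i}\n`;
  }
}

// Eager: start() drains the whole iterator immediately, buffering every
// chunk in the stream's internal queue even if the client reads slowly
// (or has already gone away).
function eagerStream(): ReadableStream<string> {
  return new ReadableStream({
    async start(controller) {
      for await (const chunk of upstreamChunks()) {
        controller.enqueue(chunk);
      }
      controller.close();
    },
  });
}

// Lazy: pull() fetches one chunk only when the consumer asks for one, so a
// slow or cancelled client stops driving the upstream; that is back-pressure.
function lazyStream(): ReadableStream<string> {
  const iterator = upstreamChunks();
  return new ReadableStream({
    async pull(controller) {
      const { done, value } = await iterator.next();
      if (done) controller.close();
      else controller.enqueue(value);
    },
  });
}
```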

jridgewell avatar Jun 22 '23 17:06 jridgewell

@jridgewell thanks for the explanation. Will try out the canary when available. I'm glad it doesn't require code changes (maybe some exception handling?). I was trying with AbortControllers and exceptions everywhere, but nothing worked for me wrt cancellation.

enricoros avatar Jun 22 '23 18:06 enricoros

v13.4.8-canary.0 just got released. If you update your project dependency, your dev server will support cancellation, and when you deploy that change your prod server should get it too.

jridgewell avatar Jun 22 '23 18:06 jridgewell

@jridgewell I tried canary.0 and .1, but somehow it is not working for me: the server continues to pull events from OpenAI and feed pieces down the ReadableStream controller, even after I close the client browser window.

When I close the socket to the server (physically closing the Chrome window of the edge function caller), this is what I see in the log of the dev server (next dev):

[screenshot: next dev server log]

And this is the code that prints the streaming events (the error is printed by the edge server):

[screenshot: the streaming loop code]

This is within the ReadableStream.

```ts
return new ReadableStream({
    start: (controller) => {
        // ...loop above...
    },
});
```

I must be doing something wrong.

enricoros avatar Jun 23 '23 05:06 enricoros

Maybe catch the AbortError on the server and expect it to happen by returning null or something? Like the client hooks do: https://github.com/vercel-labs/ai/blob/main/packages/core/react/use-completion.ts#L179

Haven't played around with it just yet, just watching this issue 👀

edit: nvm the above, just tried it and can't seem to catch that error

edit2: unfortunately it does not seem to abort the request to OpenAI as also stated above. The token usage reported on the OpenAI usage page is just too high for some aborted streams I just tested. It reports the usage as if I did not abort anything.

jvandenaardweg avatar Jun 23 '23 08:06 jvandenaardweg

Tried catching the AbortError, but it doesn't catch anything. Agreed, the token usage keeps skyrocketing, a sign that the request to the OpenAI servers keeps going...

enricoros avatar Jun 23 '23 15:06 enricoros

Hi @enricoros: I'm not sure where your code is coming from; can you provide a link? Just based on reading the screenshot, it looks like you are keeping the connection alive by eagerly pulling the data out of upstreamResponse.body with a for await (…) loop (this is discussed in the back-pressure and cancellation doc).

I'm assuming your eventParser is an SSE parser, but I'm not sure where you're sending the data after it's parsed. It's possible this could be fixed by switching to a TransformStream similar to https://github.com/vercel-labs/ai/blob/107e436e925f660ea9fd02ced726a02cb7831a25/packages/core/streams/ai-stream.ts#L31-L62

jridgewell avatar Jun 23 '23 15:06 jridgewell

Ok, I will try it and report back. Many of us building GPT UIs have implementations that predate vercel/ai, possibly with a code path that's not optimal.

Code below:

https://github.com/enricoros/big-agi/blob/main/pages/api/openai/stream-chat.ts#L112

enricoros avatar Jun 23 '23 15:06 enricoros

Reading your code, it's definitely possible to switch to a TransformStream:

  • Move (most of) the code out of the start handler; you can run it immediately before constructing the TransformStream
  • Delete the for await (…) code and replace it with a transform handler, as explained in the back-pressure and cancellation doc (see the sketch below)
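
A hedged sketch of that shape, loosely following the linked ai-stream.ts (the payload handling is illustrative):

```ts
import {
  createParser,
  type EventSourceParser,
  type ParsedEvent,
  type ReconnectInterval,
} from "eventsource-parser";

function sseTransform(): TransformStream<Uint8Array, Uint8Array> {
  const decoder = new TextDecoder();
  const encoder = new TextEncoder();
  let parser: EventSourceParser;

  return new TransformStream<Uint8Array, Uint8Array>({
    start(controller) {
      parser = createParser((event: ParsedEvent | ReconnectInterval) => {
        if (event.type === "event" && event.data !== "[DONE]") {
          controller.enqueue(encoder.encode(event.data));
        }
      });
    },
    // transform only runs when a chunk arrives AND the consumer is reading,
    // so back-pressure and cancellation propagate without extra code.
    transform(chunk) {
      parser.feed(decoder.decode(chunk, { stream: true }));
    },
  });
}

// Usage: return new Response(upstreamResponse.body!.pipeThrough(sseTransform()));
```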

jridgewell avatar Jun 23 '23 15:06 jridgewell

To hook into this conversation: my implementation looks like this: https://sdk.vercel.ai/docs/api-reference/openai-stream#chat-model-example, i.e. with all the methods the AI SDK provides, the latest Next.js canary, and the pages directory.

I also tried passing req.signal into the createChatCompletion options, as it is an option there, but it does not seem to help. The server just throws the "Error: aborted" mentioned by @enricoros, and then that's it. Also no more logging. But OpenAI is still sending data until it's done; I'm just not receiving it anymore (through the methods provided by the AI SDK).

Edit: Manually aborting using a timeout of a few seconds, with a new AbortController inside the Edge function and its signal passed into the options of createChatCompletion, does work. It cancels the stream and does not add token usage. Just to verify the signal is supported for that method.
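
For anyone wanting to reproduce that experiment, it looked roughly like this (a sketch; model, prompt, and timeout are illustrative):

```ts
import { Configuration, OpenAIApi } from "openai-edge";

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

export const config = { runtime: "edge" };

export default async function handler(req: Request): Promise<Response> {
  const controller = new AbortController();
  // Abort after a few seconds, independent of the client connection.
  setTimeout(() => controller.abort(), 5_000);

  const response = await openai.createChatCompletion(
    {
      model: "gpt-3.5-turbo",
      stream: true,
      messages: [{ role: "user", content: "Write a blog post about React" }],
    },
    { signal: controller.signal }
  );

  return new Response(response.body);
}
```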

jvandenaardweg avatar Jun 23 '23 17:06 jvandenaardweg

@jvandenaardweg: It seems the AbortSignal isn't actually hooked up to the Node response that Next.js receives. I've opened https://github.com/vercel/next.js/pull/51727 to address it.

And, I've discovered that the cancel handler in a ReadableStream/TransformStream isn't called either. That'll be fixed once https://github.com/vercel/edge-runtime/pull/428 is merged and we update the edge-runtime internal dependency.

jridgewell avatar Jun 23 '23 22:06 jridgewell

@jridgewell awesome! Appreciate the quick response on this! Looking forward to try it out 👍

Also, could you re-open this issue until there's verification that it works?

jvandenaardweg avatar Jun 24 '23 09:06 jvandenaardweg

Hi @jridgewell, your commit also solves this problem: https://github.com/vercel/next.js/issues/50804

I did not detect the same problem in v13.4.8-canary.5.

StringKe avatar Jun 27 '23 02:06 StringKe

> Reading your code, it's definitely possible to switch to a TransformStream:
>
> • Move (most of) the code out of the start handler; you can run it immediately before constructing the TransformStream
> • Delete the for await (…) code and replace it with a transform handler, as explained in the back-pressure and cancellation doc

Thanks for your help, @jridgewell. Our app is now ported to use backpressure and cancellation, as you suggested: https://github.com/enricoros/big-agi/blob/490f8bdac30267662bee6b853ec8a3a303d2ab13/pages/api/llms/stream.ts#L141

I looked at the vercel-labs/ai implementation and adapted ours. Due to a couple of changes to the transformation functions, I couldn't use vercel-labs/ai as-is, but it's been an enormous help.

Test results (for when the client closes the connection):

  • without the canary (13.4.7), the TransformStream keeps going
  • with 13.4.8-canary.8, the TransformStream stops, BUT data is still sent from the node process to the OpenAI servers

Great progress - thanks!

enricoros avatar Jun 29 '23 05:06 enricoros

13.4.8 is out now, which fixes both issues from https://github.com/vercel-labs/ai/issues/90#issuecomment-1605084732.

> • with 13.4.8-canary.8, the TransformStream stops, BUT data is still sent from the node process to the OpenAI servers

With https://github.com/vercel/next.js/pull/51944 (released in v13.4.8-canary.12) and OpenAIStream, this should be fixed. The transform stream will receive the cancel() event from Next's server, and that should be propagated to the fetch you're maintaining to OpenAI's server.
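
In other words, as long as the outgoing stream is derived from the upstream body via pipeThrough, cancellation flows backwards through the pipe; a minimal sketch (request details elided):

```ts
export default async function handler(req: Request): Promise<Response> {
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    // headers and body omitted
  });

  // Identity transform for illustration; OpenAIStream builds a real one.
  const outgoing = upstream.body!.pipeThrough(
    new TransformStream<Uint8Array, Uint8Array>()
  );

  // When Next's server cancels `outgoing` (the client aborted), the
  // cancellation propagates back through the pipe to upstream.body,
  // closing the HTTP request to OpenAI.
  return new Response(outgoing);
}
```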

jridgewell avatar Jul 03 '23 17:07 jridgewell

Thank you for all of your work on this @jridgewell. This closes a long-standing support ticket I opened in April and allows me to give my client some options.

jensen avatar Jul 03 '23 18:07 jensen

@jridgewell: tested with 13.4.8 and it works!

In our implementation (which is inspired by AIStream, using a TransformStream) I can see the connection stopping from the Node process to the OpenAI servers! GREAT!

There's still an error message on the console (error uncaughtException: Error: aborted), and maybe others won't see that, but apart from the scare effect, all the new changes seem to be working well! Our abort on the (browser) client side stops the TransformStream on the (edge) server side, and the fetch to the OpenAI servers stops transmitting bytes too!

Well done @jridgewell!

enricoros avatar Jul 03 '23 19:07 enricoros

Thanks @jridgewell, confirmed it works! The token usage reported on the OpenAI website matches what you would expect when you cancel the stream.

One future improvement could be to catch the abort in the Edge Function, so we don't have an uncaught error, and to allow handling the abort, if that's even possible.

The abort error:

```
error uncaughtException: Error: aborted
  at connResetException (node:internal/errors:717:14)
  at abortIncoming (node:_http_server:754:17)
  at socketOnClose (node:_http_server:748:3)
  at Socket.emit (node:events:525:35)
  at Socket.emit (node:domain:489:12)
  at TCP.<anonymous> (node:net:322:12)
  at TCP.callbackTrampoline (node:internal/async_hooks:130:17) {
  digest: undefined
}
```

In my use case I need to keep track of how many tokens are used. I already do this on start (onStart) for the prompt and on completion (onCompletion) for the generated output tokens. So handling the abort would allow me to report token usage at the moment the stream was aborted.
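
If handling the abort becomes possible, the shape I have in mind is roughly this wrapper (purely hypothetical, not an SDK API):

```ts
// Accumulate the text that actually streamed out, and flush it from
// cancel(), which (after the fixes above) fires on client abort.
function withUsageTracking(
  upstream: ReadableStream<Uint8Array>,
  onUsage: (text: string, aborted: boolean) => void
): ReadableStream<Uint8Array> {
  const decoder = new TextDecoder();
  const reader = upstream.getReader();
  let received = "";

  return new ReadableStream<Uint8Array>({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) {
        onUsage(received, false); // completed normally
        controller.close();
      } else {
        received += decoder.decode(value, { stream: true });
        controller.enqueue(value);
      }
    },
    async cancel(reason) {
      onUsage(received, true); // aborted: report the partial output
      await reader.cancel(reason);
    },
  });
}
```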

But I think that would fit better in a new issue here on GitHub.

Many thanks all!

jvandenaardweg avatar Jul 04 '23 03:07 jvandenaardweg