Unable to Stream
Description
Versions:
- "next": "14.2.5"
- "ai": "3.2.41"

Using the App Router.

Route handler (`/api/chat`):
```ts
import { streamText, convertToCoreMessages } from "ai";

// model, systemContext, and the tool definitions are defined elsewhere.
export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: model,
    system: systemContext,
    messages: convertToCoreMessages(messages),
    tools: {
      getKnowledgeSummary: GetKnowledgeSummaryTool,
      checkKnowledge: CheckKnowledgeTool,
    },
  });

  return result.toDataStreamResponse();
}
```
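For reference, the Vercel troubleshooting guide linked below mostly comes down to route-segment configuration. A minimal sketch of those flags (the file location `app/api/chat/route.ts` and the specific values are assumptions, not a confirmed fix):

```typescript
// Route segment config (Next.js App Router), exported from the route file.
// Force the route to be rendered dynamically so the streamed response is
// never statically cached, and raise the function timeout so longer
// generations are not cut off mid-stream.
export const dynamic = "force-dynamic";
export const maxDuration = 30; // seconds; Vercel function timeout
```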
Client component:
```tsx
"use client";

import { useChat } from "ai/react";
import { unstable_noStore as noStore } from "next/cache";

export function ChatWindow() {
  noStore();
  const { messages, input, setInput, handleInputChange, handleSubmit, isLoading } = useChat({
    maxToolRoundtrips: 5,
  });
  // ... render messages and the input form
}
```
This code streams successfully locally, but not once deployed to Vercel. All routes and components in this application are dynamic and follow the suggestions from https://sdk.vercel.ai/docs/troubleshooting/common-issues/streaming-not-working-on-vercel
Any idea what I'm missing?
I updated `ai` from 2.2.37 to 3.2.37, and now Hugging Face is no longer working: I get a 405 error when the stream generated by `HuggingFaceStream()` is returned via `StreamingTextResponse()`.
It works locally but fails when deployed on Vercel.
https://github.com/vercel/ai/issues/2485
I'm having a similar issue, but with the AI SDK and the Next.js v14 App Router deployed on AWS Lambda (using sst.dev as the framework). Streaming (`streamUI()`) works in dev locally, but there is no streaming when deployed to production.
Was anyone able to resolve this issue? I hit the same thing: a 405 when deployed to Vercel, but locally it seems to work fine.
Any updates on this? I'm currently running into this myself.
@johnpolacek-veg This is a very old issue. What AI SDK versions are you using? Can you provide a reproduction with current code? What is the environment where this does not work?
@lgrammel Good news: the issue was the size of our request payload and our CloudFront rules. Thanks for following up!
For future folks who find this: there is a default 8 KB limit on API request payloads if you are on AWS with CloudFront. It is fairly easy to hit this limit if, for example, you are sending entire chat thread histories in an API request. You have to set a custom rule to get around it.
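Given that, one client-side workaround is to cap the payload before it ever reaches CloudFront by dropping the oldest messages. A hedged sketch (the `trimToPayloadLimit` helper and the 8 KB budget are assumptions for illustration, not part of the AI SDK):

```typescript
type ChatMessage = { role: string; content: string };

// Drop the oldest messages until the serialized request body fits under
// a byte budget (CloudFront's default request payload limit is ~8 KB).
// Always keeps at least the most recent message.
function trimToPayloadLimit(
  messages: ChatMessage[],
  maxBytes = 8_000,
): ChatMessage[] {
  const trimmed = [...messages];
  while (
    trimmed.length > 1 &&
    Buffer.byteLength(JSON.stringify({ messages: trimmed }), "utf8") > maxBytes
  ) {
    trimmed.shift(); // remove the oldest message first
  }
  return trimmed;
}
```

With `useChat`, a helper like this could be wired in wherever the request body is built (for example via `experimental_prepareRequestBody`, if your SDK version supports it); alternatively, raise the limit with a custom CloudFront rule as described above.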