Getting function invocation timeout in serverless function streaming
I'm trying to stream responses from serverless functions with the code below. It works fine in the local env (Node.js server), but I'm getting Function Invocation timeouts when I test it in a Vercel deployment. Is there something wrong with the code?
Backend code:
import type { VercelRequest, VercelResponse } from '@vercel/node';
import { IFilters } from '@/types/requestTypes';
import { StreamingTextResponse, LangChainStream, Message, streamToResponse } from 'ai';
import { CallbackManager } from 'langchain/callbacks';
import { ChatOpenAI } from 'langchain/chat_models/openai';
import {
  AIChatMessage,
  BaseChatMessage,
  HumanChatMessage,
} from 'langchain/schema';

export default async function handler(
  req: VercelRequest,
  res: VercelResponse,
) {
  console.log('API Called time ', Date.now());
  const { messages, filters } = JSON.parse(req.body);
  console.log('Request JSON ', req.body);
  console.log('messages', messages);
  console.log('Filters in request ', filters);
  const { stream, handlers } = LangChainStream({
    onStart: async () => {
      console.log('Stream Start time ', Date.now());
    },
    onToken: async (token: string) => {
      console.log(token);
    },
    onCompletion: async (completion: string) => {
      console.log('Stream End time ', Date.now());
    },
  });
  const llm = new ChatOpenAI({
    streaming: true,
    callbackManager: CallbackManager.fromHandlers(handlers),
  });
  llm.call([new HumanChatMessage(messages[0].content)]).catch(console.error);
  streamToResponse(stream, res);
}
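For context on what the last line does: streamToResponse forwards a web ReadableStream to the Node response object. A minimal sketch of that behavior (an assumption — the real 'ai' helper also sets status and headers and handles more cases; NodeLikeResponse is a hypothetical type used here for illustration):

```typescript
// Hypothetical simplification of what streamToResponse does: read each chunk
// from a web ReadableStream and forward it to a Node-style response as it
// arrives, then end the response once the stream closes.
type NodeLikeResponse = {
  write(chunk: string): void;
  end(): void;
};

async function streamToRes(
  stream: ReadableStream<string>,
  res: NodeLikeResponse,
): Promise<void> {
  const reader = stream.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break; // stream closed: we can finish the response
    res.write(value);
  }
  res.end();
}
```

Note that the loop only exits when the stream reports `done` — which is exactly why a stream that never closes keeps the serverless function alive until the platform timeout.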
Frontend code:
const {
messages,
input: aiSearchInput,
handleInputChange,
handleSubmit,
} = useChat({
api: '/api/streamTest',
body: {
// question: aiSearchInput,
filters: {
tickers: selectedStocksList,
sources: [EARNING_TRANSCRIPT_REPORT_TYPE],
timeframe: selectedQuarters,
},
},
  onResponse: () => {
    // streaming started
    streamStarted = Date.now();
    console.log('API latency:', streamStarted - apiHitting, 'milliseconds');
  },
  onFinish: () => {
    // streaming ended
    streamEnded = Date.now();
    console.log('Stream latency:', streamEnded - streamStarted, 'milliseconds');
    console.log('Total latency:', streamEnded - apiHitting, 'milliseconds');
    console.log('Stream End:', streamEnded);
  },
});
Can we have separate documentation for serverless functions too?
@Sahas can you
- Check how long your function takes to run
- Check how long serverless functions can run on the free tier (normally 10s)
- Check how long serverless functions can run on the paid tier (normally 60s)
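If the function simply needs more time on a paid plan, the limit can be raised per function in vercel.json. A sketch, assuming the route lives at api/streamTest.ts (the file path and the exact maxDuration ceiling depend on your project and plan):

```json
{
  "functions": {
    "api/streamTest.ts": {
      "maxDuration": 60
    }
  }
}
```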
@peterokwara I've tried calling the same function with and without edge.
- With edge, the stream started in 2s & stream ended in 20s.
- With serverless, I'm not getting any response because the function invocation times out.
I'm guessing this is because of #97 ("Stream never closes with LangChainStream using postman"). Can you try one of the workarounds in that issue and see if that solves this problem? (You could also check in your dev console if the request is stuck open, even after new messages stop arriving).
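To see why a never-closing stream would manifest as an invocation timeout, here is a minimal, self-contained illustration (not the SDK's code, just the pattern): a consumer of a ReadableStream whose controller never calls close() will await read() forever, so the serverless function stays open until the platform kills it.

```typescript
// Build a stream that emits one chunk; optionally close it.
function makeStream(close: boolean): ReadableStream<string> {
  return new ReadableStream<string>({
    start(controller) {
      controller.enqueue('hello');
      if (close) controller.close(); // without this, consumers wait forever
    },
  });
}

// Drain a stream to a string; only returns once the stream closes.
async function readAll(stream: ReadableStream<string>): Promise<string> {
  const reader = stream.getReader();
  let out = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    out += value ?? '';
  }
  return out;
}
```

With `makeStream(true)`, `readAll` resolves immediately; with `makeStream(false)`, it never resolves — the same symptom as a handler whose LangChain callbacks never close the stream.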
Will check, but any idea why it impacts only serverless functions and not edge functions?
@jaredpalmer / @shuding can you look into this issue please
any idea why it impacts only serverless functions and not edge function?
No, but I'm less familiar with the edge stuff. I thought perhaps because of the different limits and runtimes, the above issue could possibly express itself this way.
I think there are two issues here:
- Streaming langchain is broken for certain models (see https://github.com/vercel-labs/ai/issues/205)
- You are encountering a timeout. On the hobby tier of Vercel, serverless functions time out after only 10 seconds.
Closing as a duplicate; please reply if you think I missed something.
@MaxLeiter
- Why is it breaking only for serverless functions and not edge functions?
- I'm using a Vercel Pro account, so the timeout is 60s.
I've used this code
https://github.com/e-roy/openai-functions-with-langchain/blob/main/src/app/api/news-langchain/route.ts
But then I removed
export const runtime = "edge"; (I think this means I am not using edge)
and I don't get the same issues.
I am using Vercel Pro, with gpt-3.5-turbo-0613.