
Getting function invocation timeout in serverless function streaming

Open · Sahas opened this issue 2 years ago

I'm trying to stream responses from serverless functions with the code below. It works fine in my local environment (Node.js server), but I get Function Invocation Timeout errors when I test the Vercel deployment. Is there something wrong with the code?

Backend code:

import type { VercelRequest, VercelResponse } from '@vercel/node';
import { LangChainStream, streamToResponse } from 'ai';
import { CallbackManager } from 'langchain/callbacks';
import { ChatOpenAI } from 'langchain/chat_models/openai';
import { HumanChatMessage } from 'langchain/schema';


export default async function handler(
  req: VercelRequest,
  res: VercelResponse,
) {
  console.log('API called at', Date.now());
  const { messages, filters } = JSON.parse(req.body);
  console.log('Request JSON:', req.body);
  console.log('Messages:', messages);
  console.log('Filters in request:', filters);
  const { stream, handlers } = LangChainStream({
    onStart: async () => {
      console.log('Stream Start time ', Date.now());
    },
    onToken: async (token: string) => {
      console.log(token);
    },
    onCompletion: async (completion: string) => {
      console.log('Stream End time ', Date.now());
    },
  });

  const llm = new ChatOpenAI({
    streaming: true,
    callbackManager: CallbackManager.fromHandlers(handlers),
  });
  // Fire the model call without awaiting so the stream can be piped immediately.
  llm.call([new HumanChatMessage(messages[0].content)]).catch(console.error);

  // Pipe the web ReadableStream into the Node.js response.
  streamToResponse(stream, res);
}

Frontend code:

const {
    messages,
    input: aiSearchInput,
    handleInputChange,
    handleSubmit,
  } = useChat({
    api: '/api/streamTest',
    body: {
      // question: aiSearchInput,
      filters: {
        tickers: selectedStocksList,
        sources: [EARNING_TRANSCRIPT_REPORT_TYPE],
        timeframe: selectedQuarters,
      },
    },
    onResponse: () => {
      // streaming started
      steamStarted = Date.now();
      console.log('API latency:', steamStarted - apiHitting, 'milliseconds');
    },
    onFinish: () => {
      // streaming ended
      steamEnded = Date.now();
      console.log('Stream latency:', steamEnded - steamStarted, 'milliseconds');
      console.log('Total latency:', steamEnded - apiHitting, 'milliseconds');
      console.log('Stream end:', steamEnded);
    },
  });
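
For completeness: steamStarted, steamEnded, and apiHitting aren't declared in this snippet; presumably something like the following lives alongside the hook, with apiHitting set right before the request is fired (hypothetical declarations using the snippet's own names):

// Hypothetical declarations for the timing variables used in the callbacks;
// apiHitting would be set just before the request is sent.
let apiHitting = 0;
let steamStarted = 0;
let steamEnded = 0;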

Sahas avatar Jun 20 '23 12:06 Sahas

Can we have separate documentation for serverless functions too?

Sahas avatar Jun 20 '23 15:06 Sahas

@Sahas can you

  • Check how long your function takes to run
  • Check how long serverless functions can run on the free tier (normally 10s)
  • Check how long serverless functions can run on the paid tier (normally 60s; see the vercel.json sketch after this list for raising the per-function limit)
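
For reference, the per-function limit can be configured in vercel.json, roughly like the sketch below (the file path matches this thread's route and is an assumption; the ceiling depends on your plan):

{
  "functions": {
    "api/streamTest.ts": {
      "maxDuration": 60
    }
  }
}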

peterokwara avatar Jun 21 '23 12:06 peterokwara

@peterokwara I've tried calling the same function with and without edge.

  1. With edge (the opt-in is sketched below), the stream started in 2s and ended in 20s.
  2. With serverless, I'm not getting any response because the function invocation times out.
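
For reference, switching the route to the Edge runtime presumably looked something like this minimal sketch (assuming a Pages Router route at pages/api/streamTest.ts; the handler takes a web Request and returns a StreamingTextResponse instead of writing to res):

import { StreamingTextResponse, LangChainStream } from 'ai';
import { CallbackManager } from 'langchain/callbacks';
import { ChatOpenAI } from 'langchain/chat_models/openai';
import { HumanChatMessage } from 'langchain/schema';

// Opt this Pages Router API route into the Edge runtime.
export const config = { runtime: 'edge' };

export default async function handler(req: Request) {
  const { messages } = await req.json();
  const { stream, handlers } = LangChainStream();

  const llm = new ChatOpenAI({
    streaming: true,
    callbackManager: CallbackManager.fromHandlers(handlers),
  });

  // Fire the call without awaiting so the stream can be returned right away.
  llm.call([new HumanChatMessage(messages[0].content)]).catch(console.error);

  return new StreamingTextResponse(stream);
}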

Sahas avatar Jun 22 '23 05:06 Sahas

I'm guessing this is because of #97 ("Stream never closes with LangChainStream using postman"). Can you try one of the workarounds in that issue (one is sketched below) and see if that solves this problem? (You could also check in your dev console whether the request is stuck open even after new messages stop arriving.)
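
One workaround along those lines is to close the stream yourself once the call settles, since llm.call() used outside a chain may never trigger the handler that closes it. An untested sketch (handler names may differ between ai versions):

// Untested sketch: close the stream manually when the call settles, so
// streamToResponse can finish instead of hanging until the platform timeout.
llm
  .call([new HumanChatMessage(messages[0].content)])
  .catch(console.error)
  .finally(() => handlers.handleChainEnd?.());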

gadicc avatar Jun 22 '23 11:06 gadicc

Will check, but any idea why it impacts only serverless functions and not edge functions?

Sahas avatar Jun 22 '23 13:06 Sahas

@jaredpalmer / @shuding can you look into this issue please?

Sahas avatar Jun 22 '23 14:06 Sahas

any idea why it impacts only serverless functions and not edge function?

No, but I'm less familiar with the edge stuff. I thought that, because of the different limits and runtimes, the above issue could express itself this way.

gadicc avatar Jun 22 '23 16:06 gadicc

I think there are two issues here:

  1. Streaming langchain is broken for certain models (see https://github.com/vercel-labs/ai/issues/205)
  2. You are encountering a timeout. On the hobby tier of Vercel, serverless functions time out after only 10 seconds.

Closing as a duplicate; please reply if you think I missed something.

MaxLeiter avatar Jun 22 '23 23:06 MaxLeiter

@MaxLeiter

  1. Why is it breaking only for serverless functions and not edge functions?
  2. I'm using a Vercel Pro account, so the timeout is 60s.

Sahas avatar Jun 23 '23 05:06 Sahas

I've used this code:

https://github.com/e-roy/openai-functions-with-langchain/blob/main/src/app/api/news-langchain/route.ts

But then I removed

export const runtime = "edge";

which I think means I am not using edge. And I don't get the same issues. I am using Vercel Pro with gpt-3.5-turbo-0613.
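
For context, that export is Next.js route segment config; a minimal sketch of what it toggles:

// app/api/news-langchain/route.ts (App Router)
// With this export, the route runs on Vercel's Edge runtime:
export const runtime = 'edge';
// Removing it (or setting 'nodejs', the default) deploys the route as a
// regular Node.js serverless function instead, subject to the serverless
// execution limits discussed above.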

peterokwara avatar Jun 24 '23 21:06 peterokwara