
Support for OpenAI Function Calling and streaming?

Open mattlgroff opened this issue 1 year ago • 9 comments

Are there any examples of using this with the new Function Calling feature in OpenAI's Chat Completion API?

Or is more work needed to support it?

Thanks!

mattlgroff avatar Jun 15 '23 21:06 mattlgroff

Here is my code:

/* eslint-disable turbo/no-undeclared-env-vars */
import { StreamingTextResponse, OpenAIStream } from 'ai';

import { zValidateEnv } from '@/utils';
import { openAISchema, pineconeSchema } from '@/schemas';
import { Configuration, OpenAIApi } from 'openai';

interface FunctionCall {
  name: string;
  arguments: string;
}

interface Message {
  role: string;
  content: string;
  function_call?: FunctionCall;
}

interface Function {
  name: string;
  description: string;
  parameters: object;
}

const functionDescription: Function = {
  name: 'get_current_weather',
  description: 'Get the current weather in a given location',
  parameters: {
    type: 'object',
    properties: {
      location: {
        type: 'string',
        description: 'The city and state, e.g. San Francisco, CA',
      },
      unit: {
        type: 'string',
        enum: ['celsius', 'fahrenheit'],
      },
    },
    required: ['location'],
  },
};

const { OPENAI_API_KEY } = zValidateEnv(openAISchema);

const configuration = new Configuration({
  apiKey: OPENAI_API_KEY,
});

// fake response
function get_current_weather(args: { location: string; unit: string }) {
  const weather_info = {
    location: args.location,
    temperature: '72',
    unit: args?.unit || 'fahrenheit',
    forecast: ['sunny', 'windy'],
  };
  return JSON.stringify(weather_info);
}

export async function POST(req: Request) {
  const { messages } = await req.json();

  const initialMessage = {
    role: 'user' as const,
    content: messages[messages.length - 1].content as string,
  };

  const model = new OpenAIApi(configuration);
  const response = await model.createChatCompletion({
    model: 'gpt-4-0613',
    messages: [initialMessage],
    functions: [functionDescription],
    function_call: 'auto',
  });

  const message = response?.data?.choices?.[0]?.message;

  if (message?.function_call && message.function_call.arguments) {
    const functionResponse = get_current_weather(
      JSON.parse(message.function_call.arguments)
    );

    const response = await model.createChatCompletion({
      model: 'gpt-4-0613',
      stream: true,
      messages: [
        initialMessage,
        message,
        {
          role: 'function',
          name: message.function_call.name,
          content: functionResponse,
        },
      ],
    });
    console.log('second ', response.data.choices[0].message);
    const stream = OpenAIStream(response); // => at this point we get errors, described below
  
    return new StreamingTextResponse(stream);
  }

  // No function call was requested: return the plain completion content
  return new Response(message?.content ?? '');
}

The response from the openai package is an `AxiosResponse<CreateChatCompletionResponse, any>`, but `OpenAIStream` from `ai` expects a fetch `Response`, so we get errors and streaming does not work right now. I think it would work fine if I called the REST API directly instead of using the openai package.
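Calling the REST endpoint directly, as suggested above, would sidestep the type mismatch: `fetch` returns a WHATWG `Response`, which is the type `OpenAIStream` expects. A minimal sketch of such a helper (the function name and request shape here are illustrative, not part of either library):

```typescript
// Hypothetical helper: call OpenAI's chat completions REST endpoint directly,
// so the returned fetch Response can be handed to OpenAIStream.
interface ChatRequest {
  model: string;
  messages: unknown[];
  functions?: unknown[];
  function_call?: 'auto' | 'none' | { name: string };
}

async function createChatCompletionStream(
  apiKey: string,
  body: ChatRequest
): Promise<Response> {
  // Force stream: true so the API returns server-sent events
  return fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ ...body, stream: true }),
  });
}
```

The resulting `Response` could then be passed straight to `OpenAIStream` without the Axios wrapper getting in the way.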

Note: I see that Vercel uses openai-edge in some examples, but that library doesn't have an option to pass `functions` and `function_call` as parameters.


CarlosZiegler avatar Jun 16 '23 01:06 CarlosZiegler

Here is an example with functions. Please don't mind the design, I haven't changed it :) https://github.com/CarlosZiegler/next-ai

Here is the deployed version : https://next-ai-nine.vercel.app/

Tomorrow I will provide more design and a readme. Thanks @Vercel, this lib helps a lot :)

CarlosZiegler avatar Jun 16 '23 02:06 CarlosZiegler

Tagging https://github.com/dan-kwiat/openai-edge/pull/7 since Vercel uses the openai-edge package. Hopefully this gets implemented soon 😍 (I mean... functions + AI 🥹)

heymartinadams avatar Jun 18 '23 20:06 heymartinadams

As a first step, I've added a PR that allows for Edge Runtime route handlers to stream function-calling responses from OpenAI. #154

Zakinator123 avatar Jun 19 '23 02:06 Zakinator123

It would be nice to be able to return to the client not only the response message but also the result of the function call. This would allow for a variety of visual representations.

yutakobayashidev avatar Jun 19 '23 15:06 yutakobayashidev

I've taken a crack at the problem here: #178. @yutakobayashidev That PR does exactly what you asked for.

Zakinator123 avatar Jun 20 '23 22:06 Zakinator123

@Zakinator123 This is great! For example, would it be possible to access the weather API when get_current_weather is called and implement our own weather UI based on the response? If this could be done, I think it would enable a variety of experiences beyond chatting.

yutakobayashidev avatar Jun 21 '23 17:06 yutakobayashidev

> @Zakinator123 This is great! For example, would it be possible to access the weather API when get_current_weather is called and implement our own weather UI based on the response? If this could be done, I think it would enable a variety of experiences beyond chatting.

Yes, that type of experience is exactly what my PR enables.

Zakinator123 avatar Jun 21 '23 17:06 Zakinator123

Wow, can you provide an example once it's merged? That would be great!

CarlosZiegler avatar Jun 21 '23 20:06 CarlosZiegler

Closing as initial support was implemented in #311. Please create new issues for specific needs you encounter and we can address them individually.

You can view the docs here: https://sdk.vercel.ai/docs/guides/functions
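The server-side dispatch step that a function-calling handler performs (look up the requested function by name, parse its JSON arguments, run it, and return the result for the follow-up completion) can be sketched as a small registry. The names and types below are illustrative, not the SDK's actual API:

```typescript
// Hypothetical sketch of function-call dispatch: a registry of named handlers,
// each taking parsed JSON arguments and returning a string result.
type FunctionHandler = (args: Record<string, unknown>) => string;

const handlers: Record<string, FunctionHandler> = {
  // Fake weather handler, mirroring get_current_weather from the thread above
  get_current_weather: (args) =>
    JSON.stringify({
      location: args.location,
      temperature: '72',
      unit: (args.unit as string) ?? 'fahrenheit',
      forecast: ['sunny', 'windy'],
    }),
};

interface FunctionCall {
  name: string;
  arguments: string; // JSON-encoded arguments, as returned by the model
}

function dispatchFunctionCall(call: FunctionCall): string {
  const handler = handlers[call.name];
  if (!handler) throw new Error(`Unknown function: ${call.name}`);
  return handler(JSON.parse(call.arguments));
}
```

The string returned here is what gets sent back to the model as a `role: 'function'` message so it can compose the final streamed answer.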

MaxLeiter avatar Jul 15 '23 02:07 MaxLeiter