
Function Calling

rcarmo opened this issue

FYI: Function calling is now available on gpt-4-0613 and gpt-3.5-turbo-0613, which should make tools a lot more reliable:

https://openai.com/blog/function-calling-and-other-api-updates
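
For anyone who wants to try it, here's roughly what the new API surface looks like with the pre-1.0 openai Python client (get_current_weather is the example schema from OpenAI's announcement, not something in SuperAGI):

import openai

# Sketch of the new `functions` parameter (openai<1.0 Python client).
# `get_current_weather` is the example from OpenAI's announcement.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    functions=[{
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state, e.g. Boston, MA"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }],
    function_call="auto",  # let the model decide whether to call the function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model wants the function run; arguments arrive as a JSON string.
    print(message["function_call"]["name"], message["function_call"]["arguments"])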

rcarmo avatar Jun 13 '23 20:06 rcarmo

Came here to create the exact same issue 😄

Here's the api doc: https://platform.openai.com/docs/guides/gpt/function-calling

rico-ocepek avatar Jun 14 '23 06:06 rico-ocepek

Yes, we have started testing. One advantage I can see is the structured JSON output OpenAI returns after it processes the function's response; it could help shrink SuperAGI's base prompt.

{
  "id": "chatcmpl-123",
  ...
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "The weather in Boston is currently sunny with a temperature of 22 degrees Celsius.",
    },
    "finish_reason": "stop"
  }]
}

At the same time, this is OpenAI-specific, and SuperAGI has to support open LLMs too. But yes, we are figuring out the architecture to integrate functions and make them available. You can also take a stab at implementing this.

neelayan7 avatar Jun 15 '23 07:06 neelayan7

@neelayan7 I'd like to give it a go if you don't mind

iskandarreza avatar Jun 15 '23 08:06 iskandarreza

Absolutely. Looking forward!

neelayan7 avatar Jun 15 '23 09:06 neelayan7

Okay, it's still a work in progress. I'm encountering some obstacles and could use some help -> https://github.com/iskandarreza/SuperAGI/tree/openai-api-use-function-call

Basically, I'm not sure how to accurately count the function tokens. It does appear that the functions defined in the functions array count towards token usage:

superagi-celery-1           | [2023-06-15 20:27:53,579: WARNING/ForkPoolWorker-7] ==================function_tokens======================
superagi-celery-1           | [2023-06-15 20:27:53,581: WARNING/ForkPoolWorker-7] 198
superagi-celery-1           | [2023-06-15 20:27:54,594: INFO/ForkPoolWorker-7] error_code=context_length_exceeded error_message="This model's maximum context length is 4097 tokens. However, you requested 4156 tokens (136 in the messages, 125 in the functions, and 3895 in the completion). Please reduce the length of the messages, functions, or completion." error_param=messages error_type=invalid_request_error message='OpenAI API error received' stream_error=False

That's with two test functions that add decimal or hexadecimal numbers together, just to see what happens. I did a lazy thing and counted the tokens with:

import json

from superagi.helper.token_counter import TokenCounter  # SuperAGI's token counter helper

# Rough estimate: tokenize the raw JSON dump of the functions array
function_tokens = TokenCounter.count_text_tokens(json.dumps(test_functions))
print('==================function_tokens======================')
print(function_tokens)

As you can see, it's not an accurate way to count: it overestimates (which isn't necessarily a bad thing), but I feel like there's a better way to do it. Seeking input from others.

CC: @neelayan7

iskandarreza avatar Jun 15 '23 20:06 iskandarreza

Any timeline for this 👀

iAbhinav avatar Jul 12 '23 21:07 iAbhinav

I think the code below can calculate the real token count with function calling, though it seems like an ad hoc solution. I'd like to give it a try; I can't wait to use function calling in this app.

refs: https://gist.github.com/CGamesPlay/dd4f108f27e2eec145eedf5c717318f5
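
My reading of that gist is that OpenAI injects the functions array into the system message as a TypeScript-like namespace block, so you can approximate the real cost by rendering that text yourself and running it through tiktoken. A rough sketch along those lines (the exact rendering format is undocumented and may drift, so treat the result as an estimate):

import tiktoken

def count_function_tokens(functions, model="gpt-3.5-turbo-0613"):
    # Approximate the tokens a `functions` array consumes, assuming
    # OpenAI renders it into the system message as a TypeScript-like
    # namespace (per the gist above). That format is not officially
    # documented, so this is an estimate, not an exact count.
    enc = tiktoken.encoding_for_model(model)
    lines = ["namespace functions {", ""]
    for fn in functions:
        if fn.get("description"):
            lines.append(f"// {fn['description']}")
        props = fn.get("parameters", {}).get("properties", {})
        required = set(fn.get("parameters", {}).get("required", []))
        fields = " ".join(
            f"{name}{'' if name in required else '?'}: {spec.get('type', 'any')},"
            for name, spec in props.items()
        )
        lines.append(f"type {fn['name']} = (_: {{ {fields} }}) => any;")
        lines.append("")
    lines.append("} // namespace functions")
    return len(enc.encode("\n".join(lines)))

Since the injected format is more compact than the raw JSON schema, this should land closer to the per-function count the API reports than tokenizing json.dumps(functions), but it's still an approximation.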

trknhr avatar Aug 24 '23 15:08 trknhr