
context_length_exceeded

Open markharley12 opened this issue 1 year ago • 9 comments

I've been consistently hitting this issue and have seen a few others in the Discord reporting the same. I'm using gpt-3.5-turbo with the max token limit in config.yaml set to 4032. The maximum context for gpt-3.5-turbo is 4096, but on recursive calls tokens seem to build up in the messages until the request exceeds this limit.

My error from the terminal: celery_1 | [2023-08-31 23:26:11,980: INFO/ForkPoolWorker-8] error_code=context_length_exceeded error_message="This model's maximum context length is 4097 tokens. However, you requested 4986 tokens (954 in the messages, 4032 in the completion). Please reduce the length of the messages or completion." error_param=messages error_type=invalid_request_error message='OpenAI API error received' stream_error=False
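For what it's worth, the numbers in that log add up to the problem: 954 message tokens plus a requested 4032-token completion is 4986, over the 4097 context. It looks like the completion budget is being set to the full configured limit rather than to whatever the messages leave free. A minimal sketch of the kind of dynamic budget that would avoid this (a hypothetical helper using tiktoken, not SuperAGI's actual code):

import tiktoken

MODEL_CONTEXT_LIMIT = 4097  # hard limit for gpt-3.5-turbo per the error message

def count_message_tokens(messages, model="gpt-3.5-turbo"):
    # Rough count in the spirit of OpenAI's cookbook approximation:
    # each chat message carries a few extra tokens of framing overhead.
    enc = tiktoken.encoding_for_model(model)
    total = 0
    for msg in messages:
        total += 4  # approximate per-message overhead
        for value in msg.values():
            total += len(enc.encode(value))
    return total + 2  # priming tokens for the assistant reply

def safe_completion_budget(messages, desired=4032):
    # Never request more completion tokens than the context has left.
    used = count_message_tokens(messages)
    return max(0, min(desired, MODEL_CONTEXT_LIMIT - used))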

I'm looking through the code trying to figure out why this might be happening anyone got suggestions?

markharley12 avatar Sep 01 '23 00:09 markharley12

This might be an issue with your OpenAI billing. Please go through this link and check whether your plan is reflected: https://help.openai.com/en/articles/6891831-error-code-429-you-exceeded-your-current-quota-please-check-your-plan-and-billing-details

cognitivebot avatar Sep 01 '23 10:09 cognitivebot

I saw this in the Discord; I believe my issue is the same?

CognitiveBot — Today at 07:14
<@user> thanks for flagging this issue, I think there is an approximation in the token calculation which sometimes leads to this issue. We're looking into this.

markharley12 avatar Sep 01 '23 22:09 markharley12

I am on version v0.0.11 and I had the same issue with gpt-4.

PortlandKyGuy avatar Sep 04 '23 14:09 PortlandKyGuy

Same issue; I constantly see it scrolling in the output.

Checked billing; it shows about $3.40 in credits.

superagi-celery-1 | [2023-09-05 20:14:39,791: INFO/ForkPoolWorker-8] error_code=context_length_exceeded error_message="This model's maximum context length is 4097 tokens. However, you requested 4945 tokens (913 in the messages, 4032 in the completion). Please reduce the length of the messages or completion." error_param=messages error_type=invalid_request_error message='OpenAI API error received' stream_error=False

superagi-celery-1 | 2023-09-05 20:14:39 UTC - Super AGI - INFO - [/app/superagi/llms/openai.py:85] - OpenAi InvalidRequestError:
superagi-celery-1 | [2023-09-05 20:14:39,792: INFO/ForkPoolWorker-8] OpenAi InvalidRequestError:
superagi-celery-1 | 2023-09-05 20:14:39 UTC - Super AGI - INFO - [/app/superagi/llms/openai.py:85] - This model's maximum context length is 4097 tokens. However, you requested 4945 tokens (913 in the messages, 4032 in the completion). Please reduce the length of the messages or completion.

zethereumGmail avatar Sep 05 '23 20:09 zethereumGmail

This is the master branch config:

"gpt-3.5-turbo-0301": 4032,
"gpt-4-0314": 8092,
"gpt-3.5-turbo": 4032,
"gpt-4": 8092,
"gpt-4-32k": 32768,
"gpt-4-32k-0314": 32768,
"llama": 2048,
"mpt-7b-storywriter": 45000

MODEL_NAME: "gpt-3.5-turbo-0301" # "gpt-3.5-turbo", "gpt-4", "models/chat-bison-001"

RESOURCES_SUMMARY_MODEL_NAME: "gpt-3.5-turbo"
MAX_TOOL_TOKEN_LIMIT: 800
MAX_MODEL_TOKEN_LIMIT: 4032 # set to 2048 for llama
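A side note on what that last value seems to mean: if MAX_MODEL_TOKEN_LIMIT is passed straight through as the completion budget (the logs above suggest it is, since the completion is always exactly the configured value), then with 4032 reserved for the completion only 4097 − 4032 = 65 tokens are left for the messages, so almost any real conversation will overflow.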

I changed my MAX_MODEL_TOKEN_LIMIT from 4032 to 2048 as a test and have not seen the error since.

zethereumGmail avatar Sep 05 '23 22:09 zethereumGmail

Spoke too soon, still seeing it.

superagi-celery-1 | [2023-09-05 22:42:04,107: INFO/ForkPoolWorker-8] error_code=context_length_exceeded error_message="This model's maximum context length is 4097 tokens. However, you requested 4731 tokens (2683 in the messages, 2048 in the completion). Please reduce the length of the messages or completion." error_param=messages error_type=invalid_request_error message='OpenAI API error received' stream_error=False
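The arithmetic in that log tells the same story: 2683 message tokens + the 2048-token completion = 4731, still over 4097. A static limit only helps while the conversation stays short; once the messages grow, something has to shrink. One stop-gap is to trim the oldest turns until the request fits — a rough sketch reusing the count_message_tokens helper above (hypothetical, not SuperAGI's code):

def trim_messages_to_fit(messages, completion_budget,
                         context_limit=4097, model="gpt-3.5-turbo"):
    # Drop the oldest non-system turns until messages + completion fit.
    trimmed = list(messages)
    while (len(trimmed) > 1 and
           count_message_tokens(trimmed, model) + completion_budget > context_limit):
        del trimmed[1]  # keep index 0, which is usually the system prompt
    return trimmed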

zethereumGmail avatar Sep 05 '23 22:09 zethereumGmail

@zethereumGmail can you provide more details about your issue? That would help me understand it much better.

jedan2506 avatar Sep 07 '23 13:09 jedan2506

I am having the same issue each time I spin up an agent; it just exits with an error automatically.

[screenshot of the error attached]

refugedesigns avatar Dec 07 '23 08:12 refugedesigns

@refugedesigns, there is an approximation in the token calculation which sometimes leads to this issue; we're working on a fix. For now, try re-running it with a reduced message length.
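If you want to automate that workaround in the meantime, one option is to catch the overflow and retry with a trimmed history. A rough sketch building on the helpers above (this assumes the pre-1.0 openai Python SDK that the logs come from; chat_with_retry and the trimming helper are hypothetical names, not SuperAGI's API):

import openai

def chat_with_retry(messages, max_tokens, model="gpt-3.5-turbo"):
    # One retry with a trimmed history if the context overflows.
    try:
        return openai.ChatCompletion.create(
            model=model, messages=messages, max_tokens=max_tokens)
    except openai.error.InvalidRequestError as e:
        if getattr(e, "code", None) != "context_length_exceeded":
            raise
        shorter = trim_messages_to_fit(messages, max_tokens)
        return openai.ChatCompletion.create(
            model=model, messages=shorter, max_tokens=max_tokens)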

cognitivebot avatar Dec 07 '23 12:12 cognitivebot