SuperAGI
if response['content'] is None: | KeyError: 'content'
I am writing this issue to bring to your attention a recurring error I have encountered while working with your system.
I have included the lead-up to the problem for context on what triggers the error. The issue usually appears after I run a custom tool I've designed for Super AGI. I'm fairly certain the tool itself isn't causing the error; rather, it merely exposes a pre-existing issue in the codebase. The tool returns almost 200 lines of data; I've included 6 of them below to demonstrate.
superagi-celery-1 | [2023-06-30 01:00:21,960: INFO/ForkPoolWorker-8] You are an AI assistant to create task.
superagi-celery-1 |
superagi-celery-1 | High level goal:
superagi-celery-1 | 1. Get Product Information and data from my shopify store
superagi-celery-1 |
superagi-celery-1 |
superagi-celery-1 | INSTRUCTION(Follow these instruction to decide the flow of execution and decide the next steps for achieving the task):
superagi-celery-1 | 1. First search the store for all products using the Get All Products tool
superagi-celery-1 | 2. Second, get all the product data with the All Product Data tool based on the first product Id found with the previous get all products command
superagi-celery-1 | 3. Last, use the All Product Data tool to get all the product data with the title of a product
superagi-celery-1 |
superagi-celery-1 |
superagi-celery-1 | You have following incomplete tasks `['All Product Data', 'All Product Data']`. You have following completed tasks `['Get All Products']`.
superagi-celery-1 |
superagi-celery-1 | Task History:
superagi-celery-1 | `
superagi-celery-1 | Task: Get All Products
superagi-celery-1 | Result: Tool Get All Products returned: Found 193 products:
superagi-celery-1 | +---------------+---------------------------------------------------------------------------+-------+
superagi-celery-1 | | Product ID | Title | Price |
superagi-celery-1 | | 8347806236950 | Test Leggings | 60.00 |
superagi-celery-1 | | 8347883307286 | Test Leggings | 65.00 |
superagi-celery-1 | | 8347915026710 | Test Leggings | 60.00 |
superagi-celery-1 | | 8347948417302 | Test Leggings | 60.00 |
superagi-celery-1 | | 8352283918614 | Test Leggings | 60.00 |
superagi-celery-1 | | 8352509886742 | Test Leggings | 60.00 |
superagi-celery-1 | +---------------+---------------------------------------------------------------------------+-------+
superagi-celery-1 |
superagi-celery-1 |
During execution, when the execute method in super_agi.py is called, I occasionally encounter a KeyError for 'content'. The error traceback is as follows:
superagi-celery-1 | Based on this, create a single task to be completed by your AI system ONLY IF REQUIRED to get closer to or fully reach your high level goal.
superagi-celery-1 | Don't create any task if it is already covered in incomplete or completed tasks.
superagi-celery-1 | Ensure your new task are not deviated from completing the goal.
superagi-celery-1 |
superagi-celery-1 | Your answer should be an array of strings that can be used with JSON.parse() and NOTHING ELSE. Return empty array if no new task is required.
superagi-proxy-1 | 172.18.0.1 - - [30/Jun/2023:01:00:22 +0000] "GET /_next/webpack-hmr HTTP/1.1" 499 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36" "-"
superagi-celery-1 | [2023-06-30 01:00:23,034: INFO/ForkPoolWorker-8] error_code=None error_message="-3038 is less than the minimum of 1 - 'max_tokens'" error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
superagi-celery-1 | 2023-06-30 01:00:23 UTC - Super AGI - INFO - [/app/superagi/llms/openai.py:79] - Exception:
superagi-celery-1 | [2023-06-30 01:00:23,034: INFO/ForkPoolWorker-8] Exception:
superagi-celery-1 | 2023-06-30 01:00:23 UTC - Super AGI - INFO - [/app/superagi/llms/openai.py:79] - -3038 is less than the minimum of 1 - 'max_tokens'
superagi-celery-1 | [2023-06-30 01:00:23,034: INFO/ForkPoolWorker-8] -3038 is less than the minimum of 1 - 'max_tokens'
superagi-celery-1 | [2023-06-30 01:00:23,065: ERROR/ForkPoolWorker-8] Task execute_agent[f9a49e26-e794-41f7-a4bf-dc653d3dbb51] raised unexpected: KeyError('content')
superagi-celery-1 | Traceback (most recent call last):
superagi-celery-1 | File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 451, in trace_task
superagi-celery-1 | R = retval = fun(*args, **kwargs)
superagi-celery-1 | File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 734, in __protected_call__
superagi-celery-1 | return self.run(*args, **kwargs)
superagi-celery-1 | File "/usr/local/lib/python3.9/site-packages/celery/app/autoretry.py", line 54, in run
superagi-celery-1 | ret = task.retry(exc=exc, **retry_kwargs)
superagi-celery-1 | File "/usr/local/lib/python3.9/site-packages/celery/app/task.py", line 717, in retry
superagi-celery-1 | raise_with_context(exc)
superagi-celery-1 | File "/usr/local/lib/python3.9/site-packages/celery/app/autoretry.py", line 34, in run
superagi-celery-1 | return task._orig_run(*args, **kwargs)
superagi-celery-1 | File "/app/superagi/worker.py", line 19, in execute_agent
superagi-celery-1 | AgentExecutor().execute_next_action(agent_execution_id=agent_execution_id)
superagi-celery-1 | File "/app/superagi/jobs/agent_executor.py", line 206, in execute_next_action
superagi-celery-1 | response = spawned_agent.execute(agent_workflow_step)
superagi-celery-1 | File "/app/superagi/agent/super_agi.py", line 168, in execute
superagi-celery-1 | if response['content'] is None:
superagi-celery-1 | KeyError: 'content'
This error occurs inconsistently, which makes it hard to predict and manage. The source appears to be the execute method within super_agi.py, specifically where it accesses 'content' on the response dictionary.
It's noteworthy that the error often leads to an infinite loop, forcing the task to retry again and again. This not only consumes resources but also stalls my work. The loop seems to happen when the retry function in autoretry.py is triggered after the exception.
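Not a fix, but a defensive guard along these lines would at least fail with a readable message instead of an opaque KeyError (a sketch only; I haven't traced the exact shape of response in super_agi.py):

```python
# Sketch of a defensive guard around the failing line in super_agi.py.
# `response` stands in for the dict returned by the LLM call; its real
# structure may differ, so treat this as illustrative, not a patch.
content = response.get('content')
if content is None:
    # Surface the whole response for debugging rather than raising
    # KeyError, which currently sends Celery's autoretry into a loop.
    raise ValueError(f"LLM response missing 'content': {response!r}")
```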
In light of this issue, I would like to offer my help. I have spent countless hours working with Python and have a good understanding of the system. I'm considering becoming a contributor to your repository, as I believe my experience could help not only in resolving this issue but also in improving the overall robustness of the system.
Please let me know if there is any more information I can provide or any way I can assist. I am willing and ready to make a positive impact on the Super AGI project.
Thank you for your time and for your dedication to this project @TransformerOptimus .
Can you check if you have a valid OpenAI API key?
I hope your offer to help is taken up! That was really well written, and it sounds like you might have some insights into the potential cause of this pesky KeyError: 'content' issue.
> Can you check if you have a valid OpenAI API key?
I've been using Super AGI for a month now to build tools, and I'm certain the API key is working properly, as I've run numerous operations with it in the past that completed successfully.
> I hope your offer to help is taken up! That was really well written, and it sounds like you might have some insights into the potential cause of this pesky KeyError: 'content' issue.
Absolutely, I'll start debugging and dig deeper into the code to find what's triggering the problem. Expect a commit in the next few days.
@LivingElevated We'd love to have you contribute to our Project. Looking forward to your commit!
@LivingElevated I have assigned you this issue. Looking forward to it!
Hi, just an observation from the second screenshot. I have previously encountered similar errors to this:
.... INFO/ForkPoolWorker-8] error_code=None error_message="-3038 is less than the minimum of 1 - 'max_tokens'" error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
It's basic arithmetic: -3038 is indeed a lot less than 1. As GPT4 elaborates:
"The error message -3038 is less than the minimum of 1 - 'max_tokens' indicates that the max_tokens parameter passed to the OpenAI API is negative, which is not allowed. The max_tokens parameter is used to limit the length of the model's response, and it must be a positive integer.
This error could occur if the calculation of max_tokens is incorrect. In the code you provided earlier, max_tokens is calculated as token_limit - current_tokens - function_tokens. If the sum of current_tokens and function_tokens is greater than token_limit, then max_tokens will be negative, leading to this error."
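To put numbers on it, here is a rough sketch of the failure mode and a possible guard. The figures are hypothetical, chosen to reproduce the -3038 from the log, and the variable names follow GPT4's explanation rather than SuperAGI's actual code:

```python
# Illustrative sketch, not SuperAGI's implementation. max_tokens is
# what remains of the context window after the prompt; if the prompt
# is too large, it goes negative and the OpenAI API rejects it.
token_limit = 4032       # context budget for gpt-3.5-turbo-0301
current_tokens = 6800    # hypothetical: prompt + history + tool output
function_tokens = 270    # hypothetical: tokens used by function schemas

max_tokens = token_limit - current_tokens - function_tokens
print(max_tokens)  # -3038, matching the error in the log

# A possible guard: refuse (or trim the prompt) before calling the API.
if max_tokens < 1:
    raise ValueError(
        f"Prompt of {current_tokens + function_tokens} tokens leaves "
        f"no room for a completion within {token_limit} tokens."
    )
```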
I am wondering if this is either related to, or the root cause of, the KeyError: 'content' issue you're dealing with. FWIW, GPT4 seems to think it could be:
"Yes, the KeyError: 'content' could very well be related to the negative max_tokens parameter. When an API request to OpenAI fails (for example, due to a negative max_tokens value), the response from the API may not include the 'content' key, which would normally contain the model's generated text."
Note the default config file specifies the base model and token limits (screenshot omitted here). I modified mine to use gpt-3.5-turbo-16k as the base model instead (which would easily handle the 3,038 tokens you've exceeded due to the length of the base prompt/instructions, previous iterations, etc.) and adjusted the max token parameter accordingly.
Also, in superagi/agent/super_agi.py, I increased the max_token_limit from the default 600 to 1200:
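(The screenshot I had here is lost; the edit was roughly the following, shown as an illustrative sketch rather than the exact source.)

```python
# In super_agi.py: reserve more of the context window for completions.
# Illustrative only; the surrounding code differs in the real file.
max_token_limit = 1200  # default was 600
```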
I can't remember if I made any other changes (I don't really know what I'm doing, if that wasn't already obvious..) but at least in my case, this resolved the error – though also increased my OpenAI API usage/bill...)
Ah, and I just noticed https://github.com/TransformerOptimus/SuperAGI/pull/628. Perhaps my observations/suggestions here are wrong/misplaced or now redundant! 🤷‍♂️
@clappo143 #628 adds more general error handling for when our request to the agent execution API returns an error response. My work catches all errors, your case included. If I am understanding correctly, the error you see is still something we can resolve. Your comment above should be linked to a new issue and still be fixed.
@ai-akuma Thanks. I think I understand. It seems #628 handles this KeyError: 'content' error – so at least the agent can move on and (hopefully) try something different – but it does not resolve the underlying cause of it. I think my post above does get closer to the actual issue at hand.
In OP, "The tool is returning almost 200 lines of data, I've included 6 below to demonstrate."
I duplicated the first line of the sample results 200 times in a spreadsheet and pasted the content into OpenAI's tokenizer, to roughly replicate the output of the custom tool and get a sense of how many tokens it was sending to the model. It came back with 3,382 tokens.
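The same estimate can be reproduced locally with the tiktoken library instead of the web tokenizer (the row below is a stand-in sample for the tool's actual output):

```python
# Estimate how many tokens the tool's table output consumes.
# Requires `pip install tiktoken`; the row text is a stand-in sample.
import tiktoken

row = "| 8347806236950 | Test Leggings | 60.00 |\n"
text = row * 200  # roughly replicate ~200 lines of tool output

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
print(len(enc.encode(text)))  # a few thousand tokens for this sample
```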
If we assume the OP's agent is using gpt-3.5-turbo-0301, then the model's max_token_limit is 4,032. The sum of current_tokens and function_tokens cannot exceed this value (otherwise max_tokens goes negative). The ~3,382 tokens generated by the custom tool eat into most of that 4,032-token context window; add the system/base prompts and any history, and we end up with the '-3038' cited in the error.
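In numbers: a max_tokens of -3,038 implies that current_tokens plus function_tokens totaled about 7,070, since 4,032 - 7,070 = -3,038. In other words, the request was nearly double the model's context window.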
As I see it, the error can be resolved by ensuring that the functions and messages passed to the API are not too long. It is not just the outputs from tools; the system messages (goals, instructions, constraints, tools, evaluation, task history, etc.) collectively consume a lot of tokens too. I also wonder whether the custom tool created by the OP is subject to the MAX_TOOL_TOKEN_LIMIT in the config file (it seems not, if it generated 200 lines of output).
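For illustration, clipping a tool's output to a token budget could look something like this (a sketch only, using tiktoken for counting; SuperAGI's own enforcement of MAX_TOOL_TOKEN_LIMIT, if any, may work differently):

```python
# Sketch: clip a tool's output to a fixed token budget before it is
# appended to the prompt. Uses tiktoken; not SuperAGI's actual code.
import tiktoken

def truncate_to_token_limit(text: str, max_tokens: int = 800,
                            model: str = "gpt-3.5-turbo") -> str:
    enc = tiktoken.encoding_for_model(model)
    tokens = enc.encode(text)
    if len(tokens) <= max_tokens:
        return text
    # Keep the head of the output and flag the cut so the agent knows.
    return enc.decode(tokens[:max_tokens]) + "\n[output truncated]"
```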
The other approach, which I guess is what I went with, is to use models with larger context windows and adjust token limit parameters accordingly.
I'm not sure if that makes sense – it barely does in my own head lol
Also, superagi/helper/token_counter.py seems relevant here.