SuperAGI
I got "The model: `gpt-4` does not exist", then "Retry in 2s: KeyError('content')"
The first time I ran SuperCoder with GPT-4 (I have a GPT-4 token), it always got stuck at this step.
I filled in everything needed in config.yaml.
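For reference, a minimal sanity check of the config fields involved; the field names MODEL_NAME and OPENAI_API_KEY are assumptions based on SuperAGI's config_template.yaml, so treat this as a sketch:

```python
# Sanity-check sketch; MODEL_NAME / OPENAI_API_KEY are assumed field names
# based on SuperAGI's config_template.yaml.
import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg.get("MODEL_NAME"))            # expected: "gpt-4" for this run
print(bool(cfg.get("OPENAI_API_KEY")))  # True once the key is filled in
```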
This is the Docker log:
```
2023-06-28 09:16:53 superagi-celery-1 | Ensure the response can be parsed by Python json.loads.
2023-06-28 09:16:54 superagi-celery-1 | [2023-06-28 02:16:54,054: INFO/ForkPoolWorker-8] error_code=model_not_found error_message='The model: `gpt-4` does not exist' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
2023-06-28 09:16:54 superagi-celery-1 | 2023-06-28 02:16:54 UTC - Super AGI - INFO - [/app/superagi/llms/openai.py:79] - Exception:
2023-06-28 09:16:54 superagi-celery-1 | [2023-06-28 02:16:54,054: INFO/ForkPoolWorker-8] Exception:
2023-06-28 09:16:54 superagi-celery-1 | 2023-06-28 02:16:54 UTC - Super AGI - INFO - [/app/superagi/llms/openai.py:79] - The model: `gpt-4` does not exist
2023-06-28 09:16:54 superagi-celery-1 | [2023-06-28 02:16:54,054: INFO/ForkPoolWorker-8] The model: `gpt-4` does not exist
2023-06-28 09:16:54 superagi-celery-1 | [2023-06-28 02:16:54,083: ERROR/ForkPoolWorker-8] Task execute_agent[7723e7d7-11ef-4743-bdfd-b208184799dd] raised unexpected: KeyError('content')
2023-06-28 09:16:54 superagi-celery-1 | Traceback (most recent call last):
2023-06-28 09:16:54 superagi-celery-1 |   File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 451, in trace_task
2023-06-28 09:16:54 superagi-celery-1 |     R = retval = fun(*args, **kwargs)
2023-06-28 09:16:54 superagi-celery-1 |   File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 734, in __protected_call__
2023-06-28 09:16:54 superagi-celery-1 |     return self.run(*args, **kwargs)
2023-06-28 09:16:54 superagi-celery-1 |   File "/usr/local/lib/python3.9/site-packages/celery/app/autoretry.py", line 54, in run
2023-06-28 09:16:54 superagi-celery-1 |     ret = task.retry(exc=exc, **retry_kwargs)
2023-06-28 09:16:54 superagi-celery-1 |   File "/usr/local/lib/python3.9/site-packages/celery/app/task.py", line 717, in retry
2023-06-28 09:16:54 superagi-celery-1 |     raise_with_context(exc)
2023-06-28 09:16:54 superagi-celery-1 |   File "/usr/local/lib/python3.9/site-packages/celery/app/autoretry.py", line 34, in run
2023-06-28 09:16:54 superagi-celery-1 |     return task._orig_run(*args, **kwargs)
2023-06-28 09:16:54 superagi-celery-1 |   File "/app/superagi/worker.py", line 19, in execute_agent
2023-06-28 09:16:54 superagi-celery-1 |     AgentExecutor().execute_next_action(agent_execution_id=agent_execution_id)
2023-06-28 09:16:54 superagi-celery-1 |   File "/app/superagi/jobs/agent_executor.py", line 203, in execute_next_action
2023-06-28 09:16:54 superagi-celery-1 |     response = spawned_agent.execute(agent_workflow_step)
2023-06-28 09:16:54 superagi-celery-1 |   File "/app/superagi/agent/super_agi.py", line 168, in execute
2023-06-28 09:16:54 superagi-celery-1 |     if response['content'] is None:
2023-06-28 09:16:54 superagi-celery-1 | KeyError: 'content'
2023-06-28 09:16:54 superagi-backend-1 | INFO:     172.18.0.7:55016 - "GET /agentexecutionfeeds/get/execution/1 HTTP/1.0" 200 OK
```
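The KeyError at the bottom seems to follow directly from the model_not_found error above it: when the OpenAI call fails, the dict handed back to the agent apparently has no 'content' key, so the unguarded lookup in super_agi.py raises. A minimal sketch of that failure mode (the dict shapes are assumptions for illustration, not SuperAGI's actual return values):

```python
# Sketch of the failure mode; the dict shapes are assumptions, not
# SuperAGI's actual internals.
def chat_completion(model: str) -> dict:
    # On model_not_found, the LLM wrapper logs the exception and (apparently)
    # returns an error dict that has no "content" key.
    return {"error": f"The model: `{model}` does not exist"}

response = chat_completion("gpt-4")

try:
    if response['content'] is None:  # unguarded lookup, as in super_agi.py:168
        pass
except KeyError as exc:
    print("reproduces the crash:", exc)  # KeyError: 'content'

# A defensive .get() would surface the real error instead of crashing the task:
if response.get('content') is None:
    print("LLM call failed:", response.get('error', 'empty completion'))
```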
Then I ran it again with the gpt-3.5 model; it returns results but hits some errors and can't finish automatically:
@linhcentrio Hey, gpt-4 works fine on my end. Can you try running it again and check whether the error still persists? Also, can you try generating a new key that has gpt-4 access? This seems to be an error on OpenAI's side.
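A quick way to check whether a key actually has gpt-4 access, using the openai 0.x Python client SuperAGI used at the time (a sketch, not an official diagnostic):

```python
import openai

openai.api_key = "sk-..."  # the same key you put in config.yaml

# Keys without gpt-4 access won't list the model, and chat completions
# against it fail with error_code=model_not_found.
available = {m.id for m in openai.Model.list().data}
print("gpt-4" in available)
```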
In the second screenshot: "Unknown tool 'PerformanceEvaluationTool'." That seems kinda odd... I wonder if it's hallucinating this tool because of what's said in the base prompt?
```
TOOLS:
{tools}

PERFORMANCE EVALUATION:
1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.
2. Use instruction to decide the flow of execution and decide the next steps for achieving the task.
3. Constructively self-criticize your big-picture behavior constantly.
```
Just a thought. (Also, I guess this could just be a GPT-3.5 thing.)
"Unknown tool PerformanceEvaluationTool" is due to GPT-3.5 hallucinating.