MetaGPT
tenacity.RetryError: RetryError[<Future at 0x7faeafc5ffd0 state=finished raised JSONDecodeError>]
When I set everything up and run python3 startup.py "Write a cli snake game", I get this error.
""",
"Logic Analysis": [
["main.py","Contains the main game loop and user input handling"],
["game.py","Contains the game logic, including the snake and food classes"],
["snake.py","Contains the Snake class and its methods for moving and eating food"],
["food.py","Contains the Food class and its method for generating new food"]
],
"Task list": [
"main.py",
"game.py",
"snake.py",
"food.py"
],
"Shared Knowledge": """
'game.py' contains the Game class, which manages the game state and controls the snake and food.
'snake.py' contains the Snake class, which represents the snake and its movements.
'food.py' contains the Food class, which represents the food and generates new food when eaten by the snake.
""",
"Anything UNCLEAR": "We need to decide on the game's width and height, which will be specified in the API request when starting a new game."
} [END]
Traceback (most recent call last):
File "/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/cx/qianwen/MetaGPT-main/metagpt/actions/action.py", line 78, in _aask_v1
parsed_data = CustomDecoder(strict=False).decode(content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/cx/qianwen/MetaGPT-main/metagpt/utils/custom_decoder.py", line 297, in decode
return super().decode(s)
^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/py11/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/py11/lib/python3.11/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/cx/qianwen/MetaGPT-main/startup.py", line 72, in <module>
fire.Fire(main)
File "/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/fire/core.py", line 466, in _Fire
component, remaining_args = _CallAndUpdateTrace(
^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/cx/qianwen/MetaGPT-main/startup.py", line 68, in main
asyncio.run(startup(idea, investment, n_round, code_review, run_tests, implement))
File "/home/user/anaconda3/envs/py11/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/py11/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/py11/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/user/cx/qianwen/MetaGPT-main/startup.py", line 47, in startup
await company.run(n_round=n_round)
File "/home/user/cx/qianwen/MetaGPT-main/metagpt/software_company.py", line 60, in run
await self.environment.run()
File "/home/user/cx/qianwen/MetaGPT-main/metagpt/environment.py", line 67, in run
await asyncio.gather(*futures)
File "/home/user/cx/qianwen/MetaGPT-main/metagpt/roles/role.py", line 240, in run
rsp = await self._react()
^^^^^^^^^^^^^^^^^^^
File "/home/user/cx/qianwen/MetaGPT-main/metagpt/roles/role.py", line 209, in _react
return await self._act()
^^^^^^^^^^^^^^^^^
File "/home/user/cx/qianwen/MetaGPT-main/metagpt/roles/role.py", line 168, in _act
response = await self._rc.todo.run(self._rc.important_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/cx/qianwen/MetaGPT-main/metagpt/actions/project_management.py", line 184, in run
rsp = await self._aask_v1(prompt, "task", OUTPUT_MAPPING, format=format)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/tenacity/__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7faeafc5ffd0 state=finished raised JSONDecodeError>]
This is because the LLM did not return the content in the required JSON format. You can try it with GPT-4; GPT-4 won't have this issue.
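A side note, in case it helps anyone here: since the failure is that the raw model reply isn't parseable JSON, a rough pre-cleaning step can rescue some outputs. This is a hypothetical helper, not MetaGPT's actual repair code, and it assumes the model wrapped valid JSON in prose or markdown fences:

```python
import json
import re

def extract_json(content: str) -> dict:
    """Best-effort extraction of a JSON object from raw LLM output.

    Strips markdown code fences, then parses from the first '{' to the
    last '}' so surrounding prose from the model is ignored.
    """
    # Remove ```json ... ``` style fences if present
    content = re.sub(r"```(?:json)?", "", content)
    start, end = content.find("{"), content.rfind("}")
    if start == -1 or end == -1 or end <= start:
        raise ValueError("no JSON object found in LLM output")
    return json.loads(content[start : end + 1])
```

This won't save output that is structurally broken (e.g. Python-style triple-quoted strings inside the JSON), but it covers the common "valid JSON buried in chatter" case.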
If we're using LiteLLM with various ollama models to run locally (so far I've tried mistral, deepseek-coder, and codellama), do you happen to have any suggestions on how we could resolve this issue? TIA
@jonny255 You can refer to https://github.com/geekan/MetaGPT/issues/526 and try integration_with_open_llm.md
with REPAIR_LLM_OUTPUT: true,
which will try to repair open-LLM output, but it's experimental. If you have new error logs, you can upload them here.
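For anyone else landing here: the flag above goes in MetaGPT's YAML config. A sketch of what that looks like (the exact config file path may differ between versions, so check your own checkout):

```yaml
# MetaGPT config (e.g. config/config.yaml in your checkout)
# Experimental: attempt to repair malformed JSON returned by open-source LLMs
REPAIR_LLM_OUTPUT: true
```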
#500
This is because the LLM did not return the content in the required JSON format. You can try it with GPT-4; GPT-4 won't have this issue.
If we're using LiteLLM with various ollama models to run locally (so far I've tried mistral, deepseek-coder, and codellama), do you happen to have any suggestions on how we could resolve this issue? TIA
Did you get any results? I have tried codellama with ollama and litellm, and every time I get a new error.
@MARYAMJAHANIR Which LLM have you tried? Maybe llama2-7b/13b is worth trying first, but they can't generate the required content every time.
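One debugging aid when comparing models: dump the raw reply and check it yourself before the tenacity retries burn through. Below is a minimal hypothetical checker, not MetaGPT code, against the keys the task output above contains; note MetaGPT's own CustomDecoder is more lenient (it tolerates triple-quoted strings), so plain json here is only an approximation:

```python
import json

# Keys seen in the expected task output earlier in this thread
REQUIRED_KEYS = {"Logic Analysis", "Task list", "Shared Knowledge", "Anything UNCLEAR"}

def validate_task_output(raw: str) -> list[str]:
    """Return a list of problems with the model's output; empty list means OK."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        return [f"missing keys: {sorted(missing)}"]
    return []
```

Running this on each model's raw reply makes it obvious whether the failure is malformed JSON or just missing sections.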