MetaGPT
Unable to generate and save a report when running examples/research.py.
Bug description
When running examples/research.py, I encountered a bug where the report could not be generated or saved. The process stops at the ConductResearch step, with prompt_tokens: 1874, completion_tokens: 0, and empty content ('').
Environment information
- LLM type and model name: gpt-4-1106-preview
- System version: ubuntu 22.04
- Python version: 3.10.12
- package version: 0.6.0, 0.6.4, 0.6.5
- installation method: pip install metagpt
Screenshots or logs
2024-01-18 15:29:10.095 | INFO | metagpt.utils.cost_manager:update_cost:48 - Total running cost: $0.545 | Max budget: $10.000 | Current cost: $0.037, prompt_tokens: 2735, completion_tokens: 321
2024-01-18 15:29:21.722 | INFO | metagpt.utils.cost_manager:update_cost:48 - Total running cost: $0.572 | Max budget: $10.000 | Current cost: $0.027, prompt_tokens: 2290, completion_tokens: 122
2024-01-18 15:29:21.722 | INFO | metagpt.roles.researcher:_act:56 - David(Researcher): to do ConductResearch(David)
2024-01-18 15:29:23.129 | INFO | metagpt.utils.cost_manager:update_cost:48 - Total running cost: $0.590 | Max budget: $10.000 | Current cost: $0.019, prompt_tokens: 1874, completion_tokens: 0
If you installed metagpt by pip install metagpt, there should be no examples/research.py.
What commands did you execute, and in what order?
import asyncio
from metagpt.roles.researcher import Researcher

if __name__ == "__main__":

    async def main(topic: str, language="en-us"):
        role = Researcher(language=language)
        await role.run(topic)

    asyncio.run(main("Gemini Pro vs GPT-3.5"))
I have been checking this issue for several hours, and I have found the problem. It seems that there is an issue with the token calculation in ConductResearch. When I modified get_max_completion_tokens as follows, it started working again.
def get_max_completion_tokens(messages: list[dict], model: str, default: int) -> int:
    if model not in TOKEN_MAX:
        return default
    # return TOKEN_MAX[model] - count_message_tokens(messages) - 1
    return 4096
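Hardcoding 4096 unblocks the run, but it drops the headroom check entirely, so a long prompt on a smaller-context model could still overflow. A gentler variant (only a sketch; MAX_COMPLETION_TOKENS is an assumed per-model completion cap, not an existing MetaGPT constant) keeps the original calculation and clamps it to the cap:

MAX_COMPLETION_TOKENS = 4096  # assumed completion cap; check the actual limit of your model

def get_max_completion_tokens(messages: list[dict], model: str, default: int) -> int:
    if model not in TOKEN_MAX:
        return default
    # Remaining room in the context window after the prompt, clamped to the completion cap.
    remaining = TOKEN_MAX[model] - count_message_tokens(messages) - 1
    return max(1, min(remaining, MAX_COMPLETION_TOKENS))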
Found the cause of the error.
openai.BadRequestError: Error code: 400 - {'error': {'message': 'max_tokens is too large: 10964. This model supports at most 4096 completion tokens, whereas you provided 10964.', 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': None}}
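For what it's worth, the 400 is an API-side limit rather than anything MetaGPT-specific: the same error can be reproduced with a bare OpenAI call that passes an oversized max_tokens (a sketch using the openai>=1.x client; the message content is just a placeholder):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Any max_tokens above 4096 for this model raises the same BadRequestError.
client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=10964,
)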
@shenchucheng could you please take a look at this?