
Can MetaGPT support a TPM (tokens-per-minute) setting?

yuuuuuuan opened this issue 9 months ago · 1 comment

```
2025-03-21 13:36:02.044 | ERROR | metagpt.utils.common:wrapper:683 - Exception occurs, start to serialize the project, exp: Traceback (most recent call last):
  File "/mnt/d/.github/MetaGPT/metagpt/utils/common.py", line 692, in wrapper
    return await func(self, *args, **kwargs)
  File "/mnt/d/.github/MetaGPT/metagpt/roles/role.py", line 548, in run
    rsp = await self.react()
openai.APIStatusError: Error code: 413 - {'error': {'message': 'Request too large for model llama3-70b-8192 in organization org_01jjvzqkd4frf9g3awzsx7d5vx service tier on_demand on tokens per minute (TPM): Limit 6000, Requested 6042, please reduce your message size and try again. Visit https://console.groq.com/docs/rate-limits for more information.', 'type': 'tokens', 'code': 'rate_limit_exceeded'}}
```

The above exception was the direct cause of the following exception:

```
Traceback (most recent call last):
  File "/mnt/d/.github/MetaGPT/metagpt/utils/common.py", line 678, in wrapper
    result = await func(self, *args, **kwargs)
  File "/mnt/d/.github/MetaGPT/metagpt/team.py", line 134, in run
    await self.env.run()
Exception: Traceback (most recent call last):
  File "/mnt/d/.github/MetaGPT/metagpt/utils/common.py", line 692, in wrapper
    return await func(self, *args, **kwargs)
  File "/mnt/d/.github/MetaGPT/metagpt/roles/role.py", line 548, in run
    rsp = await self.react()
  File "/mnt/d/.github/MetaGPT/metagpt/roles/role.py", line 515, in react
    rsp = await self._react()
  File "/mnt/d/.github/MetaGPT/metagpt/roles/di/role_zero.py", line 349, in _react
    has_todo = await self._think()
  File "/mnt/d/.github/MetaGPT/metagpt/roles/product_manager.py", line 56, in _think
    return await super()._think()
  File "/mnt/d/.github/MetaGPT/metagpt/roles/di/role_zero.py", line 257, in _think
    self.command_rsp = await self.llm_cached_aask(req=req, system_msgs=[system_prompt], state_data=state_data)
  File "/mnt/d/.github/MetaGPT/metagpt/exp_pool/decorator.py", line 187, in async_wrapper
    return await wrapped_func(args, kwargs)
  File "/mnt/d/.github/MetaGPT/metagpt/exp_pool/decorator.py", line 65, in get_or_create
    return await rsp if asyncio.iscoroutine(rsp) else rsp
  File "/mnt/d/.github/MetaGPT/metagpt/roles/di/role_zero.py", line 268, in llm_cached_aask
    return await self.llm.aask(req, system_msgs=system_msgs)
  File "/mnt/d/.github/MetaGPT/metagpt/provider/base_llm.py", line 206, in aask
    rsp = await self.acompletion_text(compressed_message, stream=stream, timeout=self.get_timeout(timeout))
  File "/mnt/d/.github/MetaGPT/myenv/lib/python3.9/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
    return await fn(*args, **kwargs)
  File "/mnt/d/.github/MetaGPT/myenv/lib/python3.9/site-packages/tenacity/_asyncio.py", line 47, in call
    do = self.iter(retry_state=retry_state)
  File "/mnt/d/.github/MetaGPT/myenv/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/root/.pyenv/versions/3.9.21/lib/python3.9/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
  File "/root/.pyenv/versions/3.9.21/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
  File "/mnt/d/.github/MetaGPT/myenv/lib/python3.9/site-packages/tenacity/_asyncio.py", line 50, in call
    result = await fn(*args, **kwargs)
  File "/mnt/d/.github/MetaGPT/metagpt/provider/openai_api.py", line 174, in acompletion_text
    return await self._achat_completion_stream(messages, timeout=timeout)
  File "/mnt/d/.github/MetaGPT/metagpt/provider/openai_api.py", line 92, in _achat_completion_stream
    response: AsyncStream[ChatCompletionChunk] = await self.aclient.chat.completions.create(
  File "/mnt/d/.github/MetaGPT/myenv/lib/python3.9/site-packages/openai/resources/chat/completions/completions.py", line 1927, in create
    return await self._post(
  File "/mnt/d/.github/MetaGPT/myenv/lib/python3.9/site-packages/openai/_base_client.py", line 1856, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/mnt/d/.github/MetaGPT/myenv/lib/python3.9/site-packages/openai/_base_client.py", line 1550, in request
    return await self._request(
  File "/mnt/d/.github/MetaGPT/myenv/lib/python3.9/site-packages/openai/_base_client.py", line 1651, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.APIStatusError: Error code: 413 - {'error': {'message': 'Request too large for model llama3-70b-8192 in organization org_01jjvzqkd4frf9g3awzsx7d5vx service tier on_demand on tokens per minute (TPM): Limit 6000, Requested 6042, please reduce your message size and try again. Visit https://console.groq.com/docs/rate-limits for more information.', 'type': 'tokens', 'code': 'rate_limit_exceeded'}}
```

yuuuuuuan · Mar 21 '25
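Note that this 413 is not a transient failure: the single request (6,042 tokens) already exceeds Groq's 6,000-token TPM cap, so retrying alone cannot succeed; the prompt has to shrink, or the account needs a higher service tier or a model with a larger TPM budget. For requests that do fit under the cap, a client-side throttle keeps bursts from tripping the limit. Below is a minimal sketch of such a throttle; it is not MetaGPT code, `TPMLimiter` and `count_tokens` are hypothetical names, and `tiktoken`'s `cl100k_base` encoding only approximates Llama 3's tokenizer:

```python
# Minimal sliding-window TPM throttle. Illustrative only: MetaGPT does not
# ship this helper, and all names here are hypothetical.
import asyncio
import time

import tiktoken


def count_tokens(text: str) -> int:
    """Approximate token count; cl100k_base is not Llama 3's tokenizer."""
    return len(tiktoken.get_encoding("cl100k_base").encode(text))


class TPMLimiter:
    """Block callers until a request fits under the tokens-per-minute cap."""

    def __init__(self, tpm_limit: int = 6000):
        self.tpm_limit = tpm_limit
        self.window = []  # (timestamp, tokens) spent within the last 60 s

    async def acquire(self, tokens: int) -> None:
        if tokens > self.tpm_limit:
            # Exactly the failure in this issue: 6042 > 6000, so no amount
            # of waiting helps; the message itself must be reduced.
            raise ValueError("single request exceeds the TPM cap; reduce the prompt")
        while True:
            now = time.monotonic()
            # Drop spend older than one minute, then check the remaining budget.
            self.window = [(t, n) for t, n in self.window if now - t < 60]
            if sum(n for _, n in self.window) + tokens <= self.tpm_limit:
                self.window.append((now, tokens))
                return
            # Wait for the oldest spend to age out of the one-minute window.
            await asyncio.sleep(60 - (now - self.window[0][0]) + 0.1)
```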

I implemented the feature. You can try it: feature/rate-limit

Terrdi · Mar 21 '25
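For reference, a limiter like the sketch above could be wired around MetaGPT's `LLM.aask` as follows. This is hypothetical glue code, not necessarily how the feature/rate-limit branch does it:

```python
# Hypothetical wiring: gate each aask call on the TPMLimiter sketched above.
# Not taken from the feature/rate-limit branch, whose interface may differ.
limiter = TPMLimiter(tpm_limit=6000)

async def rate_limited_aask(llm, prompt: str, **kwargs):
    await limiter.acquire(count_tokens(prompt))  # block until the budget allows it
    return await llm.aask(prompt, **kwargs)
```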

This issue has had no activity in the past 30 days. Please comment on the issue if you have anything to add.

github-actions[bot] · May 02 '25

This issue was closed due to 45 days of inactivity. If you feel this issue is still relevant, please reopen the issue to continue the discussion.

github-actions[bot] · May 17 '25