
Tool id string too long

Open: rapturt9 opened this issue 8 months ago · 0 comments

What happened?

Describe the bug: The tool call id generated during a run is too long and causes a 400 error from the model endpoint. The id is 41 characters, but the maximum allowed is 40, which raises openai.BadRequestError: Error code: 400. (LiteLLM surfaces this as a ContextWindowExceededError, but the underlying failure is the id length validation, as the error message below shows.)
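
The failure can be reproduced outside of autogen by replaying a conversation history whose tool call id exceeds the limit. This is a minimal sketch, not taken from the report: it assumes a LiteLLM proxy listening on localhost:4000 with a model alias `reasoning` as in the log below, and the URL, key, and tool name are placeholders:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-placeholder")

long_id = "x" * 41  # one character over the 40-char maximum

client.chat.completions.create(
    model="reasoning",
    messages=[
        {"role": "user", "content": "What is 2 + 2?"},
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": long_id,
                "type": "function",
                "function": {"name": "add", "arguments": '{"a": 2, "b": 2}'},
            }],
        },
        # Replaying the tool result with the same over-long id triggers the
        # 400: "string too long. Expected a string with maximum length 40".
        {"role": "tool", "tool_call_id": long_id, "content": "4"},
    ],
)
```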

Stack Trace

```
ERROR:autogen_core:Error processing publish message for python_analyst_5694c56d-6c52-4f8b-8da4-7e5b331e2210/5694c56d-6c52-4f8b-8da4-7e5b331e2210
Traceback (most recent call last):
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/autogen_core/_single_threaded_agent_runtime.py", line 533, in _on_message
    return await agent.on_message(
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/autogen_core/_base_agent.py", line 113, in on_message
    return await self.on_message_impl(message, ctx)
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/autogen_agentchat/teams/_group_chat/_sequential_routed_agent.py", line 67, in on_message_impl
    return await super().on_message_impl(message, ctx)
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/autogen_core/_routed_agent.py", line 485, in on_message_impl
    return await h(self, message, ctx)
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/autogen_core/_routed_agent.py", line 268, in wrapper
    return_value = await func(self, message, ctx)  # type: ignore
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/autogen_agentchat/teams/_group_chat/_chat_agent_container.py", line 79, in handle_request
    async for msg in self._agent.on_messages_stream(self._message_buffer, ctx.cancellation_token):
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 793, in on_messages_stream
    async for inference_output in self._call_llm(
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 920, in _call_llm
    model_result = await model_client.create(
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/autogen_ext/models/openai/_openai_client.py", line 622, in create
    result: Union[ParsedChatCompletion[BaseModel], ChatCompletion] = await future
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2000, in create
    return await self._post(
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/openai/_base_client.py", line 1767, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/openai/_base_client.py", line 1461, in request
    return await self._request(
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/openai/_base_client.py", line 1562, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - Invalid 'messages[3].tool_calls[0].id': string too long. Expected a string with maximum length 40, but got a string with length 41 instead.\nmodel=reasoning. context_window_fallbacks=None. fallbacks=None.\n\nSet 'context_window_fallback' - https://docs.litellm.ai/docs/routing#fallbacks. Received Model Group=reasoning\nAvailable Model Group Fallbacks=None", 'type': None, 'param': None, 'code': '400'}}
```

The same error is then re-raised through the application code (the intermediate autogen and openai frames are identical to those above):

```
Error in analyze_problem: BadRequestError: Error code: 400 - {'error': {'message': "litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - Invalid 'messages[3].tool_calls[0].id': string too long. Expected a string with maximum length 40, but got a string with length 41 instead.\nmodel=reasoning. context_window_fallbacks=None. fallbacks=None.\n\nSet 'context_window_fallback' - https://docs.litellm.ai/docs/routing#fallbacks. Received Model Group=reasoning\nAvailable Model Group Fallbacks=None", 'type': None, 'param': None, 'code': '400'}}
Traceback (most recent call last):
  File "/Users/rampotham/Documents/GitHub/sitewiz/backend/agents/data_analyst_group/src/group_chat.py", line 145, in analyze_problem
    task_result, summary, evaluation_record, state_manager = await run_group_chat(options["type"], chat, task)
  File "/Users/rampotham/Documents/GitHub/sitewiz/backend/agents/data_analyst_group/src/group_chat.py", line 100, in run_group_chat
    task_result = await state_manager.process_stream(stream, chat)
  File "/Users/rampotham/Documents/GitHub/sitewiz/backend/agents/data_analyst_group/promptOptimization/StateManager.py", line 119, in process_stream
    async for message in stream:
  File "/Users/rampotham/miniforge3/envs/sitewiz/lib/python3.12/site-packages/autogen_agentchat/teams/_group_chat/_base_group_chat.py", line 503, in run_stream
    raise RuntimeError(str(message.error))
RuntimeError: BadRequestError: Error code: 400 - {'error': {'message': "litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - Invalid 'messages[3].tool_calls[0].id': string too long. Expected a string with maximum length 40, but got a string with length 41 instead.\nmodel=reasoning. context_window_fallbacks=None. fallbacks=None.\n\nSet 'context_window_fallback' - https://docs.litellm.ai/docs/routing#fallbacks. Received Model Group=reasoning\nAvailable Model Group Fallbacks=None", 'type': None, 'param': None, 'code': '400'}}
```
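
A possible user-side mitigation while this is unfixed: deterministically shorten any over-long tool call id before the conversation history is resent. This is only a sketch; the function name and constant are illustrative, not autogen API:

```python
import hashlib

MAX_TOOL_CALL_ID_LEN = 40  # limit enforced by the OpenAI-compatible endpoint

def shorten_tool_call_id(tool_call_id: str) -> str:
    """Deterministically map an over-long id into the 40-char limit.

    The same input always yields the same output, so the assistant
    message's tool_calls[i].id and the matching tool message's
    tool_call_id stay consistent after shortening.
    """
    if len(tool_call_id) <= MAX_TOOL_CALL_ID_LEN:
        return tool_call_id
    # A SHA-1 hexdigest is exactly 40 characters.
    return hashlib.sha1(tool_call_id.encode("utf-8")).hexdigest()
```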

Which packages was the bug in?

Python Core (autogen-core), Python Extensions (autogen-ext)

AutoGen library version.

Python 0.5.4

Other library version.

No response

Model used

vertex_ai/gemini-2.5-flash-preview-04-17 (through litellm proxy)
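
For context, a client routed through a LiteLLM proxy is typically wired up as in the sketch below. This is assumed setup, not taken from the report; the proxy URL, key, and model_info values are placeholders:

```python
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="vertex_ai/gemini-2.5-flash-preview-04-17",
    base_url="http://localhost:4000",  # hypothetical LiteLLM proxy address
    api_key="sk-placeholder",
    # Required for non-OpenAI models the client cannot infer capabilities for.
    model_info={
        "vision": True,
        "function_calling": True,
        "json_output": True,
        "family": "unknown",
        "structured_output": True,
    },
)
```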

Model provider

None

Other model provider

No response

Python version

None

.NET version

None

Operating system

None

rapturt9 · Apr 23 '25 20:04