[BUG] Invalid 'messages[2].tool_calls[0].id': string too long
Required prerequisites
- [x] I have read the documentation https://camel-ai.github.io/camel/camel.html.
- [x] I have searched the Issue Tracker and Discussions that this hasn't already been reported. (+1 or comment there if it has.)
- [ ] Consider asking first in a Discussion.
What version of camel are you using?
0.2.43
System information
3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] linux 0.2.43
Problem description
Sometimes tool calling with the OpenAI model raises an error.
Reproducible example code
The Python snippet:
```python
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

openai_model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
)
```
Command lines:
Extra dependencies:
Steps to reproduce:
Traceback
```
2025-04-17 08:38:13,534 - social.agent - ERROR - Agent 9 error: Unable to process messages: none of the provided models run successfully.
2025-04-17 08:38:13,554 - camel.models.model_manager - ERROR - Error processing with model: <camel.models.openai_model.OpenAIModel object at 0x14fa9e9af2b0>
2025-04-17 08:38:13,554 - camel.agents.chat_agent - ERROR - An error occurred while running model gpt-4o-mini, index: 1
Traceback (most recent call last):
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/camel/agents/chat_agent.py", line 839, in _aget_model_response
    response = await self.model_backend.arun(
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/camel/models/model_manager.py", line 265, in arun
    raise exc
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/camel/models/model_manager.py", line 253, in arun
    response = await self.current_model.arun(
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/camel/models/base_model.py", line 307, in arun
    return await self._arun(messages, response_format, tools)
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/camel/models/openai_model.py", line 243, in _arun
    return await self._arequest_chat_completion(messages, tools)
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/camel/models/openai_model.py", line 279, in _arequest_chat_completion
    return await self._async_client.chat.completions.create(
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/openai/resources/chat/completions/completions.py", line 2000, in create
    return await self._post(
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/openai/_base_client.py", line 1767, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/openai/_base_client.py", line 1461, in request
    return await self._request(
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/openai/_base_client.py", line 1562, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'messages[2].tool_calls[0].id': string too long. Expected a string with maximum length 40, but got a string with length 46 instead.", 'type': 'invalid_request_error', 'param': 'messages[2].tool_calls[0].id', 'code': 'string_above_max_length'}}
```
Expected behavior
No response
Additional context
No response
hi @yiyiyi0817 is this because the prompt is either too long or the generated response exceeds the context length?
But it shows the tool call id is too long, which I'm quite confused about. Has anyone else encountered this error?
can you share the entire code snippet?
perhaps there's a very long tool name or a comma not being enclosed properly; the entire code snippet will help debug this
here: https://github.com/camel-ai/oasis/blob/refactor/scripts/environment/twitter_simulation.py
hey @yiyiyi0817 , which model were you using? From the code snippet it's gpt-4o-mini, but from the link it's vLLM
As the link shows, I use both the OpenAI model and the vLLM model.
Hi @yiyiyi0817 , the vLLM model cannot produce the same error since the error comes from the OpenAI API, right? So the bug comes from the OpenAI model?
I updated the script with
```python
openai_model_1 = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type="gpt-4o",
)
openai_model_2 = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type="gpt-4o",
)
models = [openai_model_1, openai_model_2]
```
But it did not give me the same error... Can you provide more details to reproduce it? Thanks!
The tool call id is usually assigned by the OpenAI API itself, and it cannot be longer than 40 characters. Is it possible it was first generated by another model, and then you somehow switched to the OpenAI model? Just a guess..
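As a quick check, a sketch along these lines (`find_overlong_tool_call_ids` is a hypothetical helper, not part of CAMEL) could scan a stored OpenAI-style message history for ids that would violate the limit:
```python
# Minimal sketch, assuming an OpenAI-style message history (a list of dicts).
# MAX_TOOL_CALL_ID_LEN reflects the 40-character limit from the error above.
MAX_TOOL_CALL_ID_LEN = 40

def find_overlong_tool_call_ids(messages: list[dict]) -> list[str]:
    """Return every tool call id in the history that exceeds the limit."""
    overlong = []
    for msg in messages:
        for tool_call in msg.get("tool_calls") or []:
            tool_call_id = tool_call.get("id", "")
            if len(tool_call_id) > MAX_TOOL_CALL_ID_LEN:
                overlong.append(tool_call_id)
    return overlong
```
If this returns anything right before the request is sent to the OpenAI backend, those ids were most likely generated by a different platform.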
I think it might be like this, but I'm concerned that directly truncating the tool call ID could cause other issues. I'll take another careful look. I can also try using only the vLLM models.
@MuggleJinx @Wendong-Fan This issue doesn't occur when I use multiple vLLM models; it only happens when the model list includes both OpenAI and vLLM models. For OASIS, I think it's fine to just let users input multiple vLLM models with different URLs. But if CAMEL wants to support cases where users mix different types of models, it might need to handle this more carefully.
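For reference, the mixed setup that triggers it looks roughly like this (a sketch; the vLLM model name and server URL are placeholders for my actual deployment):
```python
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

openai_model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
)
vllm_model = ModelFactory.create(
    model_platform=ModelPlatformType.VLLM,
    model_type="Qwen/Qwen2.5-7B-Instruct",  # placeholder model name
    url="http://localhost:8000/v1",  # placeholder vLLM server URL
)
# Tool call ids generated by the vLLM backend can exceed 40 characters; once
# that history is replayed to the OpenAI backend, the 400 error is raised.
models = [openai_model, vllm_model]
```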
thanks @MuggleJinx and @yiyiyi0817 ! I think this case should be handled: if a user has stored chat history and wants to apply it to a model platform different from the one that initially generated it, this error could happen. We need to verify whether there are any side effects if we manually shorten the id when it's beyond OpenAI's max length limit.
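One possible direction (only a sketch, not a decided fix; `shorten_tool_call_ids` is a hypothetical helper) is to remap over-long ids to deterministic short aliases, keeping each assistant message's `tool_calls[].id` consistent with the `tool_call_id` on the matching tool result message:
```python
import hashlib

MAX_TOOL_CALL_ID_LEN = 40  # OpenAI's limit, per the error message above

def shorten_tool_call_ids(messages: list[dict]) -> list[dict]:
    """Remap over-long tool call ids to short deterministic aliases in place."""

    def short_id(original: str) -> str:
        if len(original) <= MAX_TOOL_CALL_ID_LEN:
            return original
        # "call_" + 32 hex chars = 37 characters, safely under the 40 limit.
        # Hashing (rather than truncating) avoids collisions between ids
        # that share a long common prefix.
        digest = hashlib.md5(original.encode()).hexdigest()
        return f"call_{digest}"

    for msg in messages:
        for tool_call in msg.get("tool_calls") or []:
            tool_call["id"] = short_id(tool_call["id"])
        if "tool_call_id" in msg:
            msg["tool_call_id"] = short_id(msg["tool_call_id"])
    return messages
```
Because the alias is a deterministic hash of the original id, the assistant message and its tool result always map to the same shortened id, but we'd still need to verify there are no other side effects downstream.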