
[BUG] Invalid 'messages[2].tool_calls[0].id': string too long

Open echo-yiyiyi opened this issue 8 months ago • 12 comments

Required prerequisites

What version of camel are you using?

0.2.43

System information

3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] linux 0.2.43

Problem description

Sometimes tool calling with the OpenAI model raises an error.

Reproducible example code

The Python snippets:

    openai_model = ModelFactory.create(
        model_platform=ModelPlatformType.OPENAI,
        model_type=ModelType.GPT_4O_MINI,
    )

Command lines:


Extra dependencies:


Steps to reproduce:

Traceback

2025-04-17 08:38:13,534 - social.agent - ERROR - Agent 9 error: Unable to process messages: none of the provided models run successfully.
2025-04-17 08:38:13,554 - camel.models.model_manager - ERROR - Error processing with model: <camel.models.openai_model.OpenAIModel object at 0x14fa9e9af2b0>
2025-04-17 08:38:13,554 - camel.agents.chat_agent - ERROR - An error occurred while running model gpt-4o-mini, index: 1
Traceback (most recent call last):
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/camel/agents/chat_agent.py", line 839, in _aget_model_response
    response = await self.model_backend.arun(
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/camel/models/model_manager.py", line 265, in arun
    raise exc
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/camel/models/model_manager.py", line 253, in arun
    response = await self.current_model.arun(
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/camel/models/base_model.py", line 307, in arun
    return await self._arun(messages, response_format, tools)
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/camel/models/openai_model.py", line 243, in _arun
    return await self._arequest_chat_completion(messages, tools)
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/camel/models/openai_model.py", line 279, in _arequest_chat_completion
    return await self._async_client.chat.completions.create(
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/openai/resources/chat/completions/completions.py", line 2000, in create
    return await self._post(
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/openai/_base_client.py", line 1767, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/openai/_base_client.py", line 1461, in request
    return await self._request(
  File "/ibex/user/yangz0h/miniconda3/envs/oasis-2025/lib/python3.10/site-packages/openai/_base_client.py", line 1562, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'messages[2].tool_calls[0].id': string too long. Expected a string with maximum length 40, but got a string with length 46 instead.", 'type': 'invalid_request_error', 'param': 'messages[2].tool_calls[0].id', 'code': 'string_above_max_length'}}

Expected behavior

No response

Additional context

No response

echo-yiyiyi avatar Apr 17 '25 06:04 echo-yiyiyi

hi @yiyiyi0817, is this because the prompt is too long or the generated response exceeds the context length?

JINO-ROHIT avatar Apr 17 '25 08:04 JINO-ROHIT

> hi @yiyiyi0817, is this because the prompt is too long or the generated response exceeds the context length?

But it shows that the tool call id is too long, which I am quite confused about. Has anyone else encountered this error?

echo-yiyiyi avatar Apr 17 '25 08:04 echo-yiyiyi

can you share the entire code snippet?

JINO-ROHIT avatar Apr 17 '25 09:04 JINO-ROHIT

perhaps there's a very long tool name or a comma not being escaped properly; the entire code snippet will help debug this

JINO-ROHIT avatar Apr 17 '25 10:04 JINO-ROHIT

> perhaps there's a very long tool name or a comma not being escaped properly; the entire code snippet will help debug this

here: https://github.com/camel-ai/oasis/blob/refactor/scripts/environment/twitter_simulation.py

echo-yiyiyi avatar Apr 17 '25 11:04 echo-yiyiyi

hey @yiyiyi0817, which model were you using? From the code snippet it's gpt-4o-mini, but from the link it's vLLM

Wendong-Fan avatar Apr 18 '25 14:04 Wendong-Fan

> hey @yiyiyi0817, which model were you using? From the code snippet it's gpt-4o-mini, but from the link it's vLLM

As the link shows, I use both the OpenAI model and the vLLM model

echo-yiyiyi avatar Apr 18 '25 14:04 echo-yiyiyi

Hi @yiyiyi0817, the vLLM model cannot produce this error, since the error comes from the OpenAI API, right? So the bug comes from the OpenAI model?

I updated the script with

    openai_model_1 = ModelFactory.create(
        model_platform=ModelPlatformType.OPENAI,
        model_type="gpt-4o",
    )
    openai_model_2 = ModelFactory.create(
        model_platform=ModelPlatformType.OPENAI,
        model_type="gpt-4o",
    )
    models = [openai_model_1, openai_model_2]

But it did not give me the same error... Can you provide more details to reproduce it? Thanks!

MuggleJinx avatar Apr 19 '25 10:04 MuggleJinx

The tool call id is usually assigned by the OpenAI API itself, and it cannot be longer than 40 characters. Is it possible it was first generated by another model, and then you somehow switched to the OpenAI model? Just a guess..

MuggleJinx avatar Apr 19 '25 10:04 MuggleJinx
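This guess is easy to check against a stored message list. A minimal sketch of a diagnostic helper (hypothetical, not existing camel code) that scans OpenAI-format messages for tool-call ids exceeding the 40-character limit named in the 400 error:

```python
OPENAI_TOOL_CALL_ID_MAX = 40  # limit enforced by the OpenAI API per the 400 error


def find_overlong_tool_call_ids(messages, max_len=OPENAI_TOOL_CALL_ID_MAX):
    """Return (message_index, call_index, id) for every tool_call id over max_len."""
    offenders = []
    for i, msg in enumerate(messages):
        for j, call in enumerate(msg.get("tool_calls") or []):
            if len(call.get("id", "")) > max_len:
                offenders.append((i, j, call["id"]))
    return offenders
```

Running this over the chat history right before the failing request would confirm which message carries the over-long id and where it came from.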

> The tool call id is usually assigned by the OpenAI API itself, and it cannot be longer than 40 characters. Is it possible it was first generated by another model, and then you somehow switched to the OpenAI model? Just a guess..

I think it might be like this, but I'm concerned that directly truncating the tool call ID could cause other issues. I'll take another careful look. I can also try using only the vLLM models.

echo-yiyiyi avatar Apr 19 '25 13:04 echo-yiyiyi

@MuggleJinx @Wendong-Fan This issue doesn't occur when I use multiple vLLM models; it only happens when the model list includes both OpenAI and vLLM models. For OASIS, I think it's fine to just let users input multiple vLLM models with different URLs. But if CAMEL wants to support cases where users mix different types of models, it might need to handle this more carefully.

echo-yiyiyi avatar Apr 21 '25 07:04 echo-yiyiyi
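The "length 46" in the traceback is consistent with this finding. As an illustration (the id formats below are assumptions based on the two servers' conventions, not taken from this thread): OpenAI-style ids look like `call_` plus a short token, well under 40 characters, while vLLM's OpenAI-compatible server emits ids like `chatcmpl-tool-` plus a 32-character hex uuid, which is exactly 46 characters:

```python
import uuid

# Assumed id formats: OpenAI uses "call_" + a short token; vLLM's
# OpenAI-compatible server uses "chatcmpl-tool-" + a 32-char hex uuid.
openai_style_id = "call_" + uuid.uuid4().hex[:24]
vllm_style_id = "chatcmpl-tool-" + uuid.uuid4().hex

# The vLLM-style id exceeds OpenAI's 40-character limit for
# messages[*].tool_calls[*].id, matching "length 46" in the error.
print(len(openai_style_id), len(vllm_style_id))  # 29 46
```

So a tool call first produced by the vLLM backend and later replayed to the OpenAI backend would be rejected, exactly as the 400 error shows.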

thanks @MuggleJinx and @yiyiyi0817! I think this case should be handled: if a user has stored chat history and wants to apply it to a model platform different from the one that generated it, this error could happen. We need to verify whether there are any side effects if we manually truncate the id when it exceeds OpenAI's max length limit

Wendong-Fan avatar Apr 21 '25 11:04 Wendong-Fan
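One side effect to verify: a `tool` message references its originating call via `tool_call_id`, so truncating an id in an assistant message without rewriting the matching reference would break the assistant/tool pairing. A minimal sketch of consistent truncation (a hypothetical helper, not existing camel code; note that naive truncation could still collide if two ids share a 40-character prefix):

```python
OPENAI_TOOL_CALL_ID_MAX = 40


def truncate_tool_call_ids(messages, max_len=OPENAI_TOOL_CALL_ID_MAX):
    """Truncate over-long tool_call ids and rewrite the matching
    tool_call_id references so request/response pairing stays intact."""
    id_map = {}  # original id -> truncated id
    for msg in messages:
        for call in msg.get("tool_calls") or []:
            old = call.get("id", "")
            if len(old) > max_len:
                id_map[old] = old[:max_len]
                call["id"] = id_map[old]
        ref = msg.get("tool_call_id")
        if ref in id_map:
            msg["tool_call_id"] = id_map[ref]
    return messages
```

Applying this to the history before dispatching to the OpenAI backend would keep both sides of each tool exchange consistent with the truncated id.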