OpenAI BadRequestError for tool calls with agentchat v0.4.7 | Working fine with agentchat v0.4.3
What happened?
Bug
I am evaluating tool calling with v0.4.7. When configuring a tool call, I get openai.BadRequestError: Error code: 400. The same setup worked fine with v0.4.3.
Reproduce
Steps to reproduce the behavior.
Run the script below.
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
import asyncio


# Define a tool
async def get_weather(city: str) -> str:
    return f"The weather in {city} is 73 degrees and Sunny."


async def main() -> None:
    # Define an agent
    weather_agent = AssistantAgent(
        name="weather_agent",
        model_client=OpenAIChatCompletionClient(
            model=<llm_model>,
            api_key=<api_key>,
            base_url="https://genai-api.example.com/v1",
            model_capabilities={
                "vision": True,
                "function_calling": True,
                "json_output": True,
            },
        ),
        tools=[get_weather],
        system_message="Make tools call to get_weather(city) and return the result.",
    )
    agent_team = RoundRobinGroupChat([weather_agent], max_turns=1)
    while True:
        user_input = input("Enter a message (type 'exit' to leave): ")
        if user_input.strip().lower() == "exit":
            break
        stream = agent_team.run_stream(task=user_input)
        await Console(stream)


asyncio.run(main())
Output for agentchat v0.4.3
Enter a message (type 'exit' to leave): How is whether in New York
---------- user ----------
How is whether in New York
---------- weather_agent ----------
[FunctionCall(id='chatcmpl-tool-3041237d4c7a40af83ec482dd7709fce', arguments='{"city": "New York"}', name='get_weather')]
---------- weather_agent ----------
[FunctionExecutionResult(content='The weather in New York is 73 degrees and Sunny.', call_id='chatcmpl-tool-3041237d4c7a40af83ec482dd7709fce')]
---------- weather_agent ----------
The weather in New York is 73 degrees and Sunny.
Enter a message (type 'exit' to leave): exit
Output for agentchat v0.4.7
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': "[{'type': 'extra_forbidden', 'loc': ('body', 'tools', 0, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': False}]", 'type': 'BadRequestError', 'param': None, 'code': 400}
Expected behavior
The tool call should succeed without an openai.BadRequestError, as it did in v0.4.3.
Which packages was the bug in?
Python AgentChat (autogen-agentchat>=0.4.0)
AutoGen library version.
Python 0.4.7
Other library version.
No response
Model used
No response
Model provider
None
Other model provider
No response
Python version
None
.NET version
None
Operating system
None
It looks like your server API is not OpenAI compatible.
You can log the LLMCallEvent to see the content of the request:
https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/framework/logging.html#enabling-logging-output
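For example, a minimal sketch following the linked docs (EVENT_LOGGER_NAME is exported by autogen_core; LLMCallEvent entries include the request payload sent to the server):

import logging

from autogen_core import EVENT_LOGGER_NAME

# Forward autogen's structured events (including LLMCallEvent, which carries
# the request sent to the model server) to stderr at INFO level.
logging.basicConfig(level=logging.WARNING)
event_logger = logging.getLogger(EVENT_LOGGER_NAME)
event_logger.addHandler(logging.StreamHandler())
event_logger.setLevel(logging.INFO)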
We added the strict argument for tools to be compatible with OpenAI's endpoint.
Can you try changing convert_tools in
https://github.com/microsoft/autogen/blob/main/python/packages/autogen-ext/src/autogen_ext/models/openai/_openai_client.py
to set strict only when it is in the schema, and see if that helps?
@ekzhu I'm also hitting the same problem with version 0.4.7, even with strict mode on. I hope this can be addressed soon. Thanks!
# pip list
...
autogen-agentchat 0.4.7
autogen-core 0.4.7
autogen-ext 0.4.7
...
openai.BadRequestError: Error code: 400 - {'error': {'code': 'invalid_parameter_error', 'param': None, 'message': '<400> InternalError.Algo.InvalidParameter: Empty tool_calls is not supported in message.', 'type': 'invalid_request_error'}, 'id': 'chatcmpl-9f5d78eb-4633-9b3f-b93f-e4da14ddeb61', 'request_id': '9f5d78eb-4633-9b3f-b93f-e4da14ddeb61'}
@ekzhu The schema will always come with "strict". Since more people are facing the same issue, is there a way to make it optional here?
Yes. Please update that to make it optional as well.
@avinashmihani @Dandelionym what endpoint are you using?
@ekzhu I'm using the Qwen (Tongyi) platform, which hosts a range of models for agents and LLMs.
Their official API usage is:
import os

from openai import OpenAI

client = OpenAI(
    api_key=...,
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
completion = client.chat.completions.create(
    model="qwen-plus",
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'Hi!'},
    ],
)
print(completion.model_dump_json())
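As a side note, a direct tool-call request against the same compatible-mode endpoint can help isolate whether a failure comes from the endpoint or from AutoGen's request construction. A sketch, reusing the client above and a hypothetical get_weather tool definition:

completion = client.chat.completions.create(
    model="qwen-plus",
    messages=[{'role': 'user', 'content': 'How is the weather in New York?'}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
)
# If this succeeds, the endpoint supports tool calls and the problem is in
# the request AutoGen builds (e.g. the extra "strict" key).
print(completion.choices[0].message.tool_calls)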
Here is what I have changed.
- The first part, making strict optional in the tool schema (per the suggestion above):
def schema(self) -> ToolSchema:
    model_schema: Dict[str, Any] = self._args_type.model_json_schema()

    if "$defs" in model_schema:
        model_schema = cast(Dict[str, Any], jsonref.replace_refs(obj=model_schema, proxies=False))  # type: ignore
        del model_schema["$defs"]

    parameters = ParametersSchema(
        type="object",
        properties=model_schema["properties"],
        required=model_schema.get("required", []),
        additionalProperties=model_schema.get("additionalProperties", False),
    )

    # If strict is enabled, the tool schema should list all properties as required.
    assert "required" in parameters
    if self._strict and set(parameters["required"]) != set(parameters["properties"].keys()):
        raise ValueError(
            "Strict mode is enabled, but not all input arguments are marked as required. Default arguments are not allowed in strict mode."
        )

    assert "additionalProperties" in parameters
    if self._strict and parameters["additionalProperties"]:
        raise ValueError(
            "Strict mode is enabled but additional argument is also enabled. This is not allowed in strict mode."
        )

    # Create the base tool schema without the strict parameter.
    tool_schema = ToolSchema(
        name=self._name,
        description=self._description,
        parameters=parameters,
    )

    # Only add strict when it is enabled (True); this keeps the key
    # optional in the schema.
    if self._strict:
        tool_schema["strict"] = self._strict

    return tool_schema
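A quick check for this change (a sketch; it assumes the patched schema() above is installed, and reuses the get_weather tool from the repro script):

from autogen_core.tools import FunctionTool

async def get_weather(city: str) -> str:
    return f"The weather in {city} is 73 degrees and Sunny."

# FunctionTool defaults to strict=False, so with the patch the schema
# should no longer carry a "strict" key at all.
tool = FunctionTool(get_weather, description="Get the weather for a city.")
assert "strict" not in tool.schema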
- The second part, changing convert_tools in https://github.com/microsoft/autogen/blob/main/python/packages/autogen-ext/src/autogen_ext/models/openai/_openai_client.py to set strict only when it is present in the schema:
def convert_tools(
    tools: Sequence[Tool | ToolSchema],
) -> List[ChatCompletionToolParam]:
    result: List[ChatCompletionToolParam] = []
    for tool in tools:
        if isinstance(tool, Tool):
            tool_schema = tool.schema
        else:
            assert isinstance(tool, dict)
            tool_schema = tool
        function_def = {
            "name": tool_schema["name"],
            "description": tool_schema["description"] if "description" in tool_schema else "",
            "parameters": cast(FunctionParameters, tool_schema["parameters"]) if "parameters" in tool_schema else {},
        }
        # Only add the 'strict' parameter if it is explicitly in the schema.
        if "strict" in tool_schema:
            function_def["strict"] = tool_schema["strict"]
        result.append(
            ChatCompletionToolParam(
                type="function",
                function=FunctionDefinition(**function_def),
            )
        )
    # Check that all tools have valid names.
    for tool_param in result:
        assert_valid_name(tool_param["function"]["name"])
    return result
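And a matching check for the second change (a sketch; ToolSchema is importable from autogen_core.tools, and convert_tools is the patched function above):

from autogen_core.tools import ToolSchema

schema: ToolSchema = {
    "name": "get_weather",
    "description": "Get the weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
# With the patch, a schema that omits "strict" yields a function definition
# that omits it too, so the key never reaches a server that rejects it.
converted = convert_tools([schema])
assert "strict" not in converted[0]["function"]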
But I still get an error:
Traceback (most recent call last):
File "/Users/mellen/anaconda3/envs/autogen04/lib/python3.10/site-packages/autogen_core/_single_threaded_agent_runtime.py", line 505, in _on_message
return await agent.on_message(
File "/Users/mellen/anaconda3/envs/autogen04/lib/python3.10/site-packages/autogen_core/_base_agent.py", line 113, in on_message
return await self.on_message_impl(message, ctx)
File "/Users/mellen/anaconda3/envs/autogen04/lib/python3.10/site-packages/autogen_agentchat/teams/_group_chat/_sequential_routed_agent.py", line 48, in on_message_impl
return await super().on_message_impl(message, ctx)
File "/Users/mellen/anaconda3/envs/autogen04/lib/python3.10/site-packages/autogen_core/_routed_agent.py", line 485, in on_message_impl
return await h(self, message, ctx)
File "/Users/mellen/anaconda3/envs/autogen04/lib/python3.10/site-packages/autogen_core/_routed_agent.py", line 268, in wrapper
return_value = await func(self, message, ctx) # type: ignore
File "/Users/mellen/anaconda3/envs/autogen04/lib/python3.10/site-packages/autogen_agentchat/teams/_group_chat/_chat_agent_container.py", line 53, in handle_request
async for msg in self._agent.on_messages_stream(self._message_buffer, ctx.cancellation_token):
File "/Users/mellen/anaconda3/envs/autogen04/lib/python3.10/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 405, in on_messages_stream
async for chunk in self._model_client.create_stream(
File "/Users/mellen/anaconda3/envs/autogen04/lib/python3.10/site-packages/autogen_ext/models/openai/_openai_client.py", line 731, in create_stream
stream = await stream_future
File "/Users/mellen/anaconda3/envs/autogen04/lib/python3.10/site-packages/openai/resources/chat/completions/completions.py", line 1927, in create
return await self._post(
File "/Users/mellen/anaconda3/envs/autogen04/lib/python3.10/site-packages/openai/_base_client.py", line 1862, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
File "/Users/mellen/anaconda3/envs/autogen04/lib/python3.10/site-packages/openai/_base_client.py", line 1556, in request
return await self._request(
File "/Users/mellen/anaconda3/envs/autogen04/lib/python3.10/site-packages/openai/_base_client.py", line 1657, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'code': 'invalid_parameter_error', 'param': None, 'message': '<400> InternalError.Algo.InvalidParameter: Empty tool_calls is not supported in message.', 'type': 'invalid_request_error'}, 'id': 'chatcmpl-b4b11ad4-3ed1-9839-bf28-99d76dedb44c', 'request_id': 'b4b11ad4-3ed1-9839-bf28-99d76dedb44c'}
Since this matters for supporting other platforms' models, I hope there will be an official fix... Again, thank you all very much!
This error is different from the previous one. It looks like tool_calls is empty in one of the requests to the server -- that should be supported by OpenAI itself. Can you debug further and see which LLM call caused this?
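For what it's worth, a hypothetical reconstruction (from the error text alone, not a captured payload) of the message shape the server appears to reject: an assistant message whose tool_calls list is present but empty.

# Hypothetical shape inferred from the error message, not a logged request:
rejected_message = {
    "role": "assistant",
    "content": "...",
    "tool_calls": [],  # omitting the key is fine; an empty list is what this server rejects
}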
I'm using Qwen2.5, and I'm having the same problem.
@dream-wujianguo, could you post your code?
Are you using local Qwen2.5 or cloud hosted one? Ollama or vLLM?
Having complete information about your issue makes it easier for us to help.
I am using vLLM to serve the model. The problem is solved: the cause is that vLLM doesn't support strict mode, so you need to rewrite the convert_tools function and reinstall the patched package.
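If rebuilding the package is inconvenient, a runtime monkeypatch may also work. A sketch, untested; it assumes the client resolves convert_tools through its module globals at call time, which is the standard behavior for module-level functions in Python:

import autogen_ext.models.openai._openai_client as oai_client

_original_convert_tools = oai_client.convert_tools

def _convert_tools_no_strict(tools):
    # Convert as usual, then drop the 'strict' key that vLLM rejects.
    result = _original_convert_tools(tools)
    for tool_param in result:
        tool_param["function"].pop("strict", None)
    return result

oai_client.convert_tools = _convert_tools_no_strict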
Okay thanks. Tracking https://github.com/vllm-project/vllm/issues/15526