`get_make` is not strict. Only `strict` function tools can be auto-parsed
What happened?
I have a function `get_make`:

```python
import json


def get_make() -> str:
    return json.dumps(
        [
            {"make": "FORD", "make_code": "FRD"},
            {"make": "AUDI", "make_code": "AUD"},
        ],
        indent=4,
    )
```
When I added it to the JSON-output client:

```python
response = await self._model_client.create(
    llm_messages,
    cancellation_token=ctx.cancellation_token,
    json_output=True,
    tools=self._tool_schema,
    extra_create_args={"response_format": EntityResponse},
)
```

it throws the error:

```
`get_make` is not strict. Only `strict` function tools can be auto-parsed
```
What did you expect to happen?
The function to be invoked if needed.
How can we reproduce it (as minimally and precisely as possible)?
Use a client with `json_output=True` (structured output) together with a function tool call.
AutoGen version
0.4.0.dev8
Which package was this bug in
Core
Model used
No response
Python version
No response
Operating system
No response
Any additional info you think would be helpful for fixing this bug
No response
Could you please post a complete code snippet, especially showing how `self._tool_schema` is generated?
I am creating a tools array:

```python
def get_tools(self):
    tools = []
    tools.append(FunctionTool(get_make, "function to get list of vehicle make."))
    return tools
```

and then pass that array to the agent I created, where I do:

```python
self._tool_schema = [tool.schema for tool in tools]
```
Note: I found an issue in the OpenAI Python repo that might help in understanding the problem:
https://github.com/openai/openai-python/issues/1733
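As a temporary workaround, the generated tool schema could be post-processed to meet OpenAI's strict-mode requirements before it is handed to the client: `"strict": True` on the function, `"additionalProperties": False` on every object, and every declared property listed in `required`. This is only a sketch under assumptions — the `make_strict` helper is not part of AutoGen, and the exact shape of the schema dict produced by `FunctionTool` is assumed here:

```python
import copy


def make_strict(tool_schema: dict) -> dict:
    """Return a deep copy of a function-tool schema adjusted for OpenAI
    strict mode: mark the function strict, forbid extra properties on
    every object, and require every declared property."""
    schema = copy.deepcopy(tool_schema)
    schema["strict"] = True

    def walk(node: dict) -> None:
        # Recurse through nested objects and array items so the whole
        # parameter tree satisfies the strict-mode constraints.
        if node.get("type") == "object":
            props = node.get("properties", {})
            node["additionalProperties"] = False
            node["required"] = list(props)
            for sub in props.values():
                walk(sub)
        elif node.get("type") == "array" and isinstance(node.get("items"), dict):
            walk(node["items"])

    if isinstance(schema.get("parameters"), dict):
        walk(schema["parameters"])
    return schema


# Hypothetical schema like the one FunctionTool(get_make, ...) produces:
get_make_schema = {
    "name": "get_make",
    "description": "function to get list of vehicle make.",
    "parameters": {"type": "object", "properties": {}},
}

strict_schema = make_strict(get_make_schema)
```

The original schema is left untouched, so the patched copies can be built per request.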
Thanks. It does look like it has something to do with the OpenAI client we are using.
I am facing the same issue. Is there a fix for this?
> I am facing the same issue. Is there a fix for this?
What is your package version? And your code?
The autogen-core version is 0.4.6 and the OpenAI client is 1.61.1.

Here is my code:
```python
def word_len(word: str) -> int:
    """Return the length of a word.

    Args:
        word (str): The word to return the length of.

    Returns:
        int: The length of the word.
    """
    return len(word)


candidates_generator_model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    response_format=CandidatesGeneratorFormat,
)

candidates_generator_system_prompt = (
    """Generate a list of candidate answers for the crossword clue of given length. Use the `word_len` tool to """
    """determine the length of a word."""
)

candidates_generator = AssistantAgent(
    name="candidates_generator",
    model_client=candidates_generator_model_client,
    tools=[word_len],
    system_message=candidates_generator_system_prompt,
    reflect_on_tool_use=True,
)
```
I see, this is a bug. We need to allow an option to pass `strict=True` to the function schema when the response format is a JSON schema.
Full repro here:

```python
import asyncio

from pydantic import BaseModel

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


def word_len(word: str) -> int:
    """Return the length of a word.

    Args:
        word (str): The word to return the length of.

    Returns:
        int: The length of the word.
    """
    return len(word)


class CandidatesGeneratorFormat(BaseModel):
    candidates: list[str]


candidates_generator_model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    response_format=CandidatesGeneratorFormat,
)

candidates_generator_system_prompt = (
    """Generate a list of candidate answers for the crossword clue of given length. Use the `word_len` tool to """
    """determine the length of a word."""
)

candidates_generator = AssistantAgent(
    name="candidates_generator",
    model_client=candidates_generator_model_client,
    tools=[word_len],
    system_message=candidates_generator_system_prompt,
    reflect_on_tool_use=True,
)


async def main() -> None:
    result = await Console(candidates_generator.run_stream(task="Crossword clue: 5 letters"))


asyncio.run(main())
```
Output:

```
---------- user ----------
Crossword clue: 5 letters
Traceback (most recent call last):
  File "/Users/ekzhu/autogen/python/test.py", line 41, in <module>
    asyncio.run(main())
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/ekzhu/autogen/python/test.py", line 39, in main
    result = await Console(candidates_generator.run_stream(task="Crossword clue: 5 letters"))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ekzhu/autogen/python/packages/autogen-agentchat/src/autogen_agentchat/ui/_console.py", line 117, in Console
    async for message in stream:
  File "/Users/ekzhu/autogen/python/packages/autogen-agentchat/src/autogen_agentchat/agents/_base_chat_agent.py", line 176, in run_stream
    async for message in self.on_messages_stream(input_messages, cancellation_token):
  File "/Users/ekzhu/autogen/python/packages/autogen-agentchat/src/autogen_agentchat/agents/_assistant_agent.py", line 415, in on_messages_stream
    model_result = await self._model_client.create(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ekzhu/autogen/python/packages/autogen-ext/src/autogen_ext/models/openai/_openai_client.py", line 529, in create
    result: Union[ParsedChatCompletion[BaseModel], ChatCompletion] = await future
                                                                     ^^^^^^^^^^^^
  File "/Users/ekzhu/autogen/python/.venv/lib/python3.12/site-packages/openai/resources/beta/chat/completions.py", line 423, in parse
    _validate_input_tools(tools)
  File "/Users/ekzhu/autogen/python/.venv/lib/python3.12/site-packages/openai/lib/_parsing/_completions.py", line 53, in validate_input_tools
    raise ValueError(
ValueError: `word_len` is not strict. Only `strict` function tools can be auto-parsed
```
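For context, the check that raises this error (in `openai/lib/_parsing/_completions.py`) behaves roughly like the sketch below — it refuses to auto-parse any function tool whose schema does not carry `strict: true`. This is a paraphrase of the behavior, not the actual library code:

```python
def validate_input_tools(tools: list[dict]) -> None:
    """Sketch of the strictness check chat.completions.parse() applies:
    every function tool must opt in to strict mode, otherwise the model's
    tool-call arguments cannot be auto-parsed against the schema."""
    for tool in tools:
        if tool.get("type") != "function":
            continue
        fn = tool.get("function", {})
        if fn.get("strict") is not True:
            raise ValueError(
                f"`{fn.get('name')}` is not strict. "
                "Only `strict` function tools can be auto-parsed"
            )


strict_tool = {"type": "function", "function": {"name": "word_len", "strict": True}}
lax_tool = {"type": "function", "function": {"name": "word_len"}}

validate_input_tools([strict_tool])  # passes silently; lax_tool would raise
```

Since AutoGen's `FunctionTool` schema does not set `strict`, every tool trips this check as soon as the structured-output (`parse`) code path is used.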
https://github.com/microsoft/autogen/pull/5507
Thanks for the fix!
I am facing the same issue in version 0.4.6. Thanks for the fix.
Should we apply the same fix to the MCP Workbench use-case in workbench.py, specifically in the `ToolSchema` definition and the `list_tools` function? (e.g.: https://github.com/microsoft/autogen/blob/11b7743b7d7ba0e703083054bc8fcac1749005a0/python/packages/autogen-ext/src/autogen_ext/tools/mcp/_workbench.py#L186)
> Should we apply the same fix to the MCP Workbench use-case in workbench.py, specifically in the `ToolSchema` definition and the `list_tools` function?
Does MCP already support strict parameter in tool schema?
> Does MCP already support strict parameter in tool schema?
I can’t say for sure, but when I was playing with ida-pro-mcp (as a workbench) with structured output enabled, I encountered the same error:

```
ValueError: `check_connection` is not strict. Only `strict` function tools can be auto-parsed
```
I see. OpenAI's structured output mode requires the function schema to be strict, so the MCP server would have to return its schemas as strict.

It's a bit annoying, because the MCP server is often not under the control of your application, and overriding the schema with `strict` may cause unintended effects. One solution is to override the tool schema in the model client with `strict` when structured output is used. What do you think? @SongChiYoung
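Such an override could look roughly like the sketch below. Everything here is hypothetical — `tools_for_request` is not part of the workbench API, and a flat `{"name", "parameters"}` schema shape is assumed: the MCP-provided schemas pass through untouched unless structured output is requested, and only then are they rewritten as strict.

```python
def tools_for_request(tool_schemas: list[dict], json_output: bool) -> list[dict]:
    """Only rewrite tool schemas as strict when structured output is in
    play; otherwise leave the MCP server's schemas untouched so the
    override cannot affect the normal (non-parsed) code path."""
    if not json_output:
        return tool_schemas
    patched = []
    for schema in tool_schemas:
        s = dict(schema)  # shallow copy; the original schema stays intact
        s["strict"] = True
        params = dict(s.get("parameters") or {"type": "object", "properties": {}})
        params["additionalProperties"] = False
        params["required"] = list(params.get("properties", {}))
        s["parameters"] = params
        patched.append(s)
    return patched


# Example: a schema as an MCP server might return it (shape assumed).
mcp_tools = [{"name": "check_connection", "parameters": {"type": "object", "properties": {}}}]
```

Scoping the rewrite to the structured-output path keeps the unintended-effects risk confined to requests that would otherwise fail anyway.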
@ekzhu I think it’s a nice solution for this case. Yes, we have some issues with the actual implementation of it, but that’s just a technical matter.
Thanks. @wizche, happy to review a PR. You can mention @SongChiYoung for a review.