
`get_make` is not strict. Only `strict` function tools can be auto-parsed

Open gilada-shubham opened this issue 1 year ago • 4 comments

What happened?

I have a function `get_make`:

import json

def get_make() -> str:
    return json.dumps(
        [
            {"make": "FORD", "make_code": "FRD"},
            {"make": "AUDI", "make_code": "AUD"},
        ],
        indent=4,
    )

When I add it to a client with JSON output enabled:

response = await self._model_client.create(
    llm_messages,
    cancellation_token=ctx.cancellation_token,
    json_output=True,
    tools=self._tool_schema,
    extra_create_args={"response_format": EntityResponse},
)

it throws the error:

`get_make` is not strict. Only `strict` function tools can be auto-parsed

What did you expect to happen?

The function should be invoked if needed.

How can we reproduce it (as minimally and precisely as possible)?

Use a client with strict `json_output` (a structured response format) together with a function tool.

AutoGen version

0.4.0.dev8

Which package was this bug in

Core

Model used

No response

Python version

No response

Operating system

No response

Any additional info you think would be helpful for fixing this bug

No response

gilada-shubham avatar Dec 02 '24 08:12 gilada-shubham

Could you please post a complete code snippet? Especially showing how `self._tool_schema` is generated.

ekzhu avatar Dec 02 '24 19:12 ekzhu

I am creating a tools array:

def get_tools(self):
    tools = []
    tools.append(FunctionTool(get_make, "function to get list of vehicle make."))
    return tools

I then pass that array to the agent I created, and inside it I do:

self._tool_schema = [tool.schema for tool in tools]

gilada-shubham avatar Dec 03 '24 16:12 gilada-shubham

Note: I found an issue in the OpenAI Python repo that might help in understanding this:

https://github.com/openai/openai-python/issues/1733

gilada-shubham avatar Dec 03 '24 16:12 gilada-shubham
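For context on the linked issue: the OpenAI Python client refuses to auto-parse tool calls unless every function tool schema is explicitly marked `strict`. A minimal sketch of that validation, simplified from the check in `openai/lib/_parsing/_completions.py` (the library's exact logic may differ):

```python
def validate_input_tools(tools: list[dict]) -> None:
    """Reject any function tool whose schema is not marked strict.

    Simplified mimic of the check the OpenAI client runs before
    auto-parsing structured output; not the actual library code.
    """
    for tool in tools:
        if tool.get("type") != "function":
            continue
        fn = tool["function"]
        if fn.get("strict") is not True:
            raise ValueError(
                f"`{fn['name']}` is not strict. "
                "Only `strict` function tools can be auto-parsed"
            )

# A schema like the one FunctionTool produces here carries no
# `strict` flag, so the check fails with the error from this issue:
schema = {
    "type": "function",
    "function": {
        "name": "get_make",
        "parameters": {"type": "object", "properties": {}},
    },
}
try:
    validate_input_tools([schema])
except ValueError as e:
    print(e)  # `get_make` is not strict. Only `strict` function tools can be auto-parsed
```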

Thanks. It does look like something to do with the OpenAI client we are using.

ekzhu avatar Dec 03 '24 18:12 ekzhu

I am facing the same issue. Is there a fix for this?

priyathamkat avatar Feb 11 '25 19:02 priyathamkat

I am facing the same issue. Is there a fix for this?

What is your package version? And your code?

ekzhu avatar Feb 12 '25 02:02 ekzhu

autogen-core version is 0.4.6, openai client is 1.61.1.

Here is my code:

def word_len(word: str) -> int:
    """Return the length of a word.

    Args:
        word (str): The word to return the length of.

    Returns:
        int: The length of the word.
    """
    return len(word)

candidates_generator_model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    response_format=CandidatesGeneratorFormat,
)
candidates_generator_system_prompt = (
    """Generate a list of candidate answers for the crossword clue of given length. Use the `word_len` tool to """
    """determine the length of a word."""
)
candidates_generator = AssistantAgent(
    name="candidates_generator",
    model_client=candidates_generator_model_client,
    tools=[word_len],
    system_message=candidates_generator_system_prompt,
    reflect_on_tool_use=True,
)

priyathamkat avatar Feb 12 '25 03:02 priyathamkat

I see. This is a bug. We need to allow an option to pass `strict=True` in the function schema when the response format is a JSON schema.

ekzhu avatar Feb 12 '25 06:02 ekzhu
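The shape of such a fix: rewrite the tool schema into OpenAI's strict form before the request, which means setting `strict: True`, disallowing additional properties, and marking every parameter as required. A hedged sketch under those assumptions (`make_strict` is a hypothetical helper for illustration, not AutoGen API):

```python
import copy

def make_strict(tool_schema: dict) -> dict:
    """Return a copy of a function tool schema marked strict, per
    OpenAI's structured-output rules: no additional properties and
    every declared property required. Hypothetical helper; the
    original schema is left untouched."""
    schema = copy.deepcopy(tool_schema)
    schema["strict"] = True
    params = schema.get("parameters")
    if params and params.get("type") == "object":
        params["additionalProperties"] = False
        params["required"] = list(params.get("properties", {}))
    return schema

# The `word_len` tool from the repro above, as a plain schema dict:
word_len_schema = {
    "name": "word_len",
    "description": "Return the length of a word.",
    "parameters": {
        "type": "object",
        "properties": {"word": {"type": "string"}},
    },
}
strict_schema = make_strict(word_len_schema)
print(strict_schema["strict"])                   # True
print(strict_schema["parameters"]["required"])   # ['word']
```

Note that strict mode changes model behavior slightly (all parameters become required), which is why applying it automatically needs to be opt-in or tied to the response-format setting.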

Full repro here:

import asyncio
from pydantic import BaseModel
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.ui import Console

def word_len(word: str) -> int:
    """Return the length of a word.

    Args:
        word (str): The word to return the length of.

    Returns:
        int: The length of the word.
    """
    return len(word)

class CandidatesGeneratorFormat(BaseModel):
    candidates: list[str]


candidates_generator_model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    response_format=CandidatesGeneratorFormat,
)
candidates_generator_system_prompt = (
    """Generate a list of candidate answers for the crossword clue of given length. Use the `word_len` tool to """
    """determine the length of a word."""
)
candidates_generator = AssistantAgent(
    name="candidates_generator",
    model_client=candidates_generator_model_client,
    tools=[word_len],
    system_message=candidates_generator_system_prompt,
    reflect_on_tool_use=True,
)

async def main() -> None:
    result = await Console(candidates_generator.run_stream(task="Crossword clue: 5 letters"))

asyncio.run(main())

Output:

---------- user ----------
Crossword clue: 5 letters
Traceback (most recent call last):
  File "/Users/ekzhu/autogen/python/test.py", line 41, in <module>
    asyncio.run(main())
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/ekzhu/autogen/python/test.py", line 39, in main
    result = await Console(candidates_generator.run_stream(task="Crossword clue: 5 letters"))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ekzhu/autogen/python/packages/autogen-agentchat/src/autogen_agentchat/ui/_console.py", line 117, in Console
    async for message in stream:
  File "/Users/ekzhu/autogen/python/packages/autogen-agentchat/src/autogen_agentchat/agents/_base_chat_agent.py", line 176, in run_stream
    async for message in self.on_messages_stream(input_messages, cancellation_token):
  File "/Users/ekzhu/autogen/python/packages/autogen-agentchat/src/autogen_agentchat/agents/_assistant_agent.py", line 415, in on_messages_stream
    model_result = await self._model_client.create(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ekzhu/autogen/python/packages/autogen-ext/src/autogen_ext/models/openai/_openai_client.py", line 529, in create
    result: Union[ParsedChatCompletion[BaseModel], ChatCompletion] = await future
                                                                     ^^^^^^^^^^^^
  File "/Users/ekzhu/autogen/python/.venv/lib/python3.12/site-packages/openai/resources/beta/chat/completions.py", line 423, in parse
    _validate_input_tools(tools)
  File "/Users/ekzhu/autogen/python/.venv/lib/python3.12/site-packages/openai/lib/_parsing/_completions.py", line 53, in validate_input_tools
    raise ValueError(
ValueError: `word_len` is not strict. Only `strict` function tools can be auto-parsed

ekzhu avatar Feb 12 '25 06:02 ekzhu

https://github.com/microsoft/autogen/pull/5507

ekzhu avatar Feb 12 '25 06:02 ekzhu

Thanks for the fix!

priyathamkat avatar Feb 12 '25 21:02 priyathamkat

I am facing the same issue in version 0.4.6. Thanks for the fix.

chengyu-liu-cs avatar Feb 13 '25 13:02 chengyu-liu-cs

Should we apply the same fix to the MCP Workbench use-case in workbench.py, specifically in the ToolSchema definition and the list_tools function? (e.g.: https://github.com/microsoft/autogen/blob/11b7743b7d7ba0e703083054bc8fcac1749005a0/python/packages/autogen-ext/src/autogen_ext/tools/mcp/_workbench.py#L186)

wizche avatar Jun 27 '25 10:06 wizche

Should we apply the same fix to the MCP Workbench use-case in workbench.py, specifically in the ToolSchema definition and the list_tools function? (e.g.: https://github.com/microsoft/autogen/blob/11b7743b7d7ba0e703083054bc8fcac1749005a0/python/packages/autogen-ext/src/autogen_ext/tools/mcp/_workbench.py#L186)

Does MCP already support strict parameter in tool schema?

ekzhu avatar Jun 27 '25 11:06 ekzhu

Does MCP already support strict parameter in tool schema?

I can’t say for sure, but when I was playing with the ida-pro-mcp (as workbench) with structured output enabled, I encountered the same error:

ValueError: `check_connection` is not strict. Only `strict` function tools can be auto-parsed

wizche avatar Jun 27 '25 11:06 wizche

I see. OpenAI's structured output mode requires the function schema to be strict, so the MCP server would have to return its schemas as strict.

It's a bit annoying because the MCP server is often not under the control of your application, and overriding the schema with "strict" may cause unintended effects. One solution is to override the tool schema in the model client with strict when structured output is used. What do you think? @SongChiYoung ?

ekzhu avatar Jun 27 '25 11:06 ekzhu
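That suggestion can be sketched as follows: leave the MCP server's schemas untouched and have the model client hand OpenAI strict copies only when structured output is requested. `tools_for_request` is a hypothetical helper, not existing AutoGen or MCP API, and a real implementation would also need to recurse into nested object schemas:

```python
import copy

def tools_for_request(tool_schemas: list[dict], structured_output: bool) -> list[dict]:
    """Pass tool schemas through unchanged for normal calls; when
    structured output is on, return strict copies so the OpenAI
    client's auto-parse check passes. Never mutates the originals."""
    if not structured_output:
        return tool_schemas
    out = []
    for schema in tool_schemas:
        s = copy.deepcopy(schema)
        s["strict"] = True
        params = s.get("parameters", {})
        if params.get("type") == "object":
            params["additionalProperties"] = False
            params["required"] = list(params.get("properties", {}))
        out.append(s)
    return out

# Schemas as an MCP server might return them (no `strict` flag):
mcp_tools = [{"name": "check_connection",
              "parameters": {"type": "object", "properties": {}}}]
print(tools_for_request(mcp_tools, structured_output=False)[0].get("strict"))  # None
print(tools_for_request(mcp_tools, structured_output=True)[0]["strict"])       # True
```

Scoping the override to the request keeps the server-provided schema authoritative everywhere else, which avoids the unintended side effects mentioned above.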

@ekzhu I think it's a nice solution for this case. Yes, we have some issues with actually implementing it, but those are just technical issues.

SongChiYoung avatar Jun 27 '25 14:06 SongChiYoung

Thanks. @wizche, happy to review a PR. You can mention @SongChiYoung for a review.

ekzhu avatar Jun 27 '25 20:06 ekzhu