
Multiple tool call failure with Gemini

Open apremalal opened this issue 5 months ago • 4 comments

What happened?

Describe the bug

Using Gemini models with multiple tool calls leads to openai.BadRequestError. This happens only when I set model_client_stream=True.

To Reproduce

The following snippet reproduces the issue.

from autogen_core.tools import BaseTool, FunctionTool
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from typing import List
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core import CancellationToken
import asyncio


async def get_current_time() -> str:
    return "The current time is 12:00 PM."

async def get_stock_price(stock: str = "MSFT") -> str:
    return "The stock price is 500 dollars."


async def main():
    tools: List[BaseTool] = [
        FunctionTool(
            get_current_time,
            name="get_current_time",
            description="Get current time in Paris.",
        ),
        FunctionTool(
            get_stock_price,
            name="get_stock_price",
            description="Fetch the stock price for a given company."
        ),
    ]
    model_client = OpenAIChatCompletionClient(
            model="gemini-2.5-flash",
            api_key="",
    )

    response = await AssistantAgent(
            name="Agent",
            model_client=model_client,
            model_client_stream=True,
            reflect_on_tool_use=True,
            tools=tools,
            max_tool_iterations=10,
            system_message="You are an assistant with some tools that can be used to answer some questions",
        ).on_messages([TextMessage(content="What is current time of Paris now? And what is stock price of MSFT", source="user")], CancellationToken())

    print(response)



if __name__ == "__main__":
    asyncio.run(main())

Which packages was the bug in?

Python AgentChat (autogen-agentchat>=0.4.0)

AutoGen library version.

Python 0.6.4

Other library version.

No response

Model used

gemini-2.5-flash

Model provider

Google Gemini

Other model provider

No response

Python version

3.11

.NET version

None

Operating system

Ubuntu

apremalal avatar Jul 23 '25 07:07 apremalal

It seems that the Gemini server generates a malformed tool call argument that looks like this: {}{"stock":"MSFT"}. The first chunk that comes in is {}, and the second chunk is {"stock":"MSFT"}.

ekzhu avatar Jul 23 '25 18:07 ekzhu
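To illustrate the failure mode described above: concatenating the two streamed chunks yields `{}{"stock":"MSFT"}`, which is not valid JSON, so parsing the accumulated arguments fails. A minimal sketch of a client-side workaround, assuming the spurious chunk is always an empty object (`merge_streamed_args` is a hypothetical helper, not part of autogen):

```python
import json


def merge_streamed_args(chunks: list[str]) -> dict:
    """Concatenate streamed tool-call argument chunks, dropping
    empty-object chunks that the provider emits before the real payload."""
    merged = "".join(c for c in chunks if c.strip() not in ("", "{}"))
    return json.loads(merged) if merged else {}


# Naive concatenation of the chunks reported above is not parseable:
try:
    json.loads('{}{"stock":"MSFT"}')
except json.JSONDecodeError as e:
    print("naive concat fails:", e)

# Filtering the empty-object chunk first recovers the arguments:
print(merge_streamed_args(["{}", '{"stock":"MSFT"}']))  # {'stock': 'MSFT'}
```

This only papers over the symptom for this particular chunk pattern; the underlying fix belongs in the streaming accumulation logic of the model client.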

I have the same problem with vLLM serving Qwen, using model_client_stream=True and reflect_on_tool_use=True.

As soon as model_client_stream is enabled, HandOff no longer works without errors.

With autogen-agentchat 0.6.4, arguments="" is always an empty string, and the HandOff only happens after the stream has terminated.

With model_client_stream=False everything works perfectly.

This is especially annoying when you want to ask the user a question. I hope there is a way to fix the problem.

alb99 avatar Jul 24 '25 06:07 alb99

Any idea how we should go about addressing this?

quantexperts avatar Jul 28 '25 12:07 quantexperts

I was getting the same error before, so I used a pydantic model as the tool input instead of plain parameters. Until now I hadn't hit the multiple-tool-call error again.

from pydantic import BaseModel, Field

class IPParams(BaseModel):
    stock: str = Field()

def stock_tool(params: IPParams):
    pass

Edit: I got the error today even after using pydantic models.

Ravikumarchavva avatar Aug 05 '25 18:08 Ravikumarchavva
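For context on why this workaround was plausible: wrapping all parameters in a single pydantic model changes the JSON schema the provider sees to one required object argument, rather than several loose fields. A minimal sketch, assuming pydantic v2 (the `IPParams` model here mirrors the hypothetical example in the comment above):

```python
from pydantic import BaseModel, Field


# Single-model argument wrapper, as tried in the workaround above.
class IPParams(BaseModel):
    stock: str = Field(description="Ticker symbol, e.g. MSFT")


# pydantic v2 emits the JSON schema that a tool-calling provider
# would receive for these arguments: one required "stock" property.
schema = IPParams.model_json_schema()
print(sorted(schema["properties"]))  # ['stock']
```

As the edit above notes, this does not actually fix the streaming bug; the malformed chunk concatenation happens regardless of how the argument schema is shaped.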