MALFORMED_FUNCTION_CALL seems to be easily triggered by the model (Gemini) when writing todos.
Currently, I have found that this error occurs quite frequently on the write_todos call.
│ │ AIMessage(
│ │ │ content='',
│ │ │ additional_kwargs={},
│ │ │ response_metadata={
│ │ │ │ 'is_blocked': False,
│ │ │ │ 'safety_ratings': [],
│ │ │ │ 'usage_metadata': {
│ │ │ │ │ 'prompt_token_count': 5553,
│ │ │ │ │ 'total_token_count': 5553,
│ │ │ │ │ 'prompt_tokens_details': [{'modality': 1, 'token_count': 5553}],
│ │ │ │ │ 'candidates_token_count': 0,
│ │ │ │ │ 'thoughts_token_count': 0,
│ │ │ │ │ 'cached_content_token_count': 0,
│ │ │ │ │ 'cache_tokens_details': [],
│ │ │ │ │ 'candidates_tokens_details': []
│ │ │ │ },
│ │ │ │ 'finish_reason': 'MALFORMED_FUNCTION_CALL',
│ │ │ │ 'finish_message': 'Malformed function call: print(default_api.write_todos(todos=[\n default_api.WriteTodosTodos(content=\'Perform web search for "CrewAI vs LangGraph comparison"\', status=\'pending\'),\n default_api.WriteTodosTodos(content=\'Analyze search results for key information on architecture, scalability, integration, community, documentation, licensing, cost, and use cases for both CrewAI and LangGraph.\', status=\'pending\'),\n default_api.WriteTodosTodos(content=\'Perform web search for "CrewAI architecture performance integration"\', status=\'pending\'),\n default_api.WriteTodosTodos(content=\'Analyze search results for key information on architecture, scalability, integration, community, documentation, licensing, cost, and use cases for both CrewAI and LangGraph.\', status=\'pending\'),\n default_api.WriteTodosTodos(content=\'Perform web search for "LangGraph architecture performance integration"\', status=\'pending\'),\n default_api.WriteTodosTodos(content=\'Analyze search results for key information on architecture, scalability, integration, community, documentation, licensing, cost, and use cases for both CrewAI and LangGraph.\', status=\'pending\'),\n default_api.WriteTodosTodos(content=\'Consolidate all findings and generate a comprehensive report comparing CrewAI and LangGraph.\', status=\'pending\')\n]))',
│ │ │ │ 'model_name': 'gemini-2.5-flash'
│ │ │ },
│ │ │ id='run--8a36437b-6e83-4081-881e-e4bb174b752d-0',
│ │ │ usage_metadata={'input_tokens': 5553, 'output_tokens': 0, 'total_tokens': 5553, 'input_token_details': {'cache_read': 0}}
│ │ )
│ ],
This happens to me a lot. Although I tell the model not to write code, gemini-2.5-pro for some reason always does.
up to repo owner
Facing the same issue. Is there any way to fix it?
@XinyueZ it looks like you were running into this with gemini-2.5-flash, were there other models where you were seeing this behavior as well?
@riadzeitounn looks like gemini-2.5-pro also had this issue
@haadirakhangi was your issue with gemini as well?
Personally I usually test with the latest OpenAI (gpt-4.1, gpt-5) / Anthropic models (sonnet 3-7 and up, recently 4 and 4-5) and I don't see this issue. I'll investigate this.
It might be good to add retries automatically to try and mitigate this issue
Retry is a good idea, but where? Internally in the library, or externally on the user side?
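For the user-side option, a minimal retry wrapper is easy to sketch. This is an assumption-laden example, not library code: `invoke` stands for any callable returning a message dict, and the metadata shape (`response_metadata.finish_reason`) mirrors the AIMessage dump above.

```python
def is_malformed_call(message: dict) -> bool:
    """Detect the failure mode from the dump above: Gemini finishing
    with finish_reason == 'MALFORMED_FUNCTION_CALL' and empty content."""
    meta = message.get("response_metadata", {})
    return meta.get("finish_reason") == "MALFORMED_FUNCTION_CALL"


def invoke_with_retry(invoke, prompt, max_retries=3):
    """Re-invoke the model until the response is well-formed or
    retries are exhausted. `invoke` is a stand-in for whatever
    model call you are making (hypothetical name)."""
    last = None
    for _ in range(max_retries):
        last = invoke(prompt)
        if not is_malformed_call(last):
            return last
    raise RuntimeError(f"Still malformed after {max_retries} attempts: {last!r}")
```

An internal retry would look the same but live behind the model wrapper, so callers never see the malformed response at all.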
Yeah, I ran into the same problem with Gemini. It seems like Gemini can't handle tool names that include hyphens (-). I noticed this while using an MCP server that followed that naming convention.
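If that observation holds, one workaround is to sanitize tool names before binding them. A sketch, assuming the hyphen is the culprit (the `-` → `_` mapping is a guess based on the comment above, not a documented Gemini requirement):

```python
import re


def sanitize_tool_name(name: str) -> str:
    """Replace any character outside [A-Za-z0-9_] (e.g. hyphens from
    MCP-style names like 'web-search') with an underscore, so the
    resulting name is safe for function-calling schemas."""
    return re.sub(r"[^0-9A-Za-z_]", "_", name)
```

You would apply this to each tool's name when registering it, keeping a reverse map if the original hyphenated name is needed to dispatch the call back to the MCP server.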
Both gemini-2.5-flash-lite and gemini-2.5-pro work fine for me, but gemini-2.5-flash fails to generate the write_todos tool call