Function calling error when using Gemini 2.0
Describe the bug
When I try to run SWE-Agent with the `gemini/gemini-2.0-flash` model on the SWE-bench dataset, I get a LiteLLM error related to function calling.
Steps/commands/code to Reproduce
```
sweagent run-batch \
    --config config/default.yaml \
    --agent.model.name gemini/gemini-2.0-flash \
    --agent.model.per_instance_cost_limit 2.00 \
    --instances.type swe_bench \
    --instances.subset lite --instances.split dev \
    --instances.slice :3 \
    --instances.shuffle=True
```
Error message/results
"response":
"Exit due to unknown error: litellm.BadRequestError: VertexAIException BadRequestError - {\n
\"errorl": f\n
\"code\": 400, \n \ "message)": \"* GenerateContentRequest.
toolsl01. function_declarations|4].parameters.properties:
should be non-empty for OBJECT typel\n* GenerateContentRequest.tools[0].function_declarations[5].parameters.properties: should be non
-empty for OBJECT typel\n* GenerateContentRequest.tools[0].function_declarations[11].parameters.properties: should be non-empty for OBJECT typelln", In
I"status!": \"INVALID_ARGUMENT\" \n
"thought": "Exit due to unknown error: litellm.BadRequestError: VertexAIException BadRequestError - {\n
\ "message\": \"* GenerateContentRequest.t
oliotinetin declaration. metes. pertie mt teentent est t fu"d declarations parameters. propertate: sheute duest. -
empty for OBJECT typel\n* GenerateContentRequest.tools[0]. function_declarations[111.parameters.properties: should be non-empty for OBJECT typellnl", \n ("status!": \ "INVALID_ARGUMENT\"\n
System Information
```
Linux 6.1.0-31-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.128-1 (2025-02-07) x86_64 GNU/Linux
```
Checklist
- [x] I'm running with the latest docker container/on the latest development version (i.e., I ran `git pull`)
- [x] I have copied the full command/code that I ran (as text, not as a screenshot!)
- [x] If applicable: I have copied the full log file/error message that was the result (as text, not as a screenshot!)
- [x] I have enclosed code/log messages in triple backticks (docs) and clicked "Preview" to make sure it's displayed correctly.
Thanks for raising this @karan15234. Hmm, I can try to get access to Gemini later, but it's probably only gonna happen in the next few days. Here are a few pointers if you want to get started on this yourself (a PR would be much appreciated).
1. We're using `litellm` to handle the models; here is their page on Gemini. In particular, they list
```python
from litellm import completion
import os

# set env
os.environ["GEMINI_API_KEY"] = ".."

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]

response = completion(
    model="gemini/gemini-1.5-flash",
    messages=messages,
    tools=tools,
)

# Add any assertions here to check response args
print(response)
assert isinstance(response.choices[0].message.tool_calls[0].function.name, str)
assert isinstance(response.choices[0].message.tool_calls[0].function.arguments, str)
```
as the example of using function calls. As a first step, can you check if that one works for your setup?
2. Given that 1 is "yes", we'd have to see what goes wrong with `swe-agent`. As a first step, let's try to reproduce the problem. First check that something like this
```python
from sweagent.agent.models import LiteLLMModel, ToolConfig, GenericAPIModelConfig

model_config = GenericAPIModelConfig(name="...")
model = LiteLLMModel(model_config, tools=ToolConfig())
messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]
model.query(messages)
```
works. This technically shouldn't have function calls in there yet.
3. Next, let's add some function to `ToolConfig()` but run with the same message.
4. Next, let's use the function in the message.

I'd have to look up 3 and 4, but if you wanna get started on the others, it would be much appreciated. A sketch of the hypothesis to test is below.
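The error above complains that `parameters.properties` is empty for some function declarations, i.e. `{"type": "object", "properties": {}}`, which the Gemini API apparently rejects for tools that take no arguments. Here's a minimal sketch of that hypothesis, using only the litellm API from above (the `submit` tool is a hypothetical stand-in for a zero-argument SWE-agent command); if the hypothesis is right, this should reproduce the same BadRequestError:

```python
import os
from litellm import completion

os.environ["GEMINI_API_KEY"] = ".."  # replace with a real key

# A single zero-argument tool: its OBJECT schema has empty "properties",
# which (per the error message above) Gemini rejects with INVALID_ARGUMENT.
tools = [
    {
        "type": "function",
        "function": {
            "name": "submit",  # hypothetical stand-in for a no-argument tool
            "description": "Submit the current solution",
            "parameters": {"type": "object", "properties": {}},
        },
    }
]

response = completion(
    model="gemini/gemini-2.0-flash",
    messages=[{"role": "user", "content": "Submit when you are done."}],
    tools=tools,
)
print(response)
```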
I wonder if this is still happening, @klieret? No rush, but Gemini 2.5 is free now; I can give you my API key if you ping me.
I tried to reproduce the error, but I don't know which provider was used. I tried with OpenRouter, but I got a different error:
```
🤠 ERROR Exiting due to unknown error: litellm.UnsupportedParamsError: gemini does not support parameters:
['reasoning_effort'], for model=gemini-2.0-flash. To drop these, set `litellm.drop_params=True` or for proxy:
`litellm_settings:
drop_params: true`.
If you want to use these params dynamically send allowed_openai_params=['reasoning_effort'] in your request.
```
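If the goal is just to get past this locally, the error message itself points at litellm's global switch; a minimal sketch (this would have to run before SWE-agent queries the model, e.g. as a temporary local patch):

```python
import litellm

# Silently drop request parameters the target provider doesn't support
# (e.g. reasoning_effort for gemini-2.0-flash) instead of raising
# UnsupportedParamsError.
litellm.drop_params = True
```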
Same question with gpt-4o; how can I resolve it?
```
$ sweagent run --agent.model.name=gpt-4o --env.repo.path=/home/wql/AVR_JAVA_STUDY/Java_rep/netty --problem_statement.path=/home/wql/AVR_JAVA_STUDY/Java_rep/netty/cve-2015.md --env.deployment.image=python:3.12 --agent.model.top_p=null --agent.model.temperature=1 --agent.model.reasoning_effort=null
🤖 DEBUG n_cache_control: 1
🤠 ERROR Exiting due to unknown error: litellm.UnsupportedParamsError: openai does
not support parameters: ['reasoning_effort'], for model=gpt-4o. To drop
these, set `litellm.drop_params=True` or for proxy:
`litellm_settings:
drop_params: true`
.
If you want to use these params dynamically send
allowed_openai_params=['reasoning_effort'] in your request.
Traceback (most recent call last):
File
"/home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/sweagent/agent/agents.py",
line 1109, in forward_with_handling
return self.forward(history)
^^^^^^^^^^^^^^^^^^^^^
File
"/home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/sweagent/agent/agents.py",
line 1042, in forward
output = self.model.query(history) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/sweagent/agent/models.py",
line 805, in query
for attempt in Retrying(
File
"/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/tenacity
/__init__.py", line 445, in __iter__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/tenacity
/__init__.py", line 378, in iter
result = action(retry_state)
^^^^^^^^^^^^^^^^^^^
File
"/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/tenacity
/__init__.py", line 400, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
^^^^^^^^^^^^^^^^^^^
File
"/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/concurrent/futures/_ba
se.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File
"/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/concurrent/futures/_ba
se.py", line 401, in __get_result
raise self._exception
File
"/home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/sweagent/agent/models.py",
line 831, in query
result = self._query(messages, n=n, temperature=temperature)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/sweagent/agent/models.py",
line 787, in _query
outputs.extend(self._single_query(messages))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/sweagent/agent/models.py",
line 718, in _single_query
response: litellm.types.utils.ModelResponse = litellm.completion( #
type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^
File
"/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/litellm/
utils.py", line 1332, in wrapper
raise e
File
"/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/litellm/
utils.py", line 1207, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/litellm/
main.py", line 3452, in completion
raise exception_type(
File
"/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/litellm/
main.py", line 1246, in completion
optional_params = get_optional_params(
^^^^^^^^^^^^^^^^^^^^
File
"/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/litellm/
utils.py", line 3308, in get_optional_params
_check_valid_arg(
File
"/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/litellm/
utils.py", line 3291, in _check_valid_arg
raise UnsupportedParamsError(
litellm.exceptions.UnsupportedParamsError: litellm.UnsupportedParamsError:
openai does not support parameters: ['reasoning_effort'], for
model=gpt-4o. To drop these, set `litellm.drop_params=True` or for proxy:
`litellm_settings:
drop_params: true`
.
If you want to use these params dynamically send
allowed_openai_params=['reasoning_effort'] in your request.
🤠 WARN Exit due to unknown error: litellm.UnsupportedParamsError: openai does not
support parameters: ['reasoning_effort'], for model=gpt-4o. To drop these,
set `litellm.drop_params=True` or for proxy:
`litellm_settings:
drop_params: true`
.
If you want to use these params dynamically send
allowed_openai_params=['reasoning_effort'] in your request.
🤠 WARN Attempting autosubmission after error
🤠 INFO Executing submission command git add -A && git diff --cached >
/root/model.patch in /netty
🤠 INFO Found submission:
🤠 INFO 🤖 MODEL INPUT
Your command ran successfully and did not produce any output.
🤠 INFO Trajectory saved to
/home/wql/clash/clash-for-linux-install-master/trajectories/wql/no_config_
_gpt-4o__t-1.00__p-1.00__c-3.00___acc3bb/acc3bb/acc3bb.traj
⚡️ INFO No patch to save.
🏃 INFO Done
🪴 INFO Beginning environment shutdown...
🦖 DEBUG Ensuring deployment is stopped because object is deleted
```
This error can be overcome by passing `null` for `top_p`, as follows:

```
sweagent run \
  --env.repo.github_url https://github.com/SWE-agent/test-repo \
  --problem_statement.github_url https://github.com/SWE-agent/test-repo/issues/1 \
  --agent.model.name azure/o3 \
  --agent.model.top_p null \
  --agent.model.temperature 1
```
I ran the following command on SWE-agent 1.10:

```
sweagent run --env.repo.path=/home/wql/AVR_JAVA_STUDY/Java_rep/netty --problem_statement.path=/home/wql/AVR_JAVA_STUDY/Java_rep/netty/cve-2015.md --agent.model.name=gpt-4o --env.deployment.image=python:3.12 --agent.model.top_p null --agent.model.temperature 1
```

but it failed with the following error:
```
Can you help me implement the necessary changes to the repository so that the requirements specified in
the <pr_description> are met?
I've already taken care of all changes to any of the test files described in the <pr_description>. This
means you DON'T have to modify the testing logic or any of the tests in any way!
Your task is to make the minimal changes to non-tests files in the /netty directory to ensure the
<pr_description> is satisfied.
Follow these steps to resolve the issue:
1. As a first step, it might be a good idea to find and read code relevant to the <pr_description>
2. Create a script to reproduce the error and execute it with `python <filename.py>` using the bash tool,
to confirm the error
3. Edit the sourcecode of the repo to resolve the issue
4. Rerun your reproduce script and confirm that the error is fixed!
5. Think about edgecases and make sure your fix handles them as well
Your thinking should be thorough and so it's fine if it's very long.
🤠 INFO ========================= STEP 1 =========================
🤖 DEBUG n_cache_control: 1
🤠 ERROR Exiting due to unknown error: litellm.UnsupportedParamsError: openai does not support parameters:
['reasoning_effort'], for model=gpt-4o. To drop these, set litellm.drop_params=True or for proxy:
`litellm_settings:
drop_params: true`
.
If you want to use these params dynamically send allowed_openai_params=['reasoning_effort'] in your
request.
Traceback (most recent call last):
File "/home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/sweagent/agent/agents.py", line 1109, in
forward_with_handling
return self.forward(history)
^^^^^^^^^^^^^^^^^^^^^
File "/home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/sweagent/agent/agents.py", line 1042, in forward
output = self.model.query(history) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/sweagent/agent/models.py", line 805, in query
for attempt in Retrying(
File "/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/tenacity/__init__.py", line 445,
in __iter__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/tenacity/__init__.py", line 378,
in iter
result = action(retry_state)
^^^^^^^^^^^^^^^^^^^
File "/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/tenacity/__init__.py", line 400,
in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
^^^^^^^^^^^^^^^^^^^
File "/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/concurrent/futures/_base.py", line 449, in
result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/concurrent/futures/_base.py", line 401, in
__get_result
raise self._exception
File "/home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/sweagent/agent/models.py", line 831, in query
result = self._query(messages, n=n, temperature=temperature)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/sweagent/agent/models.py", line 787, in _query
outputs.extend(self._single_query(messages))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/sweagent/agent/models.py", line 718, in
_single_query
response: litellm.types.utils.ModelResponse = litellm.completion( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/litellm/utils.py", line 1332, in
wrapper
raise e
File "/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/litellm/utils.py", line 1207, in
wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/litellm/main.py", line 3452, in
completion
raise exception_type(
File "/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/litellm/main.py", line 1246, in
completion
optional_params = get_optional_params(
^^^^^^^^^^^^^^^^^^^^
File "/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/litellm/utils.py", line 3308, in
get_optional_params
_check_valid_arg(
File "/home/wql/miniconda3/envs/SWE-agent/lib/python3.11/site-packages/litellm/utils.py", line 3291, in
_check_valid_arg
raise UnsupportedParamsError(
litellm.exceptions.UnsupportedParamsError: litellm.UnsupportedParamsError: openai does not support
parameters: ['reasoning_effort'], for model=gpt-4o. To drop these, set `litellm.drop_params=True` or for
proxy:
`litellm_settings:
drop_params: true`
.
If you want to use these params dynamically send allowed_openai_params=['reasoning_effort'] in your
request.
🤠 WARN Exit due to unknown error: litellm.UnsupportedParamsError: openai does not support parameters:
['reasoning_effort'], for model=gpt-4o. To drop these, set litellm.drop_params=True or for proxy:
`litellm_settings:
drop_params: true`
.
If you want to use these params dynamically send allowed_openai_params=['reasoning_effort'] in your
request.
🤠 WARN Attempting autosubmission after error
🤠 INFO Executing submission command git add -A && git diff --cached > /root/model.patch in /netty
🤠 INFO Found submission:
🤠 INFO 🤖 MODEL INPUT
Your command ran successfully and did not produce any output.
🤠 INFO Trajectory saved to /home/wql/AVR_JAVA_STUDY/SWE-agent/SWE-agent/trajectories/wql/no_config__gpt-4o__t-1.00__p-None__c-3.00___acc3bb/acc3bb/acc3bb.traj
⚡️ INFO No patch to save.
🏃 INFO Done
🪴 INFO Beginning environment shutdown...
🦖 DEBUG Ensuring deployment is stopped because object is deleted
```
> I tried to reproduce the error, but I don't know which provider was used. I tried with OpenRouter, but I got a different error:
> 🤠 ERROR Exiting due to unknown error: litellm.UnsupportedParamsError: gemini does not support parameters: ['reasoning_effort'], for model=gemini-2.0-flash. To drop these, set `litellm.drop_params=True` or for proxy: `litellm_settings: drop_params: true`. If you want to use these params dynamically send allowed_openai_params=['reasoning_effort'] in your request.

Same question as this.
@WordDealer In both cases the call failed with the unknown parameter `reasoning_effort`. It is understandable that you don't expect the error when you pass null as the value, or when you don't even pass that parameter. Let me check the code and come back.
1.0.0 is OK, 1.1 is bad for me.
@WordDealer You're facing this error because the `reasoning_effort` parameter is passed by the config file. As you didn't supply any config file on your command line, it takes `config/default.yaml` as the default config file, and this file has `reasoning_effort` set.
You can overcome the error just by commenting out the last two lines of `default.yaml`:
```yaml
...
model:
  temperature: 1.
  # completion_kwargs:
  #   reasoning_effort: 'high'
```
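For reference, a minimal sketch of why this fails, assuming (as the traceback above suggests) that `completion_kwargs` from `default.yaml` are forwarded verbatim to `litellm.completion`; with the litellm version from the traceback, gpt-4o rejects `reasoning_effort` before any request is even sent:

```python
import litellm

try:
    litellm.completion(
        model="gpt-4o",
        messages=[{"role": "user", "content": "hi"}],
        reasoning_effort="high",  # the value default.yaml sets
    )
except litellm.exceptions.UnsupportedParamsError as e:
    # "openai does not support parameters: ['reasoning_effort'], ..."
    print(e)
```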
Hope this helps.
I will try it. Thank you very much!
@WordDealer There is an open PR by @klieret to fix this issue - https://github.com/SWE-agent/SWE-agent/pull/1281/files
It will get merged soon.
thanks a lot!