[Bug]: Invalid Argument error when using more than one Tool in the GenerativeModel class
File Name
Using Vertex AI Search (a.k.a. Agent Builder) and Function Calling
What happened?
Hi, I have an issue when using more than one Tool class in the GenerativeModel class. Long story short, I tried to make a Gemini chatbot that is capable of using Grounding (with Agent Builder) and Function Calling, so the Gemini chatbot could maximize its potential. I read the documentation of the GenerativeModel class in this link, and it said that the tools parameter could take more than one Tool class. Here's the code I used:
- Code snippet of the Tool for Function Calling
# Imports used across the snippets below (Vertex AI SDK 1.48.0)
from google.cloud import bigquery
from vertexai.generative_models import FunctionDeclaration, GenerativeModel, Tool
from vertexai.preview import generative_models as preview_generative_models

list_datasets_func = FunctionDeclaration(
    name="list_datasets",
    description="Get a list of datasets that will help answer the user's question",
    parameters={
        "type": "object",
        "properties": {},
    },
)

list_tables_func = FunctionDeclaration(
    name="list_tables",
    description="List tables in a dataset that will help answer the user's question",
    parameters={
        "type": "object",
        "properties": {
            "dataset_id": {
                "type": "string",
                "description": "Dataset ID to fetch tables from.",
            }
        },
        "required": [
            "dataset_id",
        ],
    },
)

get_table_func = FunctionDeclaration(
    name="get_table",
    description="Get information about a table, including the description, schema, and number of rows that will help answer the user's question. Always use the fully qualified dataset and table names.",
    parameters={
        "type": "object",
        "properties": {
            "table_id": {
                "type": "string",
                "description": "Fully qualified ID of the table to get information about",
            }
        },
        "required": [
            "table_id",
        ],
    },
)

sql_query_func = FunctionDeclaration(
    name="sql_query",
    description="Get information from data in BigQuery using SQL queries",
    parameters={
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "SQL query on a single line that will help give quantitative answers to the user's question when run on a BigQuery dataset and table. In the SQL query, always use the fully qualified dataset and table names.",
            }
        },
        "required": [
            "query",
        ],
    },
)

sql_query_tool = Tool(
    function_declarations=[
        list_datasets_func,
        list_tables_func,
        get_table_func,
        sql_query_func,
    ],
)
- Code snippet of the Tool for Grounded Search (with Vertex AI Search, a.k.a. Agent Builder)
vertex_search_tool = Tool.from_retrieval(
    retrieval=preview_generative_models.grounding.Retrieval(
        source=preview_generative_models.grounding.VertexAISearch(
            datastore=path+f'/dataStores/{DATASTORE_ID}'
        ),
    )
)
- Code snippet of building the chatbot with both tools
model = GenerativeModel(
    "gemini-1.0-pro",
    generation_config={"temperature": 0},
    tools=[
        vertex_search_tool,
        sql_query_tool,
    ],
)
However, when I sent a simple sentence to the chatbot (with the code below), it gave an invalid argument error, but when I use only one Tool class as the tools parameter, there is no error and it runs smoothly. I have included the error in the Relevant log output section.
chat = model.start_chat()
client = bigquery.Client(project=PROJECT_ID)
prompt = "Get a list of datasets that will help answer the user's question."
response = chat.send_message(prompt)
Details of environment and libraries:
- Python version: 3.10.13
- SDK version of google-cloud-aiplatform library: 1.48.0
- SDK version of vertexai library: 1.48.0
I hope you can give me a solution or some insight into this problem. Thanks a lot beforehand! =D
Relevant log output
_InactiveRpcError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:65, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
64 try:
---> 65 return callable_(*args, **kwargs)
66 except grpc.RpcError as exc:
File /opt/conda/lib/python3.10/site-packages/grpc/_channel.py:1176, in _UnaryUnaryMultiCallable.__call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
1170 (
1171 state,
1172 call,
1173 ) = self._blocking(
1174 request, timeout, metadata, credentials, wait_for_ready, compression
1175 )
-> 1176 return _end_unary_response_blocking(state, call, False, None)
File /opt/conda/lib/python3.10/site-packages/grpc/_channel.py:1005, in _end_unary_response_blocking(state, call, with_call, deadline)
1004 else:
-> 1005 raise _InactiveRpcError(state)
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Request contains an invalid argument."
debug_error_string = "UNKNOWN:Error received from peer ipv4:172.217.219.95:443 {grpc_message:"Request contains an invalid argument.", grpc_status:3, created_time:"2024-04-29T10:07:50.000741461+00:00"}"
>
The above exception was the direct cause of the following exception:
InvalidArgument Traceback (most recent call last)
Cell In[35], line 1
----> 1 response = chat.send_message(prompt)
2 response = response.candidates[0].content.parts[0]
File /opt/conda/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:807, in ChatSession.send_message(self, content, generation_config, safety_settings, tools, stream)
800 return self._send_message_streaming(
801 content=content,
802 generation_config=generation_config,
803 safety_settings=safety_settings,
804 tools=tools,
805 )
806 else:
--> 807 return self._send_message(
808 content=content,
809 generation_config=generation_config,
810 safety_settings=safety_settings,
811 tools=tools,
812 )
File /opt/conda/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:903, in ChatSession._send_message(self, content, generation_config, safety_settings, tools)
901 while True:
902 request_history = self._history + history_delta
--> 903 response = self._model._generate_content(
904 contents=request_history,
905 generation_config=generation_config,
906 safety_settings=safety_settings,
907 tools=tools,
908 )
909 # By default we're not adding incomplete interactions to history.
910 if self._response_validator is not None:
File /opt/conda/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:494, in _GenerativeModel._generate_content(self, contents, generation_config, safety_settings, tools, tool_config)
469 """Generates content.
470
471 Args:
(...)
485 A single GenerationResponse object
486 """
487 request = self._prepare_request(
488 contents=contents,
489 generation_config=generation_config,
(...)
492 tool_config=tool_config,
493 )
--> 494 gapic_response = self._prediction_client.generate_content(request=request)
495 return self._parse_response(gapic_response)
File /opt/conda/lib/python3.10/site-packages/google/cloud/aiplatform_v1beta1/services/prediction_service/client.py:2102, in PredictionServiceClient.generate_content(self, request, model, contents, retry, timeout, metadata)
2099 self._validate_universe_domain()
2101 # Send the request.
-> 2102 response = rpc(
2103 request,
2104 retry=retry,
2105 timeout=timeout,
2106 metadata=metadata,
2107 )
2109 # Done; return the response.
2110 return response
File /opt/conda/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:113, in _GapicCallable.__call__(self, timeout, retry, *args, **kwargs)
110 metadata.extend(self._metadata)
111 kwargs["metadata"] = metadata
--> 113 return wrapped_func(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:67, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
65 return callable_(*args, **kwargs)
66 except grpc.RpcError as exc:
---> 67 raise exceptions.from_grpc_error(exc) from exc
InvalidArgument: 400 Request contains an invalid argument.
Code of Conduct
- [X] I agree to follow this project's Code of Conduct
As far as I can tell in my own testing and usage, the tools kwarg will only take one Tool class with 1 or more functions defined. So you'll need to represent your grounding call as another function call and add it to the same Tool that contains the 4 BQ/SQL functions.
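For illustration, a minimal sketch of that approach, reusing the declarations from the snippets above; the grounding_search_func declaration and its parameters are hypothetical, just to show the grounding call folded into the same Tool (this is not an official API for combining grounding with function calling):

# Hypothetical declaration that stands in for the grounding call, so it can
# live in the same Tool as the four BQ/SQL functions.
grounding_search_func = FunctionDeclaration(
    name="grounding_search",
    description="Search the Vertex AI Search (Agent Builder) datastore for passages relevant to the user's question",
    parameters={
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Natural-language query to run against the datastore",
            }
        },
        "required": ["query"],
    },
)

# A single Tool carrying all five declarations, so the tools kwarg only needs
# one entry.
combined_tool = Tool(
    function_declarations=[
        list_datasets_func,
        list_tables_func,
        get_table_func,
        sql_query_func,
        grounding_search_func,
    ],
)

model = GenerativeModel(
    "gemini-1.0-pro",
    generation_config={"temperature": 0},
    tools=[combined_tool],
)

Your application code would then have to handle the grounding_search function call itself, for example by running the datastore lookup and returning the result as a function response.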
I apologize for the late reply. Could you please share the best practice for building a function call that wraps the grounding call? Anyway, thank you very much for your insight. It means a lot to me. I can more or less see how to implement your solution in my application. =D
Got it. Sure thing, let me reopen this and come up with a better way for us to document using the grounding tool alongside functions-as-tools. Thanks for opening this!
Thank you for reopening the issue! Looking forward to the improved documentation on using the grounding tool alongside functions-as-tools.
Hi @yamazakikakuyo. Unfortunately I was not able to find a good way to solve this and use both Tools at the same time, nor was I able to decompose the underlying FunctionDeclarations and grounding retrieval functions and combine them into a Tool.
The current way of using both Tools is to either 1) make a Gemini API call with the grounding tool, then make subsequent Gemini API calls with a Tool that refers to one or more functions, or 2) craft a new FunctionDeclaration that you then use to manually invoke a Gemini API call within another function call (a bit redundant and nested, I know).
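A rough sketch of option 2, under the same assumptions as the earlier sketch (the grounding_search declaration, the run_grounding_search helper, and the dispatch logic are illustrative, not an official pattern):

from vertexai.generative_models import Part

# Separate model that only carries the grounding Tool; it is invoked manually
# whenever the chat model asks for the hypothetical grounding_search function.
grounding_model = GenerativeModel("gemini-1.0-pro", tools=[vertex_search_tool])

def run_grounding_search(query: str) -> str:
    grounded = grounding_model.generate_content(query)
    return grounded.candidates[0].content.parts[0].text

chat = model.start_chat()  # model configured with the function-calling Tool only
response = chat.send_message(prompt)
part = response.candidates[0].content.parts[0]

if part.function_call.name == "grounding_search":
    answer = run_grounding_search(part.function_call.args["query"])
    # Feed the grounded answer back to the chat as a function response.
    response = chat.send_message(
        Part.from_function_response(
            name="grounding_search",
            response={"content": answer},
        )
    )

Whether option 1 or option 2 fits better probably depends on whether the grounded lookup needs to happen within the same chat turn as the other function calls.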
In the meantime, please open a feature request on the Vertex AI issue tracker, since this seems like a solid use case for combining different Tool specs or Tools in Gemini API calls.
Hi @koverholt, thank you for your detailed response and the suggestions provided. I understand the current limitations and the workarounds you mentioned for using both Tools simultaneously.
I have opened a feature request on the Vertex AI issue tracker to highlight this use case of combining different Tool specs or Tools with Gemini API calls. I hope I can contribute to Gemini's development. Nevertheless, your input has been very helpful, and I appreciate your guidance on this matter.
Best Regards.
Thank you for opening that feature request, we really appreciate it! Linking it here for future reference:
https://issuetracker.google.com/issues/340729475