generative-ai-python
`"Unexpected type of call %s" % type(call)` when doing an async chat call with `send_message_async`
### Description of the bug:
I get `TypeError: Unexpected type of call %s" % type(call)` when doing an async chat call with `send_message_async`. I use `rest` as the transport, with my own proxy API endpoint.
Here is the code:
```python
import asyncio
import os
import traceback

import google.generativeai as genai
from dotenv import load_dotenv

load_dotenv()
GEMINI_API_KEY = os.getenv('GEMINI_API_KEY')
GEMINI_API_ENDPOINT = os.getenv('GEMINI_API_ENDPOINT')


async def main():
    try:
        genai.configure(
            api_key=GEMINI_API_KEY,
            transport="rest",
            client_options={"api_endpoint": GEMINI_API_ENDPOINT}
        )
        model = genai.GenerativeModel('gemini-pro')
        chat = model.start_chat()
        # response = chat.send_message("Use python to write a fib func", stream=True)  # line a. This line works
        response = await chat.send_message_async("Use python to write a fib func", stream=True)  # line b. This line will cause an error
        async for chunk in response:  # an async streaming response is iterated with `async for`
            print('=' * 80)
            print(chunk.text)
    except Exception as e:
        traceback.print_exc()


def run_main():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        loop.run_until_complete(main())
    except Exception as e:
        print(e)
    finally:
        loop.close()


run_main()
```
After running this code, I get the following error:
```
D:\xxx\venv\Scripts\python.exe D:\xxx\chore\gemini\gemini_test.py
Traceback (most recent call last):
  File "D:\xxx\chore\gemini\gemini_test.py", line 27, in main
    response = await chat.send_message_async("Use python to write a fib func", stream=True)  # This line will cause an error
  File "D:\xxx\venv\lib\site-packages\google\generativeai\generative_models.py", line 410, in send_message_async
    response = await self.model.generate_content_async(
  File "D:\xxx\venv\lib\site-packages\google\generativeai\generative_models.py", line 272, in generate_content_async
    iterator = await self._async_client.stream_generate_content(request)
  File "D:\xxx\venv\lib\site-packages\google\api_core\retry_async.py", line 223, in retry_wrapped_func
    return await retry_target(
  File "D:\xxx\venv\lib\site-packages\google\api_core\retry_async.py", line 121, in retry_target
    return await asyncio.wait_for(
  File "C:\xxx\Python\Python310\lib\asyncio\tasks.py", line 445, in wait_for
    return fut.result()
  File "D:\xxx\venv\lib\site-packages\google\api_core\grpc_helpers_async.py", line 187, in error_remapped_callable
    raise TypeError("Unexpected type of call %s" % type(call))
TypeError: Unexpected type of call <class 'google.api_core.rest_streaming.ResponseIterator'>

Process finished with exit code 0
```
### Actual vs expected behavior:

It should work the same as the `send_message` method in line a, which produces:
````
D:\xxx\venv\Scripts\python.exe D:\xxx\chore\gemini\gemini_test.py
================================================================================
```python
def fib(n):
  """Calculates the nth Fibonacci
================================================================================
 number.

  Args:
    n: The index of the Fibonacci number to calculate.

  Returns:
    The nth Fibonacci number.
================================================================================
  """
  if n < 2:
    return n
  else:
    return fib(n-1) + fib(n-2)

Process finished with exit code 0
````
### Any other information you'd like to share?
Python: 3.10.5
OS: Windows 11
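Until the async REST streaming path is fixed, one possible workaround (my own sketch, not an official fix, and it buffers the chunks rather than streaming them incrementally) is to keep `transport="rest"` and push the working synchronous call into a worker thread with `asyncio.to_thread` (Python 3.9+), so the event loop is not blocked:

```python
import asyncio
import google.generativeai as genai


def _sync_stream(prompt: str) -> list[str]:
    # Assumes genai.configure(...) was already called as in the repro above.
    # The synchronous path (line a) works over REST, so do all the blocking
    # work, including draining the stream, inside the worker thread.
    model = genai.GenerativeModel('gemini-pro')
    chat = model.start_chat()
    response = chat.send_message(prompt, stream=True)
    return [chunk.text for chunk in response]


async def main():
    chunks = await asyncio.to_thread(_sync_stream, "Use python to write a fib func")
    for text in chunks:
        print('=' * 80)
        print(text)


asyncio.run(main())
```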
The `send_message()` function works correctly only on the first attempt; subsequent attempts fail with what look like type or dictionary errors on my local laptop. To work around this, I append to (or extend) the chat messages list myself. The `to_markdown()` function below comes from Google's sample code and can fail when run against a locally installed `google-generativeai` package, so I added a line that makes it work on my machine; I made similar adjustments to the chat history list.
```python
import textwrap

import google.generativeai as genai
from IPython.display import Markdown, display


def to_markdown(text):
    # Unwrap the raw text from the response object so .replace() works
    # with a locally installed google-generativeai package
    text = text._result.candidates[0].content.parts[0].text
    text = text.replace('•', ' *')
    return Markdown(textwrap.indent(text, '> ', predicate=lambda _: True))


model = genai.GenerativeModel('gemini-pro')
messages = [
    {'role': 'user',
     'parts': ["Briefly explain how a computer works to a young child."]}
]
response = model.generate_content(messages)
to_markdown(response)

# Manually extend the chat history so the next call sees the full conversation
messages.append({'role': 'model',
                 'parts': [response._result.candidates[0].content.parts[0].text]})
messages.append({'role': 'user',
                 'parts': ["Okay, how about a more detailed explanation to a high school student?"]})
response = model.generate_content(messages)
to_markdown(response)

messages.append({'role': 'model',
                 'parts': [response._result.candidates[0].content.parts[0].text]})
messages  # in a notebook cell this displays the accumulated history


# Function to display text as Markdown
def print_md(text):
    display(Markdown(text))


# Loop through each conversation part and display it
for convo in messages:
    role_str = f"#### Role: {convo['role']}"
    parts_str = "\n\n".join(convo['parts'])
    print_md(f"{role_str}\n{parts_str}\n\n")
```
Thank you for your response, but my question is specifically why `send_message_async` does not work compared to `send_message`; I didn't see anything in your reply related to `send_message_async`.
I surmise that `send_message_async` functions as a wrapper around `send_message`. The issue with `send_message()` in creating an iterable object could potentially be inherited by its asynchronous counterpart, `send_message_async`.
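For what it's worth, the traceback above suggests a slightly different chain: `send_message_async` appears to delegate to `generate_content_async` rather than wrapping `send_message`. A rough, hypothetical paraphrase of the frames shown in the traceback (not the actual library source):

```python
# Paraphrase of the call chain shown in the traceback, not the real source.

class ChatSession:
    async def send_message_async(self, content, stream=False):
        # generative_models.py:410: delegates to the model rather than
        # wrapping the synchronous send_message
        return await self.model.generate_content_async(content, stream=stream)


class GenerativeModel:
    async def generate_content_async(self, request, stream=False):
        # generative_models.py:272: with transport="rest" this returns a
        # rest_streaming.ResponseIterator, which the gRPC-oriented helper
        # in grpc_helpers_async.py then rejects with the TypeError above
        return await self._async_client.stream_generate_content(request)
```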
I met the same problem with the `generate_content_async` function:
```python
response = await model.generate_content_async(
    contents=prompt,
    generation_config=genai.types.GenerationConfig(
        temperature=request.temperature,
        top_p=request.top_p,
    ),
    stream=True,
)
```
I met the same problem when using `transport='rest'` and `stream=True`; maybe something is wrong in `google/ai/generativelanguage_v1beta/services/generative_service/transports/rest.py:972` or `google/ai/generativelanguage_v1beta/services/generative_service/async_client.py:658`.
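The last frame of the traceback points at the same mismatch: `grpc_helpers_async.py` wraps the response in an error remapper that only recognizes `grpc.aio` call types, so the `ResponseIterator` produced by the REST transport falls through to the `TypeError`. Roughly (paraphrased from the traceback, not the actual source, which handles several `grpc.aio` call types):

```python
import grpc

# Paraphrase of the check at google/api_core/grpc_helpers_async.py:187
# named in the traceback above.
def _error_remapped(call):
    if isinstance(call, grpc.aio.Call):
        return call  # gRPC calls get error-remapping wrappers
    # The REST transport yields a rest_streaming.ResponseIterator, which is
    # not a grpc.aio call type, so it falls through to this branch:
    raise TypeError("Unexpected type of call %s" % type(call))
```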
Just ran into the same problem with `transport='rest'` and `send_message_async`.
This is https://github.com/google-gemini/generative-ai-python/issues/499 -> https://github.com/googleapis/gapic-generator-python/issues/1962