The `response.text` quick accessor only works when the response contains a valid `Part`, but none was returned. Check the `candidate.safety_ratings` to see if the response was blocked.
Description of the bug:
The `response.text` quick accessor only works when the response contains a valid `Part`, but none was returned. Check the `candidate.safety_ratings` to see if the response was blocked.
Actual vs expected behavior:
Traceback (most recent call last):
  File "D:\Study\codes\Gemini_score.py", line 64, in <module>
ValueError: The `response.text` quick accessor only works when the response contains a valid `Part`, but none was returned. Check the `candidate.safety_ratings` to see if the response was blocked.
Any other information you'd like to share?
This is my code:
import pandas as pd
import pdb
import os

import google.generativeai as genai

genai.configure(api_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")
model = genai.GenerativeModel('gemini-1.5-pro')

response = model.generate_content(system_prompt)
strr = str(id) + ":" + response.text
system_prompt is a string
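As the error message says, the fields it names can be inspected before touching response.text. A minimal sketch of that check against the snippet above (the finish_reason.name comparison assumes the proto enum exposed by recent SDK versions):

response = model.generate_content(system_prompt)

# response.text raises whenever the response contains no valid Part,
# so check the candidates and prompt feedback first.
if not response.candidates:
    # The prompt itself was rejected; prompt_feedback says why.
    print("Prompt blocked:", response.prompt_feedback)
else:
    candidate = response.candidates[0]
    if candidate.finish_reason.name == "STOP":
        strr = str(id) + ":" + response.text
    else:
        # e.g. SAFETY, RECITATION, or OTHER -- there is no text to read.
        print("finish_reason:", candidate.finish_reason)
        print("safety_ratings:", candidate.safety_ratings)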
@141forever, a similar feature request, #282, has already been filed. Please follow and +1 that issue for updates; I'm closing this thread. Thank you!
@singhniraj08 I'm facing a similar problem to the OP's, but since the finish_reason is always OTHER and the issue you linked is about safety, I'm not sure it's related.
My prompt is as simple as translating a JSON object with the content "Hi Peter, how's it going?" or "Chief Technology Officer" along with related metadata. Using the same prompt, I can easily get a result from ChatGPT, but Gemini (both Pro and Flash) just responds with finish_reason=OTHER, so I don't even know what the problem with the prompt is.
Additional information: with the same prompt I always get the expected result via the public Gemini website, https://gemini.google.com. With https://aistudio.google.com/app/prompts/new_chat, it fails every time. I can share the prompt via email if you're interested.
@trunglebka, #282 is about implementing more helpful error messages when the response is blocked by Gemini for safety or other reasons. The SDK doesn't control the service's responses. If you feel responses are being blocked when they shouldn't be, we suggest using the "Send Feedback" option in the Gemini docs (ref: screenshot below). You can also post this issue on the Discourse forum.
@singhniraj08 Thanks for pointing that out; I wasn't paying close attention.
I'm having the same issue. I'm rewriting text from a healthcare USML dataset, and on one specific question I got this error.
Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.
@baptvit @141forever
Set safety_settings and then generate the content: https://ai.google.dev/gemini-api/docs/safety-settings?hl=en
@architectyou yeah, I've tried setting the safety_settings:
response = GEMINI_MODEL.generate_content(prompt, safety_settings=[
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
    {"category": "OTHER", "threshold": "BLOCK_NONE"}  # note: "OTHER" is not a documented harm category
])
response.text
But even though I set that up, I still had to catch the exception:
def rewrite_sentence(sentence_original, model=GEMINI_MODEL, template_prompt=TEMPLATE_PROMPT):
    try:
        prompt = template_prompt.replace("<original_sentence>", sentence_original)
        response = model.generate_content(prompt, safety_settings=[
            {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"}
        ])
        return response.text
    except Exception as e:
        return "Error text"
Same error here; the safety settings have no effect on this particular issue.
Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.
This issue was closed because it has been inactive for 28 days. Please post a new issue if you need further assistance. Thanks!
Facing same issue for reviewing code (which is supposed to be written badly): The response.text quick accessor only works when the response contains a valid Part, but none was returned. Check the candidate.safety_ratings to see if the response was blocked.
Facing same issue as well
Facing the same issue here
I'm still seeing the issue. Does Google even care?
Pretty much unusable in a production context until this is fixed :(
You need to pass safety settings. Here is how I did it in one of my open-source repos: https://github.com/ajitesh123/auto-review-ai/blob/main/backend/llm.py#L177-L198
(Please star my repo if this solves it for you :))
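For reference, the dict-of-strings form shown earlier can also be written with the SDK's typed enums, which turns an unsupported category name (there is no "OTHER" category) into an immediate Python error instead of a failed request. A sketch, assuming a google-generativeai version that exports these types:

from google.generativeai.types import HarmBlockThreshold, HarmCategory

# Map each supported harm category to BLOCK_NONE.
safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
}
response = model.generate_content(prompt, safety_settings=safety_settings)

As several comments above note, though, BLOCK_NONE does not help when the failure is finish_reason=OTHER rather than a safety block.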
Sadly, changing the safety settings has no effect on this issue.
Same error here too
@phil-scholarcy @OGsiji I can look into this issue for you. Can you please post a minimal working code sample here that caused this error for you? I tried the OP's original code sample and I am unable to hit that error.
Same now for Gemini 2.5, and safety settings have no effect.
Facing the same issue still