generative-ai-python

The `response.text` quick accessor only works when the response contains a valid `Part`, but none was returned. Check the `candidate.safety_ratings` to see if the response was blocked.

Open • 141forever opened this issue 1 year ago • 11 comments

Description of the bug:

The response.text quick accessor only works when the response contains a valid Part, but none was returned. Check the candidate.safety_ratings to see if the response was blocked.

Actual vs expected behavior:

Traceback (most recent call last):
  File "D:\Study\codes\Gemini_score.py", line 64, in <module>
    strr = str(id) + ":" + response.text
  File "D:\Study\codes\venv\lib\site-packages\google\generativeai\types\generation_types.py", line 401, in text
    raise ValueError(
ValueError: The response.text quick accessor only works when the response contains a valid Part, but none was returned. Check the candidate.safety_ratings to see if the response was blocked.

Any other information you'd like to share?

This is my code:

import pandas as pd
import pdb
import google.generativeai as genai
import os

genai.configure(api_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

model = genai.GenerativeModel('gemini-1.5-pro')

response = model.generate_content(system_prompt)
strr = str(id) + ":" + response.text

system_prompt is a string
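
For context, a guarded version of the snippet above that avoids the ValueError (a minimal sketch; the API key, prompt, and id are placeholders):

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # placeholder key
model = genai.GenerativeModel('gemini-1.5-pro')

system_prompt = "..."                              # placeholder prompt string
response = model.generate_content(system_prompt)

# response.text raises the ValueError above whenever no Part came back,
# so guard the access instead of calling it unconditionally.
if response.candidates and response.candidates[0].content.parts:
    strr = "row-id" + ":" + response.text          # "row-id" stands in for the id from the loop
else:
    strr = None                                    # blocked or empty response; see the discussion below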

141forever avatar Jun 03 '24 11:06 141forever

@141forever, a similar feature request, #282, has already been filed. Please follow and +1 that issue for updates and close this thread. Thank you!

singhniraj08 avatar Jun 04 '24 04:06 singhniraj08

@singhniraj08 I'm facing a similar problem to the OP's, but since the finish_reason is always OTHER and the issue you linked is about safety, I'm not sure it's related. My prompt is as simple as translating a JSON object with content like "Hi Peter, how's it going?" or "Chief Technology Officer" along with related metadata. Using the same prompt I can easily get a result from ChatGPT, but Gemini (both Pro and Flash) just responds with finish_reason=OTHER, so I don't even know what the problem with the prompt is.
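
For anyone debugging the same thing, the block reason, finish reason, and safety ratings can be read off the response directly instead of going through response.text. A minimal sketch (the API key, model name, and prompt are placeholders):

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")              # placeholder key
model = genai.GenerativeModel('gemini-1.5-flash')    # or 'gemini-1.5-pro'
prompt = "your prompt here"                          # placeholder prompt

response = model.generate_content(prompt)

print("block_reason:", response.prompt_feedback.block_reason)
for candidate in response.candidates:
    print("finish_reason:", candidate.finish_reason.name)   # e.g. STOP, SAFETY, RECITATION, OTHER
    print("safety_ratings:", candidate.safety_ratings)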

trunglebka avatar Jun 07 '24 02:06 trunglebka

Additional information: with the same prompt I always get the expected result via the public Gemini website, https://gemini.google.com. With https://aistudio.google.com/app/prompts/new_chat, it fails every time. I can share the prompt via email if you are interested.

trunglebka avatar Jun 07 '24 03:06 trunglebka

@trunglebka, #282 is about implementing more helpful error messages when a response is blocked by Gemini for safety or other reasons. The SDK doesn't control the service's responses. If you feel a response is being blocked when it shouldn't be, we suggest using the "Send Feedback" option in the Gemini docs. You can also post this issue on the Discourse forum.

singhniraj08 avatar Jun 07 '24 04:06 singhniraj08

@singhniraj08 Thanks for pointing that out; I wasn't paying close attention.

trunglebka avatar Jun 07 '24 04:06 trunglebka

I'm having the same issue. I'm doing some text rewriting on a healthcare USML dataset, and for one specific question I got this error.

baptvit avatar Jun 19 '24 21:06 baptvit

Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.

github-actions[bot] avatar Jul 04 '24 01:07 github-actions[bot]

@baptvit @141forever

Set the safety_settings and generate the content: https://ai.google.dev/gemini-api/docs/safety-settings?hl=en
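
For example, a minimal sketch of the enum-based form from that page (the API key, model name, and prompt here are placeholders):

import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

genai.configure(api_key="YOUR_API_KEY")            # placeholder key
model = genai.GenerativeModel('gemini-1.5-pro')    # placeholder model name

response = model.generate_content(
    "your prompt here",                            # placeholder prompt
    safety_settings={
        # the four adjustable harm categories
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    },
)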

architectyou avatar Jul 18 '24 01:07 architectyou

@architectyou Yeah, I've tried setting the safety_settings:

response = GEMINI_MODEL.generate_content(prompt, safety_settings=[
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
    {"category": "OTHER", "threshold": "BLOCK_NONE"}
])
response.text

But even though I set that up, I still had to catch the exception:

def rewrite_sentence(sentence_original, model=GEMINI_MODEL, template_prompt=TEMPLATE_PROMPT):
    try:
        prompt = template_prompt.replace("<original_sentence>", sentence_original)
        response = model.generate_content(prompt, safety_settings=[
            {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
        ])
        return response.text
    except Exception as e:
        return "Error text"

baptvit avatar Jul 18 '24 12:07 baptvit

Same error here; the safety settings have no effect on this particular issue.

phil-scholarcy avatar Jul 25 '24 12:07 phil-scholarcy

Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.

github-actions[bot] avatar Aug 09 '24 01:08 github-actions[bot]

This issue was closed because it has been inactive for 28 days. Please post a new issue if you need further assistance. Thanks!

github-actions[bot] avatar Aug 24 '24 01:08 github-actions[bot]

Facing the same issue when reviewing code (which is intentionally badly written): The response.text quick accessor only works when the response contains a valid Part, but none was returned. Check the candidate.safety_ratings to see if the response was blocked.

ajitesh123 avatar Oct 11 '24 02:10 ajitesh123

Facing the same issue as well.

Sumeet213 avatar Oct 15 '24 07:10 Sumeet213

Facing the same issue here

brunoedcf avatar Oct 31 '24 13:10 brunoedcf

I'm still seeing the issue. Does Google even care?

ynie avatar Oct 31 '24 16:10 ynie

Pretty much unusable in a production context until this is fixed :(

phil-scholarcy avatar Nov 04 '24 12:11 phil-scholarcy

You need to pass Safety Settings. Here is how I did it in one of my open-source repos: https://github.com/ajitesh123/auto-review-ai/blob/main/backend/llm.py#L177-L198

(Please star my repo if this solves it for you :))

ajitesh123 avatar Nov 04 '24 12:11 ajitesh123

Sadly, changing the safety settings has no effect on this issue.

phil-scholarcy avatar Nov 04 '24 12:11 phil-scholarcy

Same error here too

OGsiji avatar Nov 04 '24 22:11 OGsiji

@phil-scholarcy @OGsiji I can look into this issue for you. Can you please post a minimal working code sample here that causes this error for you? I tried the OP's original code sample and was unable to hit that error.

shilpakancharla avatar Nov 07 '24 05:11 shilpakancharla

Same now with Gemini 2.5, and safety settings have no effect.

hussainbiedouh avatar Jun 08 '25 07:06 hussainbiedouh

Facing the same issue still

NurzihanReya avatar Jul 31 '25 15:07 NurzihanReya