
Candidate was blocked due to SAFETY even though BLOCK_NONE was set

Open kpripper opened this issue 1 year ago • 1 comment

Description of the bug:

[GoogleGenerativeAI Error]: Candidate was blocked due to SAFETY

result.response.candidates[0].safetyRatings has the following value

 [
  { category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT', probability: 'LOW' },
  { category: 'HARM_CATEGORY_HATE_SPEECH', probability: 'NEGLIGIBLE' },
  { category: 'HARM_CATEGORY_HARASSMENT', probability: 'HIGH' },
  {
    category: 'HARM_CATEGORY_DANGEROUS_CONTENT',
    probability: 'NEGLIGIBLE'
  }
]

My model.safetySettings

[
  { category: 'HARM_CATEGORY_HATE_SPEECH', threshold: 'BLOCK_NONE' },
  {
    category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
    threshold: 'BLOCK_NONE'
  },
  { category: 'HARM_CATEGORY_HARASSMENT', threshold: 'BLOCK_NONE' },
  {
    category: 'HARM_CATEGORY_DANGEROUS_CONTENT',
    threshold: 'BLOCK_NONE'
  }
  ]
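
For reference, a sketch of how settings like the array above are typically passed when constructing the model with the Node.js @google/generative-ai SDK. The SDK lines are commented out (they need an API key and the installed package); the model name is an example, and only the small helper is exercised here:

```javascript
// Build a BLOCK_NONE entry for each harm category.
function blockNone(categories) {
  return categories.map((category) => ({ category, threshold: "BLOCK_NONE" }));
}

const safetySettings = blockNone([
  "HARM_CATEGORY_HATE_SPEECH",
  "HARM_CATEGORY_SEXUALLY_EXPLICIT",
  "HARM_CATEGORY_HARASSMENT",
  "HARM_CATEGORY_DANGEROUS_CONTENT",
]);

// Assumed SDK usage (uncomment with a valid API key):
// const { GoogleGenerativeAI } = require("@google/generative-ai");
// const genAI = new GoogleGenerativeAI(process.env.API_KEY);
// const model = genAI.getGenerativeModel({ model: "gemini-pro", safetySettings });
```

Note that even with all four categories set to BLOCK_NONE, some non-configurable blocks can still apply, as discussed below.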

Actual vs expected behavior:

Content should not be blocked.

Any other information you'd like to share?

No response

kpripper avatar Jan 04 '24 21:01 kpripper

Hello, @kpripper! Even with the threshold set to 'BLOCK_NONE', requests are still subject to the built-in policy filters of Google's pretrained model, which developers cannot override. Those non-configurable blocks follow Google's usage policies and norms. You can find the relevant guidelines alongside the safety-settings documentation and in the terms you agree to when using the API. I hope this helps! Regards! (:

TTMOR avatar Jan 12 '24 03:01 TTMOR

@kpripper Could you share more information please? What was your input + API call?

keertk avatar Jan 18 '24 00:01 keertk

Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.

github-actions[bot] avatar Feb 01 '24 01:02 github-actions[bot]

Chamidu@Chamidu-PC:~/Documents/UI 2 WEB$ node app.js
Server is running on http://localhost:3000
Error processing image: GoogleGenerativeAIResponseError: [GoogleGenerativeAI Error]: Candidate was blocked due to RECITATION
    at response.text (/home/Chamidu/node_modules/@google/generative-ai/dist/index.js:256:23)
    at /home/Chamidu/Documents/UI 2 WEB/app.js:75:27
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  response: {
    candidates: [ [Object] ],
    promptFeedback: { safetyRatings: [Array] },
    text: [Function (anonymous)]
  }
}

Can anyone help fix this?
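
One workaround for the crash above is to check the candidate's finish reason before calling response.text(), and fall back gracefully when the candidate was blocked. This is a sketch: the field names mirror the REST API's response shape, and the finish-reason strings are assumptions:

```javascript
// Return { ok: true, text } for a usable candidate, or { ok: false, reason }
// when the candidate was blocked (RECITATION, SAFETY) or missing entirely.
function safeText(response) {
  const candidate = response.candidates && response.candidates[0];
  if (!candidate) {
    return { ok: false, reason: "NO_CANDIDATE" };
  }
  if (candidate.finishReason === "SAFETY" || candidate.finishReason === "RECITATION") {
    return { ok: false, reason: candidate.finishReason };
  }
  // Concatenate the text parts of the candidate's content, if any.
  const parts = (candidate.content && candidate.content.parts) || [];
  return { ok: true, text: parts.map((p) => p.text || "").join("") };
}
```

Calling this instead of response.text() turns the thrown GoogleGenerativeAIResponseError into an inspectable result your app can handle.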

Chamidu0423 avatar Feb 10 '24 22:02 Chamidu0423


This issue was closed because it has been inactive for 28 days. Please post a new issue if you need further assistance. Thanks!

github-actions[bot] avatar Mar 12 '24 01:03 github-actions[bot]

I have a similar issue.

GoogleGenerativeAIResponseError: [GoogleGenerativeAI Error]: Candidate was blocked due to RECITATION

I am sending the prompt: "who are you"

I don't know exactly what causes it; repeating the same request will sometimes trigger the error, and at other times it responds: "I am Gemini, a multimodal AI model, developed by Google."
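
Since the failure is intermittent, a simple retry wrapper can paper over it. This is a sketch under the assumption that the SDK surfaces the block reason in the error message; `generate` is a placeholder for your own call into the API:

```javascript
// Retry a flaky generate call a few times, but only when the error message
// mentions a block reason that appears transient (RECITATION / SAFETY).
async function generateWithRetry(generate, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await generate();
    } catch (err) {
      lastError = err;
      // Non-retryable errors (network, auth, bad request) are rethrown at once.
      if (!/RECITATION|SAFETY/.test(String(err.message))) throw err;
    }
  }
  throw lastError;
}
```

Retrying does not fix the underlying block; it only helps when the same prompt is nondeterministically flagged, as reported in this thread.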

HakimNB avatar Mar 23 '24 14:03 HakimNB

My prompt was just "what is 1 + 1", and I got the same error.

haiffy420 avatar Apr 01 '24 04:04 haiffy420

I got a similar error with the Gemini 1.5 Pro v1beta chat API: "Response was blocked due to OTHER. Inspect response object for details." The response received was {"promptFeedback":{"blockReason":"OTHER"}}.

GoogleGenerativeAIResponseError: [GoogleGenerativeAI Error]: Text not available. Response was blocked due to OTHER

My safety settings are:

const safetySettings = [
  { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_NONE },
  { category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.BLOCK_NONE },
  { category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_NONE },
  { category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_NONE },
];

My prompt is (in Turkish): translate following to English: Opere meme Ca tanısıyla izlenen hastada sağ meme mastektomizedir. Mastektomi lojunda belirgin kitle lezyonu saptanmadı. (Roughly: "In a patient followed with a diagnosis of operated breast Ca, the right breast has been mastectomized. No significant mass lesion was detected in the mastectomy bed.")

I'm getting the same error both in Google AI Studio and in my Node.js program using the API.

Note: the API call rejects this parameter as invalid, even though I can see it in the SDK source code: { category: HarmCategory.HARM_CATEGORY_UNSPECIFIED, threshold: HarmBlockThreshold.BLOCK_NONE },

zakcali avatar Apr 18 '24 06:04 zakcali

I have the same error; after a few minutes it disappeared.

alysonfarias avatar Jun 24 '24 03:06 alysonfarias

My language is Portuguese. I send Gemini the prompt "Gostaria de um resumo sobre a série pousando no amor" ("I'd like a summary of the series Crash Landing on You"), and it returns this error: Error: [GoogleGenerativeAI Error]: Candidate was blocked due to SAFETY at Object.response.functionCalls. But if I send the prompt "Gostaria de um resumo sobre a série Crashing landing on you", it returns the summary successfully. I don't understand...

suelliton avatar Jul 06 '24 21:07 suelliton

It would be good to know why this occurs. The same prompt sometimes works and sometimes does not.

abooo96 avatar Aug 03 '24 20:08 abooo96