generative-ai-docs
Candidate was blocked due to SAFETY even though all thresholds were set to BLOCK_NONE
Description of the bug:
[GoogleGenerativeAI Error]: Candidate was blocked due to SAFETY
result.response.candidates[0].safetyRatings has the following value:
[
  { category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT', probability: 'LOW' },
  { category: 'HARM_CATEGORY_HATE_SPEECH', probability: 'NEGLIGIBLE' },
  { category: 'HARM_CATEGORY_HARASSMENT', probability: 'HIGH' },
  { category: 'HARM_CATEGORY_DANGEROUS_CONTENT', probability: 'NEGLIGIBLE' }
]
My model.safetySettings:
[
  { category: 'HARM_CATEGORY_HATE_SPEECH', threshold: 'BLOCK_NONE' },
  { category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT', threshold: 'BLOCK_NONE' },
  { category: 'HARM_CATEGORY_HARASSMENT', threshold: 'BLOCK_NONE' },
  { category: 'HARM_CATEGORY_DANGEROUS_CONTENT', threshold: 'BLOCK_NONE' }
]
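For anyone wanting to reproduce this, here is a minimal sketch of how such a settings array is built and passed to the model. This assumes the `@google/generative-ai` Node SDK; the string values below are the same ones the SDK's `HarmCategory` and `HarmBlockThreshold` enums resolve to, and with the package installed you would normally import those enums instead of writing strings:

```javascript
// Sketch: build a safetySettings array that disables client-side blocking
// for every adjustable category. With the SDK installed you would import
// HarmCategory / HarmBlockThreshold from '@google/generative-ai' instead
// of using these raw string values.
const HARM_CATEGORIES = [
  'HARM_CATEGORY_HARASSMENT',
  'HARM_CATEGORY_HATE_SPEECH',
  'HARM_CATEGORY_SEXUALLY_EXPLICIT',
  'HARM_CATEGORY_DANGEROUS_CONTENT',
];

// One { category, threshold } entry per category, all set to BLOCK_NONE.
const safetySettings = HARM_CATEGORIES.map((category) => ({
  category,
  threshold: 'BLOCK_NONE',
}));

// With the SDK this is passed when creating the model, e.g.:
//   const model = genAI.getGenerativeModel({ model: 'gemini-pro', safetySettings });
```

Note that, as mentioned above, BLOCK_NONE only relaxes the adjustable client-side filters; the backend can still refuse content under Google's own policies.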
Actual vs expected behavior:
Content should not be blocked.
Any other information you'd like to share?
No response
Hello, @kpripper ! Well...even if you set it to 'BLOCK_NONE', you are still subject to the rules of the Google pretrained model, to which you don't have access. This model, which we don't have access to, governs the blocked categories in the pretrained model, following Google's rules and norms. I believe you can find these guidelines in the section where you look for your security settings and when you agree to use the API. I hope this helps! Regards! (:
@kpripper Could you share more information please? What was your input + API call?
Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.
Chamidu@Chamidu-PC:~/Documents/UI 2 WEB$ node app.js
Server is running on http://localhost:3000
Error processing image: GoogleGenerativeAIResponseError: [GoogleGenerativeAI Error]: Candidate was blocked due to RECITATION
at response.text (/home/Chamidu/node_modules/@google/generative-ai/dist/index.js:256:23)
at /home/Chamidu/Documents/UI 2 WEB/app.js:75:27
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  response: {
    candidates: [ [Object] ],
    promptFeedback: { safetyRatings: [Array] },
    text: [Function (anonymous)]
  }
}
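When `response.text()` throws like this, one workaround is to inspect the response object before reading text. A minimal sketch under assumptions from this thread: `extractText` is a hypothetical helper name of mine, and only the field shapes (`promptFeedback.blockReason`, `candidates[i].finishReason`, `content.parts`) follow what the error dumps above show:

```javascript
// Sketch of a defensive reader for a GenerateContentResponse-shaped object.
// 'extractText' is a hypothetical helper, not part of the SDK; the field
// names match the response shapes reported in this thread.
function extractText(response) {
  // The whole prompt was blocked: there are no candidates to read.
  if (response.promptFeedback && response.promptFeedback.blockReason) {
    return { ok: false, reason: response.promptFeedback.blockReason };
  }
  const candidate = response.candidates && response.candidates[0];
  if (!candidate) {
    return { ok: false, reason: 'NO_CANDIDATES' };
  }
  // An individual candidate was blocked (e.g. SAFETY, RECITATION, OTHER).
  if (candidate.finishReason && candidate.finishReason !== 'STOP') {
    return { ok: false, reason: candidate.finishReason };
  }
  const parts = (candidate.content && candidate.content.parts) || [];
  return { ok: true, text: parts.map((p) => p.text || '').join('') };
}
```

This turns a RECITATION block into a value you can log or retry on, instead of an uncaught exception crashing the handler.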
### Can anyone help fix that?
Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.
This issue was closed because it has been inactive for 28 days. Please post a new issue if you need further assistance. Thanks!
I have a similar issue:
GoogleGenerativeAIResponseError: [GoogleGenerativeAI Error]: Candidate was blocked due to RECITATION
I am sending the prompt: "who are you"
I don't know exactly what causes it. Repeating the request will sometimes trigger that error, while at other times it responds: "I am Gemini, a multimodal AI model, developed by Google."
My prompt is just "what is 1 + 1", and I got the same error.
I got a similar error with the Gemini 1.5 Pro v1beta chat API.
Response was blocked due to OTHER. Inspect response object for details. Response received {"promptFeedback":{"blockReason":"OTHER"}}
GoogleGenerativeAIResponseError: [GoogleGenerativeAI Error]: Text not available. Response was blocked due to OTHER
My safety settings are:

    const safetySettings = [
      { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_NONE },
      { category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.BLOCK_NONE },
      { category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_NONE },
      { category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_NONE },
    ];
My prompt is (in Turkish):
translate following to English: Opere meme Ca tanısıyla izlenen hastada sağ meme mastektomizedir. Mastektomi lojunda belirgin kitle lezyonu saptanmadı.
(Roughly: "In a patient followed up with a diagnosis of operated breast Ca, the right breast has been mastectomized. No distinct mass lesion was detected in the mastectomy bed.")
I'm getting same error both in Google AI Studio, and in my nodejs program using api
Note: the API call rejects this parameter as illegal, even though I see it in the source code:
{ category: HarmCategory.HARM_CATEGORY_UNSPECIFIED, threshold: HarmBlockThreshold.BLOCK_NONE, },
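Since several reports in this thread say the same prompt succeeds on a later attempt, one pragmatic workaround is a small retry loop around the call. A sketch under assumptions: `callModel` and `withRetries` are hypothetical names of mine, and `callModel` stands in for whatever async function issues your actual API request and reports whether the result was blocked:

```javascript
// Sketch: retry a flaky call a few times when the result reports a block.
// 'callModel' is any async function returning { ok, reason?, text? };
// it stands in for your real SDK call, which this thread suggests can
// fail with RECITATION/OTHER intermittently for the same prompt.
async function withRetries(callModel, maxAttempts = 3) {
  let last;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    last = await callModel();
    if (last.ok) return last;
    // Back off briefly before retrying what may be a transient block.
    await new Promise((resolve) => setTimeout(resolve, 250 * attempt));
  }
  return last; // still blocked after all attempts
}
```

This does not explain why the block happens, but it smooths over the intermittent cases people describe above.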
I have the same error; after a few minutes it disappeared.
My language is Portuguese. I send a prompt to Gemini that says: "Gostaria de um resumo sobre a série pousando no amor" ("I'd like a summary of the series Crash Landing on You"), and it returns this error: Error: [GoogleGenerativeAI Error]: Candidate was blocked due to SAFETY at Object.response.functionCalls. But if I send a prompt saying: "Gostaria de um resumo sobre a série Crashing landing on you" (the same request with the series title in English), it returns the summary successfully. I don't understand...
It would be good to know why this occurs. The same prompt sometimes works and sometimes doesn't.