SAFETY RULES for Gemini.
I've tried so many times, and even tried adding my own safety settings, but I'm still getting safety errors.
Huh, that's weird. Gemini is a very recent addition; have you tried the OpenAI or local version yet?
I found one more; can you try this:
// Disable Gemini's safety filters using the @google/generative-ai SDK enums.
import { HarmCategory, HarmBlockThreshold } from "@google/generative-ai";

const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
    threshold: HarmBlockThreshold.BLOCK_NONE
  },
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_NONE
  },
  {
    category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
    threshold: HarmBlockThreshold.BLOCK_NONE
  },
  {
    category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
    threshold: HarmBlockThreshold.BLOCK_NONE
  },
  {
    // Note: the API may reject HARM_CATEGORY_UNSPECIFIED as a safety setting.
    category: HarmCategory.HARM_CATEGORY_UNSPECIFIED,
    threshold: HarmBlockThreshold.BLOCK_NONE
  }
];
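For reference, here's a minimal sketch of how such a safetySettings list is typically passed to Gemini with the @google/generative-ai Node SDK (assuming that's the SDK the Gemini plugin uses; the model name, environment variable, and prompt below are placeholders, not humanify's actual values):

import { GoogleGenerativeAI } from "@google/generative-ai";

// Placeholder API key variable and model name; substitute whatever the
// plugin actually reads from its config.
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");
const model = genAI.getGenerativeModel({
  model: "gemini-pro",
  safetySettings, // the array defined above
});

const result = await model.generateContent("Rename the identifiers in: <obfuscated code>");
console.log(result.response.text());

If a response still gets blocked, result.response.promptFeedback should indicate which category fired, which would help narrow down what in the obfuscated file trips the filter.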
If you have some example code snippets that would result in safety errors, I'd be happy to test against them. It would help to resolve the issue.
@Anooxy17 Could you share the input script that triggers the failure? I'll try it on my company's Vertex AI account to see if we get a different outcome.
@jehna @Manouchehri asd.txt
I've put the whole file here; if possible, please post the output. I only get 3 requests on ChatGPT, and on Gemini it's syntax and safety errors every time. I've been trying to deobfuscate this thing for as long as I can remember, and your script is the only one that gets me to about 60% at most, so if you manage to get an output, feel free to post it. Keep me updated if something's wrong ;)
Thanks for the file! I'll check on it later – unfortunately something came up, so I don't have time to work on humanify for a little while. I'll get back to you a bit later on this 👍
Thanks for responding, good luck 😊
Sorry for another interruption, but I have to ask: any ideas? 😄
Maybe it's killing child processes?
Things that make sense in programming but could be badly interpreted by non-programmers.
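For example (a purely hypothetical snippet, not taken from asd.txt), perfectly ordinary Node.js process handling reads alarmingly if a filter keys on the words alone:

// Ordinary Node.js process management; the vocabulary ("kill", "child")
// could be misread by a content filter out of context.
import { spawn } from "node:child_process";

const child = spawn("node", ["worker.js"]);
// Terminate the child process after five seconds.
setTimeout(() => child.kill("SIGTERM"), 5000);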