Feature: No safety settings when using langchain_google_genai's ChatGoogleGenerativeAI
Feature request
The safety settings exist in the google_generativeai library but are not exposed in the langchain_google_genai library. The safety settings are basically an array of dictionaries passed along when sending the prompt.
Motivation
The problem with not having this is that when we use the ChatGoogleGenerativeAI model, if a prompt violates the default safety settings, the model won't return an answer.
If we could change the safety settings and send them with the prompt to the model, we could fix this issue.
Your contribution
I am currently reading the code of the library and will raise a PR if I can fix the issue.
🤖
That's great to hear that you're looking into the code and considering raising a PR to address this issue. Your contribution will definitely be valuable to the LangChain community. If you have any questions or need any assistance while working on the PR, feel free to ask. Keep up the good work!
I have a similar problem.
I want to use the genai.configure function to set some options, but I don't know how to set them through the langchain_google_genai library.
And I find that the source code only deals with the api_key setting: genai.configure(api_key=google_api_key)
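Until the wrapper exposes more of these options, a possible workaround (an untested sketch on my part) is to call genai.configure yourself, since the wrapper forwards only the API key; note that the wrapper's own configure call at model construction may override this, so ordering matters:

import google.generativeai as genai

# Configure the shared client directly; langchain_google_genai only forwards api_key
genai.configure(
    api_key=google_api_key,
    transport="rest",  # illustrative extra option, not set by the wrapper
)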
Yes, it is quite frustrating because it triggers the safety warnings with most politics-related text or PDFs. I am currently working on fixing it and will raise the PR once it's done.
Just added the PR, will close the issue once it's merged.
Any updates?
Any updates?
@Spritan and @rayanfer32
From the PR above, it looks like you can just add safety_settings=None when you initialize your LangChain model. For example:
langchain_model = ChatGoogleGenerativeAI(
    model="gemini-pro",
    google_api_key=GOOGLE_API_KEY,
    safety_settings=None
)
Or be more specific:
from google.generativeai.types import HarmBlockThreshold, HarmCategory

langchain_model = ChatGoogleGenerativeAI(
    model="gemini-pro",
    google_api_key=GOOGLE_API_KEY,
    temperature=0.2,
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    },
)
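Once constructed, the model is invoked as usual; a quick usage sketch (the prompt string is a placeholder):

response = langchain_model.invoke("your prompt here")
print(response.content)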
Google GenerativeAI is still missing the safety_settings that were added to VertexAI. Without any default values set, Google GenerativeAI is prone to failing silently.
https://github.com/langchain-ai/langchain/pull/15344
https://github.com/langchain-ai/langchain/blob/master/libs/partners/google-genai/langchain_google_genai/llms.py
@baron I think the team has fixed it and it's working for me with version 0.0.9 #16836
@ironerumi Thanks for letting me know! Yes, it seems fixed now. I can finally work with the official wrapper again 👍
It seems like this is still not working? I tried setting BLOCK_NONE for everything and it still won't return a proper response. It still returns "I am not able to answer that question... Would you like me to try something different?" I'm using the below, which I think is close:

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    temperature=0.7,
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    },
)
That's basically what I have too, and unfortunately Google will still flat-out refuse some queries. I highly recommend trying LangSmith to trace these calls so you can precisely narrow down the server response, and adding a fallback mechanism within your chain so a safety block doesn't stop the execution. If you increase debugging it should show more helpful errors (but sometimes it will just fail). You could also try using GoogleGenerativeAI instead.
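To illustrate the fallback idea, here is a minimal sketch assuming LangChain's with_fallbacks() and the GoogleGenerativeAI LLM wrapper from the same package; the model names and prompt are placeholders:

from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAI

chat_llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.7)
# Fall back to the plain LLM wrapper if the chat call raises (e.g. a safety block)
backup_llm = GoogleGenerativeAI(model="gemini-pro")
llm_with_fallback = chat_llm.with_fallbacks([backup_llm])

result = llm_with_fallback.invoke("your prompt here")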
I am curious about this as well. I have tried both

llm = ChatGoogleGenerativeAI(safety_settings=None, model="gemini-pro", temperature=0.7, top_p=0.85)

and the enumerated values, and both times it fails. If anyone has a solution to this please let us know.
It's just my assumption, but lowering the threshold makes Google return more sensitive content; even when set to None it won't return truly harmful content, like how to build a bomb in your kitchen. So for those truly harmful questions, the difference between setting the threshold or not is whether you get an answer like "I cannot tell you that" or the returned content is simply empty.
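If it helps, a hypothetical check to tell the two cases apart, based on the behavior described above (empty content when blocked, refusal text otherwise):

response = llm.invoke(question)
if not response.content:
    print("blocked: the safety filter returned empty content")
else:
    print(response.content)  # may still be a polite refusal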
BLOCK_NONE is restricted; try the following settings:

import google.generativeai as genai

safety_settings = {
    genai.types.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: genai.types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
    genai.types.HarmCategory.HARM_CATEGORY_HARASSMENT: genai.types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
    genai.types.HarmCategory.HARM_CATEGORY_HATE_SPEECH: genai.types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
    genai.types.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: genai.types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
}
Neither BLOCK_NONE nor BLOCK_ONLY_HIGH is working.
Yeah, even when I set BLOCK_ONLY_HIGH it will still block MEDIUM content:
gemini_safety_setting: dict = {}
for category in HarmCategory:
    gemini_safety_setting[category] = HarmBlockThreshold.BLOCK_ONLY_HIGH

# Initialize the ChatGoogleGenerativeAI instance with the API key
self.ai = ChatGoogleGenerativeAI(
    google_api_key=google_api_key,
    model=os.environ.get('GEMINI_MODEL'),
    safety_settings=gemini_safety_setting
)

This still raises:
google.generativeai.types.generation_types.StopCandidateException: index: 0
finish_reason: SAFETY
safety_ratings {
category: HARM_CATEGORY_SEXUALLY_EXPLICIT
probability: MEDIUM
}
safety_ratings {
category: HARM_CATEGORY_HATE_SPEECH
probability: NEGLIGIBLE
}
safety_ratings {
category: HARM_CATEGORY_HARASSMENT
probability: NEGLIGIBLE
}
safety_ratings {
category: HARM_CATEGORY_DANGEROUS_CONTENT
probability: NEGLIGIBLE
}
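One aside on the loop above (an assumption on my part, not confirmed in this thread): iterating over the whole HarmCategory enum also picks up HARM_CATEGORY_UNSPECIFIED and the legacy PaLM-era categories, which the Gemini API may reject; listing the four Gemini categories explicitly sidesteps that:

gemini_safety_setting = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
}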
We really want this setting to work, it's very urgent. Thanks!
I also noticed the parameter on ChatGoogleGenerativeAI is ignored. However, you can pass safety_settings to your model's invoke method (or use model.bind(...) when chaining with LCEL) and it works fine.
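A minimal sketch of both routes (assuming the HarmCategory/HarmBlockThreshold enums from google.generativeai.types; the prompt is a placeholder):

from google.generativeai.types import HarmBlockThreshold, HarmCategory
from langchain_google_genai import ChatGoogleGenerativeAI

safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}

llm = ChatGoogleGenerativeAI(model="gemini-pro")

# Route 1: pass safety_settings per call as an invoke kwarg
result = llm.invoke("your prompt here", safety_settings=safety_settings)

# Route 2: bind the kwarg once so it travels with the model in an LCEL chain
safe_llm = llm.bind(safety_settings=safety_settings)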
Facing this issue too. I was getting empty responses even after passing the necessary safety_settings to ChatGoogleGenerativeAI or its respective chain's invoke method. After some digging I figured the kwargs get lost somewhere in the propagation of calls, resulting in safety_settings being None at ChatGoogleGenerativeAI._generate. Workaround for now: patching in an explicit safety_settings. Will update if I figure out something better.
from functools import partial

llm = ChatGoogleGenerativeAI(...)
safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE
}

# store the original method
og_generate = ChatGoogleGenerativeAI._generate
# patch: route _generate through this instance's bound method with explicit safety_settings
ChatGoogleGenerativeAI._generate = partial(llm._generate, safety_settings=safety_settings)

chain = RetrievalQAWithSourcesChain(...)
result = chain.invoke({"question": question, ...})

# revert the patch
ChatGoogleGenerativeAI._generate = og_generate
@PrajwalPrashanth brilliant, you saved my day!
@PrajwalPrashanth amazing, you saved my day too!
One thing, when using stream:
from functools import partial

llm = ChatGoogleGenerativeAI(...)
safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE
}

# store the original method
og_stream = ChatGoogleGenerativeAI._stream
# patch the streaming path the same way
ChatGoogleGenerativeAI._stream = partial(llm._stream, safety_settings=safety_settings)

result = llm.stream({"question": question, ...})

# revert the patch
ChatGoogleGenerativeAI._stream = og_stream
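One caveat (my reading of LangChain's streaming, not verified here): llm.stream(...) returns a lazy generator that only hits _stream when iterated, so consume it before reverting the patch:

for chunk in result:
    print(chunk.content, end="")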