
Feature: No safety settings when using langchain_google_genai's ChatGoogleGenerativeAI

Open 7vansh7 opened this issue 1 year ago • 6 comments

Feature request

The safety settings exist in the google_generativeai library but are not exposed in the langchain_google_genai library. The safety settings are basically an array of dictionaries passed when sending the prompt.
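
For reference, a minimal sketch of what those settings look like when calling google.generativeai directly (the model name, prompt, and threshold are illustrative placeholders, not from this issue):

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")

# safety_settings is the array of dictionaries mentioned above,
# passed alongside the prompt on each request
response = model.generate_content(
    "Summarize recent election coverage.",
    safety_settings=[
        {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"},
    ],
)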

Motivation

The problem with not having this is that when we use the ChatGoogleGenerativeAI model, if a prompt violates the default safety settings, the model won't return your answer.

If we could change the safety settings and send them with the prompt to the model, we could fix this issue.

Your contribution

I am currently reading the code of the library and will raise a PR if I can fix the issue.

7vansh7 avatar Dec 23 '23 09:12 7vansh7

🤖

That's great to hear that you're looking into the code and considering raising a PR to address this issue. Your contribution will definitely be valuable to the LangChain community. If you have any questions or need any assistance while working on the PR, feel free to ask. Keep up the good work!


dosubot[bot] avatar Dec 23 '23 09:12 dosubot[bot]

I have a similar problem. I want to use the genai.configure function to set other options, but I don't know how to set these through the langchain_google_genai library. And I find that the source code only deals with setting the api_key: genai.configure(api_key=google_api_key).

lrbmike avatar Dec 23 '23 10:12 lrbmike

Yes, it is quite frustrating because it triggers the safety warnings with most politics-related text or PDFs. I am currently working on fixing it and will raise the PR once it's done.

7vansh7 avatar Dec 23 '23 10:12 7vansh7

Just added the PR; will close the issue once it's merged.

7vansh7 avatar Dec 23 '23 16:12 7vansh7

Any updates?

Spritan avatar Jan 16 '24 11:01 Spritan

Any updates?

rayanfer32 avatar Jan 17 '24 13:01 rayanfer32

@Spritan and @rayanfer32 From the PR above, it looks like you can just add safety_settings=None when you initialize your langchain model. For example:

langchain_model = ChatGoogleGenerativeAI(
    model="gemini-pro",
    google_api_key=GOOGLE_API_KEY,
    safety_settings=None,
)

Or be more specific:

# assuming HarmCategory and HarmBlockThreshold are imported, e.g.
# from langchain_google_genai import HarmBlockThreshold, HarmCategory
langchain_model = ChatGoogleGenerativeAI(
    model="gemini-pro",
    google_api_key=GOOGLE_API_KEY,
    temperature=0.2,
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    },
)

vinnyricciardi avatar Feb 01 '24 14:02 vinnyricciardi

Google GenerativeAI is still missing the safety_settings that were added to VertexAI. Without any default values set, Google GenerativeAI is prone to failing silently.

https://github.com/langchain-ai/langchain/pull/15344

https://github.com/langchain-ai/langchain/blob/master/libs/partners/google-genai/langchain_google_genai/llms.py

baron avatar Feb 09 '24 12:02 baron

@baron I think the team has fixed it and it's working for me with version 0.0.9 #16836

ironerumi avatar Feb 15 '24 12:02 ironerumi

@ironerumi Thanks for letting me know! Yes, it seems fixed now. I can finally work with the official wrapper again 👍

baron avatar Feb 15 '24 22:02 baron

It seems like this is still not working? I tried setting BLOCK_NONE for everything and it still won't return a proper response. It still returns "I am not able to answer that question... Would you like me to try something different?" I'm using the below, which I think is close:

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    temperature=0.7,
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    },
)

blackslashcreative avatar Feb 23 '24 16:02 blackslashcreative

That's basically what I have too, and unfortunately Google will still flat-out refuse some queries. I highly recommend you try LangSmith to trace these calls so you can precisely narrow down the server response, and add a fallback mechanism within your chain so it doesn't stop the execution. If you increase debugging it should show more helpful errors (but sometimes it will just fail). You could also try using GoogleGenerativeAI instead.

baron avatar Feb 23 '24 23:02 baron
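
A minimal sketch of the fallback idea described above, assuming a second model instance to fall back to (the model names and prompt are placeholders, not from this thread):

from langchain_google_genai import ChatGoogleGenerativeAI

primary = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.7)
backup = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.0)

# with_fallbacks wraps the runnable so an exception from the primary model
# (e.g. a safety block) falls through to the backup instead of halting the chain
chain = primary.with_fallbacks([backup])
result = chain.invoke("Summarize this politics-related article...")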

I am curious about this as well. I have tried using both llm = ChatGoogleGenerativeAI(safety_settings=None, model="gemini-pro", temperature=0.7, top_p=0.85) and the enumerated values, and both times it fails. If anyone has a solution to this, please let us know.

EthanNadler avatar Mar 13 '24 19:03 EthanNadler

It's just my assumption, but when lowering the threshold Google will return more sensitive content; even when set to None, it won't return truly harmful content, like describing how to make a bomb in the kitchen or something.

So when asking those truly harmful questions, the difference between setting the threshold or not is whether you get an answer like "I cannot tell you that" or the returned content is empty.

ironerumi avatar Mar 15 '24 04:03 ironerumi
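
A minimal sketch of telling those two outcomes apart in code (the content attribute follows LangChain's AIMessage interface; handle_blocked_response is a hypothetical handler):

result = llm.invoke("...")
if not result.content:
    # blocked outright: the model returned empty content
    handle_blocked_response()  # hypothetical
else:
    # the model answered, possibly with a refusal like "I cannot tell you that"
    print(result.content)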

'BLOCK_NONE' is restricted; try the following settings:

import google.generativeai as genai

safety_settings = {
    genai.types.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: genai.types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
    genai.types.HarmCategory.HARM_CATEGORY_HARASSMENT: genai.types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
    genai.types.HarmCategory.HARM_CATEGORY_HATE_SPEECH: genai.types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
    genai.types.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: genai.types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
}

MaharshiYeluri02 avatar Mar 30 '24 13:03 MaharshiYeluri02

Neither BLOCK_NONE nor BLOCK_ONLY_HIGH is working.

marwan-elsafty avatar May 31 '24 12:05 marwan-elsafty

Yeah, even when I set BLOCK_ONLY_HIGH it will still block MEDIUM content:

gemini_safety_setting: dict = {}
for category in HarmCategory:
    gemini_safety_setting[category] = HarmBlockThreshold.BLOCK_ONLY_HIGH

# Initialize the ChatGoogleGenerativeAI instance with the API key
self.ai = ChatGoogleGenerativeAI(
    google_api_key=google_api_key,
    model=os.environ.get('GEMINI_MODEL'),
    safety_settings=gemini_safety_setting,
)

google.generativeai.types.generation_types.StopCandidateException: index: 0
finish_reason: SAFETY
safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: MEDIUM
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}

ChungNYCU avatar Jun 11 '24 01:06 ChungNYCU
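
A minimal sketch of catching the exception from the trace above so it doesn't crash the app (the import path is taken from the traceback; chat, user_input, and the fallback message are placeholders):

from google.generativeai.types.generation_types import StopCandidateException

try:
    msg = chat.invoke(user_input)  # chat: a ChatGoogleGenerativeAI instance
except StopCandidateException:
    # finish_reason == SAFETY: the candidate was blocked despite BLOCK_ONLY_HIGH
    msg = "Sorry, the model declined to answer that."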

We really want this setting to work; it's very urgent. Thanks.

ez945y avatar Jun 17 '24 09:06 ez945y

I also noticed the parameter on ChatGoogleGenerativeAI is ignored. However, you can pass safety_settings to your model's invoke method (or use model.bind(...) when chaining with LCEL) and it works fine.

jfperusse-bhvr avatar Jun 28 '24 20:06 jfperusse-bhvr
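
A minimal sketch of that workaround, assuming langchain_google_genai re-exports HarmCategory and HarmBlockThreshold (the model name and prompts are placeholders):

from langchain_google_genai import ChatGoogleGenerativeAI, HarmBlockThreshold, HarmCategory

safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}

llm = ChatGoogleGenerativeAI(model="gemini-pro")

# option 1: pass the settings per call
result = llm.invoke("Summarize this article...", safety_settings=safety_settings)

# option 2: bind them once so they ride along inside an LCEL chain
llm_with_safety = llm.bind(safety_settings=safety_settings)
result = llm_with_safety.invoke("Summarize this article...")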

Facing this issue too. I was getting empty responses even after passing the necessary safety_settings to ChatGoogleGenerativeAI or its respective chain's invoke method. After some digging, I figured the kwargs get lost somewhere in the propagation of calls, resulting in safety_settings being None by the time ChatGoogleGenerativeAI._generate runs.

Workaround for now: patching with an explicit safety_settings. Will update if I figure out something better.

from functools import partial

from langchain.chains import RetrievalQAWithSourcesChain
from langchain_google_genai import ChatGoogleGenerativeAI, HarmBlockThreshold, HarmCategory

llm = ChatGoogleGenerativeAI(...)
safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}

# store the original method so it can be restored later
og_generate = ChatGoogleGenerativeAI._generate

# patch: force the explicit safety_settings into every _generate call
ChatGoogleGenerativeAI._generate = partial(llm._generate, safety_settings=safety_settings)

chain = RetrievalQAWithSourcesChain(...)
result = chain.invoke({"question": question, ...})

# revert the patch
ChatGoogleGenerativeAI._generate = og_generate

PrajwalPrashanth avatar Jul 23 '24 23:07 PrajwalPrashanth

@PrajwalPrashanth brilliant, you saved my day!

madox2 avatar Jul 30 '24 15:07 madox2

@PrajwalPrashanth amazing, you saved my day too!

One thing, when using stream:

from functools import partial

from langchain_google_genai import ChatGoogleGenerativeAI, HarmBlockThreshold, HarmCategory

llm = ChatGoogleGenerativeAI(...)
safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}

# store the original method so it can be restored later
og_stream = ChatGoogleGenerativeAI._stream

# patch: force the explicit safety_settings into every _stream call
ChatGoogleGenerativeAI._stream = partial(llm._stream, safety_settings=safety_settings)

result = llm.stream({"question": question, ...})

# revert the patch
ChatGoogleGenerativeAI._stream = og_stream

vitorecomp avatar Jul 30 '24 21:07 vitorecomp