NeMo-Guardrails
How to integrate NeMo-guardrails with Gemini Pro
I tried this code:

```
!rm -r config
!pip install -q -U google-generativeai
```

```python
import pathlib
import textwrap

import google.generativeai as genai

from IPython.display import display
from IPython.display import Markdown


def to_markdown(text):
    text = text.replace('•', ' *')
    return Markdown(textwrap.indent(text, '> ', predicate=lambda _: True))


# Configure the API key (redacted here; see the note below about invalidating leaked keys)
genai.configure(api_key="<GOOGLE_API_KEY>")

# Create the GenerativeModel
model = genai.GenerativeModel('gemini-pro')
model
```

```python
%%time
response = model.generate_content("What is the meaning of life?")
response
```

If you're running this inside a notebook, patch the AsyncIO loop:

```python
import nest_asyncio
nest_asyncio.apply()
```

**Step 1: create a new guardrails configuration**

Every guardrails configuration must be stored in a folder. The standard folder structure is as follows:
```
.
├── config
│   ├── actions.py
│   ├── config.py
│   ├── config.yml
│   ├── rails.co
│   ├── ...
```

See the Configuration Guide for information about the contents of these files.
Create a folder, such as `config`, for your configuration:

```
!mkdir config
```

> A subdirectory or file config already exists.

Create a `config.yml` file with the following content:

```yaml
%%writefile config/config.yml
models:
  - type: main
    engine: genai
    model: gemini-pro
```

> Overwriting config/config.yml

```python
from langchain.base_language import BaseLanguageModel
from nemoguardrails.llm.providers import register_llm_provider


class CustomLLM(BaseLanguageModel):
    """A custom LLM."""


register_llm_provider("custom_llm", CustomLLM)
```

```
!pip install Cmake
```

> Requirement already satisfied: Cmake in d:\all programs install\anaconda\lib\site-packages (3.28.3)

```
!pip install nemoguardrails --use-deprecated=legacy-resolver
```

The `models` key in the `config.yml` file configures the LLM model. For a complete list of supported LLM models, see Supported LLM Models.
**Step 2: load the guardrails configuration**

To load a guardrails configuration from a path, create a `RailsConfig` instance using the `from_path` method in your Python code:

```python
from nemoguardrails import RailsConfig

config = RailsConfig.from_path("./config")
```

**Step 3: use the guardrails configuration**

Use this configuration by creating an `LLMRails` instance and calling the `generate` method (or `generate_async` in async contexts) in your Python code:

```python
from nemoguardrails import LLMRails

rails = LLMRails(config)

response = rails.generate(messages=[{
    "role": "user",
    "content": "Hello!"
}])
print(response)
```
```
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Cell In[22], line 3
      1 from nemoguardrails import LLMRails
----> 3 rails = LLMRails(config)
      5 response = rails.generate(messages=[{
      6     "role": "user",
      7     "content": "Hello!"
      8 }])
      9 print(response)

File D:\all programs install\anaconda\Lib\site-packages\nemoguardrails\rails\llm\llmrails.py:204, in LLMRails.__init__(self, config, llm, verbose)
    201 self._validate_config()
    203 # Next, we initialize the LLM engines (main engine and action engines if specified).
--> 204 self._init_llms()
    206 # Next, we initialize the LLM Generate actions and register them.
    207 llm_generation_actions_class = (
    208     LLMGenerationActions
    209     if config.colang_version == "1.0"
    210     else LLMGenerationActionsV2dotx
    211 )

File D:\all programs install\anaconda\Lib\site-packages\nemoguardrails\rails\llm\llmrails.py:321, in LLMRails._init_llms(self)
    318 if llm_config.engine == "openai":
    319     msg += " Please install langchain-openai using `pip install langchain-openai`."
--> 321 raise Exception(msg)
    323 provider_cls = get_llm_provider(llm_config)
    324 # We need to compute the kwargs for initializing the LLM

Exception: Unknown LLM engine: genai.
```
Hi @ahsan3219,

There is a VertexAI LangChain wrapper that you can use. Is there a reason for not using it?
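(For reference, a minimal sketch of using that wrapper directly; this assumes the `langchain-google-vertexai` package is installed and Google Cloud credentials are configured, and it is not taken from the code above.)

```python
# Minimal sketch: using the LangChain VertexAI wrapper directly.
# Assumes `pip install langchain-google-vertexai` and GCP application
# default credentials (e.g. `gcloud auth application-default login`).
from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="gemini-pro")
print(llm.invoke("What is the meaning of life?"))
```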
I don't fully understand the code you shared above. In this part, are you implementing the full LangChain wrapper over VertexAI? That seems complex and duplicates the work of the wrapper already existing in LangChain:
```python
class CustomLLM(BaseLanguageModel):
    """A custom LLM."""


register_llm_provider("custom_llm", CustomLLM)
```
Here you are also registering a custom LLM provider for NeMo Guardrails called `"custom_llm"`, but in the `config.yml` file you call it `"genai"` (which does not exist).
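(If you did intend to keep the custom provider, the engine name in `config.yml` would have to match the registered name. A minimal sketch, assuming `CustomLLM` were actually a complete implementation:)

```yaml
models:
  - type: main
    engine: custom_llm
    model: gemini-pro
```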
If you opt to use the existing VertexAI wrapper from LangChain, all you need to do is have this in your `config.yml`:

```yaml
engine: vertexai
model: gemini-pro
parameters:
  model_name: gemini-pro
```
In the next release, you will be able to use it without adding both model and model_name in the config.
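Putting it together, the full `config.yml` would look something like this (following the same `models` structure as the earlier config):

```yaml
models:
  - type: main
    engine: vertexai
    model: gemini-pro
    parameters:
      model_name: gemini-pro
```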
@ahsan3219 : Make sure you invalidate the key you shared in the code above: https://github.com/NVIDIA/NeMo-Guardrails/security/secret-scanning/3.
Thank you!
Actually, there are a lot of version conflicts. This is an example from using Gemini with nemoguardrails:

```
langchain-google-vertexai 0.1.0 requires langchain-core<0.2,>=0.1.27, but you have langchain-core 0.2.0 which is incompatible.
langchain-community 0.2.17 requires langchain-core<0.3.0,>=0.2.39, but you have langchain-core 0.2.0 which is incompatible.
```
Please have a look into this! Thanks
Hi @parth-verma7,

Try the v0.10.0 release of NeMo Guardrails, and make sure to upgrade the other packages; I can see the most recent version of langchain-google-vertexai is 2.0.3.
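A sketch of what that upgrade might look like (the exact version pins may vary; package names are taken from the conflict messages above):

```
pip install nemoguardrails==0.10.0
pip install -U langchain-google-vertexai langchain-core langchain-community
```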
Hope it helps.