
bug: Colang 2.0 issue when using LangChain

knitzschke opened this issue · 2 comments

Did you check docs and existing issues?

  • [x] I have read all the NeMo-Guardrails docs
  • [x] I have updated the package to the latest version before submitting this issue
  • [ ] (optional) I have used the develop branch
  • [x] I have searched the existing issues of NeMo-Guardrails

Python version (python --version)

Python 3.11.8

Operating system/version

Windows 11 Enterprise

NeMo-Guardrails version (if you must use a specific version and not the latest)

0.11.0

nemoguardrails==0.11.0 langchain==0.3.4 langchain-community==0.3.3 langchain-core==0.3.12 langchain-openai==0.2.3

Describe the bug

I am trying to use Colang 2.x in my LangChain app for a beta example. I am using LangChain with an Azure OpenAI model endpoint and trying to get the Dialog Rails example (hello_world_3) with llm continuation from the NVIDIA docs to work, following the example here: https://docs.nvidia.com/nemo/guardrails/colang_2/getting_started/dialog-rails.html

However, when I try to invoke the chain to test whether the "hi" rail is working, I get the following error:

ValueError: The `output_vars` option is not supported for Colang 2.0 configurations.

Which originates from here: https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/nemoguardrails/rails/llm/llmrails.py#L882

I am able to use Colang 1 successfully, but I am unable to use Colang 2 inside the LangChain chain. When I use nemoguardrails chat I can test the "hi" rail, but the llm continuation doesn't seem to work when I type the input from the example above, i.e. I am not getting any response back and it just spins.
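For reference, the CLI invocation for that check (assuming the ./config folder shown below) is:

nemoguardrails chat --config=./config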

My ./config/main.co file contains the following:

import core
import llm

flow main
  activate llm continuation
  activate greeting

flow greeting
  user expressed greeting
  bot express greeting

flow user expressed greeting
  user said "hi" or user said "hello"

flow bot express greeting
  bot say "Hello World! Im working for you!"
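Not shown above is the config.yml next to main.co; Colang 2 configs require it to set the Colang version, so at minimum it should contain:

colang_version: "2.x"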

Within a Jupyter notebook I have the following:

import os
from dotenv import load_dotenv

load_dotenv(override=True)

import nest_asyncio

nest_asyncio.apply()

from langchain_openai import AzureChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

from nemoguardrails import RailsConfig, LLMRails
from nemoguardrails.integrations.langchain.runnable_rails import RunnableRails


model = AzureChatOpenAI(
....
# credentials submitted here:
)

output_parser = StrOutputParser()
prompt = ChatPromptTemplate.from_template("{topic}")

chain = prompt | model | output_parser

config = RailsConfig.from_path("./config/")
rails = RunnableRails(config)

chain_with_guardrails = prompt | (rails | model) | output_parser

text = "hi"

chain_with_guardrails.invoke(text)
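As a sanity check, here is a minimal sketch of exercising the same config directly through LLMRails, bypassing RunnableRails entirely (it reuses the config and model objects from above; passing the LangChain model via the llm argument is my assumption about the simplest wiring):

from nemoguardrails import LLMRails

rails_direct = LLMRails(config, llm=model)

# Call the rails directly with a chat-style message, no LangChain chain involved
response = rails_direct.generate(messages=[{"role": "user", "content": "hi"}])
print(response)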

Steps To Reproduce

Follow printed output above for example set up and jupyter notebook.
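When reproducing, it may also be worth trying the other composition shown in the RunnableRails docs, i.e. wrapping the whole chain instead of just the model (a sketch; I have not verified whether it behaves any differently):

# Wrap the entire prompt | model | output_parser chain in the guardrails
guarded_chain = rails | chain
guarded_chain.invoke({"topic": "hi"})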

Expected Behavior

  1. When invoking the chain in Jupyter, I should not get an error and should instead receive the response from the rails flow defined in main.co.
  2. When testing in the nemoguardrails CLI, I should get an LLM-generated response for "how are you", instead of it constantly spinning up new workflows.

Actual Behavior

Described above"

knitzschke · Dec 03 '24