NeMo-Guardrails
Issue with custom bot_message
I wanted to add a fixed prefix to all my bot responses, so I had a Colang file of this format:
define bot refuses to respond
  "Guardrail response: Sorry I cannot respond"

define bot respond to the question
  "ABC bot response: $bot_message"
But it leads to generations like:
ABC bot response: ABC bot response: Hello! How are you?
This is most likely because the prefixed bot responses end up in the chat history, which then influences the subsequent replies. Is there a way to resolve this?
What exactly are you trying to achieve? Make a distinction between predefined messages and LLM generated messages? Or something else?
Yes, I wish to make a distinction between LLM-generated and predefined/guardrail messages. I decided to add a prefix so that they would be discernible, but the prefix then seeped into the chat history and started appearing in the actual LLM responses.
@drazvan let me know if any other information is needed, thanks!
Is this for debugging purposes? If yes, you could look for this in the logs: https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/nemoguardrails/actions/llm/generation.py#L706 (maybe raise the level to Warning).
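Alternatively, rather than editing the source, you could raise the logging verbosity so that line shows up. A minimal sketch, assuming the package uses the standard logging module under a nemoguardrails logger namespace:

import logging

# Make INFO-level log lines (including the "bot message" line from
# nemoguardrails/actions/llm/generation.py) visible on the console.
logging.basicConfig(level=logging.INFO)
logging.getLogger("nemoguardrails").setLevel(logging.INFO)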
A clean solution would be to use the generation options (https://github.com/NVIDIA/NeMo-Guardrails/blob/main/docs/user_guides/advanced/generation-options.md#detailed-logging-information) and set the log.llm_calls option to True.
res = rails.generate(messages=messages, options={
    "log": {
        "llm_calls": True,
    }
})
Then you can inspect the response to check whether there are any LLM calls with the task set to generate_bot_message.
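For example, a minimal sketch of that check, assuming the detailed log is exposed as res.log.llm_calls with a task attribute on each entry:

# If any logged LLM call ran the generate_bot_message task, the reply
# came from the LLM; otherwise it was a predefined (guardrail) message.
llm_calls = res.log.llm_calls or []
llm_generated = any(call.task == "generate_bot_message" for call in llm_calls)
print("LLM-generated" if llm_generated else "Predefined guardrail message")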
Let me know if this works.
@drazvan It's for presentation purposes, so that the user can understand whether the response is from the LLM or is a fixed guardrail response.
I see. You can use the suggested route with the generation options and prepend the text before you send it to the UI. Are you using the Chat CLI, Server UI, or something else?
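For the prepending step, a minimal sketch, reusing the llm_generated flag from the snippet above and assuming res.response comes back as a list of messages when generate is called with a messages list:

# Prepend a marker only when the reply did not come from the LLM.
bot_text = res.response[0]["content"]
if not llm_generated:
    bot_text = "PRE-DEFINED MESSAGE: " + bot_text
print(bot_text)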
I am using the chat CLI. Is there a mode or a way to use the chat CLI such that it does not use the chat history (the previous messages) to generate the next bot response?
Here's a quick hack for you. Replace line 87 here: https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/nemoguardrails/cli/chat.py#L87 with:
if not streaming or not rails_app.main_llm_supports_streaming:
    # We print bot messages in green.
-   print(Styles.GREEN + f"{bot_message['content']}" + Styles.RESET_ALL)
+   message = bot_message["content"]
+   for messages in rails_config.bot_messages.values():
+       if bot_message["content"] in messages:
+           message = "PRE-DEFINED MESSAGE: " + message
+
+   print(Styles.GREEN + f"{message}" + Styles.RESET_ALL)
Sample output on the ABC config:
Starting the chat (Press Ctrl + C twice to quit) ...
> hi
Hello! How can I help you today?
> you are stupid!
PRE-DEFINED MESSAGE: I'm sorry, I can't respond to that.
@drazvan The hack resolved the issue, thanks.