Guard Module
The guard decorator makes it easy to add a protective step on top of an LLM chain.
For example:
@Guard(restrictions=['must not talk about politics or political figures'], llm=llm, retries=1)
def call_chain():
    return chain.run(adjective="political")
This use of @Guard will ask the provided LLM to check whether the output of chain.run violates the given restriction. If it does, it will retry once before raising an error.
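Roughly, the flow inside the decorator is as follows (a simplified sketch of the idea, not the exact code in this PR; normalize_boolean_output stands in for the boolean normalization helper described below, and the prompt wording is illustrative):

def Guard(restrictions, llm, retries=1):
    def decorator(func):
        def wrapper(*args, **kwargs):
            # Run the wrapped function, then ask the LLM whether the output breaks a restriction.
            for _ in range(retries + 1):
                output = func(*args, **kwargs)
                check_prompt = (
                    "Does the following text violate any of these restrictions: "
                    f"{restrictions}? Answer yes or no.\n\nText: {output}"
                )
                # normalize_boolean_output (sketched below) maps the raw LLM reply to True/False
                if not normalize_boolean_output(llm(check_prompt)):
                    return output
            raise ValueError("Output violated the guard restrictions after all retries.")
        return wrapper
    return decorator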
While writing this decorator I also had to write a boolean normalization function, so that is included as well. Normalization functions to translate LLM responses into JSON, lists, etc. would be a great feature to add in the future.
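For reference, that helper is roughly this shape (an illustrative sketch, not necessarily the exact function in the PR):

def normalize_boolean_output(text: str) -> bool:
    # Strip whitespace and trailing punctuation, then map common LLM phrasings to a bool.
    cleaned = text.strip().strip(".!").lower()
    if cleaned in ("yes", "y", "true"):
        return True
    if cleaned in ("no", "n", "false"):
        return False
    raise ValueError(f"Could not normalize LLM response to a boolean: {text!r}")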
Still need to make some additions to the docs and to test on agents, not just chains. Boolean normalization is fully tested.
- I like how you're thinking big and making this its own documentation section
- If you are doing that, let's split out guards and normalization into their own modules (I would actually make them separate modules in the code, because I think normalization could be used in output parsing)
- Relatedly, I would call it output_parsing rather than normalization
- For the documentation section, let's add a how-to section with examples. Right now the getting started notebook has something like five examples in it; let's split those up and have them be their own examples in the documentation

Happy to help with those comments later in the week!
@hwchase17 sounds great on all of those! I completely agree about the normalization; I just had to add that function to make the RestrictionGuard work. Should I open up another PR just for the normalization stuff? @BenderV you already have the parser interface, right? So the thought is that we make a normalization module that is called by the parser interface?
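Concretely, I'm picturing something along these lines (purely a sketch of the proposal; BooleanOutputParser is a hypothetical name, and I'm assuming the parser interface is roughly a class with a parse(text) method):

class BooleanOutputParser:
    """Hypothetical parser that delegates to the proposed normalization module."""

    def parse(self, text: str) -> bool:
        # normalize_boolean_output would live in the new normalization module
        return normalize_boolean_output(text)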
I'll see if I can put together the example doc tomorrow!
@hwchase17 renaming the module to "Alignment" instead of "Guards"; see @pruksmhc's comment above. I'm putting guards as a sub-section in the alignment module.
"yp_yurilee" (not sure of their GitHub @) made the very good point that moderating user input is also a great way of guarding an application. Since guards are just decorators that wrap any function that outputs a string, it should be easy to apply them to user input. Of course it depends on the application, but something like
@RestrictionGuard(restrictions=['must not request violence'], llm=llm, retries=1)
def get_user_input():
    return input()
could be very useful. We should include this sort of design in the examples and elsewhere in the guard docs.
@John-Church I'm yp_yurilee :) Happy to help with docs for integrating user guards. I'm also thinking through personality alignment.
Oh ok cool!! That makes sense @pruksmhc! :)
Would love the help!! I haven't written any how-to guides yet. I think one specifically for user guarding would be amazing
@John-Church I can get to it later tonight.
@John-Church I think I'm going to need permission to push to that branch to help with documentation. However, I tried restriction guard with Streamlit and it seems to work well! Could use that for the how-to on user guards.
import streamlit as st
from langchain.guards.guards import RestrictionGuard
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

@RestrictionGuard(restrictions=['must not request violence'], llm=llm, retries=0)
def return_text_input():
    return st.text_input("You:", value='')
just added you, @pruksmhc !!
And yes, @pruksmhc, for the user input I think having a few Streamlit examples would be great! It may also be good to start with just input() for the first few, since it's what everyone already knows and it can be run easily in an ipynb.
@hwchase17 docs are updated, and the file organization you changed around looks good to me. There are a few nice-to-haves that we haven't added yet but could in the future, like a SentimentGuard, but I think it's worth shipping with the current functionality and adding on later.
fixed merge conflicts and took over in https://github.com/hwchase17/langchain/pull/1637
closing in favor of #1637