[feat] add async guard support to langchain integration
Description: Currently, AsyncGuard cannot be used with the LangChain integration.
Why is this needed: A concrete use case is running a guarded LangChain chain asynchronously. The following example should work:
import asyncio

from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

from guardrails import AsyncGuard
from guardrails.hub import DetectPII

model = ChatOpenAI(model="gpt-4o")

guard = AsyncGuard().use_many(
    DetectPII(pii_entities=["PERSON", "LOCATION"])
)

prompt = ChatPromptTemplate.from_template("Answer this question {question}")
output_parser = StrOutputParser()

chain = prompt | model | guard.to_runnable() | output_parser

async def main():
    result = await chain.ainvoke(
        {"question": "What are the top five airlines for domestic travel in the US?"}
    )
    print(result)

asyncio.run(main())
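In the meantime, a possible workaround (a sketch, not something from this issue; it assumes AsyncGuard.validate(...) is awaitable and returns a ValidationOutcome with validation_passed and validated_output, and the validate_output helper name is made up) is to run the validation through a RunnableLambda after parsing the model output to a string:

# Workaround sketch: wrap the async validation in a RunnableLambda so the
# whole chain stays awaitable end to end. Assumes AsyncGuard.validate(...)
# is awaitable and returns a ValidationOutcome (check your guardrails version).
from langchain_core.runnables import RunnableLambda

async def validate_output(text: str) -> str:
    outcome = await guard.validate(text)
    if not outcome.validation_passed:
        raise ValueError("Guard validation failed")
    return outcome.validated_output or text

# The parser runs before validation here so the guard receives a plain string.
workaround_chain = prompt | model | output_parser | RunnableLambda(validate_output)

Because RunnableLambda wraps a coroutine function here, this chain only supports workaround_chain.ainvoke(...), which matches the async use case above.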
Implementation details: It looks like we need async overloads in https://github.com/guardrails-ai/guardrails/blob/main/guardrails/integrations/langchain/guard_runnable.py, or probably a separate AsyncGuardRunnable class.
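A rough sketch of what such an AsyncGuardRunnable could look like is below. This is an illustration only, not the repository's implementation: the class name comes from the suggestion above, and the message handling plus the assumption that AsyncGuard.validate(...) is awaitable and returns a ValidationOutcome are assumptions.

# Illustrative sketch only; names and behavior are assumed, not the repo's code.
from typing import Any, Optional

from langchain_core.messages import BaseMessage
from langchain_core.prompt_values import PromptValue
from langchain_core.runnables import Runnable, RunnableConfig

from guardrails import AsyncGuard


class AsyncGuardRunnable(Runnable):
    """Runnable wrapper around AsyncGuard for use inside a LangChain chain."""

    def __init__(self, guard: AsyncGuard):
        self.guard = guard

    def invoke(self, input: Any, config: Optional[RunnableConfig] = None, **kwargs: Any) -> Any:
        # AsyncGuard has no synchronous path; point callers at ainvoke.
        raise NotImplementedError("AsyncGuardRunnable only supports `ainvoke`.")

    async def ainvoke(self, input: Any, config: Optional[RunnableConfig] = None, **kwargs: Any) -> Any:
        # Pull the raw text out of whatever the previous step produced.
        if isinstance(input, BaseMessage):
            text = input.content
        elif isinstance(input, PromptValue):
            text = input.to_string()
        else:
            text = str(input)

        # Assumption: AsyncGuard.validate is awaitable and returns a ValidationOutcome.
        outcome = await self.guard.validate(text)
        if not outcome.validation_passed:
            raise ValueError("Guard validation failed")
        validated = outcome.validated_output or text

        # Keep the original message type so downstream parsers still work.
        if isinstance(input, BaseMessage):
            input.content = validated
            return input
        return validated

With something along these lines, guard.to_runnable() on an AsyncGuard could return an AsyncGuardRunnable, and the example above would run under chain.ainvoke(...).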
End result: The example above should execute correctly.