mem0
Add support to pass Callback Handlers
🚀 The feature
Langchain supports passing callback handlers to the function that invokes the LLM. mem0 should accept these handlers and pass them through to Langchain, as sketched below.
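For illustration only, here is a minimal sketch of what forwarding caller-supplied handlers could look like. The generate_response wrapper and its callbacks parameter are hypothetical, not part of mem0's current API; only the Langchain calls (RunnableConfig, ainvoke) are real.

from typing import Optional

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.runnables import Runnable, RunnableConfig


async def generate_response(
    runnable: Runnable,
    question: str,
    callbacks: Optional[list[BaseCallbackHandler]] = None,
):
    # Hypothetical wrapper: invoke a Langchain runnable and forward any
    # caller-supplied callback handlers via RunnableConfig, so tools such
    # as Chainlit can observe each intermediate step.
    config = RunnableConfig(callbacks=callbacks or [])
    return await runnable.ainvoke({"question": question}, config=config)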
Motivation, pitch
This would allow tools like Chainlit to display step-by-step outputs, as in the following Chainlit handler:
import chainlit as cl
from langchain_core.runnables import Runnable, RunnableConfig


@cl.on_message
async def on_message(message: cl.Message):
    runnable = cl.user_session.get("runnable")  # type: Runnable
    msg = cl.Message(content="")

    # Passing Chainlit's callback handler surfaces each intermediate step in the UI.
    async for chunk in runnable.astream(
        {"question": message.content},
        config=RunnableConfig(callbacks=[cl.LangchainCallbackHandler()]),
    ):
        await msg.stream_token(chunk)

    await msg.send()
Thank you for opening this issue, @ramnathv! We're looking into it. 🌟
If you're interested, we'd welcome your contribution on this. Feel free to ask for any guidance you need.
Happy coding! 🚀