LangGraph Support
Is your feature request related to a problem? Please describe.
I'm trying to use LangGraph with Chainlit, and when I run my workflow I would like to see the Steps the graph takes. However, the Step class can only be used in an async context, while the graph is constructed out of synchronous functions.
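For context, bridging that async/sync gap today means something like the following rough sketch (untested; as_step is a hypothetical helper of mine, and it assumes cl.run_sync can hop from the worker thread that cl.make_async uses back onto Chainlit's event loop):

import chainlit as cl

def as_step(fn):
    # Hypothetical wrapper: render each call of a synchronous LangGraph
    # node as a Chainlit Step by scheduling a coroutine on the event loop.
    def wrapper(state):
        async def traced():
            async with cl.Step(name=fn.__name__) as step:
                result = fn(state)
                step.output = str(result)
                return result
        return cl.run_sync(traced())
    return wrapper

# e.g. workflow.add_node("retrieve", as_step(retrieve_with_retriever))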
Describe the solution you'd like
Given some on_message decorator function like so:
@cl.on_message
async def on_message(message: cl.Message):
    """Handle a message.

    Args:
        message (cl.Message): User prompt input
    """
    app = cl.user_session.get("app")
    # Currently functions, without steps
    res = await cl.make_async(app.invoke)({"keys": {"question": message.content}})
It results in the following output in my terminal (based on https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_self_rag_mistral_nomic.ipynb):
2024-03-08 22:10:08 - Use pytorch device_name: cpu
---RETRIEVE---
---CHECK RELEVANCE---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---DECIDE TO GENERATE---
---DECISION: GENERATE---
---GENERATE---
---GRADE GENERATION vs DOCUMENTS---
---DECISION: SUPPORTED, MOVE TO FINAL GRADE---
---FINAL GRADE---
---GRADE GENERATION vs QUESTION---
---DECISION: USEFUL---
However, Chainlit gives me only the final answer, with none of these intermediate steps surfaced in the UI.
A LangGraph is constructed from nodes and then compiled into an application; here's my implementation:
def create_workflow(config, retriever):
    from functools import partial

    workflow = StateGraph(GraphState)

    retrieve_with_retriever = partial(retrieve, retriever=retriever)
    grade_documents_with_local_llm = partial(grade_documents, local_llm=config["model"])
    generate_with_local_llm = partial(generate, local_llm=config["model"])
    transform_query_with_local_llm = partial(transform_query, local_llm=config["model"])
    grade_generation_v_documents_with_local_llm = partial(
        grade_generation_v_documents, local_llm=config["model"]
    )
    grade_generation_v_question_with_local_llm = partial(
        grade_generation_v_question, local_llm=config["model"]
    )

    workflow.add_node("retrieve", retrieve_with_retriever)
    workflow.add_node("grade_documents", grade_documents_with_local_llm)
    workflow.add_node("generate", generate_with_local_llm)
    workflow.add_node("transform_query", transform_query_with_local_llm)
    workflow.add_node("prepare_for_final_grade", prepare_for_final_grade)

    workflow.set_entry_point("retrieve")
    workflow.add_edge("retrieve", "grade_documents")
    workflow.add_conditional_edges(
        "grade_documents",
        decide_to_generate,
        {
            "transform_query": "transform_query",
            "generate": "generate",
        },
    )
    workflow.add_edge("transform_query", "retrieve")
    workflow.add_conditional_edges(
        "generate",
        grade_generation_v_documents_with_local_llm,
        {
            "supported": "prepare_for_final_grade",
            "not supported": "generate",
        },
    )
    workflow.add_conditional_edges(
        "prepare_for_final_grade",
        grade_generation_v_question_with_local_llm,
        {
            "useful": END,
            "not useful": "transform_query",
        },
    )
    return workflow.compile()
For each node defined, a step should be generated with the name of that function and its return value. Here's what a node function might look like:
def decide_to_generate(state):
    """
    Determines whether to generate an answer, or re-generate a question.

    Args:
        state (dict): The current state of the agent, including all keys.

    Returns:
        str: Next node to call
    """
    print("---DECIDE TO GENERATE---")
    state_dict = state["keys"]
    question = state_dict["question"]
    filtered_documents = state_dict["documents"]

    if not filtered_documents:
        # All documents were filtered out by check_relevance,
        # so we will re-generate a new query
        print("---DECISION: TRANSFORM QUERY---")
        return "transform_query"
    else:
        # We have relevant documents, so generate an answer
        print("---DECISION: GENERATE---")
        return "generate"
In this method, the initial document retrievals would be reflected in the retrieve node's step; otherwise, most steps would just return a string representing the output sent to the graph's state machine.
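Until something like that exists, a rough approximation is possible from the caller's side, since LangGraph's stream method yields a {node_name: state_update} mapping per executed node. A sketch (untested; it reuses the {"keys": ...} state shape from above and collects the stream eagerly for simplicity):

import chainlit as cl

@cl.on_message
async def on_message(message: cl.Message):
    app = cl.user_session.get("app")
    inputs = {"keys": {"question": message.content}}
    # Run the blocking generator in a worker thread and collect the
    # per-node outputs; true streaming would need a thread-to-loop bridge.
    outputs = await cl.make_async(lambda: list(app.stream(inputs)))()
    for output in outputs:
        for node_name, state in output.items():
            # One step per executed node, named after the node.
            async with cl.Step(name=node_name) as step:
                step.output = str(state)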
Describe alternatives you've considered
Generating a DAG based on the graph configuration, or allowing some kind of manual method for defining what I outlined above, rather than generating the steps for the user.
Additional context
n/a
I saw the langroid implementation that achieves a similar result: https://github.com/langroid/langroid/blob/main/langroid/agent/callbacks/chainlit.py#L566
+1 on this. We definitely need something for LangGraph, as the dev team at LangChain is moving heavily toward it. The impact will be huge.
If a project doesn't require streaming, this is simple to implement. But that is rarely the case: streaming is a must. Here are a few thoughts.
- LangChain is moving to an event-based architecture (astream_events), and so can LangGraph, since it is built on LangChain's Runnable interface.
- Events like on_chat_model_start, on_tool_start, etc. are predefined in the callbacks module of the langchain_core package.
- Projects like Streamlit implemented their LangChain support by defining a custom callback handler in the langchain_community package.
- Although, because of how much flexibility LangGraph gives users, defining a callback handler is not as straightforward: users can define ever more complicated graphs.
- We could still generalize it somewhat, with events like on_node_start, etc.; see the sketch below.
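To make the event-based route concrete, here is a rough sketch (my own, not an established Chainlit API) built on astream_events from langchain_core; with the v1 event schema, each LangGraph node surfaces as an on_chain_start/on_chain_end pair whose name field is the node name:

import chainlit as cl

@cl.on_message
async def on_message(message: cl.Message):
    app = cl.user_session.get("app")
    inputs = {"keys": {"question": message.content}}
    async for event in app.astream_events(inputs, version="v1"):
        # Note: on_chain_end also fires for the graph itself and any
        # nested chains; real code would filter on event["name"] or tags.
        if event["event"] == "on_chain_end":
            async with cl.Step(name=event["name"]) as step:
                step.output = str(event["data"].get("output"))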
LangGraph is the sweet spot in the abstraction game, and Chainlit is the sweet spot for chatbot interfaces. Huge impact.
Hello, an example has been added to Chainlit/cookbook for LangGraph support in this PR. Feel free to enhance it with more functionality!