
LangGraph Support

tylertitsworth opened this issue 3 months ago · 1 comment

Is your feature request related to a problem? Please describe. I'm trying to use LangGraph with Chainlit, and when I run my workflow I would like to see the Steps the graph takes. However, the Step class can only be used in an async context, and the graph is constructed from synchronous callables.
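To illustrate the mismatch, here's a minimal sketch around Chainlit's async Step context manager (the retrieve stub stands in for any of my nodes):

import chainlit as cl

async def traced():
    # Steps work inside async code:
    async with cl.Step(name="retrieve") as step:
        step.output = "..."

def retrieve(state):
    # ...but a LangGraph node is a plain synchronous function,
    # so there is no async context to open a Step from.
    ...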

Describe the solution you'd like Given an on_message handler like so:

@cl.on_message
async def on_message(message: cl.Message):
    """Handle a message.

    Args:
        message (cl.Message): User prompt input
    """
    app = cl.user_session.get("app")
    # This currently works, but no steps are shown
    res = await cl.make_async(app.invoke)({"keys": {"question": message.content}})

Running it produces the following output in my terminal (based on https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_self_rag_mistral_nomic.ipynb):

2024-03-08 22:10:08 - Use pytorch device_name: cpu
---RETRIEVE---
---CHECK RELEVANCE---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---DECIDE TO GENERATE---
---DECISION: GENERATE---
---GENERATE---
---GRADE GENERATION vs DOCUMENTS---
---DECISION: SUPPORTED, MOVE TO FINAL GRADE---
---FINAL GRADE---
---GRADE GENERATION vs QUESTION---
---DECISION: USEFUL---

However, Chainlit gives me the following, with none of these steps shown: [screenshot omitted]

A LangGraph is constructed from nodes and then compiled into an application; here's my implementation:

from functools import partial

def create_workflow(config, retriever):
    workflow = StateGraph(GraphState)

    retrieve_with_retriever = partial(retrieve, retriever=retriever)
    grade_documents_with_local_llm = partial(grade_documents, local_llm=config["model"])
    generate_with_local_llm = partial(generate, local_llm=config["model"])
    transform_query_with_local_llm = partial(transform_query, local_llm=config["model"])
    grade_generation_v_documents_with_local_llm = partial(
        grade_generation_v_documents, local_llm=config["model"]
    )
    grade_generation_v_question_with_local_llm = partial(
        grade_generation_v_question, local_llm=config["model"]
    )

    workflow.add_node("retrieve", retrieve_with_retriever)
    workflow.add_node("grade_documents", grade_documents_with_local_llm)
    workflow.add_node("generate", generate_with_local_llm)
    workflow.add_node("transform_query", transform_query_with_local_llm)
    workflow.add_node("prepare_for_final_grade", prepare_for_final_grade)

    workflow.set_entry_point("retrieve")
    workflow.add_edge("retrieve", "grade_documents")
    workflow.add_conditional_edges(
        "grade_documents",
        decide_to_generate,
        {
            "transform_query": "transform_query",
            "generate": "generate",
        },
    )
    workflow.add_edge("transform_query", "retrieve")
    workflow.add_conditional_edges(
        "generate",
        grade_generation_v_documents_with_local_llm,
        {
            "supported": "prepare_for_final_grade",
            "not supported": "generate",
        },
    )
    workflow.add_conditional_edges(
        "prepare_for_final_grade",
        grade_generation_v_question_with_local_llm,
        {
            "useful": END,
            "not useful": "transform_query",
        },
    )

    return workflow.compile()

For each node defined, a step should be generated with that function's name and its return value. Here's what a node function might look like:

def decide_to_generate(state):
    """
    Determines whether to generate an answer, or re-generate a question.

    Args:
        state (dict): The current state of the agent, including all keys.

    Returns:
        str: Next node to call
    """

    print("---DECIDE TO GENERATE---")
    state_dict = state["keys"]
    question = state_dict["question"]
    filtered_documents = state_dict["documents"]

    if not filtered_documents:
        # All documents were filtered out by check_relevance
        # We will re-generate a new query
        print("---DECISION: TRANSFORM QUERY---")
        return "transform_query"
    else:
        # We have relevant documents, so generate answer
        print("---DECISION: GENERATE---")
        return "generate"

With this approach, the initial document retrievals would be reflected in the retrieve step, while most other steps would just return a string representing the output sent to the graph's state machine.
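In the meantime, a rough manual approximation of that behavior is possible. The sketch below uses a hypothetical wrap_node helper, and assumes cl.run_sync can hop back onto Chainlit's event loop from the worker thread that cl.make_async runs the compiled graph in:

import chainlit as cl

def wrap_node(name, fn):
    """Hypothetical helper: record each call of a sync node as a Chainlit step."""
    def wrapped(state):
        result = fn(state)  # run the node synchronously in the worker thread

        async def record():
            async with cl.Step(name=name) as step:
                step.output = str(result)

        cl.run_sync(record())  # only the UI update hops to the event loop
        return result
    return wrapped

# Usage inside create_workflow:
# workflow.add_node("retrieve", wrap_node("retrieve", retrieve_with_retriever))

Running the node outside the step keeps the event loop free while the node executes; the step only records the result after the fact.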

Describe alternatives you've considered Generating a DAG based on the graph configuration, or allowing some kind of manual method for defining what I outlined above, rather than generating the steps for the user automatically.
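One such manual method could lean on the compiled graph's stream() API, which yields one {node_name: state} mapping per executed node (as in the referenced LangGraph notebook). A sketch, with the caveat that collecting the stream first means the steps only appear once the graph has finished:

@cl.on_message
async def on_message(message: cl.Message):
    app = cl.user_session.get("app")
    inputs = {"keys": {"question": message.content}}
    # Drain the stream off the event loop, then render one step per node.
    outputs = await cl.make_async(lambda: list(app.stream(inputs)))()
    for output in outputs:
        for node_name, state in output.items():
            async with cl.Step(name=node_name) as step:
                step.output = str(state)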

Additional context n/a

tylertitsworth · Mar 09 '24