
[FR]: How to configure pydantic-ai with langgraph + LogFire so all LLM traces are unified in a single trace?

Open lucasboscatti opened this issue 3 months ago • 5 comments

Proposal summary

I’m experimenting with langgraph and tracing via Opik. Here’s the behavior I’m seeing:

  • With langgraph + langchain: When I trace execution using Opik, all LLM-related information (inputs, outputs, tokens, pricing, timing) is captured in a single trace, distributed across the nodes of the graph.

  • With langgraph + pydantic-ai (tracing enabled via LogFire with capture_all=True): Instead of one unified trace, I get multiple separate traces, which makes it harder to visualize the entire flow in a single execution graph.


Question: How can I configure pydantic-ai so that all LLM-related traces are logged within a single trace and distributed across the nodes of the graph, similar to how langgraph handles it with langchain?

I’m exploring pydantic-ai because langchain feels too complex for my use case.

Motivation

No response

lucasboscatti avatar Sep 26 '25 14:09 lucasboscatti

Thank you for raising this issue. We'll take a look.

CC: @Lothiraldan

dsblank avatar Oct 01 '25 20:10 dsblank

Hi @lucasboscatti - can you please share the code you are using, so that we can reproduce this quickly?

YarivHashaiComet avatar Oct 22 '25 09:10 YarivHashaiComet

Not @lucasboscatti, but I was able to reproduce this when wrapping pydantic-ai calls inside langgraph nodes.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END
import logfire
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

# Record spans locally only; no export to Logfire cloud for this repro
logfire.configure(send_to_logfire=False)

# TestModel returns canned output, so no real LLM calls are made
agent = Agent(TestModel(custom_output_text="response"))

class State(TypedDict):
    messages: list[str]

def node1(state: State) -> State:
    # Each node opens its own span; with no active parent span,
    # each span becomes the root of a separate trace.
    with logfire.span("agent_call_1"):
        result = agent.run_sync("prompt 1")
    return {"messages": state["messages"] + [result.output]}

def node2(state: State) -> State:
    with logfire.span("agent_call_2"):
        result = agent.run_sync("prompt 2")
    return {"messages": state["messages"] + [result.output]}

graph = StateGraph(State)
graph.add_node("node1", node1)
graph.add_node("node2", node2)
graph.add_edge(START, "node1")
graph.add_edge("node1", "node2")
graph.add_edge("node2", END)

app = graph.compile()
app.invoke({"messages": []})  # produces two separate traces, one per node

collincunn avatar Oct 30 '25 14:10 collincunn

Hey, I forgot about this issue, sorry. I switched to Langchain (unfortunately), and it worked as expected. My setup was very similar to what @collincunn did: create a node and call pydantic-ai from inside it. The call isn't recognized as a child of the graph's trace, so it creates a separate span.

lucasboscatti avatar Oct 30 '25 15:10 lucasboscatti

Not on the Opik team, but below is a fix.

This can be fixed by wrapping everything in a single span:

...
with logfire.span("langgraph_workflow"):
    result = app.invoke({"messages": []})

But I also recommend you check out pydantic-graph for graph stuff as opposed to langgraph if you prefer pydantic AI for agent/LLM execution.

Please close if this works for you!

collincunn avatar Oct 30 '25 20:10 collincunn