Show what the agent is doing
Hi, is there a way to show what the main agent and its sub-agents are doing? Specifically, I'd like to know which tool is being used and what it's doing, so I can see the internal process the agent is performing. I'm currently using it like this:
agent = create_deep_agent(
tools=[internet_search],
system_prompt=contextual_research_instructions,
subagents=[critique_sub_agent, research_sub_agent],
model=llm
)
result = await agent.ainvoke({"messages": [{"role": "user", "content": query}]})
response = result["messages"][-1].content
Use LangSmith's tracing capability.
In LangSmith, the trace shows the main agent (research) coordinating multiple sub-agents and tools (internet_search, critique_sub_agent, etc.), along with their corresponding model calls (gpt-4o) and outputs. The waterfall view makes it easy to follow the agent's internal reasoning chain: from user input → tool calls → sub-agent responses → final synthesis.
To enable this kind of visibility, just set the following environment variables:
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_PROJECT="your_project_name"
export LANGCHAIN_API_KEY="your_langsmith_api_key"
Once configured, every agent.invoke() or agent.ainvoke() call automatically sends traces to smith.langchain.com, giving you a full hierarchical breakdown of the main agent and its sub-agents in real time.
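If you prefer to configure this from Python (e.g. in a notebook) rather than the shell, setting the same variables via os.environ before creating the agent works too; the values below are placeholders:

import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "your_project_name"       # placeholder
os.environ["LANGCHAIN_API_KEY"] = "your_langsmith_api_key"  # placeholder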
@ameen7626 Sorry, I explained myself poorly. What I want is to capture what the main agent is doing programmatically, so that I can display it myself.
You need to use astream_events while invoking the agent. The resulting stream yields events that carry all the data you need to achieve this. Look at LangGraph's documentation on streaming to see what the various events mean.
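For example, something along these lines (a minimal sketch; version="v2" and the event names below are the common ones documented for the LangChain/LangGraph event stream):

async for event in agent.astream_events(
    {"messages": [{"role": "user", "content": query}]},
    version="v2",
):
    kind = event["event"]
    if kind == "on_tool_start":
        # A tool such as internet_search is about to run with these inputs
        print(f"Tool {event['name']} started: {event['data'].get('input')}")
    elif kind == "on_tool_end":
        print(f"Tool {event['name']} finished")
    elif kind == "on_chat_model_stream":
        # Individual tokens from the model as they are generated
        print(event["data"]["chunk"].content, end="", flush=True)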
Thank you for sharing the steps, @ameen7626 @allthatido. That's what I was looking for; I thought it worked differently with agents. Thank you.
We recommend using the stream or astream API. These are the dedicated LangGraph APIs for streaming.
The astream_events API will also work, but it's an older API that was optimized for when orchestration was done with LCEL.
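As a minimal sketch of the recommended approach, stream_mode="updates" yields each node's state update as it completes, which covers model calls, tool calls, and sub-agent hand-offs (pass subgraphs=True as well if you want updates from inside the sub-agents' own graphs):

async for chunk in agent.astream(
    {"messages": [{"role": "user", "content": query}]},
    stream_mode="updates",
):
    # Each chunk maps a node name to the state update that node produced,
    # e.g. the model node's new AI message or a tool node's result.
    for node, update in chunk.items():
        print(f"[{node}] {update}")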
You can also use a simple LoggingMiddleware to trace what the agent is doing before and after each model call:
from typing import Any

from langchain.agents.middleware import AgentMiddleware, AgentState
from langgraph.runtime import Runtime

class LoggingMiddleware(AgentMiddleware):
    def before_model(self, state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
        # Runs just before each model call; state["messages"] holds the
        # conversation so far, including earlier tool calls and results.
        print(f"About to call the model with state: {state}")
        return None  # returning None leaves the state unchanged

    def after_model(self, state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
        # Runs right after each model call, so the model's latest message
        # (including any tool calls it requested) is visible here.
        print(f"Model returned state: {state}")
        return None
...
agent = create_deep_agent(
model=model,
tools=[internet_search],
middleware=[LoggingMiddleware()],
system_prompt=research_instructions,
subagents=[critique_sub_agent, research_sub_agent],
)
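Invoked the same way as before, this prints the full message state around every model call in the main loop, so you can watch tool calls and sub-agent results accumulate in state["messages"] as the run progresses. Note that the middleware here is attached to the top-level agent; as far as I know, the sub-agents need it passed in their own configurations if you want the same logging inside them.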