
Agno Tracing Support

Open adiberk opened this issue 5 months ago • 10 comments

Description

Agno uses OpenTelemetry. Does this mean it is supported automatically, without any extra configuration?

https://docs.agno.com/observability/introduction

adiberk avatar Jul 08 '25 20:07 adiberk

Yes, install openinference-instrumentation-agno and use it like so:

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from openinference.instrumentation.agno import AgnoInstrumentor

import logfire

logfire.configure()              # set up Logfire as the OTel backend
AgnoInstrumentor().instrument()  # patch Agno to emit OpenInference spans

agent = Agent(model=OpenAIChat(id='gpt-4o-mini'))
agent.print_response('Hi')

alexmojaki avatar Jul 09 '25 12:07 alexmojaki


Seems to work! Though it doesn't look the same as, say, the OpenAI Agents SDK, which shows more detailed information: tool calls in highlights, agents calling each other, and so on.

This might be because of custom tracing spans? But it would be cool to have a "handoff" concept for team-based executions, as well as tool highlights (input and response), etc.

adiberk avatar Jul 09 '25 13:07 adiberk

I think that's just a limitation of openinference-instrumentation-agno or agno itself. There's not much we can do without improving one of those libraries.

samuelcolvin avatar Jul 09 '25 14:07 samuelcolvin

> I think that's just a limitation of openinference-instrumentation-agno or agno itself. There's not much we can do without improving one of those libraries.

Could be? Other tracing providers (AgentOps, Langfuse, etc.) seem to detect and display the tool calls and responses correctly.

adiberk avatar Jul 13 '25 17:07 adiberk

If the data isn't in the spans, then it doesn't matter where it's being sent.

If the data is there but not displayed nicely, that's more fixable. But they're not making it easy. For example, the OpenAIChat.invoke span has the attribute input.value containing:

{
  "messages": [
    "role='user' content='Hi' name=None tool_call_id=None tool_calls=None audio=None images=None videos=None files=None audio_output=None image_output=None thinking=None redacted_thinking=None provider_data=None citations=None reasoning_content=None tool_name=None tool_args=None tool_call_error=None stop_after_tool_call=False add_to_agent_memory=True from_history=False metrics=MessageMetrics(input_tokens=0, output_tokens=0, total_tokens=0, audio_tokens=0, input_audio_tokens=0, output_audio_tokens=0, cached_tokens=0, cache_write_tokens=0, reasoning_tokens=0, prompt_tokens=0, completion_tokens=0, prompt_tokens_details=None, completion_tokens_details=None, additional_metrics=None, time=None, time_to_first_token=None, timer=None) references=None created_at=1752436356"
  ],
  "tools": []
}

That message isn't in an easy-to-parse format.
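To make the problem concrete, here is a minimal sketch (with a shortened stand-in for the full attribute value above): the outer envelope parses as JSON, but each message inside it is just a repr-style string, so there's no structured way to get at the role or content.

```python
import json

# Shortened stand-in for the input.value attribute shown above.
attr = '{"messages": ["role=\'user\' content=\'Hi\' name=None"], "tools": []}'

data = json.loads(attr)      # parsing the outer envelope works fine...
msg = data["messages"][0]
assert isinstance(msg, str)  # ...but each message is an opaque string,
print(msg)                   # not a JSON object you can pull fields from
```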

alexmojaki avatar Jul 13 '25 20:07 alexmojaki

> If the data isn't in the spans, then it doesn't matter where it's being sent.
>
> If the data is there but not displayed nicely, that's more fixable. But they're not making it easy. For example, the OpenAIChat.invoke span has the attribute input.value containing: [...]
>
> That message isn't in an easy-to-parse format.

Hm, I hear you. I wish it were following a standard. Is there a standard for agent spans and how they should be formatted? Maybe I can also contribute a PR to capture those spans and translate them into the expected format.

adiberk avatar Jul 13 '25 20:07 adiberk

@alexmojaki - let me know if there is anything I can do. It seems to use a format similar to how Datadog sends log traces (key=value pairs instead of JSON). I can also raise an issue with them if you have suggestions.

adiberk avatar Jul 21 '25 17:07 adiberk

I think it's just str() on a pydantic model, in which case making it produce JSON instead should be easy and would help a lot.

The exact standard for how messages are stored in spans is currently being overhauled in https://github.com/open-telemetry/semantic-conventions/pull/2179.

alexmojaki avatar Jul 21 '25 18:07 alexmojaki

@alexmojaki

I have been talking to the Agno team. From what I understand, they say they are using the OpenTelemetry standard and standard OpenTelemetry fields.

They say the instrumentor works fine with Langfuse, Arize, and some others, and that the agent input, responses, and tool calls all show up properly.

Can you point out what exactly isn't standard? Maybe I can get it to work myself if need be.

Here is the link to the Agno instrumentor:

https://pypi.org/project/openinference-instrumentation-agno/

adiberk avatar Oct 16 '25 12:10 adiberk

They're using the OTel standard in the basic sense, but not the semantic conventions. input.value is not a conventional attribute. The conventions are at https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/#inference. The input messages would go in gen_ai.input.messages as JSON following a specific schema.
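For illustration, a hedged sketch of what a conventional gen_ai.input.messages value could look like: a JSON string with one object per message. The GenAI semantic conventions are still evolving, so the exact field names here are illustrative rather than authoritative.

```python
import json

# Illustrative shape only: one object per message, structured content parts.
messages = [
    {"role": "user", "parts": [{"type": "text", "content": "Hi"}]},
]
attr_value = json.dumps(messages)  # the span attribute would carry this JSON

# Unlike a repr-style message string, this round-trips cleanly
# back into structured data that a UI can render:
assert json.loads(attr_value)[0]["role"] == "user"
print(attr_value)
```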

alexmojaki avatar Oct 21 '25 16:10 alexmojaki