
[bug] Answer from Bedrock models when streaming is JSON, not human-readable text

Open anionescubd opened this issue 2 months ago • 3 comments

Describe the bug

I'm trying to set up tracing using agno. I've successfully set it up for agents that use OpenAI models, but for AWS Bedrock models with streaming enabled, the output is not "prettified": the trace records raw text chunks (as shown in the attached picture). When I disable streaming, the traces look correct and display the actual LLM output as expected, rather than JSON.

To Reproduce

Packages installed:

  • agno==1.8.1
  • yfinance==0.2.66
  • openai==1.92.0
  • arize-phoenix==11.37.0
  • openinference-instrumentation-agno==0.1.15
  • arize-phoenix-otel==0.13.1
  • opentelemetry-exporter-otlp==1.37.0
  • openinference-instrumentation-openai==0.1.33
  • openinference-instrumentation-bedrock==0.1.27
  • openinference-instrumentation-anthropic==0.1.19

I tested with the following code:

```python
import asyncio
import os

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.models.aws import AwsBedrock
from agno.tools.yfinance import YFinanceTools
from phoenix.otel import register
from openinference.instrumentation.agno import AgnoInstrumentor

# Configure the Phoenix tracer
tracer_provider = register(
    project_name="default",
    endpoint="http://127.0.0.1:6006/v1/traces",
    auto_instrument=True,
)
AgnoInstrumentor().instrument(tracer_provider=tracer_provider)

# Create and configure the agent
agent = Agent(
    name="Stock Price Agent",
    model=AwsBedrock(
        id="claude-sonnet-4",
        aws_access_key_id="access key",  # redacted
        aws_secret_access_key="secret key",  # redacted
    ),
    tools=[YFinanceTools()],
    instructions="You are a stock price agent. Answer questions in the style of a stock analyst.",
    debug_mode=True,
)

# Use the agent
agent.print_response("What is the current price of Tesla?", stream=True)
```

Expected behavior

I expected the streaming output to be recorded as the response text, rather than as JSON text chunks.
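For illustration (this is a hypothetical reconstruction, not copied from the actual trace, and the real chunk shape may differ), the streamed output is recorded as a sequence of serialized stream events such as:

```
{"contentBlockDelta": {"delta": {"text": "Tesla (TSLA) is"}, "contentBlockIndex": 0}}
{"contentBlockDelta": {"delta": {"text": " currently trading at..."}, "contentBlockIndex": 0}}
```

whereas the expected trace output would be the assembled text, e.g. "Tesla (TSLA) is currently trading at...".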

Screenshots

[Screenshot: trace output showing the streamed response as raw JSON text chunks]

Desktop (please complete the following information):

  • OS: Ubuntu
  • Version 24.04.1

anionescubd avatar Sep 26 '25 18:09 anionescubd

Hi @anionescubd! I'm Dosu and I'm helping the openinference team.

This is a known limitation: streaming responses from AWS Bedrock models are recorded as JSON chunks in traces because the Bedrock instrumentation does not currently support prettifying or post-processing these outputs into human-readable text. There are no configuration options or hooks to change this behavior, and recent updates have not yet addressed it. The infrastructure for prettifying streaming chunks exists in the codebase (it is used for Anthropic models, for example), but it is not applied to Bedrock streaming responses at this time (source).

If you need prettified output in traces, you would need to implement custom post-processing outside of the current OpenInference instrumentation (see the sketch below), or wait for future updates that might add this feature.
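Here's a minimal sketch of such post-processing, assuming each recorded chunk is a Bedrock ConverseStream event serialized as JSON with its text delta under `contentBlockDelta.delta.text`. The exact shape stored in your spans is an assumption, so adjust the accessors to match what the trace actually contains:

```python
import json


def prettify_bedrock_stream(chunks: list[str]) -> str:
    """Join the text deltas from a list of serialized Bedrock stream events.

    Events without a text delta (e.g. messageStart or metadata events)
    are skipped.
    """
    parts = []
    for raw in chunks:
        # Each chunk is assumed to be one JSON-encoded stream event.
        event = json.loads(raw)
        text = event.get("contentBlockDelta", {}).get("delta", {}).get("text")
        if text:
            parts.append(text)
    return "".join(parts)
```

You could run this over a span's output attribute (e.g. `output.value` in OpenInference semantic conventions) after export, before displaying or re-ingesting the trace.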

To reply, just mention @dosu.



dosubot[bot] avatar Sep 26 '25 18:09 dosubot[bot]

Thanks for filing this issue! We'll take a look!

nate-mar avatar Sep 29 '25 18:09 nate-mar

@dirkbrnd will take a look at this one! Thanks Dirk!

nate-mar avatar Oct 06 '25 21:10 nate-mar