
[feature request] support for gpt-realtime

na-proyectran opened this issue 2 months ago · 3 comments

I noticed that when using openai-agents = "0.3.2" (which includes support for the new gpt-realtime model), traces suddenly stopped working.

I’m using the RealtimeAgent class and followed the integration guide here: https://arize.com/docs/phoenix/integrations/llm-providers/openai/openai-agents-sdk-tracing

This is also the same link referenced in the OpenAI Agents documentation: https://openai.github.io/openai-agents-python/tracing/

Could you clarify whether this is a bug, or if support for the gpt-realtime model still needs to be added?

na-proyectran · Sep 24 '25 00:09

Hi @na-proyectran! I'm Dosu and I’m helping the openinference team.

Tracing support for the gpt-realtime model with openai-agents 0.3.2 and the RealtimeAgent class is not currently implemented in either the Arize Phoenix or OpenInference codebases. There are no explicit references or instrumentation hooks for gpt-realtime or RealtimeAgent, and recent updates have only added these models for cost tracking, not tracing. This means the lack of traces is due to a missing feature or incomplete adaptation for the new model, not a regression or bug in existing tracing hooks or integrations (source).

To reply, just mention @dosu.



dosubot[bot] · Sep 24 '25 00:09

I’m using openai-agents (0.3.2) from OpenAI combined with LlamaIndex for RAG (index, Qdrant client, etc.). I’m not using LlamaIndex agents because they don’t support gpt-realtime yet.

What I want is to be able to instrument LlamaIndex requests.

I have defined:

# Imports (from llama-index, arize-phoenix-otel, and openinference packages)
from llama_index.core import Settings
from llama_index.core.callbacks import LlamaDebugHandler
from llama_index.callbacks.openinference import OpenInferenceCallbackHandler
from phoenix.otel import register
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

# Callback handlers
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
inference_handler = OpenInferenceCallbackHandler()
Settings.callback_manager.set_handlers([llama_debug, inference_handler])

# Arize Phoenix instrumentor
tracer_provider = register(auto_instrument=False)
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)

In Phoenix, I can see when embeddings are generated while indexing to Qdrant, but nothing else.

My hypothesis is that because the LLM calls go through the OpenAI Agents client, LlamaIndex has no knowledge of the LLM (gpt-realtime); in other words, no LLM is defined in Settings. Even so, since all the callbacks are configured, I would expect to at least see the retriever steps.
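If the missing LLM in Settings is indeed the cause, one possible workaround is a sketch like the following. It assumes a standard OpenAI chat model is acceptable as LlamaIndex's default/synthesis LLM while gpt-realtime continues to be handled by openai-agents; the model name is an illustrative assumption, not something from this thread.

```python
# Hypothetical sketch: give LlamaIndex an explicit default LLM so its
# callback manager and instrumentation have LLM context. gpt-realtime
# itself would still be driven through the openai-agents RealtimeAgent.
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI  # requires llama-index-llms-openai

# "gpt-4o-mini" is an assumed placeholder model, not confirmed by the thread.
Settings.llm = OpenAI(model="gpt-4o-mini")
```

Whether this surfaces the retriever spans in Phoenix is untested; it only addresses the "no LLM defined in Settings" part of the hypothesis.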

I'm using:

arize-phoenix = "11.36.0"
arize-phoenix-otel = "0.13.1"
openinference-instrumentation-llama_index = "4.3.5"
llama-index-callbacks-openinference = "0.4.1"

na-proyectran · Sep 24 '25 07:09

Hi @na-proyectran, as a company we are still internally discussing how to enable observability support for the gpt-realtime model.

caroger · Nov 14 '25 01:11