OTEL Metric Support
Hi Team,
This library is great for setting up auto-instrumentation for LLM libraries.
From what I have seen, it looks like it only uses the span (tracing) API from OpenTelemetry.
It would be great if it also supported [OTel Metrics](https://opentelemetry.io/docs/specs/otel/metrics/) to keep counters for total input tokens, output tokens, cached tokens, number of LLM calls, etc.
kujjwal02 created this issue (Arize-ai/openinference#1511) on April 13, 2025.
@kujjwal02 - this is good feedback, and in general I agree that for these instrumentors to be robust for observability, they need to generate metrics too. However, we are currently focused on the tracing side, as that is the biggest pain point our users face right now.
If you have feedback on what types of metrics you'd like to see, please keep us informed! PRs are also welcome if you would like to add some :)
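For anyone picking this up, here is a minimal, dependency-free sketch of the counters the issue asks for. It uses a hypothetical stand-in class that mimics the OTel Counter's `add(value, attributes)` semantics rather than the real OpenTelemetry SDK, and the metric names (`llm.token.count`, `llm.call.count`) are illustrative, not an agreed convention:

```python
# Sketch of the metrics requested in this issue. `Counter` below is a
# hypothetical stand-in mimicking an OpenTelemetry Counter, so the example
# runs without the OTel SDK installed; a real implementation would call
# meter.create_counter(...) instead.
from collections import defaultdict


class Counter:
    """Monotonic counter with OTel-style add(value, attributes) semantics."""

    def __init__(self, name, unit="", description=""):
        self.name = name
        self.unit = unit
        self.description = description
        self._points = defaultdict(int)  # attribute set -> cumulative value

    def add(self, value, attributes=None):
        # Counters are monotonic: only non-negative increments are allowed.
        if value < 0:
            raise ValueError("counter increments must be >= 0")
        key = tuple(sorted((attributes or {}).items()))
        self._points[key] += value

    def value(self, attributes=None):
        key = tuple(sorted((attributes or {}).items()))
        return self._points[key]


# Counters the issue proposes: token usage (split by type) and call count.
token_count = Counter("llm.token.count", unit="{token}",
                      description="Tokens consumed by LLM calls")
call_count = Counter("llm.call.count", unit="{call}",
                     description="Number of LLM calls")

# An instrumentor would record these after each completion, e.g.:
call_count.add(1, {"model": "gpt-4o"})
token_count.add(128, {"model": "gpt-4o", "type": "input"})
token_count.add(512, {"model": "gpt-4o", "type": "output"})
token_count.add(32, {"model": "gpt-4o", "type": "cached"})
```

Splitting token usage by an attribute like `type` (input/output/cached) rather than creating one counter per token kind matches how OTel metrics are normally dimensioned and keeps backend queries simple.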