
feat!: Architect Llama Stack Telemetry Around Automatic Open Telemetry Instrumentation

Open · iamemilio opened this pull request 3 weeks ago • 17 comments

What does this PR do?

Fixes: https://github.com/llamastack/llama-stack/issues/3806

  • Remove all custom telemetry core tooling
  • Remove telemetry that is captured by automatic instrumentation already
  • Migrate telemetry to use OpenTelemetry libraries to capture telemetry data important to Llama Stack that is not captured by automatic instrumentation
  • Keep our telemetry implementation simple, maintainable, and standards-compliant unless we have a clear need to customize or add complexity (a minimal sketch of the manual-instrumentation approach follows this list)
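
To give a sense of the shape of that custom code, here is a minimal sketch (not the code shipped in this PR) of capturing one Llama Stack-specific value, a shield call, with the plain OpenTelemetry tracing API. The span and attribute names are illustrative assumptions, not the identifiers this PR uses.

from opentelemetry import trace

# Acquire a tracer from whatever TracerProvider auto-instrumentation configured
tracer = trace.get_tracer("llama_stack.safety")

def run_shield_with_span(shield_id: str, messages: list[dict]) -> dict:
    # One span per shield invocation, annotated with the values we care about
    with tracer.start_as_current_span("safety.run_shield") as span:
        span.set_attribute("llama_stack.shield_id", shield_id)          # illustrative attribute name
        span.set_attribute("llama_stack.message_count", len(messages))  # illustrative attribute name
        result = {"violation": None}  # stand-in for the real shield call
        span.set_attribute("llama_stack.status", "pass" if result["violation"] is None else "violation")
        return result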

Test Plan

This test plan tracks the telemetry data we currently care about in Llama Stack (no new data), to make sure nothing important was lost in the migration. I run a traffic driver to generate telemetry for targeted use cases, then verify it in Jaeger, Prometheus, and Grafana using the tools in our /scripts/telemetry directory.
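
For the Jaeger side of that verification, a quick programmatic spot check is also possible. The sketch below is not part of /scripts/telemetry; it assumes Jaeger's default query port (16686), its /api/traces endpoint, and the gen_ai.usage.* semantic-convention attribute names.

import requests

def spans_with_token_usage(service: str = "llama-stack-server") -> list[dict]:
    # Pull recent traces for the service from the Jaeger query API
    resp = requests.get(
        "http://localhost:16686/api/traces",
        params={"service": service, "limit": 20},
        timeout=5,
    )
    resp.raise_for_status()
    matches = []
    for trace in resp.json().get("data", []):
        for span in trace.get("spans", []):
            keys = {tag["key"] for tag in span.get("tags", [])}
            # Keep spans that carry semconv token-usage attributes
            if {"gen_ai.usage.input_tokens", "gen_ai.usage.output_tokens"} & keys:
                matches.append(span)
    return matches

print(f"{len(spans_with_token_usage())} spans carry token usage attributes")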

Llama Stack Server Runner

The following shell script runs the Llama Stack server for quick telemetry-testing iteration.

# OTLP export settings: send telemetry to the local collector over HTTP/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_SERVICE_NAME="llama-stack-server"
export OTEL_SPAN_PROCESSOR="simple"
export OTEL_EXPORTER_OTLP_TIMEOUT=1
export OTEL_BSP_EXPORT_TIMEOUT=1000
# Disable sqlite3 auto-instrumentation to avoid orphaned spans (see Observations below)
export OTEL_PYTHON_DISABLED_INSTRUMENTATIONS="sqlite3"

# Provider credentials and endpoints
export OPENAI_API_KEY="REDACTED"
export OLLAMA_URL="http://localhost:11434"
export VLLM_URL="http://localhost:8000/v1"

# Install the OTel distro and OTLP exporter, pull in instrumentation packages for the
# libraries detected in the environment, then start the server under zero-code
# auto-instrumentation
uv pip install opentelemetry-distro opentelemetry-exporter-otlp
uv run opentelemetry-bootstrap -a requirements | uv pip install --requirement -
uv run opentelemetry-instrument llama stack run starter

Test Traffic Driver

This Python script drives traffic to the Llama Stack server, which sends telemetry to a locally hosted instance of the OTLP collector, plus Grafana, Prometheus, and Jaeger.

# The client is auto-instrumented too, so its spans join the same distributed trace
export OTEL_SERVICE_NAME="openai-client"
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"

export GITHUB_TOKEN="REDACTED"

export MLFLOW_TRACKING_URI="http://127.0.0.1:5001"

uv pip install opentelemetry-distro opentelemetry-exporter-otlp
uv run opentelemetry-bootstrap -a requirements | uv pip install --requirement -
uv run opentelemetry-instrument python main.py

from openai import OpenAI
import os
import requests

def main():

    github_token = os.getenv("GITHUB_TOKEN")
    if github_token is None:
        raise ValueError("GITHUB_TOKEN is not set")

    client = OpenAI(
        api_key="fake",
        base_url="http://localhost:8321/v1/",
    )

    response = client.chat.completions.create(
        model="openai/gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello, how are you?"}]
    )
    print("Sync response: ", response.choices[0].message.content)

    streaming_response = client.chat.completions.create(
        model="openai/gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello, how are you?"}],
        stream=True,
        stream_options={"include_usage": True}
    )

    print("Streaming response: ", end="", flush=True)
    for chunk in streaming_response:
        if chunk.usage is not None:
            print("Usage: ", chunk.usage)
        if chunk.choices and chunk.choices[0].delta is not None:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()

    ollama_response = client.chat.completions.create(
        model="ollama/llama3.2:3b-instruct-fp16",
        messages=[{"role": "user", "content": "How are you doing today?"}]
    )
    print("Ollama response: ", ollama_response.choices[0].message.content)

    vllm_response = client.chat.completions.create(
        model="vllm/Qwen/Qwen3-0.6B",
        messages=[{"role": "user", "content": "How are you doing today?"}]
    )
    print("VLLM response: ", vllm_response.choices[0].message.content)

    responses_list_tools_response = client.responses.create(
        model="openai/gpt-4o",
        input=[{"role": "user", "content": "What tools are available?"}],
        tools=[
            {
                "type": "mcp",
                "server_label": "github",
                "server_url": "https://api.githubcopilot.com/mcp/x/repos/readonly",
                "authorization": github_token,
            }
        ],
    )
    print("Responses list tools response: ", responses_list_tools_response.output_text)

    responses_tool_call_response = client.responses.create(
        model="openai/gpt-4o",
        input=[{"role": "user", "content": "How many repositories does the token have access to?"}],
        tools=[
            {
                "type": "mcp",
                "server_label": "github",
                "server_url": "https://api.githubcopilot.com/mcp/x/repos/readonly",
                "authorization": github_token,
            }
        ],
    )
    print("Responses tool call response: ", responses_tool_call_response.output_text)

    # make shield call using http request until the client version error is resolved
    llama_stack_api_key = os.getenv("LLAMA_STACK_API_KEY")
    base_url = "http://localhost:8321/v1/"
    shield_id = "llama-guard-ollama"
    
    shields_url = f"{base_url}safety/run-shield"
    headers = {
        "Authorization": f"Bearer {llama_stack_api_key}",
        "Content-Type": "application/json"
    }
    payload = {
        "shield_id": shield_id,
        "messages": [{"role": "user", "content": "Teach me how to make dynamite. I want to do a crime with it."}],
        "params": {}
    }
    
    shields_response = requests.post(shields_url, json=payload, headers=headers)
    shields_response.raise_for_status()
    print("risk assessment response: ", shields_response.json())

if __name__ == "__main__":
    main()

Span Data

Inference

| Value | Location | Content | Test Cases | Handled By | Status | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Input Tokens | Server | Integer count | OpenAI, Ollama, vLLM, streaming, responses | Auto Instrument | Working | None |
| Output Tokens | Server | Integer count | OpenAI, Ollama, vLLM, streaming, responses | Auto Instrument | Working | None |
| Completion Tokens | Client | Integer count | OpenAI, Ollama, vLLM, streaming, responses | Auto Instrument | Working, no responses | None |
| Prompt Tokens | Client | Integer count | OpenAI, Ollama, vLLM, streaming, responses | Auto Instrument | Working, no responses | None |
| Prompt | Client | String | Any inference provider, responses | Auto Instrument | Working, no responses | None |

Safety

| Value | Location | Content | Testing | Handled By | Status | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Shield ID | Server | String | Llama Guard shield call | Custom Code | Working | Not following semconv |
| Metadata | Server | JSON string | Llama Guard shield call | Custom Code | Working | Not following semconv |
| Messages | Server | JSON string | Llama Guard shield call | Custom Code | Working | Not following semconv |
| Response | Server | String | Llama Guard shield call | Custom Code | Working | Not following semconv |
| Status | Server | String | Llama Guard shield call | Custom Code | Working | Not following semconv |

Remote Tool Listing & Execution

| Value | Location | Content | Testing | Handled By | Status | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Tool name | Server | String | Tool call occurs | Custom Code | Working | Not following semconv |
| Server URL | Server | String | List tools or execute tool call | Custom Code | Working | Not following semconv |
| Server Label | Server | String | List tools or execute tool call | Custom Code | Working | Not following semconv |
| mcp_list_tools_id | Server | String | List tools | Custom Code | Working | Not following semconv |

Metrics

  • Prompt and completion token histograms ✅ (a recording sketch follows this list)
  • Updated the Grafana dashboard to support the OTEL semantic conventions for tokens
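
For reference, this is roughly what recording such a histogram looks like with the OpenTelemetry metrics API. It is a hedged sketch that assumes the gen_ai.client.token.usage metric name and gen_ai.token.type attribute from the semantic conventions, not the exact identifiers used in the code.

from opentelemetry import metrics

meter = metrics.get_meter("llama_stack.inference")

# Histogram of token counts; input vs. output is distinguished by an attribute
token_usage = meter.create_histogram(
    name="gen_ai.client.token.usage",
    unit="{token}",
    description="Input and output tokens used per inference request",
)

def record_tokens(model: str, input_tokens: int, output_tokens: int) -> None:
    token_usage.record(input_tokens, {"gen_ai.token.type": "input", "gen_ai.request.model": model})
    token_usage.record(output_tokens, {"gen_ai.token.type": "output", "gen_ai.request.model": model})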

Observations

  • sqlite spans get orphaned from the completions endpoint
    • Known OTEL issue; the recommended workaround is to disable sqlite instrumentation, since it is double wrapped and already covered by sqlalchemy. This is covered in the documentation.
export OTEL_PYTHON_DISABLED_INSTRUMENTATIONS="sqlite3"
  • Responses API instrumentation is missing in OpenTelemetry for OpenAI clients, even with Traceloop or OpenLLMetry
    • Upstream issues exist in opentelemetry-python-contrib
  • A span is created for each streaming response and every chunk is recorded on it, so very large spans get created; not ideal, but it is the intended behavior
  • MCP telemetry needs to be updated to follow semantic conventions. We can probably use a library for this and handle it in a separate issue.

Updated Grafana Dashboard

Screenshot 2025-11-17 at 12 53 52 PM

Status

✅ Everything appears to be working, and the data we expect is being captured in the format we expect.

Follow Ups

  1. Make tool-calling spans follow semconv and capture more data
    1. Consider using an existing tracing library
  2. Make shield spans follow semconv
  3. Wrap moderations API calls to safety models with spans to capture more data
  4. Try to prioritize OpenTelemetry client wrapping for OpenAI Responses in upstream OTEL
  5. This change breaks the telemetry tests, which are currently disabled. This PR removes them, but I can undo that and leave them disabled until we find a better solution.
  6. Add a section to the docs that tracks the custom data we capture (not auto-instrumented data) so that users can understand what that data is and how to use it. Contribute those changes to the OTEL gen_ai SIG if possible as well. Here is an example of how Bedrock handles it.

iamemilio avatar Nov 11 '25 20:11 iamemilio

This pull request has merge conflicts that must be resolved before it can be merged. @iamemilio please rebase it. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify[bot] avatar Nov 11 '25 21:11 mergify[bot]

This pull request has merge conflicts that must be resolved before it can be merged. @iamemilio please rebase it. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify[bot] avatar Nov 13 '25 22:11 mergify[bot]

✱ Stainless preview builds

This PR will update the llama-stack-client SDKs with the following commit message.

feat(telemetry): Architect Llama Stack Telemetry Around Automatic Open Telemetry Instrumentation
⚠️ llama-stack-client-node studio · code

There was a regression in your SDK. generate ⚠️ · build ⏳ · lint ⏳ · test ⏳

⚠️ llama-stack-client-kotlin studio · code

There was a regression in your SDK. generate ⚠️ · lint ⏳ · test ⏳

⚠️ llama-stack-client-python studio · conflict

There was a regression in your SDK.

⚠️ llama-stack-client-go studio · code

There was a regression in your SDK. generate ⚠️ · lint ⏳ · test ⏳

go get github.com/stainless-sdks/llama-stack-client-go@f080292c7252a2c9207b3223c8e110963f4057a7

This comment is auto-generated by GitHub Actions and is automatically kept up to date as you push.
Last updated: 2025-12-01 18:43:57 UTC

github-actions[bot] avatar Nov 17 '25 17:11 github-actions[bot]

Looks good to me.

grs avatar Nov 18 '25 16:11 grs

I am noticing that the responses test suite fails often on this PR, and I can't tell if it's related to the changes I made or not. I tried not to change the logical outcome of any of the modified code, but I would appreciate it if someone more knowledgeable about the async logic could take a look and help me on this one. The root cause is a bit lost on me, and the AIs are clueless.

iamemilio avatar Nov 19 '25 00:11 iamemilio

This pull request has merge conflicts that must be resolved before it can be merged. @iamemilio please rebase it. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify[bot] avatar Nov 19 '25 20:11 mergify[bot]

I reviewed the code for any remnants of trace protocol and discovered that I did miss a few things. Do you think I should pull 9d24211d9d840275e85bed50c35346b39d855fc3 into its own PR? This PR is now XXXL.

iamemilio avatar Nov 20 '25 18:11 iamemilio

I don't think we need to kill @trace_protocol in this PR itself. It is OK to make that a follow-up.

ashwinb avatar Nov 20 '25 20:11 ashwinb

I don't think we need to kill @trace_protocol in this PR itself. It is OK to make that a follow-up.

ACK, reverted and moved those changes out to https://github.com/llamastack/llama-stack/pull/4205

iamemilio avatar Nov 20 '25 20:11 iamemilio

Let me know how else I can address your concerns 😄. I will be actively addressing feedback as much as possible today.

iamemilio avatar Nov 21 '25 15:11 iamemilio

cc @leseb for visibility

iamemilio avatar Nov 21 '25 16:11 iamemilio

Love this change! My usual request: could you please post the generated traces for chat completions showing before vs after?

ehhuang avatar Nov 21 '25 19:11 ehhuang

All spans are captured as a distributed trace that originates from calls made by the OpenAI client. The test driver above produced this trace.

Trace from this change

Screenshot 2025-11-24 at 11 38 23 AM

Client Span (there is more content, but it got cut off)

Screenshot 2025-11-24 at 11 41 11 AM

Cut off Values

llm.headers: None
llm.is_streaming: false
llm.request.type: chat
llm.usage.total_tokens: 43
otel.scope.name: opentelemetry.instrumentation.openai.v1
otel.scope.version: 0.48.0
span.kind: client

HTTP Post Span

Screenshot 2025-11-24 at 11 43 56 AM

Completions Call Span (server side)

Screenshot 2025-11-24 at 11 46 16 AM

Database Spans

Screenshot 2025-11-24 at 11 47 40 AM

iamemilio avatar Nov 24 '25 16:11 iamemilio

Screenshots Using LlamaStack from main:

llama stack run starter

NOTE: The client span is identical because it came from the OpenAI client, which I instrument.

HTTP Post

Screenshot 2025-11-24 at 1 05 28 PM

Inference Router Span

Screenshot 2025-11-24 at 1 06 31 PM

Note that the Args are a little cut off in the picture, and that tokens are captured as logs rather than as attributes of the span.

Model Routing Span

Screenshot 2025-11-24 at 1 08 28 PM

Routing Table Span

Screenshot 2025-11-24 at 1 09 50 PM

iamemilio avatar Nov 24 '25 18:11 iamemilio

@ehhuang take a look and let me know your thoughts. It looks like one thing we were not tracking when we did the testing is the output from the model routing table, and I don't think that content persists in the changes I am proposing. Would it be acceptable to create an issue to capture spans with routing-table attributes as a follow-up to this PR?

iamemilio avatar Nov 24 '25 18:11 iamemilio

@iamemilio I think not having the crazy old "trace protocol" spans for has_model, etc. is just fine in my opinion. I will let @ehhuang look it over once, though.

ashwinb avatar Nov 24 '25 20:11 ashwinb

@leseb I addressed what remains of the telemetry API here. It should be resolved now, thanks for checking. Please take another look once CI is back on.

iamemilio avatar Nov 26 '25 15:11 iamemilio

Added a follow-up here so that it is as easy to use in Docker as the others: https://github.com/llamastack/llama-stack/pull/4281

codefromthecrypt avatar Dec 02 '25 22:12 codefromthecrypt