vercelAIIntegration: span op stays 'default' instead of 'gen_ai.invoke_agent' - AI Agents dashboard shows no data
Is there an existing issue for this?
- [x] I have checked for existing issues https://github.com/getsentry/sentry-javascript/issues
How do you use Sentry?
Sentry SaaS (sentry.io)
Which SDK are you using?
@sentry/nextjs
SDK Version
10.29.0
Framework Version
Next.js 15.4.1
Link to Sentry event
https://pedestalai.sentry.io/explore/traces/?query=span.description%3Aai.generateText&project=4510505607364608
Reproduction Example/SDK Setup
sentry.server.config.ts:
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: "...",
  tracesSampleRate: 1,
  integrations: [
    Sentry.vercelAIIntegration({
      recordInputs: true,
      recordOutputs: true,
    }),
  ],
  sendDefaultPii: true,
});
AI SDK call with telemetry:
const result = await generateText({
  model: google("gemini-2.5-flash"),
  prompt: "...",
  experimental_telemetry: {
    isEnabled: true,
    functionId: "generate-title",
    recordInputs: true,
    recordOutputs: true,
  },
});
Steps to Reproduce
- Configure `vercelAIIntegration()` in `Sentry.init()`
- Make AI SDK calls with `experimental_telemetry: { isEnabled: true }`
- Check spans in Trace Explorer - they appear with `span.op: default`
- Check the AI Agents dashboard at `/insights/ai-agents/` - no data appears
Expected Result
Spans should have:
- `span.op: gen_ai.invoke_agent` (for `ai.generateText`, `ai.streamText`)
- `span.op: gen_ai.generate_text` (for `ai.generateText.doGenerate`)
This would make them appear in the AI Agents dashboard.
Actual Result
Spans have:
- `span.op: default` ❌
- `span.description: ai.generateText` ✅
- All other `gen_ai.*` attributes are correctly set ✅
The common attributes (`gen_ai.system`, `gen_ai.request.model`, `gen_ai.usage.*`, etc.) are populated correctly, but the `op` transformation never happens.
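For illustration, an affected ended span looks roughly like this - a sketch only; the attribute names follow the ones mentioned above, but the concrete values are invented, not copied from the linked event:

// Illustrative shape only - values are made up, not taken from the linked event.
const affectedSpan = {
  op: 'default',                                 // shown as `default` in Trace Explorer, never upgraded to a gen_ai.* op
  description: 'ai.generateText',
  origin: 'auto.vercelai.otel',
  data: {
    'gen_ai.request.model': 'gemini-2.5-flash',  // the common gen_ai.* attributes are all present...
    'gen_ai.usage.input_tokens': 123,
    'gen_ai.usage.output_tokens': 45,
    // ...gen_ai.system, token totals, etc. are populated as well
  },
};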
Root Cause Analysis
The issue is in `packages/core/src/tracing/vercel-ai/index.ts`:

function onVercelAiSpanStart(span: Span): void {
  const { data: attributes, description: name } = spanToJSON(span);

  // ...

  // THE BUG: These attributes don't exist yet when the span STARTS
  const aiModelId = attributes[AI_MODEL_ID_ATTRIBUTE];
  const aiModelProvider = attributes[AI_MODEL_PROVIDER_ATTRIBUTE];

  // This check FAILS because the attributes are undefined at span start
  if (typeof aiModelId !== 'string' || typeof aiModelProvider !== 'string' || !aiModelId || !aiModelProvider) {
    return; // ← Exits early, never sets the op!
  }

  processGenerateSpan(span, name, attributes); // ← Never reached
}
The AI SDK adds the `ai.model.id` and `ai.model.provider` attributes after the span starts (when the LLM response completes), not at span creation time.
The `vercelAiEventProcessor` (which runs after spans end, when the attributes ARE available) only renames attributes but does NOT set the `op`.
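For context, the timing issue can be sketched with the raw OpenTelemetry API (an illustration of the ordering described above, not the AI SDK's actual instrumentation code):

import { trace } from '@opentelemetry/api';

const tracer = trace.getTracer('timing-illustration');

const span = tracer.startSpan('ai.generateText');
// Sentry's `spanStart` hook fires here, before any attributes exist on the span.

span.setAttribute('ai.model.id', 'gemini-2.5-flash');
// Per the behavior described above, the AI SDK sets model attributes only once
// the LLM response completes, long after the `spanStart` hook has returned.

span.end();
// Only the ended span record (and therefore the event processor) sees `ai.model.id`.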
Suggested Fix
Add `op` transformation logic to `processEndedVercelAiSpan`:

function processEndedVercelAiSpan(span: SpanJSON): void {
  const { data: attributes, origin, description: name } = span;

  if (origin !== 'auto.vercelai.otel') {
    return;
  }

  // FIX: Set op based on span name (attributes are now available)
  if (name === 'ai.generateText' || name === 'ai.streamText' || name === 'ai.generateObject') {
    span.op = 'gen_ai.invoke_agent';
  } else if (name === 'ai.generateText.doGenerate') {
    span.op = 'gen_ai.generate_text';
  } else if (name === 'ai.streamText.doStream') {
    span.op = 'gen_ai.stream_text';
  }
  // ... handle other cases

  // existing attribute renaming logic...
}
Related
- The test file acknowledges this limitation: https://github.com/getsentry/sentry-javascript/blob/develop/dev-packages/e2e-tests/test-applications/nextjs-16/tests/ai-test.test.ts#L19-L21
- Related closed PR (never merged): https://github.com/vercel/ai/pull/6716
Hello! Thanks for reaching out, this should be resolved with: https://github.com/getsentry/sentry-javascript/pull/18471
Hey team, which release is this planned to ship in?
This will be shipped with 10.31.0
Adding some more notes here, as I have upgraded to 10.31 but am still seeing the same issue:
Here is an example trace: https://sentry.io/organizations/pedestalai/explore/traces/trace/885461c2ea04cde2522d946032937915/?environment=production&mode=samples&node=span-7547169075cd7a0f&pageEnd&pageStart&project=4510527891111936&source=traces&statsPeriod=24h&table=trace&timestamp=1766008637.935
We can reproduce this in a real production trace: the Vercel AI spans have the expected `ai.*`/`gen_ai.*` attributes but still show up with `span.op: default`, which prevents them from appearing in the AI Agents dashboard (docs).
Example trace (prod): 885461c2ea04cde2522d946032937915 (project: wayfair-day-ahead)
- `ai.generateObject` → `span.op: default`
- `ai.generateObject.doGenerate` → `span.op: default`
- Yet `ai.model.id: gpt-5` is present on those same spans, along with `gen_ai.system: openai.responses`, token usage, etc.
Why this happens in the SDK (exact code paths / lines)
1) `span.op` is only set in `spanStart`, and it’s gated on attributes being present at span start
In `packages/core/src/tracing/vercel-ai/index.ts`, `onVercelAiSpanStart` bails out early unless `ai.model.id` is present on the span at start time:
- `onVercelAiSpanStart` reads attributes from `spanToJSON(span)` (a span-start snapshot) and then does:
// packages/core/src/tracing/vercel-ai/index.ts
// Lines 67–75
67: // The AI model ID must be defined for generate, stream, and embed spans.
68: // The provider is optional and may not always be present.
69: const aiModelId = attributes[AI_MODEL_ID_ATTRIBUTE];
70: if (typeof aiModelId !== 'string' || !aiModelId) {
71: return;
72: }
73:
74: processGenerateSpan(span, name, attributes);
75: }
If the Vercel AI SDK attaches `ai.model.id` after span start (which is consistent with the behavior described in this issue), this function returns at lines 70–72, and `processGenerateSpan` never runs.
2) The “ended span” processor does not set `op`, so the span stays `default`
There is an event processor that runs on ended spans (`processEndedVercelAiSpan`), but it only renames attributes and computes token totals. It never sets `span.op`:
// packages/core/src/tracing/vercel-ai/index.ts
// Lines 111–116, and onwards
111: function processEndedVercelAiSpan(span: SpanJSON): void {
112: const { data: attributes, origin } = span;
113:
114: if (origin !== 'auto.vercelai.otel') {
115: return;
116: }
…and then it continues with attribute transforms (there is no `span.op` mapping anywhere in this function).
So if `processGenerateSpan` never ran (because we returned early in `onVercelAiSpanStart`), nothing later fixes it, and `op` remains `default` → the AI Agents dashboard won’t pick the span up.
3) The current comment about `spanEnd` is true for mutating the live span object, but we can mutate the span JSON in the event processor
The file currently says:
// packages/core/src/tracing/vercel-ai/index.ts
// Lines 295–299
295: export function addVercelAiProcessors(client: Client): void {
296: client.on('spanStart', onVercelAiSpanStart);
297: // Note: We cannot do this on `spanEnd`, because the span cannot be mutated anymore at this point
298: client.addEventProcessor(Object.assign(vercelAiEventProcessor, { id: 'VercelAiEventProcessor' }));
299: }
Even if we can’t mutate the `Span` object on `spanEnd`, we can still update the `SpanJSON` inside `vercelAiEventProcessor` / `processEndedVercelAiSpan` (which is exactly where all the AI attributes are already being normalized). That’s also where `span.op` should be set when the needed attributes arrive late.
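As a minimal sketch of that idea (the `mapVercelAiOp` helper, its name, and the exact mapping are assumptions for illustration, not the SDK's actual code):

import type { Event } from '@sentry/core';

// Hypothetical helper: map a Vercel AI span name to a gen_ai op,
// mirroring the mapping proposed above for processGenerateSpan.
function mapVercelAiOp(name: string | undefined): string | undefined {
  if (!name) return undefined;
  if (name === 'ai.generateText' || name === 'ai.streamText' || name === 'ai.generateObject') {
    return 'gen_ai.invoke_agent';
  }
  if (name.endsWith('.doGenerate')) return 'gen_ai.generate_text';
  if (name.endsWith('.doStream')) return 'gen_ai.stream_text';
  return undefined;
}

// Sketch: assign `op` on the ended-span JSON inside the event processor.
// `event.spans` holds plain SpanJSON objects, so writing `span.op` here works
// even though the live Span can no longer be mutated after `spanEnd`.
function assignOpsOnEndedVercelAiSpans(event: Event): Event {
  for (const span of event.spans ?? []) {
    if (span.origin !== 'auto.vercelai.otel') continue;
    span.op = mapVercelAiOp(span.description) ?? span.op;
  }
  return event;
}

In practice the mapping could live directly in `processEndedVercelAiSpan`, alongside the existing attribute renaming.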
Why PR #18471 is an incomplete fix
PR #18471 relaxes the `spanStart` validation to require only `ai.model.id` (not the provider). That helps the case where the provider is missing at start, but it still assumes the model ID is present at start.
This issue’s failure mode (and our production trace) is consistent with: `ai.model.id` is present on the final/ended span record but is not available at `spanStart`, causing the early return at lines 70–72 above.
Additionally, even after #18471, there is still no `op` assignment in the ended-span processor (`processEndedVercelAiSpan`), so there’s no “second chance” to fix `op` when attributes show up late.
Suggested fix (matches the root cause)
Add the same op-mapping logic currently in `processGenerateSpan` (e.g. `ai.generateObject` → `gen_ai.invoke_agent`, `ai.generateObject.doGenerate` → `gen_ai.generate_text`, etc.) into `processEndedVercelAiSpan` (or a new helper it calls) when `origin === 'auto.vercelai.otel'`.
That would ensure:
- Spans show up correctly in the AI Agents dashboard even if the AI SDK attributes arrive after span start
- The `op` mapping is applied at the same stage as all other `gen_ai.*` normalization
I put up https://github.com/getsentry/sentry-javascript/pull/18553 as a preliminary attempt to fix the issue.
Thanks, left some comments in the PR 👍
Thank you for the detailed explanation. I have opened this PR. It checks whether the Vercel AI span name starts with `ai`, which should be more reliable than relying on the model ID to decide whether to process generation spans. Please let me know if this does not work.