[Question] detailed per-interaction telemetry (token counts and input composition) for opencode-model communications
I’d like opencode to provide detailed telemetry for each interaction with the model so I can see exactly what is sent and received. Specifically:

- Per-request and per-response token counts.
- A breakdown of the input composition (prompt/system/instructions/user messages, tool calls, agent/subagent messages, and any injected context).
- Visibility into response tokens and any truncation or context-window warnings.
- An option to enable/disable verbose logging and to redact sensitive user content.

This will help tune agents/subagents and manage context size to reduce costs and avoid hitting token limits.

Priority: Medium (useful for debugging and cost optimization).

Possible implementation notes: expose structured logs or a debug mode that outputs a JSON telemetry object per interaction with fields such as `total_tokens`, `prompt_tokens`, `completion_tokens`, `messages[]`, `truncated` (bool), and redaction markers.
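To make the proposal concrete, here is a sketch of what that per-interaction JSON telemetry object could look like. The top-level field names come from the issue text; the per-message breakdown (roles, per-message token counts, redaction flag) is an assumption about one reasonable shape, not opencode's actual schema.

```typescript
// Hypothetical per-message entry in the input-composition breakdown.
interface TelemetryMessage {
  role: "system" | "user" | "assistant" | "tool" | "agent";
  tokens: number;
  redacted: boolean; // redaction marker for sensitive user content
}

// Hypothetical per-interaction telemetry object, fields as proposed above.
interface InteractionTelemetry {
  total_tokens: number;
  prompt_tokens: number;
  completion_tokens: number;
  messages: TelemetryMessage[];
  truncated: boolean; // set when the context window forced truncation
}

// Example of what one logged interaction might look like:
const sample: InteractionTelemetry = {
  total_tokens: 1234,
  prompt_tokens: 1100,
  completion_tokens: 134,
  messages: [
    { role: "system", tokens: 400, redacted: false },
    { role: "user", tokens: 500, redacted: true },
    { role: "tool", tokens: 200, redacted: false },
  ],
  truncated: false,
};
```

Emitting one such object per request (e.g. as a JSON line in a debug log) would be enough for downstream cost analysis without changing the TUI.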
From what I've checked, the current logs (even in debug mode) don't include all of this information. Is such a mode already supported?
This issue might be a duplicate of existing issues. Please check:
- #2637: Both issues focus on token management, cost optimization, and the need for detailed visibility into opencode-model interactions. While #2637 specifically requests input token limiting and #2666 requests detailed telemetry, they share the core motivation of better token usage visibility and debugging capabilities.
Feel free to ignore if none of these address your specific case.
I've tried experimenting with the code myself for now. The payload that I dump contains, for example, ~11K words; the token count that I see on opencode zen, on the other hand, is ~20K. So if anyone can tell me what else should be added to the payload that I'm missing... (I used the opencode qwen3 coder model.)
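Part of that gap is expected even before anything is missing from the dump: subword tokenizers emit more tokens than words, and the actual request also carries the system prompt, tool/function schemas, and per-message framing that a plain word count of the visible payload never sees. A rough illustration (using the common ~4-characters-per-token heuristic for English text, not the model's real tokenizer):

```typescript
// Crude word-vs-token comparison to show why a word count of a dumped
// payload understates the billed token count. Neither function is a real
// tokenizer; approxTokens uses the rough 4-chars-per-token heuristic.

function wordCount(text: string): number {
  return text.split(/\s+/).filter(Boolean).length;
}

function approxTokens(text: string): number {
  return Math.ceil(text.length / 4); // heuristic, not a real tokenizer
}

const visiblePayload = "refactor the session exporter to include token counts";
console.log(wordCount(visiblePayload));    // words the dump would show
console.log(approxTokens(visiblePayload)); // rough token estimate, already higher
```

Even then, word count vs. estimated tokens only covers the text you dumped; tool definitions and injected context sent alongside it would still need to appear in the payload for the numbers to line up.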
Here is a link to the change set: https://github.com/m1pro/opencode/commit/db15b5f05ff974a4dd1b91e58e6a17a107ece7f8
It creates a session folder in /Users/user/.local/share/opencode/session-payloads. Each interaction dumps the payload in a 'human-readable' markdown format and also as raw JSON.

Would appreciate your comments.
We already track tokens in, out, cached, etc. for every interaction; we just don't show all of them currently, but it makes sense to add the option.

Things like that breakdown would require changes, though.
We do need a better way to let people see their sessions outside of the TUI, either in markdown or in JSON. There is a /export command to get a markdown file, but it doesn't include the token data you are thinking about.
Why not adopt the same approach that Codex CLI / Gemini CLI / Claude Code went with: OpenTelemetry?
- https://docs.claude.com/en/docs/claude-code/monitoring-usage
- https://developers.openai.com/codex/security/#enable-otel-optin
- https://google-gemini.github.io/gemini-cli/docs/cli/telemetry.html
We had it at one point, but it was causing issues. There are a few more pressing items taking priority before we add it back.
Hey, just wondering if this feature is planned to be implemented anytime soon, or if it's still in discussion?
@Raviguntakala you need OpenTelemetry? I don't think it is high on our priority list at the moment, but maybe you can tell us your use case.
Yeah, mainly for tracking token consumption and cost monitoring across multiple APIs. Having OpenTelemetry support would make it easier to analyze usage patterns and optimize context size when tuning agents and subagents.
PR #4978 allows enabling the AI SDK's OpenTelemetry support, but this seems to only cover spans.
This PR seems to add metrics to the AI SDK's OTEL export, but it will need additional code on opencode's side to enable it.
Hey @jeantil, is there anything else I need to add or change on the opencode side to enable this feature? Happy to make any necessary updates.
@Ray0907 not yet on the opencode side; once this vercel/ai PR lands, it will send metrics automatically if OTEL is enabled.
Opencode will probably want to replicate the recordMetrics configuration flag added in ai in its own configuration. It should be possible to control what is exported via the standard OTEL exporter environment variables, but a config flag is easier for newcomers to discover. Either that, or adding a bit more information to opencode's telemetry flag documentation once ai is updated to a version that sends metrics would be good.
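For reference, these are the standard OpenTelemetry SDK environment variables (from the OTel specification) that would let users steer what gets exported without any opencode-specific flag; the endpoint value is a placeholder for whatever collector you run, and whether opencode honors them depends on the SDK wiring described above.

```shell
# Standard OTel SDK environment variables; endpoint is a placeholder.
export OTEL_SERVICE_NAME=opencode
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_TRACES_EXPORTER=otlp   # spans (what PR #4978 already enables)
export OTEL_METRICS_EXPORTER=otlp  # metrics; set to "none" to opt out
```

A dedicated config flag could then simply default these sensibly, while power users keep the env-var escape hatch.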