[BUG] Statusline context_window JSON contains cumulative tokens instead of current context usage
Description
The context_window data passed to custom statusline scripts contains cumulative/accumulated tokens from the entire session, not the current context window usage. This makes the statusline context display completely inaccurate.
Environment
- Claude Code version: 2.0.65
- OS: macOS 14.6.1 (Darwin 23.6.0)
- Model: claude-opus-4-5-20251101
Steps to Reproduce
- Configure a custom statusline that displays context usage from the JSON input
- Have a conversation with multiple turns (enough to trigger auto-compact or use significant context)
- Compare the context_window values from the statusline JSON input with the /context command output
Expected Behavior
The context_window JSON passed to statusline scripts should reflect the current context window usage, matching what /context displays.
Actual Behavior
The JSON contains cumulative tokens that keep growing beyond the context window size:
From /context command (correct):
claude-opus-4-5-20251101 · 80k/200k tokens (40%)
From statusline JSON input (incorrect):
{
"total_input_tokens": 330050,
"total_output_tokens": 10614,
"context_window_size": 200000
}
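A minimal statusline calculation over this payload shows why the display overshoots (field names are taken from the JSON above; treating it as a standalone object is an assumption about how the script receives it):

```python
import json

# Sample context_window payload as reported above. The exact shape
# passed to statusline scripts is assumed here for illustration.
raw = """
{
  "total_input_tokens": 330050,
  "total_output_tokens": 10614,
  "context_window_size": 200000
}
"""

cw = json.loads(raw)
used = cw["total_input_tokens"] + cw["total_output_tokens"]
pct = 100 * used / cw["context_window_size"]

# Because these are cumulative session totals, the percentage exceeds
# 100%, which is impossible for genuine context window usage.
print(f"{used // 1000}K/{cw['context_window_size'] // 1000}K ({pct:.0f}%)")
```

With the sample values, any script doing this arithmetic lands in the same impossible range as the 169% observed below.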
This results in 340k tokens shown for a 200k context window (169%), which is impossible and clearly wrong.
Debug Evidence
Statusline script outputs 339K/200K 169% while /context shows 80k/200k (40%).
The total_input_tokens and total_output_tokens appear to be session totals (all tokens ever sent/received), not the current context window contents. When auto-compact discards old context, these values are not adjusted.
Impact
- Custom statuslines showing context usage are completely broken
- Users cannot trust the context_window data in the statusline API
- The feature added in v2.0.65 ("Added context window information to status line input") is unusable for its intended purpose
Suggested Fix
The statusline JSON should pass the current context window token counts (matching /context output), not cumulative session totals. Alternatively, provide both values with clear naming:
- current_context_tokens: what's actually in the context window now
- session_total_tokens: cumulative for the session (if useful)
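Under that naming, the statusline payload might look something like this (a sketch of the proposed shape, not an actual API; the values echo the measurements above):

```json
{
  "context_window": {
    "current_context_tokens": 80000,
    "session_total_tokens": 340664,
    "context_window_size": 200000
  }
}
```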
Related Issues
- #5601 - Custom Statusline auto-compact percentage incorrect
- #12565 - Token counting discrepancy between transcript and /context
- #3375 - Context left until auto-compact: NaN%
Found 3 possible duplicate issues:
- https://github.com/anthropics/claude-code/issues/13653
- https://github.com/anthropics/claude-code/issues/13765
- https://github.com/anthropics/claude-code/issues/13766
I found an additional issue: context_window also excludes tool call results entirely.
Fresh session test (no auto-compact involved):
- /context Messages: 40.2k tokens (20.1%)
- context_window input+output: 12.8k tokens (6.4%)
- Gap: ~27k tokens from file reads via Read tool during /prime
Even with pure conversation after priming (no additional tool use), the gap remains constant - confirming tool results from earlier aren't counted.
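The gap in the fresh-session test can be checked with simple arithmetic (figures copied from the measurements above; attributing the gap to Read-tool results is the inference being made):

```python
context_window_size = 200_000

# Figures from the fresh-session test, in tokens.
context_cmd_tokens = 40_200   # /context "Messages" line (20.1%)
statusline_tokens = 12_800    # context_window input + output (6.4%)

gap = context_cmd_tokens - statusline_tokens
print(f"gap = {gap} tokens "
      f"({100 * gap / context_window_size:.1f}% of the window)")
```

The ~27k-token gap stays constant through pure-conversation turns, which is what points at the earlier tool results being excluded rather than at any ongoing miscount.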
This contradicts API documentation (https://docs.anthropic.com/en/api/messages) which states: "Tool result content provided back to the model counts toward input_tokens in the subsequent request"
So there appear to be TWO bugs:
- Cumulative instead of current context (your report)
- Tool results excluded entirely from the count (our finding)
Both make context_window unusable for context tracking in tool-heavy workflows.
When I saw "Added context window information to status line input" in the v2.0.65 changelog, I was genuinely excited. Finally, a way to surface the one metric that actually matters when working with LLMs: how much context you have left.
I spent a solid half hour implementing this in my custom status line tool, only to discover that the data provided has nothing to do with actual context window usage. The changelog effectively promised one thing and delivered something entirely different.
As a Max subscriber, I don't care about cumulative session tokens - that's a billing concern, not an operational one. What I care deeply about is knowing how much of my 200k context window is currently in use, so I can make informed decisions about when to compact, clear, or restructure my approach.
Anyone who has spent time working with LLMs knows that context management is fundamental. It's baffling that the implementation provides session totals under a field called context_window instead of, you know, context window data.
Please fix this - provide the actual current context usage that /context shows. The infrastructure is clearly there; it just needs to be exposed correctly.
Same. Disappointed. I'm just going to have to remove it from my context bar until the usage works properly.
Somebody at Anthropic assumed it should work that way too: https://code.claude.com/docs/en/statusline#context-window-usage
Looks like the update that just shipped is intended to fix it:
• Added current_usage field to status line input, enabling accurate context window percentage calculations
Updated my statusline (https://www.npmjs.com/package/contextbricks); will see how it goes today.
Looks like we're still not there eh(?)
CC version 2.0.72's update still has a gap in my test at least. E.g. "50k/200k tokens (25%)" via /context, and ➡️ 15% on the statusline.