GitHub Copilot: 2x token count (total = total + cached) forces compaction at 64k instead of 128k
Description
- The TUI shows 2x the actual number of tokens in use.
- This forces opencode to compact far earlier than needed.
- Inspecting the console from the share link, the JSON itself looks valid (half the TUI's figure) and the correct token count appears in the console.
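- Worked example using the numbers reported later in this thread: Input 35844, Output 404, Cache read 35712. If the input figure already includes the cached tokens (as the title suggests), adding the cache reads on top again gives 35844 + 404 + 35712 = 71960 displayed tokens for roughly 36k of real context.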
OpenCode version
0.15.8
Steps to reproduce
- To reproduce, use Grok 4 Fast from OpenRouter, or any model connected via GitHub Copilot (e.g. GPT-5-mini).
- Enter "hi" into the prompt.
- e.g. https://opencode.ai/s/IVO2CNuh
Screenshot and/or share link
Operating System
mac
Terminal
zsh
This issue might be a duplicate of existing issues. Please check:
- #1937: Reports identical behavior where TUI shows 2X the actual tokens, causing premature auto /compact, while share link shows correct counts. Multiple users have reported this at ~50% rate across various providers and models.
Feel free to ignore if none of these address your specific case.
Very annoying... any hack for this? Especially with Copilot, where the effective context becomes 64k.
- Wondering if I can somehow override and "fool" opencode into thinking the model's max input context is 2x, i.e. 256k.
EDIT: I guess I can do something like this to artificially double the max context?
```json
{
  "provider": {
    "openai": {
      "models": {
        "gpt-4o": {
          "limit": {
            "context": 131072,
            "output": 8192
          }
        }
      }
    }
  }
}
```
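For the Copilot case specifically, the analogous override would set the context limit to 262144 (2 × 128k). A sketch, assuming the provider and model IDs are github-copilot and gpt-5-mini (taken from this thread, not verified against the config schema):

```json
{
  "provider": {
    "github-copilot": {
      "models": {
        "gpt-5-mini": {
          "limit": {
            "context": 262144,
            "output": 8192
          }
        }
      }
    }
  }
}
```

Note this only papers over the double counting: requests would still fail once real usage exceeds the true 128k window.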
Same issue on bunx opencode-ai@opentui (0.0.0-opentui-202510210139).
Hm will look into that
I'm currently working on an opencode client and had to look into how opencode reports token usage. For a while I thought I had found the same issue, but then I realized I had forgotten to add the cached tokens when calculating the actual number of tokens used. If you add the cache tokens for this chat, it adds up to the same value in both the TUI and the JSON in the share link:
Input: 35844, Output: 404, Cache read: 35712
This is how the TUI calculates token usage:
https://github.com/sst/opencode/blob/a99bd3aa2c0f7100d0bcbfa4a11d818b7c753661/packages/tui/internal/components/chat/messages.go#L897-L901
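For illustration, here is a minimal sketch of that kind of summation and of where a double count can creep in. The types and field names are hypothetical, not opencode's actual code:

```go
package main

import "fmt"

// Hypothetical usage shape, loosely modeled on provider usage payloads.
type Usage struct {
	Input      int // prompt tokens as reported by the provider
	Output     int // completion tokens
	CacheRead  int // prompt tokens served from cache
	CacheWrite int // prompt tokens written to cache
}

func main() {
	// Numbers from the session discussed above.
	u := Usage{Input: 35844, Output: 404, CacheRead: 35712}

	// If the provider reports cache reads as a subset of Input
	// (OpenAI-style usage payloads do this), summing them again
	// double-counts nearly the whole prompt:
	naive := u.Input + u.Output + u.CacheRead + u.CacheWrite
	fmt.Println(naive) // 71960, roughly 2x the ~36k actually in context

	// If cache reads are already inside Input, the real total is:
	actual := u.Input + u.Output
	fmt.Println(actual) // 36248
}
```

Other providers (Anthropic, for example) report cache reads separately from input tokens, so a flat sum is correct for them, which would explain why this only reproduces on some providers, per the ~50% rate mentioned in #1937.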
edit: maybe this is just a bug in the web view
The webview token count (at the bottom of the webview) is a completely different issue, folks.
This issue is causing frequent compactions. The problem is in the TUI or the backend.
This is like a P0 issue... a basic feature is not working correctly.
@hchaudhary1 the max context size for gpt-5-mini on GitHub Copilot is 128k; are you sure that isn't the cause?
I will look into your session more to verify.
Yes, but why do you compact at 64k? When we hit 64k, OC thinks it's hitting 128k, and then compacts. I hope I'm explaining correctly 😀
It shouldn't compact at 64k. I just wanted to verify that you understood the decreased context limit, since most people who report this with Copilot are unaware that it has a substantially smaller context window than the real model.
But if it is actually compacting at 64k, that's a big issue and we need to fix it. I'll look into it more.
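To make the failure mode concrete: if the auto-compact trigger compares the running total against the model's context limit, a doubled count fires it at half the real window. A hypothetical sketch of such a check (not opencode's actual trigger logic):

```go
package main

import "fmt"

// Hypothetical auto-compact check: compact once usage nears the limit.
// The 90% threshold is an assumption for illustration.
func shouldCompact(usedTokens, contextLimit int) bool {
	return usedTokens >= contextLimit*9/10
}

func main() {
	limit := 131072 // gpt-5-mini on Copilot, per the thread
	used := 59000   // actual tokens in context

	fmt.Println(shouldCompact(used, limit))   // false: well under the limit
	fmt.Println(shouldCompact(2*used, limit)) // true: the doubled count trips the check
}
```

With a 131072 limit the threshold sits near 118k, so a count inflated to 2x crosses it once real usage reaches about 59k, consistent with the ~64k compactions reported above.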
Bingo.