[Bug] Max plan usage consumed at unusually high rate without workflow changes
Bug Description
Hello,
I’m experiencing what appears to be a technical issue with how my Max plan usage is being tracked.
Up until yesterday — and consistently over the past couple of weeks — I was using Opus full-time without hitting usage limits. No changes were made to my workflow or usage pattern.
However, over the last 24 hours, usage has been consumed at an unusually high rate (over 40% in ~40 minutes). I've already tried standard troubleshooting steps (clearing the cache, starting new conversations, switching models), but the issue persists.
Thank you.
Environment Info
- Platform: darwin
- Terminal: iTerm.app
- Version: 2.0.64
- Feedback ID: 02077bc4-f096-41b1-b9e1-eeec171f340f
Errors
[]
Found 3 possible duplicate issues:
- https://github.com/anthropics/claude-code/issues/8742
- https://github.com/anthropics/claude-code/issues/8505
- https://github.com/anthropics/claude-code/issues/9544
This issue will be automatically closed as a duplicate in 3 days.
- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment
🤖 Generated with Claude Code
Same here. This is insane; Anthropic is charging the same price for 20% of the previous usage.
MAX Plan ($100/month) - Session limit consumed by a simple chat conversation only
I'm on the MAX 5x plan, paying €100/month. Today I hit "Limit reached" after just two small code fixes in Claude Code. When I checked my usage, my session was at 100% while weekly was only at 15%. After the 5-hour reset, I started a new session. I did NOT use Claude Code at all - I just had a single chat conversation on claude.ai discussing this usage issue. Here's what happened to my session limit during this ONE conversation:
- Started at 0%
- After a few messages: 2%
- Shortly after: 10%
- Now: 15%
No code written. No Claude Code used. Just text chat with some web searches and a few screenshot uploads. At this rate, I can have about 6-7 chat conversations before hitting my 5-hour session limit. For €100/month.
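For what it's worth, the "6-7 conversations" estimate above checks out arithmetically - a quick sketch using only the figures reported in this comment, not official accounting:

```python
# One chat conversation reportedly consumed 15% of the 5-hour session budget.
pct_per_conversation = 15
conversations_before_limit = 100 / pct_per_conversation
print(f"~{conversations_before_limit:.1f} conversations per 5-hour session")  # ~6.7
```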
Same here
Same, insane rate; never reached limits until today
1% per 1000 tokens
Came here in search of others experiencing this. I'm doing the same thing I've been doing for months and I've hit my limit twice today. Each time, right around the 2hr mark. While I realize this isn't terribly scientific as far as evidence goes, the point is that the exact same behavior is yielding different results. Out of character results. I often get close to hitting the limits as I near the 5hr mark.
Same issue here. One question about code (with referenced files): -25%. Writing a plan for a small change: -25%. Asking about the plan and correcting one thing that was wrong: -25%. Implementing the plan: -25%. Then I hit the limit and was asked whether I wanted to buy more tokens to finish the task or wait 4 hours.
+1
Same here; thinking of switching to Antigravity. Same workflow, and in one day I'm at 20% weekly usage and reach my 5-hour limit in 1 hour.
Same here for the MAX Plan ($100/month). I've reached the limit in 3.5 hours; before 2.0.64, the highest I saw was 60-70% after very intensive sessions with lots of code analysis and code/document generation. @claude
Environment Info
- Platform: darwin
- Terminal: iTerm.app
- Version: 2.0.64
● Write(candidatos/vagas/desenvolvedor_backend_python_fastapi.md)
⎿ Wrote 95 lines to candidatos/vagas/desenvolvedor_backend_python_fastapi.md
… +85 lines (ctrl+o to expand)
Current session
█ 2% used
Resets 4pm (UTC)
This response used 2% of my $100 plan's 5-hour limit. What is that?
Hey folks - the release of instant compaction in 2.0.64 may have caused compaction to run too often in some situations. We turned off the gate for this feature around noon PT yesterday.
Hi Igor, thanks for the update. It looks like I and others are still experiencing reduced usage limits today, on v2.0.65.
The problem is still there after the update; this is not a duplicate, given the current version: 2.0.65. The smallest exchange costs 1-2%, and as soon as Claude modifies code, it consumes 2-3%. In one hour, the work session reaches its limit, and weekly consumption is also impacted. It's as if the Max plan had become a simple Pro plan...
Today, 12% usage just to configure the statusline? It was a test, and it's worse.
Same here. Hitting a 5 hour limit within 30 minutes without any change in my workflow. This is ridiculous.
🤖Opus 4.5 📊268k/200k(134%) 💵$18.50 ₿91,4k 🌡️29°C ⧉ In RESUMO_ANALISES.md
⏵⏵ accept edits on (shift+tab to cycle) Context left until auto-compact: 0%
· Compacting conversation… (esc to interrupt · ctrl+t to hide todos · 42m 24s · ↓ 21.4k tokens)
4 compacts for 268k tokens - 40% session limit
another sample
──────────────────────────────────────────────────────────────────────────────
🤖Opus 4.5 📊291k/200k(145%) 💵$20.86 ₿90,5k 🌡️29°C ⧉ In RESUMO_ANALISES.md
⏵⏵ accept edits on (shift+tab to cycle) Context left until auto-compact: 5%
🤖Opus 4.5 📊296k/200k(148%) 💵$21.18 ₿90,5k 🌡️29°C ⧉ In RESUMO_ANALISES.md
⏵⏵ accept edits on (shift+tab to cycle) Context left until auto-compact: 0%
6k tokens, 5% context
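If those statusline figures are accurate, they imply an auto-compact budget well below the nominal window. A rough sketch from the numbers reported above; the interpretation that this percentage measures an auto-compact budget is an assumption, not confirmed accounting:

```python
# ~6k tokens moved "Context left until auto-compact" from 5% to 0%,
# so the implied auto-compact budget would be roughly:
tokens_consumed = 6_000
pct_points_consumed = 5
implied_budget_tokens = tokens_consumed / (pct_points_consumed / 100)
print(int(implied_budget_tokens))  # 120000 - far below a nominal 200k window
```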
Likewise, usage is suddenly off the charts with zero change to workflow. Feels a little shady tbh.
⭐ TIPS:
I've been browsing similar area:cost issues about Claude this morning and have gathered a few tips in one place. These don't fix the issue entirely, but they've helped cut down my usage in the interim. Hope they help someone!
- direct Claude not to use the Task feature, and to use utility tools for common tasks. Add the --no-agent flag to your prompt as well (several users report that Claude is recruiting subagents without disclosing this to the user)
- e.g. "--no-agent for the remainder of our work in this session. DO NOT use the Task tool. Use Grep to find files, Read to check them, and Edit to fix them directly."
- use /config in the CLI to disable auto-compact, as this is another source of token overconsumption
If anyone else finds ways to reduce token consumption while we wait, feel free to share! 👍
Why is no one talking about this?
@grandtheftdisco yes just don't use any of CC's most useful features! Might I suggest that you guys troubleshoot this by using that one Claude Code trace tool (node version of CC required, iirc) to see if it's actual excessive token usage from broken internals or another shady limit change by Anthropic. Everybody knows this party is coming to an end, the question is when ;)
@transcendr Thanks for the tip and the feedback. I appreciate how friendly you are! Any chance you could explain how to use the trace tool or provide doc links to help those of us that are less enlightened?
From what I've seen, this doesn't look accidental. Anthropic is trying to attribute this to the instant auto-compact bug (?), yet even after that was rolled back, severe rate limits persist, and we've received zero communication since. We shouldn't be forced to alter our established workflows just to access the usage limits we previously enjoyed. At this point, the least we can do is cancel our subscriptions.
@fhdggervvtcusg Good point - we shouldn't have to alter our workflows, no. And I'm seriously considering canceling my subscription: whether they've silently decreased token limits or this is a systemic issue, it's ridiculous for anyone to pay money for it at this point.
Here's the trace utility for anyone who wants to dig in more: https://www.npmjs.com/package/@mariozechner/claude-trace
Interesting, still no official acknowledgement from Anthropic on this. Still experiencing severe usage limit reductions compared to ~a week ago on the Max plan.
Can someone from Anthropic please acknowledge this?
Multiple Max plan users are experiencing a drastic and sudden increase in usage consumption with no workflow changes. This has been ongoing for days, support is unresponsive, and there has been zero official communication.
Please confirm whether this is a known issue or a change in usage accounting. Paying customers deserve at least an acknowledgement.
All the while, the stock price is climbing. It's ridiculous!