[BUG] Exponential backoff is broken
Environment
- Platform (select one):
- [ ] Anthropic API
- [x] AWS Bedrock
- [ ] Google Vertex AI
- [ ] Other:
- Claude CLI version: 0.2.56
- Operating System: Linux
- Terminal: irrelevant
Bug Description
In 0.2.56, exponential backoff after HTTP 429 throttling is broken. In 0.2.55 and earlier it works as expected.
Steps to Reproduce
- Get throttled with an HTTP 429 response
Expected Behavior
The CLI retries automatically with exponential backoff and execution continues:
⎿ API Error (429 Too many tokens, please wait before trying again.) · Retrying in 1 seconds… (attempt 1/10)
⎿ API Error (429 Too many tokens, please wait before trying again.) · Retrying in 1 seconds… (attempt 2/10)
⎿ API Error (429 Too many tokens, please wait before trying again.) · Retrying in 2 seconds… (attempt 3/10)
⎿ API Error (429 Too many tokens, please wait before trying again.) · Retrying in 4 seconds… (attempt 4/10)
...
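For context, here is a minimal TypeScript sketch of the retry loop described above. This is not the CLI's actual implementation: `callApi` and `ApiError` are hypothetical stand-ins, and the 1s/1s/2s/4s schedule is just inferred from the log.

```typescript
// Hedged sketch only: `callApi` and `ApiError` are hypothetical stand-ins,
// not the Claude CLI's real internals.
class ApiError extends Error {
  constructor(public status: number, message: string) {
    super(message);
  }
}

// Placeholder for the real API call (which would hit the Anthropic/Bedrock endpoint).
async function callApi(): Promise<string> {
  throw new ApiError(429, "Too many tokens, please wait before trying again.");
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function callWithBackoff(maxAttempts = 10, baseDelayMs = 1000): Promise<string> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await callApi();
    } catch (err) {
      // Only retry 429s, and give up once the attempt budget is exhausted.
      if (!(err instanceof ApiError) || err.status !== 429 || attempt === maxAttempts) {
        throw err;
      }
      // 1s, 1s, 2s, 4s, ... doubling after the second attempt, matching the log above.
      const delayMs = baseDelayMs * 2 ** Math.max(0, attempt - 2);
      console.log(`API Error (429) · Retrying in ${delayMs / 1000} seconds… (attempt ${attempt}/${maxAttempts})`);
      await sleep(delayMs);
    }
  }
  throw new Error("unreachable");
}
```

The point of the report is that 0.2.56 never enters this loop: it surfaces the 429 once and stops.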
Actual Behavior
⎿ API Error: 429 Too many tokens, please wait before trying again.
Execution stops.
Additional Context
I am seeing the same behavior.
EDIT: the workaround that fixes this for me is downgrading to 0.2.55, the previous version:
npm install -g @anthropic-ai/claude-code@0.2.55
@severity1 can you run the new Claude 4 models with claude-code@0.2.55?
I have tried with older models and more recent versions and I'm not seeing this behavior anymore. I will try with the newer models and let you know what I get. Cheers!
If you can find the issue/workaround, it would also be great for https://github.com/anthropics/claude-code/issues/1293