Loop in certain models
Description
OC 1.0.10. I get loops with certain models (the model keeps retrying the same tool call/operation) for extended periods of time.
- KIMI K2 0905, MiniMax 2 - via chutes.ai
- GLM 4.6 via Z.ai coding plan.
Sometimes the /compact operation remedies it; sometimes the model needs to be stopped (via Esc, the abort-request operation).
It would be nice if you could introduce a setting for OC to invoke an autocompact operation at a certain context level (like in CC), circa 80% and above.
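Roughly the behavior I have in mind, as a sketch (the threshold and names below are just illustrative, not existing opencode settings):

```ts
// Hypothetical sketch -- not an existing opencode option.
const AUTOCOMPACT_THRESHOLD = 0.8; // compact at ~80% of the context window

function shouldAutoCompact(usedTokens: number, contextWindow: number): boolean {
  return usedTokens / contextWindow >= AUTOCOMPACT_THRESHOLD;
}

// e.g. with a 200k-token window, crossing 160k used tokens would trigger
// the same summarization that /compact does manually today.
if (shouldAutoCompact(165_000, 200_000)) {
  console.log("context above threshold, run /compact");
}
```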
OpenCode version
No response
Steps to reproduce
No response
Screenshot and/or share link
No response
Operating System
No response
Terminal
No response
This issue might be a duplicate of existing issues. Please check:
- #3444: GLM-4.6 model gets stuck in infinite loop, repeating same actions - describes the exact same GLM model behavior with infinite loops
- #3458: Duplicate Executions in Kimi K2 Model - reports similar repetitive behavior with KIMI K2 model
- #3561: Tool calls not parsing correctly mid conversation - mentions issues with both KIMI K2 and GLM models
Feel free to ignore if none of these address your specific case.
I had the same with glm-4.6
Under certain circumstances grok-code (grok code fast 1) can, rarely, get stuck in a thinking loop.
However, I'm not sure whether this is something OC could completely prevent or detect reliably.
So-called doom loops do exist in many models, maaaybe excluding Claude/GPT models (correct me if I'm wrong). I saw this happening with Gemini 2.5 Flash (severely), grok-code-fast-1, kimi-k2, and GLM-4.6 across different agentic coding tools, including but not limited to Roo Code and opencode.
Radically simplified examples (a rough detection sketch follows the list):
- Stuck logically
# thinking
- I should check A. let's try that.
- Wait, maybe I did B? That is more important.
- But I should check A first.
- To do that I need to make sure I did B right.
(and so on and on)
- Ever increasing tool calls
I need to find A's implementation. let's do that.
tool-call: read-file A.ts offset=1
let's increase that
tool-call: read-file A.ts offset=1, count=10
let's increase that
tool-call: read-file A.ts offset=1, count=20
let's increase that
tool-call: read-file A.ts offset=1, count=30
let's increase that
...
- Ping-pong tool calls
tool-call: Edit A to B
Oh i broke something lets fix that
tool-call: Edit B to A
Now again.
tool-call: Edit A to B
Oh i broke something lets fix that
Let's implement it again.
tool-call: Edit A to B
I made a mistake. reverting.
...
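Just to illustrate why these are hard to catch with exact matching, here's a minimal sketch of the kind of heuristic a harness could use, assuming a normalized fingerprint per call and a sliding window. This is my guess at the shape of it, not opencode's actual doom_loop code, and a real guard would also need to avoid flagging legitimate repeated edits.

```ts
// Illustration only -- not opencode's real doom_loop logic.
// Exact matching catches literal repeats, but the "ever increasing" and
// "ping-pong" patterns need a looser key (ignore volatile args like
// offset/count) plus a sliding window over recent calls.

interface ToolCall {
  tool: string;                   // e.g. "read-file", "edit"
  args: Record<string, unknown>;  // raw arguments from the model
}

// Key a call by tool + the arguments that identify what it touches,
// dropping counters that change on every retry.
function fingerprint(call: ToolCall): string {
  const stable = Object.entries(call.args)
    .filter(([k]) => !["offset", "count", "limit"].includes(k))
    .sort(([a], [b]) => a.localeCompare(b));
  return `${call.tool}:${JSON.stringify(stable)}`;
}

// Flag a loop when the last N calls keep cycling through only one or two
// distinct fingerprints (covers both literal repeats and A<->B ping-pong).
function looksLikeDoomLoop(history: ToolCall[], window = 8, maxDistinct = 2): boolean {
  if (history.length < window) return false;
  const recent = history.slice(-window).map(fingerprint);
  return new Set(recent).size <= maxDistinct;
}
```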
We did add some loop detection, but as @zenyr points out, there are still some circumstances where it won't quite work.
I can see mentions of doom_loop in other closed issues and in the codebase, and I also found the docs for it, but either it's still unclear to me how it works, or it doesn't work.
I asked a local model to:
Run four sequential instances of the Bash tool, each doing the same "ls ../questionable", which doesn't exist and fails.
I verified in the local server logs that all the arriving tool calls are exactly the same (12 tool calls in total; I tried multiple times), and no doom-loop prevention kicked in. Does the tool call itself have to fail somehow, instead of the actual command running and returning a failure exit code?
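To make the question concrete, a rough sketch of the two interpretations (the field names are mine and hypothetical, not the actual session/result schema):

```ts
// Field names are hypothetical, just to phrase the question.
interface BashResult {
  command: string;     // "ls ../questionable"
  exitCode: number;    // non-zero: the command ran and failed
  toolError: boolean;  // true only if the Bash tool call itself failed to execute
}

// Interpretation A: only tool-level failures count toward the loop guard,
// so 12 identical calls that merely exit non-zero never register.
const repeatsA = (results: BashResult[]) =>
  results.filter((r) => r.toolError).length;

// Interpretation B: identical calls count regardless of exit code,
// which is what I expected to trip the detection here.
const repeatsB = (results: BashResult[]) => {
  const counts = new Map<string, number>();
  for (const r of results) counts.set(r.command, (counts.get(r.command) ?? 0) + 1);
  return Math.max(0, ...counts.values());
};
```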
can u share the session?
opencode export > session.json
Sure, here's a clean session with the same result (running just the 4 tool calls in this case): session.json
Same with GLM 4.7 and opencode 1.0.142