Kilo Code appears to be stuck in a loop
Description
Kilo Code is having trouble... This may indicate a failure in the model's thought process or inability to use a tool properly, which can be mitigated with some user guidance (e.g. "Try breaking down the task into smaller steps").
Kilo Code appears to be stuck in a loop, attempting the same action (apply_diff) repeatedly. This might indicate a problem with its current strategy. Consider rephrasing the task, providing more specific instructions, or guiding it towards a different approach.
and it wastes money.
What model are you using?
Kilo Code is having trouble... This may indicate a failure in the model's thought process or inability to use a tool properly, which can be mitigated with some user guidance (e.g. "Try breaking down the task into smaller steps").
model o3
I'm having the same issue -- I used o3 and Grok 4. I haven't tested with other models, but the issue happens consistently, even with different prompts and different codebases.
Hi. Same issue for me on devstral-small-latest while trying to make it read a file.
Had the same issue when using Gemini 2.5 Pro to insert a table into a LaTeX file. It persists after switching to Claude Sonnet 4 and GPT-5. Guess it's something with the strategy.
Same issue with Qwen3 Coder
Kilo Code is having trouble... This may indicate a failure in the model's thought process or inability to use a tool properly, which can be mitigated with some user guidance (e.g. "Try breaking down the task into smaller steps").
Getting this back to back on GPT-5, but hitting Continue after the error works fine, so I think something needs updating on your end; this seems like a false flag on GPT-5.
Same issue with Qwen-3-Coder
This doesn't seem to be related to the specific model being used. I've had it all day today, it's been relentless to get past, and it's happening on multiple models across different APIs (Llama, GLM 4.5, GLM Flash, Grok Code Fast). It's becoming super annoying.
The error is:
No sufficiently similar match found at line: 299 (96% similar, needs 100%)
Debug Info:
- Similarity Score: 96%
- Required Threshold: 100%
- Search Range: starting at line 299
- Tried both standard and aggressive line number stripping
- Tip: Use the read_file tool to get the latest content of the file before attempting to use the apply_diff tool again, as the file content may have changed
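For context, here is a minimal sketch of the kind of check this error message describes; the function names, ratio, and threshold handling are illustrative assumptions, not Kilo Code's actual implementation. The idea: apply_diff compares the SEARCH block against the file content near the target line and refuses to patch unless the similarity meets the threshold, so with a 100% requirement any drift between the model's copy of the file and the file on disk (here 96% similar) rejects the edit and the model retries.

```typescript
// Illustrative only: a similarity-ratio gate like the one the debug output implies.
function levenshtein(a: string, b: string): number {
  // dp[i][j] = edit distance between a[0..i) and b[0..j)
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Similarity ratio in [0, 1]; 1.0 means an exact match.
function similarity(searchBlock: string, fileSlice: string): number {
  const maxLen = Math.max(searchBlock.length, fileSlice.length);
  return maxLen === 0 ? 1 : 1 - levenshtein(searchBlock, fileSlice) / maxLen;
}

const REQUIRED_THRESHOLD = 1.0; // the "needs 100%" from the debug output above

function canApplyDiff(searchBlock: string, fileSlice: string): boolean {
  return similarity(searchBlock, fileSlice) >= REQUIRED_THRESHOLD;
}
```

Under a gate like this, re-reading the file before retrying (as the tip suggests) is the only way to bring the SEARCH block back to an exact match.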
Having the same issue. It worked perfectly yesterday; this started after the update.
Kilo Code tried to use search_files without value for required parameter 'path'. Retrying...
Kilo Code is having trouble... This may indicate a failure in the model's thought process or inability to use a tool properly, which can be mitigated with some user guidance (e.g. "Try breaking down the task into smaller steps").
Almost unusable. Whatever you try to do, it ends up hitting this bug and halting.
Same thing here. I tried switching from Qwen to GLM-4.6 and the same issue happens. It happens kind of randomly, not because of heavy tasks.
Do you see improvement if you use JSON-style tool calling?
> Do you see improvement if you use JSON-style tool calling?
I will try switching to this; since it's experimental I hadn't used it. But this happens kind of randomly and has been hard to reproduce for some time. Something weird too: I can just change the model and it starts working again.
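For anyone unsure what the suggestion above refers to: with the default XML-style calls the model writes the tool invocation as tags inside its plain-text reply, which it can garble (for example the search_files call above that was missing its required path), while JSON-style / native tool calling has the provider return a structured arguments object. The snippet below is only a rough illustration; the tag names and payload shape are assumptions, not Kilo Code's exact wire format, and the file path is hypothetical.

```typescript
// Rough illustration of the two tool-calling styles (shapes are assumed, not exact).

// XML-style: the call is free-form text the model emits and the extension parses.
const xmlStyleCall = `
<read_file>
  <path>src/DeviceClient.ts</path>
</read_file>`;

// JSON-style / native tool calling: the API returns a parsed call object,
// which is typically easier to validate than free-form tags.
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

const jsonStyleCall: ToolCall = {
  name: "read_file",
  arguments: { path: "src/DeviceClient.ts" }, // hypothetical path
};
```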
I'm here with the same problem.
Kilo Code is having trouble... This may indicate a failure in the model's thought process or inability to use a tool properly, which can be mitigated with some user guidance (e.g. "Try breaking down the task into smaller steps").
and it repeats itself constantly without resolution.
Same issue with Qwen3 Coder: $8 wasted on an infinite file-reading loop. Encountered this bug today with Qwen3 Coder 480B.

What happened:
- Asked a simple question about a single uploaded TypeScript file: "how do I make this file more solid"
- The agent got stuck in an infinite reading loop:
  - Directory search for "DeviceClient"
  - Read the same file 4 times consecutively
  - Never provided an answer
- Generated 8.5 million tokens
- Cost: $7.59 for a query that should have been ~$0.10

Root cause appears to be:
- The agent accumulates the full conversation history with each tool call
- No token budget limits
- No cost circuit breakers
- No duplicate file read detection
This bug has been open for 4+ months and users are still losing money on simple queries. The pattern is clear across all models - this is a fundamental issue with the agent's tool-calling strategy, not model-specific behavior.
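To make the guards listed above concrete, here is a minimal sketch of what duplicate-read detection and a token circuit breaker could look like. All names, limits, and structure are hypothetical illustrations of the idea, not Kilo Code's actual code or a proposed patch.

```typescript
// Hypothetical guards: stop a runaway loop before it burns millions of tokens.
interface GuardLimits {
  maxTokens: number;      // hard token budget for a single task
  maxRepeatReads: number; // how many times the same file may be re-read
}

class LoopGuard {
  private tokensUsed = 0;
  private readCounts = new Map<string, number>();

  constructor(private limits: GuardLimits) {}

  // Call after every model response with the tokens it consumed.
  recordTokens(count: number): void {
    this.tokensUsed += count;
    if (this.tokensUsed > this.limits.maxTokens) {
      throw new Error(`Token budget exceeded: ${this.tokensUsed} > ${this.limits.maxTokens}`);
    }
  }

  // Call before executing a file-read tool call.
  recordRead(path: string): void {
    const n = (this.readCounts.get(path) ?? 0) + 1;
    this.readCounts.set(path, n);
    if (n > this.limits.maxRepeatReads) {
      throw new Error(`Loop suspected: ${path} read ${n} times without progress`);
    }
  }
}

// Example wiring inside a tool-call loop (limits are arbitrary):
const guard = new LoopGuard({ maxTokens: 500_000, maxRepeatReads: 3 });
guard.recordTokens(12_000);
guard.recordRead("src/DeviceClient.ts"); // hypothetical path from the report above
```

With limits like these, the loop described above would have been cut off after a few repeated reads rather than after 8.5 million tokens.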