
Stuck in a loop of "You have sent the same message again. You have to do something else!"

Open ovizii opened this issue 10 months ago • 8 comments

I didn't pay attention to a0 for a few minutes, and it got stuck in a loop. It looks like this. This went on for quite some time until I noticed. :-(

[screenshot of the repeated "You have sent the same message again" responses]

ovizii avatar Feb 20 '25 15:02 ovizii

Running into the same problem. I used DeepSeek 1.5b with Ollama. When I send a message, this is what I get. Did anyone find a solution?

riskyrun avatar Feb 26 '25 13:02 riskyrun

Oh, in case it matters, I am only connected to OpenAI via API key, no local LLMs.

ovizii avatar Feb 26 '25 14:02 ovizii

Same here, running Ollama as a local LLM.

luisernestopoland avatar Mar 19 '25 10:03 luisernestopoland

Same here. I am using different LLMs, no local LLM.

mickogoin avatar Mar 31 '25 07:03 mickogoin

This is driving me crazy; I've seen it happen a lot. It mostly occurs when the embedding model glitches, especially if it goes offline for a moment. Then it spirals into a dark loop, sending hundreds of thousands of API prompts repeatedly. I tried setting a maximum of 400 lines per prompt, but it struggles to detect the critical error and roll back ten minutes or take a simple crash-recovery path. I've already rebuilt the Docker setup three times 😞 and neither a reset nor a reboot helps. It's as if it scans its memory files and falls back into the loop. Clearing all memories doesn't help either, because some state remains in cache files inside the core build.

Otherwise, it’s incredible. I even told it, “There’s a critical power fault in 2 minutes. Send me all memories in text so I can paste them back after recovery.” That worked well—it’s almost like generating a custom image based on its own output in a fresh setup.

scottnzuk avatar Apr 03 '25 11:04 scottnzuk

I'm a new user and have run into the exact same issue when using ChatGPT via API key.

I originally asked a0 through ChatGPT to scaffold an Adobe Commerce module in /root

It was wonderful to see it generate all the different files it needed, in several separate requests that it ran back to back.

First, it created code without any single quotes:

For example: $this->_redirect(//*);

which is syntactically incorrect and unusable.

I then told it about this issue, so it tried to add them, but that resulted in code where the single quotes themselves were escaped:

For example: $this->_redirect(\'//*\');

I then told it that wasn't going to work and that it was breaking the code. When trying to correct itself, it kept trying to run a sed command to remove the escapes over and over and over again. It kept reporting that it had "fixed" the issue. I kept telling it no, when you look at the contents of the file the escapes are still there. It would ALWAYS end up in that loop of "You have to do something else!" messages, because it would ultimately just re-run the sed command and never try anything else, even when directed to do so.
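(For anyone hitting the same thing: a single manual sed pass along these lines does strip backslash-escaped single quotes. This is only a sketch; the file path is a placeholder for whatever file a0 generated, and it's a workaround for the broken file, not a fix for the loop itself.)

  # replace literal \' with ' in place (GNU sed; path is a placeholder)
  sed -i "s/\\\\'/'/g" app/code/Vendor/Module/Controller/Index.php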

The only fix was to pause the agent and clear the chat. I then kept asking it to do the same thing with various other prompts. It always fell back to the same approach and always ended up in the same loop. It was completely unable to handle the escaped-quote issue.

It appears this may be a flaw in how code received from the AI APIs is interpreted/implemented. I ran into this quote-escaping issue every time I asked it to generate code for me: it would either not put in ANY single quotes, or it would escape them.

Big bummer. Hopefully this is corrected in the future, as it ultimately ends up producing unusable code, and the agent cannot correct the issue on its own.

rnicklin avatar Jun 03 '25 07:06 rnicklin

I don't use it anymore, but try the free Scout model at openrouter.ai. It has a 1 million token context, one of the biggest currently; not the smartest, but in effect the most memory. https://openrouter.ai/meta-llama/llama-4-scout:free

Maverick is smaller but much smarter: https://openrouter.ai/meta-llama/llama-4-maverick:free
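If you want to sanity-check either model outside of a0 first, OpenRouter exposes an OpenAI-compatible endpoint, so a quick curl like this should work (put your own key in OPENROUTER_API_KEY; the prompt is just a placeholder):

  # minimal smoke test against OpenRouter's OpenAI-compatible chat endpoint
  curl https://openrouter.ai/api/v1/chat/completions \
    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model": "meta-llama/llama-4-scout:free", "messages": [{"role": "user", "content": "ping"}]}'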

scottnzuk avatar Jun 03 '25 09:06 scottnzuk

I had the exact same problem and this is what finally worked for me:

Settings -> Agent Settings -> Chat Model:

  • Chat model provider: LM Studio
  • Chat model name: openai/gpt-oss-20b (use whatever your actual model name is)
  • Chat model API base URL: http://192.168.1.69:1234/v1 (Notice the /v1 at the end!!!)
  • Context length: 12000 (anything above 10k should work)
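If you're not sure the base URL is right, a quick check before pointing a0 at it is to hit the models endpoint from the machine running a0 (swap in your own LM Studio host and port):

  # should return a JSON list of loaded models if the /v1 base URL is correct
  curl http://192.168.1.69:1234/v1/models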

KutsuyaYuki avatar Aug 29 '25 12:08 KutsuyaYuki