
Massive slowdown and auto-compact problems

Open cranyy opened this issue 3 weeks ago • 2 comments

What version of Codex is running?

codex-cli 0.72.0

What subscription do you have?

Pro

Which model were you using?

5.2 high/extrahigh/medium

What platform is your computer?

Microsoft Windows NT 10.0.26100.0 x64

What issue are you seeing?

It's so slow that it's essentially unusable, compounded massively by auto-compact. So often it works for 4-5 minutes, then magically decides to auto-compact, and then it starts all over again and works for 16-20 minutes or more, as below. Even the simplest of tasks take 3-4 minutes.


• Ran echo '--- claude_event_93292 ---'; wc -l /tmp/cox_autobf_claude_event_93292.txt; head -n 40 /tmp/cox_autobf_claude_event_93292.txt;
  │ echo '--- tail ---'; tail -n 40 /tmp/cox_autobf_claude_event_93292.txt
  └ --- claude_event_93292 ---
    12 /tmp/cox_autobf_claude_event_93292.txt
    … +23 lines
     Esc to cancel


─ Worked for 6m 09s ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

• Context compacted

• Explored
  └ List ls -la
    Read AGENTS.md
    List lib

and we are STILL running:

• Explored
  └ Read 25-autobf-menu.sh

◦ Updating print event (28m 19s • esc to interrupt)


What steps can reproduce the bug?

Just use it on any large repo with 1 or 2 requirements in your AGENTS.md.

What is the expected behavior?

At least 3x faster, and no auto-compact; any such feature should be easy to disable, yet no such option exists among the / commands.

Additional information

No response

cranyy avatar Dec 14 '25 10:12 cranyy

Potential duplicates detected. Please review them and close your issue if it is a duplicate.

  • #7991

Powered by Codex Action

github-actions[bot] avatar Dec 14 '25 10:12 github-actions[bot]

Please use the /feedback command to upload a session where you've seen this behavior and paste the thread ID here.

Auto compaction is used only when you're close to running out of context window space. Without auto compaction, the session would not be able to continue.

If you're using most of your context window in just 4 or 5 minutes, that is not typical. It's likely that there's something about your prompt, your AGENTS.md, or your configuration that's causing this.
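
One way to confirm whether you're actually near the limit is the /status slash command (assuming your build includes it), which reports the current session's model, reasoning effort, and context-window usage:

  /status   (check the token usage relative to the context window)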

etraut-openai avatar Dec 14 '25 17:12 etraut-openai

@etraut-openai thank you for the response. This happens on every single one of my projects, even one where my AGENTS.md is stripped to the bare minimum. On 5.1 xhigh in identical setups it never took this much time, and even accounting for the increased intelligence, the slowdown is at least 3-4x. The thread ID below is from a completely empty folder, no AGENTS.md or anything, where I had Codex make a trivia game and then change its colors. Creating the whole file took 12m, and the color edit took another 12m in a subsequent session. It did, admittedly, make a pretty solid game, zero-shot at that, so that's dope. -- 019b201a-4e15-7dd3-afc4-468b4e4ec996

And this one, 019b2064-99bc-7df1-9942-0f2d22bbb0ed, is with a minimal AGENTS.md, again in an empty repo, where it only had to act as a reviewer for code snippets.

Neither of these, however, managed to reach auto-compact. But on every one of my real projects, which have 30-40 files at the lower end, plus a basic requirement in AGENTS.md that nudges the model to hopefully not ruin any other part of the code, hitting auto-compact and 30-40 minute responses within a single prompt is now basically a common occurrence.

cranyy avatar Dec 16 '25 00:12 cranyy

Thanks for uploading those sessions.

I think the problem is that you're using "xhigh" reasoning effort. That setting may be useful for complex refactoring or design problems, but it's not needed for the type of problems you're asking the model to solve here. That explains why you're consuming so many tokens and seeing such long completion times. I recommend using the default reasoning level ("medium") for these types of problems.
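
For reference, a minimal sketch of how to pin that setting, assuming the config key is still named model_reasoning_effort in current builds; the same value can also be passed as a one-off -c override:

  # ~/.codex/config.toml
  model_reasoning_effort = "medium"

  # or per invocation:
  codex -c model_reasoning_effort="medium"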

etraut-openai avatar Dec 16 '25 00:12 etraut-openai