[Bug]: Critical CPU Overload (~400%) Caused by Scheduler/Queue in Coolify v4.2.1-19679516649

Hello Coolify Team,
Error Message and Logs
I've encountered a critical CPU overload issue that has threatened the stability of my server. I have isolated the problem to the Scheduler/Horizon (Queue) background processes.
Affected Coolify Version: v4.2.1-19679516649
Symptoms & Diagnosis
Massive CPU Overload:
After startup, the 'coolify' container enters an infinite loop in its background processes, causing extremely high CPU usage.
docker stats shows the 'coolify' container at ~382% CPU and 'coolify-db' at ~224% CPU.
The overall system load (uptime) reached 6.31 (leading to full VPS saturation).
Scheduler Isolated
The Scheduler starts (via the default /init command), runs a few tasks, and then enters a loop.
The configuration variable COOLIFY_SELF_HOSTED_SERVER_SCHEDULER_ENABLED=false is either ignored or the bug is triggered before it is processed, as the high CPU issue persisted after setting it.
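For reference, here is roughly how I applied the flag (a sketch assuming the variable is read from the stack's .env in /data/coolify/source; I have not verified how Coolify parses it internally):

```sh
# Set the scheduler flag in the stack's .env (append it if absent),
# then recreate the containers so the new value is actually loaded.
cd /data/coolify/source
if grep -q '^COOLIFY_SELF_HOSTED_SERVER_SCHEDULER_ENABLED=' .env; then
  sed -i 's/^COOLIFY_SELF_HOSTED_SERVER_SCHEDULER_ENABLED=.*/COOLIFY_SELF_HOSTED_SERVER_SCHEDULER_ENABLED=false/' .env
else
  echo 'COOLIFY_SELF_HOSTED_SERVER_SCHEDULER_ENABLED=false' >> .env
fi
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --force-recreate
```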
Secondary Consequence Confirmed
This bug previously led to uncontrolled database growth: the PostgreSQL sessions table reached 58 GB. I manually truncated this table, which solved the disk space issue, but the CPU bug remains.
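For anyone hitting the same disk growth, this is roughly how I cleaned it up (a sketch; "coolify" as the database and user name is an assumption from a default self-hosted install, so verify yours first):

```sh
# Sketch: inspect and then truncate the runaway sessions table inside coolify-db.
docker exec coolify-db psql -U coolify -d coolify \
  -c "SELECT pg_size_pretty(pg_total_relation_size('sessions'));"
docker exec coolify-db psql -U coolify -d coolify -c 'TRUNCATE TABLE sessions;'
```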
Steps to Reproduce

1. Preparation: Ensure your self-hosted Coolify stack is using the image version v4.2.1-19679516649 and that all containers are stopped.

2. Start Stack: Start the stack using the standard production command:

   ```sh
   cd /data/coolify/source/
   docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
   ```

3. Monitor Load: Immediately check the container CPU usage (a one-shot variant is sketched after these steps):

   ```sh
   docker stats
   ```

4. Observe Bug: The coolify container's CPU usage will immediately jump to ~380-400%, confirming the infinite loop in the Scheduler/Horizon processes.

5. Confirm System Overload:

   ```sh
   uptime
   ```

   The load average will rise to 6.00 and above.
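To capture the overload evidence in one shot, a minimal sketch using only standard Docker and coreutils commands (the container names are taken from the default stack):

```sh
# One-shot snapshot instead of the live docker stats view:
# per-container CPU/memory, followed by the host load average.
docker stats --no-stream --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}' coolify coolify-db
uptime
```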
Thank you for your prompt attention to this critical issue.
Example Repository URL
No response
Coolify Version
v4.0.0-beta.451
Are you using Coolify Cloud?
No (self-hosted)
Operating System and Version (self-hosted)
Debian GNU/Linux 12 (bookworm)
Additional Information
No response
Duplicate Detection
ℹ️ This issue may be similar to coollabsio/coolify#5651.
Related Issues
- coollabsio/coolify#5676: [Bug]: Coolify Container High CPU & Memory Usage
- coollabsio/coolify#5651: [Bug]: CPU usage spike every minute from php Laravel Horizon
- coollabsio/coolify#5585: [Bug]: Horizon not processing jobs specifically on the high queue, even if there are more than 40 processes.
- coollabsio/coolify#5584: [Bug]: Horizon not processing the high queue despite having over 50 workers and multiple supervisors
- coollabsio/coolify#5768: [Bug]: High CPU from Coolify Redis Container
I am a bit confused. Coolify doesn't have a v4.2.1-19679516649 version. The latest version is v4.0.0-beta.452. Could you clarify where you are reading that version number from?
Ok, I am sorry. The version was v4.0.0-beta.451.
I am facing the same issue; after updating to the latest version, the queue is getting delayed a lot.
I experienced the same issue where my VPS CPU was running at 99.9%. To resolve it, I had to uninstall Coolify completely before proceeding with further troubleshooting.
Same here. CPU stuck at 100% since today.
What exactly is the solution? I’m considering looking for an alternative to Coolify because I’m still experiencing the same issue.
It's not a Coolify issue. One of the services you use got hacked via the Next.js CVE that was going around in the last few days; in my case it was Umami, which was already patched. You'll have to find which service on your list is using Next.js and update it (or your own application, if you're using Next.js). You'll also likely have to rotate all the secrets from the particular container that was compromised.
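A rough way to spot which containers might be running Next.js (a sketch assuming the server process name contains "next"; it won't catch stopped containers or renamed binaries):

```sh
# Grep each running container's process list for a Next.js server process.
for id in $(docker ps -q); do
  name=$(docker inspect --format '{{.Name}}' "$id")
  if docker top "$id" 2>/dev/null | grep -qiE 'next-server|next start'; then
    echo "Possible Next.js process in ${name#/}"
  fi
done
```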
Thanks for the insight. In my case, the CPU overload was coming from a container running Laravel, and I’m not even using that in any of my projects. It looks like something might have been compromised on my end as well.
Can you advise on how I can properly clean this up, secure my server, and prevent this kind of issue from happening again? I also want to make sure I’m not leaving any backdoors or malicious containers running.
I don't know how your Laravel app issue can be related to the Next.js CVE, unless of course the app runs both Laravel and Next.js. You should probably join Coolify's Discord channel and ask for support there.
Coolify is built with PHP Laravel, so the Laravel CPU spike issue is definitely related to Coolify. But what is the solution? This issue took down our entire VPS.
I am also facing Laravel spikes.
Could you guys check your steal % on your VPS? We've been getting reports that some server providers are overloading their servers.

```sh
mpstat -P ALL 1
```
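%steal is the share of time the hypervisor ran someone else's workload instead of your vCPU. A quick one-shot average (the %steal column position can vary between sysstat versions, so check it against the header first):

```sh
# Average steal% over five 1-second samples; consistently high values
# mean the host is oversubscribed, not that your workload is broken.
mpstat 1 5 | awk '$1 == "Average:" && $2 == "all" { print "steal%:", $9 }'
```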
Same issue here. It is a huge problem; a brand-new install does the same thing. I installed Dokploy to see if I got the same results, and it worked great, so I went back to Coolify and hit the same issue right away.
```
12/11/25  x86_64  (2 CPU)

19:37:33  CPU  %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
19:37:34  all  5.47  0.50   3.98  0.00     0.00  0.50   89.55   0.00    0.00    0.00
19:37:34    0  5.88  0.98   3.92  0.00     0.00  0.98   88.24   0.00    0.00    0.00
19:37:34    1  5.05  0.00   4.04  0.00     0.00  0.00   90.91   0.00    0.00    0.00

19:37:34  CPU  %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
19:37:35  all  6.39  0.00   3.20  0.00     0.00  0.00   90.41   0.00    0.00    0.00
19:37:35    0  5.50  0.00   4.59  0.00     0.00  0.00   89.91   0.00    0.00    0.00
19:37:35    1  7.27  0.00   1.82  0.00     0.00  0.00   90.91   0.00    0.00    0.00

19:37:35  CPU  %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
19:37:36  all  6.53  0.00   2.51  0.00     0.00  0.00   90.95   0.00    0.00    0.00
19:37:36    0  7.00  0.00   2.00  0.00     0.00  0.00   91.00   0.00    0.00    0.00
19:37:36    1  6.06  0.00   3.03  0.00     0.00  0.00   90.91   0.00    0.00    0.00

19:37:36  CPU  %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
19:37:37  all  5.94  0.00   3.96  0.00     0.00  0.50   89.60   0.00    0.00    0.00
19:37:37    0  8.00  0.00   2.00  0.00     0.00  0.00   90.00   0.00    0.00    0.00
19:37:37    1  3.92  0.00   5.88  0.00     0.00  0.98   89.22   0.00    0.00    0.00

19:37:37  CPU  %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
19:37:38  all  8.50  0.00   1.00  0.00     0.00  0.00   90.50   0.00    0.00    0.00
19:37:38    0  8.00  0.00   1.00  0.00     0.00  0.00   91.00   0.00    0.00    0.00
19:37:38    1  9.00  0.00   1.00  0.00     0.00  0.00   90.00   0.00    0.00    0.00

19:37:38  CPU  %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
19:37:39  all  6.03  0.00   4.02  0.00     0.00  0.00   89.95   0.00    0.00    0.00
19:37:39    0  8.00  0.00   3.00  0.00     0.00  0.00   89.00   0.00    0.00    0.00
19:37:39    1  4.04  0.00   5.05  0.00     0.00  0.00   90.91   0.00    0.00    0.00
```
Your steal% is at ~90%, i.e. your 2 vCPUs are effectively only ~0.2 CPU (2 × (1 − 0.90)). I would suggest changing hosting provider.
@thibault-peronno @rudro-gt @ufuah @abdulmejidshemsuawel could you check that as well?
```
Average:  CPU  %usr   %nice  %sys   %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
Average:  all  47.22  0.75   11.60  0.04     0.00  3.71   0.00    0.00    0.00    36.68
Average:    0  47.06  0.77   11.51  0.04     0.00  3.78   0.00    0.00    0.00    36.85
Average:    1  47.32  0.69   11.66  0.04     0.00  3.65   0.00    0.00    0.00    36.64
Average:    2  47.82  0.58   11.26  0.05     0.00  3.78   0.00    0.00    0.00    36.50
Average:    3  46.67  0.96   11.96  0.05     0.00  3.63   0.00    0.00    0.00    36.74
```
Issue Report – High CPU Usage Debugging Experience

I encountered the same issue with Coolify consuming 100% of my CPU. I spent around five hours debugging the Coolify application, thinking the problem was within it. However, the root cause was actually my environment.

Here's what happened: today at 11 AM, one of my machines reached 100% CPU usage. Upon investigation, I discovered that I was affected by the Next.js vulnerability (CVE). Accepting the situation, I uninstalled everything from my VPS, reset it completely, and changed all credentials for other services that were stored in the .env files of my applications on this VPS. Up to that point, everything was fine.

Since it was Saturday and I wanted to take the opportunity to do some other work, I quickly deployed a fresh Coolify instance on this VPS using version v4.0.0-beta.444, which I also use on several other VPSs. After starting the service, I noticed significant system slowness. Checking the VPS, the CPU was again at 100%. I initially feared another infection, but upon closer inspection, the processes consuming all the CPU were PHP processes from Coolify.

I attempted extensive debugging to identify the issue. Eventually, I discovered that my VPS provider had imposed CPU usage limits. Because of this, even a small Coolify instance could saturate the VPS resources and make it look as if the application was malfunctioning. To resolve the issue, I adjusted the resource limitations in my VPS provider's settings (in this case, Hostinger), which had been set following the previous security incident. After this adjustment, everything returned to normal.

Currently, I am using Coolify v4.0.0-beta.444, and everything has been stable for some time. I also know colleagues using v4.0.0-beta.453 whose applications are running fine.

I hope this report can help both the maintainers of Coolify and other developers encountering similar high CPU usage issues.
Before removing the limitation: (screenshot not preserved)

After removing the limitation:
```
$ mpstat -P ALL 1
Linux 6.1.0-41-cloud-amd64 (vps-2370f534)  12/17/25  x86_64  (8 CPU)

14:13:24  CPU  %usr   %nice  %sys   %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
14:13:25  all  14.38  0.00   5.17   0.00     0.00  0.63   0.00    0.00    0.00    79.82
14:13:25    0  11.76  0.00   3.92   0.00     0.00  0.98   0.00    0.00    0.00    83.33
14:13:25    1  38.38  0.00   18.18  0.00     0.00  0.00   0.00    0.00    0.00    43.43
14:13:25    2  1.00   0.00   5.00   0.00     0.00  1.00   0.00    0.00    0.00    93.00
14:13:25    3  9.18   0.00   5.10   0.00     0.00  1.02   0.00    0.00    0.00    84.69
14:13:25    4  33.67  0.00   5.10   0.00     0.00  1.02   0.00    0.00    0.00    60.20
14:13:25    5  7.14   0.00   0.00   0.00     0.00  0.00   0.00    0.00    0.00    92.86
14:13:25    6  7.07   0.00   1.01   0.00     0.00  1.01   0.00    0.00    0.00    90.91
14:13:25    7  7.07   0.00   3.03   0.00     0.00  0.00   0.00    0.00    0.00    89.90

14:13:25  CPU  %usr   %nice  %sys   %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
14:13:26  all  6.15   0.00   2.38   0.00     0.00  0.25   0.00    0.00    0.00    91.22
14:13:26    0  3.00   0.00   4.00   0.00     0.00  1.00   0.00    0.00    0.00    92.00
14:13:26    1  7.00   0.00   2.00   0.00     0.00  1.00   0.00    0.00    0.00    90.00
14:13:26    2  5.00   0.00   3.00   0.00     0.00  0.00   0.00    0.00    0.00    92.00
14:13:26    3  3.92   0.00   1.96   0.00     0.00  0.00   0.00    0.00    0.00    94.12
14:13:26    4  10.10  0.00   2.02   0.00     0.00  0.00   0.00    0.00    0.00    87.88
14:13:26    5  11.22  0.00   1.02   0.00     0.00  0.00   0.00    0.00    0.00    87.76
14:13:26    6  7.07   0.00   3.03   0.00     0.00  0.00   0.00    0.00    0.00    89.90
14:13:26    7  2.02   0.00   2.02   0.00     0.00  0.00   0.00    0.00    0.00    95.96

14:13:26  CPU  %usr   %nice  %sys   %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
14:13:27  all  15.21  0.00   4.69   0.00     0.00  0.63   0.00    0.00    0.00    79.47
14:13:27    0  13.13  0.00   7.07   0.00     0.00  1.01   0.00    0.00    0.00    78.79
14:13:27    1  23.23  0.00   6.06   0.00     0.00  1.01   0.00    0.00    0.00    69.70
14:13:27    2  21.43  0.00   4.08   0.00     0.00  1.02   0.00    0.00    0.00    73.47
14:13:27    3  7.22   0.00   2.06   0.00     0.00  1.03   0.00    0.00    0.00    89.69
14:13:27    4  16.83  0.00   4.95   0.00     0.00  0.00   0.00    0.00    0.00    78.22
14:13:27    5  13.13  0.00   6.06   0.00     0.00  0.00   0.00    0.00    0.00    80.81
14:13:27    6  14.14  0.00   2.02   0.00     0.00  1.01   0.00    0.00    0.00    82.83
14:13:27    7  12.37  0.00   5.15   0.00     0.00  0.00   0.00    0.00    0.00    82.47
```
I am also facing the same issue. I suddenly got a CPU spike up to 100%, so I completely uninstalled Coolify and got my CPU back to normal. Then I did a fresh install of Coolify and suddenly got a CPU spike up to 400%, even though I wasn't running any services. I think it's time to leave Coolify.