
[Bug]: "coolify-db" container is just gone

Open SeriousM opened this issue 2 weeks ago • 10 comments

Error Message and Logs

For the past two days, the "coolify-db" container has been periodically removed. I don't know why or what is triggering this. The logs show: SQLSTATE[08006] [7] could not translate host name "coolify-db" to address: Name does not resolve

My guess is that the problem started with the last update, from v4.0.0-beta.451 to v4.0.0-beta.452.

To fix the issue I followed the troubleshooting guide and executed:

# access to /data/coolify/source requires root
sudo su

cd /data/coolify/source/
# recreate and start the Coolify core containers
docker compose --env-file .env -f docker-compose.yml -f docker-compose.prod.yml up -d

The container is recreated and started. After that everything is working again.
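
To double-check that everything actually recovered, I run something like this (just my own sanity check, assuming the default coolify-* container names):

# list the Coolify containers and their status
docker ps --filter "name=coolify" --format "table {{.Names}}\t{{.Status}}"
# the db container should report as healthy again
docker inspect --format '{{.State.Health.Status}}' coolify-db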

Please help me to identify this problem.

Steps to Reproduce

  1. update from v4.0.0-beta.451 to v4.0.0-beta.452
  2. wait till morning
  3. "coolify-db" container is removed

Example Repository URL

No response

Coolify Version

v4.0.0-beta.452

Are you using Coolify Cloud?

No (self-hosted)

Operating System and Version (self-hosted)

Debian GNU/Linux 12 (bookworm)

Additional Information

No response

SeriousM avatar Dec 04 '25 09:12 SeriousM

Same problem here: the coolify-db container disappears and then the dashboard is not accessible. I reran the installer and the containers came back. Restarting them as noted above has the same effect.

christopherpickering avatar Dec 04 '25 13:12 christopherpickering

This has been a problem for a while; it's usually the redis container not starting properly. We aren't sure what exactly causes this issue, as it's not easily reproducible. For some reason the update from v4.0.0-beta.451 to v4.0.0-beta.452 triggered it a lot for people this time. I would assume it's because of the migrations.

If affected people could share their update and installation logs under /data/coolify/source that could potentially help figuring out what exactly causes this.

Cinzya avatar Dec 04 '25 18:12 Cinzya

I went through the last few update log files (not sure which one was the cause), so here are a few that contained an error message:

Creating backup of existing .env file to .env-2025-12-03-06-00-23
Merging .env.production values into .env
.env file merged successfully
Checking and updating environment variables if necessary...
 Container coolify-realtime  Recreate
 Container coolify-redis  Stopping
 Container coolify-db  Stopping
 Container bfe57dc212d7_coolify-redis  Recreate
 Container 9b6d2f208f89_coolify-db  Recreate
 Container bfe57dc212d7_coolify-redis  Error response from daemon: Error when allocating new name: Conflict. The container name "/coolify-redis" is already in use by container "bfe57dc212d7ab9fdbf8ccdadfaacc28fb0689704dc53e84e160d9ab11dd48b3". You have to remove (or rename) that container to be able to reuse that name.
 Container 9b6d2f208f89_coolify-db  Error response from daemon: Error when allocating new name: Conflict. The container name "/coolify-db" is already in use by container "9b6d2f208f899cd96ccf67fec12143f74c1886fd4ee67818c15d98f365126373". You have to remove (or rename) that container to be able to reuse that name.
 Container coolify-realtime  Recreated
 Container coolify-db  Stopped
 Container coolify-db  Removing
 Container coolify-db  Removed
 Container coolify-redis  Error while Stopping
Error response from daemon: Error when allocating new name: Conflict. The container name "/coolify-db" is already in use by container "9b6d2f208f899cd96ccf67fec12143f74c1886fd4ee67818c15d98f365126373". You have to remove (or rename) that container to be able to reuse that name.
exit status 1

or

Creating backup of existing .env file to .env-2025-12-02-06-00-20
Merging .env.production values into .env
.env file merged successfully
Checking and updating environment variables if necessary...
Creating backup of existing .env file to .env-2025-12-02-06-00-20
Merging .env.production values into .env
.env file merged successfully
Checking and updating environment variables if necessary...
 Container coolify-realtime  Recreate
 Container coolify-redis  Recreate
 Container coolify-db  Recreate
 Container coolify-redis  Recreate
 Container coolify-realtime  Recreate
 Container coolify-db  Recreate
 Container coolify-db  Error response from daemon: Conflict. The container name "/9b6d2f208f89_coolify-db" is already in use by container "5257dbb408de5c5c873e4b847f1b3e6b7d570cfa9a0228ba7d515cf26b4ce195". You have to remove (or rename) that container to be able to reuse that name.
Error response from daemon: Conflict. The container name "/9b6d2f208f89_coolify-db" is already in use by container "5257dbb408de5c5c873e4b847f1b3e6b7d570cfa9a0228ba7d515cf26b4ce195". You have to remove (or rename) that container to be able to reuse that name.
exit status 1
 Container coolify-realtime  Error response from daemon: Conflict. The container name "/c6bf08f52fe7_coolify-realtime" is already in use by container "44ccfef60040229e55fb032c2d8403b19816c4e43fb7d2a415e6de3ca25c7827". You have to remove (or rename) that container to be able to reuse that name.
Error response from daemon: Conflict. The container name "/c6bf08f52fe7_coolify-realtime" is already in use by container "44ccfef60040229e55fb032c2d8403b19816c4e43fb7d2a415e6de3ca25c7827". You have to remove (or rename) that container to be able to reuse that name.
exit status 1

successful ones look like this:

Merging .env.production values into .env
.env file merged successfully
Checking and updating environment variables if necessary...
 Container coolify-db  Recreate
 Container coolify-realtime  Recreate
 Container 5c4e5f5b7c26_coolify-redis  Recreate
 Container 5c4e5f5b7c26_coolify-redis  Recreated
 Container coolify-db  Recreated
 Container coolify-realtime  Recreated
 Container coolify  Recreate
 Container coolify  Recreated
 Container coolify-redis  Starting
 Container coolify-db  Starting
 Container coolify-realtime  Starting
 Container coolify-realtime  Started
 Container coolify-db  Started
 Container coolify-redis  Started
 Container coolify-realtime  Waiting
 Container coolify-db  Waiting
 Container coolify-redis  Waiting
 Container coolify-db  Healthy
 Container coolify-realtime  Healthy
 Container coolify-redis  Healthy
 Container coolify  Starting
 Container coolify  Started
 Container coolify-realtime  Waiting
 Container coolify  Waiting
 Container coolify-db  Waiting
 Container coolify-redis  Waiting
 Container coolify-db  Healthy
 Container coolify-redis  Healthy
 Container coolify-realtime  Healthy
 Container coolify  Healthy

christopherpickering avatar Dec 04 '25 23:12 christopherpickering

This feels like some kind of race condition where Docker hasn't released the old container names yet before Compose tries to recreate the containers (note: names are deterministic here, so the container name should be the same).

I mentioned this in a past thread: you can run into issues because teardown does not happen in reverse dependency order, so Coolify could still be trying to use the db container while you're taking it down, etc.

IMO you should just run compose down first if you want to recreate:

docker compose down && docker compose up -d --remove-orphans --wait --wait-timeout 60, or even compose stop coolify followed by compose down, to prevent dependency issues.

https://github.com/coollabsio/coolify/blob/a528f4c3d1256cc6d007e0aa093deb0deba6b947/scripts/upgrade.sh#L69-L71

Or even just compose up, since I can't see a reason why you would want to recreate the db or redis containers if their image hasn't changed; let Compose handle recreation only when required.
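
As a concrete sequence, something like this is what I mean (a rough sketch, assuming the standard /data/coolify/source layout and the same compose files the upgrade script uses):

cd /data/coolify/source/
# stop the app first so nothing is still talking to the db during teardown
docker compose --env-file .env -f docker-compose.yml -f docker-compose.prod.yml stop coolify
# then take everything down cleanly before bringing it back up
docker compose --env-file .env -f docker-compose.yml -f docker-compose.prod.yml down
docker compose --env-file .env -f docker-compose.yml -f docker-compose.prod.yml up -d --remove-orphans --wait --wait-timeout 60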

djsisson avatar Dec 05 '25 17:12 djsisson

Had the same exact issue on version v4.0.0-beta.452.

Realalirezayazdanpanah avatar Dec 06 '25 00:12 Realalirezayazdanpanah

If affected people could share their update and installation logs under /data/coolify/source that could potentially help figuring out what exactly causes this.

of course, here you go!

Logs.zip

A few highlights:

a lot of toomanyrequests errors:

toomanyrequests: retry-after: 916.659µs, allowed: 44000/minute
exit status 1

missing coolify-realtime container?

# upgrade-2025-12-03-00-00-20.log
Creating backup of existing .env file to .env-2025-12-03-00-00-20
Merging .env.production values into .env
.env file merged successfully
Checking and updating environment variables if necessary...
 Container coolify-realtime  Recreate
 Container coolify-redis  Recreate
 Container coolify-db  Recreate
 Container coolify-realtime  Error response from daemon: No such container: 97de9a8d7f1bfb6b382c20b5a6287d3c58d8225a28324c44f7f1430baaea06a5
Error response from daemon: No such container: 97de9a8d7f1bfb6b382c20b5a6287d3c58d8225a28324c44f7f1430baaea06a5
exit status 1

After recreating the db container, for some reason it got a weird prefix: "a8ef7e4cd770_coolify-db". I removed it and reran the docker compose command; now the container has its normal name, "coolify-db".

SeriousM avatar Dec 06 '25 10:12 SeriousM

@SeriousM due to the --force-recreate option, the containers are taken down first, but if you are rate limited or fail to pull the newer image, this can leave you in a broken state.

I notice it is running the upgrade at the exact same moment every day. If this is the auto-update and everyone runs it at the same moment, you hit ghcr.io rate limits. This should absolutely be changed to a random time of day rather than the same time for everyone, as I imagine other instances also do the same thing without realising.

I can't understand why Compose is adding the prefix to the container name. Usually this is the project name, or if not specified, the directory name, which would be "source". However, this is invoked via a container with a mounted path, so I think it falls back to using the hostname of the helper container, which is random. But container_name is specified in the compose file, so this shouldn't happen.

There must be some quirk here with container naming conflicts, so two things to add:

1) docker pull should be run before upgrading, to ensure images are present before anything is taken down
2) -p coolify (project name) should be added, to make sure the Coolify helper container's hostname is not used as a prefix, since --force-recreate doesn't take down that container as it's not named in the compose file
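
Roughly like this (a sketch against the upgrade.sh lines linked above, not a tested patch):

cd /data/coolify/source/
# 1) pull first, so a registry rate limit or pull failure can't leave things half torn down
docker compose --env-file .env -f docker-compose.yml -f docker-compose.prod.yml pull
# 2) pin the project name so the helper container's random hostname is never used as a prefix
docker compose -p coolify --env-file .env -f docker-compose.yml -f docker-compose.prod.yml up -d --wait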

djsisson avatar Dec 06 '25 13:12 djsisson

i notice it is running the upgrade at the exact same moment everyday, if this is the autoupdate and everyone runs at the same moment, you hit ghcr.io rate limits, this should absolutely be changed to be a random time in the day not all at the same time. as i imagine other services also do the same thing without realising.

[screenshot of the default auto-update schedule in the Coolify settings]

You're absolutely right, that's the default for everyone. I wouldn't have thought of the global rate limit being exceeded; good catch.

I would recommend setting it to a random time in the night-time range so everyone doesn't hit the registry at once. E.g. [0-59] [1-5] * * *, i.e. a random minute and a random hour between 1 and 5, which would kick off the update between 1:00 and 5:59 (local time?).

https://cron.help is a great helper!
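
Something like this could randomize the schedule per instance at install time (just a sketch; /path/to/coolify-update.sh is a placeholder, not the real script name):

# pick a random minute (0-59) and a random hour between 1 and 5
RANDOM_MINUTE=$((RANDOM % 60))
RANDOM_HOUR=$((RANDOM % 5 + 1))
# prints e.g. "37 3 * * * /path/to/coolify-update.sh"; add this line to the crontab
echo "${RANDOM_MINUTE} ${RANDOM_HOUR} * * * /path/to/coolify-update.sh"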

SeriousM avatar Dec 06 '25 23:12 SeriousM

That's a fantastic find if true: a global, cross-user rate limit being hit.

dsegovia90 avatar Dec 07 '25 19:12 dsegovia90

Thanks everyone (especially @djsisson). I will fix this in the next version with https://github.com/coollabsio/coolify/pull/7565 by adding --project-name coolify and removing --force-recreate.

andrasbacsai avatar Dec 10 '25 13:12 andrasbacsai