
[Bug] Deployment from GitHub marked as failed even though new container is healthy

Open albertorizzi opened this issue 2 weeks ago • 5 comments

Error Message and Logs

The deployment of a GitHub-backed application is marked as failed in the Coolify UI even though:

  • The Docker image builds successfully.
  • The rolling update completes.
  • The new container becomes healthy (healthcheck returns "healthy" with exit code 0).
  • Old container is stopped and removed without errors.
  • Helper/build container is also stopped and removed cleanly.

There is no explicit error in the deployment logs; the last messages show a successful rolling update and cleanup.

The deployment status shown in the UI does not match the actual outcome of the deployment process.

Key points from the logs:

  • Build completes successfully:
    • #13 DONE 63.4s
    • #16 DONE 34.0s
    • Building docker image completed.
  • Rolling update starts and finishes:
    • Rolling update started.
    • Healthcheck:
      • Healthcheck status: "healthy"
      • New container is healthy.
    • Old container removal:
      • Rolling update completed.
  • Helper container stopped and removed:
    • Gracefully shutting down build container: kskw0kc8ocs0coosc8wok4so
    • docker stop ...
    • docker rm ...

However, the deployment is still marked as failed in the Coolify dashboard.

Steps to Reproduce

Example Repository URL

No response

Coolify Version

v4.0.0-beta.453

Are you using Coolify Cloud?

No (self-hosted)

Operating System and Version (self-hosted)

No response

Additional Information

No response

albertorizzi avatar Dec 10 '25 09:12 albertorizzi

๐Ÿ“ CodeRabbit Plan Mode

Generate an implementation plan and agent prompts for this issue.

  • [ ] Create Plan
Examples


🔗 Related PRs

  • coollabsio/coolify#7011 - fix: ensure deployment failure notifications are sent reliably [merged]
  • coollabsio/coolify#7248 - fix: eliminate duplicate error logging in deployment methods [merged]
  • coollabsio/coolify#7460 - fix: prevent cleanup exceptions from marking successful deployments as failed [merged]

👤 Suggested Assignees

  • andrasbacsai
  • MatteoGauthier
  • GautierT
  • nurdism
  • levino

🧪 Issue enrichment is currently in early access.

To disable automatic issue enrichment, add the following to your .coderabbit.yaml:

issue_enrichment:
  auto_enrich:
    enabled: false

coderabbitai[bot] avatar Dec 10 '25 09:12 coderabbitai[bot]

Indeed, it's still an issue on beta 453...

2025-Dec-10 12:01:00.763962
----------------------------------------
2025-Dec-10 12:01:00.776914
Rolling update started.
2025-Dec-10 12:01:01.220608
[CMD]: docker exec v8co8o44ccg4k04owoc8w8w4 bash -c 'SOURCE_COMMIT=f520534a04c5a6b26c679d76a5b9ac285eaffe76 COOLIFY_URL=XXXXXXXXX COOLIFY_FQDN=staging-mygreffe.notae.ai COOLIFY_BRANCH=main COOLIFY_RESOURCE_UUID=rkcsk0ws0w8gskgcgw84kkss  docker compose --project-name rkcsk0ws0w8gskgcgw84kkss --project-directory /artifacts/v8co8o44ccg4k04owoc8w8w4 -f /artifacts/v8co8o44ccg4k04owoc8w8w4/docker-compose.yaml up --build -d'
2025-Dec-10 12:01:01.220608
time="2025-12-10T12:01:01Z" level=warning msg="Found orphan containers ([rkcsk0ws0w8gskgcgw84kkss-182742042148]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up."
2025-Dec-10 12:01:01.220608
Container rkcsk0ws0w8gskgcgw84kkss-115833701499  Creating
2025-Dec-10 12:01:01.441835
rkcsk0ws0w8gskgcgw84kkss-115833701499 Your kernel does not support memory swappiness capabilities or the cgroup is not mounted. Memory swappiness discarded.
2025-Dec-10 12:01:01.448755
Container rkcsk0ws0w8gskgcgw84kkss-115833701499  Created
2025-Dec-10 12:01:01.448755
Container rkcsk0ws0w8gskgcgw84kkss-115833701499  Starting
2025-Dec-10 12:01:01.588036
Container rkcsk0ws0w8gskgcgw84kkss-115833701499  Started
2025-Dec-10 12:01:01.596705
New container started.
2025-Dec-10 12:01:01.608435
Removing old containers.
2025-Dec-10 12:01:02.263347
[CMD]: docker stop -t 30 rkcsk0ws0w8gskgcgw84kkss-182742042148
2025-Dec-10 12:01:02.263347
rkcsk0ws0w8gskgcgw84kkss-182742042148
2025-Dec-10 12:01:02.490789
[CMD]: docker rm -f rkcsk0ws0w8gskgcgw84kkss-182742042148
2025-Dec-10 12:01:02.490789
rkcsk0ws0w8gskgcgw84kkss-182742042148
2025-Dec-10 12:01:02.498834
Rolling update completed.
2025-Dec-10 12:01:03.100448
Gracefully shutting down build container: v8co8o44ccg4k04owoc8w8w4
2025-Dec-10 12:01:03.457408
[CMD]: docker stop -t 30 v8co8o44ccg4k04owoc8w8w4
2025-Dec-10 12:01:03.457408
v8co8o44ccg4k04owoc8w8w4
2025-Dec-10 12:01:03.648490
[CMD]: docker rm -f v8co8o44ccg4k04owoc8w8w4
2025-Dec-10 12:01:03.648490

GautierT avatar Dec 10 '25 12:12 GautierT

Also got the issue on 453...

Rolling update started.
[CMD]: docker exec ewosk8488co0008scks4c8s8 bash -c 'COOLIFY_URL=app.sellplus.fr,https COOLIFY_FQDN=https://app.sellplus.fr,https//www.app.sellplus.fr COOLIFY_BRANCH=main COOLIFY_RESOURCE_UUID=ww8k88ccko448k4884gkg884  docker compose --project-name ww8k88ccko448k4884gkg884 --project-directory /artifacts/ewosk8488co0008scks4c8s8 -f /artifacts/ewosk8488co0008scks4c8s8/docker-compose.yaml up --build -d'
time="2025-12-10T13:55:55Z" level=warning msg="Found orphan containers ([ww8k88ccko448k4884gkg884-145207818488]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up."
Container ww8k88ccko448k4884gkg884-134425594232  Creating
ww8k88ccko448k4884gkg884-134425594232 Your kernel does not support memory swappiness capabilities or the cgroup is not mounted. Memory swappiness discarded.
Container ww8k88ccko448k4884gkg884-134425594232  Created
Container ww8k88ccko448k4884gkg884-134425594232  Starting
Container ww8k88ccko448k4884gkg884-134425594232  Started
New container started.
Waiting for healthcheck to pass on the new container.
Healthcheck URL (inside the container): GET: http://localhost:3000/
Waiting for the start period (5 seconds) before starting healthcheck.
[CMD]: docker inspect --format='{{json .State.Health.Status}}' ww8k88ccko448k4884gkg884-134425594232
"healthy"
[CMD]: docker inspect --format='{{json .State.Health.Log}}' ww8k88ccko448k4884gkg884-134425594232
[{"Start":"2025-12-10T13:56:02.351501069Z","End":"2025-12-10T13:56:02.458532837Z","ExitCode":0,"Output":""}]
Attempt 1 of 10 | Healthcheck status: "healthy"
Healthcheck logs: (no logs) | Return code: 0
New container is healthy.
Removing old containers.
[CMD]: docker stop -t 30 ww8k88ccko448k4884gkg884-145207818488
ww8k88ccko448k4884gkg884-145207818488
[CMD]: docker rm -f ww8k88ccko448k4884gkg884-145207818488
ww8k88ccko448k4884gkg884-145207818488
Rolling update completed.
Gracefully shutting down build container: ewosk8488co0008scks4c8s8
[CMD]: docker stop -t 30 ewosk8488co0008scks4c8s8
ewosk8488co0008scks4c8s8
[CMD]: docker rm -f ewosk8488co0008scks4c8s8
Error response from daemon: removal of container ewosk8488co0008scks4c8s8 is already in progress
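
The log above polls `docker inspect` for the container's health status with a start period and a fixed number of attempts. A minimal sketch of such a retry loop, illustrative only (not Coolify's actual code), with the probe and sleep injected as assumptions so the loop doesn't depend on a live Docker daemon:

```python
def wait_for_healthy(probe, attempts=10, interval=5.0, sleep=None):
    """Poll `probe()` until it reports "healthy" or attempts run out.

    In practice `probe` would shell out to something like
    `docker inspect --format='{{json .State.Health.Status}}' <container>`;
    here it is injected so the loop is testable in isolation.
    """
    import time
    sleep = sleep if sleep is not None else time.sleep
    for attempt in range(1, attempts + 1):
        status = probe()
        print(f'Attempt {attempt} of {attempts} | Healthcheck status: "{status}"')
        if status == "healthy":
            return True
        sleep(interval)  # wait before the next inspect
    return False
```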

MatteoGauthier avatar Dec 10 '25 14:12 MatteoGauthier

hey all, I was having the same issue. I rolled back to .447, which is working again; on .448 it was still failing (maybe it's Pepe Silvia's fault). Just writing here in case it helps.

Choms avatar Dec 10 '25 15:12 Choms

@Choms Thanks a lot for your comment, I'll downgrade to 447 then 🙏

MatteoGauthier avatar Dec 10 '25 16:12 MatteoGauthier

Error response from daemon: removal of container ewosk8488co0008scks4c8s8 is already in progress

I think this is just because Coolify starts the coolify-helper with the --rm option, so the moment it stops, Docker already initiates removal; the error comes from Coolify calling rm a second time.

I can't see how this error would set the status to failed, however, since I think it should just throw back to graceful_shutdown, which does not fail the deployment.

https://github.com/coollabsio/coolify/blob/b7282ad565376ac3efcc70c8ca383b765cc79b2b/app/Traits/ExecuteRemoteCommand.php#L135
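
The race described above can be recognized and tolerated. A hypothetical helper (not Coolify's actual code; the function names are made up for illustration) that treats the "already in progress" error from a `--rm` container as benign:

```python
def is_benign_rm_race(returncode: int, stderr: str) -> bool:
    """A container started with --rm is auto-removed when it stops, so a
    follow-up `docker rm -f` may fail with "removal of container ... is
    already in progress". That outcome is harmless: the container is gone
    either way."""
    return returncode != 0 and "is already in progress" in stderr


def handle_rm_result(returncode: int, stderr: str) -> None:
    """Raise only on genuine removal failures."""
    if returncode == 0 or is_benign_rm_race(returncode, stderr):
        return  # removal succeeded, or was already underway
    raise RuntimeError(f"docker rm failed: {stderr}")
```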

djsisson avatar Dec 10 '25 17:12 djsisson

This will be fixed in the next version.

andrasbacsai avatar Dec 11 '25 09:12 andrasbacsai

I updated to v454 and the same problem persists, but the cause is clear now. @andrasbacsai

2025-Dec-12 10:09:51.215325
Container qc8ogssww8kc488o00oc4kgo-100642809573  Creating
2025-Dec-12 10:09:51.330227
qc8ogssww8kc488o00oc4kgo-100642809573 Your kernel does not support memory swappiness capabilities or the cgroup is not mounted. Memory swappiness discarded.
2025-Dec-12 10:09:51.342979
Container qc8ogssww8kc488o00oc4kgo-100642809573  Created
2025-Dec-12 10:09:51.342979
Container qc8ogssww8kc488o00oc4kgo-100642809573  Starting
2025-Dec-12 10:09:51.685743
Container qc8ogssww8kc488o00oc4kgo-100642809573  Started
2025-Dec-12 10:09:51.700518
New container started.
2025-Dec-12 10:09:51.720221
Waiting for healthcheck to pass on the new container.
2025-Dec-12 10:09:51.736070
Healthcheck URL (inside the container): GET: http://localhost:3000/
2025-Dec-12 10:09:51.750907
Waiting for the start period (5 seconds) before starting healthcheck.
2025-Dec-12 10:09:57.186612
[CMD]: docker inspect --format='{{json .State.Health.Status}}' qc8ogssww8kc488o00oc4kgo-100642809573
2025-Dec-12 10:09:57.186612
"healthy"
2025-Dec-12 10:09:57.532452
[CMD]: docker inspect --format='{{json .State.Health.Log}}' qc8ogssww8kc488o00oc4kgo-100642809573
2025-Dec-12 10:09:57.532452
[{"Start":"2025-12-12T10:09:56.685792867Z","End":"2025-12-12T10:09:56.883232768Z","ExitCode":0,"Output":""}]
2025-Dec-12 10:09:57.548089
Attempt 1 of 10 | Healthcheck status: "healthy"
2025-Dec-12 10:09:57.570157
Healthcheck logs: (no logs) | Return code: 0
2025-Dec-12 10:09:57.621911
New container is healthy.
2025-Dec-12 10:09:57.643161
Removing old containers.
2025-Dec-12 10:10:00.776273
[CMD]: docker stop -t 30 qc8ogssww8kc488o00oc4kgo-085844773678
2025-Dec-12 10:10:00.776273
qc8ogssww8kc488o00oc4kgo-085844773678
2025-Dec-12 10:10:01.199197
[CMD]: docker rm -f qc8ogssww8kc488o00oc4kgo-085844773678
2025-Dec-12 10:10:01.199197
qc8ogssww8kc488o00oc4kgo-085844773678
2025-Dec-12 10:10:01.211972
Rolling update completed.
2025-Dec-12 10:10:01.634801
========================================
2025-Dec-12 10:10:01.669615
Deployment failed: SQLSTATE[42P01]: Undefined table: 7 ERROR:  relation "webhook_notification_settings" does not exist
2025-Dec-12 10:10:01.669615
LINE 1: select * from "webhook_notification_settings" where "webhook...
2025-Dec-12 10:10:01.669615
^ (Connection: pgsql, SQL: select * from "webhook_notification_settings" where "webhook_notification_settings"."team_id" = 0 and "webhook_notification_settings"."team_id" is not null limit 1)
2025-Dec-12 10:10:01.711452
Error type: Illuminate\Database\QueryException
2025-Dec-12 10:10:01.728162
Error code: 42P01
2025-Dec-12 10:10:01.744853
Location: /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php:824
2025-Dec-12 10:10:01.762679
Caused by:
2025-Dec-12 10:10:01.780483
PDOException: SQLSTATE[42P01]: Undefined table: 7 ERROR:  relation "webhook_notification_settings" does not exist
2025-Dec-12 10:10:01.780483
LINE 1: select * from "webhook_notification_settings" where "webhook...
2025-Dec-12 10:10:01.780483
^
2025-Dec-12 10:10:01.799464
at /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php:411
2025-Dec-12 10:10:01.815083
Stack trace (first 5 lines):
2025-Dec-12 10:10:01.837655
#0 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php(778): Illuminate\Database\Connection->runQueryCallback()
2025-Dec-12 10:10:01.856486
#1 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php(397): Illuminate\Database\Connection->run()
2025-Dec-12 10:10:01.875812
#2 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Query/Builder.php(3188): Illuminate\Database\Connection->select()
2025-Dec-12 10:10:01.891692
#3 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Query/Builder.php(3173): Illuminate\Database\Query\Builder->runSelect()
2025-Dec-12 10:10:01.907966
#4 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Query/Builder.php(3763): Illuminate\Database\Query\Builder->{closure:Illuminate\Database\Query\Builder::get():3172}()
2025-Dec-12 10:10:01.922788
========================================
2025-Dec-12 10:10:01.938295
Deployment failed. Removing the new version of your application.
2025-Dec-12 10:10:03.756314
Gracefully shutting down build container: yos48ok4go088skg0wskkosk
2025-Dec-12 10:10:04.379877
[CMD]: docker stop -t 30 yos48ok4go088skg0wskkosk
2025-Dec-12 10:10:04.379877
yos48ok4go088skg0wskkosk
2025-Dec-12 10:10:04.732552
[CMD]: docker rm -f yos48ok4go088skg0wskkosk
2025-Dec-12 10:10:04.732552
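
The log shows the rolling update completing and then a database error in the notification step (the missing "webhook_notification_settings" table) flipping the status to failed. A hypothetical sketch of the fix direction (not Coolify's actual code; all names here are invented for illustration): once the update has completed, exceptions from post-deployment notifications should be logged, not allowed to overwrite the status.

```python
def finalize_deployment(mark_status, send_notifications, log):
    """Mark the deployment finished first, then attempt notifications.

    `send_notifications` may raise (e.g. a QueryException when a
    notification-settings table is missing); that failure is recorded
    but must not change the already-successful deployment status.
    """
    mark_status("finished")  # the rolling update already completed
    try:
        send_notifications()
    except Exception as exc:
        log(f"Post-deployment notification failed: {exc}")
```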

albertorizzi avatar Dec 12 '25 10:12 albertorizzi

Continued in https://github.com/coollabsio/coolify/issues/7606

albertorizzi avatar Dec 12 '25 11:12 albertorizzi