watchtower
Container is depending on at least one other container. This is not compatible with rolling restarts
Describe the bug
As of yesterday, Watchtower has been throwing an endless loop of errors about container dependencies in my media management compose stack. Nothing in it has materially changed recently; this just started happening out of nowhere. The container named in the error changes from run to run, depending on which containers are running and their start order.
EDIT: This issue does NOT exist on containrrr/watchtower:1.5.3, but it does on containrrr/watchtower:latest.
Steps to reproduce
Here is my WT compose:
Here is my WT .env
Here are snippets of the relevant section of the related compose. TL;DR: each of the media downloaders routes through the Gluetun VPN tunnel to mask traffic.
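The dependency pattern in question looks roughly like this (a minimal sketch; service and container names are illustrative placeholders, not the full compose):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: media-gluetun
    cap_add:
      - NET_ADMIN
    # VPN provider credentials etc. omitted

  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd
    container_name: media-sabnzbd
    # Route all traffic through the gluetun container's network stack.
    # This is the relationship watchtower >= 1.6.0 detects as a dependency.
    network_mode: "service:gluetun"
    depends_on:
      - gluetun
```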
Note: I have down'd and up'd the container several times. There are no volumes at all, so nothing should be stuck.
Expected behavior
This has always worked before, and is documented to do so here.
Screenshots
No response
Environment
- Platform: Debian 11
- Architecture: amd64
- Docker Version:
Client: Docker Engine - Community
Version: 24.0.5
API version: 1.43
Go version: go1.20.6
Git commit: ced0996
Built: Fri Jul 21 20:35:35 2023
OS/Arch: linux/amd64
Context: default
Your logs
time="2023-10-03T12:39:02-07:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2023-10-03T12:39:03-07:00" level=debug msg="Making sure everything is sane before starting"
time="2023-10-03T12:39:03-07:00" level=debug msg="Retrieving running and restarting containers"
time="2023-10-03T12:39:03-07:00" level=error msg="\"/media-sabnzbd\" is depending on at least one other container. This is not compatible with rolling restarts"
time="2023-10-03T12:39:03-07:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
time="2023-10-03T12:39:05-07:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2023-10-03T12:39:06-07:00" level=debug msg="Making sure everything is sane before starting"
time="2023-10-03T12:39:06-07:00" level=debug msg="Retrieving running and restarting containers"
time="2023-10-03T12:39:06-07:00" level=error msg="\"/media-sabnzbd\" is depending on at least one other container. This is not compatible with rolling restarts"
time="2023-10-03T12:39:06-07:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
time="2023-10-03T12:39:08-07:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2023-10-03T12:39:09-07:00" level=debug msg="Making sure everything is sane before starting"
time="2023-10-03T12:39:09-07:00" level=debug msg="Retrieving running and restarting containers"
time="2023-10-03T12:39:09-07:00" level=error msg="\"/media-sabnzbd\" is depending on at least one other container. This is not compatible with rolling restarts"
time="2023-10-03T12:39:09-07:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
time="2023-10-03T12:39:13-07:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2023-10-03T12:39:14-07:00" level=debug msg="Making sure everything is sane before starting"
time="2023-10-03T12:39:14-07:00" level=debug msg="Retrieving running and restarting containers"
time="2023-10-03T12:39:14-07:00" level=error msg="\"/media-sabnzbd\" is depending on at least one other container. This is not compatible with rolling restarts"
time="2023-10-03T12:39:14-07:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
time="2023-10-03T12:39:17-07:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2023-10-03T12:39:18-07:00" level=debug msg="Making sure everything is sane before starting"
time="2023-10-03T12:39:18-07:00" level=debug msg="Retrieving running and restarting containers"
time="2023-10-03T12:39:19-07:00" level=error msg="\"/media-sabnzbd\" is depending on at least one other container. This is not compatible with rolling restarts"
time="2023-10-03T12:39:19-07:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
time="2023-10-03T12:39:23-07:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2023-10-03T12:39:24-07:00" level=debug msg="Making sure everything is sane before starting"
time="2023-10-03T12:39:24-07:00" level=debug msg="Retrieving running and restarting containers"
time="2023-10-03T12:39:24-07:00" level=error msg="\"/media-sabnzbd\" is depending on at least one other container. This is not compatible with rolling restarts"
time="2023-10-03T12:39:24-07:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
time="2023-10-03T12:39:28-07:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2023-10-03T12:39:29-07:00" level=debug msg="Making sure everything is sane before starting"
time="2023-10-03T12:39:29-07:00" level=debug msg="Retrieving running and restarting containers"
time="2023-10-03T12:39:29-07:00" level=error msg="\"/media-sabnzbd\" is depending on at least one other container. This is not compatible with rolling restarts"
time="2023-10-03T12:39:29-07:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
After stopping the SABnzbd container, it just starts complaining about the next one in that stack:
time="2023-10-03T12:39:37-07:00" level=error msg="\"/bc0-media-sabnzbd\" is depending on at least one other container. This is not compatible with rolling restarts"
time="2023-10-03T12:39:37-07:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
time="2023-10-03T12:40:27-07:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2023-10-03T12:40:28-07:00" level=debug msg="Making sure everything is sane before starting"
time="2023-10-03T12:40:28-07:00" level=debug msg="Retrieving running and restarting containers"
time="2023-10-03T12:40:29-07:00" level=error msg="\"/bc0-media-sonarr\" is depending on at least one other container. This is not compatible with rolling restarts"
time="2023-10-03T12:40:29-07:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
time="2023-10-03T12:40:30-07:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2023-10-03T12:40:31-07:00" level=debug msg="Making sure everything is sane before starting"
time="2023-10-03T12:40:31-07:00" level=debug msg="Retrieving running and restarting containers"
time="2023-10-03T12:40:31-07:00" level=error msg="\"/bc0-media-sonarr\" is depending on at least one other container. This is not compatible with rolling restarts"
time="2023-10-03T12:40:31-07:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
time="2023-10-03T12:40:33-07:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2023-10-03T12:40:34-07:00" level=debug msg="Making sure everything is sane before starting"
time="2023-10-03T12:40:34-07:00" level=debug msg="Retrieving running and restarting containers"
time="2023-10-03T12:40:35-07:00" level=error msg="\"/bc0-media-sonarr\" is depending on at least one other container. This is not compatible with rolling restarts"
time="2023-10-03T12:40:35-07:00" level=info msg="Waiting for the notification goroutine to finish" notify=no
time="2023-10-03T12:40:37-07:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
Additional context
No response
Hi there! 👋🏼 As you're new to this repo, we'd like to suggest that you read our code of conduct as well as our contribution guidelines. Thanks a bunch for opening your first issue! 🙏
EDIT: This issue does NOT exist on containrrr/watchtower:1.5.3, but it does on containrrr/watchtower:latest.
Interesting! The problem users had before 1.6.0 was that they could not update containers that depended on gluetun for networking. In v1.6.0, support for updating such containers was added, and we now also detect those relationships and treat them as dependencies. I have no idea why this would have worked for you earlier, but the problem right now is that you have rolling restarts enabled, which cannot be combined with containers that have dependencies (as the providing containers need to be stopped after, and started before, the consuming containers).
I think the easiest solution is to just turn off rolling restarts, as it shouldn't be possible to do a proper rolling restart with your configuration (which is why watchtower refuses to do one).
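For anyone else landing here: disabling rolling restarts means setting `WATCHTOWER_ROLLING_RESTART=false` (or dropping the `--rolling-restart` flag). A sketch, assuming an env-var based compose setup:

```yaml
services:
  watchtower:
    image: containrrr/watchtower:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # Rolling restarts cannot be combined with dependent containers;
      # leave this off (false is also the default).
      - WATCHTOWER_ROLLING_RESTART=false
```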
Hi, I have the same problem. And Watchtower worked fine with the rolling restart before the update...
@Al4ndil it never worked correctly with rolling restarts and dependent containers. What probably worked for you before was ignoring the dependency and hoping it wouldn't break whenever it was updated.
Same issue here... I've got 200 emails from Watchtower and continue to get more every day. Watchtower is in restart mode.
I just noticed that my Docker containers hadn't been updated for quite a while. Watchtower is scheduled to run every 24 hours. I saw the same error in the logs: "containerX is depending on at least one other container. This is not compatible with rolling restarts." This error also seems to prevent Watchtower from updating my images. I have switched back to 1.5.3, which works. Setting rolling restart to false also seems to work (I had it set to true before, which was no problem in versions prior to 1.6.0).
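"Switching back" here just means pinning the image tag instead of tracking `:latest` (a sketch of the compose change):

```yaml
services:
  watchtower:
    # Pin to 1.5.3 (before dependency detection was added) instead of :latest
    image: containrrr/watchtower:1.5.3
```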
Same issue... Watchtower does not start because it identifies linked containers. How do I solve this? Does 1.5.3 ignore dependencies?
BUMP, I'm suffering from the same issue.
This just happened to me overnight as well. Over 1,000 emails were sent to me. ☹️
EDIT: As stated previously, I have disabled rolling restarts to mitigate the issue. It would be nice if it could be left enabled, with Watchtower simply skipping rolling restarts for the containers that don't support them.
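Another possible stopgap, if you'd rather keep rolling restarts for the rest of your containers, is to exclude the VPN-dependent ones from watchtower entirely via the standard enable label (a sketch; service and image names are illustrative):

```yaml
services:
  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd
    network_mode: "service:gluetun"
    labels:
      # Tell watchtower to skip this container; you would then
      # update it manually alongside the gluetun container.
      - com.centurylinklabs.watchtower.enable=false
```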