[BUG] compose watch combined with depends_on can lead to failure of dependencies
Description
I've started to use 'watch' feature for one project, but it fails every time when I change files from state A to state B and then back to state A.
Steps To Reproduce
- Compose yaml (truncated):

```yaml
apiserver:
  build:
    context: ./backend
    dockerfile: api.Dockerfile
  container_name: apiserver
  pull_policy: build
  restart: always
  depends_on:
    - "db"
    - "rabbit"
  expose:
    - 3000
  volumes:
    - /etc/localtime:/etc/localtime:ro
  develop:
    watch:
      - path: ./backend/
        action: rebuild

analytics:
  build:
    context: ./backend
    dockerfile: analytics.Dockerfile
  container_name: analytics
  pull_policy: build
  restart: always
  depends_on:
    - "db"
    - "rabbit"
  volumes:
    - /etc/localtime:/etc/localtime:ro
  develop:
    watch:
      - path: ./backend/
        action: rebuild

# ...
# more services here. No more "depends_on" statements.
```
- Run watch:

```shell
docker compose watch
```

- Edit a file under the ./backend folder, e.g. add a new line.
- Watch detects the change and rebuilds both services as expected.
- Edit the same file under the ./backend folder, bringing it back to the state it was in before step 3 (remove the new line).
- Watch starts to rebuild both services, but fails:
```
...
service "apiserver" successfully built
Failed to recreate service after update. Error: Error response from daemon: Conflict. The container name "/eaad56074ed9_db" is already in use by container "30b1ad6dcf9135f90dcef865d7de2bfaf1e6a2faa4b3c1ac56351d97e18cf2fa". You have to remove (or rename) that container to be able to reuse that name.
WARN[0011] Error handling changed files for service apiserver: Error response from daemon: Conflict. The container name "/eaad56074ed9_db" is already in use by container "30b1ad6dcf9135f90dcef865d7de2bfaf1e6a2faa4b3c1ac56351d97e18cf2fa". You have to remove (or rename) that container to be able to reuse that name.
Failed to recreate service after update. Error: Error response from daemon: No such container: 30b1ad6dcf9135f90dcef865d7de2bfaf1e6a2faa4b3c1ac56351d97e18cf2fa
WARN[0012] Error handling changed files for service analytics: Error response from daemon: No such container: 30b1ad6dcf9135f90dcef865d7de2bfaf1e6a2faa4b3c1ac56351d97e18cf2fa
```
As you can see, the error mentions the container db. This container dies. It looks like Compose tries to restart the dependencies after rebuilding apiserver, but somehow a conflict occurs.

The problem does not occur when I run watch for only one service (`docker compose watch apiserver`).
Compose Version
```
# docker compose version
Docker Compose version 2.26.1
# docker-compose version
Docker Compose version 2.26.1
```
Docker Environment
```
Client:
 Version:    26.0.0
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  0.13.1
    Path:     /usr/lib/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  2.26.1
    Path:     /usr/lib/docker/cli-plugins/docker-compose
  scan: Docker Scan (Docker Inc.)
    Version:  v0.1.0-280-gc7fa31d4c4
    Path:     /usr/lib/docker/cli-plugins/docker-scan

Server:
 Containers: 7
  Running: 7
  Paused: 0
  Stopped: 0
 Images: 632
 Server Version: 26.0.0
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: dcf2847247e18caba8dce86522029642f60fe96b.m
 runc version:
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.8.2-arch2-1
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.08GiB
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Default Address Pools:
  Base: 192.168.128.0/20, Size: 24
```
Anything else?
This might be related to https://github.com/docker/compose/issues/9014
Update: after some more time using the watch feature, I can say that the bug described above does not seem related to the "depends_on" field. I've removed all depends_on fields and am still getting the same error about a container name conflict.