[BUG] compose up --wait exits 1 on init containers successfully completing
Description
When using docker compose up --wait with an init container that populates some data and then exits (0), docker compose will exit 1.
If another container depends on the init container finishing, compose exits properly, as described and demonstrated in this PR: https://github.com/docker/compose/pull/9572. In my situation, however, no running service depends on the init container finishing.
Steps To Reproduce
docker-compose.yml example:
version: '3'
services:
  postgres:
    image: postgres:14-alpine
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    healthcheck:
      test: ['CMD', 'pg_isready']
  postgres_setup:
    image: alpine
    depends_on:
      postgres:
        condition: service_healthy
    restart: "no"
    command: pwd
Run docker compose/print exit code:
$ docker compose up --wait
[+] Running 2/3
✔ Network test_default Created 0.1s
✔ Container test-postgres-1 Healthy 31.9s
⠿ Container test-postgres_setup-1 Waiting 31.9s
container test-postgres_setup-1 exited (0)
$ echo $?
1
Compose Version
Docker Compose version v2.17.3
Docker Environment
Client:
 Context: default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
   Version: v0.10.4
   Path: /Users/nathan/.docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
   Version: v2.17.3
   Path: /Users/nathan/.docker/cli-plugins/docker-compose
  dev: Docker Dev Environments (Docker Inc.)
   Version: v0.1.0
   Path: /Users/nathan/.docker/cli-plugins/docker-dev
  extension: Manages Docker extensions (Docker Inc.)
   Version: v0.2.19
   Path: /Users/nathan/.docker/cli-plugins/docker-extension
  init: Creates Docker-related starter files for your project (Docker Inc.)
   Version: v0.1.0-beta.4
   Path: /Users/nathan/.docker/cli-plugins/docker-init
  sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
   Version: 0.6.0
   Path: /Users/nathan/.docker/cli-plugins/docker-sbom
  scan: Docker Scan (Docker Inc.)
   Version: v0.26.0
   Path: /Users/nathan/.docker/cli-plugins/docker-scan
  scout: Command line tool for Docker Scout (Docker Inc.)
   Version: v0.10.0
   Path: /Users/nathan/.docker/cli-plugins/docker-scout

Server:
 Containers: 7
  Running: 4
  Paused: 0
  Stopped: 3
 Images: 69
 Server Version: 23.0.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 2806fc1057397dbaeefbea0e4e17bddfbd388f38
 runc version: v1.1.5-0-gf19387a
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.15.49-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 5.804GiB
 Name: docker-desktop
 ID: dc980d43-b287-4b8d-90b1-992be4c7b457
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Registry: https://index.docker.io/v1/
 Experimental: false
 Insecure Registries:
  hubproxy.docker.internal:5555
  127.0.0.0/8
 Live Restore Enabled: false
Anything else?
No response
There's unfortunately no way to declare that your setup service isn't actually a service and that termination is expected.
Hit this as well - does anyone have a workaround, please?
I'm using this:
docker-compose -f docker-compose.yml up --wait || true
We use docker compose as part of our test setup - the or-true approach will effectively void ANY kind of error coming from the stack (not only the harmless one).
> docker-compose -f docker-compose.yml up --wait || true

> We use docker compose as part of our test setup - the or-true approach will effectively void ANY kind of error coming from the stack (not only the harmless one).
Yes, it will. It will also make your test setup complete without errors (until this issue is fixed). The choice is yours :)
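If the or-true hammer is too blunt, one possible middle ground is to fall back to inspecting container states when --wait fails, so only genuine failures propagate. This is purely a sketch, not a Compose feature: the all_ok helper name is made up, jq is assumed to be installed, and it assumes recent Compose v2 releases where docker compose ps -a --format json prints one JSON object per line with State and ExitCode fields.

```shell
#!/bin/sh
# Hypothetical wrapper: only swallow the "init container exited 0" case,
# not real failures. Assumes `jq` is available and that
# `docker compose ps -a --format json` emits one JSON object per line
# (true for recent Compose v2 releases; older ones print a JSON array).

all_ok() {
  # Read `docker compose ps` JSON lines on stdin; succeed only if every
  # container is either running or exited with code 0.
  jq -e -s 'all(.[]; .State == "running" or (.State == "exited" and .ExitCode == 0))' >/dev/null
}

# Usage: tolerate exit(0) one-offs, keep every other failure fatal.
#   docker compose up --wait || {
#     docker compose ps -a --format json | all_ok || exit 1
#   }
```

This still relies on the JSON output shape staying stable, but unlike a bare or-true it will surface crashed or unhealthy containers.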
> There's unfortunately no way to declare your setup service isn't actually a service, and termination is expected.
What about just using the return code of the "pseudo service" process? If it's 0, then the compose up --wait command should not consider it an error.
While this is not to argue that --wait couldn't or shouldn't take the exit code of exiting services into account (with the complications a restart policy introduces), note that it already sort of behaves that way:
$ docker compose -f - up --wait <<'EOF'
services:
sleepy:
image: alpine
command: sleep 5
EOF
[+] Running 1/1
✔ Container waitish-sleepy-1 Healthy
$ echo $?
0
... there is also this page of documentation: https://docs.docker.com/compose/profiles/#auto-starting-profiles-and-dependency-resolution
Well, noting that there is an unfortunate limitation - run can only run one service - and, extrapolating a bit, here's an example of what works now:
# compose.yaml
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
    healthcheck:
      test: pg_isready
      start_interval: 1s
      start_period: 30s
  psql:
    profiles:
      - .init
    image: postgres
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      PGUSER: postgres
      PGDATABASE: postgres
      PGPASSWORD: password
    entrypoint: psql -h postgres
    command: -c \\conninfo
  init:
    profiles:
      - .init
    build:
      dockerfile_inline: FROM scratch
    init: true
    entrypoint: /sbin/docker-init --version
    depends_on:
      psql:
        condition: service_completed_successfully
$ docker compose run init
[+] Creating 3/3
✔ Network init_default Created 0.1s
✔ Container init-postgres-1 Created 0.1s
✔ Container init-psql-1 Created 0.1s
[+] Running 2/2
✔ Container init-postgres-1 Healthy 5.8s
✔ Container init-psql-1 Started 0.3s
tini version 0.19.0 - git.de40ad0
$ echo $?
0
# postgres is started because of the init -> psql -> postgres depends-ons
# and init could depend on a number of other services/"tasks"/one-offs
# the alternative would be to run each needed "task"/one-off sequentially, but that requires remembering them
$ docker compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
init-postgres-1 postgres "docker-entrypoint.s…" postgres 2 minutes ago Up 2 minutes (healthy) 5432/tcp
# next, up --wait, could start the rest and it would ignore psql and init because they are behind a profile;
# it could even be done by the init itself if you can assume enough (e.g. about docker's sock)
$ docker compose up --wait
[+] Running 1/1
✔ Container init-postgres-1 Healthy
$ echo $?
0
# PS
$ docker compose --profile .init logs psql
psql-1 | You are connected to database "postgres" as user "postgres" on host "postgres" (address "172.28.0.2") at port "5432".
Just hit this too and confirmed this is still a problem as of 2.24.7. This thread doesn't seem to lack examples, but I'll include mine just in case too.
services:
  minio-create-buckets:
    image: quay.io/minio/mc
    environment:
      MINIO_HOST: http://minio:9000
      MINIO_ACCESS_KEY: minio-user
      MINIO_SECRET_KEY: minio-password
    entrypoint:
      - bash
      - -c
      - |
        mc alias set minio http://minio:9000 minio-user minio-password
        mc mb minio/bucket-1
        mc mb minio/bucket-2
        mc anonymous set public minio/bucket-1
        mc anonymous set public minio/bucket-2
        echo "Hello, world!" > sample.txt
        mc cp sample.txt minio/bucket-1
        mc cp sample.txt minio/bucket-2
    networks:
      - docker-network
    depends_on:
      minio:
        condition: service_healthy
  minio:
    image: quay.io/minio/minio
    environment:
      MINIO_ROOT_USER: minio-user
      MINIO_ROOT_PASSWORD: minio-password
    command: server /data --console-address ":9001"
    ports:
      - 9000:9000
      - 9001:9001
    networks:
      - docker-network
    healthcheck:
      test: ["CMD", "mc", "ping", "--exit", "local"]
      start_period: 10s
      start_interval: 1s
      interval: 3s
      timeout: 2s
      retries: 5
networks:
  docker-network:
FWIW I opened this PR, and your example, @gmile, executes successfully with --wait-allow-exit.
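For reference, the flag would be used roughly like this. A sketch only: it assumes a Compose v2 release that ships --wait-allow-exit and a compose file in the current directory, and it is guarded so it is a no-op where docker is not installed.

```shell
#!/bin/sh
# Sketch: with --wait-allow-exit, one-off containers that stop are no
# longer treated as a --wait failure. Guarded so this snippet is
# harmless on machines without docker.
if command -v docker >/dev/null 2>&1; then
  docker compose up --wait --wait-allow-exit
  echo "compose exit code: $?"
else
  echo "docker not available; skipping"
fi
```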
@ndeloof, who could be an appropriate reviewer for the PR mentioned above? ty 🙂