New TUI spams lines when pulling multiple images causing you to lose your terminal history
Description
@thaJeztah has asked me to open an issue after commenting on #8753
For your use-case, is there a specific reason why the new output is problematic for you? If so, could you describe your use-case? Perhaps there's enhancements to be made to address.
I'm not the person who created this issue but I have one complaint about the new output as well. First of all, I love BuildKit's output in docker build. But, compose's implementation makes me question how this made it through an official 2.0 release.
https://user-images.githubusercontent.com/28601081/137222298-31ca5d3a-68be-4041-82e8-8d58cb8e8999.mp4
This can be reproduced in Windows' Command Prompt, Windows Terminal, and probably others. It literally spams thousands of lines to the terminal, ruining the ability to scroll back. Sorry if I'm hijacking the issue, but it seemed fairly fitting, as I've had to resort to rolling back to a 1.x release.
Originally posted by @clrxbl in https://github.com/docker/compose/issues/8753#issuecomment-942773151
Steps to reproduce the issue:
- docker compose up on a docker-compose.yml file that contains multiple images & a small enough terminal window.
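For illustration, a minimal compose file that can trigger it (the image names here are arbitrary examples; any set of images whose combined layer list overflows the window will do):

```yaml
services:
  db:
    image: postgres
  cache:
    image: redis
  web:
    image: nginx
```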
Describe the results you received: See the above video
Describe the results you expected: I should be able to scroll back and not lose all of my terminal history, even after the command is done.
Additional information you deem important (e.g. issue happens only occasionally): It seems like this issue isn't present if you're reproducing it in a large enough terminal window (e.g. fullscreen)
Output of docker compose version:
Docker Compose version 2.0.1
Output of docker info:
WARNING: Plugin "/usr/local/lib/docker/cli-plugins/docker-buildx" is not valid: failed to fetch metadata: fork/exec /usr/local/lib/docker/cli-plugins/docker-buildx: no such file or directory
WARNING: Plugin "/usr/local/lib/docker/cli-plugins/docker-compose" is not valid: failed to fetch metadata: fork/exec /usr/local/lib/docker/cli-plugins/docker-compose: no such file or directory
WARNING: Plugin "/usr/local/lib/docker/cli-plugins/docker-scan" is not valid: failed to fetch metadata: fork/exec /usr/local/lib/docker/cli-plugins/docker-scan: no such file or directory
Client:
WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
Context: default
Debug Mode: false
Plugins:
Server:
Containers: 12
Running: 0
Paused: 0
Stopped: 12
Images: 11
Server Version: 20.10.9
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8686ededfc90076914c5238eb96c883ea093a8ba.m
runc version: v1.0.2-0-g52b36a2d
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 5.10.60.1-microsoft-standard-WSL2
Operating System: Arch Linux
OSType: linux
Architecture: x86_64
CPUs: 12
Total Memory: 15.63GiB
Name: DESKTOP-BQ26BOE-wsl
ID: CWUC:IOEW:TCJZ:EZJS:RTO5:YSBV:X7BE:YHNS:MQNY:V3VQ:N3NI:I4T4
Docker Root Dir: /var/lib/docker
Debug Mode: false
Username: clrxbl
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Additional environment details: Arch Linux WSL2 w/ systemd-genie
@crazy-max any thoughts?
I don't think it's just build, although multiple builds will definitely add to the output. The example video shows that pulling (if the compose file has many images) can also be problematic if there are more pulls than fit on the screen.
Doing a quick test with a stack that uses a postgres image (which has various layers):
services:
foo:
image: postgres
Produces this output (ignore the error at the end, it's just for illustration):
docker compose up
[+] Running 14/14
⠿ foo Pulled 14.6s
⠿ e5ae68f74026 Pull complete 5.6s
⠿ 7b8fcc7e1ad0 Pull complete 6.0s
⠿ 7527d03e2f77 Pull complete 6.1s
⠿ 80e55689f4d0 Pull complete 6.2s
⠿ 8a79eb6d69c9 Pull complete 6.7s
⠿ 397705f2d093 Pull complete 6.8s
⠿ de36ec4eb0a5 Pull complete 6.9s
⠿ 08d878a022c1 Pull complete 7.0s
⠿ 7677029670ff Pull complete 11.4s
⠿ 1d24b3d9557e Pull complete 11.5s
⠿ e085b018338c Pull complete 11.5s
⠿ 063b09ff12e9 Pull complete 11.6s
⠿ a39fee215a44 Pull complete 11.7s
[+] Running 2/2
⠿ Network compose-progress_default Created 0.1s
⠿ Container compose-progress-foo-1 Created 1.4s
Attaching to compose-progress-foo-1
compose-progress-foo-1 | Error: Database is uninitialized and superuser password is not specified.
compose-progress-foo-1 | You must specify POSTGRES_PASSWORD to a non-empty value for the
compose-progress-foo-1 | superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
compose-progress-foo-1 |
compose-progress-foo-1 | You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
compose-progress-foo-1 | connections without a password. This is *not* recommended.
compose-progress-foo-1 |
compose-progress-foo-1 | See PostgreSQL documentation about "trust":
compose-progress-foo-1 | https://www.postgresql.org/docs/current/auth-trust.html
compose-progress-foo-1 exited with code 1
While the pull is running, Downloading and Extracting progress bars are shown for each layer:
⠼ e5ae68f74026 Extracting [==================================================>] 31.37MB/31.37MB 4.4s
⠼ 7b8fcc7e1ad0 Download complete 4.4s
⠼ 7527d03e2f77 Download complete 4.4s
⠼ 80e55689f4d0 Download complete 4.4s
⠼ 8a79eb6d69c9 Download complete 4.4s
⠼ 397705f2d093 Download complete 4.4s
⠼ de36ec4eb0a5 Download complete 4.4s
⠼ 08d878a022c1 Download complete 4.4s
⠼ 7677029670ff Downloading [=============================> ] 53.55MB/91.23MB 4.4s
⠼ 1d24b3d9557e Download complete 4.4s
Based on that output, some ideas:
- Remove the Pull complete lines after they have completed / after the image has fully downloaded
- Collapse the section, and replace it with a summary:
⠿ foo Pulled 14.6s
Perhaps the summary should have more details than just "Pulled"; users may be interested to know which image (and tag, digest?) was pulled, how large it was in total (?), and which architecture was pulled (?). I haven't given that a lot of thought yet;
⠿ foo Pulled postgres:latest (digest sha256:xxx) linux/amd64, 12345MB 14.6s
- Use a single progress bar (or counter) per image during the pull
Depending on the amount of available space or the number of services / pulls, perhaps it's possible to group the progress bars. The size would show the total size / sum of all layers (I'm not sure if this works when some layers are already downloaded, though);
postgres:latest Downloading 3/13 [=============================> ] 53.55MB/91.23MB 4.4s
postgres:latest Extracting 2/13 [==================================================>] 31.37MB/31.37MB 4.4s
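The "single counter per image" idea above could be sketched roughly like this (the event shape — image, layer, done, current, total — is a hypothetical stand-in, not Compose's actual internal representation):

```python
# Sketch: collapse layer-level pull events into one progress line per image.
from collections import defaultdict

def summarize(events):
    """Aggregate hypothetical per-layer events into 'image: done/total' lines."""
    per_image = defaultdict(lambda: {"done": 0, "layers": 0, "cur": 0, "total": 0})
    for ev in events:
        s = per_image[ev["image"]]
        s["layers"] += 1                     # one entry per layer event
        s["done"] += 1 if ev["done"] else 0  # completed layers
        s["cur"] += ev["current"]            # bytes (MB) downloaded so far
        s["total"] += ev["total"]            # total bytes (MB) across layers
    return [
        f'{img} Downloading {s["done"]}/{s["layers"]} {s["cur"]}MB/{s["total"]}MB'
        for img, s in per_image.items()
    ]

events = [
    {"image": "postgres:latest", "layer": "e5ae68f74026", "done": True,  "current": 31, "total": 31},
    {"image": "postgres:latest", "layer": "7677029670ff", "done": False, "current": 53, "total": 91},
]
print(summarize(events))  # one line for postgres:latest instead of two
```

With this shape, a 14-layer pull occupies a single terminal row regardless of how many layers the image has.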
Looking at the output, these are also somewhat confusing:
[+] Running 14/14
[+] Running 2/2
Wondering if we should use different names for such sections (e.g. "Pulling images for services"), or even if some actions should be grouped "per service". For example, the current output also shows:
[+] Running 3/3
⠿ Network compose-progress_default Created 0.0s
⠿ Container compose-progress-bar-1 Created 3.2s
⠿ Container compose-progress-foo-1 Created 3.2s
Perhaps the Container compose-progress-bar-1 Created (etc.) lines should be under a Service foo group (not sure if "created" and "started" should be shown separately after it's done);
[+] Starting services 2/2
[+] Service foo 3/3
⠿ Image postgres:latest pulled (digest sha256:xxx) linux/amd64, 12345MB 14.6s
⠿ Container compose-progress-foo-1 Created 3.2s
⠿ Container compose-progress-foo-1 Started 1.1s
[+] Service bar 3/3
⠿ Image postgres:latest pulled (digest sha256:xxx) linux/amd64, 12345MB 14.6s
⠿ Container compose-progress-bar-1 Created 3.2s
⠿ Container compose-progress-bar-1 Started 1.1s
While trying, I also noticed some "jittery" output if two services use the same image (this may be the same if they use different images, but layers that are shared between those images);
services:
foo:
image: postgres
bar:
image: postgres
During the pull, the output is "jumpy", perhaps because both the foo and bar services try to show progress for the same layers:
docker compose down -v
docker rmi postgres
docker compose up
[+] Running 8/15
⠋ foo Pulling 12.0s
⠿ 8a79eb6d69c9 Pull complete 4.9s
⠿ 397705f2d093 Pull complete 5.1s
⠿ de36ec4eb0a5 Pull complete 5.2s
⠏ 1d24b3d9557e Download complete 8.9s
⠏ a39fee215a44 Download complete 8.9s
⠋ bar Pulling 12.0s
⠿ e5ae68f74026 Pull complete 3.4s
⠿ 7b8fcc7e1ad0 Pull complete 3.9s
⠿ 7527d03e2f77 Pull complete 4.0s
⠿ 80e55689f4d0 Pull complete 4.2s
⠿ 08d878a022c1 Pull complete 5.3s
⠏ 7677029670ff Extracting [=============================================> ] 83.56MB/91.23MB 8.9s
⠏ e085b018338c Download complete 8.9s
⠏ 063b09ff12e9 Download complete 8.9s
After the pull completed, the output is a bit confusing, because some layers are shown under service foo, and some under bar:
docker compose up
[+] Running 15/15
⠿ foo Pulled 14.5s
⠿ 8a79eb6d69c9 Pull complete 4.9s
⠿ 397705f2d093 Pull complete 5.1s
⠿ de36ec4eb0a5 Pull complete 5.2s
⠿ e085b018338c Pull complete 11.3s
⠿ bar Pulled 14.5s
⠿ e5ae68f74026 Pull complete 3.4s
⠿ 7b8fcc7e1ad0 Pull complete 3.9s
⠿ 7527d03e2f77 Pull complete 4.0s
⠿ 80e55689f4d0 Pull complete 4.2s
⠿ 08d878a022c1 Pull complete 5.3s
⠿ 7677029670ff Pull complete 11.2s
⠿ 1d24b3d9557e Pull complete 11.2s
⠿ 063b09ff12e9 Pull complete 11.4s
⠿ a39fee215a44 Pull complete 11.4s
[+] Running 3/3
⠿ Network compose-progress_default Created 0.0s
⠿ Container compose-progress-bar-1 Created 3.2s
⠿ Container compose-progress-foo-1 Created 3.2s
Attaching to compose-progress-bar-1, compose-progress-foo-1
Not sure what the solution to that would be;
- for the "two services use the same image" case, I guess compose could detect this, and only do the pull once, but of course that won't help if the images are different (but share common layers).
- alternatively, do the "reverse", and make sure that layer progress is always shown under "both" (duplicate the progress in case it's shared); this is possibly challenging to do.
Hiding progress of individual layers (as mentioned in my previous comment) may help here as well.
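The first suggestion — detect services that share an image and pull it only once — could be modelled as a simple grouping step before scheduling pulls (the service/config shapes here are hypothetical, not Compose's types):

```python
# Sketch: deduplicate pulls when multiple services share the same image,
# so shared layers are pulled (and rendered) only once.
def pull_plan(services):
    """Return a mapping of image -> [service names], one pull per image."""
    plan = {}
    for name, cfg in services.items():
        plan.setdefault(cfg["image"], []).append(name)
    return plan

services = {"foo": {"image": "postgres"}, "bar": {"image": "postgres"}}
print(pull_plan(services))  # {'postgres': ['foo', 'bar']}: one pull serves both
```

As noted above, this only helps with identical image references; different images sharing common layers would still race on the same layer progress.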
@thaJeztah I can confirm that this is not just happening with build; I've been seeing this with up too, in Compose 2.1.1 on Ubuntu 20.
I have no idea about the implementation, but could the live progress output be trimmed to the available space with a "snipped" indicator?
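The "trimmed to the available space with a snipped indicator" idea might look roughly like this (a sketch only — the surrounding render loop and line source are hypothetical, not Compose's actual renderer):

```python
# Sketch: keep the live progress region within the terminal height,
# replacing the overflow with a one-line "snipped" indicator.
def trim(lines, height):
    """Return at most `height` lines, summarizing anything that was cut."""
    if len(lines) <= height:
        return lines
    keep = height - 1  # reserve one row for the indicator itself
    return lines[:keep] + [f"... {len(lines) - keep} more (snipped)"]

lines = [f"layer-{i} Downloading" for i in range(20)]
print(trim(lines, 5))  # 4 progress lines plus "... 16 more (snipped)"
```

Because the output never exceeds the terminal height, the cursor-repositioning escape codes would stay within the visible region instead of pushing history off-screen.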
@thaJeztah BuildKit indeed considers the build requests we schedule in parallel for services to be fully independent, and as such won't do any magic on concurrent pulls of the same layers.
@ericslandry for your information, this output is fully managed by buildx.
this output is fully managed by buildx.
In my example, these are pulls; no build is used. Or do you mean it's using the buildx library code for this?
Maybe I'm wrong; I thought the progress writer introduced by https://github.com/docker/compose-cli/pull/233 was a copy/paste from BuildKit. Maybe it's a full re-implementation with the same UX in mind?
Does docker buildx bake have the same issue rendering build/pull events?
Based on a quick test with a FROM postgres build, it looks like "pull as part of a build" is also quite verbose; I think for that output it would also make sense to "collapse" those lines after they're completed (similar to how RUN lines are shown while they're running, but collapsed after they complete):
echo 'FROM postgres' | docker build -
[+] Building 13.4s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 56B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/postgres:latest 2.7s
=> [auth] library/postgres:pull token for registry-1.docker.io 0.0s
=> [1/1] FROM docker.io/library/postgres@sha256:f76241d07218561e3d1a334eae6a5bf63c70b49f35ffecb7f020448e30e37390 10.6s
=> => resolve docker.io/library/postgres@sha256:f76241d07218561e3d1a334eae6a5bf63c70b49f35ffecb7f020448e30e37390 0.0s
=> => sha256:e5ae68f740265288a4888db98d2999a638fdcb6d725f427678814538d253aa4d 31.37MB / 31.37MB 1.4s
=> => sha256:f76241d07218561e3d1a334eae6a5bf63c70b49f35ffecb7f020448e30e37390 1.86kB / 1.86kB 0.0s
=> => sha256:e94a3bb612246f1f672a0d11fbd16415e2f95d308b37d38deaa8c2bd3c0116d8 10.23kB / 10.23kB 0.0s
=> => sha256:7527d03e2f7758fcbc420254a6a9ae51b970e70fec727269376356568f42e9bc 1.80kB / 1.80kB 0.4s
=> => sha256:7b8fcc7e1ad054463615f8e9ada48d0c011c51bc03317a709c4cfc23a3af52c7 4.41MB / 4.41MB 0.7s
=> => sha256:fb0630c9679aeef051ca89dddf5919c19be96109ae0648116900be09eb79545e 3.04kB / 3.04kB 0.0s
=> => sha256:80e55689f4d0cdd390957c8e1135b143ca3afb1486a1ca5a9fc01429d483b48d 1.42MB / 1.42MB 0.9s
=> => sha256:8a79eb6d69c9c99a8e75aa15011c9e57d0af9f3822905c12ec38e68d9d5c5cb9 8.05MB / 8.05MB 1.5s
=> => sha256:397705f2d09375da10b9c3cbfe61556a95d9673f6e016382f20bfed7284e85db 441.55kB / 441.55kB 1.2s
=> => sha256:de36ec4eb0a50925495a0bbc72e83cab5bd5d8ecf490c913f13412b2786fc25e 149B / 149B 1.5s
=> => sha256:08d878a022c1a8a3333ae9a3de8170431bde517831abbca34f768501e5cfda51 3.05kB / 3.05kB 1.6s
=> => sha256:7677029670ff4fe3625939c77c161e208cdc1e9c21ff9095d23160380ff492e5 91.23MB / 91.23MB 4.6s
=> => extracting sha256:e5ae68f740265288a4888db98d2999a638fdcb6d725f427678814538d253aa4d 2.7s
=> => sha256:1d24b3d9557e5acd875742dd0d101e562a5f2ca32ed7cc3351d3b4d8bb8bed7a 9.54kB / 9.54kB 1.7s
=> => sha256:e085b018338cc45b376fb135133a09b9736b5ea35be6d9925dc8fbde17e7e98b 129B / 129B 1.8s
=> => sha256:063b09ff12e95630f4462889f6f1fb572de84bbde104ea7f469d6497799a9736 201B / 201B 1.9s
=> => sha256:a39fee215a44ffe3b744f8a79378afe57a00606a5f28856ddbd0096000c0d95d 4.72kB / 4.72kB 2.0s
=> => extracting sha256:7b8fcc7e1ad054463615f8e9ada48d0c011c51bc03317a709c4cfc23a3af52c7 0.2s
=> => extracting sha256:7527d03e2f7758fcbc420254a6a9ae51b970e70fec727269376356568f42e9bc 0.1s
=> => extracting sha256:80e55689f4d0cdd390957c8e1135b143ca3afb1486a1ca5a9fc01429d483b48d 0.1s
=> => extracting sha256:8a79eb6d69c9c99a8e75aa15011c9e57d0af9f3822905c12ec38e68d9d5c5cb9 0.4s
=> => extracting sha256:397705f2d09375da10b9c3cbfe61556a95d9673f6e016382f20bfed7284e85db 0.1s
=> => extracting sha256:de36ec4eb0a50925495a0bbc72e83cab5bd5d8ecf490c913f13412b2786fc25e 0.0s
=> => extracting sha256:08d878a022c1a8a3333ae9a3de8170431bde517831abbca34f768501e5cfda51 0.0s
=> => extracting sha256:7677029670ff4fe3625939c77c161e208cdc1e9c21ff9095d23160380ff492e5 3.7s
=> => extracting sha256:1d24b3d9557e5acd875742dd0d101e562a5f2ca32ed7cc3351d3b4d8bb8bed7a 0.0s
=> => extracting sha256:e085b018338cc45b376fb135133a09b9736b5ea35be6d9925dc8fbde17e7e98b 0.0s
=> => extracting sha256:063b09ff12e95630f4462889f6f1fb572de84bbde104ea7f469d6497799a9736 0.0s
=> => extracting sha256:a39fee215a44ffe3b744f8a79378afe57a00606a5f28856ddbd0096000c0d95d 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:d2474174ccb2b79d24c4a02130c602d0800484df71818a93ae2aff9adc651663 0.0s
Opened https://github.com/moby/buildkit/issues/2511
@ndeloof : does buildx also control the part when containers are creating/starting?
[+] Running 43/43
⠿ Container redacted-1 Started 4.5s
⠿ Network redacted-default Created 0.0s
⠿ Volume "redacted_a-data" Created 0.0s
⠿ Volume "redacted_b-data" Created 0.0s
⠿ Container redacted-a-1 Started 3.0s
⠿ Container redacted-b-1 Started 3.1s
⠿ Container redacted-c-1 Started 3.3s
⠿ Container redacted-postgres-1 Started 3.3s
⠿ Container redacted-kafka-1 Started 10.0s
In a short window (say 4 lines high), the problem still occurs, even without any pulls.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically marked as not stale anymore due to the recent activity.
Was this fixed by #9476 (in release v2.6.0)? I don't get any flickering or line spam at all anymore when multiple containers use the same image.
Here's my use case to reproduce the problem.
- Docker 20.10.17
- Docker Compose v2.6.0
- This compose.yaml:
x-service: &default-service
image: nginx:latest
command: >
bash -c "sleep 10 && nginx -g 'daemon off;'"
healthcheck:
test: service nginx status || exit 1
interval: 2s
timeout: 1s
retries: 200
start_period: 600s
services:
svc_a:
<<: *default-service
svc_b:
<<: *default-service
depends_on:
svc_a:
condition: service_healthy
svc_c:
<<: *default-service
depends_on:
svc_b:
condition: service_healthy
svc_d:
<<: *default-service
depends_on:
svc_c:
condition: service_healthy
svc_e:
<<: *default-service
depends_on:
svc_d:
condition: service_healthy
svc_f:
<<: *default-service
depends_on:
svc_e:
condition: service_healthy
svc_g:
<<: *default-service
depends_on:
svc_f:
condition: service_healthy
svc_h:
<<: *default-service
depends_on:
svc_g:
condition: service_healthy
svc_i:
<<: *default-service
depends_on:
svc_h:
condition: service_healthy
svc_j:
<<: *default-service
depends_on:
svc_i:
condition: service_healthy
svc_k:
<<: *default-service
depends_on:
svc_j:
condition: service_healthy
svc_l:
<<: *default-service
depends_on:
svc_k:
condition: service_healthy
- Make terminal 11 lines high
- docker compose up -d
- Notice terminal scroll
@BBaoVanC The problem is still present if your docker-compose contains multiple services using different images
There should be a toggle option (CLI flag + env var?) so the output is not a mess when pulling/building a complex stack :(
@ericslandry I can reproduce using your config.
I think the reason that v2.6.0 fixed it for me was because it made it so only one service would show the pull progress at the same time. After that change, my terminal was never small enough to not be able to fit it all on one screen without scrolling.
Your config works fine for me if my terminal is tall enough to fit it all (about 23 lines), but if I do 11 lines like you said, I see the issue happen again.
Boy is this a mess on a large compose pull. How about an option to just suppress individual layer progress? It doesn't take much to overshoot the line height of most terminals with a reasonable compose file. Otherwise you have to either watch the fireworks, or resort to something like:
docker compose ps --services |xargs -n 1 docker compose pull
Have a large number of containers within a docker-compose file. Same problem:
- It spams the output buffer by jumping around like crazy
- it flashes as fireworks like crazy while it's happening
- The ultimate output simply says "[container] Pulled ... X.Xs" but gives no indication of which images were already up to date and which were actively pulled
- The ultimate output is a mix of "containername Pulled", "XXXX already exists" and "xxxx pull complete", with some items showing the breakdown of parts and others not
As pointed out, it's very easy to exceed a typical terminal height in this case.
Possible ideas:
- Do the docker compose pull sequentially or in batches, which reduces the number of pulls at any given time and thus concludes some items before the next few are started
- Defer the output of any that are "up to date" until the end: do a quick test of whether the image is up to date, and if it is, output nothing until all are updated, then print the "XXXX is up to date" lines
- A better terminal implementation
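The first idea — pulling sequentially or in batches — amounts to splitting the image list into fixed-size groups before scheduling (a sketch; the batch size and the omitted pull call are assumptions, not Compose behaviour):

```python
# Sketch: limit concurrent pulls so the live progress region only ever
# shows a handful of images at a time.
def batches(images, size=4):
    """Split the image list into consecutive groups of at most `size`."""
    return [images[i:i + size] for i in range(0, len(images), size)]

imgs = [f"service-{i}" for i in range(10)]
for batch in batches(imgs, 4):
    # pull this batch concurrently, wait for it to finish, then move on
    # (the actual pull call is omitted here)
    print(batch)
```

With a batch size of 4, even a 40-service stack would never render more than a few progress groups at once, at the cost of somewhat slower total pull time.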
A workaround that works for me is to add --quiet-pull to docker compose up. This doesn't show progress for individual images, only the service itself.
This is becoming significantly more problematic as our users get automatically opted in to Compose V2; with the size of our compose file, docker compose pull is virtually unreadable.
I do believe it's the same as #9377 but the height required exceeds that of my built in monitor on my laptop, so it's difficult to verify.
I believe I have been having this issue.
Connecting to a common server, I was not getting the issue when on my mobile client (Termius for iOS) but was getting it on my desktop using SecureCRT where it was producing thousands of lines when pulling images.
I've just played with the settings on SecureCRT and found that if I disabled the "Line wrap" emulation mode, it stopped the issue from happening instantly.
Apologies if this comment isn't of any relevance or significance but I found this issue when trying to troubleshoot the problem and wanted to share the fix that worked for me.
@laurazard Thank you very much for improving the situation! Unfortunately, my problem persists as per your PR comment:
"Detect when the number of events we have to display is > than terminal height, and adjust the output to omit child events when that's the case. This isn't a perfect solution (if the number of services > terminal height we will still run into issues, and the shortened output isn't very explicit), but it only kicks in in cases where the output would otherwise be worse, so it's an improvement over the current situation."
Would it be ok to leave this issue open or should a new issue be created?
Please open a fresh new issue. The main question to address is "how to display status for N services when N > terminal height?". Any suggestion is welcome
I'm at least still seeing this on Docker Compose version v2.17.3 with docker version 23.0.5
I'm not sure line-height is the ultimate culprit here. I'm seeing hectic-as-heck output even with single service build in a full height terminal window on a laptop with a built-in 1600x1200 display with scaling disabled. It started happening with the release of v2.17. With v2.16, output with the same services was as-expected.
@dcarbone since your issue started more recently and doesn't seem related, please open a new issue with all the requested information + information on how to reproduce it so we can take a look :)
I'm at least still seeing this on Docker Compose version v2.17.3 with docker version 23.0.5
Same for me. I am using Docker Desktop 4.19.0 on an M1 MacBook 16-inch, 2021 with Ventura 13.3.1. Interestingly, this bug only surfaced today after updating Docker Desktop from 4.17.x to 4.19.0; before that, the log output didn't jump and everything was smooth with the exact same environment and the same Dockerfile and docker-compose.yml.
Maybe all of you should comment on open issue #10256