Docker Desktop doesn't limit log sizing
Description
The Electron app logs issues encountered while running containers, but there is no cap on the total size of the logs it generates.
Whilst the logs roll over into a new file at 20MB, I've had almost 70GiB of them accumulate over the past three days (5th-8th).
Reproduce
- Run a container with an unsupported feature, such as amd64 images under Rosetta on an Arm Mac (#6265); see the example after this list
- Observe the 20MB log files rolling over into hundreds of files
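For example, a minimal sketch (the image choice is just an illustration, it assumes the "Use Rosetta for x86_64/amd64 emulation" setting is enabled in Docker Desktop, and it may not reliably trigger the flood):

```
# Run an amd64-only image under Rosetta emulation on an Apple Silicon Mac
docker run --rm --platform linux/amd64 alpine uname -m

# Watch the host-side Electron log directory for rapid rollover
du -sh ~/Library/Containers/com.docker.docker/Data/log/host
ls ~/Library/Containers/com.docker.docker/Data/log/host | grep -c '^electron-'
```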
Expected behavior
Docker Desktop should, by default, enforce a maximum total size for the Electron app's logs.
docker version
Client:
Version: 27.0.3
API version: 1.46
Go version: go1.21.11
Git commit: 7d4bcd8
Built: Fri Jun 28 23:59:41 2024
OS/Arch: darwin/arm64
Context: desktop-linux
Server: Docker Desktop 4.32.0 (157355)
Engine:
Version: 27.0.3
API version: 1.46 (minimum version 1.24)
Go version: go1.21.11
Git commit: 662f78c
Built: Sat Jun 29 00:02:44 2024
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.7.18
GitCommit: ae71819c4f5e67bb4d5ae76a6b735f29cc25774e
runc:
Version: 1.1.13
GitCommit: v1.1.13-0-g58aa920
docker-init:
Version: 0.19.0
GitCommit: de40ad0
docker info
Client:
Version: 27.0.3
Context: desktop-linux
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.15.1-desktop.1
Path: /Users/dporter/.docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.28.1-desktop.1
Path: /Users/dporter/.docker/cli-plugins/docker-compose
debug: Get a shell into any image or container (Docker Inc.)
Version: 0.0.32
Path: /Users/dporter/.docker/cli-plugins/docker-debug
desktop: Docker Desktop commands (Alpha) (Docker Inc.)
Version: v0.0.14
Path: /Users/dporter/.docker/cli-plugins/docker-desktop
dev: Docker Dev Environments (Docker Inc.)
Version: v0.1.2
Path: /Users/dporter/.docker/cli-plugins/docker-dev
extension: Manages Docker extensions (Docker Inc.)
Version: v0.2.25
Path: /Users/dporter/.docker/cli-plugins/docker-extension
feedback: Provide feedback, right in your terminal! (Docker Inc.)
Version: v1.0.5
Path: /Users/dporter/.docker/cli-plugins/docker-feedback
init: Creates Docker-related starter files for your project (Docker Inc.)
Version: v1.3.0
Path: /Users/dporter/.docker/cli-plugins/docker-init
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
Version: 0.6.0
Path: /Users/dporter/.docker/cli-plugins/docker-sbom
scout: Docker Scout (Docker Inc.)
Version: v1.10.0
Path: /Users/dporter/.docker/cli-plugins/docker-scout
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 43
Server Version: 27.0.3
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: runc io.containerd.runc.v2
Default Runtime: runc
Init Binary: docker-init
containerd version: ae71819c4f5e67bb4d5ae76a6b735f29cc25774e
runc version: v1.1.13-0-g58aa920
init version: de40ad0
Security Options:
seccomp
Profile: unconfined
cgroupns
Kernel Version: 6.6.32-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: aarch64
CPUs: 8
Total Memory: 3.831GiB
Name: docker-desktop
ID: d6394c82-2642-4f03-933f-738b19620cad
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
No Proxy: hubproxy.docker.internal
Labels:
com.docker.desktop.address=unix:///Users/dporter/Library/Containers/com.docker.docker/Data/docker-cli.sock
Experimental: false
Insecure Registries:
hubproxy.docker.internal:5555
127.0.0.0/8
Live Restore Enabled: false
WARNING: daemon is not using the default seccomp profile
Diagnostics ID
97B678FB-54D1-423B-8723-4C2C8557A080/20240709194122
Additional Info
A three-line snippet of the log messages:
2024-07-07T15:39:08.868Z error Unhandled uncaught exception : { message: 'write EPIPE', errno: -32, code: 'EPIPE', syscall: 'write' }
2024-07-07T15:39:08.868Z error Unhandled uncaught exception : { message: 'write EPIPE', errno: -32, code: 'EPIPE', syscall: 'write' }
2024-07-07T15:39:08.869Z error Unhandled uncaught exception : { message: 'write EPIPE', errno: -32, code: 'EPIPE', syscall: 'write' }
Log file location:
❯ pwd
/Users/dporter/Library/Containers/com.docker.docker/Data/log
❯ du -sh *
67G host
214M vm
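Clearing the rotated files by hand reclaims the space in the meantime. A rough sketch, assuming the default log location above; quit Docker Desktop first:

```
# With Docker Desktop quit, remove the rotated Electron log files
rm -f ~/Library/Containers/com.docker.docker/Data/log/host/electron-*.log*

# Confirm the space has been reclaimed
du -sh ~/Library/Containers/com.docker.docker/Data/log/host
```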
Can confirm that I have the exact same issue. I'm using the same build of Docker for Mac; in my case the disk usage was almost 250GB, and most of the files were created on or after July 7th.
Hey @radarsh @Stealthii, I'm assuming you're using an M1 Mac. Could you please try out this build to see whether the issue still persists?
https://desktop-stage.docker.com/mac/main/arm64/159051/Docker.dmg
Sorry for any inconvenience btw
@trungutt I am using an M1 Mac. Strangely, though, I haven't had this issue occur again since I deleted the Electron log files. Maybe I haven't repeated the steps that triggered it in the first place.
Let me try this build, but it might be tricky due to enterprise firewall restrictions at my work.
@radarsh Thanks for the update. Next time I'd like to know how to reproduce it locally (if possible; a minimal reproduction always helps, though I'm aware it isn't easy most of the time).
You could still try the build above, which is supposed to rotate the logs correctly. Please let me know if logs continue to accumulate in a similar way.
@trungutt thanks for the build. Since reporting, I've been unable to replicate the issue that resulted in the log floods in the first place. I've set up a watcher to see if this occurs again and will report back if the log rotation changes alleviate the log-size problem!
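For reference, the watcher is nothing fancy, just a periodic size check along these lines (a sketch; the threshold and interval are arbitrary):

```
#!/bin/sh
# Crude watcher: warn when the Docker Desktop Electron log directory grows past a threshold.
LOG_DIR="$HOME/Library/Containers/com.docker.docker/Data/log/host"
THRESHOLD_MB=1024   # arbitrary 1GiB threshold

while true; do
  size_mb=$(du -sm "$LOG_DIR" | awk '{print $1}')
  if [ "$size_mb" -gt "$THRESHOLD_MB" ]; then
    echo "$(date): Docker Electron logs at ${size_mb}MB (threshold ${THRESHOLD_MB}MB)"
  fi
  sleep 300
done
```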
My disk completely ran out of space. Over 500GB of Docker Electron error logs.
EDIT: I'll update my Docker Desktop app and report back if I see this again.
Same on an M1 Mac (Sonoma 14.6.1), Docker Desktop 4.34.0.
Electron logs consumed almost 220GB in a day and a half, and all kinds of apps started failing because of the maxed-out disk.
I was able to recover by deleting the ~/Library/Containers/com.docker.docker/Data/log/host/electron-*.log* files.
Cross-posted to https://github.com/docker/for-win/issues/13926.
Still seeing this after upgrading to the latest Docker Desktop version. Looks like I have 2GB of log files since yesterday.
I'm having this issue as well. Mine generated 40GB overnight. Here's the log entry:
2024-09-15T19:05:37.589Z error Unhandled uncaught exception : { message: 'write EPIPE', errno: -32, code: 'EPIPE', syscall: 'write' }
2024-09-15T19:05:37.590Z info [SSE] Disconnecting from /clicontext/events
2024-09-15T19:05:37.590Z info [SSE] Connecting to /clicontext/events
2024-09-15T19:05:37.590Z error [BUGSNAG] Uncaught exception…
Error: write EPIPE
at afterWriteDispatched (node:internal/stream_base_commons:161:15)
at writeGeneric (node:internal/stream_base_commons:152:3)
at Socket._writeGeneric (node:net:953:11)
at Socket._write (node:net:965:8)
at writeOrBuffer (node:internal/streams/writable:564:12)
at _write (node:internal/streams/writable:493:10)
at Writable.write (node:internal/streams/writable:502:10)
at console.value (node:internal/console/constructor:304:16)
at console.warn (node:internal/console/constructor:384:26)
at console.<computed> [as error] (/Applications/Docker.app/Contents/MacOS/Docker Desktop.app/Contents/Resources/app.asar/build/electron.cjs:2:52676)
Thanks for raising this, and sorry for the pain it's causing. We've identified the likely cause and we've got a patch into 4.35 (due to be released this week). We think that'll fix it, but we're also tightening up our logging code as extra protection against this sort of over-logging, and hopefully we'll get that into this week's release as well. Please let us know if this persists in 4.35 and I'll make it my top priority to fix it.
Again today. v4.34.2.
Quick update: We've hit some delays on releasing 4.35, so we're planning to release a 4.34.3 version (likely this week) that addresses this issue.
I hit this problem last week. The log files used up all the free disk space on my C: drive, so I freed up some space, but in a few hours it was gone again. I eventually realized it was 'electron-YYYY-MM-DD-HH.log.NNN' files building up in "C:\Users\UserName\AppData\Local\Docker\log\host". I would often get six 20MB log files per minute, and I ended up with nearly 6000 log files, totalling 87GB. It was logging the following error:
2024-09-30T21:28:07.267Z info [SSE] Disconnecting from /clicontext/events
2024-09-30T21:28:07.267Z info [SSE] Connecting to /clicontext/events
2024-09-30T21:28:07.267Z error [BUGSNAG] Uncaught exception…
Error: EPIPE: broken pipe, write
at Socket._write (node:internal/net:61:18)
at writeOrBuffer (node:internal/streams/writable:564:12)
at _write (node:internal/streams/writable:493:10)
at Writable.write (node:internal/streams/writable:502:10)
at console.value (node:internal/console/constructor:304:16)
at console.warn (node:internal/console/constructor:384:26)
at console.<computed> [as error] (C:\Program Files\Docker\Docker\frontend\resources\app.asar\build\electron.cjs:2:37393)
at C:\Program Files\Docker\Docker\frontend\resources\app.asar\build\electron.cjs:8:326039
at C:\Program Files\Docker\Docker\frontend\resources\app.asar\build\electron.cjs:8:2135903
at a._error (C:\Program Files\Docker\Docker\frontend\resources\app.asar\build\electron.cjs:8:2129515)
2024-09-30T21:28:07.267Z error Unhandled uncaught exception : {
message: 'EPIPE: broken pipe, write',
errno: -4047,
syscall: 'write',
code: 'EPIPE'
}
The Docker Desktop GUI window was just a white frame with nothing displayed. When things are running normally, the log files record "info [IPC] dockerListImages: undefined" every minute. By the way, this is Docker v4.34.2. Update: Sorry, I just realized that this thread is for Mac and not Windows, but presumably it's a similar issue.
Thanks Nigel. That's consistent with the bug we're fixing in 4.34.3, I'll be sure to post here when it's available.
@smithnigelw, @iamwpj, @TylerBarnes, @stoddn: Sorry for the delay, but I'm happy to let you know that 4.34.3 released today! It'll be offered to you in the app as it's rolled out to users, or you can jump the queue and grab an installer - see release notes and download links here.
The new version should immediately clear up your log files when you run it and then keep them at a reasonable level. Please reach out here if you see any issues and I'll jump straight on it.
Hi @mark-dr, this still seems to persist in our org while on the latest version of Docker
Thanks for the report @klssland. Could you either submit a diagnostic, or describe what you see in your ~/Library/Containers/com.docker.docker/Data/log/host directory? (the relevant log files will start with electron-[YYYY]-[MM]-[DD]). It'd be interesting to see the output of:
ls -l ~/Library/Containers/com.docker.docker/Data/log/host/electron-*log*
Would also be interesting if there's any repeating pattern in the latest log files.
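If you can get at one of the newer files, something along these lines would summarise the most frequent messages (a rough sketch; it strips the leading timestamp so identical lines group together):

```
# Count the most frequent messages in the most recently modified electron log
cd ~/Library/Containers/com.docker.docker/Data/log/host
newest=$(ls -t electron-*.log* | head -1)
sed -E 's/^[0-9T:.Z-]+ //' "$newest" | sort | uniq -c | sort -rn | head -20
```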
ls -l ~/Library/Containers/com.docker.docker/Data/log/host/electron-*log*
The user has cleared their logs; I'll try to find another device where this is occurring.
/Users/xxx/Library/Containers/com.docker.docker/Data/log/host/electron-2024-10-15-14.log
/Users/xxx/Library/Containers/com.docker.docker/Data/log/host/electron-audit-log.json
Another device has thousands of these:
-rw-r--r-- 1 xxx staff 20971600 Oct 11 15:30 electron-2024-10-11-15.log.531
-rw-r--r-- 1 xxx staff 20972145 Oct 11 15:30 electron-2024-10-11-15.log.532
-rw-r--r-- 1 xxx staff 20972145 Oct 11 15:30 electron-2024-10-11-15.log.533
-rw-r--r-- 1 xxx staff 20971600 Oct 11 15:30 electron-2024-10-11-15.log.534
-rw-r--r-- 1 xxx staff 20972231 Oct 11 15:30 electron-2024-10-11-15.log.535
-rw-r--r-- 1 xxx staff 20972145 Oct 11 15:30 electron-2024-10-11-15.log.536
-rw-r--r-- 1 xxx staff 20971600 Oct 11 15:30 electron-2024-10-11-15.log.537
-rw-r--r-- 1 xxx staff 20971650 Oct 11 15:30 electron-2024-10-11-15.log.538
-rw-r--r-- 1 xxx staff 20972010 Oct 11 15:30 electron-2024-10-11-15.log.539
-rw-r--r-- 1 xxx staff 20971600 Oct 11 15:30 electron-2024-10-11-15.log.540
-rw-r--r-- 1 xxx staff 20972145 Oct 11 15:30 electron-2024-10-11-15.log.541
-rw-r--r-- 1 xxx staff 20972145 Oct 11 15:30 electron-2024-10-11-15.log.542
-rw-r--r-- 1 xxx staff 20972145 Oct 11 15:30 electron-2024-10-11-15.log.543
-rw-r--r-- 1 xxx staff 20971600 Oct 11 15:31 electron-2024-10-11-15.log.544
-rw-r--r-- 1 xxx staff 20972145 Oct 11 15:31 electron-2024-10-11-15.log.545
-rw-r--r-- 1 xxx staff 20972145 Oct 11 15:31 electron-2024-10-11-15.log.546
-rw-r--r-- 1 xxx staff 20971600 Oct 11 15:31 electron-2024-10-11-15.log.547
If you're able to get hold of one of those files, it'd be really handy to see their contents - not necessarily the whole file, especially if there's anything sensitive there, but if there's a particular message that appears a lot then I can look into it.
The log files are 20MB each and seem to repeat the same information:
2024-10-10T14:19:26.376Z info [IPC] dockerListImages: undefined
2024-10-10T14:20:26.379Z info [IPC] dockerListImages: undefined
2024-10-10T14:21:26.379Z info [IPC] dockerListImages: undefined
2024-10-10T14:22:26.377Z info [IPC] dockerListImages: undefined
2024-10-10T14:22:53.630Z info [TRAY] Rebuilding items for login status: LoginSuccess
2024-10-10T14:23:26.378Z info [IPC] dockerListImages: undefined
2024-10-10T14:24:26.380Z info [IPC] dockerListImages: undefined
2024-10-10T14:25:26.379Z info [IPC] dockerListImages: undefined
2024-10-10T14:26:26.382Z info [IPC] dockerListImages: undefined
2024-10-10T14:27:26.377Z info [IPC] dockerListImages: undefined
2024-10-10T14:28:26.379Z info [IPC] dockerListImages: undefined
2024-10-10T14:29:26.381Z info [IPC] dockerListImages: undefined
2024-10-10T14:30:26.381Z info [IPC] dockerListImages: undefined
2024-10-10T14:31:26.380Z info [IPC] dockerListImages: undefined
2024-10-10T14:32:26.380Z info [IPC] dockerListImages: undefined
2024-10-10T14:33:26.382Z info [IPC] dockerListImages: undefined
2024-10-10T14:34:26.371Z info [IPC] dockerListImages: undefined
2024-10-10T14:35:26.368Z info [IPC] dockerListImages: undefined
2024-10-10T14:36:26.369Z info [IPC] dockerListImages: undefined
2024-10-10T14:37:26.367Z info [IPC] dockerListImages: undefined
2024-10-10T14:37:37.842Z error /events failed: undefined
2024-10-10T14:37:37.846Z info [TRAY] Rebuilding items for kubernetes status: stopped
2024-10-10T14:37:37.853Z error [IPC] extensions.extensionsBackendEvents: Errored after 1480516525ms, emitted 10 times, error: { message: 'Unknown error' }
2024-10-10T14:37:37.854Z info [SSE] Disconnecting from /events
2024-10-10T14:37:37.864Z info [TRAY] Rebuilding items for engine state: stopping, idle: running
2024-10-10T14:37:37.868Z info [TRAY] Rebuilding items for engine state: stopping, idle: running
2024-10-10T14:37:37.871Z info [TRAY] Rebuilding items for engine state: stopped, idle: running
2024-10-10T14:37:37.876Z info [IPC] engine.dockerStreamEvents: Unsubscribed after 1480516570ms, emitted 0 times
2024-10-10T14:37:37.879Z error [BUGSNAG] Uncaught exception…
Error: aborted
at Socket.socketCloseListener (node:_http_client:460:19)
at Socket.emit (node:events:531:35)
at Pipe.<anonymous> (node:net:338:12)
2024-10-10T14:37:37.879Z error Unhandled uncaught exception : { message: 'aborted', code: 'ECONNRESET' }
2024-10-10T14:37:37.879Z error [BUGSNAG] Uncaught exception…
Error: aborted
at Socket.socketCloseListener (node:_http_client:460:19)
at Socket.emit (node:events:531:35)
at Pipe.<anonymous> (node:net:338:12)
2024-10-10T14:37:37.879Z error Unhandled uncaught exception : { message: 'aborted', code: 'ECONNRESET' }
2024-10-10T14:37:37.891Z info [IPC] systemResources.openStatsStream: Unsubscribed after 781743853ms, emitted 8563 times
2024-10-10T14:37:37.891Z info [IPC] systemResources.openStatsStream: Unsubscribed after 781743884ms, emitted 8571 times
2024-10-10T14:37:37.892Z info [IPC] engine.dockerStreamEvents: Unsubscribed after 1480515909ms, emitted 95 times
2024-10-10T14:37:37.892Z info [IPC] engine.dockerStreamEvents: Unsubscribed after 1480516484ms, emitted 95 times
2024-10-10T14:37:37.899Z info [HTTP] Error on /analytics/track: socket hang up
2024-10-10T14:37:37.899Z info [SSE] Disconnecting from /clicontext/events
2024-10-10T14:37:37.901Z info [SSE] Connecting to /clicontext/events
2024-10-10T14:37:37.902Z info [SSE] Disconnecting from /pause/events
2024-10-10T14:37:37.902Z info [SSE] Disconnecting from /clicontext/events
2024-10-10T14:37:37.902Z info [SSE] Connecting to /clicontext/events
2024-10-10T14:37:37.902Z error [BUGSNAG] Uncaught exception…
Error: write EPIPE
at afterWriteDispatched (node:internal/stream_base_commons:161:15)
at writeGeneric (node:internal/stream_base_commons:152:3)
at Socket._writeGeneric (node:net:953:11)
at Socket._write (node:net:965:8)
at writeOrBuffer (node:internal/streams/writable:564:12)
at _write (node:internal/streams/writable:493:10)
at Writable.write (node:internal/streams/writable:502:10)
at console.value (node:internal/console/constructor:304:16)
at console.warn (node:internal/console/constructor:384:26)
at console.<computed> [as error] (/Applications/Docker.app/Contents/MacOS/Docker Desktop.app/Contents/Resources/app.asar/build/electron.cjs:2:52676)
2024-10-10T14:37:37.902Z error Unhandled uncaught exception : { message: 'write EPIPE', errno: -32, code: 'EPIPE', syscall: 'write' }
Error: write EPIPE
at afterWriteDispatched (node:internal/stream_base_commons:161:15)
at writeGeneric (node:internal/stream_base_commons:152:3)
at Socket._writeGeneric (node:net:953:11)
at Socket._write (node:net:965:8)
at writeOrBuffer (node:internal/streams/writable:564:12)
at _write (node:internal/streams/writable:493:10)
at Writable.write (node:internal/streams/writable:502:10)
at console.value (node:internal/console/constructor:304:16)
at console.warn (node:internal/console/constructor:384:26)
at console.<computed> [as error] (/Applications/Docker.app/Contents/MacOS/Docker Desktop.app/Contents/Resources/app.asar/build/electron.cjs:2:52676)
2024-10-10T14:42:34.439Z error Unhandled uncaught exception : { message: 'write EPIPE', errno: -32, code: 'EPIPE', syscall: 'write' }
2024-10-10T14:42:34.439Z info [SSE] Disconnecting from /clicontext/events
2024-10-10T14:42:34.439Z info [SSE] Connecting to /clicontext/events
2024-10-10T14:42:34.439Z error [BUGSNAG] Uncaught exception…
@klssland: Are you certain that log is from Docker Desktop v4.34.3? It looks like it was generated by v4.34.2.
Closing as fixed, but please leave another message here if you see any issues with versions on or after v4.35.0.
Again on v4.45.0; accumulated 70GB of logs in 4 days.
Thanks for the report, @guareber! Was that concentrated in any particular files? (there are some docs here on where to find them)
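If it helps, something along these lines should show where the space is concentrated (assuming the default macOS log location; adjust the path if yours differs):

```
# Per-directory totals, then the ten largest electron log files
du -sh ~/Library/Containers/com.docker.docker/Data/log/* | sort -h
ls -lhS ~/Library/Containers/com.docker.docker/Data/log/host/electron-*log* | head -10
```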