Issue with Docker not cleaning up /run, causing inodes to be depleted
Output of docker version:
# docker version
Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-88.git07f3374.el7.centos.x86_64
Go version: go1.9.4
Git commit: 07f3374/1.13.1
Built: Fri Dec 7 16:13:51 2018
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-88.git07f3374.el7.centos.x86_64
Go version: go1.9.4
Git commit: 07f3374/1.13.1
Built: Fri Dec 7 16:13:51 2018
OS/Arch: linux/amd64
Experimental: false
Output of docker info:
# docker info
Containers: 41
Running: 32
Paused: 0
Stopped: 9
Images: 91
Server Version: 1.13.1
Storage Driver: devicemapper
Pool Name: docker--vg-docker--pool
Pool Blocksize: 524.3 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 25.6 GB
Data Space Total: 53.17 GB
Data Space Available: 27.57 GB
Metadata Space Used: 8.995 MB
Metadata Space Total: 54.53 MB
Metadata Space Available: 45.53 MB
Thin Pool Minimum Free Space: 5.316 GB
Udev Sync Supported: true
Deferred Removal Enabled: true
Deferred Deletion Enabled: true
Deferred Deleted Device Count: 0
Library Version: 1.02.135-RHEL7 (2016-11-16)
Logging Driver: json-file
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: /usr/libexec/docker/docker-init-current
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: fec3683b971d9c3ef73f284f176672c44b448662 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
WARNING: You're not using the default seccomp profile
Profile: /etc/docker/seccomp.json
selinux
Kernel Version: 3.10.0-514.10.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 4
Total Memory: 31.25 GiB
Name: os-node-p04
ID: 7ZD3:FQ3S:SUWP:UY5L:DA22:DNUL:RLMJ:ZTUW:DH3X:KM46:NKBU:LPT6
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 194
Goroutines: 154
System Time: 2019-01-10T14:40:04.603679055-05:00
EventsListeners: 0
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Experimental: false
Insecure Registries:
172.30.0.0/16
127.0.0.0/8
Live Restore Enabled: false
Registries: docker.io (secure)
Additional environment details (AWS, VirtualBox, physical, etc.):
CentOS Linux release 7.3.1611 (Core), OpenShift 3.7.1
Describe the results you received:
The issue is that inode usage in /run keeps growing until all inodes are depleted, at which point OpenShift and Docker become unstable. Restarting Docker normally frees some space; a reboot is needed to clean it up completely.
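For anyone hitting the same symptom, the following commands (standard GNU coreutils as shipped with CentOS 7; the path is the one from this report) should confirm that it is inode exhaustion rather than disk space, and show which directories hold the inodes:

# Show inode usage of the filesystem backing /run
df -i /run

# Rank directories under /run by inode count (du --inodes needs GNU coreutils >= 8.22)
du --inodes -x /run | sort -n | tail -20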
The directories that consume the inodes are under /run/docker/libcontainerd/:
247147 ./docker/libcontainerd/83a2edee0c029624b782965447e2176d4f268be926b9334ebc6e49ae4fd9bd68
340480 ./docker/libcontainerd/8df6fcee5e8b53102cb7f4feec78f87c19d160b89fd0141fab88dcd80679d201
340480 ./docker/libcontainerd/bfaf3881abb7ea3533681d294d8fc17e0604adbcf375f87470ffaef516001f63
and seem to be related to the stdin/stdout/stderr FIFOs of exec'd commands:
prwx------. 1 root root 0 Nov 13 10:58 5eee5c2df353b6363c798e6625c962157609c8d6cf13d94772d90654f8331732-stdout
prwx------. 1 root root 0 Nov 13 10:58 5eee5c2df353b6363c798e6625c962157609c8d6cf13d94772d90654f8331732-stdin
prwx------. 1 root root 0 Nov 13 10:58 5eee5c2df353b6363c798e6625c962157609c8d6cf13d94772d90654f8331732-stderr
Note that this belongs to a currently running container; its directory still has entries from November and contains 247,163 entries in total.
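As a rough sketch (not a fix), this is how I counted the leftover exec FIFOs per container directory and looked for old ones; the 30-day cutoff is an arbitrary example, and the path is the one from this report:

# Count named pipes (the *-stdin/-stdout/-stderr entries) per container directory
for d in /run/docker/libcontainerd/*/; do
  printf '%7d %s\n' "$(find "$d" -type p | wc -l)" "$d"
done | sort -n | tail -10

# List FIFOs that have not been touched for more than 30 days
find /run/docker/libcontainerd -type p -mtime +30 | head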
Describe the results you expected:
I would expect Docker to clean up /run appropriately so that it does not consume all of the available inodes.
Additional information you deem important (e.g. issue happens only occasionally):
I saw a similar issue in another repo:
https://github.com/docker/for-linux/issues/214
which seems to indicate this was fixed upstream somehow, but I have not been able to find any further information.
Thanks!
--John
Can confirm that it happened in our setup:
- Ubuntu Server 16.04
- Docker version 18.06-3-ce