runtime
When running the stress test, the runtime is unable to unmount some rootfs
create & remove stress test
for i in $(seq 1 50); do \
    for j in $(seq 1 20); do \
        docker run -dti busybox sh; \
    done; \
    sleep 5; \
    docker rm -f $(docker ps -aq); \
done
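The loop above can be wrapped in a small script that also reports how many rootfs mounts survive each iteration, which makes the leak visible as it grows. This is only a sketch: the `/tmp/hyper/shared/pods/.../rootfs` pattern is taken from the mount output later in this issue, and the function names are hypothetical.

```shell
#!/bin/sh
# Sketch: stress loop plus a per-iteration leak report.
# Assumes leaked rootfs mounts appear under /tmp/hyper/shared/pods/,
# as in the mount output shown in this issue.

# Count pod rootfs mounts; reads mount(8)-style lines on stdin so it
# can be exercised without a live system.
count_pod_mounts() {
    grep -c '/tmp/hyper/shared/pods/.*/rootfs' || true
}

# One create & remove cycle: 20 detached busybox containers,
# a short pause, then force-remove everything.
stress_iteration() {
    for j in $(seq 1 20); do
        docker run -dti busybox sh
    done
    sleep 5
    docker rm -f $(docker ps -aq)
}

# Run 50 cycles and report leftover mounts after each one.
stress_test() {
    for i in $(seq 1 50); do
        stress_iteration
        echo "iteration $i: $(mount | count_pod_mounts) leftover rootfs mount(s)"
    done
}

# Usage on the host under test:
#   . ./stress.sh && stress_test
```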
docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 14
Server Version: 17.03.1-ce
Storage Driver: overlay
 Backing Filesystem: xfs
 Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: cc30 cor runc
Default Runtime: cc30
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.11.10-100.fc24.x86_64
Operating System: Fedora 24 (Twenty Four)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.6 GiB
Name: localhost.localdomain
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 15
 Goroutines: 25
 System Time: 2017-09-18T15:55:53.424499259-05:00
 EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
mount command shows mounted rootfs which should have been unmounted
$ mount | grep "docker"
/dev/mapper/fedora-root on /var/lib/docker/overlay type xfs (rw,relatime,attr2,inode64,noquota)
overlay on /tmp/hyper/shared/pods/8866775ed89987081d4a63e3c36e1c26ff4b562732349a45ca5cffaaf7b1dc58/8866775ed89987081d4a63e3c36e1c26ff4b562732349a45ca5cffaaf7b1dc58/rootfs type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay/4dbbd011a0012f0c57dec9b9f4674bada6bfc7bda44fc4778f15f28fd62fd072/root,upperdir=/var/lib/docker/overlay/ce73ffb5be14acec5f4c516d6d5fbe627e338ec01cda1274ad453a0817cfc82e/upper,workdir=/var/lib/docker/overlay/ce73ffb5be14acec5f4c516d6d5fbe627e338ec01cda1274ad453a0817cfc82e/work)
overlay on /tmp/hyper/shared/pods/e261f4042f571a1dcc0430db5d1d4601eac815c51bc8e73166801a594a2f9678/e261f4042f571a1dcc0430db5d1d4601eac815c51bc8e73166801a594a2f9678/rootfs type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay/4dbbd011a0012f0c57dec9b9f4674bada6bfc7bda44fc4778f15f28fd62fd072/root,upperdir=/var/lib/docker/overlay/90012e1fd8b5437acfe7f4d09e23888ec9e5783c346373e480487c0d9021044e/upper,workdir=/var/lib/docker/overlay/90012e1fd8b5437acfe7f4d09e23888ec9e5783c346373e480487c0d9021044e/work)
@devimc I assume you meant:

> mount command shows mounted rootfs which should have been unmounted

rather than:

> mount command shows unmounted rootfs
@mcastelino that's correct
Using runc as the runtime, I can't reproduce the issue.
$ mount | grep "docker"
/dev/mapper/fedora-root on /var/lib/docker/overlay type xfs (rw,relatime,attr2,inode64,noquota)
I wonder if this is at all related to: https://github.com/clearcontainers/tests/issues/492 @devimc can you also check whether there are other CC components still running - qemu, shim etc.?
@grahamwhaley yes, qemu is still running
$ ps -ef | grep qemu
root 6859 1 1 08:45 ? 00:00:13 /usr/bin/qemu-lite-system-x86_64 -name pod-659ed2faab0b010430e3d23ef4ca1827e28f47a13e5f9664b90f014a8eaeed66 -uuid 36353965-6432-6661-6162-306230313034 -machine pc,accel=kvm,kernel_irqchip,............
$ mount | grep "docker"
/dev/mapper/fedora-root on /var/lib/docker/overlay type xfs (rw,relatime,attr2,inode64,noquota)
overlay on /tmp/hyper/shared/pods/659ed2faab0b010430e3d23ef4ca1827e28f47a13e5f9664b90f014a8eaeed66/659ed2faab0b010430e3d23ef4ca1827e28f47a13e5f9664b90f014a8eaeed66/rootfs type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay/4dbbd011a0012f0c57dec9b9f4674bada6bfc7bda44fc4778f15f28fd62fd072/root,upperdir=/var/lib/docker/overlay/efe6ea07cd5b06a0965b0ec9159462406b018e9cd522bf5980890ea167a2f029/upper,workdir=/var/lib/docker/overlay/efe6ea07cd5b06a0965b0ec9159462406b018e9cd522bf5980890ea167a2f029/work)
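One way to cross-check this state: the pod ID appears both in the qemu `-name pod-<id>` argument and in the `/tmp/hyper/shared/pods/<id>/...` mount path, so the set of mounted pods and the set of running qemu processes should match. A rough sketch, assuming those patterns from the output above (the function names are hypothetical):

```shell
#!/bin/sh
# Sketch: compare leftover rootfs mounts against running qemu processes.
# Patterns are taken from the mount and ps output in this issue.

# Extract pod IDs from mount(8)-style lines on stdin.
mounted_pods() {
    sed -n 's|.* /tmp/hyper/shared/pods/\([0-9a-f]*\)/.*|\1|p' | sort -u
}

# Extract pod IDs from ps(1)-style qemu command lines on stdin.
running_pods() {
    sed -n 's|.*-name pod-\([0-9a-f]*\).*|\1|p' | sort -u
}

# Usage on the host:
#   mount | mounted_pods > /tmp/mounted
#   ps -ef | running_pods > /tmp/running
#   diff /tmp/mounted /tmp/running   # an empty diff means consistent state
```

If a pod shows up in the mounted list but not the running list, the VM exited without its mounts being cleaned up, which is the 9p concern raised below.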
@grahamwhaley @devimc The condition I am worried about w.r.t. 9p is that all the QEMUs are shut down but the mounts are still mounted. We have seen this happen when we change the caching model of 9p.
@mcastelino sure - we should check the sanity of everything we can. Over in https://github.com/clearcontainers/tests/pull/491 I check many things, but intend to add the 'mounts' check as well. Let me see if I can go do that now (although if that PR landed in the meantime I'd not be sad ;-) )
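For reference, the kind of 'mounts' sanity check discussed here could look roughly like the following in a test teardown. This is only a sketch: the path pattern is an assumption taken from the mount output in this issue, and the function name is hypothetical.

```shell
#!/bin/sh
# Sketch of a post-test 'mounts' sanity check: fail if any container
# rootfs mount survives the test run.

# Reads mount(8)-style lines on stdin; returns non-zero and prints the
# offenders to stderr if any pod rootfs mount is still present.
check_no_leftover_mounts() {
    leftover=$(grep '/tmp/hyper/shared/pods/.*/rootfs' || true)
    if [ -n "$leftover" ]; then
        echo "leftover rootfs mounts:" >&2
        echo "$leftover" >&2
        return 1
    fi
    return 0
}

# Usage in a test teardown:
#   mount | check_no_leftover_mounts || exit 1
```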