gvisor
docker: Error response from daemon: OCI runtime create failed: /var/lib/docker/runtimes/runsc did not terminate sucessfully: creating container: fork/exec /proc/self/exe: operation not permitted
I cannot start a simple example container with runsc; the normal runtime (runc) works, though (after a manual setup step).
Yesterday I downloaded your binary build and set it up following the instructions on the GitHub page (except that I used /etc/init.d/docker instead of systemd, obviously), and I get the error below (the requested debugging output is at the bottom):
(sid-amd64)tglase@tglase:~ $ sudo docker run -it debian:buster /bin/bash
[sudo] password for tglase:
root@9174136227bd:/# echo this works; cat /etc/debian_version
this works
buster/sid
root@9174136227bd:/# exit
exit
(sid-amd64)tglase@tglase:~ $ sudo docker run --runtime=runsc -it debian:buster /bin/bash
docker: Error response from daemon: OCI runtime create failed: /var/lib/docker/runtimes/runsc did not terminate sucessfully: creating container: fork/exec /proc/self/exe: operation not permitted
: unknown.
(sid-amd64)125|tglase@tglase:~ $ sudo docker version
Client:
Version: 18.09.1
API version: 1.39
Go version: go1.11.5
Git commit: 4c52b90
Built: Mon, 11 Mar 2019 00:06:03 +0000
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.09.1
API version: 1.39 (minimum version 1.12)
Go version: go1.11.5
Git commit: 4c52b90
Built: Mon Mar 11 00:06:03 2019
OS/Arch: linux/amd64
Experimental: false
(sid-amd64)tglase@tglase:~ $ sudo docker info
Containers: 11
Running: 0
Paused: 0
Stopped: 11
Images: 2
Server Version: 18.09.1
Storage Driver: vfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc runsc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 1.0.0~rc6+dfsg1-3
init version: v0.18.0 (expected: fec3683b971d9c3ef73f284f176672c44b448662)
Security Options:
seccomp
Profile: default
Kernel Version: 4.18.0-2-amd64
Operating System: Debian GNU/Linux buster/sid
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 23.54GiB
Name: tglase.lan.tarent.de
ID: DT5O:FLGY:6QA7:7OWW:NF3F:PDPI:QMKH:NBIT:Q4MN:WXPR:TXTG:5IG5
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
There isn’t much in the debug output:
(sid-amd64)tglase@tglase:/tmp/runsc $ ll
total 16
-rw-r--r-- 1 root root 14524 Mär 20 13:26 runsc.log.20190320-132637.812111.create
-rw-r--r-- 1 root root 0 Mär 20 13:26 runsc.log.20190320-132637.818888.gofer
-rw-r--r-- 1 root root 0 Mär 20 13:26 runsc.log.20190320-132637.820576.boot
The content of the runsc.log.20190320-132637.812111.create file follows:
I0320 13:26:37.812195 12375 x:0] ***************************
I0320 13:26:37.812312 12375 x:0] Args: [/usr/local/bin/runsc --debug-log=/tmp/runsc/ --debug --strace --root /var/run/docker/runtime-runsc/moby --log /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/log.json --log-format json create --bundle /var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa --pid-file /var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/init.pid --console-socket /tmp/pty510090681/pty.sock ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa]
I0320 13:26:37.812376 12375 x:0] Git Revision: 8a499ae65f361fb01c2e4be03122f69910a8ba4a
I0320 13:26:37.812402 12375 x:0] PID: 12375
I0320 13:26:37.812429 12375 x:0] UID: 0, GID: 0
I0320 13:26:37.812454 12375 x:0] Configuration:
I0320 13:26:37.812477 12375 x:0] RootDir: /var/run/docker/runtime-runsc/moby
I0320 13:26:37.812501 12375 x:0] Platform: ptrace
I0320 13:26:37.812530 12375 x:0] FileAccess: exclusive, overlay: false
I0320 13:26:37.812558 12375 x:0] Network: sandbox, logging: false
I0320 13:26:37.812586 12375 x:0] Strace: true, max size: 1024, syscalls: []
I0320 13:26:37.812613 12375 x:0] ***************************
W0320 13:26:37.815558 12375 x:0] Seccomp spec is being ignored
D0320 13:26:37.815616 12375 x:0] Spec: &{Version:1.0.1 Process:0xc00015f2b0 Root:0xc00018d940 Hostname:ad86f41f137c Mounts:[{Destination:/proc Type:proc Source:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/proc Options:[nosuid noexec nodev]} {Destination:/dev Type:tmpfs Source:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/tmpfs Options:[nosuid strictatime mode=755 size=65536k]} {Destination:/dev/pts Type:devpts Source:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/devpts Options:[nosuid noexec newinstance ptmxmode=0666 mode=0620 gid=5]} {Destination:/sys Type:sysfs Source:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/sysfs Options:[nosuid noexec nodev ro]} {Destination:/sys/fs/cgroup Type:cgroup Source:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/cgroup Options:[ro nosuid noexec nodev]} {Destination:/dev/mqueue Type:mqueue Source:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/mqueue Options:[nosuid noexec nodev]} {Destination:/etc/resolv.conf Type:bind Source:/var/lib/docker/containers/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/resolv.conf Options:[rbind rprivate]} {Destination:/etc/hostname Type:bind Source:/var/lib/docker/containers/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/hostname Options:[rbind rprivate]} {Destination:/etc/hosts Type:bind Source:/var/lib/docker/containers/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/hosts Options:[rbind rprivate]} {Destination:/dev/shm 
Type:bind Source:/var/lib/docker/containers/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/mounts/shm Options:[rbind rprivate]}] Hooks:0xc000033040 Annotations:map[] Linux:0xc0001921c0 Solaris:<nil> Windows:<nil>}
D0320 13:26:37.815779 12375 x:0] Spec.Hooks: &{Prestart:[{Path:/proc/11714/exe Args:[libnetwork-setkey ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa 61307c251bc6ac4d1db1c6fbdd6c12a9f93fec8e34ab27a0455c0abaf8e478f6] Env:[] Timeout:<nil>}] Poststart:[] Poststop:[]}
D0320 13:26:37.815830 12375 x:0] Spec.Linux: &{UIDMappings:[] GIDMappings:[] Sysctl:map[] Resources:0xc000072f60 CgroupsPath:/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa Namespaces:[{Type:mount Path:} {Type:network Path:} {Type:uts Path:} {Type:pid Path:} {Type:ipc Path:}] Devices:[] Seccomp:0xc00006fa00 RootfsPropagation: MaskedPaths:[/proc/asound /proc/acpi /proc/kcore /proc/keys /proc/latency_stats /proc/timer_list /proc/timer_stats /proc/sched_debug /proc/scsi /sys/firmware] ReadonlyPaths:[/proc/bus /proc/fs /proc/irq /proc/sys /proc/sysrq-trigger] MountLabel: IntelRdt:<nil>}
D0320 13:26:37.815916 12375 x:0] Spec.Linux.Resources.Memory: &{Limit:<nil> Reservation:<nil> Swap:<nil> Kernel:<nil> KernelTCP:<nil> Swappiness:<nil> DisableOOMKiller:0xc00019cdc6}
D0320 13:26:37.815956 12375 x:0] Spec.Linux.Resources.CPU: &{Shares:0xc00019cdc8 Quota:<nil> Period:<nil> RealtimeRuntime:<nil> RealtimePeriod:<nil> Cpus: Mems:}
D0320 13:26:37.815993 12375 x:0] Spec.Linux.Resources.BlockIO: &{Weight:0xc00019cdd8 LeafWeight:<nil> WeightDevice:[] ThrottleReadBpsDevice:[] ThrottleWriteBpsDevice:[] ThrottleReadIOPSDevice:[] ThrottleWriteIOPSDevice:[]}
D0320 13:26:37.816031 12375 x:0] Spec.Linux.Resources.Network: <nil>
D0320 13:26:37.816056 12375 x:0] Spec.Process: &{Terminal:true ConsoleSize:<nil> User:{UID:0 GID:0 AdditionalGids:[] Username:} Args:[/bin/bash] Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ad86f41f137c TERM=xterm] Cwd:/ Capabilities:0xc00012c280 Rlimits:[] NoNewPrivileges:false ApparmorProfile: OOMScoreAdj:0xc00019cb00 SelinuxLabel:}
D0320 13:26:37.816116 12375 x:0] Spec.Root: &{Path:/var/lib/docker/vfs/dir/f661ac7bfef34144056f92e393c18ac7660ead6001bbc7ee7d1c4bac3fa8593e Readonly:false}
D0320 13:26:37.816152 12375 x:0] Spec.Mounts: [{Destination:/proc Type:proc Source:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/proc Options:[nosuid noexec nodev]} {Destination:/dev Type:tmpfs Source:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/tmpfs Options:[nosuid strictatime mode=755 size=65536k]} {Destination:/dev/pts Type:devpts Source:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/devpts Options:[nosuid noexec newinstance ptmxmode=0666 mode=0620 gid=5]} {Destination:/sys Type:sysfs Source:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/sysfs Options:[nosuid noexec nodev ro]} {Destination:/sys/fs/cgroup Type:cgroup Source:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/cgroup Options:[ro nosuid noexec nodev]} {Destination:/dev/mqueue Type:mqueue Source:/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/mqueue Options:[nosuid noexec nodev]} {Destination:/etc/resolv.conf Type:bind Source:/var/lib/docker/containers/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/resolv.conf Options:[rbind rprivate]} {Destination:/etc/hostname Type:bind Source:/var/lib/docker/containers/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/hostname Options:[rbind rprivate]} {Destination:/etc/hosts Type:bind Source:/var/lib/docker/containers/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/hosts Options:[rbind rprivate]} {Destination:/dev/shm Type:bind 
Source:/var/lib/docker/containers/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/mounts/shm Options:[rbind rprivate]}]
D0320 13:26:37.816268 12375 x:0] Create container "ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa" in root dir: /var/run/docker/runtime-runsc/moby
D0320 13:26:37.816499 12375 x:0] Creating new sandbox for container "ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.816559 12375 x:0] Creating cgroup "/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.817610 12375 x:0] Joining cgroup "/sys/fs/cgroup/devices/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.817715 12375 x:0] Joining cgroup "/sys/fs/cgroup/perf_event/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.817885 12375 x:0] Joining cgroup "/sys/fs/cgroup/systemd/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.817977 12375 x:0] Joining cgroup "/sys/fs/cgroup/memory/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.818069 12375 x:0] Joining cgroup "/sys/fs/cgroup/net_cls/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.818256 12375 x:0] Joining cgroup "/sys/fs/cgroup/cpuset/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.818365 12375 x:0] Joining cgroup "/sys/fs/cgroup/net_prio/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.818478 12375 x:0] Joining cgroup "/sys/fs/cgroup/freezer/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.818576 12375 x:0] Joining cgroup "/sys/fs/cgroup/pids/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.818672 12375 x:0] Joining cgroup "/sys/fs/cgroup/blkio/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.818765 12375 x:0] Joining cgroup "/sys/fs/cgroup/cpu/docker/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.819027 12375 x:0] Starting gofer: /proc/self/exe [--root=/var/run/docker/runtime-runsc/moby --debug=true --log=/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/log.json --log-format=json --debug-log=/tmp/runsc/ --debug-log-format=text --file-access=exclusive --overlay=false --network=sandbox --log-packets=false --platform=ptrace --strace=true --strace-syscalls= --strace-log-size=1024 --watchdog-action=LogWarning --panic-signal=-1 --profile=false --log-fd=3 --debug-log-fd=4 gofer --bundle /var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa --spec-fd=5 --mounts-fd=6 --io-fds=7 --io-fds=8 --io-fds=9 --io-fds=10]
I0320 13:26:37.820461 12375 x:0] Gofer started, PID: 12380
I0320 13:26:37.820654 12375 x:0] Creating sandbox process with addr: runsc-sandbox.ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa
I0320 13:26:37.821010 12375 x:0] Sandbox will be started in new mount, IPC and UTS namespaces
I0320 13:26:37.821050 12375 x:0] Sandbox will be started in the current PID namespace
I0320 13:26:37.821076 12375 x:0] Sandbox will be started in the container's network namespace: {Type:network Path:}
I0320 13:26:37.821248 12375 x:0] Sandbox will be started in new user namespace
D0320 13:26:37.821398 12375 x:0] Donating FD 3: "/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/log.json"
D0320 13:26:37.821457 12375 x:0] Donating FD 4: "/tmp/runsc/runsc.log.20190320-132637.820576.boot"
D0320 13:26:37.821488 12375 x:0] Donating FD 5: "control_server_socket"
D0320 13:26:37.821515 12375 x:0] Donating FD 6: "|0"
D0320 13:26:37.821541 12375 x:0] Donating FD 7: "/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/config.json"
D0320 13:26:37.821579 12375 x:0] Donating FD 8: "|1"
D0320 13:26:37.821605 12375 x:0] Donating FD 9: "sandbox IO FD"
D0320 13:26:37.821631 12375 x:0] Donating FD 10: "sandbox IO FD"
D0320 13:26:37.821657 12375 x:0] Donating FD 11: "sandbox IO FD"
D0320 13:26:37.821687 12375 x:0] Donating FD 12: "sandbox IO FD"
D0320 13:26:37.821714 12375 x:0] Donating FD 13: "/dev/pts/12"
D0320 13:26:37.821769 12375 x:0] Donating FD 14: "/dev/pts/12"
D0320 13:26:37.821798 12375 x:0] Donating FD 15: "/dev/pts/12"
D0320 13:26:37.821828 12375 x:0] Starting sandbox: /proc/self/exe [runsc-sandbox --root=/var/run/docker/runtime-runsc/moby --debug=true --log=/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa/log.json --log-format=json --debug-log=/tmp/runsc/ --debug-log-format=text --file-access=exclusive --overlay=false --network=sandbox --log-packets=false --platform=ptrace --strace=true --strace-syscalls= --strace-log-size=1024 --watchdog-action=LogWarning --panic-signal=-1 --profile=false --log-fd=3 --debug-log-fd=4 boot --bundle=/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa --controller-fd=5 --mounts-fd=6 --spec-fd=7 --start-sync-fd=8 --io-fds=9 --io-fds=10 --io-fds=11 --io-fds=12 --console=true --stdio-fds=13 --stdio-fds=14 --stdio-fds=15 --setup-root --cpu-num 8 ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa]
D0320 13:26:37.821890 12375 x:0] SysProcAttr: &{Chroot: Credential:0xc00017dd10 Ptrace:false Setsid:true Setpgid:false Setctty:true Noctty:false Ctty:13 Foreground:false Pgid:0 Pdeathsig:signal 0 Cloneflags:0 Unshareflags:0 UidMappings:[{ContainerID:0 HostID:65533 Size:1} {ContainerID:65534 HostID:65534 Size:1}] GidMappings:[{ContainerID:65534 HostID:65534 Size:1}] GidMappingsEnableSetgroups:false AmbientCaps:[]}
D0320 13:26:37.822421 12375 x:0] Destroy sandbox "ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
W0320 13:26:37.822473 12375 x:0] error destroying sandbox: <nil>
D0320 13:26:37.822501 12375 x:0] Restoring cgroup "/sys/fs/cgroup/perf_event"
D0320 13:26:37.822666 12375 x:0] Destroy container "ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa"
D0320 13:26:37.822708 12375 x:0] Killing gofer for container "ad86f41f137ccef357f1ed1878de313f3b9f5690481b3c3c105e308e2fde5bfa", PID: 12380
W0320 13:26:37.923248 12375 x:0] FATAL ERROR: creating container: fork/exec /proc/self/exe: operation not permitted
Forgot the uname:
Linux tglase.lan.tarent.de 4.18.0-2-amd64 #1 SMP Debian 4.18.10-2 (2018-11-02) x86_64 GNU/Linux
Check that /usr/local/bin/runsc has execute access given to all. The sandbox runs as user nobody and fails with the error above if it doesn't have permission to execute runsc.
Fabricio Voznika dixit:
Check that /usr/local/bin/runsc has execute access given to all.
-r-xr-xr-x 1 root bin 19246606 Mär 19 22:17 /usr/local/bin/runsc*
Sure.
Hmm... the log shows that the gofer process (which runs as the same user) was started correctly, but the sandbox, which runs as a restricted user, failed with "operation not permitted". Could you try to execute runsc as the nobody user, just to double-check that it is not a permission issue?
sudo -u nobody bash
/usr/local/bin/runsc
Fabricio Voznika dixit:
Hmm... the log shows that the gofer process (which runs as the same user) was started correctly, but the sandbox, which runs as a restricted user, failed with "operation not permitted". Could you try to execute runsc as the nobody user, just to double-check that it is not a permission issue?
sudo -u nobody bash
(btw mksh is a much cooler shell)
-----BEGIN cutting here may damage your screen surface-----
(sid-amd64)tglase@tglase:~ $ sudo -u nobody bash
[sudo] password for tglase:
(sid-amd64)nobody@tglase:/home/tglase$ /usr/local/bin/runsc
I0322 00:53:16.405587 24559 x:0] ***************************
I0322 00:53:16.405674 24559 x:0] Args: [/usr/local/bin/runsc]
I0322 00:53:16.405698 24559 x:0] Git Revision: 8a499ae65f361fb01c2e4be03122f69910a8ba4a
I0322 00:53:16.405709 24559 x:0] PID: 24559
I0322 00:53:16.405721 24559 x:0] UID: 32767, GID: 32767
I0322 00:53:16.405729 24559 x:0] Configuration:
I0322 00:53:16.405737 24559 x:0] RootDir: /var/run/runsc
I0322 00:53:16.405746 24559 x:0] Platform: ptrace
I0322 00:53:16.406173 24559 x:0] FileAccess: exclusive, overlay: false
I0322 00:53:16.406192 24559 x:0] Network: sandbox, logging: false
I0322 00:53:16.406208 24559 x:0] Strace: false, max size: 1024, syscalls: []
I0322 00:53:16.406220 24559 x:0] ***************************
Usage: runsc
Subcommands:
	checkpoint	checkpoint current state of container (experimental)
	create	create a secure container
	delete	delete resources held by a container
	events	display container events such as OOM notifications, cpu, memory, and IO usage statistics
	exec	execute new process inside the container
	flags	describe all known top-level flags
	gofer	launch a gofer process that serves files over 9P protocol (internal use only)
	help	describe subcommands and their syntax
	kill	sends a signal to the container
	list	list contaners started by runsc with the given root
	pause	pause suspends all processes in a container
	ps	ps displays the processes running inside a container
	restore	restore a saved state of container (experimental)
	resume	Resume unpauses a paused container
	run	create and run a secure container
	spec	create a new OCI bundle specification file
	start	start a secure container
	state	get the state of a container
	wait	wait on a process inside a container
Subcommands for internal use only:
	boot	launch a sandbox process (internal use only)
	debug	shows a variety of debug information
	gofer	launch a gofer process that serves files over 9P protocol (internal use only)
Use "runsc flags" for a list of top-level flags W0322 00:53:16.406444 24559 x:0] Failure to execute command, err: 2 -----END cutting here may damage your screen surface-----
bye, //mirabilos
Is there anything in your configuration that would prevent docker's user from creating namespaces, setting up a chroot, changing capabilities, etc.? There is basically something causing exec.Cmd.Start() to fail with EPERM. exec.Cmd is configured in Sandbox.createSandboxProcess.
Fabricio Voznika dixit:
Is there anything in your configuration
I don’t have a configuration, asides from what is mentioned on the gvisor site (as I freshly installed Docker specifically to test some software under gvisor).
that would prevent docker's user from creating namespaces, setting chroot or capabilities, etc?
I wouldn’t know. I’m a veteran unix admin, not a systemd user. No idea what most of these even are.
This is in a chroot, but as far as I know it doesn’t prevent further chrooting. Well at least not in proper Unix.
There is basically something causing exec.Cmd.Start() to fail with EPERM. exec.Cmd is configured in Sandbox.createSandboxProcess.
I don’t speak the Issue9 programming language, sorry.
Could you show output for these three commands?
$ sudo unshare -Ur true && echo PASS
$ sudo unshare -Urpfimun true && echo PASS
$ cat /proc/sys/kernel/unprivileged_userns_clone
I believe you’re on to something:
(sid-amd64)tglase@tglase:~ $ sudo unshare -Ur true && echo PASS;
[sudo] password for tglase:
unshare: unshare failed: Operation not permitted
(sid-amd64)1|tglase@tglase:~ $ sudo unshare -Urpfimun true && echo PASS
unshare: unshare failed: Operation not permitted
(sid-amd64)1|tglase@tglase:~ $ cat /proc/sys/kernel/unprivileged_userns_clone
1
Outside of the AMD64 chroot, these pass.
(sid-amd64)tglase@tglase:~ $ sudo unshare -U true && echo PASS;
unshare: unshare failed: Operation not permitted
(sid-amd64)1|tglase@tglase:~ $ sudo unshare -n true && echo PASS;
PASS
What should I look for, if one operation works but not the other?
This is in a chroot
Linux forbids creation of user namespaces while in a chroot. The man pages for both clone(2) and unshare(2) use the same wording:
EPERM (since Linux 3.9)
CLONE_NEWUSER was specified in flags and the caller is in a
chroot environment (i.e., the caller's root directory does not
match the root directory of the mount namespace in which it
resides).
Hm, that’s bad… I cannot test outside of the chroot as docker is not available for x32… I guess I’ll have to fire up a VM for that then.
Since this setup is not unusual, can you detect it and output a precise error message for this case?
Also fun: “Linux forbids creation of user namespaces while in a chroot” is not entirely right:
tglase@tglase:~ $ sudo chroot / $(which unshare) -U true && echo PASS;
PASS
tglase@tglase:~ $ sudo chroot /home/AMD64/ $(which unshare) -U true && echo PASS;
unshare: unshare failed: Operation not permitted
Linux forbids it while in a chroot whose root is not the same as the outside root.
Hm, the mount namespace thing… does that suggest a way to get around it?
You can try to use pivot_root instead of chroot.
It’s not so easy… my actual setup is an schroot instance, not a simple call to chroot, with a persistent session running, dæmons in the background, and optionally multiple shells into that session. The chroot(8) calls were just for testing.
Anyway, it’s okay to not support this if the underlying OS troubles you so, but if a precise warning can be emitted, please do so.
Thanks!
Is the issue resolved? Because I'm also facing the same issue.
A friendly reminder that this issue had no activity for 120 days.
I think this issue was fixed in https://github.com/google/gvisor/commit/c6a1db5baec7616983b14ac06e84bee45330a9d3. Can you please confirm?
No, that commit has nothing to do with this issue.
Oh yep, that was a similar-looking failure (operation not permitted) while remounting rootfs. This is different.
A friendly reminder that this issue had no activity for 120 days.
This issue has been closed due to lack of activity.