crun
100% CPU usage when used in kubectl exec and connection is terminated
In my 3-node k8s cluster using:
root@nk8s1:~# /usr/libexec/crio/crun --version
crun version 1.20
commit: 9c9a76ac11994701dd666c4f0b869ceffb599a66
rundir: /run/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
root@nk8s1:~# crio --version
crio version 1.32.2
GitCommit: 318db72eb0b3d18c22c995aa7614a13142287296
GitCommitDate: 2025-03-02T18:05:31Z
GitTreeState: dirty
BuildDate: 1970-01-01T00:00:00Z
GoVersion: go1.23.3
Compiler: gc
Platform: linux/amd64
Linkmode: static
BuildTags:
static
netgo
osusergo
exclude_graphdriver_btrfs
seccomp
apparmor
selinux
exclude_graphdriver_devicemapper
LDFlags: unknown
SeccompEnabled: true
AppArmorEnabled: false
root@nk8s1:~# kubelet --version
Kubernetes v1.32.2
...I noticed one crun process constantly consuming 100% CPU. strace revealed that it is spinning in a loop trying to write to STDOUT:
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
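To illustrate what I think is happening (this is only my own minimal sketch of a level-triggered epoll copy loop, not crun's actual code): the terminal master stays readable, the non-blocking stdout pipe is full, every writev() fails with EAGAIN, and the loop never waits for stdout to become writable, so epoll_wait() returns immediately each time:
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    /* Stand-in for the container's PTY master. */
    int ptmx = posix_openpt(O_RDWR | O_NOCTTY);
    if (ptmx < 0 || grantpt(ptmx) < 0 || unlockpt(ptmx) < 0) {
        perror("pty");
        return 1;
    }

    /* Assume stdout is a pipe whose reader has stopped reading; make it non-blocking. */
    fcntl(STDOUT_FILENO, F_SETFL, fcntl(STDOUT_FILENO, F_GETFL) | O_NONBLOCK);

    int ep = epoll_create1(EPOLL_CLOEXEC);
    struct epoll_event ev = { .events = EPOLLIN };   /* level-triggered */
    ev.data.u64 = 4;                                 /* matches data=0x4 in the trace */
    epoll_ctl(ep, EPOLL_CTL_ADD, ptmx, &ev);

    char pending[8];        /* bytes read from the PTY but not yet written out */
    size_t pending_len = 0;

    for (;;) {
        struct epoll_event events[10];
        int n = epoll_wait(ep, events, 10, -1);  /* returns at once while the PTY has data */
        if (n < 0 && errno == EINTR)
            continue;

        if (pending_len == 0) {
            ssize_t r = read(ptmx, pending, sizeof(pending));
            if (r <= 0)
                break;
            pending_len = (size_t)r;
        }

        struct iovec iov = { .iov_base = pending, .iov_len = pending_len };
        if (writev(STDOUT_FILENO, &iov, 1) < 0 && errno == EAGAIN) {
            /* Pipe full.  Going straight back to epoll_wait() is the problem:
             * the PTY is still readable, so it returns immediately and the
             * same writev() is retried forever, burning a full core. */
            continue;
        }
        pending_len = 0;     /* simplified: ignores partial writes */
    }
    return 0;
}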
Investigating further, I found this hierarchy of processes (1567488 is the CPU-consuming crun):
root@nk8s1:~# pstree -sp 1567488
systemd(1)───crio(2109)───crun(1567488)───bash(1567490)───watch(1567662)
I had to kill -KILL 1567490 (the bash process) for the whole hierarchy to go away.
What I did to make this happen is the following:
- I ran kubectl exec -it into a container and started watch <some command> to constantly watch the output of some command. Then the networking from my laptop (VPN connection) broke and the kubectl process hung. I terminated it on the client. The interactive bash in the container continued to run, though, with crun trying to write its STDOUT but instead spinning in the loop.
Should crun detect that the connection has been closed and kill the command itself?
Should crun detect that the connection has been closed and kill the command itself?
Either that, or maybe cri-o should detect the connection is closed and kill crun.
Thanks for the report. What does runc do in this case?
@kolyshkin FYI
As I said, it is executing an interactive shell in the k8s container triggered with a command like this:
kubectl exec -it podname -- /bin/bash
...stdin, stdout, and stderr of the shell are piped from/to the terminal executing kubectl. The kubectl connection to the API server is abruptly broken (VPN interface down). I suspect this manifests on the cri-o/crun side a little differently than when the connection is terminated in a regular way from either side (triggered by EOF or exit).
Should crun detect that the connection has been closed and kill the command itself?
either that or maybe cri-o should detect the connection is closed and kill crun
Doesn't CRI-O see any error when the connection is dropped? Wouldn't it be enough to close the terminal to/from crun in this case?
...It might be that the SSL connection to the API server is left in a "lingering" state for a long time unless the k8s API server has an active way of detecting such hung connections (with TCP keep-alive and timeouts). So in this case kubelet, and by extension cri-o and by extension runc, also doesn't get notified about the lost session for a long time. Maybe the situation would resolve itself after a longer time. The problem is that all this time, one CPU core is being burnt. Is it possible to locate the code where the following system calls are being made in a loop:
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
writev(1, [{iov_base="8\33[53d\t ", iov_len=8}], 1) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(7, [{events=EPOLLIN, data=0x4}], 10, -1) = 1
...I'm speculating now, as I don't know the code... epoll_wait asks the epoll descriptor about the readiness of other descriptors and gets back 1. I think this means the code may read from or write to one descriptor. It looks like that descriptor is STDOUT and the code wants to write to it, but when attempting to do so, it gets back EAGAIN, which is returned when the descriptor is non-blocking and the call would block. This looks like a pipe with a full buffer. I speculate further that this is because kubelet doesn't read from the other end because it waits for data to be sent to the API server, which doesn't read it since it waits for data to be sent back to kubectl, but the SSL connection is hung in a lingering state.
What can crun do in this case? Is there a bug in crun? Maybe its interpretation of the epoll_wait result is wrong. If the pipe buffer is full, wouldn't epoll_wait also block? If it returns 1, it is maybe signaling some other information and not that STDOUT may be written to without blocking...
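If my reading is right, a common way to avoid such a busy loop with epoll is back-pressure: when the write returns EAGAIN, stop polling the source descriptor for EPOLLIN and instead poll stdout for EPOLLOUT (plus EPOLLHUP/EPOLLERR, which would also catch the consumer going away), so the next epoll_wait() really blocks. Note that in the trace the wakeups report EPOLLIN with data=0x4, which I presume is the terminal descriptor, not writability of stdout. Again, this is only my sketch of the general pattern with made-up helper names, not a claim about how crun is or should be structured:
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/epoll.h>
#include <sys/uio.h>
#include <unistd.h>

struct copy_state {
    int ep, src_fd, dst_fd;     /* epoll fd, PTY master, stdout pipe */
    char buf[8192];
    size_t pending;             /* bytes read but not yet written */
    bool waiting_for_out;       /* true while we poll dst_fd for EPOLLOUT */
};

/* Called when writev() on dst_fd returned EAGAIN: swap interest sets so the
 * next epoll_wait() sleeps until dst_fd is writable (or hung up). */
static int enter_backpressure(struct copy_state *s)
{
    struct epoll_event ev = { .events = EPOLLOUT | EPOLLHUP | EPOLLERR };
    ev.data.fd = s->dst_fd;
    if (epoll_ctl(s->ep, EPOLL_CTL_DEL, s->src_fd, NULL) < 0)
        return -1;
    if (epoll_ctl(s->ep, EPOLL_CTL_ADD, s->dst_fd, &ev) < 0)
        return -1;
    s->waiting_for_out = true;
    return 0;
}

/* Called once the pending buffer has been flushed: go back to reading src_fd. */
static int leave_backpressure(struct copy_state *s)
{
    struct epoll_event ev = { .events = EPOLLIN };
    ev.data.fd = s->src_fd;
    if (epoll_ctl(s->ep, EPOLL_CTL_DEL, s->dst_fd, NULL) < 0)
        return -1;
    if (epoll_ctl(s->ep, EPOLL_CTL_ADD, s->src_fd, &ev) < 0)
        return -1;
    s->waiting_for_out = false;
    return 0;
}

/* Try to flush the pending buffer; toggle the interest sets as needed. */
static int flush_pending(struct copy_state *s)
{
    struct iovec iov = { .iov_base = s->buf, .iov_len = s->pending };
    ssize_t w = writev(s->dst_fd, &iov, 1);
    if (w < 0) {
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;                      /* real error */
        if (!s->waiting_for_out)
            return enter_backpressure(s);   /* stop the spin */
        return 0;                           /* already waiting for EPOLLOUT */
    }
    s->pending -= (size_t)w;                /* simplified: ignores the unwritten tail's offset */
    if (s->pending == 0 && s->waiting_for_out)
        return leave_backpressure(s);
    return 0;
}
With such an arrangement, epoll_wait() would sleep until the pipe drains instead of spinning, and an EPOLLHUP/EPOLLERR on stdout would give a natural point to tear the session down.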
I wonder if we are hitting https://github.com/containers/conmon/pull/551